Commit ef124412 authored by Linus Torvalds

Merge tag 'usb-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB and Thunderbolt updates from Greg KH:
 "Here is the big set of USB and Thunderbolt driver updates for
  5.13-rc1.

  Lots of little things in here, with loads of tiny fixes and cleanups
  over these drivers, as well as these "larger" changes:

   - thunderbolt updates and new features added

   - xhci driver updates and split out of a mediatek-specific xhci
     driver from the main xhci module to make it easier to work with
     (something that I have been wanting for a while).

   - loads of typec feature additions and updates

   - dwc2 driver updates

   - dwc3 driver updates

   - gadget driver fixes and minor updates

   - loads of usb-serial cleanups and fixes and updates

   - usbip documentation updates and fixes

   - lots of other tiny USB driver updates

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'usb-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (371 commits)
  usb: Fix up movement of USB core kerneldoc location
  usb: dwc3: gadget: Handle DEV_TXF_FLUSH_BYPASS capability
  usb: dwc3: Capture new capability register GHWPARAMS9
  usb: gadget: prevent a ternary sign expansion bug
  usb: dwc3: core: Do core softreset when switch mode
  usb: dwc2: Get rid of useless error checks in suspend interrupt
  usb: dwc2: Update dwc2_handle_usb_suspend_intr function.
  usb: dwc2: Add exit hibernation mode before removing drive
  usb: dwc2: Add hibernation exiting flow by system resume
  usb: dwc2: Add hibernation entering flow by system suspend
  usb: dwc2: Allow exit hibernation in urb enqueue
  usb: dwc2: Move exit hibernation to dwc2_port_resume() function
  usb: dwc2: Move enter hibernation to dwc2_port_suspend() function
  usb: dwc2: Clear GINTSTS_RESTOREDONE bit after restore is generated.
  usb: dwc2: Clear fifo_map when resetting core.
  usb: dwc2: Allow exiting hibernation from gpwrdn rst detect
  usb: dwc2: Fix hibernation between host and device modes.
  usb: dwc2: Fix host mode hibernation exit with remote wakeup flow.
  usb: dwc2: Reset DEVADDR after exiting gadget hibernation.
  usb: dwc2: Update exit hibernation when port reset is asserted
  ...
What: /sys/bus/thunderbolt/devices/<xdomain>/rx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain RX speed per lane.
All RX lanes run at the same speed.

What: /sys/bus/thunderbolt/devices/<xdomain>/rx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the number of RX lanes the XDomain
is using simultaneously through its upstream port.

What: /sys/bus/thunderbolt/devices/<xdomain>/tx_speed
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the XDomain TX speed per lane.
All TX lanes run at the same speed.

What: /sys/bus/thunderbolt/devices/<xdomain>/tx_lanes
Date: Feb 2021
KernelVersion: 5.11
Contact: Isaac Hazan <isaac.hazan@intel.com>
Description: This attribute reports the number of TX lanes the XDomain
is using simultaneously through its upstream port.
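A minimal userspace sketch (not from the kernel tree) that dumps the four
attributes above; the "0-1" device name is a placeholder for whatever
XDomain entry appears on a given system, and the value is treated as an
opaque string rather than assuming a particular format:

	#include <stdio.h>

	int main(void)
	{
		static const char * const attrs[] = {
			"rx_speed", "rx_lanes", "tx_speed", "tx_lanes",
		};
		char path[128], buf[64];
		unsigned int i;

		for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++) {
			FILE *f;

			/* "0-1" is a hypothetical XDomain device name */
			snprintf(path, sizeof(path),
				 "/sys/bus/thunderbolt/devices/0-1/%s", attrs[i]);
			f = fopen(path, "r");
			if (!f)
				continue;
			if (fgets(buf, sizeof(buf), f))
				printf("%s: %s", attrs[i], buf);
			fclose(f);
		}
		return 0;
	}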
What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl
Date: Jun 2018
KernelVersion: 4.17
......@@ -162,6 +134,13 @@ Contact: thunderbolt-software@lists.01.org
Description: This attribute contains the name of this device extracted from
the device DROM.

What: /sys/bus/thunderbolt/devices/.../maxhopid
Date: Jul 2021
KernelVersion: 5.13
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Only set for XDomains. The maximum HopID the other host
supports as its input HopID.

What: /sys/bus/thunderbolt/devices/.../rx_speed
Date: Jan 2020
KernelVersion: 5.5
......
......@@ -197,6 +197,16 @@ properties:
$ref: /schemas/types.yaml#/definitions/uint32
enum: [1, 2, 3]
slow-charger-loop:
description: Allows PMIC charger loops which are slow (i.e. cannot meet the 15 ms deadline) to
still comply with pSnkStby, i.e. the maximum power that can be consumed by the sink while in the
Sink Standby state as defined in section 7.4.2 "Sink Electrical Parameters" of the USB Power
Delivery Specification Revision 3.0, Version 1.2. When the property is set, the port requests
pSnkStby (2.5 W, i.e. 5V@500mA) upon entering SNK_DISCOVERY (instead of the 3 A or 1.5 A Rp
current advertised during SNK_DISCOVERY) and sets the actual current limit after reception of
PS_Ready for a PD link, or during SNK_READY for a non-PD link.
type: boolean
required:
- compatible
......
Xilinx SuperSpeed DWC3 USB SoC controller
Required properties:
- compatible: Should contain "xlnx,zynqmp-dwc3"
- compatible: May contain "xlnx,zynqmp-dwc3" or "xlnx,versal-dwc3"
- reg: Base address and length of the register control block
- clocks: A list of phandles for the clocks listed in clock-names
- clock-names: Should contain the following:
"bus_clk" Master/Core clock, have to be >= 125 MHz for SS
operation and >= 60MHz for HS operation
"ref_clk" Clock source to core during PHY power down
- resets: A list of phandles for resets listed in reset-names
- reset-names:
"usb_crst" USB core reset
"usb_hibrst" USB hibernation reset
"usb_apbrst" USB APB reset
Required child node:
A child node must exist to represent the core DWC3 IP block. The name of
the node is not important. The content of the node is defined in dwc3.txt.
Optional properties for snps,dwc3:
- dma-coherent: Enable this flag if CCI is enabled in the design. Adding this
flag configures the Global SoC bus Configuration Register and the
Xilinx USB 3.0 IP - USB coherency register to enable CCI.
- interrupt-names: Should contain the following:
"dwc_usb3" USB gadget mode interrupts
"otg" USB OTG mode interrupts
"hiber" USB hibernation interrupts
Example device node:
usb@0 {
#address-cells = <0x2>;
#size-cells = <0x1>;
compatible = "xlnx,zynqmp-dwc3";
reg = <0x0 0xff9d0000 0x0 0x100>;
clock-names = "bus_clk", "ref_clk";
clocks = <&clk125>, <&clk125>;
resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_APB>;
reset-names = "usb_crst", "usb_hibrst", "usb_apbrst";
ranges;
dwc3@fe200000 {
compatible = "snps,dwc3";
reg = <0x0 0xfe200000 0x40000>;
interrupts = <0x0 0x41 0x4>;
interrupt-names = "dwc_usb3", "otg", "hiber";
interrupts = <0 65 4>, <0 69 4>, <0 75 4>;
phys = <&psgtr 2 PHY_TYPE_USB3 0 2>;
phy-names = "usb3-phy";
dr_mode = "host";
dma-coherent;
};
};
......@@ -52,11 +52,8 @@ properties:
# Required child node:
patternProperties:
"^dwc3@[0-9a-f]+$":
type: object
description:
A child node must exist to represent the core DWC3 IP block.
The content of the node is defined in dwc3.txt.
"^usb@[0-9a-f]+$":
$ref: snps,dwc3.yaml#
required:
- compatible
......@@ -87,7 +84,7 @@ examples:
dma-ranges = <0x40000000 0x40000000 0xc0000000>;
ranges;
dwc3@38100000 {
usb@38100000 {
compatible = "snps,dwc3";
reg = <0x38100000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
......
......@@ -122,6 +122,12 @@ properties:
description:
Set this flag to force EHCI reset after resume.
spurious-oc:
$ref: /schemas/types.yaml#/definitions/flag
description:
Set this flag to indicate that the hardware sometimes turns on
the OC bit when an over-current isn't actually present.
companion:
$ref: /schemas/types.yaml#/definitions/phandle
description:
......
......@@ -30,6 +30,7 @@ properties:
- mediatek,mt7629-xhci
- mediatek,mt8173-xhci
- mediatek,mt8183-xhci
- mediatek,mt8192-xhci
- const: mediatek,mtk-xhci
reg:
......@@ -45,7 +46,18 @@ properties:
- const: ippc # optional, only needed for case 1.
interrupts:
maxItems: 1
description:
use "interrupts-extended" when the interrupts are connected to the
separate interrupt controllers
minItems: 1
items:
- description: xHCI host controller interrupt
- description: optional, wakeup interrupt used to support runtime PM
interrupt-names:
items:
- const: host
- const: wakeup
power-domains:
description: A phandle to USB power domain node to control USB's MTCMOS
......@@ -99,9 +111,9 @@ properties:
vbus-supply:
description: Regulator of USB VBUS5v
usb3-lpm-capable:
description: supports USB3.0 LPM
type: boolean
usb3-lpm-capable: true
usb2-lpm-disable: true
imod-interval-ns:
description:
......@@ -127,10 +139,13 @@ properties:
- description:
The second cell represents the register base address of the glue
layer in syscon
- description:
- description: |
The third cell represents the hardware version of the glue layer,
1 is used by mt8173 etc, 2 is used by mt2712 etc
enum: [1, 2]
1 - used by mt8173 etc, revision 1 without following IPM rule;
2 - used by mt2712 etc, revision 2 following IPM rule;
101 - used by mt8183, specific 1.01;
102 - used by mt8192, specific 1.02;
enum: [1, 2, 101, 102]
mediatek,u3p-dis-msk:
$ref: /schemas/types.yaml#/definitions/uint32
......
......@@ -24,6 +24,7 @@ properties:
- mediatek,mt2712-mtu3
- mediatek,mt8173-mtu3
- mediatek,mt8183-mtu3
- mediatek,mt8192-mtu3
- const: mediatek,mtu3
reg:
......@@ -126,7 +127,7 @@ properties:
Any connector to the data bus of this controller should be modelled
using the OF graph bindings specified, if the "usb-role-switch"
property is used. See graph.txt
type: object
$ref: /schemas/graph.yaml#/properties/port
enable-manual-drd:
$ref: /schemas/types.yaml#/definitions/flag
......@@ -152,10 +153,13 @@ properties:
- description:
The second cell represents the register base address of the glue
layer in syscon
- description:
- description: |
The third cell represents the hardware version of the glue layer,
1 is used by mt8173 etc, 2 is used by mt2712 etc
enum: [1, 2]
1 - used by mt8173 etc, revision 1 without following IPM rule;
2 - used by mt2712 etc, revision 2 with following IPM rule;
101 - used by mt8183, specific 1.01;
102 - used by mt8192, specific 1.02;
enum: [1, 2, 101, 102]
mediatek,u3p-dis-msk:
$ref: /schemas/types.yaml#/definitions/uint32
......
......@@ -16,6 +16,7 @@ properties:
- qcom,msm8996-dwc3
- qcom,msm8998-dwc3
- qcom,sc7180-dwc3
- qcom,sc7280-dwc3
- qcom,sdm845-dwc3
- qcom,sdx55-dwc3
- qcom,sm8150-dwc3
......
......@@ -87,13 +87,19 @@ properties:
minItems: 1
snps,usb2-lpm-disable:
description: Indicate if we don't want to enable USB2 HW LPM
description: Indicate if we don't want to enable USB2 HW LPM for host
mode.
type: boolean
snps,usb3_lpm_capable:
description: Determines if platform is USB3 LPM capable
type: boolean
snps,usb2-gadget-lpm-disable:
description: Indicate if we don't want to enable USB2 HW LPM for gadget
mode.
type: boolean
snps,dis-start-transfer-quirk:
description:
When set, disable isoc START TRANSFER command failure SW work-around
......
......@@ -82,9 +82,9 @@ required:
additionalProperties: true
examples:
#hub connected to port 1
#device connected to port 2
#device connected to port 3
# hub connected to port 1
# device connected to port 2
# device connected to port 3
# interface 0 of configuration 1
# interface 0 of configuration 2
- |
......
USB NOP PHY
Required properties:
- compatible: should be usb-nop-xceiv
- #phy-cells: Must be 0
Optional properties:
- clocks: phandle to the PHY clock. Use as per
Documentation/devicetree/bindings/clock/clock-bindings.txt
This property is required if clock-frequency is specified.
- clock-names: Should be "main_clk"
- clock-frequency: the clock frequency (in Hz) that the PHY clock must
be configured to.
- vcc-supply: phandle to the regulator that provides power to the PHY.
- reset-gpios: Should specify the GPIO for reset.
- vbus-detect-gpio: should specify the GPIO detecting a VBus insertion
(see Documentation/devicetree/bindings/gpio/gpio.txt)
- vbus-regulator: should specify the regulator supplying current drawn from
the VBus line (see Documentation/devicetree/bindings/regulator/regulator.txt).
Example:
hsusb1_phy {
compatible = "usb-nop-xceiv";
clock-frequency = <19200000>;
clocks = <&osc 0>;
clock-names = "main_clk";
vcc-supply = <&hsusb1_vcc_regulator>;
reset-gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
vbus-detect-gpio = <&gpio2 13 GPIO_ACTIVE_HIGH>;
vbus-regulator = <&vbus_regulator>;
#phy-cells = <0>;
};
hsusb1_phy is a NOP USB PHY device that gets its clock from an oscillator
and expects that clock to be configured to 19.2 MHz by the NOP PHY driver.
hsusb1_vcc_regulator provides power to the PHY and GPIO 7 controls RESET.
GPIO 13 detects VBus insertion, and accordingly notifies the vbus-regulator.
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/usb-nop-xceiv.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: USB NOP PHY
maintainers:
- Rob Herring <robh@kernel.org>
properties:
compatible:
const: usb-nop-xceiv
clocks:
maxItems: 1
clock-names:
const: main_clk
clock-frequency: true
'#phy-cells':
const: 0
vcc-supply:
description: phandle to the regulator that provides power to the PHY.
reset-gpios:
maxItems: 1
vbus-detect-gpio:
description: Should specify the GPIO detecting a VBus insertion
maxItems: 1
vbus-regulator:
description: Should specify the regulator supplying current drawn from
the VBus line.
$ref: /schemas/types.yaml#/definitions/phandle
required:
- compatible
- '#phy-cells'
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
hsusb1_phy {
compatible = "usb-nop-xceiv";
clock-frequency = <19200000>;
clocks = <&osc 0>;
clock-names = "main_clk";
vcc-supply = <&hsusb1_vcc_regulator>;
reset-gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
vbus-detect-gpio = <&gpio2 13 GPIO_ACTIVE_HIGH>;
vbus-regulator = <&vbus_regulator>;
#phy-cells = <0>;
};
...
......@@ -109,15 +109,16 @@ well as to make sure they aren't relying on some HCD-specific behavior.
USB-Standard Types
==================
In ``<linux/usb/ch9.h>`` you will find the USB data types defined in
chapter 9 of the USB specification. These data types are used throughout
USB, and in APIs including this host side API, gadget APIs, usb character
devices and debugfs interfaces.
In ``drivers/usb/common/common.c`` and ``drivers/usb/common/debug.c`` you
will find the USB data types defined in chapter 9 of the USB specification.
These data types are used throughout USB, and in APIs including this host
side API, gadget APIs, usb character devices and debugfs interfaces.
.. kernel-doc:: include/linux/usb/ch9.h
:internal:
.. kernel-doc:: drivers/usb/common/common.c
:export:
.. _usb_header:
.. kernel-doc:: drivers/usb/common/debug.c
:export:
Host-Side Data Types and Macros
===============================
......
......@@ -791,7 +791,6 @@ CONFIG_USB_XHCI_MVEBU=y
CONFIG_USB_XHCI_TEGRA=m
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_HCD_STI=y
CONFIG_USB_EHCI_TEGRA=y
CONFIG_USB_EHCI_EXYNOS=m
CONFIG_USB_EHCI_MV=m
CONFIG_USB_OHCI_HCD=y
......@@ -817,6 +816,7 @@ CONFIG_USB_DWC2=y
CONFIG_USB_CHIPIDEA=y
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
CONFIG_USB_CHIPIDEA_TEGRA=y
CONFIG_USB_ISP1760=y
CONFIG_USB_HSIC_USB3503=y
CONFIG_AB8500_USB=y
......
......@@ -828,7 +828,7 @@
ranges;
status = "disabled";
usb_dwc3_0: dwc3@38100000 {
usb_dwc3_0: usb@38100000 {
compatible = "snps,dwc3";
reg = <0x38100000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
......@@ -869,7 +869,7 @@
ranges;
status = "disabled";
usb_dwc3_1: dwc3@38200000 {
usb_dwc3_1: usb@38200000 {
compatible = "snps,dwc3";
reg = <0x38200000 0x10000>;
clocks = <&clk IMX8MP_CLK_HSIO_AXI>,
......
......@@ -874,7 +874,7 @@
clocks = <&infracfg CLK_INFRA_UNIPRO_SCK>,
<&infracfg CLK_INFRA_USB>;
clock-names = "sys_ck", "ref_ck";
mediatek,syscon-wakeup = <&pericfg 0x400 0>;
mediatek,syscon-wakeup = <&pericfg 0x420 101>;
#address-cells = <2>;
#size-cells = <2>;
ranges;
......
......@@ -25,13 +25,13 @@
/* Protocol timeouts in ms */
#define TBNET_LOGIN_DELAY 4500
#define TBNET_LOGIN_TIMEOUT 500
#define TBNET_LOGOUT_TIMEOUT 100
#define TBNET_LOGOUT_TIMEOUT 1000
#define TBNET_RING_SIZE 256
#define TBNET_LOCAL_PATH 0xf
#define TBNET_LOGIN_RETRIES 60
#define TBNET_LOGOUT_RETRIES 5
#define TBNET_LOGOUT_RETRIES 10
#define TBNET_MATCH_FRAGS_ID BIT(1)
#define TBNET_64K_FRAMES BIT(2)
#define TBNET_MAX_MTU SZ_64K
#define TBNET_FRAME_SIZE SZ_4K
#define TBNET_MAX_PAYLOAD_SIZE \
......@@ -154,8 +154,8 @@ struct tbnet_ring {
* @login_sent: ThunderboltIP login message successfully sent
* @login_received: ThunderboltIP login message received from the remote
* host
* @transmit_path: HopID the other end needs to use building the
* opposite side path.
* @local_transmit_path: HopID we are using to send out packets
* @remote_transmit_path: HopID the other end is using to send packets to us
* @connection_lock: Lock serializing access to @login_sent,
* @login_received and @transmit_path.
* @login_retries: Number of login retries currently done
......@@ -184,7 +184,8 @@ struct tbnet {
atomic_t command_id;
bool login_sent;
bool login_received;
u32 transmit_path;
int local_transmit_path;
int remote_transmit_path;
struct mutex connection_lock;
int login_retries;
struct delayed_work login_work;
......@@ -257,7 +258,7 @@ static int tbnet_login_request(struct tbnet *net, u8 sequence)
atomic_inc_return(&net->command_id));
request.proto_version = TBIP_LOGIN_PROTO_VERSION;
request.transmit_path = TBNET_LOCAL_PATH;
request.transmit_path = net->local_transmit_path;
return tb_xdomain_request(xd, &request, sizeof(request),
TB_CFG_PKG_XDOMAIN_RESP, &reply,
......@@ -364,10 +365,10 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
mutex_lock(&net->connection_lock);
if (net->login_sent && net->login_received) {
int retries = TBNET_LOGOUT_RETRIES;
int ret, retries = TBNET_LOGOUT_RETRIES;
while (send_logout && retries-- > 0) {
int ret = tbnet_logout_request(net);
ret = tbnet_logout_request(net);
if (ret != -ETIMEDOUT)
break;
}
......@@ -377,8 +378,16 @@ static void tbnet_tear_down(struct tbnet *net, bool send_logout)
tbnet_free_buffers(&net->rx_ring);
tbnet_free_buffers(&net->tx_ring);
if (tb_xdomain_disable_paths(net->xd))
ret = tb_xdomain_disable_paths(net->xd,
net->local_transmit_path,
net->rx_ring.ring->hop,
net->remote_transmit_path,
net->tx_ring.ring->hop);
if (ret)
netdev_warn(net->dev, "failed to disable DMA paths\n");
tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
net->remote_transmit_path = 0;
}
net->login_retries = 0;
......@@ -424,7 +433,7 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data)
if (!ret) {
mutex_lock(&net->connection_lock);
net->login_received = true;
net->transmit_path = pkg->transmit_path;
net->remote_transmit_path = pkg->transmit_path;
/* If we reached the number of max retries or
* previous logout, schedule another round of
......@@ -597,12 +606,18 @@ static void tbnet_connected_work(struct work_struct *work)
if (!connected)
return;
ret = tb_xdomain_alloc_in_hopid(net->xd, net->remote_transmit_path);
if (ret != net->remote_transmit_path) {
netdev_err(net->dev, "failed to allocate Rx HopID\n");
return;
}
/* Both logins successful so enable the high-speed DMA paths and
* start the network device queue.
*/
ret = tb_xdomain_enable_paths(net->xd, TBNET_LOCAL_PATH,
ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
net->rx_ring.ring->hop,
net->transmit_path,
net->remote_transmit_path,
net->tx_ring.ring->hop);
if (ret) {
netdev_err(net->dev, "failed to enable DMA paths\n");
......@@ -629,6 +644,7 @@ static void tbnet_connected_work(struct work_struct *work)
err_stop_rings:
tb_ring_stop(net->rx_ring.ring);
tb_ring_stop(net->tx_ring.ring);
tb_xdomain_release_in_hopid(net->xd, net->remote_transmit_path);
}
static void tbnet_login_work(struct work_struct *work)
......@@ -851,6 +867,7 @@ static int tbnet_open(struct net_device *dev)
struct tb_xdomain *xd = net->xd;
u16 sof_mask, eof_mask;
struct tb_ring *ring;
int hopid;
netif_carrier_off(dev);
......@@ -862,6 +879,15 @@ static int tbnet_open(struct net_device *dev)
}
net->tx_ring.ring = ring;
hopid = tb_xdomain_alloc_out_hopid(xd, -1);
if (hopid < 0) {
netdev_err(dev, "failed to allocate Tx HopID\n");
tb_ring_free(net->tx_ring.ring);
net->tx_ring.ring = NULL;
return hopid;
}
net->local_transmit_path = hopid;
sof_mask = BIT(TBIP_PDF_FRAME_START);
eof_mask = BIT(TBIP_PDF_FRAME_END);
......@@ -893,6 +919,8 @@ static int tbnet_stop(struct net_device *dev)
tb_ring_free(net->rx_ring.ring);
net->rx_ring.ring = NULL;
tb_xdomain_release_out_hopid(net->xd, net->local_transmit_path);
tb_ring_free(net->tx_ring.ring);
net->tx_ring.ring = NULL;
......@@ -1340,7 +1368,7 @@ static int __init tbnet_init(void)
* the moment.
*/
tb_property_add_immediate(tbnet_dir, "prtcstns",
TBNET_MATCH_FRAGS_ID);
TBNET_MATCH_FRAGS_ID | TBNET_64K_FRAMES);
ret = tb_register_property_dir("network", tbnet_dir);
if (ret) {
......
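Condensed into one place, the HopID lifecycle these tbnet changes introduce
on the XDomain service-driver side looks roughly like the sketch below
(distilled from the hunks above with error handling elided; not a drop-in
snippet):

	/* open: reserve a Tx HopID; passing -1 lets the core pick any free one */
	hopid = tb_xdomain_alloc_out_hopid(xd, -1);
	net->local_transmit_path = hopid;

	/* both logins done: reserve the Rx HopID the peer transmits on,
	 * then program the DMA paths with explicit HopIDs and ring numbers
	 */
	ret = tb_xdomain_alloc_in_hopid(xd, net->remote_transmit_path);
	ret = tb_xdomain_enable_paths(xd, net->local_transmit_path,
				      net->rx_ring.ring->hop,
				      net->remote_transmit_path,
				      net->tx_ring.ring->hop);

	/* teardown: disable the same paths, then release both HopIDs */
	tb_xdomain_disable_paths(xd, net->local_transmit_path,
				 net->rx_ring.ring->hop,
				 net->remote_transmit_path,
				 net->tx_ring.ring->hop);
	tb_xdomain_release_in_hopid(xd, net->remote_transmit_path);
	tb_xdomain_release_out_hopid(xd, net->local_transmit_path);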
......@@ -124,12 +124,31 @@ static const struct software_node usb_connector_node = {
.properties = usb_connector_properties,
};
static const struct software_node altmodes_node = {
.name = "altmodes",
.parent = &usb_connector_node,
};
static const struct property_entry dp_altmode_properties[] = {
PROPERTY_ENTRY_U32("svid", 0xff01),
PROPERTY_ENTRY_U32("vdo", 0x0c0086),
{ }
};
static const struct software_node dp_altmode_node = {
.name = "displayport-altmode",
.parent = &altmodes_node,
.properties = dp_altmode_properties,
};
static const struct software_node *node_group[] = {
&fusb302_node,
&max17047_node,
&pi3usb30532_node,
&displayport_node,
&usb_connector_node,
&altmodes_node,
&dp_altmode_node,
NULL
};
......
......@@ -17,7 +17,7 @@
#define TB_CTL_RX_PKG_COUNT 10
#define TB_CTL_RETRIES 4
#define TB_CTL_RETRIES 1
/**
* struct tb_ctl - Thunderbolt control channel
......@@ -29,6 +29,7 @@
* @request_queue_lock: Lock protecting @request_queue
* @request_queue: List of outstanding requests
* @running: Is the control channel running at the moment
* @timeout_msec: Default timeout for non-raw control messages
* @callback: Callback called when hotplug message is received
* @callback_data: Data passed to @callback
*/
......@@ -43,6 +44,7 @@ struct tb_ctl {
struct list_head request_queue;
bool running;
int timeout_msec;
event_cb callback;
void *callback_data;
};
......@@ -613,6 +615,7 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
/**
* tb_ctl_alloc() - allocate a control channel
* @nhi: Pointer to NHI
* @timeout_msec: Default timeout used with non-raw control messages
* @cb: Callback called for plug events
* @cb_data: Data passed to @cb
*
......@@ -620,13 +623,15 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
*
* Return: Returns a pointer on success or NULL on failure.
*/
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
void *cb_data)
{
int i;
struct tb_ctl *ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
if (!ctl)
return NULL;
ctl->nhi = nhi;
ctl->timeout_msec = timeout_msec;
ctl->callback = cb;
ctl->callback_data = cb_data;
......@@ -802,14 +807,12 @@ static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
* tb_cfg_reset() - send a reset packet and wait for a response
* @ctl: Control channel pointer
* @route: Router string for the router to send reset
* @timeout_msec: Timeout in ms how long to wait for the response
*
* If the switch at route is incorrectly configured then we will not receive a
* reply (even though the switch will reset). The caller should check for
* -ETIMEDOUT and attempt to reconfigure the switch.
*/
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec)
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
{
struct cfg_reset_pkg request = { .header = tb_cfg_make_header(route) };
struct tb_cfg_result res = { 0 };
......@@ -831,7 +834,7 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
req->response_size = sizeof(reply);
req->response_type = TB_CFG_PKG_RESET;
res = tb_cfg_request_sync(ctl, req, timeout_msec);
res = tb_cfg_request_sync(ctl, req, ctl->timeout_msec);
tb_cfg_request_put(req);
......@@ -1007,7 +1010,7 @@ int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length)
{
struct tb_cfg_result res = tb_cfg_read_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
space, offset, length, ctl->timeout_msec);
switch (res.err) {
case 0:
/* Success */
......@@ -1033,7 +1036,7 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
enum tb_cfg_space space, u32 offset, u32 length)
{
struct tb_cfg_result res = tb_cfg_write_raw(ctl, buffer, route, port,
space, offset, length, TB_CFG_DEFAULT_TIMEOUT);
space, offset, length, ctl->timeout_msec);
switch (res.err) {
case 0:
/* Success */
......@@ -1071,7 +1074,7 @@ int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route)
u32 dummy;
struct tb_cfg_result res = tb_cfg_read_raw(ctl, &dummy, route, 0,
TB_CFG_SWITCH, 0, 1,
TB_CFG_DEFAULT_TIMEOUT);
ctl->timeout_msec);
if (res.err == 1)
return -EIO;
if (res.err)
......
......@@ -21,15 +21,14 @@ struct tb_ctl;
typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size);
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int timeout_msec, event_cb cb,
void *cb_data);
void tb_ctl_start(struct tb_ctl *ctl);
void tb_ctl_stop(struct tb_ctl *ctl);
void tb_ctl_free(struct tb_ctl *ctl);
/* configuration commands */
#define TB_CFG_DEFAULT_TIMEOUT 5000 /* msec */
struct tb_cfg_result {
u64 response_route;
u32 response_port; /*
......@@ -124,8 +123,7 @@ static inline struct tb_cfg_header tb_cfg_make_header(u64 route)
}
int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route,
int timeout_msec);
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route);
struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
u64 route, u32 port,
enum tb_cfg_space space, u32 offset,
......
......@@ -251,6 +251,29 @@ static ssize_t counters_write(struct file *file, const char __user *user_buf,
return ret < 0 ? ret : count;
}
static void cap_show_by_dw(struct seq_file *s, struct tb_switch *sw,
struct tb_port *port, unsigned int cap,
unsigned int offset, u8 cap_id, u8 vsec_id,
int dwords)
{
int i, ret;
u32 data;
for (i = 0; i < dwords; i++) {
if (port)
ret = tb_port_read(port, &data, TB_CFG_PORT, cap + offset + i, 1);
else
ret = tb_sw_read(sw, &data, TB_CFG_SWITCH, cap + offset + i, 1);
if (ret) {
seq_printf(s, "0x%04x <not accessible>\n", cap + offset + i);
continue;
}
seq_printf(s, "0x%04x %4d 0x%02x 0x%02x 0x%08x\n", cap + offset + i,
offset + i, cap_id, vsec_id, data);
}
}
static void cap_show(struct seq_file *s, struct tb_switch *sw,
struct tb_port *port, unsigned int cap, u8 cap_id,
u8 vsec_id, int length)
......@@ -267,10 +290,7 @@ static void cap_show(struct seq_file *s, struct tb_switch *sw,
else
ret = tb_sw_read(sw, data, TB_CFG_SWITCH, cap + offset, dwords);
if (ret) {
seq_printf(s, "0x%04x <not accessible>\n",
cap + offset);
if (dwords > 1)
seq_printf(s, "0x%04x ...\n", cap + offset + 1);
cap_show_by_dw(s, sw, port, cap, offset, cap_id, vsec_id, length);
return;
}
......@@ -341,15 +361,6 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
} else {
length = header.extended_short.length;
vsec_id = header.extended_short.vsec_id;
/*
* Ice Lake and Tiger Lake do not implement the
* full length of the capability, only first 32
* dwords so hard-code it here.
*/
if (!vsec_id &&
(tb_switch_is_ice_lake(port->sw) ||
tb_switch_is_tiger_lake(port->sw)))
length = 32;
}
break;
......
......@@ -13,7 +13,6 @@
#include <linux/sizes.h>
#include <linux/thunderbolt.h>
#define DMA_TEST_HOPID 8
#define DMA_TEST_TX_RING_SIZE 64
#define DMA_TEST_RX_RING_SIZE 256
#define DMA_TEST_FRAME_SIZE SZ_4K
......@@ -72,7 +71,9 @@ static const char * const dma_test_result_names[] = {
* @svc: XDomain service the driver is bound to
* @xd: XDomain the service belongs to
* @rx_ring: Software ring holding RX frames
* @rx_hopid: HopID used for receiving frames
* @tx_ring: Software ring holding TX frames
* @tx_hopid: HopID used for sending frames
* @packets_to_send: Number of packets to send
* @packets_to_receive: Number of packets to receive
* @packets_sent: Actual number of packets sent
......@@ -92,7 +93,9 @@ struct dma_test {
const struct tb_service *svc;
struct tb_xdomain *xd;
struct tb_ring *rx_ring;
int rx_hopid;
struct tb_ring *tx_ring;
int tx_hopid;
unsigned int packets_to_send;
unsigned int packets_to_receive;
unsigned int packets_sent;
......@@ -119,10 +122,12 @@ static void *dma_test_pattern;
static void dma_test_free_rings(struct dma_test *dt)
{
if (dt->rx_ring) {
tb_xdomain_release_in_hopid(dt->xd, dt->rx_hopid);
tb_ring_free(dt->rx_ring);
dt->rx_ring = NULL;
}
if (dt->tx_ring) {
tb_xdomain_release_out_hopid(dt->xd, dt->tx_hopid);
tb_ring_free(dt->tx_ring);
dt->tx_ring = NULL;
}
......@@ -151,6 +156,14 @@ static int dma_test_start_rings(struct dma_test *dt)
dt->tx_ring = ring;
e2e_tx_hop = ring->hop;
ret = tb_xdomain_alloc_out_hopid(xd, -1);
if (ret < 0) {
dma_test_free_rings(dt);
return ret;
}
dt->tx_hopid = ret;
}
if (dt->packets_to_receive) {
......@@ -168,11 +181,19 @@ static int dma_test_start_rings(struct dma_test *dt)
}
dt->rx_ring = ring;
ret = tb_xdomain_alloc_in_hopid(xd, -1);
if (ret < 0) {
dma_test_free_rings(dt);
return ret;
}
dt->rx_hopid = ret;
}
ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
ret = tb_xdomain_enable_paths(dt->xd, dt->tx_hopid,
dt->tx_ring ? dt->tx_ring->hop : 0,
DMA_TEST_HOPID,
dt->rx_hopid,
dt->rx_ring ? dt->rx_ring->hop : 0);
if (ret) {
dma_test_free_rings(dt);
......@@ -189,12 +210,18 @@ static int dma_test_start_rings(struct dma_test *dt)
static void dma_test_stop_rings(struct dma_test *dt)
{
int ret;
if (dt->rx_ring)
tb_ring_stop(dt->rx_ring);
if (dt->tx_ring)
tb_ring_stop(dt->tx_ring);
if (tb_xdomain_disable_paths(dt->xd))
ret = tb_xdomain_disable_paths(dt->xd, dt->tx_hopid,
dt->tx_ring ? dt->tx_ring->hop : 0,
dt->rx_hopid,
dt->rx_ring ? dt->rx_ring->hop : 0);
if (ret)
dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
dma_test_free_rings(dt);
......
......@@ -341,9 +341,34 @@ struct device_type tb_domain_type = {
.release = tb_domain_release,
};
static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
struct tb *tb = data;
if (!tb->cm_ops->handle_event) {
tb_warn(tb, "domain does not have event handler\n");
return true;
}
switch (type) {
case TB_CFG_PKG_XDOMAIN_REQ:
case TB_CFG_PKG_XDOMAIN_RESP:
if (tb_is_xdomain_enabled())
return tb_xdomain_handle_request(tb, type, buf, size);
break;
default:
tb->cm_ops->handle_event(tb, type, buf, size);
}
return true;
}
/**
* tb_domain_alloc() - Allocate a domain
* @nhi: Pointer to the host controller
* @timeout_msec: Control channel timeout for non-raw messages
* @privsize: Size of the connection manager private data
*
* Allocates and initializes a new Thunderbolt domain. Connection
......@@ -355,7 +380,7 @@ struct device_type tb_domain_type = {
*
* Return: allocated domain structure or %NULL in case of error
*/
struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize)
{
struct tb *tb;
......@@ -382,6 +407,10 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
if (!tb->wq)
goto err_remove_ida;
tb->ctl = tb_ctl_alloc(nhi, timeout_msec, tb_domain_event_cb, tb);
if (!tb->ctl)
goto err_destroy_wq;
tb->dev.parent = &nhi->pdev->dev;
tb->dev.bus = &tb_bus_type;
tb->dev.type = &tb_domain_type;
......@@ -391,6 +420,8 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
return tb;
err_destroy_wq:
destroy_workqueue(tb->wq);
err_remove_ida:
ida_simple_remove(&tb_domain_ida, tb->index);
err_free:
......@@ -399,30 +430,6 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
return NULL;
}
static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
struct tb *tb = data;
if (!tb->cm_ops->handle_event) {
tb_warn(tb, "domain does not have event handler\n");
return true;
}
switch (type) {
case TB_CFG_PKG_XDOMAIN_REQ:
case TB_CFG_PKG_XDOMAIN_RESP:
if (tb_is_xdomain_enabled())
return tb_xdomain_handle_request(tb, type, buf, size);
break;
default:
tb->cm_ops->handle_event(tb, type, buf, size);
}
return true;
}
/**
* tb_domain_add() - Add domain to the system
* @tb: Domain to add
......@@ -442,13 +449,6 @@ int tb_domain_add(struct tb *tb)
return -EINVAL;
mutex_lock(&tb->lock);
tb->ctl = tb_ctl_alloc(tb->nhi, tb_domain_event_cb, tb);
if (!tb->ctl) {
ret = -ENOMEM;
goto err_unlock;
}
/*
* tb_schedule_hotplug_handler may be called as soon as the config
channel is started. That's why we have to hold the lock here.
......@@ -493,7 +493,6 @@ int tb_domain_add(struct tb *tb)
device_del(&tb->dev);
err_ctl_stop:
tb_ctl_stop(tb->ctl);
err_unlock:
mutex_unlock(&tb->lock);
return ret;
......@@ -793,6 +792,10 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
* tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain
* @tb: Domain enabling the DMA paths
* @xd: XDomain DMA paths are created to
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* Calls connection manager specific method to enable DMA paths to the
* XDomain in question.
......@@ -801,18 +804,25 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
*/
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!tb->cm_ops->approve_xdomain_paths)
return -ENOTSUPP;
return tb->cm_ops->approve_xdomain_paths(tb, xd);
return tb->cm_ops->approve_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path, receive_ring);
}
/**
* tb_domain_disconnect_xdomain_paths() - Disable DMA paths for XDomain
* @tb: Domain disabling the DMA paths
* @xd: XDomain whose DMA paths are disconnected
* @transmit_path: HopID we are using to send out packets
* @transmit_ring: DMA ring used to send out packets
* @receive_path: HopID the other end is using to send packets to us
* @receive_ring: DMA ring used to receive packets from @receive_path
*
* Calls connection manager specific method to disconnect DMA paths to
* the XDomain in question.
......@@ -821,12 +831,15 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
*/
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!tb->cm_ops->disconnect_xdomain_paths)
return -ENOTSUPP;
return tb->cm_ops->disconnect_xdomain_paths(tb, xd);
return tb->cm_ops->disconnect_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path, receive_ring);
}
static int disconnect_xdomain(struct device *dev, void *data)
......@@ -837,7 +850,7 @@ static int disconnect_xdomain(struct device *dev, void *data)
xd = tb_to_xdomain(dev);
if (xd && xd->tb == tb)
ret = tb_xdomain_disable_paths(xd);
ret = tb_xdomain_disable_all_paths(xd);
return ret;
}
......
......@@ -277,6 +277,16 @@ struct tb_drom_entry_port {
u8 unknown4:2;
} __packed;
/* USB4 product descriptor */
struct tb_drom_entry_desc {
struct tb_drom_entry_header header;
u16 bcdUSBSpec;
u16 idVendor;
u16 idProduct;
u16 bcdProductFWRevision;
u32 TID;
u8 productHWRevision;
};
/**
* tb_drom_read_uid_only() - Read UID directly from DROM
......@@ -329,6 +339,16 @@ static int tb_drom_parse_entry_generic(struct tb_switch *sw,
if (!sw->device_name)
return -ENOMEM;
break;
case 9: {
const struct tb_drom_entry_desc *desc =
(const struct tb_drom_entry_desc *)entry;
if (!sw->vendor && !sw->device) {
sw->vendor = desc->idVendor;
sw->device = desc->idProduct;
}
break;
}
}
return 0;
......@@ -521,6 +541,51 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
return tb_eeprom_read_n(sw, offset, val, count);
}
static int tb_drom_parse(struct tb_switch *sw)
{
const struct tb_drom_header *header =
(const struct tb_drom_header *)sw->drom;
u32 crc;
crc = tb_crc8((u8 *) &header->uid, 8);
if (crc != header->uid_crc8) {
tb_sw_warn(sw,
"DROM UID CRC8 mismatch (expected: %#x, got: %#x), aborting\n",
header->uid_crc8, crc);
return -EINVAL;
}
if (!sw->uid)
sw->uid = header->uid;
sw->vendor = header->vendor_id;
sw->device = header->model_id;
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
}
return tb_drom_parse_entries(sw);
}
static int usb4_drom_parse(struct tb_switch *sw)
{
const struct tb_drom_header *header =
(const struct tb_drom_header *)sw->drom;
u32 crc;
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n",
header->data_crc32, crc);
return -EINVAL;
}
return tb_drom_parse_entries(sw);
}
/**
* tb_drom_read() - Copy DROM to sw->drom and parse it
* @sw: Router whose DROM to read and parse
......@@ -534,7 +599,6 @@ static int tb_drom_read_n(struct tb_switch *sw, u16 offset, u8 *val,
int tb_drom_read(struct tb_switch *sw)
{
u16 size;
u32 crc;
struct tb_drom_header *header;
int res, retries = 1;
......@@ -599,31 +663,21 @@ int tb_drom_read(struct tb_switch *sw)
goto err;
}
crc = tb_crc8((u8 *) &header->uid, 8);
if (crc != header->uid_crc8) {
tb_sw_warn(sw,
"drom uid crc8 mismatch (expected: %#x, got: %#x), aborting\n",
header->uid_crc8, crc);
goto err;
}
if (!sw->uid)
sw->uid = header->uid;
sw->vendor = header->vendor_id;
sw->device = header->model_id;
tb_check_quirks(sw);
tb_sw_dbg(sw, "DROM version: %d\n", header->device_rom_revision);
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) {
tb_sw_warn(sw,
"drom data crc32 mismatch (expected: %#x, got: %#x), continuing\n",
header->data_crc32, crc);
switch (header->device_rom_revision) {
case 3:
res = usb4_drom_parse(sw);
break;
default:
tb_sw_warn(sw, "DROM device_rom_revision %#x unknown\n",
header->device_rom_revision);
fallthrough;
case 1:
res = tb_drom_parse(sw);
break;
}
if (header->device_rom_revision > 2)
tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
header->device_rom_revision);
res = tb_drom_parse_entries(sw);
/* If the DROM parsing fails, wait a moment and retry once */
if (res == -EILSEQ && retries--) {
tb_sw_warn(sw, "parsing DROM failed, retrying\n");
......@@ -633,10 +687,11 @@ int tb_drom_read(struct tb_switch *sw)
goto parse;
}
return res;
if (!res)
return 0;
err:
kfree(sw->drom);
sw->drom = NULL;
return -EIO;
}
......@@ -557,7 +557,9 @@ static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
return 0;
}
static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct icm_fr_pkg_approve_xdomain_response reply;
struct icm_fr_pkg_approve_xdomain request;
......@@ -568,10 +570,10 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
request.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT | xd->link;
memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));
request.transmit_path = xd->transmit_path;
request.transmit_ring = xd->transmit_ring;
request.receive_path = xd->receive_path;
request.receive_ring = xd->receive_ring;
request.transmit_path = transmit_path;
request.transmit_ring = transmit_ring;
request.receive_path = receive_path;
request.receive_ring = receive_ring;
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
......@@ -585,7 +587,9 @@ static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
return 0;
}
static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
u8 phy_port;
u8 cmd;
......@@ -1122,7 +1126,9 @@ static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
return 0;
}
static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct icm_tr_pkg_approve_xdomain_response reply;
struct icm_tr_pkg_approve_xdomain request;
......@@ -1132,10 +1138,10 @@ static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
request.hdr.code = ICM_APPROVE_XDOMAIN;
request.route_hi = upper_32_bits(xd->route);
request.route_lo = lower_32_bits(xd->route);
request.transmit_path = xd->transmit_path;
request.transmit_ring = xd->transmit_ring;
request.receive_path = xd->receive_path;
request.receive_ring = xd->receive_ring;
request.transmit_path = transmit_path;
request.transmit_ring = transmit_ring;
request.receive_path = receive_path;
request.receive_ring = receive_ring;
memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));
memset(&reply, 0, sizeof(reply));
......@@ -1176,7 +1182,9 @@ static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd,
return 0;
}
static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int icm_tr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
int ret;
......@@ -2416,7 +2424,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
struct icm *icm;
struct tb *tb;
tb = tb_domain_alloc(nhi, sizeof(struct icm));
tb = tb_domain_alloc(nhi, ICM_TIMEOUT, sizeof(struct icm));
if (!tb)
return NULL;
......
......@@ -501,6 +501,77 @@ ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
return ret < 0 ? ret : 0;
}
/**
* tb_property_copy_dir() - Take a deep copy of directory
* @dir: Directory to copy
*
* This function takes a deep copy of @dir and returns the copy. In
* case of error returns %NULL. The resulting directory needs to be
* released by calling tb_property_free_dir().
*/
struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
{
struct tb_property *property, *p = NULL;
struct tb_property_dir *d;
if (!dir)
return NULL;
d = tb_property_create_dir(dir->uuid);
if (!d)
return NULL;
list_for_each_entry(property, &dir->properties, list) {
p = tb_property_alloc(property->key, property->type);
if (!p)
goto err_free;
p->length = property->length;
switch (property->type) {
case TB_PROPERTY_TYPE_DIRECTORY:
p->value.dir = tb_property_copy_dir(property->value.dir);
if (!p->value.dir)
goto err_free;
break;
case TB_PROPERTY_TYPE_DATA:
p->value.data = kmemdup(property->value.data,
property->length * 4,
GFP_KERNEL);
if (!p->value.data)
goto err_free;
break;
case TB_PROPERTY_TYPE_TEXT:
p->value.text = kzalloc(p->length * 4, GFP_KERNEL);
if (!p->value.text)
goto err_free;
strcpy(p->value.text, property->value.text);
break;
case TB_PROPERTY_TYPE_VALUE:
p->value.immediate = property->value.immediate;
break;
default:
break;
}
list_add_tail(&p->list, &d->properties);
}
return d;
err_free:
kfree(p);
tb_property_free_dir(d);
return NULL;
}
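/*
 * Hedged usage sketch, not part of this patch: deep-copy a directory the
 * caller already holds (a hypothetical "dir"), extend the copy without
 * mutating the original, and release it with tb_property_free_dir() as
 * the kernel-doc above requires. tb_property_add_immediate() is the same
 * helper the tbnet driver uses for its "prtcstns" property.
 */
static struct tb_property_dir *
example_copy_and_extend(const struct tb_property_dir *dir)
{
	struct tb_property_dir *copy = tb_property_copy_dir(dir);

	if (!copy)
		return NULL;
	if (tb_property_add_immediate(copy, "test", 1)) {
		tb_property_free_dir(copy);
		return NULL;
	}
	return copy;
}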
/**
* tb_property_add_immediate() - Add immediate property to directory
* @parent: Directory to add the property
......
......@@ -626,28 +626,6 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
TB_CFG_PORT, ADP_CS_4, 1);
}
/**
* tb_port_set_initial_credits() - Set initial port link credits allocated
* @port: Port to set the initial credits
* @credits: Number of credits to allocate
*
* Set initial credits value to be used for ingress shared buffering.
*/
int tb_port_set_initial_credits(struct tb_port *port, u32 credits)
{
u32 data;
int ret;
ret = tb_port_read(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
if (ret)
return ret;
data &= ~ADP_CS_5_LCA_MASK;
data |= (credits << ADP_CS_5_LCA_SHIFT) & ADP_CS_5_LCA_MASK;
return tb_port_write(port, &data, TB_CFG_PORT, ADP_CS_5, 1);
}
/**
* tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER
* @port: Port whose counters to clear
......@@ -1331,7 +1309,7 @@ int tb_switch_reset(struct tb_switch *sw)
TB_CFG_SWITCH, 2, 2);
if (res.err)
return res.err;
res = tb_cfg_reset(sw->tb->ctl, tb_route(sw), TB_CFG_DEFAULT_TIMEOUT);
res = tb_cfg_reset(sw->tb->ctl, tb_route(sw));
if (res.err > 0)
return -EIO;
return res.err;
......@@ -1762,6 +1740,18 @@ static struct attribute *switch_attrs[] = {
NULL,
};
static bool has_port(const struct tb_switch *sw, enum tb_port_type type)
{
const struct tb_port *port;
tb_switch_for_each_port(sw, port) {
if (!port->disabled && port->config.type == type)
return true;
}
return false;
}
static umode_t switch_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
......@@ -1770,7 +1760,8 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
if (attr == &dev_attr_authorized.attr) {
if (sw->tb->security_level == TB_SECURITY_NOPCIE ||
sw->tb->security_level == TB_SECURITY_DPONLY)
sw->tb->security_level == TB_SECURITY_DPONLY ||
!has_port(sw, TB_TYPE_PCIE_UP))
return 0;
} else if (attr == &dev_attr_device.attr) {
if (!sw->device)
......@@ -1849,6 +1840,39 @@ static void tb_switch_release(struct device *dev)
kfree(sw);
}
static int tb_switch_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct tb_switch *sw = tb_to_switch(dev);
const char *type;
if (sw->config.thunderbolt_version == USB4_VERSION_1_0) {
if (add_uevent_var(env, "USB4_VERSION=1.0"))
return -ENOMEM;
}
if (!tb_route(sw)) {
type = "host";
} else {
const struct tb_port *port;
bool hub = false;
/* Device is a hub if it has any downstream ports */
tb_switch_for_each_port(sw, port) {
if (!port->disabled && !tb_is_upstream_port(port) &&
tb_port_is_null(port)) {
hub = true;
break;
}
}
type = hub ? "hub" : "device";
}
if (add_uevent_var(env, "USB4_TYPE=%s", type))
return -ENOMEM;
return 0;
}
/*
* Currently only need to provide the callbacks. Everything else is handled
* in the connection manager.
......@@ -1882,6 +1906,7 @@ static const struct dev_pm_ops tb_switch_pm_ops = {
struct device_type tb_switch_type = {
.name = "thunderbolt_device",
.release = tb_switch_release,
.uevent = tb_switch_uevent,
.pm = &tb_switch_pm_ops,
};
......@@ -2542,6 +2567,8 @@ int tb_switch_add(struct tb_switch *sw)
}
tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
tb_check_quirks(sw);
ret = tb_switch_set_uuid(sw);
if (ret) {
dev_err(&sw->dev, "failed to set UUID\n");
......
......@@ -15,6 +15,8 @@
#include "tb_regs.h"
#include "tunnel.h"
#define TB_TIMEOUT 100 /* ms */
/**
* struct tb_cm - Simple Thunderbolt connection manager
* @tunnel_list: List of active tunnels
......@@ -1077,7 +1079,9 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
return 0;
}
static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *nhi_port, *dst_port;
......@@ -1089,9 +1093,8 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
mutex_lock(&tb->lock);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, xd->transmit_ring,
xd->transmit_path, xd->receive_ring,
xd->receive_path);
tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path,
transmit_ring, receive_path, receive_ring);
if (!tunnel) {
mutex_unlock(&tb->lock);
return -ENOMEM;
......@@ -1110,29 +1113,40 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
return 0;
}
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
struct tb_port *dst_port;
struct tb_tunnel *tunnel;
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *nhi_port, *dst_port;
struct tb_tunnel *tunnel, *n;
struct tb_switch *sw;
sw = tb_to_switch(xd->dev.parent);
dst_port = tb_port_at(xd->route, sw);
nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
/*
* It is possible that the tunnel was already torn down (in
* case of cable disconnect) so it is fine if we cannot find it
* here anymore.
*/
tunnel = tb_find_tunnel(tb, TB_TUNNEL_DMA, NULL, dst_port);
tb_deactivate_and_free_tunnel(tunnel);
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
if (!tb_tunnel_is_dma(tunnel))
continue;
if (tunnel->src_port != nhi_port || tunnel->dst_port != dst_port)
continue;
if (tb_tunnel_match_dma(tunnel, transmit_path, transmit_ring,
receive_path, receive_ring))
tb_deactivate_and_free_tunnel(tunnel);
}
}
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring)
{
if (!xd->is_unplugged) {
mutex_lock(&tb->lock);
__tb_disconnect_xdomain_paths(tb, xd);
__tb_disconnect_xdomain_paths(tb, xd, transmit_path,
transmit_ring, receive_path,
receive_ring);
mutex_unlock(&tb->lock);
}
return 0;
......@@ -1208,12 +1222,12 @@ static void tb_handle_hotplug(struct work_struct *work)
* tb_xdomain_remove() so setting XDomain as
* unplugged here prevents deadlock if they call
* tb_xdomain_disable_paths(). We will tear down
* the path below.
* all the tunnels below.
*/
xd->is_unplugged = true;
tb_xdomain_remove(xd);
port->xdomain = NULL;
__tb_disconnect_xdomain_paths(tb, xd);
__tb_disconnect_xdomain_paths(tb, xd, -1, -1, -1, -1);
tb_xdomain_put(xd);
tb_port_unconfigure_xdomain(port);
} else if (tb_port_is_dpout(port) || tb_port_is_dpin(port)) {
......@@ -1562,7 +1576,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
struct tb_cm *tcm;
struct tb *tb;
tb = tb_domain_alloc(nhi, sizeof(*tcm));
tb = tb_domain_alloc(nhi, TB_TIMEOUT, sizeof(*tcm));
if (!tb)
return NULL;
......
......@@ -406,8 +406,12 @@ struct tb_cm_ops {
int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
const u8 *challenge, u8 *response);
int (*disconnect_pcie_paths)(struct tb *tb);
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata,
u8 *status, const void *tx_data, size_t tx_data_len,
void *rx_data, size_t rx_data_len);
......@@ -625,7 +629,7 @@ void tb_domain_exit(void);
int tb_xdomain_init(void);
void tb_xdomain_exit(void);
struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize);
int tb_domain_add(struct tb *tb);
void tb_domain_remove(struct tb *tb);
int tb_domain_suspend_noirq(struct tb *tb);
......@@ -641,8 +645,12 @@ int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_disconnect_pcie_paths(struct tb *tb);
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
int receive_path, int receive_ring);
int tb_domain_disconnect_all_paths(struct tb *tb);
static inline struct tb *tb_domain_get(struct tb *tb)
......@@ -787,32 +795,6 @@ static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
return false;
}
static inline bool tb_switch_is_ice_lake(const struct tb_switch *sw)
{
if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_ICL_NHI0:
case PCI_DEVICE_ID_INTEL_ICL_NHI1:
return true;
}
}
return false;
}
static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
{
if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_TGL_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_NHI1:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
return true;
}
}
return false;
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
......@@ -860,7 +842,6 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_set_initial_credits(struct tb_port *port, u32 credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_port_unlock(struct tb_port *port);
int tb_port_enable(struct tb_port *port);
......
......@@ -119,6 +119,7 @@ static struct tb_switch *alloc_host(struct kunit *test)
sw->ports[7].config.type = TB_TYPE_NHI;
sw->ports[7].config.max_in_hop_id = 11;
sw->ports[7].config.max_out_hop_id = 11;
sw->ports[7].config.nfc_credits = 0x41800000;
sw->ports[8].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[8].config.max_in_hop_id = 8;
......@@ -1594,6 +1595,489 @@ static void tb_test_tunnel_port_on_path(struct kunit *test)
tb_tunnel_free(dp_tunnel);
}
static void tb_test_tunnel_dma(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA tunnel from NHI to port 1 and back.
*
* [Host 1]
* 1 ^ In HopID 1 -> Out HopID 8
* |
* v In HopID 8 -> Out HopID 1
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].next_hop_index, 8);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_rx(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA RX tunnel from port 1 to NHI.
*
* [Host 1]
* 1 ^
* |
* | In HopID 15 -> Out HopID 2
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 2);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 15);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 2);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_tx(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
/*
* Create DMA TX tunnel from NHI to port 1.
*
* [Host 1]
* 1 | In HopID 2 -> Out HopID 15
* |
* v
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 2, -1, -1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].next_hop_index, 15);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_chain(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
/*
* Create DMA tunnel from NHI to Device #2 port 3 and back.
*
* [Host 1]
* 1 ^ In HopID 1 -> Out HopID x
* |
* 1 | In HopID x -> Out HopID 1
* [Device #1]
* 7 \
* 1 \
* [Device #2]
* 3 | In HopID x -> Out HopID 8
* |
* v In HopID 8 -> Out HopID x
* ............ Domain border
* |
* [Host 2]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x701, true);
nhi = &host->ports[7];
port = &dev2->ports[3];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 8, 1, 8, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DMA);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, nhi);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, port);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)2);
/* RX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[0].in_hop_index, 8);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].out_port,
&dev2->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].in_port,
&dev1->ports[7]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port,
&dev1->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].in_port,
&host->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[0]->hops[2].next_hop_index, 1);
/* TX path */
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, nhi);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[0].in_hop_index, 1);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].in_port,
&dev1->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port,
&dev1->ports[7]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].in_port,
&dev2->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, port);
KUNIT_EXPECT_EQ(test, tunnel->paths[1]->hops[2].next_hop_index, 8);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dma_match(struct kunit *test)
{
struct tb_port *nhi, *port;
struct tb_tunnel *tunnel;
struct tb_switch *host;
host = alloc_host(test);
nhi = &host->ports[7];
port = &host->ports[1];
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, 15, 1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, 1, 15, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 8, -1, 8, -1));
tb_tunnel_free(tunnel);
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, 15, 1, -1, -1);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, 15, -1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, 1, -1, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1));
tb_tunnel_free(tunnel);
tunnel = tb_tunnel_alloc_dma(NULL, nhi, port, -1, -1, 15, 11);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 11));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, -1));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, 11));
KUNIT_ASSERT_TRUE(test, tb_tunnel_match_dma(tunnel, -1, -1, -1, -1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 15, 1));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, -1, -1, 10, 11));
KUNIT_ASSERT_FALSE(test, tb_tunnel_match_dma(tunnel, 15, 11, -1, -1));
tb_tunnel_free(tunnel);
}
static const u32 root_directory[] = {
0x55584401, /* "UXD" v1 */
0x00000018, /* Root directory length */
0x76656e64, /* "vend" */
0x6f726964, /* "orid" */
0x76000001, /* "v" R 1 */
0x00000a27, /* Immediate value, ! Vendor ID */
0x76656e64, /* "vend" */
0x6f726964, /* "orid" */
0x74000003, /* "t" R 3 */
0x0000001a, /* Text leaf offset, ("Apple Inc.") */
0x64657669, /* "devi" */
0x63656964, /* "ceid" */
0x76000001, /* "v" R 1 */
0x0000000a, /* Immediate value, ! Device ID */
0x64657669, /* "devi" */
0x63656964, /* "ceid" */
0x74000003, /* "t" R 3 */
0x0000001d, /* Text leaf offset, ("Macintosh") */
0x64657669, /* "devi" */
0x63657276, /* "cerv" */
0x76000001, /* "v" R 1 */
0x80000100, /* Immediate value, Device Revision */
0x6e657477, /* "netw" */
0x6f726b00, /* "ork" */
0x44000014, /* "D" R 20 */
0x00000021, /* Directory data offset, (Network Directory) */
0x4170706c, /* "Appl" */
0x6520496e, /* "e In" */
0x632e0000, /* "c." ! */
0x4d616369, /* "Maci" */
0x6e746f73, /* "ntos" */
0x68000000, /* "h" */
0x00000000, /* padding */
0xca8961c6, /* Directory UUID, Network Directory */
0x9541ce1c, /* Directory UUID, Network Directory */
0x5949b8bd, /* Directory UUID, Network Directory */
0x4f5a5f2e, /* Directory UUID, Network Directory */
0x70727463, /* "prtc" */
0x69640000, /* "id" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol ID */
0x70727463, /* "prtc" */
0x76657273, /* "vers" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol Version */
0x70727463, /* "prtc" */
0x72657673, /* "revs" */
0x76000001, /* "v" R 1 */
0x00000001, /* Immediate value, Network Protocol Revision */
0x70727463, /* "prtc" */
0x73746e73, /* "stns" */
0x76000001, /* "v" R 1 */
0x00000000, /* Immediate value, Network Protocol Settings */
};
static const uuid_t network_dir_uuid =
UUID_INIT(0xc66189ca, 0x1cce, 0x4195,
0xbd, 0xb8, 0x49, 0x59, 0x2e, 0x5f, 0x5a, 0x4f);
static void tb_test_property_parse(struct kunit *test)
{
struct tb_property_dir *dir, *network_dir;
struct tb_property *p;
dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, dir != NULL);
p = tb_property_find(dir, "foo", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, !p);
p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_STREQ(test, p->value.text, "Apple Inc.");
p = tb_property_find(dir, "vendorid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa27);
p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_TEXT);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_STREQ(test, p->value.text, "Macintosh");
p = tb_property_find(dir, "deviceid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0xa);
p = tb_property_find(dir, "missing", TB_PROPERTY_TYPE_DIRECTORY);
KUNIT_ASSERT_TRUE(test, !p);
p = tb_property_find(dir, "network", TB_PROPERTY_TYPE_DIRECTORY);
KUNIT_ASSERT_TRUE(test, p != NULL);
network_dir = p->value.dir;
KUNIT_EXPECT_TRUE(test, uuid_equal(network_dir->uuid, &network_dir_uuid));
p = tb_property_find(network_dir, "prtcid", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcvers", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcrevs", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x1);
p = tb_property_find(network_dir, "prtcstns", TB_PROPERTY_TYPE_VALUE);
KUNIT_ASSERT_TRUE(test, p != NULL);
KUNIT_EXPECT_EQ(test, p->value.immediate, (u32)0x0);
p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_VALUE);
KUNIT_EXPECT_TRUE(test, !p);
p = tb_property_find(network_dir, "deviceid", TB_PROPERTY_TYPE_TEXT);
KUNIT_EXPECT_TRUE(test, !p);
tb_property_free_dir(dir);
}
static void tb_test_property_format(struct kunit *test)
{
struct tb_property_dir *dir;
ssize_t block_len;
u32 *block;
int ret, i;
dir = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, dir != NULL);
ret = tb_property_format_dir(dir, NULL, 0);
KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory));
block_len = ret;
block = kunit_kzalloc(test, block_len * sizeof(u32), GFP_KERNEL);
KUNIT_ASSERT_TRUE(test, block != NULL);
ret = tb_property_format_dir(dir, block, block_len);
KUNIT_EXPECT_EQ(test, ret, 0);
for (i = 0; i < ARRAY_SIZE(root_directory); i++)
KUNIT_EXPECT_EQ(test, root_directory[i], block[i]);
tb_property_free_dir(dir);
}
static void compare_dirs(struct kunit *test, struct tb_property_dir *d1,
struct tb_property_dir *d2)
{
struct tb_property *p1, *p2, *tmp;
int n1, n2, i;
if (d1->uuid) {
KUNIT_ASSERT_TRUE(test, d2->uuid != NULL);
KUNIT_ASSERT_TRUE(test, uuid_equal(d1->uuid, d2->uuid));
} else {
KUNIT_ASSERT_TRUE(test, d2->uuid == NULL);
}
n1 = 0;
tb_property_for_each(d1, tmp)
n1++;
KUNIT_ASSERT_NE(test, n1, 0);
n2 = 0;
tb_property_for_each(d2, tmp)
n2++;
KUNIT_ASSERT_NE(test, n2, 0);
KUNIT_ASSERT_EQ(test, n1, n2);
p1 = NULL;
p2 = NULL;
for (i = 0; i < n1; i++) {
p1 = tb_property_get_next(d1, p1);
KUNIT_ASSERT_TRUE(test, p1 != NULL);
p2 = tb_property_get_next(d2, p2);
KUNIT_ASSERT_TRUE(test, p2 != NULL);
KUNIT_ASSERT_STREQ(test, &p1->key[0], &p2->key[0]);
KUNIT_ASSERT_EQ(test, p1->type, p2->type);
KUNIT_ASSERT_EQ(test, p1->length, p2->length);
switch (p1->type) {
case TB_PROPERTY_TYPE_DIRECTORY:
KUNIT_ASSERT_TRUE(test, p1->value.dir != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.dir != NULL);
compare_dirs(test, p1->value.dir, p2->value.dir);
break;
case TB_PROPERTY_TYPE_DATA:
KUNIT_ASSERT_TRUE(test, p1->value.data != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.data != NULL);
KUNIT_ASSERT_TRUE(test,
!memcmp(p1->value.data, p2->value.data,
p1->length * 4)
);
break;
case TB_PROPERTY_TYPE_TEXT:
KUNIT_ASSERT_TRUE(test, p1->value.text != NULL);
KUNIT_ASSERT_TRUE(test, p2->value.text != NULL);
KUNIT_ASSERT_STREQ(test, p1->value.text, p2->value.text);
break;
case TB_PROPERTY_TYPE_VALUE:
KUNIT_ASSERT_EQ(test, p1->value.immediate,
p2->value.immediate);
break;
default:
KUNIT_FAIL(test, "unexpected property type");
break;
}
}
}
static void tb_test_property_copy(struct kunit *test)
{
struct tb_property_dir *src, *dst;
u32 *block;
int ret, i;
src = tb_property_parse_dir(root_directory, ARRAY_SIZE(root_directory));
KUNIT_ASSERT_TRUE(test, src != NULL);
dst = tb_property_copy_dir(src);
KUNIT_ASSERT_TRUE(test, dst != NULL);
/* Compare the structures */
compare_dirs(test, src, dst);
/* Compare the resulting property block */
ret = tb_property_format_dir(dst, NULL, 0);
KUNIT_ASSERT_EQ(test, ret, (int)ARRAY_SIZE(root_directory));
block = kunit_kzalloc(test, sizeof(root_directory), GFP_KERNEL);
KUNIT_ASSERT_TRUE(test, block != NULL);
ret = tb_property_format_dir(dst, block, ARRAY_SIZE(root_directory));
KUNIT_EXPECT_TRUE(test, !ret);
for (i = 0; i < ARRAY_SIZE(root_directory); i++)
KUNIT_EXPECT_EQ(test, root_directory[i], block[i]);
tb_property_free_dir(dst);
tb_property_free_dir(src);
}
static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_path_basic),
KUNIT_CASE(tb_test_path_not_connected_walk),
......@@ -1616,6 +2100,14 @@ static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_tunnel_dp_max_length),
KUNIT_CASE(tb_test_tunnel_port_on_path),
KUNIT_CASE(tb_test_tunnel_usb3),
KUNIT_CASE(tb_test_tunnel_dma),
KUNIT_CASE(tb_test_tunnel_dma_rx),
KUNIT_CASE(tb_test_tunnel_dma_tx),
KUNIT_CASE(tb_test_tunnel_dma_chain),
KUNIT_CASE(tb_test_tunnel_dma_match),
KUNIT_CASE(tb_test_property_parse),
KUNIT_CASE(tb_test_property_format),
KUNIT_CASE(tb_test_property_copy),
{ }
};
......
......@@ -794,24 +794,14 @@ static u32 tb_dma_credits(struct tb_port *nhi)
return min(max_credits, 13U);
}
static int tb_dma_activate(struct tb_tunnel *tunnel, bool active)
{
struct tb_port *nhi = tunnel->src_port;
u32 credits;
credits = active ? tb_dma_credits(nhi) : 0;
return tb_port_set_initial_credits(nhi, credits);
}
static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
unsigned int efc, u32 credits)
static void tb_dma_init_path(struct tb_path *path, unsigned int efc, u32 credits)
{
int i;
path->egress_fc_enable = efc;
path->ingress_fc_enable = TB_PATH_ALL;
path->egress_shared_buffer = TB_PATH_NONE;
path->ingress_shared_buffer = isb;
path->ingress_shared_buffer = TB_PATH_NONE;
path->priority = 5;
path->weight = 1;
path->clear_fc = true;
......@@ -825,28 +815,28 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int isb,
* @tb: Pointer to the domain structure
* @nhi: Host controller port
* @dst: Destination null port which the other domain is connected to
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Set to %0 if TX path is not needed.
* @transmit_path: HopID used for transmitting packets
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Set to %0 if RX path is not needed.
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Set to %-1 if TX path is not needed.
* @receive_path: HopID used for receiving packets
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Set to %-1 if RX path is not needed.
*
* Return: Returns a tb_tunnel on success or NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
int receive_path)
struct tb_port *dst, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring)
{
struct tb_tunnel *tunnel;
size_t npaths = 0, i = 0;
struct tb_path *path;
u32 credits;
if (receive_ring)
if (receive_ring > 0)
npaths++;
if (transmit_ring)
if (transmit_ring > 0)
npaths++;
if (WARN_ON(!npaths))
......@@ -856,38 +846,96 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
if (!tunnel)
return NULL;
tunnel->activate = tb_dma_activate;
tunnel->src_port = nhi;
tunnel->dst_port = dst;
credits = tb_dma_credits(nhi);
if (receive_ring) {
if (receive_ring > 0) {
path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0,
"DMA RX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL,
credits);
tb_dma_init_path(path, TB_PATH_SOURCE | TB_PATH_INTERNAL, credits);
tunnel->paths[i++] = path;
}
if (transmit_ring) {
if (transmit_ring > 0) {
path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0,
"DMA TX");
if (!path) {
tb_tunnel_free(tunnel);
return NULL;
}
tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits);
tb_dma_init_path(path, TB_PATH_ALL, credits);
tunnel->paths[i++] = path;
}
return tunnel;
}
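As a quick illustration of the kerneldoc above, a minimal sketch (not part of this patch) of a transmit-only allocation; the HopID/ring values mirror the tb_test_tunnel_dma_tx KUnit case earlier in this series:

/*
 * Sketch only: allocate a TX-only DMA tunnel. Passing -1 for both
 * receive parameters skips the RX path, so tunnel->npaths == 1.
 */
static struct tb_tunnel *example_dma_tx_tunnel(struct tb *tb,
					       struct tb_port *nhi,
					       struct tb_port *dst)
{
	/* transmit_path = 15 (HopID), transmit_ring = 2, no RX side */
	return tb_tunnel_alloc_dma(tb, nhi, dst, 15, 2, -1, -1);
}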
/**
* tb_tunnel_match_dma() - Match DMA tunnel
* @tunnel: Tunnel to match
* @transmit_path: HopID used for transmitting packets. Pass %-1 to ignore.
* @transmit_ring: NHI ring number used to send packets towards the
* other domain. Pass %-1 to ignore.
* @receive_path: HopID used for receiving packets. Pass %-1 to ignore.
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Pass %-1 to ignore.
*
* This function can be used to match a specific DMA tunnel, if there are
* multiple DMA tunnels going through the same XDomain connection.
* Returns true if there is a match and false otherwise.
*/
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring)
{
const struct tb_path *tx_path = NULL, *rx_path = NULL;
int i;
if (!receive_ring || !transmit_ring)
return false;
for (i = 0; i < tunnel->npaths; i++) {
const struct tb_path *path = tunnel->paths[i];
if (!path)
continue;
if (tb_port_is_nhi(path->hops[0].in_port))
tx_path = path;
else if (tb_port_is_nhi(path->hops[path->path_length - 1].out_port))
rx_path = path;
}
if (transmit_ring > 0 || transmit_path > 0) {
if (!tx_path)
return false;
if (transmit_ring > 0 &&
(tx_path->hops[0].in_hop_index != transmit_ring))
return false;
if (transmit_path > 0 &&
(tx_path->hops[tx_path->path_length - 1].next_hop_index != transmit_path))
return false;
}
if (receive_ring > 0 || receive_path > 0) {
if (!rx_path)
return false;
if (receive_path > 0 &&
(rx_path->hops[0].in_hop_index != receive_path))
return false;
if (receive_ring > 0 &&
(rx_path->hops[rx_path->path_length - 1].next_hop_index != receive_ring))
return false;
}
return true;
}
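A hedged usage note: every parameter accepts -1 as a wildcard, so a caller that only knows the receive ring can still locate its tunnel, as the KUnit cases above exercise:

/* Sketch only: tear down whichever DMA tunnel uses receive ring 1. */
if (tb_tunnel_match_dma(tunnel, -1, -1, -1, 1))
	tb_tunnel_free(tunnel);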
static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
{
int ret, up_max_rate, down_max_rate;
......
......@@ -70,9 +70,11 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int max_up,
int max_down);
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring,
int receive_path);
struct tb_port *dst, int transmit_path,
int transmit_ring, int receive_path,
int receive_ring);
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down, int max_up,
......
(This diff has been collapsed.)
......@@ -59,6 +59,7 @@
#include <linux/dma-mapping.h>
#include <linux/usb/gadget.h>
#include <linux/module.h>
#include <linux/dmapool.h>
#include <linux/iopoll.h>
#include "core.h"
......@@ -190,29 +191,13 @@ dma_addr_t cdns3_trb_virt_to_dma(struct cdns3_endpoint *priv_ep,
return priv_ep->trb_pool_dma + offset;
}
static int cdns3_ring_size(struct cdns3_endpoint *priv_ep)
{
switch (priv_ep->type) {
case USB_ENDPOINT_XFER_ISOC:
return TRB_ISO_RING_SIZE;
case USB_ENDPOINT_XFER_CONTROL:
return TRB_CTRL_RING_SIZE;
default:
if (priv_ep->use_streams)
return TRB_STREAM_RING_SIZE;
else
return TRB_RING_SIZE;
}
}
static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
if (priv_ep->trb_pool) {
dma_free_coherent(priv_dev->sysdev,
cdns3_ring_size(priv_ep),
priv_ep->trb_pool, priv_ep->trb_pool_dma);
dma_pool_free(priv_dev->eps_dma_pool,
priv_ep->trb_pool, priv_ep->trb_pool_dma);
priv_ep->trb_pool = NULL;
}
}
......@@ -226,7 +211,7 @@ static void cdns3_free_trb_pool(struct cdns3_endpoint *priv_ep)
int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
{
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
int ring_size = cdns3_ring_size(priv_ep);
int ring_size = TRB_RING_SIZE;
int num_trbs = ring_size / TRB_SIZE;
struct cdns3_trb *link_trb;
......@@ -234,10 +219,10 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
cdns3_free_trb_pool(priv_ep);
if (!priv_ep->trb_pool) {
priv_ep->trb_pool = dma_alloc_coherent(priv_dev->sysdev,
ring_size,
&priv_ep->trb_pool_dma,
GFP_DMA32 | GFP_ATOMIC);
priv_ep->trb_pool = dma_pool_alloc(priv_dev->eps_dma_pool,
GFP_DMA32 | GFP_ATOMIC,
&priv_ep->trb_pool_dma);
if (!priv_ep->trb_pool)
return -ENOMEM;
......@@ -834,9 +819,15 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
priv_ep->dir);
if ((priv_req->flags & REQUEST_UNALIGNED) &&
priv_ep->dir == USB_DIR_OUT && !request->status)
priv_ep->dir == USB_DIR_OUT && !request->status) {
/* Make DMA buffer CPU accessible */
dma_sync_single_for_cpu(priv_dev->sysdev,
priv_req->aligned_buf->dma,
priv_req->aligned_buf->size,
priv_req->aligned_buf->dir);
memcpy(request->buf, priv_req->aligned_buf->buf,
request->length);
}
priv_req->flags &= ~(REQUEST_PENDING | REQUEST_UNALIGNED);
/* All TRBs have finished, clear the counter */
......@@ -898,8 +889,8 @@ static void cdns3_free_aligned_request_buf(struct work_struct *work)
* interrupts.
*/
spin_unlock_irqrestore(&priv_dev->lock, flags);
dma_free_coherent(priv_dev->sysdev, buf->size,
buf->buf, buf->dma);
dma_free_noncoherent(priv_dev->sysdev, buf->size,
buf->buf, buf->dma, buf->dir);
kfree(buf);
spin_lock_irqsave(&priv_dev->lock, flags);
}
......@@ -926,10 +917,13 @@ static int cdns3_prepare_aligned_request_buf(struct cdns3_request *priv_req)
return -ENOMEM;
buf->size = priv_req->request.length;
buf->dir = usb_endpoint_dir_in(priv_ep->endpoint.desc) ?
DMA_TO_DEVICE : DMA_FROM_DEVICE;
buf->buf = dma_alloc_coherent(priv_dev->sysdev,
buf->buf = dma_alloc_noncoherent(priv_dev->sysdev,
buf->size,
&buf->dma,
buf->dir,
GFP_ATOMIC);
if (!buf->buf) {
kfree(buf);
......@@ -951,10 +945,17 @@ static int cdns3_prepare_aligned_request_buf(struct cdns3_request *priv_req)
}
if (priv_ep->dir == USB_DIR_IN) {
/* Make DMA buffer CPU accessible */
dma_sync_single_for_cpu(priv_dev->sysdev,
buf->dma, buf->size, buf->dir);
memcpy(buf->buf, priv_req->request.buf,
priv_req->request.length);
}
/* Transfer DMA buffer ownership back to device */
dma_sync_single_for_device(priv_dev->sysdev,
buf->dma, buf->size, buf->dir);
priv_req->flags |= REQUEST_UNALIGNED;
trace_cdns3_prepare_aligned_request(priv_req);
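The two sync calls above implement the standard ownership handshake for noncoherent DMA buffers; here is a self-contained sketch of the same pattern (the names and the DMA_TO_DEVICE direction are illustrative, not taken from this driver):

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Sketch only: fill a noncoherent DMA buffer and hand it to the device. */
static void *example_fill_dma_buf(struct device *dev, size_t size,
				  dma_addr_t *dma, const void *src)
{
	void *buf = dma_alloc_noncoherent(dev, size, dma, DMA_TO_DEVICE,
					  GFP_ATOMIC);
	if (!buf)
		return NULL;

	/*
	 * Claim the buffer for the CPU. A freshly allocated buffer is
	 * already CPU-owned; the driver above syncs anyway because its
	 * aligned buffers are recycled across requests.
	 */
	dma_sync_single_for_cpu(dev, *dma, size, DMA_TO_DEVICE);
	memcpy(buf, src, size);

	/* Transfer ownership back to the device before DMA starts. */
	dma_sync_single_for_device(dev, *dma, size, DMA_TO_DEVICE);
	return buf;
}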
......@@ -3103,9 +3104,10 @@ static void cdns3_gadget_exit(struct cdns *cdns)
struct cdns3_aligned_buf *buf;
buf = cdns3_next_align_buf(&priv_dev->aligned_buf_list);
dma_free_coherent(priv_dev->sysdev, buf->size,
dma_free_noncoherent(priv_dev->sysdev, buf->size,
buf->buf,
buf->dma);
buf->dma,
buf->dir);
list_del(&buf->list);
kfree(buf);
......@@ -3113,6 +3115,7 @@ static void cdns3_gadget_exit(struct cdns *cdns)
dma_free_coherent(priv_dev->sysdev, 8, priv_dev->setup_buf,
priv_dev->setup_dma);
dma_pool_destroy(priv_dev->eps_dma_pool);
kfree(priv_dev->zlp_buf);
usb_put_gadget(&priv_dev->gadget);
......@@ -3185,6 +3188,14 @@ static int cdns3_gadget_start(struct cdns *cdns)
/* initialize endpoint container */
INIT_LIST_HEAD(&priv_dev->gadget.ep_list);
INIT_LIST_HEAD(&priv_dev->aligned_buf_list);
priv_dev->eps_dma_pool = dma_pool_create("cdns3_eps_dma_pool",
priv_dev->sysdev,
TRB_RING_SIZE, 8, 0);
if (!priv_dev->eps_dma_pool) {
dev_err(priv_dev->dev, "Failed to create TRB dma pool\n");
ret = -ENOMEM;
goto err1;
}
ret = cdns3_init_eps(priv_dev);
if (ret) {
......@@ -3235,6 +3246,8 @@ static int cdns3_gadget_start(struct cdns *cdns)
err2:
cdns3_free_all_eps(priv_dev);
err1:
dma_pool_destroy(priv_dev->eps_dma_pool);
usb_put_gadget(&priv_dev->gadget);
cdns->gadget_dev = NULL;
return ret;
......@@ -3304,6 +3317,8 @@ static int cdns3_gadget_resume(struct cdns *cdns, bool hibernated)
return 0;
cdns3_gadget_config(priv_dev);
if (hibernated)
writel(USB_CONF_DEVEN, &priv_dev->regs->usb_conf);
return 0;
}
......
......@@ -12,6 +12,7 @@
#ifndef __LINUX_CDNS3_GADGET
#define __LINUX_CDNS3_GADGET
#include <linux/usb/gadget.h>
#include <linux/dma-direction.h>
/*
* USBSS-DEV register interface.
......@@ -1205,6 +1206,7 @@ struct cdns3_aligned_buf {
void *buf;
dma_addr_t dma;
u32 size;
enum dma_data_direction dir;
unsigned in_use:1;
struct list_head list;
};
......@@ -1298,6 +1300,7 @@ struct cdns3_device {
struct cdns3_usb_regs __iomem *regs;
struct dma_pool *eps_dma_pool;
struct usb_ctrlrequest *setup_buf;
dma_addr_t setup_dma;
void *zlp_buf;
......
......@@ -361,6 +361,39 @@ static int cdns_imx_suspend(struct device *dev)
return 0;
}
/* Indicate whether the controller lost power earlier */
static inline bool cdns_imx_is_power_lost(struct cdns_imx *data)
{
u32 value;
value = cdns_imx_readl(data, USB3_CORE_CTRL1);
if ((value & SW_RESET_MASK) == ALL_SW_RESET)
return true;
else
return false;
}
static int __maybe_unused cdns_imx_system_resume(struct device *dev)
{
struct cdns_imx *data = dev_get_drvdata(dev);
int ret;
ret = cdns_imx_resume(dev);
if (ret)
return ret;
if (cdns_imx_is_power_lost(data)) {
dev_dbg(dev, "resume from power lost\n");
ret = cdns_imx_noncore_init(data);
if (ret)
cdns_imx_suspend(dev);
}
return ret;
}
#else
static int cdns_imx_platform_suspend(struct device *dev,
bool suspend, bool wakeup)
......@@ -372,6 +405,7 @@ static int cdns_imx_platform_suspend(struct device *dev,
static const struct dev_pm_ops cdns_imx_pm_ops = {
SET_RUNTIME_PM_OPS(cdns_imx_suspend, cdns_imx_resume, NULL)
SET_SYSTEM_SLEEP_PM_OPS(cdns_imx_suspend, cdns_imx_system_resume)
};
static const struct of_device_id cdns_imx_of_match[] = {
......
......@@ -19,6 +19,7 @@
#include "core.h"
#include "gadget-export.h"
#include "drd.h"
static int set_phy_power_on(struct cdns *cdns)
{
......@@ -236,6 +237,18 @@ static int cdns3_controller_resume(struct device *dev, pm_message_t msg)
if (!cdns->in_lpm)
return 0;
if (cdns_power_is_lost(cdns)) {
phy_exit(cdns->usb2_phy);
ret = phy_init(cdns->usb2_phy);
if (ret)
return ret;
phy_exit(cdns->usb3_phy);
ret = phy_init(cdns->usb3_phy);
if (ret)
return ret;
}
ret = set_phy_power_on(cdns);
if (ret)
return ret;
......@@ -270,10 +283,18 @@ static int cdns3_plat_runtime_resume(struct device *dev)
static int cdns3_plat_suspend(struct device *dev)
{
struct cdns *cdns = dev_get_drvdata(dev);
int ret;
cdns_suspend(cdns);
return cdns3_controller_suspend(dev, PMSG_SUSPEND);
ret = cdns3_controller_suspend(dev, PMSG_SUSPEND);
if (ret)
return ret;
if (device_may_wakeup(dev) && cdns->wakeup_irq)
enable_irq_wake(cdns->wakeup_irq);
return ret;
}
static int cdns3_plat_resume(struct device *dev)
......
......@@ -214,7 +214,6 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__field(int, no_interrupt)
__field(int, start_trb)
__field(int, end_trb)
__field(struct cdns3_trb *, start_trb_addr)
__field(int, flags)
__field(unsigned int, stream_id)
),
......@@ -230,12 +229,11 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->no_interrupt = req->request.no_interrupt;
__entry->start_trb = req->start_trb;
__entry->end_trb = req->end_trb;
__entry->start_trb_addr = req->trb;
__entry->flags = req->flags;
__entry->stream_id = req->request.stream_id;
),
TP_printk("%s: req: %p, req buff %p, length: %u/%u %s%s%s, status: %d,"
" trb: [start:%d, end:%d: virt addr %pa], flags:%x SID: %u",
" trb: [start:%d, end:%d], flags:%x SID: %u",
__get_str(name), __entry->req, __entry->buf, __entry->actual,
__entry->length,
__entry->zero ? "Z" : "z",
......@@ -244,7 +242,6 @@ DECLARE_EVENT_CLASS(cdns3_log_request,
__entry->status,
__entry->start_trb,
__entry->end_trb,
__entry->start_trb_addr,
__entry->flags,
__entry->stream_id
)
......
......@@ -727,7 +727,7 @@ int cdnsp_reset_device(struct cdnsp_device *pdev)
* are in Disabled state.
*/
for (i = 1; i < CDNSP_ENDPOINTS_NUM; ++i)
pdev->eps[i].ep_state |= EP_STOPPED;
pdev->eps[i].ep_state |= EP_STOPPED | EP_UNCONFIGURED;
trace_cdnsp_handle_cmd_reset_dev(slot_ctx);
......@@ -942,6 +942,7 @@ static int cdnsp_gadget_ep_enable(struct usb_ep *ep,
pep = to_cdnsp_ep(ep);
pdev = pep->pdev;
pep->ep_state &= ~EP_UNCONFIGURED;
if (dev_WARN_ONCE(pdev->dev, pep->ep_state & EP_ENABLED,
"%s is already enabled\n", pep->name))
......@@ -1023,9 +1024,13 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
goto finish;
}
cdnsp_cmd_stop_ep(pdev, pep);
pep->ep_state |= EP_DIS_IN_RROGRESS;
cdnsp_cmd_flush_ep(pdev, pep);
/* Endpoint was unconfigured by Reset Device command. */
if (!(pep->ep_state & EP_UNCONFIGURED)) {
cdnsp_cmd_stop_ep(pdev, pep);
cdnsp_cmd_flush_ep(pdev, pep);
}
/* Remove all queued USB requests. */
while (!list_empty(&pep->pending_list)) {
......@@ -1043,10 +1048,12 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
cdnsp_endpoint_zero(pdev, pep);
ret = cdnsp_update_eps_configuration(pdev, pep);
if (!(pep->ep_state & EP_UNCONFIGURED))
ret = cdnsp_update_eps_configuration(pdev, pep);
cdnsp_free_endpoint_rings(pdev, pep);
pep->ep_state &= ~EP_ENABLED;
pep->ep_state &= ~(EP_ENABLED | EP_UNCONFIGURED);
pep->ep_state |= EP_STOPPED;
finish:
......
......@@ -835,6 +835,7 @@ struct cdnsp_ep {
#define EP_WEDGE BIT(4)
#define EP0_HALTED_STATUS BIT(5)
#define EP_HAS_STREAMS BIT(6)
#define EP_UNCONFIGURED BIT(7)
bool skip;
};
......
......@@ -686,7 +686,7 @@ static void cdnsp_free_priv_device(struct cdnsp_device *pdev)
static int cdnsp_alloc_priv_device(struct cdnsp_device *pdev)
{
int ret = -ENOMEM;
int ret;
ret = cdnsp_init_device_ctx(pdev);
if (ret)
......@@ -1231,7 +1231,6 @@ int cdnsp_mem_init(struct cdnsp_device *pdev)
if (!pdev->dcbaa)
return -ENOMEM;
memset(pdev->dcbaa, 0, sizeof(*pdev->dcbaa));
pdev->dcbaa->dma = dma;
cdnsp_write_64(dma, &pdev->op_regs->dcbaa_ptr);
......
......@@ -525,9 +525,36 @@ EXPORT_SYMBOL_GPL(cdns_suspend);
int cdns_resume(struct cdns *cdns, u8 set_active)
{
struct device *dev = cdns->dev;
enum usb_role real_role;
bool role_changed = false;
int ret = 0;
if (cdns_power_is_lost(cdns)) {
if (cdns->role_sw) {
cdns->role = cdns_role_get(cdns->role_sw);
} else {
real_role = cdns_hw_role_state_machine(cdns);
if (real_role != cdns->role) {
ret = cdns_hw_role_switch(cdns);
if (ret)
return ret;
role_changed = true;
}
}
if (!role_changed) {
if (cdns->role == USB_ROLE_HOST)
ret = cdns_drd_host_on(cdns);
else if (cdns->role == USB_ROLE_DEVICE)
ret = cdns_drd_gadget_on(cdns);
if (ret)
return ret;
}
}
if (cdns->roles[cdns->role]->resume)
cdns->roles[cdns->role]->resume(cdns, false);
cdns->roles[cdns->role]->resume(cdns, cdns_power_is_lost(cdns));
if (set_active) {
pm_runtime_disable(dev);
......
......@@ -478,3 +478,18 @@ int cdns_drd_exit(struct cdns *cdns)
return 0;
}
/* Indicate whether the cdns3 core lost power earlier */
bool cdns_power_is_lost(struct cdns *cdns)
{
if (cdns->version == CDNS3_CONTROLLER_V1) {
if (!(readl(&cdns->otg_v1_regs->simulate) & BIT(0)))
return true;
} else {
if (!(readl(&cdns->otg_v0_regs->simulate) & BIT(0)))
return true;
}
return false;
}
EXPORT_SYMBOL_GPL(cdns_power_is_lost);
......@@ -215,5 +215,5 @@ int cdns_drd_gadget_on(struct cdns *cdns);
void cdns_drd_gadget_off(struct cdns *cdns);
int cdns_drd_host_on(struct cdns *cdns);
void cdns_drd_host_off(struct cdns *cdns);
bool cdns_power_is_lost(struct cdns *cdns);
#endif /* __LINUX_CDNS3_DRD */
......@@ -285,11 +285,9 @@ static int tegra_usb_probe(struct platform_device *pdev)
}
usb->phy = devm_usb_get_phy_by_phandle(&pdev->dev, "nvidia,phy", 0);
if (IS_ERR(usb->phy)) {
err = PTR_ERR(usb->phy);
dev_err(&pdev->dev, "failed to get PHY: %d\n", err);
return err;
}
if (IS_ERR(usb->phy))
return dev_err_probe(&pdev->dev, PTR_ERR(usb->phy),
"failed to get PHY\n");
usb->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(usb->clk)) {
......
......@@ -32,7 +32,7 @@ struct ehci_ci_priv {
struct ci_hdrc_dma_aligned_buffer {
void *kmalloc_ptr;
void *old_xfer_buffer;
u8 data[0];
u8 data[];
};
static int ehci_ci_portpower(struct usb_hcd *hcd, int portnum, bool enable)
......
......@@ -929,8 +929,7 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
ss->xmit_fifo_size = acm->writesize;
ss->baud_base = le32_to_cpu(acm->line.dwDTERate);
ss->line = acm->minor;
ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
......@@ -942,7 +941,6 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
{
struct acm *acm = tty->driver_data;
unsigned int closing_wait, close_delay;
unsigned int old_closing_wait, old_close_delay;
int retval = 0;
close_delay = msecs_to_jiffies(ss->close_delay * 10);
......@@ -950,20 +948,12 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
ASYNC_CLOSING_WAIT_NONE :
msecs_to_jiffies(ss->closing_wait * 10);
/* we must redo the rounding here, so that the values match */
old_close_delay = jiffies_to_msecs(acm->port.close_delay) / 10;
old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
ASYNC_CLOSING_WAIT_NONE :
jiffies_to_msecs(acm->port.closing_wait) / 10;
mutex_lock(&acm->port.mutex);
if (!capable(CAP_SYS_ADMIN)) {
if ((ss->close_delay != old_close_delay) ||
(ss->closing_wait != old_closing_wait))
if ((close_delay != acm->port.close_delay) ||
(closing_wait != acm->port.closing_wait))
retval = -EPERM;
else
retval = -EOPNOTSUPP;
} else {
acm->port.close_delay = close_delay;
acm->port.closing_wait = closing_wait;
......@@ -1634,12 +1624,13 @@ static int acm_resume(struct usb_interface *intf)
struct urb *urb;
int rv = 0;
acm_unpoison_urbs(acm);
spin_lock_irq(&acm->write_lock);
if (--acm->susp_count)
goto out;
acm_unpoison_urbs(acm);
if (tty_port_initialized(&acm->port)) {
rv = usb_submit_urb(acm->ctrlurb, GFP_ATOMIC);
......@@ -1922,9 +1913,17 @@ static const struct usb_device_id acm_ids[] = {
#endif
#if IS_ENABLED(CONFIG_USB_SERIAL_XR)
{ USB_DEVICE(0x04e2, 0x1410), /* Ignore XR21V141X USB to Serial converter */
.driver_info = IGNORE_DEVICE,
},
{ USB_DEVICE(0x04e2, 0x1400), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1401), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1402), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1403), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1410), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1411), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1412), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1414), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1420), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1422), .driver_info = IGNORE_DEVICE },
{ USB_DEVICE(0x04e2, 0x1424), .driver_info = IGNORE_DEVICE },
#endif
/*Samsung phone in firmware update mode */
......
......@@ -25,6 +25,12 @@ static const char *const ep_type_names[] = {
[USB_ENDPOINT_XFER_INT] = "intr",
};
/**
* usb_ep_type_string() - Returns human-readable name of the endpoint type.
* @ep_type: The endpoint type to return human-readable name for. If it's not
* any of the types: USB_ENDPOINT_XFER_{CONTROL, ISOC, BULK, INT},
* usually obtained from usb_endpoint_type(), the string 'unknown' will be
* returned.
*/
const char *usb_ep_type_string(int ep_type)
{
if (ep_type < 0 || ep_type >= ARRAY_SIZE(ep_type_names))
......@@ -76,6 +82,12 @@ static const char *const ssp_rate[] = {
[USB_SSP_GEN_2x2] = "super-speed-plus-gen2x2",
};
/**
* usb_speed_string() - Returns human-readable name of the speed.
* @speed: The speed to return human-readable name for. If it's not
* any of the speeds defined in the usb_device_speed enum, the string for
* USB_SPEED_UNKNOWN will be returned.
*/
const char *usb_speed_string(enum usb_device_speed speed)
{
if (speed < 0 || speed >= ARRAY_SIZE(speed_names))
......@@ -84,6 +96,14 @@ const char *usb_speed_string(enum usb_device_speed speed)
}
EXPORT_SYMBOL_GPL(usb_speed_string);
/**
* usb_get_maximum_speed - Get maximum requested speed for a given USB
* controller.
* @dev: Pointer to the given USB controller device
*
* The function gets the maximum speed string from property "maximum-speed",
* and returns the corresponding enum usb_device_speed.
*/
enum usb_device_speed usb_get_maximum_speed(struct device *dev)
{
const char *maximum_speed;
......@@ -102,6 +122,15 @@ enum usb_device_speed usb_get_maximum_speed(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_maximum_speed);
/**
* usb_get_maximum_ssp_rate - Get the signaling rate generation and lane count
* of a SuperSpeed Plus capable device.
* @dev: Pointer to the given USB controller device
*
* If the string from "maximum-speed" property is super-speed-plus-genXxY where
* 'X' is the generation number and 'Y' is the number of lanes, then this
* function returns the corresponding enum usb_ssp_rate.
*/
enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev)
{
const char *maximum_speed;
......@@ -116,6 +145,12 @@ enum usb_ssp_rate usb_get_maximum_ssp_rate(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_maximum_ssp_rate);
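A hedged sketch of consuming the two property helpers together in a probe path (the function name and dev_dbg wording are illustrative, not from this patch):

/* Sketch only: read firmware-described speed limits for a controller. */
static void example_read_speed_caps(struct device *dev)
{
	enum usb_device_speed max_speed = usb_get_maximum_speed(dev);
	enum usb_ssp_rate ssp_rate = usb_get_maximum_ssp_rate(dev);

	dev_dbg(dev, "maximum speed %s (ssp rate %d)\n",
		usb_speed_string(max_speed), ssp_rate);
}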
/**
* usb_state_string - Returns human-readable name for the state.
* @state: The state to return a human-readable name for. If it's not
* any of the states in the usb_device_state enum,
* the string UNKNOWN will be returned.
*/
const char *usb_state_string(enum usb_device_state state)
{
static const char *const names[] = {
......@@ -165,6 +200,47 @@ enum usb_dr_mode usb_get_dr_mode(struct device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_dr_mode);
/**
* usb_decode_interval - Decode bInterval into the time expressed in 1us unit
* @epd: The descriptor of the endpoint
* @speed: The speed that the endpoint works as
*
* Function returns the interval expressed in 1us units for servicing
* the endpoint for data transfers.
*/
unsigned int usb_decode_interval(const struct usb_endpoint_descriptor *epd,
enum usb_device_speed speed)
{
unsigned int interval = 0;
switch (usb_endpoint_type(epd)) {
case USB_ENDPOINT_XFER_CONTROL:
/* uframes per NAK */
if (speed == USB_SPEED_HIGH)
interval = epd->bInterval;
break;
case USB_ENDPOINT_XFER_ISOC:
interval = 1 << (epd->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
/* uframes per NAK */
if (speed == USB_SPEED_HIGH && usb_endpoint_dir_out(epd))
interval = epd->bInterval;
break;
case USB_ENDPOINT_XFER_INT:
if (speed >= USB_SPEED_HIGH)
interval = 1 << (epd->bInterval - 1);
else
interval = epd->bInterval;
break;
}
interval *= (speed >= USB_SPEED_HIGH) ? 125 : 1000;
return interval;
}
EXPORT_SYMBOL_GPL(usb_decode_interval);
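A worked example of the decoding above, assuming a high-speed interrupt endpoint descriptor (the wrapper function is a sketch, not part of this patch):

/* Sketch only: epd is an assumed HS interrupt endpoint descriptor. */
static unsigned int example_interval_us(const struct usb_endpoint_descriptor *epd)
{
	/* With bInterval == 4: 1 << (4 - 1) = 8 uframes; 8 * 125 = 1000 us */
	return usb_decode_interval(epd, USB_SPEED_HIGH);
}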
#ifdef CONFIG_OF
/**
* of_usb_get_dr_mode_by_phy - Get dual role mode for the controller device
......
......@@ -207,8 +207,26 @@ static void usb_decode_set_isoch_delay(__u8 wValue, char *str, size_t size)
snprintf(str, size, "Set Isochronous Delay(Delay = %d ns)", wValue);
}
/*
* usb_decode_ctrl - returns a string representation of ctrl request
/**
* usb_decode_ctrl - Returns human-readable representation of control request.
* @str: buffer to return a human-readable representation of control request.
* This buffer should have about 200 bytes.
* @size: size of str buffer.
* @bRequestType: matches the USB bmRequestType field
* @bRequest: matches the USB bRequest field
* @wValue: matches the USB wValue field (CPU byte order)
* @wIndex: matches the USB wIndex field (CPU byte order)
* @wLength: matches the USB wLength field (CPU byte order)
*
* Function returns a decoded, formatted and human-readable description of the
* control request packet.
*
* The usage scenario for this is tracepoints: the function returns the same
* buffer passed in via @str, which allows it to be used directly in
* TP_printk().
*
* Important: the wValue, wIndex and wLength parameters must be converted
* with the le16_to_cpu macro before this function is invoked.
*/
const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
__u8 bRequest, __u16 wValue, __u16 wIndex,
......
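A hedged usage sketch of usb_decode_ctrl() (the ~200 byte buffer follows the guidance above; the values describe a standard GET_DESCRIPTOR(DEVICE) setup packet, already byte-swapped with le16_to_cpu()):

char buf[200];

/* Sketch only: decode a GET_DESCRIPTOR(DEVICE) control request. */
usb_decode_ctrl(buf, sizeof(buf), USB_DIR_IN, USB_REQ_GET_DESCRIPTOR,
		USB_DT_DEVICE << 8, 0, 18);
/* buf now holds a human-readable description of the request. */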
......@@ -157,38 +157,25 @@ static char *usb_dump_endpoint_descriptor(int speed, char *start, char *end,
switch (usb_endpoint_type(desc)) {
case USB_ENDPOINT_XFER_CONTROL:
type = "Ctrl";
if (speed == USB_SPEED_HIGH) /* uframes per NAK */
interval = desc->bInterval;
else
interval = 0;
dir = 'B'; /* ctrl is bidirectional */
break;
case USB_ENDPOINT_XFER_ISOC:
type = "Isoc";
interval = 1 << (desc->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
type = "Bulk";
if (speed == USB_SPEED_HIGH && dir == 'O') /* uframes per NAK */
interval = desc->bInterval;
else
interval = 0;
break;
case USB_ENDPOINT_XFER_INT:
type = "Int.";
if (speed == USB_SPEED_HIGH || speed >= USB_SPEED_SUPER)
interval = 1 << (desc->bInterval - 1);
else
interval = desc->bInterval;
break;
default: /* "can't happen" */
return start;
}
interval *= (speed == USB_SPEED_HIGH ||
speed >= USB_SPEED_SUPER) ? 125 : 1000;
if (interval % 1000)
interval = usb_decode_interval(desc, speed);
if (interval % 1000) {
unit = 'u';
else {
} else {
unit = 'm';
interval /= 1000;
}
......
......@@ -519,17 +519,13 @@ static int usb_unbind_interface(struct device *dev)
* @driver: the driver to be bound
* @iface: the interface to which it will be bound; must be in the
* usb device's active configuration
* @priv: driver data associated with that interface
* @data: driver data associated with that interface
*
* This is used by usb device drivers that need to claim more than one
* interface on a device when probing (audio and acm are current examples).
* No device driver should directly modify internal usb_interface or
* usb_device structure members.
*
* Few drivers should need to use this routine, since the most natural
* way to bind to an interface is to return the private data from
* the driver's probe() method.
*
* Callers must own the device lock, so driver probe() entries don't need
* extra locking, but other call contexts may need to explicitly claim that
* lock.
......@@ -537,7 +533,7 @@ static int usb_unbind_interface(struct device *dev)
* Return: 0 on success.
*/
int usb_driver_claim_interface(struct usb_driver *driver,
struct usb_interface *iface, void *priv)
struct usb_interface *iface, void *data)
{
struct device *dev;
int retval = 0;
......@@ -554,7 +550,7 @@ int usb_driver_claim_interface(struct usb_driver *driver,
return -ENODEV;
dev->driver = &driver->drvwrap.driver;
usb_set_intfdata(iface, priv);
usb_set_intfdata(iface, data);
iface->needs_binding = 0;
iface->condition = USB_INTERFACE_BOUND;
......
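A minimal sketch of the multi-interface claim pattern the kerneldoc above describes (example_driver and the sibling interface number 1 are assumptions, not from this patch):

/* Sketch only: claim a second interface from this driver's probe(). */
static int example_probe(struct usb_interface *intf,
			 const struct usb_device_id *id)
{
	struct usb_device *udev = interface_to_usbdev(intf);
	/* example_driver: this driver's struct usb_driver (assumed) */
	struct usb_interface *sibling = usb_ifnum_to_if(udev, 1);

	if (!sibling)
		return -ENODEV;

	/* probe() already holds the device lock, as required above. */
	return usb_driver_claim_interface(&example_driver, sibling, NULL);
}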
......@@ -84,40 +84,13 @@ static ssize_t interval_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct ep_device *ep = to_ep_device(dev);
unsigned int interval;
char unit;
unsigned interval = 0;
unsigned in;
in = (ep->desc->bEndpointAddress & USB_DIR_IN);
switch (usb_endpoint_type(ep->desc)) {
case USB_ENDPOINT_XFER_CONTROL:
if (ep->udev->speed == USB_SPEED_HIGH)
/* uframes per NAK */
interval = ep->desc->bInterval;
break;
case USB_ENDPOINT_XFER_ISOC:
interval = 1 << (ep->desc->bInterval - 1);
break;
case USB_ENDPOINT_XFER_BULK:
if (ep->udev->speed == USB_SPEED_HIGH && !in)
/* uframes per NAK */
interval = ep->desc->bInterval;
break;
case USB_ENDPOINT_XFER_INT:
if (ep->udev->speed == USB_SPEED_HIGH)
interval = 1 << (ep->desc->bInterval - 1);
else
interval = ep->desc->bInterval;
break;
}
interval *= (ep->udev->speed == USB_SPEED_HIGH) ? 125 : 1000;
if (interval % 1000)
interval = usb_decode_interval(ep->desc, ep->udev->speed);
if (interval % 1000) {
unit = 'u';
else {
} else {
unit = 'm';
interval /= 1000;
}
......
......@@ -2721,6 +2721,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
rhdev->rx_lanes = 1;
rhdev->tx_lanes = 1;
rhdev->ssp_rate = USB_SSP_GEN_UNKNOWN;
switch (hcd->speed) {
case HCD_USB11:
......@@ -2738,8 +2739,11 @@ int usb_add_hcd(struct usb_hcd *hcd,
case HCD_USB32:
rhdev->rx_lanes = 2;
rhdev->tx_lanes = 2;
fallthrough;
rhdev->ssp_rate = USB_SSP_GEN_2x2;
rhdev->speed = USB_SPEED_SUPER_PLUS;
break;
case HCD_USB31:
rhdev->ssp_rate = USB_SSP_GEN_2x1;
rhdev->speed = USB_SPEED_SUPER_PLUS;
break;
default:
......
......@@ -31,6 +31,7 @@
#include <linux/pm_qos.h>
#include <linux/kobject.h>
#include <linux/bitfield.h>
#include <linux/uaccess.h>
#include <asm/byteorder.h>
......@@ -2668,31 +2669,79 @@ int usb_authorize_device(struct usb_device *usb_dev)
return result;
}
/*
* Return 1 if port speed is SuperSpeedPlus, 0 otherwise
* check it from the link protocol field of the current speed ID attribute.
* current speed ID is got from ext port status request. Sublink speed attribute
* table is returned with the hub BOS SSP device capability descriptor
/**
* get_port_ssp_rate - Match the extended port status to SSP rate
* @hdev: The hub device
* @ext_portstatus: extended port status
*
* Match the extended port status speed id to the SuperSpeed Plus sublink speed
* capability attributes. Based on the number of connected lanes and the speed,
* return the corresponding enum usb_ssp_rate.
*/
static int port_speed_is_ssp(struct usb_device *hdev, int speed_id)
static enum usb_ssp_rate get_port_ssp_rate(struct usb_device *hdev,
u32 ext_portstatus)
{
int ssa_count;
u32 ss_attr;
int i;
struct usb_ssp_cap_descriptor *ssp_cap = hdev->bos->ssp_cap;
u32 attr;
u8 speed_id;
u8 ssac;
u8 lanes;
int i;
if (!ssp_cap)
return 0;
goto out;
ssa_count = le32_to_cpu(ssp_cap->bmAttributes) &
speed_id = ext_portstatus & USB_EXT_PORT_STAT_RX_SPEED_ID;
lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
ssac = le32_to_cpu(ssp_cap->bmAttributes) &
USB_SSP_SUBLINK_SPEED_ATTRIBS;
for (i = 0; i <= ssa_count; i++) {
ss_attr = le32_to_cpu(ssp_cap->bmSublinkSpeedAttr[i]);
if (speed_id == (ss_attr & USB_SSP_SUBLINK_SPEED_SSID))
return !!(ss_attr & USB_SSP_SUBLINK_SPEED_LP);
for (i = 0; i <= ssac; i++) {
u8 ssid;
attr = le32_to_cpu(ssp_cap->bmSublinkSpeedAttr[i]);
ssid = FIELD_GET(USB_SSP_SUBLINK_SPEED_SSID, attr);
if (speed_id == ssid) {
u16 mantissa;
u8 lse;
u8 type;
/*
* Note: currently asymmetric lane types are only
* applicable to SSIC operating in SuperSpeed protocol
*/
type = FIELD_GET(USB_SSP_SUBLINK_SPEED_ST, attr);
if (type == USB_SSP_SUBLINK_SPEED_ST_ASYM_RX ||
type == USB_SSP_SUBLINK_SPEED_ST_ASYM_TX)
goto out;
if (FIELD_GET(USB_SSP_SUBLINK_SPEED_LP, attr) !=
USB_SSP_SUBLINK_SPEED_LP_SSP)
goto out;
lse = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSE, attr);
mantissa = FIELD_GET(USB_SSP_SUBLINK_SPEED_LSM, attr);
/* Convert to Gbps */
for (; lse < USB_SSP_SUBLINK_SPEED_LSE_GBPS; lse++)
mantissa /= 1000;
if (mantissa >= 10 && lanes == 1)
return USB_SSP_GEN_2x1;
if (mantissa >= 10 && lanes == 2)
return USB_SSP_GEN_2x2;
if (mantissa >= 5 && lanes == 2)
return USB_SSP_GEN_1x2;
goto out;
}
}
return 0;
out:
return USB_SSP_GEN_UNKNOWN;
}
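To make the conversion loop above concrete with a worked example: a sublink speed attribute with LSE = Mbps and mantissa 10000 is divided down once to 10 Gbps; with two RX lanes that yields USB_SSP_GEN_2x2 (20 Gbps aggregate), which is what the sysfs speed attribute change below then reports as "20000".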
/* Returns 1 if @hub is a WUSB root hub, 0 otherwise */
......@@ -2850,15 +2899,15 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
/* extended portstatus Rx and Tx lane count are zero based */
udev->rx_lanes = USB_EXT_PORT_RX_LANES(ext_portstatus) + 1;
udev->tx_lanes = USB_EXT_PORT_TX_LANES(ext_portstatus) + 1;
udev->ssp_rate = get_port_ssp_rate(hub->hdev, ext_portstatus);
} else {
udev->rx_lanes = 1;
udev->tx_lanes = 1;
udev->ssp_rate = USB_SSP_GEN_UNKNOWN;
}
if (hub_is_wusb(hub))
udev->speed = USB_SPEED_WIRELESS;
else if (hub_is_superspeedplus(hub->hdev) &&
port_speed_is_ssp(hub->hdev, ext_portstatus &
USB_EXT_PORT_STAT_RX_SPEED_ID))
else if (udev->ssp_rate != USB_SSP_GEN_UNKNOWN)
udev->speed = USB_SPEED_SUPER_PLUS;
else if (hub_is_superspeed(hub->hdev))
udev->speed = USB_SPEED_SUPER;
......@@ -3556,7 +3605,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
u16 portchange, portstatus;
if (!test_and_set_bit(port1, hub->child_usage_bits)) {
status = pm_runtime_get_sync(&port_dev->dev);
status = pm_runtime_resume_and_get(&port_dev->dev);
if (status < 0) {
dev_dbg(&udev->dev, "can't resume usb port, status %d\n",
status);
......@@ -4781,9 +4830,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
"%s SuperSpeed%s%s USB device number %d using %s\n",
(udev->config) ? "reset" : "new",
(udev->speed == USB_SPEED_SUPER_PLUS) ?
"Plus Gen 2" : " Gen 1",
(udev->rx_lanes == 2 && udev->tx_lanes == 2) ?
"x2" : "",
" Plus" : "",
(udev->ssp_rate == USB_SSP_GEN_2x2) ?
" Gen 2x2" :
(udev->ssp_rate == USB_SSP_GEN_2x1) ?
" Gen 2x1" :
(udev->ssp_rate == USB_SSP_GEN_1x2) ?
" Gen 1x2" : "",
devnum, driver_name);
}
......
......@@ -148,8 +148,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
{
unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2;
/* Wait at least 100 msec for power to become stable */
return max(delay, 100U);
if (!hub->hdev->parent) /* root hub */
return delay;
else /* Wait at least 100 msec for power to become stable */
return max(delay, 100U);
}
static inline int hub_port_debounce_be_connected(struct usb_hub *hub,
......
......@@ -406,6 +406,7 @@ static const struct usb_device_id usb_quirk_list[] = {
/* Realtek hub in Dell WD19 (Type-C) */
{ USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
{ USB_DEVICE(0x0bda, 0x5487), .driver_info = USB_QUIRK_RESET_RESUME },
/* Generic RTL8153 based ethernet adapters */
{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
......@@ -438,6 +439,9 @@ static const struct usb_device_id usb_quirk_list[] = {
{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
USB_QUIRK_DISCONNECT_SUSPEND },
/* Lenovo ThinkPad USB-C Dock Gen2 Ethernet (RTL8153 GigE) */
{ USB_DEVICE(0x17ef, 0xa387), .driver_info = USB_QUIRK_NO_LPM },
/* BUILDWIN Photo Frame */
{ USB_DEVICE(0x1908, 0x1315), .driver_info =
USB_QUIRK_HONOR_BNUMINTERFACES },
......
......@@ -167,7 +167,10 @@ static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
speed = "5000";
break;
case USB_SPEED_SUPER_PLUS:
speed = "10000";
if (udev->ssp_rate == USB_SSP_GEN_2x2)
speed = "20000";
else
speed = "10000";
break;
default:
speed = "unknown";
......
......@@ -398,6 +398,52 @@ int usb_for_each_dev(void *data, int (*fn)(struct usb_device *, void *))
}
EXPORT_SYMBOL_GPL(usb_for_each_dev);
struct each_hub_arg {
void *data;
int (*fn)(struct device *, void *);
};
static int __each_hub(struct usb_device *hdev, void *data)
{
struct each_hub_arg *arg = (struct each_hub_arg *)data;
struct usb_hub *hub;
int ret = 0;
int i;
hub = usb_hub_to_struct_hub(hdev);
if (!hub)
return 0;
mutex_lock(&usb_port_peer_mutex);
for (i = 0; i < hdev->maxchild; i++) {
ret = arg->fn(&hub->ports[i]->dev, arg->data);
if (ret)
break;
}
mutex_unlock(&usb_port_peer_mutex);
return ret;
}
/**
* usb_for_each_port - iterate over all USB ports in the system
* @data: data pointer that will be handed to the callback function
* @fn: callback function to be called for each USB port
*
* Iterate over all USB ports and call @fn for each, passing it @data. If it
* returns anything other than 0, we break the iteration prematurely and return
* that value.
*/
int usb_for_each_port(void *data, int (*fn)(struct device *, void *))
{
struct each_hub_arg arg = {data, fn};
return usb_for_each_dev(&arg, __each_hub);
}
EXPORT_SYMBOL_GPL(usb_for_each_port);
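A hedged usage sketch of the new iterator; the callback returns 0 to keep walking, per the kerneldoc above:

/* Sketch only: count every USB port in the system. */
static int example_count_one(struct device *dev, void *data)
{
	(*(unsigned int *)data)++;
	return 0;	/* continue iteration */
}

static unsigned int example_count_ports(void)
{
	unsigned int count = 0;

	usb_for_each_port(&count, example_count_one);
	return count;
}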
/**
* usb_release_dev - free a usb device structure when all users of it are finished.
* @dev: device that's been disconnected
......@@ -982,17 +1028,15 @@ static struct notifier_block usb_bus_nb = {
.notifier_call = usb_bus_notify,
};
static struct dentry *usb_devices_root;
static void usb_debugfs_init(void)
{
usb_devices_root = debugfs_create_file("devices", 0444, usb_debug_root,
NULL, &usbfs_devices_fops);
debugfs_create_file("devices", 0444, usb_debug_root, NULL,
&usbfs_devices_fops);
}
static void usb_debugfs_cleanup(void)
{
debugfs_remove(usb_devices_root);
debugfs_remove(debugfs_lookup("devices", usb_debug_root));
}
/*
......
......@@ -131,54 +131,26 @@ int dwc2_restore_global_registers(struct dwc2_hsotg *hsotg)
* dwc2_exit_partial_power_down() - Exit controller from Partial Power Down.
*
* @hsotg: Programming view of the DWC_otg controller
* @rem_wakeup: indicates whether resume is initiated by Reset.
* @restore: Controller registers need to be restored
*/
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore)
int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, int rem_wakeup,
bool restore)
{
u32 pcgcctl;
int ret = 0;
if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL)
return -ENOTSUPP;
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl &= ~PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
struct dwc2_gregs_backup *gr;
udelay(100);
if (restore) {
ret = dwc2_restore_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore registers\n",
__func__);
return ret;
}
if (dwc2_is_host_mode(hsotg)) {
ret = dwc2_restore_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore host registers\n",
__func__);
return ret;
}
} else {
ret = dwc2_restore_device_registers(hsotg, 0);
if (ret) {
dev_err(hsotg->dev, "%s: failed to restore device registers\n",
__func__);
return ret;
}
}
}
gr = &hsotg->gr_backup;
return ret;
/*
* Restore host or device registers in the same mode the core entered
* partial power down, by checking the "GOTGCTL_CURMODE_HOST" backup
* value of the "gotgctl" register.
*/
if (gr->gotgctl & GOTGCTL_CURMODE_HOST)
return dwc2_host_exit_partial_power_down(hsotg, rem_wakeup,
restore);
else
return dwc2_gadget_exit_partial_power_down(hsotg, restore);
}
/**
......@@ -188,57 +160,10 @@ int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore)
*/
int dwc2_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{
u32 pcgcctl;
int ret = 0;
if (!hsotg->params.power_down)
return -ENOTSUPP;
/* Backup all registers */
ret = dwc2_backup_global_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup global registers\n",
__func__);
return ret;
}
if (dwc2_is_host_mode(hsotg)) {
ret = dwc2_backup_host_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup host registers\n",
__func__);
return ret;
}
} else {
ret = dwc2_backup_device_registers(hsotg);
if (ret) {
dev_err(hsotg->dev, "%s: failed to backup device registers\n",
__func__);
return ret;
}
}
/*
* Clear any pending interrupts since dwc2 will not be able to
* clear them after entering partial_power_down.
*/
dwc2_writel(hsotg, 0xffffffff, GINTSTS);
/* Put the controller in low power state */
pcgcctl = dwc2_readl(hsotg, PCGCTL);
pcgcctl |= PCGCTL_PWRCLMP;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
ndelay(20);
pcgcctl |= PCGCTL_RSTPDWNMODULE;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
ndelay(20);
pcgcctl |= PCGCTL_STOPPCLK;
dwc2_writel(hsotg, pcgcctl, PCGCTL);
return ret;
if (dwc2_is_host_mode(hsotg))
return dwc2_host_enter_partial_power_down(hsotg);
else
return dwc2_gadget_enter_partial_power_down(hsotg);
}
/**
......@@ -374,6 +299,12 @@ void dwc2_hib_restore_common(struct dwc2_hsotg *hsotg, int rem_wakeup,
__func__);
} else {
dev_dbg(hsotg->dev, "restore done generated here\n");
/*
* To avoid a restore done interrupt storm after restore is
* generated, clear the GINTSTS_RESTOREDONE bit.
*/
dwc2_writel(hsotg, GINTSTS_RESTOREDONE, GINTSTS);
}
}
......@@ -460,9 +391,6 @@ static bool dwc2_iddig_filter_enabled(struct dwc2_hsotg *hsotg)
*/
int dwc2_enter_hibernation(struct dwc2_hsotg *hsotg, int is_host)
{
if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_HIBERNATION)
return -ENOTSUPP;
if (is_host)
return dwc2_host_enter_hibernation(hsotg);
else
......@@ -545,6 +473,22 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
dwc2_writel(hsotg, greset, GRSTCTL);
}
/*
* When switching from device mode to host mode by disconnecting
* the device cable, the core enters and exits hibernation;
* however, the fifo map is not cleared. This results in a
* WARNING (WARNING: CPU: 5 PID: 0 at drivers/usb/dwc2/
* gadget.c:307 dwc2_hsotg_init_fifo+0x12/0x152 [dwc2])
* if, while in host mode, we disconnect the micro A-to-B host
* cable, because a core reset occurs.
* To avoid the WARNING, fifo_map should be cleared here in
* dwc2_core_reset(), taking the configuration into account:
* fifo_map must be cleared only if the driver is configured in
* "CONFIG_USB_DWC2_PERIPHERAL" or "CONFIG_USB_DWC2_DUAL_ROLE"
* mode.
*/
dwc2_clear_fifo_map(hsotg);
/* Wait for AHB master IDLE state */
if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) {
dev_warn(hsotg->dev, "%s: HANG! AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n",
......
......@@ -38,6 +38,7 @@
#ifndef __DWC2_CORE_H__
#define __DWC2_CORE_H__
#include <linux/acpi.h>
#include <linux/phy/phy.h>
#include <linux/regulator/consumer.h>
#include <linux/usb/gadget.h>
......@@ -426,7 +427,7 @@ enum dwc2_ep0_state {
* @g_tx_fifo_size: An array of TX fifo sizes in dedicated fifo
* mode. Each value corresponds to one EP
* starting from EP1 (max 15 values). Sizes are
* in DWORDS with possible values from from
* in DWORDS with possible values from
* 16-32768 (default: 256, 256, 256, 256, 768,
* 768, 768, 768, 0, 0, 0, 0, 0, 0, 0).
* @change_speed_quirk: Change speed configuration to DWC2_SPEED_PARAM_FULL
......@@ -865,6 +866,8 @@ struct dwc2_hregs_backup {
* @gadget_enabled: Peripheral mode sub-driver initialization indicator.
* @ll_hw_enabled: Status of low-level hardware resources.
* @hibernated: True if core is hibernated
+ * @in_ppd: True if core is in partial power down mode.
+ * @bus_suspended: True if bus is suspended
* @reset_phy_on_wake: Quirk saying that we should assert PHY reset on a
* remote wakeup.
* @phy_off_for_suspend: Status of whether we turned the PHY off at suspend.
......@@ -1022,7 +1025,6 @@ struct dwc2_hregs_backup {
* a pointer to an array of register definitions, the
* array size and the base address where the register bank
* is to be found.
- * @bus_suspended: True if bus is suspended
* @last_frame_num: Number of last frame. Range from 0 to 32768
* @frame_num_array: Used only if CONFIG_USB_DWC2_TRACK_MISSED_SOFS is
* defined, for missed SOFs tracking. Array holds that
......@@ -1060,6 +1062,8 @@ struct dwc2_hsotg {
unsigned int gadget_enabled:1;
unsigned int ll_hw_enabled:1;
unsigned int hibernated:1;
+	unsigned int in_ppd:1;
+	bool bus_suspended;
unsigned int reset_phy_on_wake:1;
unsigned int need_phy_for_wake:1;
unsigned int phy_off_for_suspend:1;
......@@ -1143,7 +1147,6 @@ struct dwc2_hsotg {
unsigned long hs_periodic_bitmap[
DIV_ROUND_UP(DWC2_HS_SCHEDULE_US, BITS_PER_LONG)];
u16 periodic_qh_count;
-	bool bus_suspended;
bool new_connection;
u16 last_frame_num;
......@@ -1301,7 +1304,8 @@ static inline bool dwc2_is_hs_iot(struct dwc2_hsotg *hsotg)
*/
int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait);
int dwc2_enter_partial_power_down(struct dwc2_hsotg *hsotg);
-int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, bool restore);
+int dwc2_exit_partial_power_down(struct dwc2_hsotg *hsotg, int rem_wakeup,
+				 bool restore);
int dwc2_enter_hibernation(struct dwc2_hsotg *hsotg, int is_host);
int dwc2_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
int reset, int is_host);
......@@ -1339,6 +1343,7 @@ irqreturn_t dwc2_handle_common_intr(int irq, void *dev);
/* The device ID match table */
extern const struct of_device_id dwc2_of_match_table[];
extern const struct acpi_device_id dwc2_acpi_match[];
int dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg);
int dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg);
......@@ -1409,11 +1414,19 @@ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup);
int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg);
int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset);
int dwc2_gadget_enter_partial_power_down(struct dwc2_hsotg *hsotg);
int dwc2_gadget_exit_partial_power_down(struct dwc2_hsotg *hsotg,
bool restore);
void dwc2_gadget_enter_clock_gating(struct dwc2_hsotg *hsotg);
void dwc2_gadget_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup);
int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg);
int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg);
int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg);
void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg);
void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg);
static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg)
{ hsotg->fifo_map = 0; }
#else
static inline int dwc2_hsotg_remove(struct dwc2_hsotg *dwc2)
{ return 0; }
......@@ -1442,6 +1455,14 @@ static inline int dwc2_gadget_enter_hibernation(struct dwc2_hsotg *hsotg)
static inline int dwc2_gadget_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset)
{ return 0; }
static inline int dwc2_gadget_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_gadget_exit_partial_power_down(struct dwc2_hsotg *hsotg,
bool restore)
{ return 0; }
static inline void dwc2_gadget_enter_clock_gating(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_gadget_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup) {}
static inline int dwc2_hsotg_tx_fifo_count(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg)
......@@ -1450,6 +1471,7 @@ static inline int dwc2_hsotg_tx_fifo_average_depth(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline void dwc2_gadget_init_lpm(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_gadget_program_ref_clk(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_clear_fifo_map(struct dwc2_hsotg *hsotg) {}
#endif
#if IS_ENABLED(CONFIG_USB_DWC2_HOST) || IS_ENABLED(CONFIG_USB_DWC2_DUAL_ROLE)
......@@ -1459,11 +1481,18 @@ void dwc2_hcd_connect(struct dwc2_hsotg *hsotg);
void dwc2_hcd_disconnect(struct dwc2_hsotg *hsotg, bool force);
void dwc2_hcd_start(struct dwc2_hsotg *hsotg);
int dwc2_core_init(struct dwc2_hsotg *hsotg, bool initial_setup);
int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex);
int dwc2_port_resume(struct dwc2_hsotg *hsotg);
int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg);
int dwc2_restore_host_registers(struct dwc2_hsotg *hsotg);
int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg);
int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset);
int dwc2_host_enter_partial_power_down(struct dwc2_hsotg *hsotg);
int dwc2_host_exit_partial_power_down(struct dwc2_hsotg *hsotg,
int rem_wakeup, bool restore);
void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg);
void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg, int rem_wakeup);
bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2);
static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg)
{ schedule_work(&hsotg->phy_reset_work); }
......@@ -1479,6 +1508,10 @@ static inline void dwc2_hcd_start(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_hcd_remove(struct dwc2_hsotg *hsotg) {}
static inline int dwc2_core_init(struct dwc2_hsotg *hsotg, bool initial_setup)
{ return 0; }
static inline int dwc2_port_suspend(struct dwc2_hsotg *hsotg, u16 windex)
{ return 0; }
static inline int dwc2_port_resume(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_hcd_init(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_backup_host_registers(struct dwc2_hsotg *hsotg)
......@@ -1490,6 +1523,14 @@ static inline int dwc2_host_enter_hibernation(struct dwc2_hsotg *hsotg)
static inline int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg,
int rem_wakeup, int reset)
{ return 0; }
static inline int dwc2_host_enter_partial_power_down(struct dwc2_hsotg *hsotg)
{ return 0; }
static inline int dwc2_host_exit_partial_power_down(struct dwc2_hsotg *hsotg,
int rem_wakeup, bool restore)
{ return 0; }
static inline void dwc2_host_enter_clock_gating(struct dwc2_hsotg *hsotg) {}
static inline void dwc2_host_exit_clock_gating(struct dwc2_hsotg *hsotg,
int rem_wakeup) {}
static inline bool dwc2_host_can_poweroff_phy(struct dwc2_hsotg *dwc2)
{ return false; }
static inline void dwc2_host_schedule_phy_reset(struct dwc2_hsotg *hsotg) {}
......
......@@ -691,6 +691,8 @@ static int params_show(struct seq_file *seq, void *v)
print_param(seq, p, ulpi_fs_ls);
print_param(seq, p, host_support_fs_ls_low_power);
print_param(seq, p, host_ls_low_power_phy_clk);
print_param(seq, p, activate_stm_fs_transceiver);
print_param(seq, p, activate_stm_id_vb_detection);
print_param(seq, p, ts_dline);
print_param(seq, p, reload_ctl);
print_param_hex(seq, p, ahbcfg);
......
......@@ -59,7 +59,7 @@
#define DWC2_UNRESERVE_DELAY (msecs_to_jiffies(5))
/* If we get a NAK, wait this long before retrying */
-#define DWC2_RETRY_WAIT_DELAY 1*1E6L
+#define DWC2_RETRY_WAIT_DELAY (1 * 1E6L)
/**
* dwc2_periodic_channel_available() - Checks that a channel is available for a
......
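The added parentheses are defensive: an unparenthesized macro body can combine with operators at the expansion site. A standalone illustration of the hazard (hypothetical usage, not code from this driver):

	#include <stdio.h>

	#define OLD_DELAY 1*1E6L	/* body as before the fix */
	#define NEW_DELAY (1 * 1E6L)	/* body as after the fix */

	int main(void)
	{
		/* 2 / OLD_DELAY expands to 2 / 1 * 1E6L, i.e. (2 / 1) * 1E6L. */
		printf("%Lg\n", 2 / OLD_DELAY);	/* prints 2e+06, silently wrong */
		printf("%Lg\n", 2 / NEW_DELAY);	/* prints 2e-06, as intended */
		return 0;
	}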
......@@ -44,6 +44,7 @@
#define GOTGCTL_CHIRPEN BIT(27)
#define GOTGCTL_MULT_VALID_BC_MASK (0x1f << 22)
#define GOTGCTL_MULT_VALID_BC_SHIFT 22
#define GOTGCTL_CURMODE_HOST BIT(21)
#define GOTGCTL_OTGVER BIT(20)
#define GOTGCTL_BSESVLD BIT(19)
#define GOTGCTL_ASESVLD BIT(18)
......
......@@ -232,6 +232,12 @@ const struct of_device_id dwc2_of_match_table[] = {
};
MODULE_DEVICE_TABLE(of, dwc2_of_match_table);
const struct acpi_device_id dwc2_acpi_match[] = {
{ "BCM2848", (kernel_ulong_t)dwc2_set_bcm_params },
{ },
};
MODULE_DEVICE_TABLE(acpi, dwc2_acpi_match);
static void dwc2_set_param_otg_cap(struct dwc2_hsotg *hsotg)
{
u8 val;
......@@ -866,10 +872,12 @@ int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
return 0;
}
typedef void (*set_params_cb)(struct dwc2_hsotg *data);
int dwc2_init_params(struct dwc2_hsotg *hsotg)
{
const struct of_device_id *match;
-	void (*set_params)(struct dwc2_hsotg *data);
+	set_params_cb set_params;
dwc2_set_default_params(hsotg);
dwc2_get_device_properties(hsotg);
......@@ -878,6 +886,14 @@ int dwc2_init_params(struct dwc2_hsotg *hsotg)
if (match && match->data) {
set_params = match->data;
set_params(hsotg);
} else {
const struct acpi_device_id *amatch;
amatch = acpi_match_device(dwc2_acpi_match, hsotg->dev);
if (amatch && amatch->driver_data) {
set_params = (set_params_cb)amatch->driver_data;
set_params(hsotg);
}
}
dwc2_check_params(hsotg);
......
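The new set_params_cb typedef exists because struct acpi_device_id carries its per-device data as a kernel_ulong_t, which must be cast back to a function pointer on lookup, whereas of_device_id's data is a void * and assigns directly. A condensed sketch of the resulting lookup order in dwc2_init_params(), using only names that appear in this diff; the of_match_device() call itself presumably sits in the context elided between the two visible hunks:

	set_params_cb set_params = NULL;
	const struct of_device_id *match;
	const struct acpi_device_id *amatch;

	match = of_match_device(dwc2_of_match_table, hsotg->dev);
	if (match && match->data) {
		set_params = match->data;
	} else {
		amatch = acpi_match_device(dwc2_acpi_match, hsotg->dev);
		if (amatch && amatch->driver_data)
			set_params = (set_params_cb)amatch->driver_data;
	}
	if (set_params)
		set_params(hsotg);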
......@@ -316,6 +316,39 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
static int dwc2_driver_remove(struct platform_device *dev)
{
struct dwc2_hsotg *hsotg = platform_get_drvdata(dev);
struct dwc2_gregs_backup *gr;
int ret = 0;
gr = &hsotg->gr_backup;
/* Exit Hibernation when driver is removed. */
if (hsotg->hibernated) {
if (gr->gotgctl & GOTGCTL_CURMODE_HOST)
ret = dwc2_exit_hibernation(hsotg, 0, 0, 1);
else
ret = dwc2_exit_hibernation(hsotg, 0, 0, 0);
if (ret)
dev_err(hsotg->dev,
"exit hibernation failed.\n");
}
/* Exit Partial Power Down when driver is removed. */
if (hsotg->in_ppd) {
ret = dwc2_exit_partial_power_down(hsotg, 0, true);
if (ret)
dev_err(hsotg->dev,
"exit partial_power_down failed\n");
}
/* Exit clock gating when driver is removed. */
if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_NONE &&
hsotg->bus_suspended) {
if (dwc2_is_device_mode(hsotg))
dwc2_gadget_exit_clock_gating(hsotg, 0);
else
dwc2_host_exit_clock_gating(hsotg, 0);
}
dwc2_debugfs_exit(hsotg);
if (hsotg->hcd_enabled)
......@@ -334,7 +367,7 @@ static int dwc2_driver_remove(struct platform_device *dev)
reset_control_assert(hsotg->reset);
reset_control_assert(hsotg->reset_ecc);
-	return 0;
+	return ret;
}
/**
......@@ -734,6 +767,7 @@ static struct platform_driver dwc2_platform_driver = {
.driver = {
.name = dwc2_driver_name,
.of_match_table = dwc2_of_match_table,
.acpi_match_table = ACPI_PTR(dwc2_acpi_match),
.pm = &dwc2_dev_pm_ops,
},
.probe = dwc2_driver_probe,
......
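One detail worth noting in the remove path above: while the core is hibernated its registers are powered off, so the current operating mode is read from the backed-up GOTGCTL (via the GOTGCTL_CURMODE_HOST bit this series adds to hw.h) rather than from the hardware. A minimal sketch equivalent to the if/else in dwc2_driver_remove():

	/*
	 * gr_backup.gotgctl was saved on hibernation entry; a live register
	 * read is not possible while the core is powered down.
	 */
	int is_host = !!(hsotg->gr_backup.gotgctl & GOTGCTL_CURMODE_HOST);

	/* rem_wakeup = 0, reset = 0, matching the remove path above. */
	ret = dwc2_exit_hibernation(hsotg, 0, 0, is_host);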
......@@ -149,4 +149,13 @@ config USB_DWC3_IMX8MP
functionality.
Say 'Y' or 'M' if you have one such device.
config USB_DWC3_XILINX
tristate "Xilinx Platforms"
depends on (ARCH_ZYNQMP || ARCH_VERSAL) && OF
default USB_DWC3
help
Support Xilinx SoCs with DesignWare Core USB3 IP.
This driver handles both ZynqMP and Versal SoC operations.
Say 'Y' or 'M' if you have one such device.
endif
......@@ -52,3 +52,4 @@ obj-$(CONFIG_USB_DWC3_OF_SIMPLE) += dwc3-of-simple.o
obj-$(CONFIG_USB_DWC3_ST) += dwc3-st.o
obj-$(CONFIG_USB_DWC3_QCOM) += dwc3-qcom.o
obj-$(CONFIG_USB_DWC3_IMX8MP) += dwc3-imx8mp.o
obj-$(CONFIG_USB_DWC3_XILINX) += dwc3-xilinx.o
/* SPDX-License-Identifier: GPL-2.0 */
-/**
+/*
* debug.h - DesignWare USB3 DRD Controller Debug Header
*
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
......
// SPDX-License-Identifier: GPL-2.0
-/**
+/*
* dwc3-exynos.c - Samsung Exynos DWC3 Specific Glue layer
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
......