USB/Thunderbolt patches for 5.9-rc1

Here is the large set of USB and Thunderbolt patches for 5.9-rc1. Nothing
really magic/major in here, just lots of little changes and updates:

 - clean up language usages in USB core and some drivers
 - Thunderbolt driver updates and additions
 - USB Gadget driver updates
 - dwc3 driver updates (like always...)
 - build with "W=1" warning fixups
 - mtu3 driver updates
 - usb-serial driver updates and device ids
 - typec additions and updates for new hardware
 - xhci debug code updates for future platforms
 - cdns3 driver updates
 - lots of other minor driver updates and fixes and cleanups

All of these have been in linux-next for a while with no reported issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCXymciA8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+ylVVwCfU7JxgjFhAJTzC9K5efVqsrSHzxQAnijHrqUn
pHgI9M1ZRVGwmv2RNWb3
=9/8I
-----END PGP SIGNATURE-----

Merge tag 'usb-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB/Thunderbolt updates from Greg KH:
 "Here is the large set of USB and Thunderbolt patches for 5.9-rc1.
  Nothing really magic/major in here, just lots of little changes and
  updates:

   - clean up language usages in USB core and some drivers
   - Thunderbolt driver updates and additions
   - USB Gadget driver updates
   - dwc3 driver updates (like always...)
   - build with "W=1" warning fixups
   - mtu3 driver updates
   - usb-serial driver updates and device ids
   - typec additions and updates for new hardware
   - xhci debug code updates for future platforms
   - cdns3 driver updates
   - lots of other minor driver updates and fixes and cleanups

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'usb-5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (330 commits)
  usb: common: usb-conn-gpio: Register charger
  usb: mtu3: simplify mtu3_req_complete()
  usb: mtu3: clear dual mode of u3port when disable device
  usb: mtu3: use MTU3_EP_WEDGE flag
  usb: mtu3: remove useless member @busy in mtu3_ep struct
  usb: mtu3: remove repeated error log
  usb: mtu3: add ->udc_set_speed()
  usb: mtu3: introduce a funtion to check maximum speed
  usb: mtu3: clear interrupts status when disable interrupts
  usb: mtu3: reinitialize CSR registers
  usb: mtu3: fix macro for maximum number of packets
  usb: mtu3: remove unnecessary pointer checks
  usb: xhci: Fix ASMedia ASM1142 DMA addressing
  usb: xhci: define IDs for various ASMedia host controllers
  usb: musb: convert to devm_platform_ioremap_resource_byname
  usb: gadget: tegra-xudc: convert to devm_platform_ioremap_resource_byname
  usb: gadget: r8a66597: convert to devm_platform_ioremap_resource_byname
  usb: dwc3: convert to devm_platform_ioremap_resource_byname
  usb: cdns3: convert to devm_platform_ioremap_resource_byname
  usb: phy: am335x: convert to devm_platform_ioremap_resource_byname
  ...
commit ecfd7940b8
@@ -178,11 +178,18 @@ KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
Description: When new NVM image is written to the non-active NVM
        area (through non_activeX NVMem device), the
        authentication procedure is started by writing 1 to
        this file. If everything goes well, the device is
        authentication procedure is started by writing to
        this file.
        If everything goes well, the device is
        restarted with the new NVM firmware. If the image
        verification fails an error code is returned instead.

        This file will accept writing values "1" or "2"
        - Writing "1" will flush the image to the storage
        area and authenticate the image in one action.
        - Writing "2" will run some basic validation on the image
        and flush it to the storage area.

        When read holds status of the last authentication
        operation if an error occurred during the process. This
        is directly the status value from the DMA configuration
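For illustration only (not part of this patch): a minimal user-space sketch of
the two accepted values described above. The router name "0-1" is an assumed
example, not something the ABI mandates:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Hypothetical helper: write "2" to flush the cached image only, or
 * "1" to flush and start authentication in one action.
 */
static int nvm_authenticate_write(const char *val)
{
	int fd, ret;

	fd = open("/sys/bus/thunderbolt/devices/0-1/nvm_authenticate", O_WRONLY);
	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val)) == (ssize_t)strlen(val) ? 0 : -1;
	close(fd);
	return ret;
}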
@@ -236,3 +243,49 @@ KernelVersion: 4.15
Contact: thunderbolt-software@lists.01.org
Description: This contains XDomain service specific settings as
        bitmask. Format: %x

What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/device
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Retimer device identifier read from the hardware.

What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_authenticate
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: When new NVM image is written to the non-active NVM
        area (through non_activeX NVMem device), the
        authentication procedure is started by writing 1 to
        this file. If everything goes well, the device is
        restarted with the new NVM firmware. If the image
        verification fails an error code is returned instead.

        When read holds status of the last authentication
        operation if an error occurred during the process.
        Format: %x.

What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_version
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Holds retimer NVM version number. Format: %x.%x, major.minor.

What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/vendor
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Retimer vendor identifier read from the hardware.

What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
Date: Oct 2020
KernelVersion: v5.9
Contact: Mario Limonciello <mario.limonciello@dell.com>
Description: For supported devices, automatically authenticate the new Thunderbolt
        image when the device is disconnected from the host system.

        This file will accept writing values "1" or "2"
        - Writing "1" will flush the image to the storage
        area and prepare the device for authentication on disconnect.
        - Writing "2" will run some basic validation on the image
        and flush it to the storage area.
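Again purely as an illustration (not part of the patch): the new retimer
attributes above can be read like any other sysfs files. The glob pattern below
is an assumption about how the <device>:<port>.<index> names look in practice:

#include <glob.h>
#include <stdio.h>

int main(void)
{
	/* Retimers are named <device>:<port>.<index>; the glob is a guess */
	glob_t g;
	size_t i;

	if (glob("/sys/bus/thunderbolt/devices/*:*.*/nvm_version", 0, NULL, &g))
		return 0;

	for (i = 0; i < g.gl_pathc; i++) {
		FILE *f = fopen(g.gl_pathv[i], "r");
		char version[32] = "";

		if (!f)
			continue;
		if (fgets(version, sizeof(version), f))
			printf("%s: %s", g.gl_pathv[i], version);
		fclose(f);
	}
	globfree(&g);
	return 0;
}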
@@ -173,8 +173,8 @@ following ``udev`` rule::

    ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1"

Upgrading NVM on Thunderbolt device or host
-------------------------------------------
Upgrading NVM on Thunderbolt device, host or retimer
----------------------------------------------------
Since most of the functionality is handled in firmware running on a
host controller or a device, it is important that the firmware can be
upgraded to the latest where possible bugs in it have been fixed.
@@ -185,9 +185,10 @@ for some machines:

`Thunderbolt Updates <https://thunderbolttechnology.net/updates>`_

Before you upgrade firmware on a device or host, please make sure it is a
suitable upgrade. Failing to do that may render the device (or host) in a
state where it cannot be used properly anymore without special tools!
Before you upgrade firmware on a device, host or retimer, please make
sure it is a suitable upgrade. Failing to do that may render the device
in a state where it cannot be used properly anymore without special
tools!

Host NVM upgrade on Apple Macs is not supported.
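As a rough, non-authoritative sketch of the upgrade flow the section above
describes: copy the image into the non-active NVMem device exposed for the
router (or retimer), then start authentication. The device path and the image
file name are assumptions made for the example only:

#include <fcntl.h>
#include <unistd.h>

#define TBT_DEV "/sys/bus/thunderbolt/devices/0-1"	/* assumed router */

int main(void)
{
	char buf[4096];
	ssize_t n;
	int in, out, auth;

	/* 1. Copy the firmware image into the non-active NVM area */
	in = open("firmware.bin", O_RDONLY);		/* assumed image name */
	out = open(TBT_DEV "/nvm_non_active0/nvmem", O_WRONLY);
	if (in < 0 || out < 0)
		return 1;
	while ((n = read(in, buf, sizeof(buf))) > 0)
		if (write(out, buf, n) != n)
			return 1;
	close(in);
	close(out);

	/* 2. Start authentication; on success the device restarts */
	auth = open(TBT_DEV "/nvm_authenticate", O_WRONLY);
	if (auth < 0 || write(auth, "1", 1) != 1)
		return 1;
	close(auth);
	return 0;
}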
@@ -4,7 +4,7 @@ Broadcom USB Device Controller (BDC)
Required properties:

- compatible: must be one of:
        "brcm,bdc-v0.16"
        "brcm,bdc-udc-v2"
        "brcm,bdc"
- reg: the base register address and length
- interrupts: the interrupt line for this controller
@@ -21,7 +21,7 @@ On Broadcom STB platforms, these properties are required:
Example:

bdc@f0b02000 {
        compatible = "brcm,bdc-v0.16";
        compatible = "brcm,bdc-udc-v2";
        reg = <0xf0b02000 0xfc4>;
        interrupts = <0x0 0x60 0x0>;
        phys = <&usbphy_0 0x0>;
@@ -4,10 +4,11 @@
$id: http://devicetree.org/schemas/usb/ingenic,jz4770-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Ingenic JZ4770 USB PHY devicetree bindings
title: Ingenic SoCs USB PHY devicetree bindings

maintainers:
  - Paul Cercueil <paul@crapouillou.net>
  - 周琰杰 (Zhou Yanjie) <zhouyanjie@wanyeetech.com>

properties:
  $nodename:
@@ -16,6 +17,9 @@ properties:
  compatible:
    enum:
      - ingenic,jz4770-phy
      - ingenic,jz4780-phy
      - ingenic,x1000-phy
      - ingenic,x1830-phy

  reg:
    maxItems: 1
@ -11,22 +11,36 @@ maintainers:
|
||||
|
||||
properties:
|
||||
compatible:
|
||||
oneOf:
|
||||
- const: "ti,keystone-dwc3"
|
||||
- const: "ti,am654-dwc3"
|
||||
items:
|
||||
- enum:
|
||||
- ti,keystone-dwc3
|
||||
- ti,am654-dwc3
|
||||
|
||||
reg:
|
||||
maxItems: 1
|
||||
description: Address and length of the register set for the USB subsystem on
|
||||
the SOC.
|
||||
|
||||
'#address-cells':
|
||||
const: 1
|
||||
|
||||
'#size-cells':
|
||||
const: 1
|
||||
|
||||
ranges: true
|
||||
|
||||
interrupts:
|
||||
maxItems: 1
|
||||
description: The irq number of this device that is used to interrupt the MPU.
|
||||
|
||||
|
||||
clocks:
|
||||
description: Clock ID for USB functional clock.
|
||||
minItems: 1
|
||||
maxItems: 2
|
||||
|
||||
assigned-clocks:
|
||||
minItems: 1
|
||||
maxItems: 2
|
||||
|
||||
assigned-clock-parents:
|
||||
minItems: 1
|
||||
maxItems: 2
|
||||
|
||||
power-domains:
|
||||
description: Should contain a phandle to a PM domain provider node
|
||||
@ -42,33 +56,42 @@ properties:
|
||||
|
||||
phy-names:
|
||||
items:
|
||||
- const: "usb3-phy"
|
||||
- const: usb3-phy
|
||||
|
||||
dwc3:
|
||||
dma-coherent: true
|
||||
|
||||
dma-ranges: true
|
||||
|
||||
patternProperties:
|
||||
"usb@[a-f0-9]+$":
|
||||
type: object
|
||||
description: This is the node representing the DWC3 controller instance
|
||||
Documentation/devicetree/bindings/usb/dwc3.txt
|
||||
|
||||
required:
|
||||
- compatible
|
||||
- reg
|
||||
- "#address-cells"
|
||||
- "#size-cells"
|
||||
- ranges
|
||||
- interrupts
|
||||
- clocks
|
||||
|
||||
additionalProperties: false
|
||||
|
||||
examples:
|
||||
- |
|
||||
#include <dt-bindings/interrupt-controller/arm-gic.h>
|
||||
|
||||
usb: usb@2680000 {
|
||||
dwc3@2680000 {
|
||||
compatible = "ti,keystone-dwc3";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
reg = <0x2680000 0x10000>;
|
||||
clocks = <&clkusb>;
|
||||
clock-names = "usb";
|
||||
interrupts = <GIC_SPI 393 IRQ_TYPE_EDGE_RISING>;
|
||||
ranges;
|
||||
|
||||
dwc3@2690000 {
|
||||
usb@2690000 {
|
||||
compatible = "synopsys,dwc3";
|
||||
reg = <0x2690000 0x70000>;
|
||||
interrupts = <GIC_SPI 393 IRQ_TYPE_EDGE_RISING>;
|
||||
|
@@ -240,7 +240,7 @@ How to do isochronous (ISO) transfers?
======================================

Besides the fields present on a bulk transfer, for ISO, you also
also have to set ``urb->interval`` to say how often to make transfers; it's
have to set ``urb->interval`` to say how often to make transfers; it's
often one per frame (which is once every microframe for highspeed devices).
The actual interval used will be a power of two that's no bigger than what
you specify. You can use the :c:func:`usb_fill_int_urb` macro to fill
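Not part of the patch, but as a rough illustration of the ISO fields the
paragraph above refers to, here is a hedged driver-side sketch of filling an
isochronous URB by hand; the endpoint address, packet size and completion
handler are placeholders supplied by the calling driver:

#include <linux/slab.h>
#include <linux/usb.h>

/*
 * Hypothetical helper: queue one isochronous IN URB with npackets
 * fixed-size packets. Only the field set-up is shown here.
 */
static int queue_iso_urb(struct usb_device *dev, unsigned int ep_addr,
			 void *buf, unsigned int pkt_len, int npackets,
			 usb_complete_t complete_fn, void *ctx)
{
	struct urb *urb;
	int i;

	urb = usb_alloc_urb(npackets, GFP_KERNEL);
	if (!urb)
		return -ENOMEM;

	urb->dev = dev;
	urb->pipe = usb_rcvisocpipe(dev, ep_addr);
	urb->transfer_flags = URB_ISO_ASAP;
	urb->transfer_buffer = buf;
	urb->transfer_buffer_length = npackets * pkt_len;
	urb->interval = 1;			/* once per (micro)frame */
	urb->number_of_packets = npackets;
	urb->complete = complete_fn;
	urb->context = ctx;

	for (i = 0; i < npackets; i++) {
		urb->iso_frame_desc[i].offset = i * pkt_len;
		urb->iso_frame_desc[i].length = pkt_len;
	}

	return usb_submit_urb(urb, GFP_KERNEL);
}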
@@ -11,7 +11,7 @@ and HID reports can be sent/received through I/O on the
/dev/hidgX character devices.

For more details about HID, see the developer page on
http://www.usb.org/developers/hidpage/
https://www.usb.org/developers/hidpage/

Configuration
=============
@@ -142,7 +142,7 @@ Footnotes
=========

[1] Remote Network Driver Interface Specification,
[[http://msdn.microsoft.com/en-us/library/ee484414.aspx]].
[[https://msdn.microsoft.com/en-us/library/ee484414.aspx]].

[2] Communications Device Class Abstract Control Model, spec for this
and other USB classes can be found at
@@ -150,9 +150,9 @@ and other USB classes can be found at

[3] CDC Ethernet Control Model.

[4] [[http://msdn.microsoft.com/en-us/library/ff537109(v=VS.85).aspx]]
[4] [[https://msdn.microsoft.com/en-us/library/ff537109(v=VS.85).aspx]]

[5] [[http://msdn.microsoft.com/en-us/library/ff539234(v=VS.85).aspx]]
[5] [[https://msdn.microsoft.com/en-us/library/ff539234(v=VS.85).aspx]]

[6] To put it in some other nice words, Windows failed to respond to
any user input.
@@ -160,6 +160,6 @@ any user input.

[7] You may find [[http://www.cygnal.org/ubb/Forum9/HTML/001050.html]]
useful.

[8] http://www.nirsoft.net/utils/usb_devices_view.html
[8] https://www.nirsoft.net/utils/usb_devices_view.html

[9] [[http://msdn.microsoft.com/en-us/library/ff570620.aspx]]
[9] [[https://msdn.microsoft.com/en-us/library/ff570620.aspx]]
@@ -1,5 +1,5 @@
; Based on template INF file found at
; <http://msdn.microsoft.com/en-us/library/ff570620.aspx>
; <https://msdn.microsoft.com/en-us/library/ff570620.aspx>
; which was:
; Copyright (c) Microsoft Corporation
; and released under the MLPL as found at:
@@ -7013,6 +7013,13 @@ L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: drivers/usb/gadget/udc/fsl*

FREESCALE USB PHY DRIVER
M: Ran Wang <ran.wang_1@nxp.com>
L: linux-usb@vger.kernel.org
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
F: drivers/usb/phy/phy-fsl-usb*

FREEVXFS FILESYSTEM
M: Christoph Hellwig <hch@infradead.org>
S: Maintained
|
@ -26,6 +26,7 @@
|
||||
* 675 Mass Ave, Cambridge, MA 02139, USA.
|
||||
*/
|
||||
#include <linux/gpio.h>
|
||||
#include <linux/gpio/machine.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/platform_device.h>
|
||||
@ -55,6 +56,9 @@
|
||||
|
||||
#include "common.h"
|
||||
|
||||
/* Name of the GPIO chip used by the OMAP for GPIOs 0..15 */
|
||||
#define OMAP_GPIO_LABEL "gpio-0-15"
|
||||
|
||||
/* At OMAP5912 OSK the Ethernet is directly connected to CS1 */
|
||||
#define OMAP_OSK_ETHR_START 0x04800300
|
||||
|
||||
@ -240,7 +244,9 @@ static struct tps65010_board tps_board = {
|
||||
|
||||
static struct i2c_board_info __initdata osk_i2c_board_info[] = {
|
||||
{
|
||||
/* This device will get the name "i2c-tps65010" */
|
||||
I2C_BOARD_INFO("tps65010", 0x48),
|
||||
.dev_name = "tps65010",
|
||||
.platform_data = &tps_board,
|
||||
|
||||
},
|
||||
@ -278,6 +284,16 @@ static void __init osk_init_cf(void)
|
||||
irq_set_irq_type(gpio_to_irq(62), IRQ_TYPE_EDGE_FALLING);
|
||||
}
|
||||
|
||||
static struct gpiod_lookup_table osk_usb_gpio_table = {
|
||||
.dev_id = "ohci",
|
||||
.table = {
|
||||
/* Power GPIO on the I2C-attached TPS65010 */
|
||||
GPIO_LOOKUP("i2c-tps65010", 1, "power", GPIO_ACTIVE_HIGH),
|
||||
GPIO_LOOKUP(OMAP_GPIO_LABEL, 9, "overcurrent",
|
||||
GPIO_ACTIVE_HIGH),
|
||||
},
|
||||
};
|
||||
|
||||
static struct omap_usb_config osk_usb_config __initdata = {
|
||||
/* has usb host connector (A) ... for development it can also
|
||||
* be used, with a NONSTANDARD gender-bending cable/dongle, as
|
||||
@ -581,6 +597,7 @@ static void __init osk_init(void)
|
||||
l |= (3 << 1);
|
||||
omap_writel(l, USB_TRANSCEIVER_CTRL);
|
||||
|
||||
gpiod_add_lookup_table(&osk_usb_gpio_table);
|
||||
omap1_usb_init(&osk_usb_config);
|
||||
|
||||
/* irq for tps65010 chip */
|
||||
|
@ -159,7 +159,7 @@ CONFIG_USB_KBD=y
|
||||
CONFIG_USB_MOUSE=y
|
||||
CONFIG_USB=y
|
||||
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
|
||||
CONFIG_USB_OTG_WHITELIST=y
|
||||
CONFIG_USB_OTG_PRODUCTLIST=y
|
||||
CONFIG_USB_WUSB_CBAF=m
|
||||
CONFIG_USB_C67X00_HCD=m
|
||||
CONFIG_USB_EHCI_HCD=y
|
||||
|
@ -96,7 +96,7 @@ CONFIG_SND_SIMPLE_CARD=y
|
||||
CONFIG_USB_CONN_GPIO=y
|
||||
CONFIG_USB=y
|
||||
CONFIG_USB_OTG=y
|
||||
CONFIG_USB_OTG_BLACKLIST_HUB=y
|
||||
CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB=y
|
||||
CONFIG_USB_OHCI_HCD=y
|
||||
CONFIG_USB_OHCI_HCD_PLATFORM=y
|
||||
CONFIG_USB_MUSB_HDRC=y
|
||||
|
@ -207,7 +207,7 @@ CONFIG_ZEROPLUS_FF=y
|
||||
CONFIG_USB_HIDDEV=y
|
||||
CONFIG_USB=y
|
||||
CONFIG_USB_DYNAMIC_MINORS=y
|
||||
CONFIG_USB_OTG_WHITELIST=y
|
||||
CONFIG_USB_OTG_PRODUCTLIST=y
|
||||
CONFIG_USB_MON=y
|
||||
CONFIG_USB_EHCI_HCD=y
|
||||
CONFIG_USB_EHCI_ROOT_HUB_TT=y
|
||||
|
@ -866,8 +866,8 @@ static int tbnet_open(struct net_device *dev)
|
||||
eof_mask = BIT(TBIP_PDF_FRAME_END);
|
||||
|
||||
ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE,
|
||||
RING_FLAG_FRAME | RING_FLAG_E2E, sof_mask,
|
||||
eof_mask, tbnet_start_poll, net);
|
||||
RING_FLAG_FRAME, sof_mask, eof_mask,
|
||||
tbnet_start_poll, net);
|
||||
if (!ring) {
|
||||
netdev_err(dev, "failed to allocate Rx ring\n");
|
||||
tb_ring_free(net->tx_ring.ring);
|
||||
|
@@ -8,10 +8,15 @@ menuconfig USB4
	select CRYPTO_HASH
	select NVMEM
	help
	  USB4 and Thunderbolt driver. USB4 is the public speficiation
	  based on Thunderbolt 3 protocol. This driver is required if
	  USB4 and Thunderbolt driver. USB4 is the public specification
	  based on the Thunderbolt 3 protocol. This driver is required if
	  you want to hotplug Thunderbolt and USB4 compliant devices on
	  Apple hardware or on PCs with Intel Falcon Ridge or newer.

	  To compile this driver a module, choose M here. The module will be
	  called thunderbolt.

config USB4_KUNIT_TEST
	bool "KUnit tests"
	depends on KUNIT=y
	depends on USB4=y
@@ -2,3 +2,6 @@
obj-${CONFIG_USB4} := thunderbolt.o
thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
thunderbolt-objs += nvm.o retimer.o quirks.o

obj-${CONFIG_USB4_KUNIT_TEST} += test.o
@ -812,6 +812,6 @@ void tb_domain_exit(void)
|
||||
{
|
||||
bus_unregister(&tb_bus_type);
|
||||
ida_destroy(&tb_domain_ida);
|
||||
tb_switch_exit();
|
||||
tb_nvm_exit();
|
||||
tb_xdomain_exit();
|
||||
}
|
||||
|
@ -599,6 +599,7 @@ parse:
|
||||
sw->uid = header->uid;
|
||||
sw->vendor = header->vendor_id;
|
||||
sw->device = header->model_id;
|
||||
tb_check_quirks(sw);
|
||||
|
||||
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
|
||||
if (crc != header->data_crc32) {
|
||||
|
@ -366,3 +366,17 @@ int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in)
|
||||
tb_port_dbg(in, "sink %d de-allocated\n", sink);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_lc_force_power() - Forces LC to be powered on
|
||||
* @sw: Thunderbolt switch
|
||||
*
|
||||
* This is useful to let authentication cycle pass even without
|
||||
* a Thunderbolt link present.
|
||||
*/
|
||||
int tb_lc_force_power(struct tb_switch *sw)
|
||||
{
|
||||
u32 in = 0xffff;
|
||||
|
||||
return tb_sw_write(sw, &in, TB_CFG_SWITCH, TB_LC_POWER, 1);
|
||||
}
|
||||
|
@ -24,12 +24,7 @@
|
||||
|
||||
#define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring")
|
||||
|
||||
/*
|
||||
* Used to enable end-to-end workaround for missing RX packets. Do not
|
||||
* use this ring for anything else.
|
||||
*/
|
||||
#define RING_E2E_UNUSED_HOPID 2
|
||||
#define RING_FIRST_USABLE_HOPID TB_PATH_MIN_HOPID
|
||||
#define RING_FIRST_USABLE_HOPID 1
|
||||
|
||||
/*
|
||||
* Minimal number of vectors when we use MSI-X. Two for control channel
|
||||
@ -440,7 +435,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
|
||||
|
||||
/*
|
||||
* Automatically allocate HopID from the non-reserved
|
||||
* range 8 .. hop_count - 1.
|
||||
* range 1 .. hop_count - 1.
|
||||
*/
|
||||
for (i = RING_FIRST_USABLE_HOPID; i < nhi->hop_count; i++) {
|
||||
if (ring->is_tx) {
|
||||
@ -496,10 +491,6 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
|
||||
dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
|
||||
transmit ? "TX" : "RX", hop, size);
|
||||
|
||||
/* Tx Ring 2 is reserved for E2E workaround */
|
||||
if (transmit && hop == RING_E2E_UNUSED_HOPID)
|
||||
return NULL;
|
||||
|
||||
ring = kzalloc(sizeof(*ring), GFP_KERNEL);
|
||||
if (!ring)
|
||||
return NULL;
|
||||
@ -614,19 +605,6 @@ void tb_ring_start(struct tb_ring *ring)
|
||||
flags = RING_FLAG_ENABLE | RING_FLAG_RAW;
|
||||
}
|
||||
|
||||
if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
|
||||
u32 hop;
|
||||
|
||||
/*
|
||||
* In order not to lose Rx packets we enable end-to-end
|
||||
* workaround which transfers Rx credits to an unused Tx
|
||||
* HopID.
|
||||
*/
|
||||
hop = RING_E2E_UNUSED_HOPID << REG_RX_OPTIONS_E2E_HOP_SHIFT;
|
||||
hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
|
||||
flags |= hop | RING_FLAG_E2E_FLOW_CONTROL;
|
||||
}
|
||||
|
||||
ring_iowrite64desc(ring, ring->descriptors_dma, 0);
|
||||
if (ring->is_tx) {
|
||||
ring_iowrite32desc(ring, ring->size, 12);
|
||||
@ -1123,9 +1101,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
||||
/* cannot fail - table is allocated bin pcim_iomap_regions */
|
||||
nhi->iobase = pcim_iomap_table(pdev)[0];
|
||||
nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff;
|
||||
if (nhi->hop_count != 12 && nhi->hop_count != 32)
|
||||
dev_warn(&pdev->dev, "unexpected hop count: %d\n",
|
||||
nhi->hop_count);
|
||||
dev_dbg(&pdev->dev, "total paths: %d\n", nhi->hop_count);
|
||||
|
||||
nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
|
||||
sizeof(*nhi->tx_rings), GFP_KERNEL);
|
||||
|
drivers/thunderbolt/nvm.c (new file, 170 lines)
@@ -0,0 +1,170 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* NVM helpers
|
||||
*
|
||||
* Copyright (C) 2020, Intel Corporation
|
||||
* Author: Mika Westerberg <mika.westerberg@linux.intel.com>
|
||||
*/
|
||||
|
||||
#include <linux/idr.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/vmalloc.h>
|
||||
|
||||
#include "tb.h"
|
||||
|
||||
static DEFINE_IDA(nvm_ida);
|
||||
|
||||
/**
|
||||
* tb_nvm_alloc() - Allocate new NVM structure
|
||||
* @dev: Device owning the NVM
|
||||
*
|
||||
* Allocates new NVM structure with unique @id and returns it. In case
|
||||
* of error returns ERR_PTR().
|
||||
*/
|
||||
struct tb_nvm *tb_nvm_alloc(struct device *dev)
|
||||
{
|
||||
struct tb_nvm *nvm;
|
||||
int ret;
|
||||
|
||||
nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
|
||||
if (!nvm)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
ret = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
|
||||
if (ret < 0) {
|
||||
kfree(nvm);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
nvm->id = ret;
|
||||
nvm->dev = dev;
|
||||
|
||||
return nvm;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_nvm_add_active() - Adds active NVMem device to NVM
|
||||
* @nvm: NVM structure
|
||||
* @size: Size of the active NVM in bytes
|
||||
* @reg_read: Pointer to the function to read the NVM (passed directly to the
|
||||
* NVMem device)
|
||||
*
|
||||
* Registers new active NVmem device for @nvm. The @reg_read is called
|
||||
* directly from NVMem so it must handle possible concurrent access if
|
||||
* needed. The first parameter passed to @reg_read is @nvm structure.
|
||||
* Returns %0 in success and negative errno otherwise.
|
||||
*/
|
||||
int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read)
|
||||
{
|
||||
struct nvmem_config config;
|
||||
struct nvmem_device *nvmem;
|
||||
|
||||
memset(&config, 0, sizeof(config));
|
||||
|
||||
config.name = "nvm_active";
|
||||
config.reg_read = reg_read;
|
||||
config.read_only = true;
|
||||
config.id = nvm->id;
|
||||
config.stride = 4;
|
||||
config.word_size = 4;
|
||||
config.size = size;
|
||||
config.dev = nvm->dev;
|
||||
config.owner = THIS_MODULE;
|
||||
config.priv = nvm;
|
||||
|
||||
nvmem = nvmem_register(&config);
|
||||
if (IS_ERR(nvmem))
|
||||
return PTR_ERR(nvmem);
|
||||
|
||||
nvm->active = nvmem;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_nvm_write_buf() - Write data to @nvm buffer
|
||||
* @nvm: NVM structure
|
||||
* @offset: Offset where to write the data
|
||||
* @val: Data buffer to write
|
||||
* @bytes: Number of bytes to write
|
||||
*
|
||||
* Helper function to cache the new NVM image before it is actually
|
||||
* written to the flash. Copies @bytes from @val to @nvm->buf starting
|
||||
* from @offset.
|
||||
*/
|
||||
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
if (!nvm->buf) {
|
||||
nvm->buf = vmalloc(NVM_MAX_SIZE);
|
||||
if (!nvm->buf)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
nvm->flushed = false;
|
||||
nvm->buf_data_size = offset + bytes;
|
||||
memcpy(nvm->buf + offset, val, bytes);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_nvm_add_non_active() - Adds non-active NVMem device to NVM
|
||||
* @nvm: NVM structure
|
||||
* @size: Size of the non-active NVM in bytes
|
||||
* @reg_write: Pointer to the function to write the NVM (passed directly
|
||||
* to the NVMem device)
|
||||
*
|
||||
* Registers new non-active NVmem device for @nvm. The @reg_write is called
|
||||
* directly from NVMem so it must handle possible concurrent access if
|
||||
* needed. The first parameter passed to @reg_write is @nvm structure.
|
||||
* Returns %0 in success and negative errno otherwise.
|
||||
*/
|
||||
int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
|
||||
nvmem_reg_write_t reg_write)
|
||||
{
|
||||
struct nvmem_config config;
|
||||
struct nvmem_device *nvmem;
|
||||
|
||||
memset(&config, 0, sizeof(config));
|
||||
|
||||
config.name = "nvm_non_active";
|
||||
config.reg_write = reg_write;
|
||||
config.root_only = true;
|
||||
config.id = nvm->id;
|
||||
config.stride = 4;
|
||||
config.word_size = 4;
|
||||
config.size = size;
|
||||
config.dev = nvm->dev;
|
||||
config.owner = THIS_MODULE;
|
||||
config.priv = nvm;
|
||||
|
||||
nvmem = nvmem_register(&config);
|
||||
if (IS_ERR(nvmem))
|
||||
return PTR_ERR(nvmem);
|
||||
|
||||
nvm->non_active = nvmem;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_nvm_free() - Release NVM and its resources
|
||||
* @nvm: NVM structure to release
|
||||
*
|
||||
* Releases NVM and the NVMem devices if they were registered.
|
||||
*/
|
||||
void tb_nvm_free(struct tb_nvm *nvm)
|
||||
{
|
||||
if (nvm) {
|
||||
if (nvm->non_active)
|
||||
nvmem_unregister(nvm->non_active);
|
||||
if (nvm->active)
|
||||
nvmem_unregister(nvm->active);
|
||||
vfree(nvm->buf);
|
||||
ida_simple_remove(&nvm_ida, nvm->id);
|
||||
}
|
||||
kfree(nvm);
|
||||
}
|
||||
|
||||
void tb_nvm_exit(void)
|
||||
{
|
||||
ida_destroy(&nvm_ida);
|
||||
}
|
@ -229,7 +229,7 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
|
||||
struct tb_port *dst, int dst_hopid, int link_nr,
|
||||
const char *name)
|
||||
{
|
||||
struct tb_port *in_port, *out_port;
|
||||
struct tb_port *in_port, *out_port, *first_port, *last_port;
|
||||
int in_hopid, out_hopid;
|
||||
struct tb_path *path;
|
||||
size_t num_hops;
|
||||
@ -239,12 +239,23 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
|
||||
if (!path)
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
* Number of hops on a path is the distance between the two
|
||||
* switches plus the source adapter port.
|
||||
*/
|
||||
num_hops = abs(tb_route_length(tb_route(src->sw)) -
|
||||
tb_route_length(tb_route(dst->sw))) + 1;
|
||||
first_port = last_port = NULL;
|
||||
i = 0;
|
||||
tb_for_each_port_on_path(src, dst, in_port) {
|
||||
if (!first_port)
|
||||
first_port = in_port;
|
||||
last_port = in_port;
|
||||
i++;
|
||||
}
|
||||
|
||||
/* Check that src and dst are reachable */
|
||||
if (first_port != src || last_port != dst) {
|
||||
kfree(path);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* Each hop takes two ports */
|
||||
num_hops = i / 2;
|
||||
|
||||
path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
|
||||
if (!path->hops) {
|
||||
@ -559,21 +570,20 @@ bool tb_path_is_invalid(struct tb_path *path)
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_path_switch_on_path() - Does the path go through certain switch
|
||||
* tb_path_port_on_path() - Does the path go through certain port
|
||||
* @path: Path to check
|
||||
* @sw: Switch to check
|
||||
* @port: Switch to check
|
||||
*
|
||||
* Goes over all hops on path and checks if @sw is any of them.
|
||||
* Goes over all hops on path and checks if @port is any of them.
|
||||
* Direction does not matter.
|
||||
*/
|
||||
bool tb_path_switch_on_path(const struct tb_path *path,
|
||||
const struct tb_switch *sw)
|
||||
bool tb_path_port_on_path(const struct tb_path *path, const struct tb_port *port)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < path->path_length; i++) {
|
||||
if (path->hops[i].in_port->sw == sw ||
|
||||
path->hops[i].out_port->sw == sw)
|
||||
if (path->hops[i].in_port == port ||
|
||||
path->hops[i].out_port == port)
|
||||
return true;
|
||||
}
|
||||
|
||||
|
drivers/thunderbolt/quirks.c (new file, 42 lines)
@@ -0,0 +1,42 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Thunderbolt driver - quirks
 *
 * Copyright (c) 2020 Mario Limonciello <mario.limonciello@dell.com>
 */

#include "tb.h"

static void quirk_force_power_link(struct tb_switch *sw)
{
	sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
}

struct tb_quirk {
	u16 vendor;
	u16 device;
	void (*hook)(struct tb_switch *sw);
};

static const struct tb_quirk tb_quirks[] = {
	/* Dell WD19TB supports self-authentication on unplug */
	{ 0x00d4, 0xb070, quirk_force_power_link },
};

/**
 * tb_check_quirks() - Check for quirks to apply
 * @sw: Thunderbolt switch
 *
 * Apply any quirks for the Thunderbolt controller
 */
void tb_check_quirks(struct tb_switch *sw)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) {
		const struct tb_quirk *q = &tb_quirks[i];

		if (sw->device == q->device && sw->vendor == q->vendor)
			q->hook(sw);
	}
}
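Usage note, not part of the patch: extending the quirk table later would just
mean adding another { vendor, device, hook } entry. The IDs in the second
entry below are made up purely for illustration:

static const struct tb_quirk tb_quirks[] = {
	/* Dell WD19TB supports self-authentication on unplug */
	{ 0x00d4, 0xb070, quirk_force_power_link },
	/* Hypothetical second entry reusing the same hook; IDs are not real */
	{ 0x1234, 0xabcd, quirk_force_power_link },
};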
drivers/thunderbolt/retimer.c (new file, 485 lines)
@@ -0,0 +1,485 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Thunderbolt/USB4 retimer support.
|
||||
*
|
||||
* Copyright (C) 2020, Intel Corporation
|
||||
* Authors: Kranthi Kuntala <kranthi.kuntala@intel.com>
|
||||
* Mika Westerberg <mika.westerberg@linux.intel.com>
|
||||
*/
|
||||
|
||||
#include <linux/delay.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/sched/signal.h>
|
||||
|
||||
#include "sb_regs.h"
|
||||
#include "tb.h"
|
||||
|
||||
#define TB_MAX_RETIMER_INDEX 6
|
||||
|
||||
static int tb_retimer_nvm_read(void *priv, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct tb_nvm *nvm = priv;
|
||||
struct tb_retimer *rt = tb_to_retimer(nvm->dev);
|
||||
int ret;
|
||||
|
||||
pm_runtime_get_sync(&rt->dev);
|
||||
|
||||
if (!mutex_trylock(&rt->tb->lock)) {
|
||||
ret = restart_syscall();
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, offset, val, bytes);
|
||||
mutex_unlock(&rt->tb->lock);
|
||||
|
||||
out:
|
||||
pm_runtime_mark_last_busy(&rt->dev);
|
||||
pm_runtime_put_autosuspend(&rt->dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int tb_retimer_nvm_write(void *priv, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct tb_nvm *nvm = priv;
|
||||
struct tb_retimer *rt = tb_to_retimer(nvm->dev);
|
||||
int ret = 0;
|
||||
|
||||
if (!mutex_trylock(&rt->tb->lock))
|
||||
return restart_syscall();
|
||||
|
||||
ret = tb_nvm_write_buf(nvm, offset, val, bytes);
|
||||
mutex_unlock(&rt->tb->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int tb_retimer_nvm_add(struct tb_retimer *rt)
|
||||
{
|
||||
struct tb_nvm *nvm;
|
||||
u32 val, nvm_size;
|
||||
int ret;
|
||||
|
||||
nvm = tb_nvm_alloc(&rt->dev);
|
||||
if (IS_ERR(nvm))
|
||||
return PTR_ERR(nvm);
|
||||
|
||||
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_VERSION, &val,
|
||||
sizeof(val));
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
|
||||
nvm->major = val >> 16;
|
||||
nvm->minor = val >> 8;
|
||||
|
||||
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_FLASH_SIZE,
|
||||
&val, sizeof(val));
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
|
||||
nvm_size = (SZ_1M << (val & 7)) / 8;
|
||||
nvm_size = (nvm_size - SZ_16K) / 2;
|
||||
|
||||
ret = tb_nvm_add_active(nvm, nvm_size, tb_retimer_nvm_read);
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
|
||||
ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, tb_retimer_nvm_write);
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
|
||||
rt->nvm = nvm;
|
||||
return 0;
|
||||
|
||||
err_nvm:
|
||||
tb_nvm_free(nvm);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
|
||||
{
|
||||
unsigned int image_size, hdr_size;
|
||||
const u8 *buf = rt->nvm->buf;
|
||||
u16 ds_size, device;
|
||||
|
||||
image_size = rt->nvm->buf_data_size;
|
||||
if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* FARB pointer must point inside the image and must at least
|
||||
* contain parts of the digital section we will be reading here.
|
||||
*/
|
||||
hdr_size = (*(u32 *)buf) & 0xffffff;
|
||||
if (hdr_size + NVM_DEVID + 2 >= image_size)
|
||||
return -EINVAL;
|
||||
|
||||
/* Digital section start should be aligned to 4k page */
|
||||
if (!IS_ALIGNED(hdr_size, SZ_4K))
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* Read digital section size and check that it also fits inside
|
||||
* the image.
|
||||
*/
|
||||
ds_size = *(u16 *)(buf + hdr_size);
|
||||
if (ds_size >= image_size)
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* Make sure the device ID in the image matches the retimer
|
||||
* hardware.
|
||||
*/
|
||||
device = *(u16 *)(buf + hdr_size + NVM_DEVID);
|
||||
if (device != rt->device)
|
||||
return -EINVAL;
|
||||
|
||||
/* Skip headers in the image */
|
||||
buf += hdr_size;
|
||||
image_size -= hdr_size;
|
||||
|
||||
return usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
|
||||
image_size);
|
||||
}
|
||||
|
||||
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
|
||||
return sprintf(buf, "%#x\n", rt->device);
|
||||
}
|
||||
static DEVICE_ATTR_RO(device);
|
||||
|
||||
static ssize_t nvm_authenticate_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
int ret;
|
||||
|
||||
if (!mutex_trylock(&rt->tb->lock))
|
||||
return restart_syscall();
|
||||
|
||||
if (!rt->nvm)
|
||||
ret = -EAGAIN;
|
||||
else
|
||||
ret = sprintf(buf, "%#x\n", rt->auth_status);
|
||||
|
||||
mutex_unlock(&rt->tb->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static ssize_t nvm_authenticate_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t count)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
bool val;
|
||||
int ret;
|
||||
|
||||
pm_runtime_get_sync(&rt->dev);
|
||||
|
||||
if (!mutex_trylock(&rt->tb->lock)) {
|
||||
ret = restart_syscall();
|
||||
goto exit_rpm;
|
||||
}
|
||||
|
||||
if (!rt->nvm) {
|
||||
ret = -EAGAIN;
|
||||
goto exit_unlock;
|
||||
}
|
||||
|
||||
ret = kstrtobool(buf, &val);
|
||||
if (ret)
|
||||
goto exit_unlock;
|
||||
|
||||
/* Always clear status */
|
||||
rt->auth_status = 0;
|
||||
|
||||
if (val) {
|
||||
if (!rt->nvm->buf) {
|
||||
ret = -EINVAL;
|
||||
goto exit_unlock;
|
||||
}
|
||||
|
||||
ret = tb_retimer_nvm_validate_and_write(rt);
|
||||
if (ret)
|
||||
goto exit_unlock;
|
||||
|
||||
ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
|
||||
}
|
||||
|
||||
exit_unlock:
|
||||
mutex_unlock(&rt->tb->lock);
|
||||
exit_rpm:
|
||||
pm_runtime_mark_last_busy(&rt->dev);
|
||||
pm_runtime_put_autosuspend(&rt->dev);
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
return count;
|
||||
}
|
||||
static DEVICE_ATTR_RW(nvm_authenticate);
|
||||
|
||||
static ssize_t nvm_version_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
int ret;
|
||||
|
||||
if (!mutex_trylock(&rt->tb->lock))
|
||||
return restart_syscall();
|
||||
|
||||
if (!rt->nvm)
|
||||
ret = -EAGAIN;
|
||||
else
|
||||
ret = sprintf(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor);
|
||||
|
||||
mutex_unlock(&rt->tb->lock);
|
||||
return ret;
|
||||
}
|
||||
static DEVICE_ATTR_RO(nvm_version);
|
||||
|
||||
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
|
||||
return sprintf(buf, "%#x\n", rt->vendor);
|
||||
}
|
||||
static DEVICE_ATTR_RO(vendor);
|
||||
|
||||
static struct attribute *retimer_attrs[] = {
|
||||
&dev_attr_device.attr,
|
||||
&dev_attr_nvm_authenticate.attr,
|
||||
&dev_attr_nvm_version.attr,
|
||||
&dev_attr_vendor.attr,
|
||||
NULL
|
||||
};
|
||||
|
||||
static const struct attribute_group retimer_group = {
|
||||
.attrs = retimer_attrs,
|
||||
};
|
||||
|
||||
static const struct attribute_group *retimer_groups[] = {
|
||||
&retimer_group,
|
||||
NULL
|
||||
};
|
||||
|
||||
static void tb_retimer_release(struct device *dev)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
|
||||
kfree(rt);
|
||||
}
|
||||
|
||||
struct device_type tb_retimer_type = {
|
||||
.name = "thunderbolt_retimer",
|
||||
.groups = retimer_groups,
|
||||
.release = tb_retimer_release,
|
||||
};
|
||||
|
||||
static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
|
||||
{
|
||||
struct tb_retimer *rt;
|
||||
u32 vendor, device;
|
||||
int ret;
|
||||
|
||||
if (!port->cap_usb4)
|
||||
return -EINVAL;
|
||||
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor,
|
||||
sizeof(vendor));
|
||||
if (ret) {
|
||||
if (ret != -ENODEV)
|
||||
tb_port_warn(port, "failed read retimer VendorId: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_PRODUCT_ID, &device,
|
||||
sizeof(device));
|
||||
if (ret) {
|
||||
if (ret != -ENODEV)
|
||||
tb_port_warn(port, "failed read retimer ProductId: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (vendor != PCI_VENDOR_ID_INTEL && vendor != 0x8087) {
|
||||
tb_port_info(port, "retimer NVM format of vendor %#x is not supported\n",
|
||||
vendor);
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
/*
|
||||
* Check that it supports NVM operations. If not then don't add
|
||||
* the device at all.
|
||||
*/
|
||||
ret = usb4_port_retimer_nvm_sector_size(port, index);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
rt = kzalloc(sizeof(*rt), GFP_KERNEL);
|
||||
if (!rt)
|
||||
return -ENOMEM;
|
||||
|
||||
rt->index = index;
|
||||
rt->vendor = vendor;
|
||||
rt->device = device;
|
||||
rt->auth_status = auth_status;
|
||||
rt->port = port;
|
||||
rt->tb = port->sw->tb;
|
||||
|
||||
rt->dev.parent = &port->sw->dev;
|
||||
rt->dev.bus = &tb_bus_type;
|
||||
rt->dev.type = &tb_retimer_type;
|
||||
dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev),
|
||||
port->port, index);
|
||||
|
||||
ret = device_register(&rt->dev);
|
||||
if (ret) {
|
||||
dev_err(&rt->dev, "failed to register retimer: %d\n", ret);
|
||||
put_device(&rt->dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = tb_retimer_nvm_add(rt);
|
||||
if (ret) {
|
||||
dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret);
|
||||
device_del(&rt->dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
dev_info(&rt->dev, "new retimer found, vendor=%#x device=%#x\n",
|
||||
rt->vendor, rt->device);
|
||||
|
||||
pm_runtime_no_callbacks(&rt->dev);
|
||||
pm_runtime_set_active(&rt->dev);
|
||||
pm_runtime_enable(&rt->dev);
|
||||
pm_runtime_set_autosuspend_delay(&rt->dev, TB_AUTOSUSPEND_DELAY);
|
||||
pm_runtime_mark_last_busy(&rt->dev);
|
||||
pm_runtime_use_autosuspend(&rt->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void tb_retimer_remove(struct tb_retimer *rt)
|
||||
{
|
||||
dev_info(&rt->dev, "retimer disconnected\n");
|
||||
tb_nvm_free(rt->nvm);
|
||||
device_unregister(&rt->dev);
|
||||
}
|
||||
|
||||
struct tb_retimer_lookup {
|
||||
const struct tb_port *port;
|
||||
u8 index;
|
||||
};
|
||||
|
||||
static int retimer_match(struct device *dev, void *data)
|
||||
{
|
||||
const struct tb_retimer_lookup *lookup = data;
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
|
||||
return rt && rt->port == lookup->port && rt->index == lookup->index;
|
||||
}
|
||||
|
||||
static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
|
||||
{
|
||||
struct tb_retimer_lookup lookup = { .port = port, .index = index };
|
||||
struct device *dev;
|
||||
|
||||
dev = device_find_child(&port->sw->dev, &lookup, retimer_match);
|
||||
if (dev)
|
||||
return tb_to_retimer(dev);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_retimer_scan() - Scan for on-board retimers under port
|
||||
* @port: USB4 port to scan
|
||||
*
|
||||
* Tries to enumerate on-board retimers connected to @port. Found
|
||||
* retimers are registered as children of @port. Does not scan for cable
|
||||
* retimers for now.
|
||||
*/
|
||||
int tb_retimer_scan(struct tb_port *port)
|
||||
{
|
||||
u32 status[TB_MAX_RETIMER_INDEX] = {};
|
||||
int ret, i, last_idx = 0;
|
||||
|
||||
if (!port->cap_usb4)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Send broadcast RT to make sure retimer indices facing this
|
||||
* port are set.
|
||||
*/
|
||||
ret = usb4_port_enumerate_retimers(port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Before doing anything else, read the authentication status.
|
||||
* If the retimer has it set, store it for the new retimer
|
||||
* device instance.
|
||||
*/
|
||||
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
|
||||
usb4_port_retimer_nvm_authenticate_status(port, i, &status[i]);
|
||||
|
||||
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++) {
|
||||
/*
|
||||
* Last retimer is true only for the last on-board
|
||||
* retimer (the one connected directly to the Type-C
|
||||
* port).
|
||||
*/
|
||||
ret = usb4_port_retimer_is_last(port, i);
|
||||
if (ret > 0)
|
||||
last_idx = i;
|
||||
else if (ret < 0)
|
||||
break;
|
||||
}
|
||||
|
||||
if (!last_idx)
|
||||
return 0;
|
||||
|
||||
/* Add on-board retimers if they do not exist already */
|
||||
for (i = 1; i <= last_idx; i++) {
|
||||
struct tb_retimer *rt;
|
||||
|
||||
rt = tb_port_find_retimer(port, i);
|
||||
if (rt) {
|
||||
put_device(&rt->dev);
|
||||
} else {
|
||||
ret = tb_retimer_add(port, i, status[i]);
|
||||
if (ret && ret != -EOPNOTSUPP)
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int remove_retimer(struct device *dev, void *data)
|
||||
{
|
||||
struct tb_retimer *rt = tb_to_retimer(dev);
|
||||
struct tb_port *port = data;
|
||||
|
||||
if (rt && rt->port == port)
|
||||
tb_retimer_remove(rt);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_retimer_remove_all() - Remove all retimers under port
|
||||
* @port: USB4 port whose retimers to remove
|
||||
*
|
||||
* This removes all previously added retimers under @port.
|
||||
*/
|
||||
void tb_retimer_remove_all(struct tb_port *port)
|
||||
{
|
||||
if (port->cap_usb4)
|
||||
device_for_each_child_reverse(&port->sw->dev, port,
|
||||
remove_retimer);
|
||||
}
|
drivers/thunderbolt/sb_regs.h (new file, 33 lines)
@@ -0,0 +1,33 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * USB4 port sideband registers found on routers and retimers
 *
 * Copyright (C) 2020, Intel Corporation
 * Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
 *          Rajmohan Mani <rajmohan.mani@intel.com>
 */

#ifndef _SB_REGS
#define _SB_REGS

#define USB4_SB_VENDOR_ID 0x00
#define USB4_SB_PRODUCT_ID 0x01
#define USB4_SB_OPCODE 0x08

enum usb4_sb_opcode {
	USB4_SB_OPCODE_ERR = 0x20525245,			/* "ERR " */
	USB4_SB_OPCODE_ONS = 0x444d4321,			/* "!CMD" */
	USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45,		/* "ENUM" */
	USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c,		/* "LAST" */
	USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47,	/* "GNSS" */
	USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42,		/* "BOPS" */
	USB4_SB_OPCODE_NVM_BLOCK_WRITE = 0x574b4c42,		/* "BLKW" */
	USB4_SB_OPCODE_NVM_AUTH_WRITE = 0x48545541,		/* "AUTH" */
	USB4_SB_OPCODE_NVM_READ = 0x52524641,			/* "AFRR" */
};

#define USB4_SB_METADATA 0x09
#define USB4_SB_METADATA_NVM_AUTH_WRITE_MASK GENMASK(5, 0)
#define USB4_SB_DATA 0x12

#endif
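Side note, not part of the patch: the hex opcode values above are simply the
four ASCII characters given in the comments, stored little-endian. A tiny
stand-alone sketch that demonstrates this (it assumes a little-endian host):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Values copied from the usb4_sb_opcode enum above */
	const uint32_t opcodes[] = { 0x4d554e45, 0x5453414c, 0x53534e47 };
	char name[5] = { 0 };
	size_t i;

	for (i = 0; i < sizeof(opcodes) / sizeof(opcodes[0]); i++) {
		memcpy(name, &opcodes[i], 4);	/* little-endian byte order */
		printf("0x%08x -> \"%s\"\n", opcodes[i], name);
	}
	return 0;	/* prints ENUM, LAST, GNSS */
}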
@ -13,21 +13,12 @@
|
||||
#include <linux/sched/signal.h>
|
||||
#include <linux/sizes.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/vmalloc.h>
|
||||
|
||||
#include "tb.h"
|
||||
|
||||
/* Switch NVM support */
|
||||
|
||||
#define NVM_DEVID 0x05
|
||||
#define NVM_VERSION 0x08
|
||||
#define NVM_CSS 0x10
|
||||
#define NVM_FLASH_SIZE 0x45
|
||||
|
||||
#define NVM_MIN_SIZE SZ_32K
|
||||
#define NVM_MAX_SIZE SZ_512K
|
||||
|
||||
static DEFINE_IDA(nvm_ida);
|
||||
|
||||
struct nvm_auth_status {
|
||||
struct list_head list;
|
||||
@ -35,6 +26,11 @@ struct nvm_auth_status {
|
||||
u32 status;
|
||||
};
|
||||
|
||||
enum nvm_write_ops {
|
||||
WRITE_AND_AUTHENTICATE = 1,
|
||||
WRITE_ONLY = 2,
|
||||
};
|
||||
|
||||
/*
|
||||
* Hold NVM authentication failure status per switch This information
|
||||
* needs to stay around even when the switch gets power cycled so we
|
||||
@ -164,8 +160,12 @@ static int nvm_validate_and_write(struct tb_switch *sw)
|
||||
}
|
||||
|
||||
if (tb_switch_is_usb4(sw))
|
||||
return usb4_switch_nvm_write(sw, 0, buf, image_size);
|
||||
return dma_port_flash_write(sw->dma_port, 0, buf, image_size);
|
||||
ret = usb4_switch_nvm_write(sw, 0, buf, image_size);
|
||||
else
|
||||
ret = dma_port_flash_write(sw->dma_port, 0, buf, image_size);
|
||||
if (!ret)
|
||||
sw->nvm->flushed = true;
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
|
||||
@ -328,7 +328,8 @@ static int nvm_authenticate(struct tb_switch *sw)
|
||||
static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct tb_switch *sw = priv;
|
||||
struct tb_nvm *nvm = priv;
|
||||
struct tb_switch *sw = tb_to_switch(nvm->dev);
|
||||
int ret;
|
||||
|
||||
pm_runtime_get_sync(&sw->dev);
|
||||
@ -351,8 +352,9 @@ out:
|
||||
static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct tb_switch *sw = priv;
|
||||
int ret = 0;
|
||||
struct tb_nvm *nvm = priv;
|
||||
struct tb_switch *sw = tb_to_switch(nvm->dev);
|
||||
int ret;
|
||||
|
||||
if (!mutex_trylock(&sw->tb->lock))
|
||||
return restart_syscall();
|
||||
@ -363,55 +365,15 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
|
||||
* locally here and handle the special cases when the user asks
|
||||
* us to authenticate the image.
|
||||
*/
|
||||
if (!sw->nvm->buf) {
|
||||
sw->nvm->buf = vmalloc(NVM_MAX_SIZE);
|
||||
if (!sw->nvm->buf) {
|
||||
ret = -ENOMEM;
|
||||
goto unlock;
|
||||
}
|
||||
}
|
||||
|
||||
sw->nvm->buf_data_size = offset + bytes;
|
||||
memcpy(sw->nvm->buf + offset, val, bytes);
|
||||
|
||||
unlock:
|
||||
ret = tb_nvm_write_buf(nvm, offset, val, bytes);
|
||||
mutex_unlock(&sw->tb->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id,
|
||||
size_t size, bool active)
|
||||
{
|
||||
struct nvmem_config config;
|
||||
|
||||
memset(&config, 0, sizeof(config));
|
||||
|
||||
if (active) {
|
||||
config.name = "nvm_active";
|
||||
config.reg_read = tb_switch_nvm_read;
|
||||
config.read_only = true;
|
||||
} else {
|
||||
config.name = "nvm_non_active";
|
||||
config.reg_write = tb_switch_nvm_write;
|
||||
config.root_only = true;
|
||||
}
|
||||
|
||||
config.id = id;
|
||||
config.stride = 4;
|
||||
config.word_size = 4;
|
||||
config.size = size;
|
||||
config.dev = &sw->dev;
|
||||
config.owner = THIS_MODULE;
|
||||
config.priv = sw;
|
||||
|
||||
return nvmem_register(&config);
|
||||
}
|
||||
|
||||
static int tb_switch_nvm_add(struct tb_switch *sw)
|
||||
{
|
||||
struct nvmem_device *nvm_dev;
|
||||
struct tb_switch_nvm *nvm;
|
||||
struct tb_nvm *nvm;
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
@ -423,18 +385,17 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
|
||||
* currently restrict NVM upgrade for Intel hardware. We may
|
||||
* relax this in the future when we learn other NVM formats.
|
||||
*/
|
||||
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL) {
|
||||
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL &&
|
||||
sw->config.vendor_id != 0x8087) {
|
||||
dev_info(&sw->dev,
|
||||
"NVM format of vendor %#x is not known, disabling NVM upgrade\n",
|
||||
sw->config.vendor_id);
|
||||
return 0;
|
||||
}
|
||||
|
||||
nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
|
||||
if (!nvm)
|
||||
return -ENOMEM;
|
||||
|
||||
nvm->id = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
|
||||
nvm = tb_nvm_alloc(&sw->dev);
|
||||
if (IS_ERR(nvm))
|
||||
return PTR_ERR(nvm);
|
||||
|
||||
/*
|
||||
* If the switch is in safe-mode the only accessible portion of
|
||||
@ -446,7 +407,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
|
||||
|
||||
ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
|
||||
if (ret)
|
||||
goto err_ida;
|
||||
goto err_nvm;
|
||||
|
||||
hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
|
||||
nvm_size = (SZ_1M << (val & 7)) / 8;
|
||||
@ -454,44 +415,34 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
|
||||
|
||||
ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val));
|
||||
if (ret)
|
||||
goto err_ida;
|
||||
goto err_nvm;
|
||||
|
||||
nvm->major = val >> 16;
|
||||
nvm->minor = val >> 8;
|
||||
|
||||
nvm_dev = register_nvmem(sw, nvm->id, nvm_size, true);
|
||||
if (IS_ERR(nvm_dev)) {
|
||||
ret = PTR_ERR(nvm_dev);
|
||||
goto err_ida;
|
||||
}
|
||||
nvm->active = nvm_dev;
|
||||
ret = tb_nvm_add_active(nvm, nvm_size, tb_switch_nvm_read);
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
}
|
||||
|
||||
if (!sw->no_nvm_upgrade) {
|
||||
nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false);
|
||||
if (IS_ERR(nvm_dev)) {
|
||||
ret = PTR_ERR(nvm_dev);
|
||||
goto err_nvm_active;
|
||||
}
|
||||
nvm->non_active = nvm_dev;
|
||||
ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE,
|
||||
tb_switch_nvm_write);
|
||||
if (ret)
|
||||
goto err_nvm;
|
||||
}
|
||||
|
||||
sw->nvm = nvm;
|
||||
return 0;
|
||||
|
||||
err_nvm_active:
|
||||
if (nvm->active)
|
||||
nvmem_unregister(nvm->active);
|
||||
err_ida:
|
||||
ida_simple_remove(&nvm_ida, nvm->id);
|
||||
kfree(nvm);
|
||||
|
||||
err_nvm:
|
||||
tb_nvm_free(nvm);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void tb_switch_nvm_remove(struct tb_switch *sw)
|
||||
{
|
||||
struct tb_switch_nvm *nvm;
|
||||
struct tb_nvm *nvm;
|
||||
|
||||
nvm = sw->nvm;
|
||||
sw->nvm = NULL;
|
||||
@ -503,13 +454,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)
|
||||
if (!nvm->authenticating)
|
||||
nvm_clear_auth_status(sw);
|
||||
|
||||
if (nvm->non_active)
|
||||
nvmem_unregister(nvm->non_active);
|
||||
if (nvm->active)
|
||||
nvmem_unregister(nvm->active);
|
||||
ida_simple_remove(&nvm_ida, nvm->id);
|
||||
vfree(nvm->buf);
|
||||
kfree(nvm);
|
||||
tb_nvm_free(nvm);
|
||||
}
|
||||
|
||||
/* port utility functions */
|
||||
@ -789,8 +734,11 @@ static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
|
||||
ida = &port->out_hopids;
|
||||
}
|
||||
|
||||
/* HopIDs 0-7 are reserved */
|
||||
if (min_hopid < TB_PATH_MIN_HOPID)
|
||||
/*
|
||||
* NHI can use HopIDs 1-max for other adapters HopIDs 0-7 are
|
||||
* reserved.
|
||||
*/
|
||||
if (port->config.type != TB_TYPE_NHI && min_hopid < TB_PATH_MIN_HOPID)
|
||||
min_hopid = TB_PATH_MIN_HOPID;
|
||||
|
||||
if (max_hopid < 0 || max_hopid > port_max_hopid)
|
||||
@ -847,6 +795,13 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid)
|
||||
ida_simple_remove(&port->out_hopids, hopid);
|
||||
}
|
||||
|
||||
static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
|
||||
const struct tb_switch *sw)
|
||||
{
|
||||
u64 mask = (1ULL << parent->config.depth * 8) - 1;
|
||||
return (tb_route(parent) & mask) == (tb_route(sw) & mask);
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_next_port_on_path() - Return next port for given port on a path
|
||||
* @start: Start port of the walk
|
||||
@ -876,12 +831,12 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
|
||||
return end;
|
||||
}
|
||||
|
||||
if (start->sw->config.depth < end->sw->config.depth) {
|
||||
if (tb_switch_is_reachable(prev->sw, end->sw)) {
|
||||
next = tb_port_at(tb_route(end->sw), prev->sw);
|
||||
/* Walk down the topology if next == prev */
|
||||
if (prev->remote &&
|
||||
prev->remote->sw->config.depth > prev->sw->config.depth)
|
||||
(next == prev || next->dual_link_port == prev))
|
||||
next = prev->remote;
|
||||
else
|
||||
next = tb_port_at(tb_route(end->sw), prev->sw);
|
||||
} else {
|
||||
if (tb_is_upstream_port(prev)) {
|
||||
next = prev->remote;
|
||||
@ -898,10 +853,16 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
|
||||
}
|
||||
}
|
||||
|
||||
return next;
|
||||
return next != prev ? next : NULL;
|
||||
}
|
||||
|
||||
static int tb_port_get_link_speed(struct tb_port *port)
|
||||
/**
|
||||
* tb_port_get_link_speed() - Get current link speed
|
||||
* @port: Port to check (USB4 or CIO)
|
||||
*
|
||||
* Returns link speed in Gb/s or negative errno in case of failure.
|
||||
*/
|
||||
int tb_port_get_link_speed(struct tb_port *port)
|
||||
{
|
||||
u32 val, speed;
|
||||
int ret;
|
||||
@ -1532,11 +1493,11 @@ static ssize_t nvm_authenticate_show(struct device *dev,
|
||||
return sprintf(buf, "%#x\n", status);
|
||||
}
|
||||
|
||||
static ssize_t nvm_authenticate_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t count)
|
||||
static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
|
||||
bool disconnect)
|
||||
{
|
||||
struct tb_switch *sw = tb_to_switch(dev);
|
||||
bool val;
|
||||
int val;
|
||||
int ret;
|
||||
|
||||
pm_runtime_get_sync(&sw->dev);
|
||||
@ -1552,25 +1513,32 @@ static ssize_t nvm_authenticate_store(struct device *dev,
|
||||
goto exit_unlock;
|
||||
}
|
||||
|
||||
ret = kstrtobool(buf, &val);
|
||||
ret = kstrtoint(buf, 10, &val);
|
||||
if (ret)
|
||||
goto exit_unlock;
|
||||
|
||||
/* Always clear the authentication status */
|
||||
nvm_clear_auth_status(sw);
|
||||
|
||||
if (val) {
|
||||
if (!sw->nvm->buf) {
|
||||
ret = -EINVAL;
|
||||
goto exit_unlock;
|
||||
if (val > 0) {
|
||||
if (!sw->nvm->flushed) {
|
||||
if (!sw->nvm->buf) {
|
||||
ret = -EINVAL;
|
||||
goto exit_unlock;
|
||||
}
|
||||
|
||||
ret = nvm_validate_and_write(sw);
|
||||
if (ret || val == WRITE_ONLY)
|
||||
goto exit_unlock;
|
||||
}
|
||||
if (val == WRITE_AND_AUTHENTICATE) {
|
||||
if (disconnect) {
|
||||
ret = tb_lc_force_power(sw);
|
||||
} else {
|
||||
sw->nvm->authenticating = true;
|
||||
ret = nvm_authenticate(sw);
|
||||
}
|
||||
}
|
||||
|
||||
ret = nvm_validate_and_write(sw);
|
||||
if (ret)
|
||||
goto exit_unlock;
|
||||
|
||||
sw->nvm->authenticating = true;
|
||||
ret = nvm_authenticate(sw);
|
||||
}
|
||||
|
||||
exit_unlock:
|
||||
@ -1579,12 +1547,35 @@ exit_rpm:
|
||||
pm_runtime_mark_last_busy(&sw->dev);
|
||||
pm_runtime_put_autosuspend(&sw->dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static ssize_t nvm_authenticate_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t count)
|
||||
{
|
||||
int ret = nvm_authenticate_sysfs(dev, buf, false);
|
||||
if (ret)
|
||||
return ret;
|
||||
return count;
|
||||
}
|
||||
static DEVICE_ATTR_RW(nvm_authenticate);
|
||||
|
||||
static ssize_t nvm_authenticate_on_disconnect_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
return nvm_authenticate_show(dev, attr, buf);
|
||||
}
|
||||
|
||||
static ssize_t nvm_authenticate_on_disconnect_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t count)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = nvm_authenticate_sysfs(dev, buf, true);
|
||||
return ret ? ret : count;
|
||||
}
|
||||
static DEVICE_ATTR_RW(nvm_authenticate_on_disconnect);
|
||||
|
||||
static ssize_t nvm_version_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
@ -1642,6 +1633,7 @@ static struct attribute *switch_attrs[] = {
|
||||
&dev_attr_generation.attr,
|
||||
&dev_attr_key.attr,
|
||||
&dev_attr_nvm_authenticate.attr,
|
||||
&dev_attr_nvm_authenticate_on_disconnect.attr,
|
||||
&dev_attr_nvm_version.attr,
|
||||
&dev_attr_rx_speed.attr,
|
||||
&dev_attr_rx_lanes.attr,
|
||||
@ -1696,6 +1688,10 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
|
||||
if (tb_route(sw))
|
||||
return attr->mode;
|
||||
return 0;
|
||||
} else if (attr == &dev_attr_nvm_authenticate_on_disconnect.attr) {
|
||||
if (sw->quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER)
|
||||
return attr->mode;
|
||||
return 0;
|
||||
}
|
||||
|
||||
return sw->safe_mode ? 0 : attr->mode;
|
||||
@ -2440,6 +2436,9 @@ void tb_switch_remove(struct tb_switch *sw)
|
||||
tb_xdomain_remove(port->xdomain);
|
||||
port->xdomain = NULL;
|
||||
}
|
||||
|
||||
/* Remove any downstream retimers */
|
||||
tb_retimer_remove_all(port);
|
||||
}
|
||||
|
||||
if (!sw->is_unplugged)
|
||||
@ -2755,8 +2754,3 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
void tb_switch_exit(void)
|
||||
{
|
||||
ida_destroy(&nvm_ida);
|
||||
}
|
||||
|
@ -206,27 +206,197 @@ static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
|
||||
}
|
||||
|
||||
static struct tb_port *tb_find_usb3_down(struct tb_switch *sw,
|
||||
const struct tb_port *port)
|
||||
const struct tb_port *port)
|
||||
{
|
||||
struct tb_port *down;
|
||||
|
||||
down = usb4_switch_map_usb3_down(sw, port);
|
||||
if (down) {
|
||||
if (WARN_ON(!tb_port_is_usb3_down(down)))
|
||||
goto out;
|
||||
if (WARN_ON(tb_usb3_port_is_enabled(down)))
|
||||
goto out;
|
||||
|
||||
if (down && !tb_usb3_port_is_enabled(down))
|
||||
return down;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
|
||||
struct tb_port *src_port,
|
||||
struct tb_port *dst_port)
|
||||
{
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
struct tb_tunnel *tunnel;
|
||||
|
||||
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
|
||||
if (tunnel->type == type &&
|
||||
((src_port && src_port == tunnel->src_port) ||
|
||||
(dst_port && dst_port == tunnel->dst_port))) {
|
||||
return tunnel;
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
return tb_find_unused_port(sw, TB_TYPE_USB3_DOWN);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static struct tb_tunnel *tb_find_first_usb3_tunnel(struct tb *tb,
|
||||
struct tb_port *src_port,
|
||||
struct tb_port *dst_port)
|
||||
{
|
||||
struct tb_port *port, *usb3_down;
|
||||
struct tb_switch *sw;
|
||||
|
||||
/* Pick the router that is deepest in the topology */
|
||||
if (dst_port->sw->config.depth > src_port->sw->config.depth)
|
||||
sw = dst_port->sw;
|
||||
else
|
||||
sw = src_port->sw;
|
||||
|
||||
/* Can't be the host router */
|
||||
if (sw == tb->root_switch)
|
||||
return NULL;
|
||||
|
||||
/* Find the downstream USB4 port that leads to this router */
|
||||
port = tb_port_at(tb_route(sw), tb->root_switch);
|
||||
/* Find the corresponding host router USB3 downstream port */
|
||||
usb3_down = usb4_switch_map_usb3_down(tb->root_switch, port);
|
||||
if (!usb3_down)
|
||||
return NULL;
|
||||
|
||||
return tb_find_tunnel(tb, TB_TUNNEL_USB3, usb3_down, NULL);
|
||||
}
|
||||
|
||||
static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
|
||||
struct tb_port *dst_port, int *available_up, int *available_down)
|
||||
{
|
||||
int usb3_consumed_up, usb3_consumed_down, ret;
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
struct tb_tunnel *tunnel;
|
||||
struct tb_port *port;
|
||||
|
||||
tb_port_dbg(dst_port, "calculating available bandwidth\n");
|
||||
|
||||
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
|
||||
if (tunnel) {
|
||||
ret = tb_tunnel_consumed_bandwidth(tunnel, &usb3_consumed_up,
|
||||
&usb3_consumed_down);
|
||||
if (ret)
|
||||
return ret;
|
||||
} else {
|
||||
usb3_consumed_up = 0;
|
||||
usb3_consumed_down = 0;
|
||||
}
|
||||
|
||||
*available_up = *available_down = 40000;
|
||||
|
||||
/* Find the minimum available bandwidth over all links */
|
||||
tb_for_each_port_on_path(src_port, dst_port, port) {
|
||||
int link_speed, link_width, up_bw, down_bw;
|
||||
|
||||
if (!tb_port_is_null(port))
|
||||
continue;
|
||||
|
||||
if (tb_is_upstream_port(port)) {
|
||||
link_speed = port->sw->link_speed;
|
||||
} else {
|
||||
link_speed = tb_port_get_link_speed(port);
|
||||
if (link_speed < 0)
|
||||
return link_speed;
|
||||
}
|
||||
|
||||
		link_width = port->bonded ? 2 : 1;

		up_bw = link_speed * link_width * 1000; /* Mb/s */
		/* Leave 10% guard band */
		up_bw -= up_bw / 10;
		down_bw = up_bw;

		tb_port_dbg(port, "link total bandwidth %d Mb/s\n", up_bw);
|
||||
|
||||
/*
|
||||
* Find all DP tunnels that cross the port and reduce
|
||||
* their consumed bandwidth from the available.
|
||||
*/
|
||||
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
|
||||
int dp_consumed_up, dp_consumed_down;
|
||||
|
||||
if (!tb_tunnel_is_dp(tunnel))
|
||||
continue;
|
||||
|
||||
if (!tb_tunnel_port_on_path(tunnel, port))
|
||||
continue;
|
||||
|
||||
ret = tb_tunnel_consumed_bandwidth(tunnel,
|
||||
&dp_consumed_up,
|
||||
&dp_consumed_down);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
up_bw -= dp_consumed_up;
|
||||
down_bw -= dp_consumed_down;
|
||||
}
|
||||
|
||||
/*
|
||||
* If USB3 is tunneled from the host router down to the
|
||||
* branch leading to port we need to take USB3 consumed
|
||||
* bandwidth into account regardless whether it actually
|
||||
* crosses the port.
|
||||
*/
|
||||
up_bw -= usb3_consumed_up;
|
||||
down_bw -= usb3_consumed_down;
|
||||
|
||||
if (up_bw < *available_up)
|
||||
*available_up = up_bw;
|
||||
if (down_bw < *available_down)
|
||||
*available_down = down_bw;
|
||||
}
|
||||
|
||||
	if (*available_up < 0)
		*available_up = 0;
	if (*available_down < 0)
		*available_down = 0;

	return 0;
}
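
As a worked example of the per-link arithmetic above (the numbers are
illustrative only, not taken from the patch): a bonded 20 Gb/s link gives
link_speed = 20 and link_width = 2, so

	up_bw = 20 * 2 * 1000;		/* 40000 Mb/s */
	up_bw -= up_bw / 10;		/* 10% guard band -> 36000 Mb/s */

Consumed DP and USB3 bandwidth is then subtracted from this figure, and the
smallest remaining value over all links on the path is what ends up in
*available_up / *available_down.
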
static int tb_release_unused_usb3_bandwidth(struct tb *tb,
|
||||
struct tb_port *src_port,
|
||||
struct tb_port *dst_port)
|
||||
{
|
||||
struct tb_tunnel *tunnel;
|
||||
|
||||
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
|
||||
return tunnel ? tb_tunnel_release_unused_bandwidth(tunnel) : 0;
|
||||
}
|
||||
|
||||
static void tb_reclaim_usb3_bandwidth(struct tb *tb, struct tb_port *src_port,
|
||||
struct tb_port *dst_port)
|
||||
{
|
||||
int ret, available_up, available_down;
|
||||
struct tb_tunnel *tunnel;
|
||||
|
||||
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
|
||||
if (!tunnel)
|
||||
return;
|
||||
|
||||
tb_dbg(tb, "reclaiming unused bandwidth for USB3\n");
|
||||
|
||||
/*
|
||||
* Calculate available bandwidth for the first hop USB3 tunnel.
|
||||
* That determines the whole USB3 bandwidth for this branch.
|
||||
*/
|
||||
ret = tb_available_bandwidth(tb, tunnel->src_port, tunnel->dst_port,
|
||||
&available_up, &available_down);
|
||||
if (ret) {
|
||||
tb_warn(tb, "failed to calculate available bandwidth\n");
|
||||
return;
|
||||
}
|
||||
|
||||
tb_dbg(tb, "available bandwidth for USB3 %d/%d Mb/s\n",
|
||||
available_up, available_down);
|
||||
|
||||
tb_tunnel_reclaim_available_bandwidth(tunnel, &available_up, &available_down);
|
||||
}
|
||||
|
||||
static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
|
||||
{
|
||||
struct tb_switch *parent = tb_switch_parent(sw);
|
||||
int ret, available_up, available_down;
|
||||
struct tb_port *up, *down, *port;
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
struct tb_tunnel *tunnel;
|
||||
@ -235,6 +405,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
|
||||
if (!up)
|
||||
return 0;
|
||||
|
||||
if (!sw->link_usb4)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Look up available down port. Since we are chaining it should
|
||||
* be found right above this switch.
|
||||
@ -254,21 +427,48 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
|
||||
parent_up = tb_switch_find_port(parent, TB_TYPE_USB3_UP);
|
||||
if (!parent_up || !tb_port_is_enabled(parent_up))
|
||||
return 0;
|
||||
|
||||
/* Make all unused bandwidth available for the new tunnel */
|
||||
ret = tb_release_unused_usb3_bandwidth(tb, down, up);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
tunnel = tb_tunnel_alloc_usb3(tb, up, down);
|
||||
if (!tunnel)
|
||||
return -ENOMEM;
|
||||
ret = tb_available_bandwidth(tb, down, up, &available_up,
|
||||
&available_down);
|
||||
if (ret)
|
||||
goto err_reclaim;
|
||||
|
||||
tb_port_dbg(up, "available bandwidth for new USB3 tunnel %d/%d Mb/s\n",
|
||||
available_up, available_down);
|
||||
|
||||
tunnel = tb_tunnel_alloc_usb3(tb, up, down, available_up,
|
||||
available_down);
|
||||
if (!tunnel) {
|
||||
ret = -ENOMEM;
|
||||
goto err_reclaim;
|
||||
}
|
||||
|
||||
if (tb_tunnel_activate(tunnel)) {
|
||||
tb_port_info(up,
|
||||
"USB3 tunnel activation failed, aborting\n");
|
||||
tb_tunnel_free(tunnel);
|
||||
return -EIO;
|
||||
ret = -EIO;
|
||||
goto err_free;
|
||||
}
|
||||
|
||||
list_add_tail(&tunnel->list, &tcm->tunnel_list);
|
||||
if (tb_route(parent))
|
||||
tb_reclaim_usb3_bandwidth(tb, down, up);
|
||||
|
||||
return 0;
|
||||
|
||||
err_free:
|
||||
tb_tunnel_free(tunnel);
|
||||
err_reclaim:
|
||||
if (tb_route(parent))
|
||||
tb_reclaim_usb3_bandwidth(tb, down, up);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int tb_create_usb3_tunnels(struct tb_switch *sw)
|
||||
@ -339,6 +539,9 @@ static void tb_scan_port(struct tb_port *port)
|
||||
tb_port_dbg(port, "port already has a remote\n");
|
||||
return;
|
||||
}
|
||||
|
||||
tb_retimer_scan(port);
|
||||
|
||||
sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
|
||||
tb_downstream_route(port));
|
||||
if (IS_ERR(sw)) {
|
||||
@ -395,6 +598,9 @@ static void tb_scan_port(struct tb_port *port)
|
||||
if (tb_enable_tmu(sw))
|
||||
tb_sw_warn(sw, "failed to enable TMU\n");
|
||||
|
||||
/* Scan upstream retimers */
|
||||
tb_retimer_scan(upstream_port);
|
||||
|
||||
/*
|
||||
* Create USB 3.x tunnels only when the switch is plugged to the
|
||||
* domain. This is because we scan the domain also during discovery
|
||||
@ -404,43 +610,44 @@ static void tb_scan_port(struct tb_port *port)
|
||||
if (tcm->hotplug_active && tb_tunnel_usb3(sw->tb, sw))
|
||||
tb_sw_warn(sw, "USB3 tunnel creation failed\n");
|
||||
|
||||
tb_add_dp_resources(sw);
|
||||
tb_scan_switch(sw);
|
||||
}
|
||||
|
||||
static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
|
||||
struct tb_port *src_port,
|
||||
struct tb_port *dst_port)
|
||||
{
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
struct tb_tunnel *tunnel;
|
||||
|
||||
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
|
||||
if (tunnel->type == type &&
|
||||
((src_port && src_port == tunnel->src_port) ||
|
||||
(dst_port && dst_port == tunnel->dst_port))) {
|
||||
return tunnel;
|
||||
}
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel)
|
||||
{
|
||||
struct tb_port *src_port, *dst_port;
|
||||
struct tb *tb;
|
||||
|
||||
if (!tunnel)
|
||||
return;
|
||||
|
||||
tb_tunnel_deactivate(tunnel);
|
||||
list_del(&tunnel->list);
|
||||
|
||||
/*
|
||||
* In case of DP tunnel make sure the DP IN resource is deallocated
|
||||
* properly.
|
||||
*/
|
||||
if (tb_tunnel_is_dp(tunnel)) {
|
||||
struct tb_port *in = tunnel->src_port;
|
||||
tb = tunnel->tb;
|
||||
src_port = tunnel->src_port;
|
||||
dst_port = tunnel->dst_port;
|
||||
|
||||
tb_switch_dealloc_dp_resource(in->sw, in);
|
||||
switch (tunnel->type) {
|
||||
case TB_TUNNEL_DP:
|
||||
/*
|
||||
* In case of DP tunnel make sure the DP IN resource is
|
||||
* deallocated properly.
|
||||
*/
|
||||
tb_switch_dealloc_dp_resource(src_port->sw, src_port);
|
||||
fallthrough;
|
||||
|
||||
case TB_TUNNEL_USB3:
|
||||
tb_reclaim_usb3_bandwidth(tb, src_port, dst_port);
|
||||
break;
|
||||
|
||||
default:
|
||||
/*
|
||||
* PCIe and DMA tunnels do not consume guaranteed
|
||||
* bandwidth.
|
||||
*/
|
||||
break;
|
||||
}
|
||||
|
||||
tb_tunnel_free(tunnel);
|
||||
@ -473,6 +680,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
|
||||
continue;
|
||||
|
||||
if (port->remote->sw->is_unplugged) {
|
||||
tb_retimer_remove_all(port);
|
||||
tb_remove_dp_resources(port->remote->sw);
|
||||
tb_switch_lane_bonding_disable(port->remote->sw);
|
||||
tb_switch_remove(port->remote->sw);
|
||||
@ -524,7 +732,7 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
|
||||
if (down) {
|
||||
if (WARN_ON(!tb_port_is_pcie_down(down)))
|
||||
goto out;
|
||||
if (WARN_ON(tb_pci_port_is_enabled(down)))
|
||||
if (tb_pci_port_is_enabled(down))
|
||||
goto out;
|
||||
|
||||
return down;
|
||||
@ -534,51 +742,49 @@ out:
|
||||
return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN);
|
||||
}
|
||||
|
||||
static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
|
||||
struct tb_port *out)
|
||||
static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
|
||||
{
|
||||
struct tb_switch *sw = out->sw;
|
||||
struct tb_tunnel *tunnel;
|
||||
int bw, available_bw = 40000;
|
||||
struct tb_port *host_port, *port;
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
|
||||
while (sw && sw != in->sw) {
|
||||
bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
|
||||
/* Leave 10% guard band */
|
||||
bw -= bw / 10;
|
||||
host_port = tb_route(in->sw) ?
|
||||
tb_port_at(tb_route(in->sw), tb->root_switch) : NULL;
|
||||
|
||||
/*
|
||||
* Check for any active DP tunnels that go through this
|
||||
* switch and reduce their consumed bandwidth from
|
||||
* available.
|
||||
*/
|
||||
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
|
||||
int consumed_bw;
|
||||
list_for_each_entry(port, &tcm->dp_resources, list) {
|
||||
if (!tb_port_is_dpout(port))
|
||||
continue;
|
||||
|
||||
if (!tb_tunnel_switch_on_path(tunnel, sw))
|
||||
continue;
|
||||
|
||||
consumed_bw = tb_tunnel_consumed_bandwidth(tunnel);
|
||||
if (consumed_bw < 0)
|
||||
return consumed_bw;
|
||||
|
||||
bw -= consumed_bw;
|
||||
if (tb_port_is_enabled(port)) {
|
||||
tb_port_dbg(port, "in use\n");
|
||||
continue;
|
||||
}
|
||||
|
||||
if (bw < available_bw)
|
||||
available_bw = bw;
|
||||
tb_port_dbg(port, "DP OUT available\n");
|
||||
|
||||
sw = tb_switch_parent(sw);
|
||||
/*
|
||||
* Keep the DP tunnel under the topology starting from
|
||||
* the same host router downstream port.
|
||||
*/
|
||||
if (host_port && tb_route(port->sw)) {
|
||||
struct tb_port *p;
|
||||
|
||||
p = tb_port_at(tb_route(port->sw), tb->root_switch);
|
||||
if (p != host_port)
|
||||
continue;
|
||||
}
|
||||
|
||||
return port;
|
||||
}
|
||||
|
||||
return available_bw;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void tb_tunnel_dp(struct tb *tb)
|
||||
{
|
||||
int available_up, available_down, ret;
|
||||
struct tb_cm *tcm = tb_priv(tb);
|
||||
struct tb_port *port, *in, *out;
|
||||
struct tb_tunnel *tunnel;
|
||||
int available_bw;
|
||||
|
||||
/*
|
||||
* Find pair of inactive DP IN and DP OUT adapters and then
|
||||
@ -589,17 +795,21 @@ static void tb_tunnel_dp(struct tb *tb)
|
||||
in = NULL;
|
||||
out = NULL;
|
||||
list_for_each_entry(port, &tcm->dp_resources, list) {
|
||||
if (!tb_port_is_dpin(port))
|
||||
continue;
|
||||
|
||||
if (tb_port_is_enabled(port)) {
|
||||
tb_port_dbg(port, "in use\n");
|
||||
continue;
|
||||
}
|
||||
|
||||
tb_port_dbg(port, "available\n");
|
||||
tb_port_dbg(port, "DP IN available\n");
|
||||
|
||||
if (!in && tb_port_is_dpin(port))
|
||||
out = tb_find_dp_out(tb, port);
|
||||
if (out) {
|
||||
in = port;
|
||||
else if (!out && tb_port_is_dpout(port))
|
||||
out = port;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!in) {
|
||||
@ -616,32 +826,41 @@ static void tb_tunnel_dp(struct tb *tb)
|
||||
return;
|
||||
}
|
||||
|
||||
/* Calculate available bandwidth between in and out */
|
||||
available_bw = tb_available_bw(tcm, in, out);
|
||||
if (available_bw < 0) {
|
||||
tb_warn(tb, "failed to determine available bandwidth\n");
|
||||
return;
|
||||
/* Make all unused USB3 bandwidth available for the new DP tunnel */
|
||||
ret = tb_release_unused_usb3_bandwidth(tb, in, out);
|
||||
if (ret) {
|
||||
tb_warn(tb, "failed to release unused bandwidth\n");
|
||||
goto err_dealloc_dp;
|
||||
}
|
||||
|
||||
tb_dbg(tb, "available bandwidth for new DP tunnel %u Mb/s\n",
|
||||
available_bw);
|
||||
ret = tb_available_bandwidth(tb, in, out, &available_up,
|
||||
&available_down);
|
||||
if (ret)
|
||||
goto err_reclaim;
|
||||
|
||||
tunnel = tb_tunnel_alloc_dp(tb, in, out, available_bw);
|
||||
tb_dbg(tb, "available bandwidth for new DP tunnel %u/%u Mb/s\n",
|
||||
available_up, available_down);
|
||||
|
||||
tunnel = tb_tunnel_alloc_dp(tb, in, out, available_up, available_down);
|
||||
if (!tunnel) {
|
||||
tb_port_dbg(out, "could not allocate DP tunnel\n");
|
||||
goto dealloc_dp;
|
||||
goto err_reclaim;
|
||||
}
|
||||
|
||||
if (tb_tunnel_activate(tunnel)) {
|
||||
tb_port_info(out, "DP tunnel activation failed, aborting\n");
|
||||
tb_tunnel_free(tunnel);
|
||||
goto dealloc_dp;
|
||||
goto err_free;
|
||||
}
|
||||
|
||||
list_add_tail(&tunnel->list, &tcm->tunnel_list);
|
||||
tb_reclaim_usb3_bandwidth(tb, in, out);
|
||||
return;
|
||||
|
||||
dealloc_dp:
|
||||
err_free:
|
||||
tb_tunnel_free(tunnel);
|
||||
err_reclaim:
|
||||
tb_reclaim_usb3_bandwidth(tb, in, out);
|
||||
err_dealloc_dp:
|
||||
tb_switch_dealloc_dp_resource(in->sw, in);
|
||||
}
|
||||
|
||||
@ -827,6 +1046,8 @@ static void tb_handle_hotplug(struct work_struct *work)
|
||||
goto put_sw;
|
||||
}
|
||||
if (ev->unplug) {
|
||||
tb_retimer_remove_all(port);
|
||||
|
||||
if (tb_port_has_remote(port)) {
|
||||
tb_port_dbg(port, "switch unplugged\n");
|
||||
tb_sw_set_unplugged(port->remote->sw);
|
||||
@ -1071,6 +1292,7 @@ static int tb_free_unplugged_xdomains(struct tb_switch *sw)
|
||||
if (tb_is_upstream_port(port))
|
||||
continue;
|
||||
if (port->xdomain && port->xdomain->is_unplugged) {
|
||||
tb_retimer_remove_all(port);
|
||||
tb_xdomain_remove(port->xdomain);
|
||||
port->xdomain = NULL;
|
||||
ret++;
|
||||
|
@ -18,8 +18,17 @@
|
||||
#include "ctl.h"
|
||||
#include "dma_port.h"
|
||||
|
||||
#define NVM_MIN_SIZE SZ_32K
|
||||
#define NVM_MAX_SIZE SZ_512K
|
||||
|
||||
/* Intel specific NVM offsets */
|
||||
#define NVM_DEVID 0x05
|
||||
#define NVM_VERSION 0x08
|
||||
#define NVM_FLASH_SIZE 0x45
|
||||
|
||||
/**
|
||||
* struct tb_switch_nvm - Structure holding switch NVM information
|
||||
* struct tb_nvm - Structure holding NVM information
|
||||
* @dev: Owner of the NVM
|
||||
* @major: Major version number of the active NVM portion
|
||||
* @minor: Minor version number of the active NVM portion
|
||||
* @id: Identifier used with both NVM portions
|
||||
@ -29,9 +38,14 @@
|
||||
* the actual NVM flash device
|
||||
* @buf_data_size: Number of bytes actually consumed by the new NVM
|
||||
* image
|
||||
* @authenticating: The switch is authenticating the new NVM
|
||||
* @authenticating: The device is authenticating the new NVM
|
||||
* @flushed: The image has been flushed to the storage area
|
||||
*
|
||||
* The user of this structure needs to handle serialization of possible
|
||||
* concurrent access.
|
||||
*/
|
||||
struct tb_switch_nvm {
|
||||
struct tb_nvm {
|
||||
struct device *dev;
|
||||
u8 major;
|
||||
u8 minor;
|
||||
int id;
|
||||
@ -40,6 +54,7 @@ struct tb_switch_nvm {
|
||||
void *buf;
|
||||
size_t buf_data_size;
|
||||
bool authenticating;
|
||||
bool flushed;
|
||||
};
|
||||
|
||||
#define TB_SWITCH_KEY_SIZE 32
|
||||
@ -97,6 +112,7 @@ struct tb_switch_tmu {
|
||||
* @device_name: Name of the device (or %NULL if not known)
|
||||
* @link_speed: Speed of the link in Gb/s
|
||||
* @link_width: Width of the link (1 or 2)
|
||||
* @link_usb4: Upstream link is USB4
|
||||
* @generation: Switch Thunderbolt generation
|
||||
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
|
||||
* @cap_lc: Offset to the link controller capability (%0 if not found)
|
||||
@ -117,6 +133,7 @@ struct tb_switch_tmu {
|
||||
* @depth: Depth in the chain this switch is connected (ICM only)
|
||||
* @rpm_complete: Completion used to wait for runtime resume to
|
||||
* complete (ICM only)
|
||||
* @quirks: Quirks used for this Thunderbolt switch
|
||||
*
|
||||
* When the switch is being added or removed to the domain (other
|
||||
* switches) you need to have domain lock held.
|
||||
@ -136,12 +153,13 @@ struct tb_switch {
|
||||
const char *device_name;
|
||||
unsigned int link_speed;
|
||||
unsigned int link_width;
|
||||
bool link_usb4;
|
||||
unsigned int generation;
|
||||
int cap_plug_events;
|
||||
int cap_lc;
|
||||
bool is_unplugged;
|
||||
u8 *drom;
|
||||
struct tb_switch_nvm *nvm;
|
||||
struct tb_nvm *nvm;
|
||||
bool no_nvm_upgrade;
|
||||
bool safe_mode;
|
||||
bool boot;
|
||||
@ -154,6 +172,7 @@ struct tb_switch {
|
||||
u8 link;
|
||||
u8 depth;
|
||||
struct completion rpm_complete;
|
||||
unsigned long quirks;
|
||||
};
|
||||
|
||||
/**
|
||||
@ -195,6 +214,28 @@ struct tb_port {
|
||||
struct list_head list;
|
||||
};
|
||||
|
||||
/**
|
||||
* tb_retimer: Thunderbolt retimer
|
||||
* @dev: Device for the retimer
|
||||
* @tb: Pointer to the domain the retimer belongs to
|
||||
* @index: Retimer index facing the router USB4 port
|
||||
* @vendor: Vendor ID of the retimer
|
||||
* @device: Device ID of the retimer
|
||||
* @port: Pointer to the lane 0 adapter
|
||||
* @nvm: Pointer to the NVM if the retimer has one (%NULL otherwise)
|
||||
* @auth_status: Status of last NVM authentication
|
||||
*/
|
||||
struct tb_retimer {
|
||||
struct device dev;
|
||||
struct tb *tb;
|
||||
u8 index;
|
||||
u32 vendor;
|
||||
u32 device;
|
||||
struct tb_port *port;
|
||||
struct tb_nvm *nvm;
|
||||
u32 auth_status;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tb_path_hop - routing information for a tb_path
|
||||
* @in_port: Ingress port of a switch
|
||||
@ -286,7 +327,11 @@ struct tb_path {
|
||||
|
||||
/* HopIDs 0-7 are reserved by the Thunderbolt protocol */
|
||||
#define TB_PATH_MIN_HOPID 8
|
||||
#define TB_PATH_MAX_HOPS 7
|
||||
/*
|
||||
* Support paths from the farthest (depth 6) router to the host and back
|
||||
* to the same level (not necessarily to the same router).
|
||||
*/
|
||||
#define TB_PATH_MAX_HOPS (7 * 2)
|
||||
|
||||
/**
|
||||
* struct tb_cm_ops - Connection manager specific operations vector
|
||||
@ -534,11 +579,11 @@ struct tb *icm_probe(struct tb_nhi *nhi);
|
||||
struct tb *tb_probe(struct tb_nhi *nhi);
|
||||
|
||||
extern struct device_type tb_domain_type;
|
||||
extern struct device_type tb_retimer_type;
|
||||
extern struct device_type tb_switch_type;
|
||||
|
||||
int tb_domain_init(void);
|
||||
void tb_domain_exit(void);
|
||||
void tb_switch_exit(void);
|
||||
int tb_xdomain_init(void);
|
||||
void tb_xdomain_exit(void);
|
||||
|
||||
@ -571,6 +616,15 @@ static inline void tb_domain_put(struct tb *tb)
|
||||
put_device(&tb->dev);
|
||||
}
|
||||
|
||||
struct tb_nvm *tb_nvm_alloc(struct device *dev);
|
||||
int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read);
|
||||
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
|
||||
size_t bytes);
|
||||
int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
|
||||
nvmem_reg_write_t reg_write);
|
||||
void tb_nvm_free(struct tb_nvm *nvm);
|
||||
void tb_nvm_exit(void);
|
||||
|
||||
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
|
||||
u64 route);
|
||||
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
|
||||
@ -741,6 +795,20 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
|
||||
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
|
||||
struct tb_port *prev);
|
||||
|
||||
/**
 * tb_for_each_port_on_path() - Iterate over each port on path
 * @src: Source port
 * @dst: Destination port
 * @p: Port used as iterator
 *
 * Walks over each port on path from @src to @dst.
 */
#define tb_for_each_port_on_path(src, dst, p)					\
	for ((p) = tb_next_port_on_path((src), (dst), NULL); (p);		\
	     (p) = tb_next_port_on_path((src), (dst), (p)))
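
A minimal usage sketch of this iterator (the variables are hypothetical; the
bandwidth code in tb.c uses the same pattern):

	struct tb_port *p;

	/* Visit every adapter between src_port and dst_port */
	tb_for_each_port_on_path(src_port, dst_port, p) {
		/* Only lane adapters represent physical links */
		if (!tb_port_is_null(p))
			continue;
		/* inspect the link behind p here */
	}
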
int tb_port_get_link_speed(struct tb_port *port);
|
||||
|
||||
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
|
||||
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
|
||||
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
|
||||
@ -769,8 +837,8 @@ void tb_path_free(struct tb_path *path);
|
||||
int tb_path_activate(struct tb_path *path);
|
||||
void tb_path_deactivate(struct tb_path *path);
|
||||
bool tb_path_is_invalid(struct tb_path *path);
|
||||
bool tb_path_switch_on_path(const struct tb_path *path,
|
||||
const struct tb_switch *sw);
|
||||
bool tb_path_port_on_path(const struct tb_path *path,
|
||||
const struct tb_port *port);
|
||||
|
||||
int tb_drom_read(struct tb_switch *sw);
|
||||
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
|
||||
@ -783,6 +851,7 @@ bool tb_lc_lane_bonding_possible(struct tb_switch *sw);
|
||||
bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in);
|
||||
int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in);
|
||||
int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in);
|
||||
int tb_lc_force_power(struct tb_switch *sw);
|
||||
|
||||
static inline int tb_route_length(u64 route)
|
||||
{
|
||||
@ -812,6 +881,21 @@ void tb_xdomain_remove(struct tb_xdomain *xd);
|
||||
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
|
||||
u8 depth);
|
||||
|
||||
int tb_retimer_scan(struct tb_port *port);
|
||||
void tb_retimer_remove_all(struct tb_port *port);
|
||||
|
||||
static inline bool tb_is_retimer(const struct device *dev)
|
||||
{
|
||||
return dev->type == &tb_retimer_type;
|
||||
}
|
||||
|
||||
static inline struct tb_retimer *tb_to_retimer(struct device *dev)
|
||||
{
|
||||
if (tb_is_retimer(dev))
|
||||
return container_of(dev, struct tb_retimer, dev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int usb4_switch_setup(struct tb_switch *sw);
|
||||
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
|
||||
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
|
||||
@ -835,4 +919,35 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
|
||||
const struct tb_port *port);
|
||||
|
||||
int usb4_port_unlock(struct tb_port *port);
|
||||
int usb4_port_enumerate_retimers(struct tb_port *port);
|
||||
|
||||
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
|
||||
u8 size);
|
||||
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
|
||||
const void *buf, u8 size);
|
||||
int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
|
||||
unsigned int address, const void *buf,
|
||||
size_t size);
|
||||
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index);
|
||||
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
|
||||
u32 *status);
|
||||
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
|
||||
unsigned int address, void *buf, size_t size);
|
||||
|
||||
int usb4_usb3_port_max_link_rate(struct tb_port *port);
|
||||
int usb4_usb3_port_actual_link_rate(struct tb_port *port);
|
||||
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw);
|
||||
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw);
|
||||
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw);
|
||||
|
||||
/* keep link controller awake during update */
|
||||
#define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0)
|
||||
|
||||
void tb_check_quirks(struct tb_switch *sw);
|
||||
|
||||
#endif
|
||||
|
@ -288,8 +288,19 @@ struct tb_regs_port_header {
|
||||
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20
|
||||
|
||||
/* USB4 port registers */
|
||||
#define PORT_CS_1 0x01
|
||||
#define PORT_CS_1_LENGTH_SHIFT 8
|
||||
#define PORT_CS_1_TARGET_MASK GENMASK(18, 16)
|
||||
#define PORT_CS_1_TARGET_SHIFT 16
|
||||
#define PORT_CS_1_RETIMER_INDEX_SHIFT 20
|
||||
#define PORT_CS_1_WNR_WRITE BIT(24)
|
||||
#define PORT_CS_1_NR BIT(25)
|
||||
#define PORT_CS_1_RC BIT(26)
|
||||
#define PORT_CS_1_PND BIT(31)
|
||||
#define PORT_CS_2 0x02
|
||||
#define PORT_CS_18 0x12
|
||||
#define PORT_CS_18_BE BIT(8)
|
||||
#define PORT_CS_18_TCM BIT(9)
|
||||
#define PORT_CS_19 0x13
|
||||
#define PORT_CS_19_PC BIT(3)
|
||||
|
||||
@ -337,6 +348,25 @@ struct tb_regs_port_header {
|
||||
#define ADP_USB3_CS_0 0x00
|
||||
#define ADP_USB3_CS_0_V BIT(30)
|
||||
#define ADP_USB3_CS_0_PE BIT(31)
|
||||
#define ADP_USB3_CS_1 0x01
|
||||
#define ADP_USB3_CS_1_CUBW_MASK GENMASK(11, 0)
|
||||
#define ADP_USB3_CS_1_CDBW_MASK GENMASK(23, 12)
|
||||
#define ADP_USB3_CS_1_CDBW_SHIFT 12
|
||||
#define ADP_USB3_CS_1_HCA BIT(31)
|
||||
#define ADP_USB3_CS_2 0x02
|
||||
#define ADP_USB3_CS_2_AUBW_MASK GENMASK(11, 0)
|
||||
#define ADP_USB3_CS_2_ADBW_MASK GENMASK(23, 12)
|
||||
#define ADP_USB3_CS_2_ADBW_SHIFT 12
|
||||
#define ADP_USB3_CS_2_CMR BIT(31)
|
||||
#define ADP_USB3_CS_3 0x03
|
||||
#define ADP_USB3_CS_3_SCALE_MASK GENMASK(5, 0)
|
||||
#define ADP_USB3_CS_4 0x04
|
||||
#define ADP_USB3_CS_4_ALR_MASK GENMASK(6, 0)
|
||||
#define ADP_USB3_CS_4_ALR_20G 0x1
|
||||
#define ADP_USB3_CS_4_ULV BIT(7)
|
||||
#define ADP_USB3_CS_4_MSLR_MASK GENMASK(18, 12)
|
||||
#define ADP_USB3_CS_4_MSLR_SHIFT 12
|
||||
#define ADP_USB3_CS_4_MSLR_20G 0x1
|
||||
|
||||
/* Hop register from TB_CFG_HOPS. 8 byte per entry. */
|
||||
struct tb_regs_hop {
|
||||
@ -379,6 +409,7 @@ struct tb_regs_hop {
|
||||
#define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4
|
||||
#define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4)
|
||||
#define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1
|
||||
#define TB_LC_POWER 0x740
|
||||
|
||||
/* Link controller registers */
|
||||
#define TB_LC_PORT_ATTR 0x8d
|
||||
|
 drivers/thunderbolt/test.c (new file, 1626 lines; diff suppressed because it is too large)
@@ -124,8 +124,9 @@ static void tb_pci_init_path(struct tb_path *path)
|
||||
path->drop_packages = 0;
|
||||
path->nfc_credits = 0;
|
||||
path->hops[0].initial_credits = 7;
|
||||
path->hops[1].initial_credits =
|
||||
tb_initial_credits(path->hops[1].in_port->sw);
|
||||
if (path->path_length > 1)
|
||||
path->hops[1].initial_credits =
|
||||
tb_initial_credits(path->hops[1].in_port->sw);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -422,7 +423,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
|
||||
u32 out_dp_cap, out_rate, out_lanes, in_dp_cap, in_rate, in_lanes, bw;
|
||||
struct tb_port *out = tunnel->dst_port;
|
||||
struct tb_port *in = tunnel->src_port;
|
||||
int ret;
|
||||
int ret, max_bw;
|
||||
|
||||
/*
|
||||
* Copy DP_LOCAL_CAP register to DP_REMOTE_CAP register for
|
||||
@ -471,10 +472,15 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
|
||||
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
|
||||
out_rate, out_lanes, bw);
|
||||
|
||||
if (tunnel->max_bw && bw > tunnel->max_bw) {
|
||||
if (in->sw->config.depth < out->sw->config.depth)
|
||||
max_bw = tunnel->max_down;
|
||||
else
|
||||
max_bw = tunnel->max_up;
|
||||
|
||||
if (max_bw && bw > max_bw) {
|
||||
u32 new_rate, new_lanes, new_bw;
|
||||
|
||||
ret = tb_dp_reduce_bandwidth(tunnel->max_bw, in_rate, in_lanes,
|
||||
ret = tb_dp_reduce_bandwidth(max_bw, in_rate, in_lanes,
|
||||
out_rate, out_lanes, &new_rate,
|
||||
&new_lanes);
|
||||
if (ret) {
|
||||
@ -535,7 +541,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
|
||||
static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
|
||||
int *consumed_down)
|
||||
{
|
||||
struct tb_port *in = tunnel->src_port;
|
||||
const struct tb_switch *sw = in->sw;
|
||||
@ -543,7 +550,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
|
||||
int ret;
|
||||
|
||||
if (tb_dp_is_usb4(sw)) {
|
||||
int timeout = 10;
|
||||
int timeout = 20;
|
||||
|
||||
/*
|
||||
* Wait for DPRX done. Normally it should be already set
|
||||
@ -579,10 +586,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
|
||||
lanes = tb_dp_cap_get_lanes(val);
|
||||
} else {
|
||||
/* No bandwidth management for legacy devices */
|
||||
*consumed_up = 0;
|
||||
*consumed_down = 0;
|
||||
return 0;
|
||||
}
|
||||
|
||||
return tb_dp_bandwidth(rate, lanes);
|
||||
if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
|
||||
*consumed_up = 0;
|
||||
*consumed_down = tb_dp_bandwidth(rate, lanes);
|
||||
} else {
|
||||
*consumed_up = tb_dp_bandwidth(rate, lanes);
|
||||
*consumed_down = 0;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void tb_dp_init_aux_path(struct tb_path *path)
|
||||
@ -708,7 +725,10 @@ err_free:
|
||||
* @tb: Pointer to the domain structure
|
||||
* @in: DP in adapter port
|
||||
* @out: DP out adapter port
|
||||
* @max_bw: Maximum available bandwidth for the DP tunnel (%0 if not limited)
|
||||
* @max_up: Maximum available upstream bandwidth for the DP tunnel (%0
|
||||
* if not limited)
|
||||
* @max_down: Maximum available downstream bandwidth for the DP tunnel
|
||||
* (%0 if not limited)
|
||||
*
|
||||
* Allocates a tunnel between @in and @out that is capable of tunneling
|
||||
* Display Port traffic.
|
||||
@ -716,7 +736,8 @@ err_free:
|
||||
* Return: Returns a tb_tunnel on success or NULL on failure.
|
||||
*/
|
||||
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
|
||||
struct tb_port *out, int max_bw)
|
||||
struct tb_port *out, int max_up,
|
||||
int max_down)
|
||||
{
|
||||
struct tb_tunnel *tunnel;
|
||||
struct tb_path **paths;
|
||||
@ -734,7 +755,8 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
|
||||
tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth;
|
||||
tunnel->src_port = in;
|
||||
tunnel->dst_port = out;
|
||||
tunnel->max_bw = max_bw;
|
||||
tunnel->max_up = max_up;
|
||||
tunnel->max_down = max_down;
|
||||
|
||||
paths = tunnel->paths;
|
||||
|
||||
@ -854,6 +876,33 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
|
||||
return tunnel;
|
||||
}
|
||||
|
||||
static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
|
||||
{
|
||||
int ret, up_max_rate, down_max_rate;
|
||||
|
||||
ret = usb4_usb3_port_max_link_rate(up);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
up_max_rate = ret;
|
||||
|
||||
ret = usb4_usb3_port_max_link_rate(down);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
down_max_rate = ret;
|
||||
|
||||
return min(up_max_rate, down_max_rate);
|
||||
}
|
||||
|
||||
static int tb_usb3_init(struct tb_tunnel *tunnel)
|
||||
{
|
||||
tb_tunnel_dbg(tunnel, "allocating initial bandwidth %d/%d Mb/s\n",
|
||||
tunnel->allocated_up, tunnel->allocated_down);
|
||||
|
||||
return usb4_usb3_port_allocate_bandwidth(tunnel->src_port,
|
||||
&tunnel->allocated_up,
|
||||
&tunnel->allocated_down);
|
||||
}
|
||||
|
||||
static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
|
||||
{
|
||||
int res;
|
||||
@ -868,6 +917,86 @@ static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int tb_usb3_consumed_bandwidth(struct tb_tunnel *tunnel,
				      int *consumed_up, int *consumed_down)
{
	/*
	 * PCIe tunneling affects the USB3 bandwidth so take that into
	 * account here.
	 */
	*consumed_up = tunnel->allocated_up * (3 + 1) / 3;
	*consumed_down = tunnel->allocated_down * (3 + 1) / 3;
	return 0;
}
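
For illustration (made-up numbers): with 3000 Mb/s allocated in each
direction the tunnel reports 3000 * (3 + 1) / 3 = 4000 Mb/s as consumed,
the extra third being headroom for PCIe traffic sharing the same link, per
the comment above.
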
static int tb_usb3_release_unused_bandwidth(struct tb_tunnel *tunnel)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = usb4_usb3_port_release_bandwidth(tunnel->src_port,
|
||||
&tunnel->allocated_up,
|
||||
&tunnel->allocated_down);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
tb_tunnel_dbg(tunnel, "decreased bandwidth allocation to %d/%d Mb/s\n",
|
||||
tunnel->allocated_up, tunnel->allocated_down);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
|
||||
int *available_up,
|
||||
int *available_down)
|
||||
{
|
||||
int ret, max_rate, allocate_up, allocate_down;
|
||||
|
||||
ret = usb4_usb3_port_actual_link_rate(tunnel->src_port);
|
||||
if (ret <= 0) {
|
||||
tb_tunnel_warn(tunnel, "tunnel is not up\n");
|
||||
return;
|
||||
}
|
||||
/*
|
||||
* 90% of the max rate can be allocated for isochronous
|
||||
* transfers.
|
||||
*/
|
||||
max_rate = ret * 90 / 100;
|
||||
|
||||
/* No need to reclaim if already at maximum */
|
||||
if (tunnel->allocated_up >= max_rate &&
|
||||
tunnel->allocated_down >= max_rate)
|
||||
return;
|
||||
|
||||
/* Don't go lower than what is already allocated */
|
||||
allocate_up = min(max_rate, *available_up);
|
||||
if (allocate_up < tunnel->allocated_up)
|
||||
allocate_up = tunnel->allocated_up;
|
||||
|
||||
allocate_down = min(max_rate, *available_down);
|
||||
if (allocate_down < tunnel->allocated_down)
|
||||
allocate_down = tunnel->allocated_down;
|
||||
|
||||
/* If no changes no need to do more */
|
||||
if (allocate_up == tunnel->allocated_up &&
|
||||
allocate_down == tunnel->allocated_down)
|
||||
return;
|
||||
|
||||
ret = usb4_usb3_port_allocate_bandwidth(tunnel->src_port, &allocate_up,
|
||||
&allocate_down);
|
||||
if (ret) {
|
||||
tb_tunnel_info(tunnel, "failed to allocate bandwidth\n");
|
||||
return;
|
||||
}
|
||||
|
||||
tunnel->allocated_up = allocate_up;
|
||||
*available_up -= tunnel->allocated_up;
|
||||
|
||||
tunnel->allocated_down = allocate_down;
|
||||
*available_down -= tunnel->allocated_down;
|
||||
|
||||
tb_tunnel_dbg(tunnel, "increased bandwidth allocation to %d/%d Mb/s\n",
|
||||
tunnel->allocated_up, tunnel->allocated_down);
|
||||
}
|
||||
|
||||
static void tb_usb3_init_path(struct tb_path *path)
|
||||
{
|
||||
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
|
||||
@ -879,8 +1008,9 @@ static void tb_usb3_init_path(struct tb_path *path)
|
||||
path->drop_packages = 0;
|
||||
path->nfc_credits = 0;
|
||||
path->hops[0].initial_credits = 7;
|
||||
path->hops[1].initial_credits =
|
||||
tb_initial_credits(path->hops[1].in_port->sw);
|
||||
if (path->path_length > 1)
|
||||
path->hops[1].initial_credits =
|
||||
tb_initial_credits(path->hops[1].in_port->sw);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -947,6 +1077,29 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
|
||||
goto err_deactivate;
|
||||
}
|
||||
|
||||
if (!tb_route(down->sw)) {
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* Read the initial bandwidth allocation for the first
|
||||
* hop tunnel.
|
||||
*/
|
||||
ret = usb4_usb3_port_allocated_bandwidth(down,
|
||||
&tunnel->allocated_up, &tunnel->allocated_down);
|
||||
if (ret)
|
||||
goto err_deactivate;
|
||||
|
||||
tb_tunnel_dbg(tunnel, "currently allocated bandwidth %d/%d Mb/s\n",
|
||||
tunnel->allocated_up, tunnel->allocated_down);
|
||||
|
||||
tunnel->init = tb_usb3_init;
|
||||
tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
|
||||
tunnel->release_unused_bandwidth =
|
||||
tb_usb3_release_unused_bandwidth;
|
||||
tunnel->reclaim_available_bandwidth =
|
||||
tb_usb3_reclaim_available_bandwidth;
|
||||
}
|
||||
|
||||
tb_tunnel_dbg(tunnel, "discovered\n");
|
||||
return tunnel;
|
||||
|
||||
@ -963,6 +1116,10 @@ err_free:
|
||||
* @tb: Pointer to the domain structure
|
||||
* @up: USB3 upstream adapter port
|
||||
* @down: USB3 downstream adapter port
|
||||
* @max_up: Maximum available upstream bandwidth for the USB3 tunnel (%0
|
||||
* if not limited).
|
||||
* @max_down: Maximum available downstream bandwidth for the USB3 tunnel
|
||||
* (%0 if not limited).
|
||||
*
|
||||
* Allocate an USB3 tunnel. The ports must be of type @TB_TYPE_USB3_UP and
|
||||
* @TB_TYPE_USB3_DOWN.
|
||||
@ -970,10 +1127,32 @@ err_free:
|
||||
* Return: Returns a tb_tunnel on success or %NULL on failure.
|
||||
*/
|
||||
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
|
||||
struct tb_port *down)
|
||||
struct tb_port *down, int max_up,
|
||||
int max_down)
|
||||
{
|
||||
struct tb_tunnel *tunnel;
|
||||
struct tb_path *path;
|
||||
int max_rate = 0;
|
||||
|
||||
/*
|
||||
* Check that we have enough bandwidth available for the new
|
||||
* USB3 tunnel.
|
||||
*/
|
||||
if (max_up > 0 || max_down > 0) {
|
||||
max_rate = tb_usb3_max_link_rate(down, up);
|
||||
if (max_rate < 0)
|
||||
return NULL;
|
||||
|
||||
/* Only 90% can be allocated for USB3 isochronous transfers */
|
||||
max_rate = max_rate * 90 / 100;
|
||||
tb_port_dbg(up, "required bandwidth for USB3 tunnel %d Mb/s\n",
|
||||
max_rate);
|
||||
|
||||
if (max_rate > max_up || max_rate > max_down) {
|
||||
tb_port_warn(up, "not enough bandwidth for USB3 tunnel\n");
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3);
|
||||
if (!tunnel)
|
||||
@ -982,6 +1161,8 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
|
||||
tunnel->activate = tb_usb3_activate;
|
||||
tunnel->src_port = down;
|
||||
tunnel->dst_port = up;
|
||||
tunnel->max_up = max_up;
|
||||
tunnel->max_down = max_down;
|
||||
|
||||
path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0,
|
||||
"USB3 Down");
|
||||
@ -1001,6 +1182,18 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
|
||||
tb_usb3_init_path(path);
|
||||
tunnel->paths[TB_USB3_PATH_UP] = path;
|
||||
|
||||
if (!tb_route(down->sw)) {
|
||||
tunnel->allocated_up = max_rate;
|
||||
tunnel->allocated_down = max_rate;
|
||||
|
||||
tunnel->init = tb_usb3_init;
|
||||
tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
|
||||
tunnel->release_unused_bandwidth =
|
||||
tb_usb3_release_unused_bandwidth;
|
||||
tunnel->reclaim_available_bandwidth =
|
||||
tb_usb3_reclaim_available_bandwidth;
|
||||
}
|
||||
|
||||
return tunnel;
|
||||
}
|
||||
|
||||
@ -1133,22 +1326,23 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_tunnel_switch_on_path() - Does the tunnel go through switch
|
||||
* tb_tunnel_port_on_path() - Does the tunnel go through port
|
||||
* @tunnel: Tunnel to check
|
||||
* @sw: Switch to check
|
||||
* @port: Port to check
|
||||
*
|
||||
* Returns true if @tunnel goes through @sw (direction does not matter),
|
||||
* Returns true if @tunnel goes through @port (direction does not matter),
|
||||
* false otherwise.
|
||||
*/
|
||||
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
|
||||
const struct tb_switch *sw)
|
||||
bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
|
||||
const struct tb_port *port)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < tunnel->npaths; i++) {
|
||||
if (!tunnel->paths[i])
|
||||
continue;
|
||||
if (tb_path_switch_on_path(tunnel->paths[i], sw))
|
||||
|
||||
if (tb_path_port_on_path(tunnel->paths[i], port))
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -1172,21 +1366,87 @@ static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
|
||||
/**
|
||||
* tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel
|
||||
* @tunnel: Tunnel to check
|
||||
* @consumed_up: Consumed bandwidth in Mb/s from @dst_port to @src_port.
|
||||
* Can be %NULL.
|
||||
* @consumed_down: Consumed bandwidth in Mb/s from @src_port to @dst_port.
|
||||
* Can be %NULL.
|
||||
*
|
||||
* Returns bandwidth currently consumed by @tunnel and %0 if the @tunnel
|
||||
* is not active or does consume bandwidth.
|
||||
* Stores the amount of isochronous bandwidth @tunnel consumes in
|
||||
* @consumed_up and @consumed_down. In case of success returns %0,
|
||||
* negative errno otherwise.
|
||||
*/
|
||||
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel)
|
||||
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
|
||||
int *consumed_down)
|
||||
{
|
||||
int up_bw = 0, down_bw = 0;
|
||||
|
||||
if (!tb_tunnel_is_active(tunnel))
|
||||
goto out;
|
||||
|
||||
if (tunnel->consumed_bandwidth) {
|
||||
int ret;
|
||||
|
||||
ret = tunnel->consumed_bandwidth(tunnel, &up_bw, &down_bw);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
tb_tunnel_dbg(tunnel, "consumed bandwidth %d/%d Mb/s\n", up_bw,
|
||||
down_bw);
|
||||
}
|
||||
|
||||
out:
|
||||
if (consumed_up)
|
||||
*consumed_up = up_bw;
|
||||
if (consumed_down)
|
||||
*consumed_down = down_bw;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_tunnel_release_unused_bandwidth() - Release unused bandwidth
|
||||
* @tunnel: Tunnel whose unused bandwidth to release
|
||||
*
|
||||
* If tunnel supports dynamic bandwidth management (USB3 tunnels at the
|
||||
* moment) this function makes it to release all the unused bandwidth.
|
||||
*
|
||||
* Returns %0 in case of success and negative errno otherwise.
|
||||
*/
|
||||
int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel)
|
||||
{
|
||||
if (!tb_tunnel_is_active(tunnel))
|
||||
return 0;
|
||||
|
||||
if (tunnel->consumed_bandwidth) {
|
||||
int ret = tunnel->consumed_bandwidth(tunnel);
|
||||
if (tunnel->release_unused_bandwidth) {
|
||||
int ret;
|
||||
|
||||
tb_tunnel_dbg(tunnel, "consumed bandwidth %d Mb/s\n", ret);
|
||||
return ret;
|
||||
ret = tunnel->release_unused_bandwidth(tunnel);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
 * tb_tunnel_reclaim_available_bandwidth() - Reclaim available bandwidth
 * @tunnel: Tunnel reclaiming available bandwidth
 * @available_up: Available upstream bandwidth (in Mb/s)
 * @available_down: Available downstream bandwidth (in Mb/s)
 *
 * Reclaims bandwidth from @available_up and @available_down and updates
 * the variables accordingly (e.g. decreases both according to what was
 * reclaimed by the tunnel). If nothing was reclaimed the values are
 * kept as is.
 */
void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
					   int *available_up,
					   int *available_down)
{
	if (!tb_tunnel_is_active(tunnel))
		return;

	if (tunnel->reclaim_available_bandwidth)
		tunnel->reclaim_available_bandwidth(tunnel, available_up,
						    available_down);
}
@@ -29,10 +29,16 @@ enum tb_tunnel_type {
|
||||
* @init: Optional tunnel specific initialization
|
||||
* @activate: Optional tunnel specific activation/deactivation
|
||||
* @consumed_bandwidth: Return how much bandwidth the tunnel consumes
|
||||
* @release_unused_bandwidth: Release all unused bandwidth
|
||||
* @reclaim_available_bandwidth: Reclaim back available bandwidth
|
||||
* @list: Tunnels are linked using this field
|
||||
* @type: Type of the tunnel
|
||||
* @max_bw: Maximum bandwidth (Mb/s) available for the tunnel (only for DP).
|
||||
* @max_up: Maximum upstream bandwidth (Mb/s) available for the tunnel.
|
||||
* Only set if the bandwidth needs to be limited.
|
||||
* @max_down: Maximum downstream bandwidth (Mb/s) available for the tunnel.
|
||||
* Only set if the bandwidth needs to be limited.
|
||||
* @allocated_up: Allocated upstream bandwidth (only for USB3)
|
||||
* @allocated_down: Allocated downstream bandwidth (only for USB3)
|
||||
*/
|
||||
struct tb_tunnel {
|
||||
struct tb *tb;
|
||||
@ -42,10 +48,18 @@ struct tb_tunnel {
|
||||
size_t npaths;
|
||||
int (*init)(struct tb_tunnel *tunnel);
|
||||
int (*activate)(struct tb_tunnel *tunnel, bool activate);
|
||||
int (*consumed_bandwidth)(struct tb_tunnel *tunnel);
|
||||
int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
|
||||
int *consumed_down);
|
||||
int (*release_unused_bandwidth)(struct tb_tunnel *tunnel);
|
||||
void (*reclaim_available_bandwidth)(struct tb_tunnel *tunnel,
|
||||
int *available_up,
|
||||
int *available_down);
|
||||
struct list_head list;
|
||||
enum tb_tunnel_type type;
|
||||
unsigned int max_bw;
|
||||
int max_up;
|
||||
int max_down;
|
||||
int allocated_up;
|
||||
int allocated_down;
|
||||
};
|
||||
|
||||
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
|
||||
@ -53,23 +67,30 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
|
||||
struct tb_port *down);
|
||||
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
|
||||
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
|
||||
struct tb_port *out, int max_bw);
|
||||
struct tb_port *out, int max_up,
|
||||
int max_down);
|
||||
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
|
||||
struct tb_port *dst, int transmit_ring,
|
||||
int transmit_path, int receive_ring,
|
||||
int receive_path);
|
||||
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
|
||||
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
|
||||
struct tb_port *down);
|
||||
struct tb_port *down, int max_up,
|
||||
int max_down);
|
||||
|
||||
void tb_tunnel_free(struct tb_tunnel *tunnel);
|
||||
int tb_tunnel_activate(struct tb_tunnel *tunnel);
|
||||
int tb_tunnel_restart(struct tb_tunnel *tunnel);
|
||||
void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
|
||||
bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
|
||||
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel,
|
||||
const struct tb_switch *sw);
|
||||
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel);
|
||||
bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
|
||||
const struct tb_port *port);
|
||||
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
|
||||
int *consumed_down);
|
||||
int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel);
|
||||
void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
|
||||
int *available_up,
|
||||
int *available_down);
|
||||
|
||||
static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
|
||||
{
|
||||
|
@ -10,6 +10,7 @@
|
||||
#include <linux/delay.h>
|
||||
#include <linux/ktime.h>
|
||||
|
||||
#include "sb_regs.h"
|
||||
#include "tb.h"
|
||||
|
||||
#define USB4_DATA_DWORDS 16
|
||||
@ -27,6 +28,12 @@ enum usb4_switch_op {
|
||||
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
|
||||
};
|
||||
|
||||
enum usb4_sb_target {
|
||||
USB4_SB_TARGET_ROUTER,
|
||||
USB4_SB_TARGET_PARTNER,
|
||||
USB4_SB_TARGET_RETIMER,
|
||||
};
|
||||
|
||||
#define USB4_NVM_READ_OFFSET_MASK GENMASK(23, 2)
|
||||
#define USB4_NVM_READ_OFFSET_SHIFT 2
|
||||
#define USB4_NVM_READ_LENGTH_MASK GENMASK(27, 24)
|
||||
@ -42,8 +49,8 @@ enum usb4_switch_op {
|
||||
|
||||
#define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0)
|
||||
|
||||
typedef int (*read_block_fn)(struct tb_switch *, unsigned int, void *, size_t);
|
||||
typedef int (*write_block_fn)(struct tb_switch *, const void *, size_t);
|
||||
typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
|
||||
typedef int (*write_block_fn)(void *, const void *, size_t);
|
||||
|
||||
static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
|
||||
u32 value, int timeout_msec)
|
||||
@ -95,8 +102,8 @@ static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata)
|
||||
return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
|
||||
}
|
||||
|
||||
static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
|
||||
void *buf, size_t size, read_block_fn read_block)
|
||||
static int usb4_do_read_data(u16 address, void *buf, size_t size,
|
||||
read_block_fn read_block, void *read_block_data)
|
||||
{
|
||||
unsigned int retries = USB4_DATA_RETRIES;
|
||||
unsigned int offset;
|
||||
@ -113,13 +120,10 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
|
||||
dwaddress = address / 4;
|
||||
dwords = ALIGN(nbytes, 4) / 4;
|
||||
|
||||
ret = read_block(sw, dwaddress, data, dwords);
|
||||
ret = read_block(read_block_data, dwaddress, data, dwords);
|
||||
if (ret) {
|
||||
if (ret == -ETIMEDOUT) {
|
||||
if (retries--)
|
||||
continue;
|
||||
ret = -EIO;
|
||||
}
|
||||
if (ret != -ENODEV && retries--)
|
||||
continue;
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -133,8 +137,8 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address,
|
||||
const void *buf, size_t size, write_block_fn write_next_block)
|
||||
static int usb4_do_write_data(unsigned int address, const void *buf, size_t size,
|
||||
write_block_fn write_next_block, void *write_block_data)
|
||||
{
|
||||
unsigned int retries = USB4_DATA_RETRIES;
|
||||
unsigned int offset;
|
||||
@ -149,7 +153,7 @@ static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address,
|
||||
|
||||
memcpy(data + offset, buf, nbytes);
|
||||
|
||||
ret = write_next_block(sw, data, nbytes / 4);
|
||||
ret = write_next_block(write_block_data, data, nbytes / 4);
|
||||
if (ret) {
|
||||
if (ret == -ETIMEDOUT) {
|
||||
if (retries--)
|
||||
@ -192,6 +196,20 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static bool link_is_usb4(struct tb_port *port)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
if (!port->cap_usb4)
|
||||
return false;
|
||||
|
||||
if (tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_usb4 + PORT_CS_18, 1))
|
||||
return false;
|
||||
|
||||
return !(val & PORT_CS_18_TCM);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_switch_setup() - Additional setup for USB4 device
|
||||
* @sw: USB4 router to setup
|
||||
@ -205,6 +223,7 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
|
||||
*/
|
||||
int usb4_switch_setup(struct tb_switch *sw)
|
||||
{
|
||||
struct tb_port *downstream_port;
|
||||
struct tb_switch *parent;
|
||||
bool tbt3, xhci;
|
||||
u32 val = 0;
|
||||
@ -217,6 +236,11 @@ int usb4_switch_setup(struct tb_switch *sw)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
parent = tb_switch_parent(sw);
|
||||
downstream_port = tb_port_at(tb_route(sw), parent);
|
||||
sw->link_usb4 = link_is_usb4(downstream_port);
|
||||
tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
|
||||
|
||||
xhci = val & ROUTER_CS_6_HCI;
|
||||
tbt3 = !(val & ROUTER_CS_6_TNS);
|
||||
|
||||
@ -227,9 +251,7 @@ int usb4_switch_setup(struct tb_switch *sw)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
parent = tb_switch_parent(sw);
|
||||
|
||||
if (tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
|
||||
if (sw->link_usb4 && tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
|
||||
val |= ROUTER_CS_5_UTO;
|
||||
xhci = false;
|
||||
}
|
||||
@ -271,10 +293,11 @@ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid)
|
||||
return tb_sw_read(sw, uid, TB_CFG_SWITCH, ROUTER_CS_7, 2);
|
||||
}
|
||||
|
||||
static int usb4_switch_drom_read_block(struct tb_switch *sw,
|
||||
static int usb4_switch_drom_read_block(void *data,
|
||||
unsigned int dwaddress, void *buf,
|
||||
size_t dwords)
|
||||
{
|
||||
struct tb_switch *sw = data;
|
||||
u8 status = 0;
|
||||
u32 metadata;
|
||||
int ret;
|
||||
@ -311,8 +334,8 @@ static int usb4_switch_drom_read_block(struct tb_switch *sw,
|
||||
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
|
||||
size_t size)
|
||||
{
|
||||
return usb4_switch_do_read_data(sw, address, buf, size,
|
||||
usb4_switch_drom_read_block);
|
||||
return usb4_do_read_data(address, buf, size,
|
||||
usb4_switch_drom_read_block, sw);
|
||||
}
|
||||
|
||||
static int usb4_set_port_configured(struct tb_port *port, bool configured)
|
||||
@ -445,9 +468,10 @@ int usb4_switch_nvm_sector_size(struct tb_switch *sw)
|
||||
return metadata & USB4_NVM_SECTOR_SIZE_MASK;
|
||||
}
|
||||
|
||||
static int usb4_switch_nvm_read_block(struct tb_switch *sw,
|
||||
static int usb4_switch_nvm_read_block(void *data,
|
||||
unsigned int dwaddress, void *buf, size_t dwords)
|
||||
{
|
||||
struct tb_switch *sw = data;
|
||||
u8 status = 0;
|
||||
u32 metadata;
|
||||
int ret;
|
||||
@ -484,8 +508,8 @@ static int usb4_switch_nvm_read_block(struct tb_switch *sw,
|
||||
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
|
||||
size_t size)
|
||||
{
|
||||
return usb4_switch_do_read_data(sw, address, buf, size,
|
||||
usb4_switch_nvm_read_block);
|
||||
return usb4_do_read_data(address, buf, size,
|
||||
usb4_switch_nvm_read_block, sw);
|
||||
}
|
||||
|
||||
static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
|
||||
@ -510,9 +534,10 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
|
||||
return status ? -EIO : 0;
|
||||
}
|
||||
|
||||
static int usb4_switch_nvm_write_next_block(struct tb_switch *sw,
|
||||
const void *buf, size_t dwords)
|
||||
static int usb4_switch_nvm_write_next_block(void *data, const void *buf,
|
||||
size_t dwords)
|
||||
{
|
||||
struct tb_switch *sw = data;
|
||||
u8 status;
|
||||
int ret;
|
||||
|
||||
@ -546,8 +571,8 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return usb4_switch_do_write_data(sw, address, buf, size,
|
||||
usb4_switch_nvm_write_next_block);
|
||||
return usb4_do_write_data(address, buf, size,
|
||||
usb4_switch_nvm_write_next_block, sw);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -710,7 +735,7 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
|
||||
if (!tb_port_is_pcie_down(p))
|
||||
continue;
|
||||
|
||||
if (pcie_idx == usb4_idx && !tb_pci_port_is_enabled(p))
|
||||
if (pcie_idx == usb4_idx)
|
||||
return p;
|
||||
|
||||
pcie_idx++;
|
||||
@ -741,7 +766,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
|
||||
if (!tb_port_is_usb3_down(p))
|
||||
continue;
|
||||
|
||||
if (usb_idx == usb4_idx && !tb_usb3_port_is_enabled(p))
|
||||
if (usb_idx == usb4_idx)
|
||||
return p;
|
||||
|
||||
usb_idx++;
|
||||
@ -769,3 +794,796 @@ int usb4_port_unlock(struct tb_port *port)
|
||||
val &= ~ADP_CS_4_LCK;
|
||||
return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
|
||||
}
|
||||
|
||||
static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
|
||||
u32 value, int timeout_msec)
|
||||
{
|
||||
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
|
||||
|
||||
do {
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT, offset, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if ((val & bit) == value)
|
||||
return 0;
|
||||
|
||||
usleep_range(50, 100);
|
||||
} while (ktime_before(ktime_get(), timeout));
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
|
||||
{
|
||||
if (dwords > USB4_DATA_DWORDS)
|
||||
return -EINVAL;
|
||||
|
||||
return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
|
||||
dwords);
|
||||
}
|
||||
|
||||
static int usb4_port_write_data(struct tb_port *port, const void *data,
|
||||
size_t dwords)
|
||||
{
|
||||
if (dwords > USB4_DATA_DWORDS)
|
||||
return -EINVAL;
|
||||
|
||||
return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
|
||||
dwords);
|
||||
}
|
||||
|
||||
static int usb4_port_sb_read(struct tb_port *port, enum usb4_sb_target target,
|
||||
u8 index, u8 reg, void *buf, u8 size)
|
||||
{
|
||||
size_t dwords = DIV_ROUND_UP(size, 4);
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
if (!port->cap_usb4)
|
||||
return -EINVAL;
|
||||
|
||||
val = reg;
|
||||
val |= size << PORT_CS_1_LENGTH_SHIFT;
|
||||
val |= (target << PORT_CS_1_TARGET_SHIFT) & PORT_CS_1_TARGET_MASK;
|
||||
if (target == USB4_SB_TARGET_RETIMER)
|
||||
val |= (index << PORT_CS_1_RETIMER_INDEX_SHIFT);
|
||||
val |= PORT_CS_1_PND;
|
||||
|
||||
ret = tb_port_write(port, &val, TB_CFG_PORT,
|
||||
port->cap_usb4 + PORT_CS_1, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_1,
|
||||
PORT_CS_1_PND, 0, 500);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_usb4 + PORT_CS_1, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (val & PORT_CS_1_NR)
|
||||
return -ENODEV;
|
||||
if (val & PORT_CS_1_RC)
|
||||
return -EIO;
|
||||
|
||||
return buf ? usb4_port_read_data(port, buf, dwords) : 0;
|
||||
}
|
||||
|
||||
static int usb4_port_sb_write(struct tb_port *port, enum usb4_sb_target target,
|
||||
u8 index, u8 reg, const void *buf, u8 size)
|
||||
{
|
||||
size_t dwords = DIV_ROUND_UP(size, 4);
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
if (!port->cap_usb4)
|
||||
return -EINVAL;
|
||||
|
||||
if (buf) {
|
||||
ret = usb4_port_write_data(port, buf, dwords);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
val = reg;
|
||||
val |= size << PORT_CS_1_LENGTH_SHIFT;
|
||||
val |= PORT_CS_1_WNR_WRITE;
|
||||
val |= (target << PORT_CS_1_TARGET_SHIFT) & PORT_CS_1_TARGET_MASK;
|
||||
if (target == USB4_SB_TARGET_RETIMER)
|
||||
val |= (index << PORT_CS_1_RETIMER_INDEX_SHIFT);
|
||||
val |= PORT_CS_1_PND;
|
||||
|
||||
ret = tb_port_write(port, &val, TB_CFG_PORT,
|
||||
port->cap_usb4 + PORT_CS_1, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_1,
|
||||
PORT_CS_1_PND, 0, 500);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_usb4 + PORT_CS_1, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (val & PORT_CS_1_NR)
|
||||
return -ENODEV;
|
||||
if (val & PORT_CS_1_RC)
|
||||
return -EIO;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int usb4_port_sb_op(struct tb_port *port, enum usb4_sb_target target,
|
||||
u8 index, enum usb4_sb_opcode opcode, int timeout_msec)
|
||||
{
|
||||
ktime_t timeout;
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
val = opcode;
|
||||
ret = usb4_port_sb_write(port, target, index, USB4_SB_OPCODE, &val,
|
||||
sizeof(val));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
timeout = ktime_add_ms(ktime_get(), timeout_msec);
|
||||
|
||||
do {
|
||||
/* Check results */
|
||||
ret = usb4_port_sb_read(port, target, index, USB4_SB_OPCODE,
|
||||
&val, sizeof(val));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
switch (val) {
|
||||
case 0:
|
||||
return 0;
|
||||
|
||||
case USB4_SB_OPCODE_ERR:
|
||||
return -EAGAIN;
|
||||
|
||||
case USB4_SB_OPCODE_ONS:
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
default:
|
||||
if (val != opcode)
|
||||
return -EIO;
|
||||
break;
|
||||
}
|
||||
} while (ktime_before(ktime_get(), timeout));
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
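To make the sideband flow above easier to follow, here is a minimal user-space model of the "write opcode, poll until the router clears it" pattern that usb4_port_sb_op() implements. Everything in it (the register stub, the constant names) is illustrative only and not part of the driver:

/* Illustrative model of usb4_port_sb_op()'s poll loop: the caller writes an
 * opcode and then re-reads the same register until it reads back 0 (done),
 * an error code, or the timeout expires. All names here are stand-ins. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <errno.h>

#define OPCODE_DONE 0           /* register reads back 0 when the op finished */
#define OPCODE_ERR  0xFFFFFFFFu /* stand-in for an error status */

static uint32_t fake_opcode_reg;        /* pretend sideband OPCODE register */

static uint32_t read_opcode_reg(void)
{
	/* A real implementation would do a sideband register read here. */
	return fake_opcode_reg;
}

static int poll_opcode(uint32_t opcode, int timeout_ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		uint32_t val = read_opcode_reg();

		if (val == OPCODE_DONE)
			return 0;               /* router cleared the opcode */
		if (val == OPCODE_ERR)
			return -EAGAIN;         /* router flagged an error */
		if (val != opcode)
			return -EIO;            /* unexpected value */

		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000 +
		    (now.tv_nsec - start.tv_nsec) / 1000000 > timeout_ms)
			return -ETIMEDOUT;
	}
}

int main(void)
{
	fake_opcode_reg = OPCODE_DONE;  /* simulate an op that already completed */
	printf("poll result: %d\n", poll_opcode(0x12, 500));
	return 0;
}
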
/**
 * usb4_port_enumerate_retimers() - Send RT broadcast transaction
 * @port: USB4 port
 *
 * This forces the USB4 port to send a broadcast RT transaction, which
 * makes the retimers on the link assign indices to themselves. Returns
 * %0 in case of success and negative errno if there was an error.
 */
|
||||
int usb4_port_enumerate_retimers(struct tb_port *port)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
val = USB4_SB_OPCODE_ENUMERATE_RETIMERS;
|
||||
return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
|
||||
USB4_SB_OPCODE, &val, sizeof(val));
|
||||
}
|
||||
|
||||
static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
|
||||
enum usb4_sb_opcode opcode,
|
||||
int timeout_msec)
|
||||
{
|
||||
return usb4_port_sb_op(port, USB4_SB_TARGET_RETIMER, index, opcode,
|
||||
timeout_msec);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_read() - Read from retimer sideband registers
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
* @reg: Sideband register to read
|
||||
* @buf: Data from @reg is stored here
|
||||
* @size: Number of bytes to read
|
||||
*
|
||||
* Function reads retimer sideband registers starting from @reg. The
|
||||
* retimer is connected to @port at @index. Returns %0 in case of
|
||||
* success, and read data is copied to @buf. If there is no retimer
|
||||
* present at given @index returns %-ENODEV. In any other failure
|
||||
* returns negative errno.
|
||||
*/
|
||||
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
|
||||
u8 size)
|
||||
{
|
||||
return usb4_port_sb_read(port, USB4_SB_TARGET_RETIMER, index, reg, buf,
|
||||
size);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_write() - Write to retimer sideband registers
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
* @reg: Sideband register to write
|
||||
* @buf: Data that is written starting from @reg
|
||||
* @size: Number of bytes to write
|
||||
*
|
||||
* Writes retimer sideband registers starting from @reg. The retimer is
|
||||
* connected to @port at @index. Returns %0 in case of success. If there
|
||||
* is no retimer present at given @index returns %-ENODEV. In any other
|
||||
* failure returns negative errno.
|
||||
*/
|
||||
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
|
||||
const void *buf, u8 size)
|
||||
{
|
||||
return usb4_port_sb_write(port, USB4_SB_TARGET_RETIMER, index, reg, buf,
|
||||
size);
|
||||
}
|
||||
|
||||
/**
 * usb4_port_retimer_is_last() - Is the retimer the last on-board retimer
 * @port: USB4 port
 * @index: Retimer index
 *
 * If the retimer at @index is the last one (connected directly to the
 * Type-C port) this function returns %1, otherwise it returns %0. If the
 * retimer is not present it returns %-ENODEV. Any other failure returns a
 * negative errno.
 */
|
||||
int usb4_port_retimer_is_last(struct tb_port *port, u8 index)
|
||||
{
|
||||
u32 metadata;
|
||||
int ret;
|
||||
|
||||
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_QUERY_LAST_RETIMER,
|
||||
500);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA, &metadata,
|
||||
sizeof(metadata));
|
||||
return ret ? ret : metadata & 1;
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_nvm_sector_size() - Read retimer NVM sector size
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
*
|
||||
* Reads NVM sector size (in bytes) of a retimer at @index. This
|
||||
* operation can be used to determine whether the retimer supports NVM
|
||||
* upgrade for example. Returns sector size in bytes or negative errno
|
||||
* in case of error. Specifically returns %-ENODEV if there is no
|
||||
* retimer at @index.
|
||||
*/
|
||||
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
|
||||
{
|
||||
u32 metadata;
|
||||
int ret;
|
||||
|
||||
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE,
|
||||
500);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA, &metadata,
|
||||
sizeof(metadata));
|
||||
return ret ? ret : metadata & USB4_NVM_SECTOR_SIZE_MASK;
|
||||
}
|
||||
|
||||
static int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
|
||||
unsigned int address)
|
||||
{
|
||||
u32 metadata, dwaddress;
|
||||
int ret;
|
||||
|
||||
dwaddress = address / 4;
|
||||
metadata = (dwaddress << USB4_NVM_SET_OFFSET_SHIFT) &
|
||||
USB4_NVM_SET_OFFSET_MASK;
|
||||
|
||||
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
|
||||
sizeof(metadata));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return usb4_port_retimer_op(port, index, USB4_SB_OPCODE_NVM_SET_OFFSET,
|
||||
500);
|
||||
}
|
||||
|
||||
struct retimer_info {
|
||||
struct tb_port *port;
|
||||
u8 index;
|
||||
};
|
||||
|
||||
static int usb4_port_retimer_nvm_write_next_block(void *data, const void *buf,
|
||||
size_t dwords)
|
||||
|
||||
{
|
||||
const struct retimer_info *info = data;
|
||||
struct tb_port *port = info->port;
|
||||
u8 index = info->index;
|
||||
int ret;
|
||||
|
||||
ret = usb4_port_retimer_write(port, index, USB4_SB_DATA,
|
||||
buf, dwords * 4);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return usb4_port_retimer_op(port, index,
|
||||
USB4_SB_OPCODE_NVM_BLOCK_WRITE, 1000);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_nvm_write() - Write to retimer NVM
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
* @address: Byte address where to start the write
|
||||
* @buf: Data to write
|
||||
* @size: Size in bytes how much to write
|
||||
*
|
||||
* Writes @size bytes from @buf to the retimer NVM. Used for NVM
|
||||
* upgrade. Returns %0 if the data was written successfully and negative
|
||||
* errno in case of failure. Specifically returns %-ENODEV if there is
|
||||
* no retimer at @index.
|
||||
*/
|
||||
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int address,
|
||||
const void *buf, size_t size)
|
||||
{
|
||||
struct retimer_info info = { .port = port, .index = index };
|
||||
int ret;
|
||||
|
||||
ret = usb4_port_retimer_nvm_set_offset(port, index, address);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return usb4_do_write_data(address, buf, size,
|
||||
usb4_port_retimer_nvm_write_next_block, &info);
|
||||
}
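One goal of the rework visible in this diff is that usb4_do_write_data() no longer takes a struct tb_switch: the block-write callback now receives an opaque context pointer, so the same chunking loop can drive router NVM writes (context is the switch) and retimer NVM writes (context is a port plus retimer index, see struct retimer_info above). A rough stand-alone sketch of that pattern follows; the helper name, the 64-byte block size and the fake device type are assumptions made for illustration, not the driver's exact definitions:

/* Stand-alone sketch of the reworked usb4_do_write_data() pattern: the
 * chunking loop only knows about a write callback and an opaque context,
 * so the same loop can serve several kinds of NVM targets. The helper
 * name, the 64-byte block size and struct fake_dev are assumptions. */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define BLOCK_BYTES 64

typedef int (*write_block_fn)(void *ctx, const void *block, size_t dwords);

static int do_write_data(unsigned int address, const void *buf, size_t size,
			 write_block_fn write_block, void *ctx)
{
	unsigned int offset = address & 3;	/* keep writes dword aligned */
	const char *p = buf;

	do {
		char block[BLOCK_BYTES] = { 0 };
		size_t nbytes = size + offset;
		int ret;

		if (nbytes > BLOCK_BYTES)
			nbytes = BLOCK_BYTES;

		memcpy(block + offset, p, nbytes - offset);

		ret = write_block(ctx, block, (nbytes + 3) / 4);
		if (ret)
			return ret;

		size -= nbytes - offset;
		p += nbytes - offset;
		offset = 0;
	} while (size > 0);

	return 0;
}

struct fake_dev { const char *name; };

static int fake_write_block(void *ctx, const void *block, size_t dwords)
{
	struct fake_dev *dev = ctx;

	(void)block;
	printf("%s: wrote %zu dwords\n", dev->name, dwords);
	return 0;
}

int main(void)
{
	struct fake_dev dev = { .name = "retimer0" };
	char payload[100] = { 0 };

	return do_write_data(0, payload, sizeof(payload), fake_write_block, &dev);
}
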
/**
 * usb4_port_retimer_nvm_authenticate() - Start retimer NVM upgrade
 * @port: USB4 port
 * @index: Retimer index
 *
 * After the new NVM image has been written via usb4_port_retimer_nvm_write()
 * this function can be used to trigger the NVM upgrade process. If
 * successful, the retimer restarts with the new NVM and may not have its
 * index set, so the caller needs to call usb4_port_enumerate_retimers() to
 * force an index to be assigned.
 */
|
||||
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
/*
|
||||
* We need to use the raw operation here because once the
|
||||
* authentication completes the retimer index is not set anymore
|
||||
* so we do not get back the status now.
|
||||
*/
|
||||
val = USB4_SB_OPCODE_NVM_AUTH_WRITE;
|
||||
return usb4_port_sb_write(port, USB4_SB_TARGET_RETIMER, index,
|
||||
USB4_SB_OPCODE, &val, sizeof(val));
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_nvm_authenticate_status() - Read status of NVM upgrade
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
* @status: Raw status code read from metadata
|
||||
*
|
||||
* This can be called after usb4_port_retimer_nvm_authenticate() and
|
||||
* usb4_port_enumerate_retimers() to fetch status of the NVM upgrade.
|
||||
*
|
||||
* Returns %0 if the authentication status was successfully read. The
|
||||
* completion metadata (the result) is then stored into @status. If
|
||||
* reading the status fails, returns negative errno.
|
||||
*/
|
||||
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
|
||||
u32 *status)
|
||||
{
|
||||
u32 metadata, val;
|
||||
int ret;
|
||||
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_OPCODE, &val,
|
||||
sizeof(val));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
switch (val) {
|
||||
case 0:
|
||||
*status = 0;
|
||||
return 0;
|
||||
|
||||
case USB4_SB_OPCODE_ERR:
|
||||
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA,
|
||||
&metadata, sizeof(metadata));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
*status = metadata & USB4_SB_METADATA_NVM_AUTH_WRITE_MASK;
|
||||
return 0;
|
||||
|
||||
case USB4_SB_OPCODE_ONS:
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
default:
|
||||
return -EIO;
|
||||
}
|
||||
}
|
||||
|
||||
static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
|
||||
void *buf, size_t dwords)
|
||||
{
|
||||
const struct retimer_info *info = data;
|
||||
struct tb_port *port = info->port;
|
||||
u8 index = info->index;
|
||||
u32 metadata;
|
||||
int ret;
|
||||
|
||||
metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
|
||||
if (dwords < USB4_DATA_DWORDS)
|
||||
metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;
|
||||
|
||||
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
|
||||
sizeof(metadata));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_NVM_READ, 500);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return usb4_port_retimer_read(port, index, USB4_SB_DATA, buf,
|
||||
dwords * 4);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_port_retimer_nvm_read() - Read contents of retimer NVM
|
||||
* @port: USB4 port
|
||||
* @index: Retimer index
|
||||
* @address: NVM address (in bytes) to start reading
|
||||
* @buf: Data read from NVM is stored here
|
||||
* @size: Number of bytes to read
|
||||
*
|
||||
* Reads retimer NVM and copies the contents to @buf. Returns %0 if the
|
||||
* read was successful and negative errno in case of failure.
|
||||
* Specifically returns %-ENODEV if there is no retimer at @index.
|
||||
*/
|
||||
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
|
||||
unsigned int address, void *buf, size_t size)
|
||||
{
|
||||
struct retimer_info info = { .port = port, .index = index };
|
||||
|
||||
return usb4_do_read_data(address, buf, size,
|
||||
usb4_port_retimer_nvm_read_block, &info);
|
||||
}
|
||||
|
||||
/**
 * usb4_usb3_port_max_link_rate() - Maximum supported USB3 link rate
 * @port: USB3 adapter port
 *
 * Returns the maximum supported link rate of a USB3 adapter in Mb/s,
 * or negative errno in case of error.
 */
|
||||
int usb4_usb3_port_max_link_rate(struct tb_port *port)
|
||||
{
|
||||
int ret, lr;
|
||||
u32 val;
|
||||
|
||||
if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
|
||||
return -EINVAL;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_4, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
lr = (val & ADP_USB3_CS_4_MSLR_MASK) >> ADP_USB3_CS_4_MSLR_SHIFT;
|
||||
return lr == ADP_USB3_CS_4_MSLR_20G ? 20000 : 10000;
|
||||
}
|
||||
|
||||
/**
 * usb4_usb3_port_actual_link_rate() - Established USB3 link rate
 * @port: USB3 adapter port
 *
 * Returns the actual established link rate of a USB3 adapter in Mb/s. If
 * the link is not up, returns %0; negative errno in case of failure.
 */
|
||||
int usb4_usb3_port_actual_link_rate(struct tb_port *port)
|
||||
{
|
||||
int ret, lr;
|
||||
u32 val;
|
||||
|
||||
if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
|
||||
return -EINVAL;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_4, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (!(val & ADP_USB3_CS_4_ULV))
|
||||
return 0;
|
||||
|
||||
lr = val & ADP_USB3_CS_4_ALR_MASK;
|
||||
return lr == ADP_USB3_CS_4_ALR_20G ? 20000 : 10000;
|
||||
}
|
||||
|
||||
static int usb4_usb3_port_cm_request(struct tb_port *port, bool request)
|
||||
{
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
if (!tb_port_is_usb3_down(port))
|
||||
return -EINVAL;
|
||||
if (tb_route(port->sw))
|
||||
return -EINVAL;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_2, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (request)
|
||||
val |= ADP_USB3_CS_2_CMR;
|
||||
else
|
||||
val &= ~ADP_USB3_CS_2_CMR;
|
||||
|
||||
ret = tb_port_write(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_2, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* We can use val here directly as the CMR bit is in the same place
|
||||
* as HCA. Just mask out others.
|
||||
*/
|
||||
val &= ADP_USB3_CS_2_CMR;
|
||||
return usb4_port_wait_for_bit(port, port->cap_adap + ADP_USB3_CS_1,
|
||||
ADP_USB3_CS_1_HCA, val, 1500);
|
||||
}
|
||||
|
||||
static inline int usb4_usb3_port_set_cm_request(struct tb_port *port)
|
||||
{
|
||||
return usb4_usb3_port_cm_request(port, true);
|
||||
}
|
||||
|
||||
static inline int usb4_usb3_port_clear_cm_request(struct tb_port *port)
|
||||
{
|
||||
return usb4_usb3_port_cm_request(port, false);
|
||||
}
|
||||
|
||||
static unsigned int usb3_bw_to_mbps(u32 bw, u8 scale)
|
||||
{
|
||||
unsigned long uframes;
|
||||
|
||||
uframes = bw * 512UL << scale;
|
||||
return DIV_ROUND_CLOSEST(uframes * 8000, 1000 * 1000);
|
||||
}
|
||||
|
||||
static u32 mbps_to_usb3_bw(unsigned int mbps, u8 scale)
|
||||
{
|
||||
unsigned long uframes;
|
||||
|
||||
/* 1 uframe is 1/8 ms (125 us) -> 1 / 8000 s */
|
||||
uframes = ((unsigned long)mbps * 1000 * 1000) / 8000;
|
||||
return DIV_ROUND_UP(uframes, 512UL << scale);
|
||||
}
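The two helpers above convert between Mb/s and the allocation units the USB3 adapter registers use: one unit corresponds to 512 << scale per microframe, and there are 8000 microframes per second. As a sanity check, the same arithmetic rebuilt as a stand-alone program (the macros mirror the kernel's rounding helpers; the sample values are made up):

/* Round-trip of the usb3_bw_to_mbps()/mbps_to_usb3_bw() arithmetic from the
 * diff, rebuilt as plain C so it can be run stand-alone. The formulas are
 * copied from the patch; the sample values are invented. */
#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
#define DIV_ROUND_UP(x, d)      (((x) + (d) - 1) / (d))

static unsigned int usb3_bw_to_mbps(unsigned int bw, unsigned int scale)
{
	unsigned long uframes = (unsigned long)bw * 512UL << scale;

	/* 8000 microframes per second, scaled down to Mb/s */
	return DIV_ROUND_CLOSEST(uframes * 8000, 1000 * 1000);
}

static unsigned int mbps_to_usb3_bw(unsigned int mbps, unsigned int scale)
{
	/* 1 uframe is 1/8 ms (125 us) -> 1 / 8000 s */
	unsigned long uframes = ((unsigned long)mbps * 1000 * 1000) / 8000;

	return DIV_ROUND_UP(uframes, 512UL << scale);
}

int main(void)
{
	unsigned int scale = 0;
	unsigned int bw = mbps_to_usb3_bw(900, scale);

	/* 900 Mb/s -> 220 allocation units -> reads back as 901 Mb/s */
	printf("units=%u, back=%u Mb/s\n", bw, usb3_bw_to_mbps(bw, scale));
	return 0;
}

Running this prints units=220, back=901 Mb/s, which is why the driver reports the rounded allocation back to the caller instead of echoing the requested value.
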
static int usb4_usb3_port_read_allocated_bandwidth(struct tb_port *port,
|
||||
int *upstream_bw,
|
||||
int *downstream_bw)
|
||||
{
|
||||
u32 val, bw, scale;
|
||||
int ret;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_2, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = tb_port_read(port, &scale, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_3, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
scale &= ADP_USB3_CS_3_SCALE_MASK;
|
||||
|
||||
bw = val & ADP_USB3_CS_2_AUBW_MASK;
|
||||
*upstream_bw = usb3_bw_to_mbps(bw, scale);
|
||||
|
||||
bw = (val & ADP_USB3_CS_2_ADBW_MASK) >> ADP_USB3_CS_2_ADBW_SHIFT;
|
||||
*downstream_bw = usb3_bw_to_mbps(bw, scale);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_usb3_port_allocated_bandwidth() - Bandwidth allocated for USB3
|
||||
* @port: USB3 adapter port
|
||||
* @upstream_bw: Allocated upstream bandwidth is stored here
|
||||
* @downstream_bw: Allocated downstream bandwidth is stored here
|
||||
*
|
||||
* Stores currently allocated USB3 bandwidth into @upstream_bw and
|
||||
* @downstream_bw in Mb/s. Returns %0 in case of success and negative
|
||||
* errno in failure.
|
||||
*/
|
||||
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = usb4_usb3_port_set_cm_request(port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_usb3_port_read_allocated_bandwidth(port, upstream_bw,
|
||||
downstream_bw);
|
||||
usb4_usb3_port_clear_cm_request(port);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int usb4_usb3_port_read_consumed_bandwidth(struct tb_port *port,
|
||||
int *upstream_bw,
|
||||
int *downstream_bw)
|
||||
{
|
||||
u32 val, bw, scale;
|
||||
int ret;
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_1, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = tb_port_read(port, &scale, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_3, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
scale &= ADP_USB3_CS_3_SCALE_MASK;
|
||||
|
||||
bw = val & ADP_USB3_CS_1_CUBW_MASK;
|
||||
*upstream_bw = usb3_bw_to_mbps(bw, scale);
|
||||
|
||||
bw = (val & ADP_USB3_CS_1_CDBW_MASK) >> ADP_USB3_CS_1_CDBW_SHIFT;
|
||||
*downstream_bw = usb3_bw_to_mbps(bw, scale);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
|
||||
int upstream_bw,
|
||||
int downstream_bw)
|
||||
{
|
||||
u32 val, ubw, dbw, scale;
|
||||
int ret;
|
||||
|
||||
/* Read the used scale, hardware default is 0 */
|
||||
ret = tb_port_read(port, &scale, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_3, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
scale &= ADP_USB3_CS_3_SCALE_MASK;
|
||||
ubw = mbps_to_usb3_bw(upstream_bw, scale);
|
||||
dbw = mbps_to_usb3_bw(downstream_bw, scale);
|
||||
|
||||
ret = tb_port_read(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_2, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
val &= ~(ADP_USB3_CS_2_AUBW_MASK | ADP_USB3_CS_2_ADBW_MASK);
|
||||
val |= dbw << ADP_USB3_CS_2_ADBW_SHIFT;
|
||||
val |= ubw;
|
||||
|
||||
return tb_port_write(port, &val, TB_CFG_PORT,
|
||||
port->cap_adap + ADP_USB3_CS_2, 1);
|
||||
}
|
||||
|
||||
/**
|
||||
* usb4_usb3_port_allocate_bandwidth() - Allocate bandwidth for USB3
|
||||
* @port: USB3 adapter port
|
||||
* @upstream_bw: New upstream bandwidth
|
||||
* @downstream_bw: New downstream bandwidth
|
||||
*
|
||||
* This can be used to set how much bandwidth is allocated for the USB3
|
||||
* tunneled isochronous traffic. @upstream_bw and @downstream_bw are the
|
||||
* new values programmed to the USB3 adapter allocation registers. If
|
||||
* the values are lower than what is currently consumed the allocation
|
||||
* is set to what is currently consumed instead (consumed bandwidth
|
||||
* cannot be taken away by CM). The actual new values are returned in
|
||||
* @upstream_bw and @downstream_bw.
|
||||
*
|
||||
* Returns %0 in case of success and negative errno if there was a
|
||||
* failure.
|
||||
*/
|
||||
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw)
|
||||
{
|
||||
int ret, consumed_up, consumed_down, allocate_up, allocate_down;
|
||||
|
||||
ret = usb4_usb3_port_set_cm_request(port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
|
||||
&consumed_down);
|
||||
if (ret)
|
||||
goto err_request;
|
||||
|
||||
/* Don't allow it to go lower than what is consumed */
|
||||
allocate_up = max(*upstream_bw, consumed_up);
|
||||
allocate_down = max(*downstream_bw, consumed_down);
|
||||
|
||||
ret = usb4_usb3_port_write_allocated_bandwidth(port, allocate_up,
|
||||
allocate_down);
|
||||
if (ret)
|
||||
goto err_request;
|
||||
|
||||
*upstream_bw = allocate_up;
|
||||
*downstream_bw = allocate_down;
|
||||
|
||||
err_request:
|
||||
usb4_usb3_port_clear_cm_request(port);
|
||||
return ret;
|
||||
}
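The allocation path just shown brackets the register update between the CM request handshake and clamps the requested values so they never drop below what the link already consumes. A tiny stand-alone model of that clamp-and-report behaviour (all functions and numbers below are stubs made up for illustration):

/* Model of usb4_usb3_port_allocate_bandwidth()'s clamp: take the CM request,
 * read what the link currently consumes, never allocate less than that, and
 * report back what was actually programmed. Everything here is a stub. */
#include <stdio.h>

static int consumed_up = 300, consumed_down = 1200;   /* pretend link state */

static int set_cm_request(void)    { return 0; }       /* stub handshake */
static void clear_cm_request(void) { }

static int allocate_bandwidth(int *up, int *down)
{
	int ret = set_cm_request();

	if (ret)
		return ret;

	/* Don't allow the allocation to go below current consumption. */
	if (*up < consumed_up)
		*up = consumed_up;
	if (*down < consumed_down)
		*down = consumed_down;

	clear_cm_request();
	return 0;
}

int main(void)
{
	int up = 200, down = 2000;      /* requested values */

	allocate_bandwidth(&up, &down);
	printf("allocated up=%d down=%d\n", up, down);  /* prints 300 and 2000 */
	return 0;
}
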
/**
 * usb4_usb3_port_release_bandwidth() - Release allocated USB3 bandwidth
 * @port: USB3 adapter port
 * @upstream_bw: New allocated upstream bandwidth
 * @downstream_bw: New allocated downstream bandwidth
 *
 * Releases USB3 allocated bandwidth down to what is actually consumed.
 * The new bandwidth is returned in @upstream_bw and @downstream_bw.
 *
 * Returns %0 in case of success and negative errno in case of failure.
 */
|
||||
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
|
||||
int *downstream_bw)
|
||||
{
|
||||
int ret, consumed_up, consumed_down;
|
||||
|
||||
ret = usb4_usb3_port_set_cm_request(port);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
|
||||
&consumed_down);
|
||||
if (ret)
|
||||
goto err_request;
|
||||
|
||||
/*
|
||||
* Always keep 1000 Mb/s to make sure xHCI has at least some
|
||||
* bandwidth available for isochronous traffic.
|
||||
*/
|
||||
if (consumed_up < 1000)
|
||||
consumed_up = 1000;
|
||||
if (consumed_down < 1000)
|
||||
consumed_down = 1000;
|
||||
|
||||
ret = usb4_usb3_port_write_allocated_bandwidth(port, consumed_up,
|
||||
consumed_down);
|
||||
if (ret)
|
||||
goto err_request;
|
||||
|
||||
*upstream_bw = consumed_up;
|
||||
*downstream_bw = consumed_down;
|
||||
|
||||
err_request:
|
||||
usb4_usb3_port_clear_cm_request(port);
|
||||
return ret;
|
||||
}
|
||||
|
@ -501,6 +501,55 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler)
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);
|
||||
|
||||
static int rebuild_property_block(void)
|
||||
{
|
||||
u32 *block, len;
|
||||
int ret;
|
||||
|
||||
ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
len = ret;
|
||||
|
||||
block = kcalloc(len, sizeof(u32), GFP_KERNEL);
|
||||
if (!block)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = tb_property_format_dir(xdomain_property_dir, block, len);
|
||||
if (ret) {
|
||||
kfree(block);
|
||||
return ret;
|
||||
}
|
||||
|
||||
kfree(xdomain_property_block);
|
||||
xdomain_property_block = block;
|
||||
xdomain_property_block_len = len;
|
||||
xdomain_property_block_gen++;
|
||||
|
||||
return 0;
|
||||
}
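rebuild_property_block() (moved earlier in the file by this patch so the XDomain request path can reuse it) follows the usual two-pass idiom: ask the formatter for the required length with a NULL buffer, allocate, then format for real and swap the new block in. A hedged user-space sketch of the same idiom, with a toy format_dir() standing in for tb_property_format_dir():

/* Two-pass "size, allocate, fill" idiom used by rebuild_property_block().
 * format_dir() here is a toy stand-in for tb_property_format_dir(); like
 * the real helper, a call with a NULL buffer reports the required length. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>

static int format_dir(const char *dir, uint32_t *block, size_t len)
{
	size_t need = (strlen(dir) + 3) / 4;	/* toy: one dword per 4 chars */

	if (!block)
		return (int)need;		/* pass 1: report required length */
	if (len < need)
		return -ENOSPC;
	memcpy(block, dir, strlen(dir));	/* pass 2: fill the buffer */
	return 0;
}

static int rebuild_block(const char *dir, uint32_t **out, size_t *out_len)
{
	uint32_t *block;
	int ret = format_dir(dir, NULL, 0);

	if (ret < 0)
		return ret;

	block = calloc(ret, sizeof(*block));
	if (!block)
		return -ENOMEM;

	if (format_dir(dir, block, ret)) {
		free(block);
		return -EINVAL;
	}

	free(*out);		/* drop the old block, keep the new one */
	*out = block;
	*out_len = ret;
	return 0;
}

int main(void)
{
	uint32_t *block = NULL;
	size_t len = 0;

	if (!rebuild_block("vendorid=Intel Corp.", &block, &len))
		printf("formatted %zu dwords\n", len);
	free(block);
	return 0;
}
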
static void finalize_property_block(void)
|
||||
{
|
||||
const struct tb_property *nodename;
|
||||
|
||||
/*
 * On the first XDomain connection we set up the system
 * nodename. This is delayed here because userspace may not have
 * it set when the driver is first probed.
 */
|
||||
mutex_lock(&xdomain_lock);
|
||||
nodename = tb_property_find(xdomain_property_dir, "deviceid",
|
||||
TB_PROPERTY_TYPE_TEXT);
|
||||
if (!nodename) {
|
||||
tb_property_add_text(xdomain_property_dir, "deviceid",
|
||||
utsname()->nodename);
|
||||
rebuild_property_block();
|
||||
}
|
||||
mutex_unlock(&xdomain_lock);
|
||||
}
|
||||
|
||||
static void tb_xdp_handle_request(struct work_struct *work)
|
||||
{
|
||||
struct xdomain_request_work *xw = container_of(work, typeof(*xw), work);
|
||||
@ -529,6 +578,8 @@ static void tb_xdp_handle_request(struct work_struct *work)
|
||||
goto out;
|
||||
}
|
||||
|
||||
finalize_property_block();
|
||||
|
||||
switch (pkg->type) {
|
||||
case PROPERTIES_REQUEST:
|
||||
ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid,
|
||||
@ -1569,35 +1620,6 @@ bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type,
|
||||
return ret > 0;
|
||||
}
|
||||
|
||||
static int rebuild_property_block(void)
|
||||
{
|
||||
u32 *block, len;
|
||||
int ret;
|
||||
|
||||
ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
len = ret;
|
||||
|
||||
block = kcalloc(len, sizeof(u32), GFP_KERNEL);
|
||||
if (!block)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = tb_property_format_dir(xdomain_property_dir, block, len);
|
||||
if (ret) {
|
||||
kfree(block);
|
||||
return ret;
|
||||
}
|
||||
|
||||
kfree(xdomain_property_block);
|
||||
xdomain_property_block = block;
|
||||
xdomain_property_block_len = len;
|
||||
xdomain_property_block_gen++;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int update_xdomain(struct device *dev, void *data)
|
||||
{
|
||||
struct tb_xdomain *xd;
|
||||
@ -1702,8 +1724,6 @@ EXPORT_SYMBOL_GPL(tb_unregister_property_dir);
|
||||
|
||||
int tb_xdomain_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
xdomain_property_dir = tb_property_create_dir(NULL);
|
||||
if (!xdomain_property_dir)
|
||||
return -ENOMEM;
|
||||
@ -1712,22 +1732,16 @@ int tb_xdomain_init(void)
|
||||
* Initialize standard set of properties without any service
|
||||
* directories. Those will be added by service drivers
|
||||
* themselves when they are loaded.
|
||||
*
|
||||
* We also add node name later when first connection is made.
|
||||
*/
|
||||
tb_property_add_immediate(xdomain_property_dir, "vendorid",
|
||||
PCI_VENDOR_ID_INTEL);
|
||||
tb_property_add_text(xdomain_property_dir, "vendorid", "Intel Corp.");
|
||||
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
|
||||
tb_property_add_text(xdomain_property_dir, "deviceid",
|
||||
utsname()->nodename);
|
||||
tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
|
||||
|
||||
ret = rebuild_property_block();
|
||||
if (ret) {
|
||||
tb_property_free_dir(xdomain_property_dir);
|
||||
xdomain_property_dir = NULL;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
void tb_xdomain_exit(void)
|
||||
|
@ -408,7 +408,7 @@ static ssize_t adsl_state_store(struct device *dev,
|
||||
case CXPOLL_STOPPING:
|
||||
/* abort stop request */
|
||||
instance->poll_state = CXPOLL_POLLING;
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case CXPOLL_POLLING:
|
||||
case CXPOLL_SHUTDOWN:
|
||||
/* don't start polling */
|
||||
@ -802,7 +802,7 @@ static int cxacru_atm_start(struct usbatm_data *usbatm_instance,
|
||||
case CXPOLL_STOPPING:
|
||||
/* abort stop request */
|
||||
instance->poll_state = CXPOLL_POLLING;
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case CXPOLL_POLLING:
|
||||
case CXPOLL_SHUTDOWN:
|
||||
/* don't start polling */
|
||||
|
@ -570,7 +570,7 @@ MODULE_PARM_DESC(annex,
|
||||
#define LOAD_INTERNAL 0xA0
|
||||
#define F8051_USBCS 0x7f92
|
||||
|
||||
/**
|
||||
/*
|
||||
* uea_send_modem_cmd - Send a command for pre-firmware devices.
|
||||
*/
|
||||
static int uea_send_modem_cmd(struct usb_device *usb,
|
||||
@ -672,7 +672,7 @@ err:
|
||||
uea_leaves(usb);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* uea_load_firmware - Load usb firmware for pre-firmware devices.
|
||||
*/
|
||||
static int uea_load_firmware(struct usb_device *usb, unsigned int ver)
|
||||
|
@ -228,7 +228,7 @@ static int c67x00_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
|
||||
* Main part of host controller driver
|
||||
*/
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_hcd_irq
|
||||
*
|
||||
* This function is called from the interrupt handler in c67x00-drv.c
|
||||
@ -260,7 +260,7 @@ static void c67x00_hcd_irq(struct c67x00_sie *sie, u16 int_status, u16 msg)
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_hcd_start: Host controller start hook
|
||||
*/
|
||||
static int c67x00_hcd_start(struct usb_hcd *hcd)
|
||||
@ -272,7 +272,7 @@ static int c67x00_hcd_start(struct usb_hcd *hcd)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_hcd_stop: Host controller stop hook
|
||||
*/
|
||||
static void c67x00_hcd_stop(struct usb_hcd *hcd)
|
||||
|
@ -262,7 +262,7 @@ u16 c67x00_ll_get_usb_ctl(struct c67x00_sie *sie)
|
||||
return hpi_read_word(sie->dev, USB_CTL_REG(sie->sie_num));
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_ll_usb_clear_status - clear the USB status bits
|
||||
*/
|
||||
void c67x00_ll_usb_clear_status(struct c67x00_sie *sie, u16 bits)
|
||||
@ -395,7 +395,7 @@ int c67x00_ll_reset(struct c67x00_device *dev)
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_ll_write_mem_le16 - write into c67x00 memory
|
||||
* Only data is little endian, addr has cpu endianess.
|
||||
*/
|
||||
@ -434,7 +434,7 @@ void c67x00_ll_write_mem_le16(struct c67x00_device *dev, u16 addr,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_ll_read_mem_le16 - read from c67x00 memory
|
||||
* Only data is little endian, addr has cpu endianess.
|
||||
*/
|
||||
|
@ -23,7 +23,7 @@
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* struct c67x00_ep_data: Host endpoint data structure
|
||||
*/
|
||||
struct c67x00_ep_data {
|
||||
@ -34,7 +34,7 @@ struct c67x00_ep_data {
|
||||
u16 next_frame; /* For int/isoc transactions */
|
||||
};
|
||||
|
||||
/**
|
||||
/*
|
||||
* struct c67x00_td
|
||||
*
|
||||
* Hardware parts are little endian, SW in CPU endianness.
|
||||
@ -130,7 +130,7 @@ struct c67x00_urb_priv {
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* dbg_td - Dump the contents of the TD
|
||||
*/
|
||||
static void dbg_td(struct c67x00_hcd *c67x00, struct c67x00_td *td, char *msg)
|
||||
@ -161,7 +161,7 @@ static inline u16 c67x00_get_current_frame_number(struct c67x00_hcd *c67x00)
|
||||
return c67x00_ll_husb_get_frame(c67x00->sie) & HOST_FRAME_MASK;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* frame_add
|
||||
* Software wraparound for framenumbers.
|
||||
*/
|
||||
@ -170,7 +170,7 @@ static inline u16 frame_add(u16 a, u16 b)
|
||||
return (a + b) & HOST_FRAME_MASK;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* frame_after - is frame a after frame b
|
||||
*/
|
||||
static inline int frame_after(u16 a, u16 b)
|
||||
@ -179,7 +179,7 @@ static inline int frame_after(u16 a, u16 b)
|
||||
(HOST_FRAME_MASK / 2);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* frame_after_eq - is frame a after or equal to frame b
|
||||
*/
|
||||
static inline int frame_after_eq(u16 a, u16 b)
|
||||
@ -190,7 +190,7 @@ static inline int frame_after_eq(u16 a, u16 b)
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_release_urb - remove link from all tds to this urb
|
||||
* Disconnects the urb from its tds, so that it can be given back.
|
||||
* pre: urb->hcpriv != NULL
|
||||
@ -557,7 +557,7 @@ static int c67x00_claim_frame_bw(struct c67x00_hcd *c67x00, struct urb *urb,
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* td_addr and buf_addr must be word aligned
|
||||
*/
|
||||
static int c67x00_create_td(struct c67x00_hcd *c67x00, struct urb *urb,
|
||||
@ -685,7 +685,7 @@ static int c67x00_add_data_urb(struct c67x00_hcd *c67x00, struct urb *urb)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* return 0 in case more bandwidth is available, else errorcode
|
||||
*/
|
||||
static int c67x00_add_ctrl_urb(struct c67x00_hcd *c67x00, struct urb *urb)
|
||||
@ -822,7 +822,7 @@ static void c67x00_fill_frame(struct c67x00_hcd *c67x00)
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* Get TD from C67X00
|
||||
*/
|
||||
static inline void
|
||||
@ -970,7 +970,7 @@ static void c67x00_handle_isoc(struct c67x00_hcd *c67x00, struct c67x00_td *td)
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_check_td_list - handle tds which have been processed by the c67x00
|
||||
* pre: current_td == 0
|
||||
*/
|
||||
@ -1045,7 +1045,7 @@ static inline int c67x00_all_tds_processed(struct c67x00_hcd *c67x00)
|
||||
return !c67x00_ll_husb_get_current_td(c67x00->sie);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* Send td to C67X00
|
||||
*/
|
||||
static void c67x00_send_td(struct c67x00_hcd *c67x00, struct c67x00_td *td)
|
||||
@ -1081,7 +1081,7 @@ static void c67x00_send_frame(struct c67x00_hcd *c67x00)
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
|
||||
/**
|
||||
/*
|
||||
* c67x00_do_work - Schedulers state machine
|
||||
*/
|
||||
static void c67x00_do_work(struct c67x00_hcd *c67x00)
|
||||
|
@ -2,7 +2,7 @@
|
||||
/**
|
||||
* cdns3-ti.c - TI specific Glue layer for Cadence USB Controller
|
||||
*
|
||||
* Copyright (C) 2019 Texas Instruments Incorporated - http://www.ti.com
|
||||
* Copyright (C) 2019 Texas Instruments Incorporated - https://www.ti.com
|
||||
*/
|
||||
|
||||
#include <linux/bits.h>
|
||||
|
@ -27,13 +27,6 @@
|
||||
|
||||
static int cdns3_idle_init(struct cdns3 *cdns);
|
||||
|
||||
static inline
|
||||
struct cdns3_role_driver *cdns3_get_current_role_driver(struct cdns3 *cdns)
|
||||
{
|
||||
WARN_ON(!cdns->roles[cdns->role]);
|
||||
return cdns->roles[cdns->role];
|
||||
}
|
||||
|
||||
static int cdns3_role_start(struct cdns3 *cdns, enum usb_role role)
|
||||
{
|
||||
int ret;
|
||||
@ -93,7 +86,7 @@ static int cdns3_core_init_role(struct cdns3 *cdns)
|
||||
struct device *dev = cdns->dev;
|
||||
enum usb_dr_mode best_dr_mode;
|
||||
enum usb_dr_mode dr_mode;
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
dr_mode = usb_get_dr_mode(dev);
|
||||
cdns->role = USB_ROLE_NONE;
|
||||
@ -184,7 +177,7 @@ static int cdns3_core_init_role(struct cdns3 *cdns)
|
||||
goto err;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
err:
|
||||
cdns3_exit_roles(cdns);
|
||||
return ret;
|
||||
@ -198,11 +191,17 @@ err:
|
||||
*/
|
||||
static enum usb_role cdns3_hw_role_state_machine(struct cdns3 *cdns)
|
||||
{
|
||||
enum usb_role role;
|
||||
enum usb_role role = USB_ROLE_NONE;
|
||||
int id, vbus;
|
||||
|
||||
if (cdns->dr_mode != USB_DR_MODE_OTG)
|
||||
goto not_otg;
|
||||
if (cdns->dr_mode != USB_DR_MODE_OTG) {
|
||||
if (cdns3_is_host(cdns))
|
||||
role = USB_ROLE_HOST;
|
||||
if (cdns3_is_device(cdns))
|
||||
role = USB_ROLE_DEVICE;
|
||||
|
||||
return role;
|
||||
}
|
||||
|
||||
id = cdns3_get_id(cdns);
|
||||
vbus = cdns3_get_vbus(cdns);
|
||||
@ -239,14 +238,6 @@ static enum usb_role cdns3_hw_role_state_machine(struct cdns3 *cdns)
|
||||
dev_dbg(cdns->dev, "role %d -> %d\n", cdns->role, role);
|
||||
|
||||
return role;
|
||||
|
||||
not_otg:
|
||||
if (cdns3_is_host(cdns))
|
||||
role = USB_ROLE_HOST;
|
||||
if (cdns3_is_device(cdns))
|
||||
role = USB_ROLE_DEVICE;
|
||||
|
||||
return role;
|
||||
}
|
||||
|
||||
static int cdns3_idle_role_start(struct cdns3 *cdns)
|
||||
@ -282,7 +273,7 @@ static int cdns3_idle_init(struct cdns3 *cdns)
|
||||
|
||||
/**
|
||||
* cdns3_hw_role_switch - switch roles based on HW state
|
||||
* @cdns3: controller
|
||||
* @cdns: controller
|
||||
*/
|
||||
int cdns3_hw_role_switch(struct cdns3 *cdns)
|
||||
{
|
||||
@ -320,7 +311,7 @@ exit:
|
||||
/**
|
||||
* cdns3_role_get - get current role of controller.
|
||||
*
|
||||
* @dev: Pointer to device structure
|
||||
* @sw: pointer to USB role switch structure
|
||||
*
|
||||
* Returns role
|
||||
*/
|
||||
@ -334,8 +325,8 @@ static enum usb_role cdns3_role_get(struct usb_role_switch *sw)
|
||||
/**
|
||||
* cdns3_role_set - set current role of controller.
|
||||
*
|
||||
* @dev: pointer to device object
|
||||
* @role - the previous role
|
||||
* @sw: pointer to USB role switch structure
|
||||
* @role: the previous role
|
||||
* Handles below events:
|
||||
* - Role switch for dual-role devices
|
||||
* - USB_ROLE_GADGET <--> USB_ROLE_NONE for peripheral-only devices
|
||||
@ -356,7 +347,6 @@ static int cdns3_role_set(struct usb_role_switch *sw, enum usb_role role)
|
||||
case USB_ROLE_HOST:
|
||||
break;
|
||||
default:
|
||||
ret = -EPERM;
|
||||
goto pm_put;
|
||||
}
|
||||
}
|
||||
@ -367,17 +357,14 @@ static int cdns3_role_set(struct usb_role_switch *sw, enum usb_role role)
|
||||
case USB_ROLE_DEVICE:
|
||||
break;
|
||||
default:
|
||||
ret = -EPERM;
|
||||
goto pm_put;
|
||||
}
|
||||
}
|
||||
|
||||
cdns3_role_stop(cdns);
|
||||
ret = cdns3_role_start(cdns, role);
|
||||
if (ret) {
|
||||
if (ret)
|
||||
dev_err(cdns->dev, "set role %d has failed\n", role);
|
||||
ret = -EPERM;
|
||||
}
|
||||
|
||||
pm_put:
|
||||
pm_runtime_put_sync(cdns->dev);
|
||||
@ -402,7 +389,7 @@ static int cdns3_probe(struct platform_device *pdev)
|
||||
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
|
||||
if (ret) {
|
||||
dev_err(dev, "error setting dma mask: %d\n", ret);
|
||||
return -ENODEV;
|
||||
return ret;
|
||||
}
|
||||
|
||||
cdns = devm_kzalloc(dev, sizeof(*cdns), GFP_KERNEL);
|
||||
@ -436,8 +423,7 @@ static int cdns3_probe(struct platform_device *pdev)
|
||||
if (cdns->dev_irq < 0)
|
||||
dev_err(dev, "couldn't get peripheral irq\n");
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dev");
|
||||
regs = devm_ioremap_resource(dev, res);
|
||||
regs = devm_platform_ioremap_resource_byname(pdev, "dev");
|
||||
if (IS_ERR(regs))
|
||||
return PTR_ERR(regs);
|
||||
cdns->dev_regs = regs;
|
||||
|
@ -29,7 +29,6 @@
|
||||
*/
|
||||
int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
|
||||
{
|
||||
int ret = 0;
|
||||
u32 reg;
|
||||
|
||||
switch (mode) {
|
||||
@ -61,7 +60,7 @@ int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cdns3_get_id(struct cdns3 *cdns)
|
||||
@ -84,25 +83,25 @@ int cdns3_get_vbus(struct cdns3 *cdns)
|
||||
return vbus;
|
||||
}
|
||||
|
||||
int cdns3_is_host(struct cdns3 *cdns)
|
||||
bool cdns3_is_host(struct cdns3 *cdns)
|
||||
{
|
||||
if (cdns->dr_mode == USB_DR_MODE_HOST)
|
||||
return 1;
|
||||
else if (!cdns3_get_id(cdns))
|
||||
return 1;
|
||||
return true;
|
||||
else if (cdns3_get_id(cdns) == CDNS3_ID_HOST)
|
||||
return true;
|
||||
|
||||
return 0;
|
||||
return false;
|
||||
}
|
||||
|
||||
int cdns3_is_device(struct cdns3 *cdns)
|
||||
bool cdns3_is_device(struct cdns3 *cdns)
|
||||
{
|
||||
if (cdns->dr_mode == USB_DR_MODE_PERIPHERAL)
|
||||
return 1;
|
||||
return true;
|
||||
else if (cdns->dr_mode == USB_DR_MODE_OTG)
|
||||
if (cdns3_get_id(cdns))
|
||||
return 1;
|
||||
if (cdns3_get_id(cdns) == CDNS3_ID_PERIPHERAL)
|
||||
return true;
|
||||
|
||||
return 0;
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -125,83 +124,95 @@ static void cdns3_otg_enable_irq(struct cdns3 *cdns)
|
||||
}
|
||||
|
||||
/**
|
||||
* cdns3_drd_switch_host - start/stop host
|
||||
* @cdns: Pointer to controller context structure
|
||||
* @on: 1 for start, 0 for stop
|
||||
* cdns3_drd_host_on - start host.
|
||||
* @cdns: Pointer to controller context structure.
|
||||
*
|
||||
* Returns 0 on success otherwise negative errno.
|
||||
*/
|
||||
int cdns3_drd_host_on(struct cdns3 *cdns)
|
||||
{
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
/* Enable host mode. */
|
||||
writel(OTGCMD_HOST_BUS_REQ | OTGCMD_OTG_DIS,
|
||||
&cdns->otg_regs->cmd);
|
||||
|
||||
dev_dbg(cdns->dev, "Waiting till Host mode is turned on\n");
|
||||
ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
|
||||
val & OTGSTS_XHCI_READY, 1, 100000);
|
||||
|
||||
if (ret)
|
||||
dev_err(cdns->dev, "timeout waiting for xhci_ready\n");
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* cdns3_drd_host_off - stop host.
|
||||
* @cdns: Pointer to controller context structure.
|
||||
*/
|
||||
void cdns3_drd_host_off(struct cdns3 *cdns)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
writel(OTGCMD_HOST_BUS_DROP | OTGCMD_DEV_BUS_DROP |
|
||||
OTGCMD_DEV_POWER_OFF | OTGCMD_HOST_POWER_OFF,
|
||||
&cdns->otg_regs->cmd);
|
||||
|
||||
/* Waiting till H_IDLE state.*/
|
||||
readl_poll_timeout_atomic(&cdns->otg_regs->state, val,
|
||||
!(val & OTGSTATE_HOST_STATE_MASK),
|
||||
1, 2000000);
|
||||
}
|
||||
|
||||
/**
|
||||
* cdns3_drd_gadget_on - start gadget.
|
||||
* @cdns: Pointer to controller context structure.
|
||||
*
|
||||
* Returns 0 on success otherwise negative errno
|
||||
*/
|
||||
int cdns3_drd_switch_host(struct cdns3 *cdns, int on)
|
||||
int cdns3_drd_gadget_on(struct cdns3 *cdns)
|
||||
{
|
||||
int ret, val;
|
||||
u32 reg = OTGCMD_OTG_DIS;
|
||||
|
||||
/* switch OTG core */
|
||||
if (on) {
|
||||
writel(OTGCMD_HOST_BUS_REQ | reg, &cdns->otg_regs->cmd);
|
||||
writel(OTGCMD_DEV_BUS_REQ | reg, &cdns->otg_regs->cmd);
|
||||
|
||||
dev_dbg(cdns->dev, "Waiting till Host mode is turned on\n");
|
||||
ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
|
||||
val & OTGSTS_XHCI_READY,
|
||||
1, 100000);
|
||||
if (ret) {
|
||||
dev_err(cdns->dev, "timeout waiting for xhci_ready\n");
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
writel(OTGCMD_HOST_BUS_DROP | OTGCMD_DEV_BUS_DROP |
|
||||
OTGCMD_DEV_POWER_OFF | OTGCMD_HOST_POWER_OFF,
|
||||
&cdns->otg_regs->cmd);
|
||||
/* Waiting till H_IDLE state.*/
|
||||
readl_poll_timeout_atomic(&cdns->otg_regs->state, val,
|
||||
!(val & OTGSTATE_HOST_STATE_MASK),
|
||||
1, 2000000);
|
||||
dev_dbg(cdns->dev, "Waiting till Device mode is turned on\n");
|
||||
|
||||
ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
|
||||
val & OTGSTS_DEV_READY,
|
||||
1, 100000);
|
||||
if (ret) {
|
||||
dev_err(cdns->dev, "timeout waiting for dev_ready\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* cdns3_drd_switch_gadget - start/stop gadget
|
||||
* @cdns: Pointer to controller context structure
|
||||
* @on: 1 for start, 0 for stop
|
||||
*
|
||||
* Returns 0 on success otherwise negative errno
|
||||
* cdns3_drd_gadget_off - stop gadget.
|
||||
* @cdns: Pointer to controller context structure.
|
||||
*/
|
||||
int cdns3_drd_switch_gadget(struct cdns3 *cdns, int on)
|
||||
void cdns3_drd_gadget_off(struct cdns3 *cdns)
|
||||
{
|
||||
int ret, val;
|
||||
u32 reg = OTGCMD_OTG_DIS;
|
||||
u32 val;
|
||||
|
||||
/* switch OTG core */
|
||||
if (on) {
|
||||
writel(OTGCMD_DEV_BUS_REQ | reg, &cdns->otg_regs->cmd);
|
||||
|
||||
dev_dbg(cdns->dev, "Waiting till Device mode is turned on\n");
|
||||
|
||||
ret = readl_poll_timeout_atomic(&cdns->otg_regs->sts, val,
|
||||
val & OTGSTS_DEV_READY,
|
||||
1, 100000);
|
||||
if (ret) {
|
||||
dev_err(cdns->dev, "timeout waiting for dev_ready\n");
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
/*
|
||||
* driver should wait at least 10us after disabling Device
|
||||
* before turning-off Device (DEV_BUS_DROP)
|
||||
*/
|
||||
usleep_range(20, 30);
|
||||
writel(OTGCMD_HOST_BUS_DROP | OTGCMD_DEV_BUS_DROP |
|
||||
OTGCMD_DEV_POWER_OFF | OTGCMD_HOST_POWER_OFF,
|
||||
&cdns->otg_regs->cmd);
|
||||
/* Waiting till DEV_IDLE state.*/
|
||||
readl_poll_timeout_atomic(&cdns->otg_regs->state, val,
|
||||
!(val & OTGSTATE_DEV_STATE_MASK),
|
||||
1, 2000000);
|
||||
}
|
||||
|
||||
return 0;
|
||||
/*
|
||||
* Driver should wait at least 10us after disabling Device
|
||||
* before turning-off Device (DEV_BUS_DROP).
|
||||
*/
|
||||
usleep_range(20, 30);
|
||||
writel(OTGCMD_HOST_BUS_DROP | OTGCMD_DEV_BUS_DROP |
|
||||
OTGCMD_DEV_POWER_OFF | OTGCMD_HOST_POWER_OFF,
|
||||
&cdns->otg_regs->cmd);
|
||||
/* Waiting till DEV_IDLE state.*/
|
||||
readl_poll_timeout_atomic(&cdns->otg_regs->state, val,
|
||||
!(val & OTGSTATE_DEV_STATE_MASK),
|
||||
1, 2000000);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -212,7 +223,7 @@ int cdns3_drd_switch_gadget(struct cdns3 *cdns, int on)
|
||||
*/
|
||||
static int cdns3_init_otg_mode(struct cdns3 *cdns)
|
||||
{
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
cdns3_otg_disable_irq(cdns);
|
||||
/* clear all interrupts */
|
||||
@ -223,7 +234,8 @@ static int cdns3_init_otg_mode(struct cdns3 *cdns)
|
||||
return ret;
|
||||
|
||||
cdns3_otg_enable_irq(cdns);
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -234,7 +246,7 @@ static int cdns3_init_otg_mode(struct cdns3 *cdns)
|
||||
*/
|
||||
int cdns3_drd_update_mode(struct cdns3 *cdns)
|
||||
{
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
switch (cdns->dr_mode) {
|
||||
case USB_DR_MODE_PERIPHERAL:
|
||||
@ -279,12 +291,12 @@ static irqreturn_t cdns3_drd_irq(int irq, void *data)
|
||||
u32 reg;
|
||||
|
||||
if (cdns->dr_mode != USB_DR_MODE_OTG)
|
||||
return ret;
|
||||
return IRQ_NONE;
|
||||
|
||||
reg = readl(&cdns->otg_regs->ivect);
|
||||
|
||||
if (!reg)
|
||||
return ret;
|
||||
return IRQ_NONE;
|
||||
|
||||
if (reg & OTGIEN_ID_CHANGE_INT) {
|
||||
dev_dbg(cdns->dev, "OTG IRQ: new ID: %d\n",
|
||||
@ -307,8 +319,8 @@ static irqreturn_t cdns3_drd_irq(int irq, void *data)
|
||||
int cdns3_drd_init(struct cdns3 *cdns)
|
||||
{
|
||||
void __iomem *regs;
|
||||
int ret = 0;
|
||||
u32 state;
|
||||
int ret;
|
||||
|
||||
regs = devm_ioremap_resource(cdns->dev, &cdns->otg_res);
|
||||
if (IS_ERR(regs))
|
||||
@ -359,19 +371,18 @@ int cdns3_drd_init(struct cdns3 *cdns)
|
||||
cdns3_drd_thread_irq,
|
||||
IRQF_SHARED,
|
||||
dev_name(cdns->dev), cdns);
|
||||
|
||||
if (ret) {
|
||||
dev_err(cdns->dev, "couldn't get otg_irq\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
state = readl(&cdns->otg_regs->sts);
|
||||
if (OTGSTS_OTG_NRDY(state) != 0) {
|
||||
if (OTGSTS_OTG_NRDY(state)) {
|
||||
dev_err(cdns->dev, "Cadence USB3 OTG device not ready\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cdns3_drd_exit(struct cdns3 *cdns)
|
||||
|
@ -153,15 +153,20 @@ struct cdns3_otg_common_regs {
|
||||
/* Only for CDNS3_CONTROLLER_V0 version */
|
||||
#define OVERRIDE_IDPULLUP_V0 BIT(24)
|
||||
|
||||
int cdns3_is_host(struct cdns3 *cdns);
|
||||
int cdns3_is_device(struct cdns3 *cdns);
|
||||
#define CDNS3_ID_PERIPHERAL 1
|
||||
#define CDNS3_ID_HOST 0
|
||||
|
||||
bool cdns3_is_host(struct cdns3 *cdns);
|
||||
bool cdns3_is_device(struct cdns3 *cdns);
|
||||
int cdns3_get_id(struct cdns3 *cdns);
|
||||
int cdns3_get_vbus(struct cdns3 *cdns);
|
||||
int cdns3_drd_init(struct cdns3 *cdns);
|
||||
int cdns3_drd_exit(struct cdns3 *cdns);
|
||||
int cdns3_drd_update_mode(struct cdns3 *cdns);
|
||||
int cdns3_drd_switch_gadget(struct cdns3 *cdns, int on);
|
||||
int cdns3_drd_switch_host(struct cdns3 *cdns, int on);
|
||||
int cdns3_drd_gadget_on(struct cdns3 *cdns);
|
||||
void cdns3_drd_gadget_off(struct cdns3 *cdns);
|
||||
int cdns3_drd_host_on(struct cdns3 *cdns);
|
||||
void cdns3_drd_host_off(struct cdns3 *cdns);
|
||||
int cdns3_set_mode(struct cdns3 *cdns, enum usb_dr_mode mode);
|
||||
|
||||
#endif /* __LINUX_CDNS3_DRD */
|
||||
|
@ -29,6 +29,7 @@ static struct usb_endpoint_descriptor cdns3_gadget_ep0_desc = {
|
||||
* @length: data length
|
||||
* @erdy: set it to 1 when ERDY packet should be sent -
|
||||
* exit from flow control state
|
||||
* @zlp: add zero length packet
|
||||
*/
|
||||
static void cdns3_ep0_run_transfer(struct cdns3_device *priv_dev,
|
||||
dma_addr_t dma_addr,
|
||||
@ -122,8 +123,6 @@ static void cdns3_ep0_complete_setup(struct cdns3_device *priv_dev,
|
||||
priv_dev->ep0_stage = CDNS3_SETUP_STAGE;
|
||||
writel((send_erdy ? EP_CMD_ERDY : 0) | EP_CMD_REQ_CMPL,
|
||||
&priv_dev->regs->ep_cmd);
|
||||
|
||||
cdns3_allow_enable_l1(priv_dev, 1);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -160,13 +159,12 @@ static int cdns3_req_ep0_set_configuration(struct cdns3_device *priv_dev,
|
||||
if (result)
|
||||
return result;
|
||||
|
||||
if (config) {
|
||||
cdns3_set_hw_configuration(priv_dev);
|
||||
} else {
|
||||
if (!config) {
|
||||
cdns3_hw_reset_eps_config(priv_dev);
|
||||
usb_gadget_set_state(&priv_dev->gadget,
|
||||
USB_STATE_ADDRESS);
|
||||
}
|
||||
|
||||
break;
|
||||
case USB_STATE_CONFIGURED:
|
||||
result = cdns3_ep0_delegate_req(priv_dev, ctrl_req);
|
||||
@ -227,7 +225,7 @@ static int cdns3_req_ep0_set_address(struct cdns3_device *priv_dev,
|
||||
/**
|
||||
* cdns3_req_ep0_get_status - Handling of GET_STATUS standard USB request
|
||||
* @priv_dev: extended gadget object
|
||||
* @ctrl_req: pointer to received setup packet
|
||||
* @ctrl: pointer to received setup packet
|
||||
*
|
||||
* Returns 0 if success, error code on error
|
||||
*/
|
||||
@ -329,10 +327,10 @@ static int cdns3_ep0_feature_handle_device(struct cdns3_device *priv_dev,
|
||||
|
||||
tmode >>= 8;
|
||||
switch (tmode) {
|
||||
case TEST_J:
|
||||
case TEST_K:
|
||||
case TEST_SE0_NAK:
|
||||
case TEST_PACKET:
|
||||
case USB_TEST_J:
|
||||
case USB_TEST_K:
|
||||
case USB_TEST_SE0_NAK:
|
||||
case USB_TEST_PACKET:
|
||||
cdns3_set_register_bit(&priv_dev->regs->usb_cmd,
|
||||
USB_CMD_STMODE |
|
||||
USB_STS_TMODE_SEL(tmode - 1));
|
||||
@ -401,7 +399,7 @@ static int cdns3_ep0_feature_handle_endpoint(struct cdns3_device *priv_dev,
|
||||
* Handling of GET/SET_FEATURE standard USB request
|
||||
*
|
||||
* @priv_dev: extended gadget object
|
||||
* @ctrl_req: pointer to received setup packet
|
||||
* @ctrl: pointer to received setup packet
|
||||
* @set: must be set to 1 for SET_FEATURE request
|
||||
*
|
||||
* Returns 0 if success, error code on error
|
||||
@ -610,7 +608,7 @@ static bool cdns3_check_new_setup(struct cdns3_device *priv_dev)
|
||||
{
|
||||
u32 ep_sts_reg;
|
||||
|
||||
cdns3_select_ep(priv_dev, 0 | USB_DIR_OUT);
|
||||
cdns3_select_ep(priv_dev, USB_DIR_OUT);
|
||||
ep_sts_reg = readl(&priv_dev->regs->ep_sts);
|
||||
|
||||
return !!(ep_sts_reg & (EP_STS_SETUP | EP_STS_STPWAIT));
|
||||
@ -639,7 +637,6 @@ void cdns3_check_ep0_interrupt_proceed(struct cdns3_device *priv_dev, int dir)
|
||||
|
||||
if (priv_dev->wait_for_setup && ep_sts_reg & EP_STS_IOC) {
|
||||
priv_dev->wait_for_setup = 0;
|
||||
cdns3_allow_enable_l1(priv_dev, 0);
|
||||
cdns3_ep0_setup_phase(priv_dev);
|
||||
} else if ((ep_sts_reg & EP_STS_IOC) || (ep_sts_reg & EP_STS_ISP)) {
|
||||
priv_dev->ep0_data_dir = dir;
|
||||
@ -654,6 +651,9 @@ void cdns3_check_ep0_interrupt_proceed(struct cdns3_device *priv_dev, int dir)
|
||||
|
||||
/**
|
||||
* cdns3_gadget_ep0_enable
|
||||
* @ep: pointer to endpoint zero object
|
||||
* @desc: pointer to usb endpoint descriptor
|
||||
*
|
||||
* Function shouldn't be called by gadget driver,
|
||||
* endpoint 0 is always active
|
||||
*/
|
||||
@ -665,6 +665,8 @@ static int cdns3_gadget_ep0_enable(struct usb_ep *ep,
|
||||
|
||||
/**
|
||||
* cdns3_gadget_ep0_disable
|
||||
* @ep: pointer to endpoint zero object
|
||||
*
|
||||
* Function shouldn't be called by gadget driver,
|
||||
 * endpoint 0 is always active
|
||||
*/
|
||||
@ -701,7 +703,6 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
|
||||
struct cdns3_endpoint *priv_ep = ep_to_cdns3_ep(ep);
|
||||
struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
|
||||
unsigned long flags;
|
||||
int erdy_sent = 0;
|
||||
int ret = 0;
|
||||
u8 zlp = 0;
|
||||
|
||||
@ -717,15 +718,8 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
|
||||
/* send STATUS stage. Should be called only for SET_CONFIGURATION */
|
||||
if (priv_dev->ep0_stage == CDNS3_STATUS_STAGE) {
|
||||
cdns3_select_ep(priv_dev, 0x00);
|
||||
|
||||
erdy_sent = !priv_dev->hw_configured_flag;
|
||||
cdns3_set_hw_configuration(priv_dev);
|
||||
|
||||
if (!erdy_sent)
|
||||
cdns3_ep0_complete_setup(priv_dev, 0, 1);
|
||||
|
||||
cdns3_allow_enable_l1(priv_dev, 1);
|
||||
|
||||
cdns3_ep0_complete_setup(priv_dev, 0, 1);
|
||||
request->actual = 0;
|
||||
priv_dev->status_completion_no_call = true;
|
||||
priv_dev->pending_status_request = request;
|
||||
@ -860,7 +854,7 @@ void cdns3_ep0_config(struct cdns3_device *priv_dev)
|
||||
/**
|
||||
* cdns3_init_ep0 Initializes software endpoint 0 of gadget
|
||||
* @priv_dev: extended gadget object
|
||||
* @ep_priv: extended endpoint object
|
||||
* @priv_ep: extended endpoint object
|
||||
*
|
||||
* Returns 0 on success else error code.
|
||||
*/
|
||||
|
@ -242,9 +242,10 @@ int cdns3_allocate_trb_pool(struct cdns3_endpoint *priv_ep)
|
||||
return -ENOMEM;
|
||||
|
||||
priv_ep->alloc_ring_size = ring_size;
|
||||
memset(priv_ep->trb_pool, 0, ring_size);
|
||||
}
|
||||
|
||||
memset(priv_ep->trb_pool, 0, ring_size);
|
||||
|
||||
priv_ep->num_trbs = num_trbs;
|
||||
|
||||
if (!priv_ep->num)
|
||||
@ -421,7 +422,7 @@ static int cdns3_start_all_request(struct cdns3_device *priv_dev,
|
||||
if ((priv_req->flags & REQUEST_INTERNAL) ||
|
||||
(priv_ep->flags & EP_TDLCHK_EN) ||
|
||||
priv_ep->use_streams) {
|
||||
trace_printk("Blocking external request\n");
|
||||
dev_dbg(priv_dev->dev, "Blocking external request\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
@ -644,7 +645,7 @@ static void cdns3_wa2_remove_old_request(struct cdns3_endpoint *priv_ep)
|
||||
|
||||
/**
|
||||
* cdns3_wa2_descmissing_packet - handles descriptor missing event.
|
||||
* @priv_dev: extended gadget object
|
||||
* @priv_ep: extended gadget object
|
||||
*
|
||||
* This function is used only for WA2. For more information see Work around 2
|
||||
* description.
|
||||
@ -1080,6 +1081,7 @@ static int cdns3_ep_run_stream_transfer(struct cdns3_endpoint *priv_ep,
|
||||
/**
|
||||
* cdns3_ep_run_transfer - start transfer on no-default endpoint hardware
|
||||
* @priv_ep: endpoint object
|
||||
* @request: request object
|
||||
*
|
||||
* Returns zero on success or negative value on failure
|
||||
*/
|
||||
@ -1314,7 +1316,6 @@ void cdns3_set_hw_configuration(struct cdns3_device *priv_dev)
|
||||
return;
|
||||
|
||||
writel(USB_CONF_CFGSET, &priv_dev->regs->usb_conf);
|
||||
writel(EP_CMD_ERDY | EP_CMD_REQ_CMPL, &priv_dev->regs->ep_cmd);
|
||||
|
||||
cdns3_set_register_bit(&priv_dev->regs->usb_conf,
|
||||
USB_CONF_U1EN | USB_CONF_U2EN);
|
||||
@ -1331,6 +1332,8 @@ void cdns3_set_hw_configuration(struct cdns3_device *priv_dev)
|
||||
cdns3_start_all_request(priv_dev, priv_ep);
|
||||
}
|
||||
}
|
||||
|
||||
cdns3_allow_enable_l1(priv_dev, 1);
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -1809,8 +1812,8 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
	struct cdns3_device *priv_dev = data;
	irqreturn_t ret = IRQ_NONE;
	unsigned long flags;
	int bit;
	u32 reg;
	unsigned int bit;
	unsigned long reg;

	spin_lock_irqsave(&priv_dev->lock, flags);

@@ -1841,7 +1844,7 @@ static irqreturn_t cdns3_device_thread_irq_handler(int irq, void *data)
	if (!reg)
		goto irqend;

	for_each_set_bit(bit, (unsigned long *)&reg,
	for_each_set_bit(bit, &reg,
			 sizeof(u32) * BITS_PER_BYTE) {
		cdns3_check_ep_interrupt_proceed(priv_dev->eps[bit]);
		ret = IRQ_HANDLED;
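
The likely motivation for the hunk above: for_each_set_bit() walks an unsigned long bitmap, so casting the address of a u32 to unsigned long * over-reads the variable on 64-bit targets and misindexes the bits on big-endian ones. A minimal sketch of the safe pattern, with a hypothetical helper name rather than the driver code itself:

#include <linux/bitops.h>
#include <linux/printk.h>

/* demo_walk_ep_interrupts() is an illustrative helper, not cdns3 code. */
static void demo_walk_ep_interrupts(u32 ep_ists)
{
	unsigned long reg = ep_ists;	/* widen before taking a pointer */
	unsigned int bit;

	for_each_set_bit(bit, &reg, sizeof(u32) * BITS_PER_BYTE)
		pr_debug("endpoint bit %u is pending\n", bit);
}
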
@ -2568,7 +2571,7 @@ not_found:
|
||||
/**
|
||||
* __cdns3_gadget_ep_set_halt Sets stall on selected endpoint
|
||||
* Should be called after acquiring spin_lock and selecting ep
|
||||
* @ep: endpoint object to set stall on.
|
||||
* @priv_ep: endpoint object to set stall on.
|
||||
*/
|
||||
void __cdns3_gadget_ep_set_halt(struct cdns3_endpoint *priv_ep)
|
||||
{
|
||||
@ -2589,7 +2592,7 @@ void __cdns3_gadget_ep_set_halt(struct cdns3_endpoint *priv_ep)
|
||||
/**
|
||||
* __cdns3_gadget_ep_clear_halt Clears stall on selected endpoint
|
||||
* Should be called after acquiring spin_lock and selecting ep
|
||||
* @ep: endpoint object to clear stall on
|
||||
* @priv_ep: endpoint object to clear stall on
|
||||
*/
|
||||
int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
|
||||
{
|
||||
@ -2814,7 +2817,7 @@ static int cdns3_gadget_udc_start(struct usb_gadget *gadget,
|
||||
dev_err(priv_dev->dev,
|
||||
"invalid maximum_speed parameter %d\n",
|
||||
max_speed);
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case USB_SPEED_UNKNOWN:
|
||||
/* default to superspeed */
|
||||
max_speed = USB_SPEED_SUPER;
|
||||
@ -2890,7 +2893,7 @@ static void cdns3_free_all_eps(struct cdns3_device *priv_dev)
|
||||
|
||||
/**
|
||||
* cdns3_init_eps Initializes software endpoints of gadget
|
||||
* @cdns3: extended gadget object
|
||||
* @priv_dev: extended gadget object
|
||||
*
|
||||
* Returns 0 on success, error code elsewhere
|
||||
*/
|
||||
@ -3014,7 +3017,7 @@ void cdns3_gadget_exit(struct cdns3 *cdns)
|
||||
kfree(priv_dev->zlp_buf);
|
||||
kfree(priv_dev);
|
||||
cdns->gadget_dev = NULL;
|
||||
cdns3_drd_switch_gadget(cdns, 0);
|
||||
cdns3_drd_gadget_off(cdns);
|
||||
}
|
||||
|
||||
static int cdns3_gadget_start(struct cdns3 *cdns)
|
||||
@ -3055,7 +3058,7 @@ static int cdns3_gadget_start(struct cdns3 *cdns)
|
||||
default:
|
||||
dev_err(cdns->dev, "invalid maximum_speed parameter %d\n",
|
||||
max_speed);
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case USB_SPEED_UNKNOWN:
|
||||
/* default to superspeed */
|
||||
max_speed = USB_SPEED_SUPER;
|
||||
@ -3145,7 +3148,7 @@ static int __cdns3_gadget_init(struct cdns3 *cdns)
|
||||
return ret;
|
||||
}
|
||||
|
||||
cdns3_drd_switch_gadget(cdns, 1);
|
||||
cdns3_drd_gadget_on(cdns);
|
||||
pm_runtime_get_sync(cdns->dev);
|
||||
|
||||
ret = cdns3_gadget_start(cdns);
|
||||
@ -3202,7 +3205,7 @@ static int cdns3_gadget_resume(struct cdns3 *cdns, bool hibernated)
|
||||
/**
|
||||
* cdns3_gadget_init - initialize device structure
|
||||
*
|
||||
* cdns: cdns3 instance
|
||||
* @cdns: cdns3 instance
|
||||
*
|
||||
* This function initializes the gadget.
|
||||
*/
|
||||
|
@ -19,7 +19,7 @@ static int __cdns3_host_init(struct cdns3 *cdns)
|
||||
struct platform_device *xhci;
|
||||
int ret;
|
||||
|
||||
cdns3_drd_switch_host(cdns, 1);
|
||||
cdns3_drd_host_on(cdns);
|
||||
|
||||
xhci = platform_device_alloc("xhci-hcd", PLATFORM_DEVID_AUTO);
|
||||
if (!xhci) {
|
||||
@ -53,7 +53,7 @@ static void cdns3_host_exit(struct cdns3 *cdns)
|
||||
{
|
||||
platform_device_unregister(cdns->host_dev);
|
||||
cdns->host_dev = NULL;
|
||||
cdns3_drd_switch_host(cdns, 0);
|
||||
cdns3_drd_host_off(cdns);
|
||||
}
|
||||
|
||||
int cdns3_host_init(struct cdns3 *cdns)
|
||||
|
@ -462,6 +462,10 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
|
||||
if (!IS_ERR(pdata.vbus_extcon.edev) ||
|
||||
of_property_read_bool(np, "usb-role-switch"))
|
||||
data->usbmisc_data->ext_vbus = 1;
|
||||
|
||||
/* usbmisc needs to know dr mode to choose wakeup setting */
|
||||
data->usbmisc_data->available_role =
|
||||
ci_hdrc_query_available_role(data->ci_pdev);
|
||||
}
|
||||
|
||||
ret = imx_usbmisc_init_post(data->usbmisc_data);
|
||||
|
@ -25,6 +25,7 @@ struct imx_usbmisc_data {
|
||||
	unsigned int ext_id:1;		/* ID from external event */
	unsigned int ext_vbus:1;	/* Vbus from external event */
|
||||
struct usb_phy *usb_phy;
|
||||
enum usb_dr_mode available_role; /* runtime usb dr mode */
|
||||
};
|
||||
|
||||
int imx_usbmisc_init(struct imx_usbmisc_data *data);
|
||||
|
@ -120,7 +120,7 @@ static void ci_hdrc_pci_remove(struct pci_dev *pdev)
|
||||
usb_phy_generic_unregister(ci->phy);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* PCI device table
|
||||
* PCI device structure
|
||||
*
|
||||
|
@ -155,6 +155,7 @@ u32 hw_read_intr_status(struct ci_hdrc *ci)
|
||||
|
||||
/**
|
||||
* hw_port_test_set: writes port test mode (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @mode: new value
|
||||
*
|
||||
* This function returns an error code
|
||||
@ -877,6 +878,33 @@ void ci_hdrc_remove_device(struct platform_device *pdev)
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(ci_hdrc_remove_device);
|
||||
|
||||
/**
|
||||
* ci_hdrc_query_available_role: get runtime available operation mode
|
||||
*
|
||||
 * The glue layer can get the current operation mode (host/peripheral/otg).
 * This function should be called after the ci core device has been created.
|
||||
*
|
||||
* @pdev: the platform device of ci core.
|
||||
*
|
||||
* Return runtime usb_dr_mode.
|
||||
*/
|
||||
enum usb_dr_mode ci_hdrc_query_available_role(struct platform_device *pdev)
|
||||
{
|
||||
struct ci_hdrc *ci = platform_get_drvdata(pdev);
|
||||
|
||||
if (!ci)
|
||||
return USB_DR_MODE_UNKNOWN;
|
||||
if (ci->roles[CI_ROLE_HOST] && ci->roles[CI_ROLE_GADGET])
|
||||
return USB_DR_MODE_OTG;
|
||||
else if (ci->roles[CI_ROLE_HOST])
|
||||
return USB_DR_MODE_HOST;
|
||||
else if (ci->roles[CI_ROLE_GADGET])
|
||||
return USB_DR_MODE_PERIPHERAL;
|
||||
else
|
||||
return USB_DR_MODE_UNKNOWN;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(ci_hdrc_query_available_role);
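
A sketch of how a glue driver is expected to consume this export once ci_hdrc_add_device() has created the core device, as the i.MX hunk above does before choosing its wakeup settings. The helper name is illustrative, and it assumes the declaration lives in <linux/usb/chipidea.h>:

#include <linux/platform_device.h>
#include <linux/printk.h>
#include <linux/usb/chipidea.h>
#include <linux/usb/otg.h>

/* demo_note_wakeup_role() is a hypothetical glue-layer helper. */
static void demo_note_wakeup_role(struct platform_device *ci_pdev)
{
	enum usb_dr_mode role = ci_hdrc_query_available_role(ci_pdev);

	/* e.g. a host-only controller does not need VBUS wakeup */
	if (role == USB_DR_MODE_HOST)
		pr_info("ci core is host-only at runtime\n");
}
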
|
||||
|
||||
static inline void ci_role_destroy(struct ci_hdrc *ci)
|
||||
{
|
||||
ci_hdrc_gadget_destroy(ci);
|
||||
|
@ -18,7 +18,7 @@
|
||||
#include "bits.h"
|
||||
#include "otg.h"
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_device_show: prints information about device capabilities and status
|
||||
*/
|
||||
static int ci_device_show(struct seq_file *s, void *data)
|
||||
@ -47,7 +47,7 @@ static int ci_device_show(struct seq_file *s, void *data)
|
||||
}
|
||||
DEFINE_SHOW_ATTRIBUTE(ci_device);
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_port_test_show: reads port test mode
|
||||
*/
|
||||
static int ci_port_test_show(struct seq_file *s, void *data)
|
||||
@ -67,7 +67,7 @@ static int ci_port_test_show(struct seq_file *s, void *data)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_port_test_write: writes port test mode
|
||||
*/
|
||||
static ssize_t ci_port_test_write(struct file *file, const char __user *ubuf,
|
||||
@ -115,7 +115,7 @@ static const struct file_operations ci_port_test_fops = {
|
||||
.release = single_release,
|
||||
};
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_qheads_show: DMA contents of all queue heads
|
||||
*/
|
||||
static int ci_qheads_show(struct seq_file *s, void *data)
|
||||
@ -147,7 +147,7 @@ static int ci_qheads_show(struct seq_file *s, void *data)
|
||||
}
|
||||
DEFINE_SHOW_ATTRIBUTE(ci_qheads);
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_requests_show: DMA contents of all requests currently queued (all endpts)
|
||||
*/
|
||||
static int ci_requests_show(struct seq_file *s, void *data)
|
||||
|
@ -23,6 +23,7 @@
|
||||
|
||||
/**
|
||||
* hw_read_otgsc returns otgsc register bits value.
|
||||
* @ci: the controller
|
||||
* @mask: bitfield mask
|
||||
*/
|
||||
u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
|
||||
@ -75,6 +76,7 @@ u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
|
||||
|
||||
/**
|
||||
* hw_write_otgsc updates target bits of OTGSC register.
|
||||
* @ci: the controller
|
||||
* @mask: bitfield mask
|
||||
* @data: to be written
|
||||
*/
|
||||
@ -229,7 +231,7 @@ static void ci_otg_work(struct work_struct *work)
|
||||
|
||||
/**
|
||||
* ci_hdrc_otg_init - initialize otg struct
|
||||
* ci: the controller
|
||||
* @ci: the controller
|
||||
*/
|
||||
int ci_hdrc_otg_init(struct ci_hdrc *ci)
|
||||
{
|
||||
@ -248,7 +250,7 @@ int ci_hdrc_otg_init(struct ci_hdrc *ci)
|
||||
|
||||
/**
|
||||
* ci_hdrc_otg_destroy - destroy otg struct
|
||||
* ci: the controller
|
||||
* @ci: the controller
|
||||
*/
|
||||
void ci_hdrc_otg_destroy(struct ci_hdrc *ci)
|
||||
{
|
||||
|
@ -72,6 +72,7 @@ static inline int ep_to_bit(struct ci_hdrc *ci, int n)
|
||||
|
||||
/**
|
||||
* hw_device_state: enables/disables interrupts (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @dma: 0 => disable, !0 => enable and set dma engine
|
||||
*
|
||||
* This function returns an error code
|
||||
@ -91,6 +92,7 @@ static int hw_device_state(struct ci_hdrc *ci, u32 dma)
|
||||
|
||||
/**
|
||||
* hw_ep_flush: flush endpoint fifo (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
*
|
||||
@ -112,6 +114,7 @@ static int hw_ep_flush(struct ci_hdrc *ci, int num, int dir)
|
||||
|
||||
/**
|
||||
* hw_ep_disable: disables endpoint (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
*
|
||||
@ -126,6 +129,7 @@ static int hw_ep_disable(struct ci_hdrc *ci, int num, int dir)
|
||||
|
||||
/**
|
||||
* hw_ep_enable: enables endpoint (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
* @type: endpoint type
|
||||
@ -161,6 +165,7 @@ static int hw_ep_enable(struct ci_hdrc *ci, int num, int dir, int type)
|
||||
|
||||
/**
|
||||
* hw_ep_get_halt: return endpoint halt status
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
*
|
||||
@ -175,6 +180,7 @@ static int hw_ep_get_halt(struct ci_hdrc *ci, int num, int dir)
|
||||
|
||||
/**
|
||||
* hw_ep_prime: primes endpoint (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
* @is_ctrl: true if control endpoint
|
||||
@ -205,6 +211,7 @@ static int hw_ep_prime(struct ci_hdrc *ci, int num, int dir, int is_ctrl)
|
||||
/**
|
||||
* hw_ep_set_halt: configures ep halt & resets data toggle after clear (execute
|
||||
* without interruption)
|
||||
* @ci: the controller
|
||||
* @num: endpoint number
|
||||
* @dir: endpoint direction
|
||||
* @value: true => stall, false => unstall
|
||||
@ -231,6 +238,7 @@ static int hw_ep_set_halt(struct ci_hdrc *ci, int num, int dir, int value)
|
||||
|
||||
/**
|
||||
* hw_is_port_high_speed: test if port is high speed
|
||||
* @ci: the controller
|
||||
*
|
||||
* This function returns true if high speed port
|
||||
*/
|
||||
@ -243,6 +251,7 @@ static int hw_port_is_high_speed(struct ci_hdrc *ci)
|
||||
/**
|
||||
* hw_test_and_clear_complete: test & clear complete status (execute without
|
||||
* interruption)
|
||||
* @ci: the controller
|
||||
* @n: endpoint number
|
||||
*
|
||||
* This function returns complete status
|
||||
@ -256,6 +265,7 @@ static int hw_test_and_clear_complete(struct ci_hdrc *ci, int n)
|
||||
/**
|
||||
* hw_test_and_clear_intr_active: test & clear active interrupts (execute
|
||||
* without interruption)
|
||||
* @ci: the controller
|
||||
*
|
||||
 * This function returns active interrupts
|
||||
*/
|
||||
@ -270,6 +280,7 @@ static u32 hw_test_and_clear_intr_active(struct ci_hdrc *ci)
|
||||
/**
|
||||
* hw_test_and_clear_setup_guard: test & clear setup guard (execute without
|
||||
* interruption)
|
||||
* @ci: the controller
|
||||
*
|
||||
* This function returns guard value
|
||||
*/
|
||||
@ -281,6 +292,7 @@ static int hw_test_and_clear_setup_guard(struct ci_hdrc *ci)
|
||||
/**
|
||||
* hw_test_and_set_setup_guard: test & set setup guard (execute without
|
||||
* interruption)
|
||||
* @ci: the controller
|
||||
*
|
||||
* This function returns guard value
|
||||
*/
|
||||
@ -291,6 +303,7 @@ static int hw_test_and_set_setup_guard(struct ci_hdrc *ci)
|
||||
|
||||
/**
|
||||
* hw_usb_set_address: configures USB address (execute without interruption)
|
||||
* @ci: the controller
|
||||
* @value: new USB address
|
||||
*
|
||||
* This function explicitly sets the address, without the "USBADRA" (advance)
|
||||
@ -305,6 +318,7 @@ static void hw_usb_set_address(struct ci_hdrc *ci, u8 value)
|
||||
/**
|
||||
* hw_usb_reset: restart device after a bus reset (execute without
|
||||
* interruption)
|
||||
* @ci: the controller
|
||||
*
|
||||
* This function returns an error code
|
||||
*/
|
||||
@ -473,9 +487,10 @@ static void ci_add_buffer_entry(struct td_node *node, struct scatterlist *s)
|
||||
int empty_td_slot_index = (CI_MAX_BUF_SIZE - node->td_remaining_size)
|
||||
/ CI_HDRC_PAGE_SIZE;
|
||||
int i;
|
||||
u32 token;
|
||||
|
||||
node->ptr->token +=
|
||||
cpu_to_le32(sg_dma_len(s) << __ffs(TD_TOTAL_BYTES));
|
||||
token = le32_to_cpu(node->ptr->token) + (sg_dma_len(s) << __ffs(TD_TOTAL_BYTES));
|
||||
node->ptr->token = cpu_to_le32(token);
|
||||
|
||||
for (i = empty_td_slot_index; i < TD_PAGE_COUNT; i++) {
|
||||
u32 page = (u32) sg_dma_address(s) +
|
||||
@ -610,7 +625,7 @@ done:
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
/**
|
||||
* free_pending_td: remove a pending request for the endpoint
|
||||
* @hwep: endpoint
|
||||
*/
|
||||
@ -636,8 +651,8 @@ static int reprime_dtd(struct ci_hdrc *ci, struct ci_hw_ep *hwep,
|
||||
|
||||
/**
|
||||
* _hardware_dequeue: handles a request at hardware level
|
||||
* @gadget: gadget
|
||||
* @hwep: endpoint
|
||||
* @hwep: endpoint
|
||||
* @hwreq: request
|
||||
*
|
||||
* This function returns an error code
|
||||
*/
|
||||
@ -1215,11 +1230,11 @@ __acquires(ci->lock)
|
||||
case USB_DEVICE_TEST_MODE:
|
||||
tmode = le16_to_cpu(req.wIndex) >> 8;
|
||||
switch (tmode) {
|
||||
case TEST_J:
|
||||
case TEST_K:
|
||||
case TEST_SE0_NAK:
|
||||
case TEST_PACKET:
|
||||
case TEST_FORCE_EN:
|
||||
case USB_TEST_J:
|
||||
case USB_TEST_K:
|
||||
case USB_TEST_SE0_NAK:
|
||||
case USB_TEST_PACKET:
|
||||
case USB_TEST_FORCE_ENABLE:
|
||||
ci->test_mode = tmode;
|
||||
err = isr_setup_status_phase(
|
||||
ci);
|
||||
@ -1316,7 +1331,7 @@ __acquires(ci->lock)
|
||||
/******************************************************************************
|
||||
* ENDPT block
|
||||
*****************************************************************************/
|
||||
/**
|
||||
/*
|
||||
* ep_enable: configure endpoint, making it usable
|
||||
*
|
||||
* Check usb_ep_enable() at "usb_gadget.h" for details
|
||||
@ -1384,7 +1399,7 @@ static int ep_enable(struct usb_ep *ep,
|
||||
return retval;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_disable: endpoint is no longer usable
|
||||
*
|
||||
* Check usb_ep_disable() at "usb_gadget.h" for details
|
||||
@ -1424,7 +1439,7 @@ static int ep_disable(struct usb_ep *ep)
|
||||
return retval;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_alloc_request: allocate a request object to use with this endpoint
|
||||
*
|
||||
* Check usb_ep_alloc_request() at "usb_gadget.h" for details
|
||||
@ -1445,7 +1460,7 @@ static struct usb_request *ep_alloc_request(struct usb_ep *ep, gfp_t gfp_flags)
|
||||
return (hwreq == NULL) ? NULL : &hwreq->req;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_free_request: frees a request object
|
||||
*
|
||||
* Check usb_ep_free_request() at "usb_gadget.h" for details
|
||||
@ -1478,7 +1493,7 @@ static void ep_free_request(struct usb_ep *ep, struct usb_request *req)
|
||||
spin_unlock_irqrestore(hwep->lock, flags);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_queue: queues (submits) an I/O request to an endpoint
|
||||
*
|
||||
* Check usb_ep_queue()* at usb_gadget.h" for details
|
||||
@ -1503,7 +1518,7 @@ static int ep_queue(struct usb_ep *ep, struct usb_request *req,
|
||||
return retval;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_dequeue: dequeues (cancels, unlinks) an I/O request from an endpoint
|
||||
*
|
||||
* Check usb_ep_dequeue() at "usb_gadget.h" for details
|
||||
@ -1547,7 +1562,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_set_halt: sets the endpoint halt feature
|
||||
*
|
||||
* Check usb_ep_set_halt() at "usb_gadget.h" for details
|
||||
@ -1557,7 +1572,7 @@ static int ep_set_halt(struct usb_ep *ep, int value)
|
||||
return _ep_set_halt(ep, value, true);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_set_wedge: sets the halt feature and ignores clear requests
|
||||
*
|
||||
* Check usb_ep_set_wedge() at "usb_gadget.h" for details
|
||||
@ -1577,7 +1592,7 @@ static int ep_set_wedge(struct usb_ep *ep)
|
||||
return usb_ep_set_halt(ep);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ep_fifo_flush: flushes contents of a fifo
|
||||
*
|
||||
* Check usb_ep_fifo_flush() at "usb_gadget.h" for details
|
||||
@ -1603,7 +1618,7 @@ static void ep_fifo_flush(struct usb_ep *ep)
|
||||
spin_unlock_irqrestore(hwep->lock, flags);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* Endpoint-specific part of the API to the USB controller hardware
|
||||
* Check "usb_gadget.h" for details
|
||||
*/
|
||||
@ -1622,7 +1637,7 @@ static const struct usb_ep_ops usb_ep_ops = {
|
||||
/******************************************************************************
|
||||
* GADGET block
|
||||
*****************************************************************************/
|
||||
/**
|
||||
/*
|
||||
* ci_hdrc_gadget_connect: caller makes sure gadget driver is binded
|
||||
*/
|
||||
static void ci_hdrc_gadget_connect(struct usb_gadget *_gadget, int is_active)
|
||||
@ -1772,7 +1787,7 @@ static struct usb_ep *ci_udc_match_ep(struct usb_gadget *gadget,
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* Device operations part of the API to the USB controller hardware,
|
||||
* which don't involve endpoints (or i/o)
|
||||
* Check "usb_gadget.h" for details
|
||||
@ -1924,7 +1939,7 @@ static void ci_udc_stop_for_otg_fsm(struct ci_hdrc *ci)
|
||||
mutex_unlock(&ci->fsm.lock);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_udc_stop: unregister a gadget driver
|
||||
*/
|
||||
static int ci_udc_stop(struct usb_gadget *gadget)
|
||||
@ -1955,7 +1970,7 @@ static int ci_udc_stop(struct usb_gadget *gadget)
|
||||
/******************************************************************************
|
||||
* BUS block
|
||||
*****************************************************************************/
|
||||
/**
|
||||
/*
|
||||
* udc_irq: ci interrupt handler
|
||||
*
|
||||
* This function returns IRQ_HANDLED if the IRQ has been handled
|
||||
@ -2086,7 +2101,7 @@ free_qh_pool:
|
||||
return retval;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* ci_hdrc_gadget_destroy: parent remove must call this to remove UDC
|
||||
*
|
||||
* No interrupts active, the IRQ has been released
|
||||
@ -2136,7 +2151,7 @@ static void udc_id_switch_for_host(struct ci_hdrc *ci)
|
||||
|
||||
/**
|
||||
* ci_hdrc_gadget_init - initialize device related bits
|
||||
* ci: the controller
|
||||
* @ci: the controller
|
||||
*
|
||||
* This function initializes the gadget, if the device is "device capable".
|
||||
*/
|
||||
|
@@ -367,10 +367,10 @@ static u32 usbmisc_wakeup_setting(struct imx_usbmisc_data *data)
{
	u32 wakeup_setting = MX6_USB_OTG_WAKEUP_BITS;

	if (data->ext_id)
	if (data->ext_id || data->available_role != USB_DR_MODE_OTG)
		wakeup_setting &= ~MX6_BM_ID_WAKEUP;

	if (data->ext_vbus)
	if (data->ext_vbus || data->available_role == USB_DR_MODE_HOST)
		wakeup_setting &= ~MX6_BM_VBUS_WAKEUP;

	return wakeup_setting;
|
||||
@ -789,7 +789,7 @@ static int imx7d_charger_primary_detection(struct imx_usbmisc_data *data)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* Whole charger detection process:
|
||||
* 1. OPMODE override to be non-driving
|
||||
* 2. Data contact check
|
||||
|
@ -940,7 +940,8 @@ err:
|
||||
* @intf: usb interface the subdriver will associate with
|
||||
* @ep: interrupt endpoint to monitor for notifications
|
||||
* @bufsize: maximum message size to support for read/write
|
||||
*
|
||||
* @manage_power: call-back invoked during open and release to
|
||||
* manage the device's power
|
||||
* Create WDM usb class character device and associate it with intf
|
||||
* without binding, allowing another driver to manage the interface.
|
||||
*
|
||||
|
@ -1,5 +1,5 @@
|
||||
// SPDX-License-Identifier: GPL-2.0+
|
||||
/**
|
||||
/*
|
||||
* drivers/usb/class/usbtmc.c - USB Test & Measurement class driver
|
||||
*
|
||||
* Copyright (C) 2007 Stefan Kopp, Gechingen, Germany
|
||||
@ -2282,7 +2282,7 @@ static void usbtmc_interrupt(struct urb *urb)
|
||||
case -EOVERFLOW:
|
||||
dev_err(dev, "overflow with length %d, actual length is %d\n",
|
||||
data->iin_wMaxPacketSize, urb->actual_length);
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case -ECONNRESET:
|
||||
case -ENOENT:
|
||||
case -ESHUTDOWN:
|
||||
|
@ -40,6 +40,7 @@ config USB_CONN_GPIO
|
||||
tristate "USB GPIO Based Connection Detection Driver"
|
||||
depends on GPIOLIB
|
||||
select USB_ROLE_SWITCH
|
||||
select POWER_SUPPLY
|
||||
help
|
||||
The driver supports USB role switch between host and device via GPIO
|
||||
based USB cable detection, used typically if an input GPIO is used
|
||||
|
@ -1,8 +1,8 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/**
|
||||
/*
|
||||
* Common USB debugging functions
|
||||
*
|
||||
* Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
|
||||
* Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
|
||||
*
|
||||
* Authors: Felipe Balbi <balbi@ti.com>,
|
||||
* Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
||||
@@ -53,15 +53,15 @@ static const char *usb_decode_device_feature(u16 wValue)
static const char *usb_decode_test_mode(u16 wIndex)
{
	switch (wIndex) {
	case TEST_J:
	case USB_TEST_J:
		return ": TEST_J";
	case TEST_K:
	case USB_TEST_K:
		return ": TEST_K";
	case TEST_SE0_NAK:
	case USB_TEST_SE0_NAK:
		return ": TEST_SE0_NAK";
	case TEST_PACKET:
	case USB_TEST_PACKET:
		return ": TEST_PACKET";
	case TEST_FORCE_EN:
	case USB_TEST_FORCE_ENABLE:
		return ": TEST_FORCE_EN";
	default:
		return ": UNKNOWN";
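
The rename above, repeated across cdns3, dwc2 and the gadget drivers in this pull, moves everyone onto the shared USB_TEST_* selectors from <linux/usb/ch9.h>. A small sketch of their intended use, with a hypothetical helper name; per the USB 2.0 spec the selector travels in the high byte of wIndex of SET_FEATURE(TEST_MODE), which is why the cdns3 handler earlier in this diff shifts tmode right by eight before switching:

#include <linux/usb/ch9.h>

/* demo_test_mode_name() is an illustrative helper, not kernel code. */
static const char *demo_test_mode_name(u8 selector)
{
	switch (selector) {
	case USB_TEST_J:		return "TEST_J";
	case USB_TEST_K:		return "TEST_K";
	case USB_TEST_SE0_NAK:		return "TEST_SE0_NAK";
	case USB_TEST_PACKET:		return "TEST_PACKET";
	case USB_TEST_FORCE_ENABLE:	return "TEST_FORCE_ENABLE";
	default:			return "unknown";
	}
}
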
@ -207,7 +207,7 @@ static void usb_decode_set_isoch_delay(__u8 wValue, char *str, size_t size)
|
||||
snprintf(str, size, "Set Isochronous Delay(Delay = %d ns)", wValue);
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* usb_decode_ctrl - returns a string representation of ctrl request
|
||||
*/
|
||||
const char *usb_decode_ctrl(char *str, size_t size, __u8 bRequestType,
|
||||
|
@ -1,5 +1,5 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/**
|
||||
/*
|
||||
* ulpi.c - USB ULPI PHY bus
|
||||
*
|
||||
* Copyright (C) 2015 Intel Corporation
|
||||
@ -143,6 +143,7 @@ static const struct device_type ulpi_dev_type = {
|
||||
/**
|
||||
* ulpi_register_driver - register a driver with the ULPI bus
|
||||
* @drv: driver being registered
|
||||
* @module: ends up being THIS_MODULE
|
||||
*
|
||||
* Registers a driver with the ULPI bus.
|
||||
*/
|
||||
@ -290,7 +291,7 @@ EXPORT_SYMBOL_GPL(ulpi_register_interface);
|
||||
|
||||
/**
|
||||
* ulpi_unregister_interface - unregister ULPI interface
|
||||
* @intrf: struct ulpi_interface
|
||||
* @ulpi: struct ulpi_interface
|
||||
*
|
||||
* Unregisters a ULPI device and it's interface that was created with
|
||||
* ulpi_create_interface().
|
||||
|
@ -17,6 +17,7 @@
|
||||
#include <linux/of.h>
|
||||
#include <linux/pinctrl/consumer.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/power_supply.h>
|
||||
#include <linux/regulator/consumer.h>
|
||||
#include <linux/usb/role.h>
|
||||
|
||||
@ -38,9 +39,12 @@ struct usb_conn_info {
|
||||
struct gpio_desc *vbus_gpiod;
|
||||
int id_irq;
|
||||
int vbus_irq;
|
||||
|
||||
struct power_supply_desc desc;
|
||||
struct power_supply *charger;
|
||||
};
|
||||
|
||||
/**
|
||||
/*
|
||||
* "DEVICE" = VBUS and "HOST" = !ID, so we have:
|
||||
* Both "DEVICE" and "HOST" can't be set as active at the same time
|
||||
* so if "HOST" is active (i.e. ID is 0) we keep "DEVICE" inactive
|
||||
@ -104,6 +108,8 @@ static void usb_conn_detect_cable(struct work_struct *work)
|
||||
|
||||
dev_dbg(info->dev, "vbus regulator is %s\n",
|
||||
regulator_is_enabled(info->vbus) ? "enabled" : "disabled");
|
||||
|
||||
power_supply_changed(info->charger);
|
||||
}
|
||||
|
||||
static void usb_conn_queue_dwork(struct usb_conn_info *info,
|
||||
@ -121,10 +127,35 @@ static irqreturn_t usb_conn_isr(int irq, void *dev_id)
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static enum power_supply_property usb_charger_properties[] = {
|
||||
POWER_SUPPLY_PROP_ONLINE,
|
||||
};
|
||||
|
||||
static int usb_charger_get_property(struct power_supply *psy,
|
||||
enum power_supply_property psp,
|
||||
union power_supply_propval *val)
|
||||
{
|
||||
struct usb_conn_info *info = power_supply_get_drvdata(psy);
|
||||
|
||||
switch (psp) {
|
||||
case POWER_SUPPLY_PROP_ONLINE:
|
||||
val->intval = info->last_role == USB_ROLE_DEVICE;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int usb_conn_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct power_supply_desc *desc;
|
||||
struct usb_conn_info *info;
|
||||
struct power_supply_config cfg = {
|
||||
.of_node = dev->of_node,
|
||||
};
|
||||
int ret = 0;
|
||||
|
||||
info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
|
||||
@ -203,6 +234,20 @@ static int usb_conn_probe(struct platform_device *pdev)
|
||||
}
|
||||
}
|
||||
|
||||
desc = &info->desc;
|
||||
desc->name = "usb-charger";
|
||||
desc->properties = usb_charger_properties;
|
||||
desc->num_properties = ARRAY_SIZE(usb_charger_properties);
|
||||
desc->get_property = usb_charger_get_property;
|
||||
desc->type = POWER_SUPPLY_TYPE_USB;
|
||||
cfg.drv_data = info;
|
||||
|
||||
info->charger = devm_power_supply_register(dev, desc, &cfg);
|
||||
if (IS_ERR(info->charger)) {
|
||||
dev_err(dev, "Unable to register charger\n");
|
||||
return PTR_ERR(info->charger);
|
||||
}
|
||||
|
||||
platform_set_drvdata(pdev, info);
|
||||
|
||||
/* Perform initial detection */
|
||||
|
@@ -55,18 +55,18 @@ config USB_OTG
	  Select this only if your board has Mini-AB/Micro-AB
	  connector.

config USB_OTG_WHITELIST
config USB_OTG_PRODUCTLIST
	bool "Rely on OTG and EH Targeted Peripherals List"
	depends on USB
	help
	  If you say Y here, the "otg_whitelist.h" file will be used as a
	  product whitelist, so USB peripherals not listed there will be
	  If you say Y here, the "otg_productlist.h" file will be used as a
	  product list, so USB peripherals not listed there will be
	  rejected during enumeration. This behavior is required by the
	  USB OTG and EH specification for all devices not on your product's
	  "Targeted Peripherals List". "Embedded Hosts" are likewise
	  allowed to support only a limited number of peripherals.

config USB_OTG_BLACKLIST_HUB
config USB_OTG_DISABLE_EXTERNAL_HUB
	bool "Disable external hubs"
	depends on USB_OTG || EXPERT
	help
|
||||
|
@ -298,10 +298,10 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
|
||||
goto skip_to_next_endpoint_or_interface_descriptor;
|
||||
}
|
||||
|
||||
/* Ignore blacklisted endpoints */
|
||||
if (udev->quirks & USB_QUIRK_ENDPOINT_BLACKLIST) {
|
||||
if (usb_endpoint_is_blacklisted(udev, ifp, d)) {
|
||||
dev_warn(ddev, "config %d interface %d altsetting %d has a blacklisted endpoint with address 0x%X, skipping\n",
|
||||
/* Ignore some endpoints */
|
||||
if (udev->quirks & USB_QUIRK_ENDPOINT_IGNORE) {
|
||||
if (usb_endpoint_is_ignored(udev, ifp, d)) {
|
||||
dev_warn(ddev, "config %d interface %d altsetting %d has an ignored endpoint with address 0x%X, skipping\n",
|
||||
cfgno, inum, asnum,
|
||||
d->bEndpointAddress);
|
||||
goto skip_to_next_endpoint_or_interface_descriptor;
|
||||
@ -427,7 +427,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
|
||||
i = maxp & (BIT(12) | BIT(11));
|
||||
maxp &= ~i;
|
||||
}
|
||||
/* fallthrough */
|
||||
fallthrough;
|
||||
default:
|
||||
maxpacket_maxes = high_speed_maxpacket_maxes;
|
||||
break;
|
||||
|
@ -133,6 +133,10 @@ static const struct class_info clas_info[] = {
|
||||
{USB_CLASS_CSCID, "scard"},
|
||||
{USB_CLASS_CONTENT_SEC, "c-sec"},
|
||||
{USB_CLASS_VIDEO, "video"},
|
||||
{USB_CLASS_PERSONAL_HEALTHCARE, "perhc"},
|
||||
{USB_CLASS_AUDIO_VIDEO, "av"},
|
||||
{USB_CLASS_BILLBOARD, "blbrd"},
|
||||
{USB_CLASS_USB_TYPE_C_BRIDGE, "bridg"},
|
||||
{USB_CLASS_WIRELESS_CONTROLLER, "wlcon"},
|
||||
{USB_CLASS_MISC, "misc"},
|
||||
{USB_CLASS_APP_SPEC, "app."},
|
||||
|
@ -1102,22 +1102,20 @@ static int usbdev_release(struct inode *inode, struct file *file)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int proc_control(struct usb_dev_state *ps, void __user *arg)
|
||||
static int do_proc_control(struct usb_dev_state *ps,
|
||||
struct usbdevfs_ctrltransfer *ctrl)
|
||||
{
|
||||
struct usb_device *dev = ps->dev;
|
||||
struct usbdevfs_ctrltransfer ctrl;
|
||||
unsigned int tmo;
|
||||
unsigned char *tbuf;
|
||||
unsigned wLength;
|
||||
int i, pipe, ret;
|
||||
|
||||
if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
|
||||
return -EFAULT;
|
||||
ret = check_ctrlrecip(ps, ctrl.bRequestType, ctrl.bRequest,
|
||||
ctrl.wIndex);
|
||||
ret = check_ctrlrecip(ps, ctrl->bRequestType, ctrl->bRequest,
|
||||
ctrl->wIndex);
|
||||
if (ret)
|
||||
return ret;
|
||||
wLength = ctrl.wLength; /* To suppress 64k PAGE_SIZE warning */
|
||||
wLength = ctrl->wLength; /* To suppress 64k PAGE_SIZE warning */
|
||||
if (wLength > PAGE_SIZE)
|
||||
return -EINVAL;
|
||||
ret = usbfs_increase_memory_usage(PAGE_SIZE + sizeof(struct urb) +
|
||||
@ -1129,52 +1127,52 @@ static int proc_control(struct usb_dev_state *ps, void __user *arg)
|
||||
ret = -ENOMEM;
|
||||
goto done;
|
||||
}
|
||||
tmo = ctrl.timeout;
|
||||
tmo = ctrl->timeout;
|
||||
snoop(&dev->dev, "control urb: bRequestType=%02x "
|
||||
"bRequest=%02x wValue=%04x "
|
||||
"wIndex=%04x wLength=%04x\n",
|
||||
ctrl.bRequestType, ctrl.bRequest, ctrl.wValue,
|
||||
ctrl.wIndex, ctrl.wLength);
|
||||
if (ctrl.bRequestType & 0x80) {
|
||||
ctrl->bRequestType, ctrl->bRequest, ctrl->wValue,
|
||||
ctrl->wIndex, ctrl->wLength);
|
||||
if (ctrl->bRequestType & 0x80) {
|
||||
pipe = usb_rcvctrlpipe(dev, 0);
|
||||
snoop_urb(dev, NULL, pipe, ctrl.wLength, tmo, SUBMIT, NULL, 0);
|
||||
snoop_urb(dev, NULL, pipe, ctrl->wLength, tmo, SUBMIT, NULL, 0);
|
||||
|
||||
usb_unlock_device(dev);
|
||||
i = usb_control_msg(dev, pipe, ctrl.bRequest,
|
||||
ctrl.bRequestType, ctrl.wValue, ctrl.wIndex,
|
||||
tbuf, ctrl.wLength, tmo);
|
||||
i = usb_control_msg(dev, pipe, ctrl->bRequest,
|
||||
ctrl->bRequestType, ctrl->wValue, ctrl->wIndex,
|
||||
tbuf, ctrl->wLength, tmo);
|
||||
usb_lock_device(dev);
|
||||
snoop_urb(dev, NULL, pipe, max(i, 0), min(i, 0), COMPLETE,
|
||||
tbuf, max(i, 0));
|
||||
if ((i > 0) && ctrl.wLength) {
|
||||
if (copy_to_user(ctrl.data, tbuf, i)) {
|
||||
if ((i > 0) && ctrl->wLength) {
|
||||
if (copy_to_user(ctrl->data, tbuf, i)) {
|
||||
ret = -EFAULT;
|
||||
goto done;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (ctrl.wLength) {
|
||||
if (copy_from_user(tbuf, ctrl.data, ctrl.wLength)) {
|
||||
if (ctrl->wLength) {
|
||||
if (copy_from_user(tbuf, ctrl->data, ctrl->wLength)) {
|
||||
ret = -EFAULT;
|
||||
goto done;
|
||||
}
|
||||
}
|
||||
pipe = usb_sndctrlpipe(dev, 0);
|
||||
snoop_urb(dev, NULL, pipe, ctrl.wLength, tmo, SUBMIT,
|
||||
tbuf, ctrl.wLength);
|
||||
snoop_urb(dev, NULL, pipe, ctrl->wLength, tmo, SUBMIT,
|
||||
tbuf, ctrl->wLength);
|
||||
|
||||
usb_unlock_device(dev);
|
||||
i = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), ctrl.bRequest,
|
||||
ctrl.bRequestType, ctrl.wValue, ctrl.wIndex,
|
||||
tbuf, ctrl.wLength, tmo);
|
||||
i = usb_control_msg(dev, usb_sndctrlpipe(dev, 0), ctrl->bRequest,
|
||||
ctrl->bRequestType, ctrl->wValue, ctrl->wIndex,
|
||||
tbuf, ctrl->wLength, tmo);
|
||||
usb_lock_device(dev);
|
||||
snoop_urb(dev, NULL, pipe, max(i, 0), min(i, 0), COMPLETE, NULL, 0);
|
||||
}
|
||||
if (i < 0 && i != -EPIPE) {
|
||||
dev_printk(KERN_DEBUG, &dev->dev, "usbfs: USBDEVFS_CONTROL "
|
||||
"failed cmd %s rqt %u rq %u len %u ret %d\n",
|
||||
current->comm, ctrl.bRequestType, ctrl.bRequest,
|
||||
ctrl.wLength, i);
|
||||
current->comm, ctrl->bRequestType, ctrl->bRequest,
|
||||
ctrl->wLength, i);
|
||||
}
|
||||
ret = i;
|
||||
done:
|
||||
@ -1184,30 +1182,37 @@ static int proc_control(struct usb_dev_state *ps, void __user *arg)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
|
||||
static int proc_control(struct usb_dev_state *ps, void __user *arg)
|
||||
{
|
||||
struct usbdevfs_ctrltransfer ctrl;
|
||||
|
||||
if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
|
||||
return -EFAULT;
|
||||
return do_proc_control(ps, &ctrl);
|
||||
}
|
||||
|
||||
static int do_proc_bulk(struct usb_dev_state *ps,
|
||||
struct usbdevfs_bulktransfer *bulk)
|
||||
{
|
||||
struct usb_device *dev = ps->dev;
|
||||
struct usbdevfs_bulktransfer bulk;
|
||||
unsigned int tmo, len1, pipe;
|
||||
int len2;
|
||||
unsigned char *tbuf;
|
||||
int i, ret;
|
||||
|
||||
if (copy_from_user(&bulk, arg, sizeof(bulk)))
|
||||
return -EFAULT;
|
||||
ret = findintfep(ps->dev, bulk.ep);
|
||||
ret = findintfep(ps->dev, bulk->ep);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
ret = checkintf(ps, ret);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (bulk.ep & USB_DIR_IN)
|
||||
pipe = usb_rcvbulkpipe(dev, bulk.ep & 0x7f);
|
||||
if (bulk->ep & USB_DIR_IN)
|
||||
pipe = usb_rcvbulkpipe(dev, bulk->ep & 0x7f);
|
||||
else
|
||||
pipe = usb_sndbulkpipe(dev, bulk.ep & 0x7f);
|
||||
if (!usb_maxpacket(dev, pipe, !(bulk.ep & USB_DIR_IN)))
|
||||
pipe = usb_sndbulkpipe(dev, bulk->ep & 0x7f);
|
||||
if (!usb_maxpacket(dev, pipe, !(bulk->ep & USB_DIR_IN)))
|
||||
return -EINVAL;
|
||||
len1 = bulk.len;
|
||||
len1 = bulk->len;
|
||||
if (len1 >= (INT_MAX - sizeof(struct urb)))
|
||||
return -EINVAL;
|
||||
ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb));
|
||||
@ -1218,8 +1223,8 @@ static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
|
||||
ret = -ENOMEM;
|
||||
goto done;
|
||||
}
|
||||
tmo = bulk.timeout;
|
||||
if (bulk.ep & 0x80) {
|
||||
tmo = bulk->timeout;
|
||||
if (bulk->ep & 0x80) {
|
||||
snoop_urb(dev, NULL, pipe, len1, tmo, SUBMIT, NULL, 0);
|
||||
|
||||
usb_unlock_device(dev);
|
||||
@ -1228,14 +1233,14 @@ static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
|
||||
snoop_urb(dev, NULL, pipe, len2, i, COMPLETE, tbuf, len2);
|
||||
|
||||
if (!i && len2) {
|
||||
if (copy_to_user(bulk.data, tbuf, len2)) {
|
||||
if (copy_to_user(bulk->data, tbuf, len2)) {
|
||||
ret = -EFAULT;
|
||||
goto done;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (len1) {
|
||||
if (copy_from_user(tbuf, bulk.data, len1)) {
|
||||
if (copy_from_user(tbuf, bulk->data, len1)) {
|
||||
ret = -EFAULT;
|
||||
goto done;
|
||||
}
|
||||
@ -1254,6 +1259,15 @@ static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
|
||||
{
|
||||
struct usbdevfs_bulktransfer bulk;
|
||||
|
||||
if (copy_from_user(&bulk, arg, sizeof(bulk)))
|
||||
return -EFAULT;
|
||||
return do_proc_bulk(ps, &bulk);
|
||||
}
|
||||
|
||||
static void check_reset_of_active_ep(struct usb_device *udev,
|
||||
unsigned int epnum, char *ioctl_name)
|
||||
{
|
||||
@ -2013,33 +2027,31 @@ static int proc_reapurbnonblock(struct usb_dev_state *ps, void __user *arg)
|
||||
static int proc_control_compat(struct usb_dev_state *ps,
|
||||
struct usbdevfs_ctrltransfer32 __user *p32)
|
||||
{
|
||||
struct usbdevfs_ctrltransfer __user *p;
|
||||
__u32 udata;
|
||||
p = compat_alloc_user_space(sizeof(*p));
|
||||
if (copy_in_user(p, p32, (sizeof(*p32) - sizeof(compat_caddr_t))) ||
|
||||
get_user(udata, &p32->data) ||
|
||||
put_user(compat_ptr(udata), &p->data))
|
||||
struct usbdevfs_ctrltransfer ctrl;
|
||||
u32 udata;
|
||||
|
||||
if (copy_from_user(&ctrl, p32, sizeof(*p32) - sizeof(compat_caddr_t)) ||
|
||||
get_user(udata, &p32->data))
|
||||
return -EFAULT;
|
||||
return proc_control(ps, p);
|
||||
ctrl.data = compat_ptr(udata);
|
||||
return do_proc_control(ps, &ctrl);
|
||||
}
|
||||
|
||||
static int proc_bulk_compat(struct usb_dev_state *ps,
|
||||
struct usbdevfs_bulktransfer32 __user *p32)
|
||||
{
|
||||
struct usbdevfs_bulktransfer __user *p;
|
||||
compat_uint_t n;
|
||||
struct usbdevfs_bulktransfer bulk;
|
||||
compat_caddr_t addr;
|
||||
|
||||
p = compat_alloc_user_space(sizeof(*p));
|
||||
|
||||
if (get_user(n, &p32->ep) || put_user(n, &p->ep) ||
|
||||
get_user(n, &p32->len) || put_user(n, &p->len) ||
|
||||
get_user(n, &p32->timeout) || put_user(n, &p->timeout) ||
|
||||
get_user(addr, &p32->data) || put_user(compat_ptr(addr), &p->data))
|
||||
if (get_user(bulk.ep, &p32->ep) ||
|
||||
get_user(bulk.len, &p32->len) ||
|
||||
get_user(bulk.timeout, &p32->timeout) ||
|
||||
get_user(addr, &p32->data))
|
||||
return -EFAULT;
|
||||
|
||||
return proc_bulk(ps, p);
|
||||
bulk.data = compat_ptr(addr);
|
||||
return do_proc_bulk(ps, &bulk);
|
||||
}
|
||||
|
||||
static int proc_disconnectsignal_compat(struct usb_dev_state *ps, void __user *arg)
|
||||
{
|
||||
struct usbdevfs_disconnectsignal32 ds;
|
||||
|
@ -205,8 +205,6 @@ static int __check_usb_generic(struct device_driver *drv, void *data)
|
||||
udrv = to_usb_device_driver(drv);
|
||||
if (udrv == &usb_generic_driver)
|
||||
return 0;
|
||||
if (!udrv->id_table)
|
||||
return 0;
|
||||
|
||||
return usb_device_match_id(udev, udrv->id_table) != NULL;
|
||||
}
|
||||
|
@ -194,20 +194,21 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id,
|
||||
* make sure irq setup is not touched for xhci in generic hcd code
|
||||
*/
|
||||
if ((driver->flags & HCD_MASK) < HCD_USB3) {
|
||||
if (!dev->irq) {
|
||||
retval = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY | PCI_IRQ_MSI);
|
||||
if (retval < 0) {
|
||||
dev_err(&dev->dev,
|
||||
"Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
|
||||
pci_name(dev));
|
||||
retval = -ENODEV;
|
||||
goto disable_pci;
|
||||
}
|
||||
hcd_irq = dev->irq;
|
||||
hcd_irq = pci_irq_vector(dev, 0);
|
||||
}
|
||||
|
||||
hcd = usb_create_hcd(driver, &dev->dev, pci_name(dev));
|
||||
if (!hcd) {
|
||||
retval = -ENOMEM;
|
||||
goto disable_pci;
|
||||
goto free_irq_vectors;
|
||||
}
|
||||
|
||||
hcd->amd_resume_bug = (usb_hcd_amd_remote_wakeup_quirk(dev) &&
|
||||
@ -286,6 +287,9 @@ int usb_hcd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id,
|
||||
|
||||
put_hcd:
|
||||
usb_put_hcd(hcd);
|
||||
free_irq_vectors:
|
||||
if ((driver->flags & HCD_MASK) < HCD_USB3)
|
||||
pci_free_irq_vectors(dev);
|
||||
disable_pci:
|
||||
pci_disable_device(dev);
|
||||
dev_err(&dev->dev, "init %s fail, %d\n", pci_name(dev), retval);
|
||||
@ -343,6 +347,8 @@ void usb_hcd_pci_remove(struct pci_dev *dev)
|
||||
up_read(&companions_rwsem);
|
||||
}
|
||||
usb_put_hcd(hcd);
|
||||
if ((hcd->driver->flags & HCD_MASK) < HCD_USB3)
|
||||
pci_free_irq_vectors(dev);
|
||||
pci_disable_device(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(usb_hcd_pci_remove);
|
||||
@ -454,7 +460,7 @@ static int suspend_common(struct device *dev, bool do_wakeup)
|
||||
* synchronized here.
|
||||
*/
|
||||
if (!hcd->msix_enabled)
|
||||
synchronize_irq(pci_dev->irq);
|
||||
synchronize_irq(pci_irq_vector(pci_dev, 0));
|
||||
|
||||
/* Downstream ports from this root hub should already be quiesced, so
|
||||
* there will be no DMA activity. Now we can shut down the upstream
|
||||
|
@ -564,7 +564,7 @@ static int rh_call_control (struct usb_hcd *hcd, struct urb *urb)
|
||||
case DeviceRequest | USB_REQ_GET_CONFIGURATION:
|
||||
tbuf[0] = 1;
|
||||
len = 1;
|
||||
/* FALLTHROUGH */
|
||||
fallthrough;
|
||||
case DeviceOutRequest | USB_REQ_SET_CONFIGURATION:
|
||||
break;
|
||||
case DeviceRequest | USB_REQ_GET_DESCRIPTOR:
|
||||
@ -633,7 +633,7 @@ static int rh_call_control (struct usb_hcd *hcd, struct urb *urb)
|
||||
case DeviceRequest | USB_REQ_GET_INTERFACE:
|
||||
tbuf[0] = 0;
|
||||
len = 1;
|
||||
/* FALLTHROUGH */
|
||||
fallthrough;
|
||||
case DeviceOutRequest | USB_REQ_SET_INTERFACE:
|
||||
break;
|
||||
case DeviceOutRequest | USB_REQ_SET_ADDRESS:
|
||||
@ -651,7 +651,7 @@ static int rh_call_control (struct usb_hcd *hcd, struct urb *urb)
|
||||
tbuf[0] = 0;
|
||||
tbuf[1] = 0;
|
||||
len = 2;
|
||||
/* FALLTHROUGH */
|
||||
fallthrough;
|
||||
case EndpointOutRequest | USB_REQ_CLEAR_FEATURE:
|
||||
case EndpointOutRequest | USB_REQ_SET_FEATURE:
|
||||
dev_dbg (hcd->self.controller, "no endpoint features yet\n");
|
||||
@ -2726,7 +2726,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
|
||||
case HCD_USB32:
|
||||
rhdev->rx_lanes = 2;
|
||||
rhdev->tx_lanes = 2;
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
case HCD_USB31:
|
||||
rhdev->speed = USB_SPEED_SUPER_PLUS;
|
||||
break;
|
@ -35,7 +35,7 @@
|
||||
#include <asm/byteorder.h>
|
||||
|
||||
#include "hub.h"
|
||||
#include "otg_whitelist.h"
|
||||
#include "otg_productlist.h"
|
||||
|
||||
#define USB_VENDOR_GENESYS_LOGIC 0x05e3
|
||||
#define USB_VENDOR_SMSC 0x0424
|
||||
@ -1834,7 +1834,7 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
|
||||
return -E2BIG;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_USB_OTG_BLACKLIST_HUB
|
||||
#ifdef CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB
|
||||
if (hdev->parent) {
|
||||
dev_warn(&intf->dev, "ignoring external hub\n");
|
||||
return -ENODEV;
|
||||
@ -2403,7 +2403,7 @@ static int usb_enumerate_device(struct usb_device *udev)
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
if (IS_ENABLED(CONFIG_USB_OTG_WHITELIST) && hcd->tpl_support &&
|
||||
if (IS_ENABLED(CONFIG_USB_OTG_PRODUCTLIST) && hcd->tpl_support &&
|
||||
!is_targeted(udev)) {
|
||||
/* Maybe it can talk to us, though we can't talk to it.
|
||||
* (Includes HNP test device.)
|
||||
@ -4698,7 +4698,7 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
|
||||
r = 0;
|
||||
break;
|
||||
}
|
||||
/* FALL THROUGH */
|
||||
fallthrough;
|
||||
default:
|
||||
if (r == 0)
|
||||
r = -EPROTO;
|
||||
|
@ -34,7 +34,7 @@ struct usbport_trig_port {
|
||||
* Helpers
|
||||
***************************************/
|
||||
|
||||
/**
|
||||
/*
|
||||
* usbport_trig_usb_dev_observed - Check if dev is connected to observed port
|
||||
*/
|
||||
static bool usbport_trig_usb_dev_observed(struct usbport_trig_data *usbport_data,
|
||||
@ -64,7 +64,7 @@ static int usbport_trig_usb_dev_check(struct usb_device *usb_dev, void *data)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
/*
|
||||
* usbport_trig_update_count - Recalculate amount of connected matching devices
|
||||
*/
|
||||
static void usbport_trig_update_count(struct usbport_trig_data *usbport_data)
|
||||
@ -123,7 +123,7 @@ static const struct attribute_group ports_group = {
|
||||
* Adding & removing ports
|
||||
***************************************/
|
||||
|
||||
/**
|
||||
/*
|
||||
* usbport_trig_port_observed - Check if port should be observed
|
||||
*/
|
||||
static bool usbport_trig_port_observed(struct usbport_trig_data *usbport_data,
|
||||
|
@ -45,7 +45,7 @@ EXPORT_SYMBOL_GPL(usb_of_get_device_node);
|
||||
*
|
||||
* Determine whether a USB device has a so called combined node which is
|
||||
* shared with its sole interface. This is the case if and only if the device
|
||||
* has a node and its decriptors report the following:
|
||||
* has a node and its descriptors report the following:
|
||||
*
|
||||
* 1) bDeviceClass is 0 or 9, and
|
||||
* 2) bNumConfigurations is 1, and
|
||||
|
@ -1,18 +1,14 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0+ */
|
||||
/*
|
||||
* drivers/usb/core/otg_whitelist.h
|
||||
*
|
||||
* Copyright (C) 2004 Texas Instruments
|
||||
*/
|
||||
/* Copyright (C) 2004 Texas Instruments */
|
||||
|
||||
/*
|
||||
* This OTG and Embedded Host Whitelist is "Targeted Peripheral List".
|
||||
* This OTG and Embedded Host list is "Targeted Peripheral List".
|
||||
* It should mostly use of USB_DEVICE() or USB_DEVICE_VER() entries..
|
||||
*
|
||||
* YOU _SHOULD_ CHANGE THIS LIST TO MATCH YOUR PRODUCT AND ITS TESTING!
|
||||
*/
|
||||
|
||||
static struct usb_device_id whitelist_table[] = {
|
||||
static struct usb_device_id productlist_table[] = {
|
||||
|
||||
/* hubs are optional in OTG, but very handy ... */
|
||||
{ USB_DEVICE_INFO(USB_CLASS_HUB, 0, 0), },
|
||||
@ -44,7 +40,7 @@ static struct usb_device_id whitelist_table[] = {
|
||||
|
||||
static int is_targeted(struct usb_device *dev)
|
||||
{
|
||||
struct usb_device_id *id = whitelist_table;
|
||||
struct usb_device_id *id = productlist_table;
|
||||
|
||||
/* HNP test device is _never_ targeted (see OTG spec 6.6.6) */
|
||||
if ((le16_to_cpu(dev->descriptor.idVendor) == 0x1a0a &&
|
||||
@ -59,7 +55,7 @@ static int is_targeted(struct usb_device *dev)
|
||||
/* NOTE: can't use usb_match_id() since interface caches
|
||||
* aren't set up yet. this is cut/paste from that code.
|
||||
*/
|
||||
for (id = whitelist_table; id->match_flags; id++) {
|
||||
for (id = productlist_table; id->match_flags; id++) {
|
||||
if ((id->match_flags & USB_DEVICE_ID_MATCH_VENDOR) &&
|
||||
id->idVendor != le16_to_cpu(dev->descriptor.idVendor))
|
||||
continue;
|
@ -25,17 +25,23 @@ static unsigned int quirk_count;
|
||||
|
||||
static char quirks_param[128];
|
||||
|
||||
static int quirks_param_set(const char *val, const struct kernel_param *kp)
|
||||
static int quirks_param_set(const char *value, const struct kernel_param *kp)
|
||||
{
|
||||
char *p, *field;
|
||||
char *val, *p, *field;
|
||||
u16 vid, pid;
|
||||
u32 flags;
|
||||
size_t i;
|
||||
int err;
|
||||
|
||||
val = kstrdup(value, GFP_KERNEL);
|
||||
if (!val)
|
||||
return -ENOMEM;
|
||||
|
||||
err = param_set_copystring(val, kp);
|
||||
if (err)
|
||||
if (err) {
|
||||
kfree(val);
|
||||
return err;
|
||||
}
|
||||
|
||||
mutex_lock(&quirk_mutex);
|
||||
|
||||
@ -60,10 +66,11 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
|
||||
if (!quirk_list) {
|
||||
quirk_count = 0;
|
||||
mutex_unlock(&quirk_mutex);
|
||||
kfree(val);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
for (i = 0, p = (char *)val; p && *p;) {
|
||||
for (i = 0, p = val; p && *p;) {
|
||||
/* Each entry consists of VID:PID:flags */
|
||||
field = strsep(&p, ":");
|
||||
if (!field)
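
The shape of this fix, reduced to a sketch: the set hook receives a const string and strsep() writes into whatever it parses, so the hunk duplicates the input with kstrdup() and frees the copy on every exit path. The helper name and the parsing body are illustrative only:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

/* demo_parse_quirk_list() is a hypothetical helper, not the quirks code. */
static int demo_parse_quirk_list(const char *value)
{
	char *copy, *p, *field;

	copy = kstrdup(value, GFP_KERNEL);
	if (!copy)
		return -ENOMEM;

	for (p = copy; p && *p; ) {
		field = strsep(&p, ":");	/* VID, PID, flags in turn */
		if (!field || !*field)
			break;
		/* parse one field here */
	}

	kfree(copy);
	return 0;
}
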
|
||||
@ -144,6 +151,7 @@ static int quirks_param_set(const char *val, const struct kernel_param *kp)
|
||||
|
||||
unlock:
|
||||
mutex_unlock(&quirk_mutex);
|
||||
kfree(val);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -360,7 +368,7 @@ static const struct usb_device_id usb_quirk_list[] = {
|
||||
|
||||
/* Sound Devices USBPre2 */
|
||||
{ USB_DEVICE(0x0926, 0x0202), .driver_info =
|
||||
USB_QUIRK_ENDPOINT_BLACKLIST },
|
||||
USB_QUIRK_ENDPOINT_IGNORE },
|
||||
|
||||
/* Keytouch QWERTY Panel keyboard */
|
||||
{ USB_DEVICE(0x0926, 0x3333), .driver_info =
|
||||
@ -494,24 +502,24 @@ static const struct usb_device_id usb_amd_resume_quirk_list[] = {
|
||||
};
|
||||
|
||||
/*
|
||||
* Entries for blacklisted endpoints that should be ignored when parsing
|
||||
* configuration descriptors.
|
||||
* Entries for endpoints that should be ignored when parsing configuration
|
||||
* descriptors.
|
||||
*
|
||||
* Matched for devices with USB_QUIRK_ENDPOINT_BLACKLIST.
|
||||
* Matched for devices with USB_QUIRK_ENDPOINT_IGNORE.
|
||||
*/
|
||||
static const struct usb_device_id usb_endpoint_blacklist[] = {
|
||||
static const struct usb_device_id usb_endpoint_ignore[] = {
|
||||
{ USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0202, 1), .driver_info = 0x85 },
|
||||
{ }
|
||||
};
|
||||
|
||||
bool usb_endpoint_is_blacklisted(struct usb_device *udev,
|
||||
struct usb_host_interface *intf,
|
||||
struct usb_endpoint_descriptor *epd)
|
||||
bool usb_endpoint_is_ignored(struct usb_device *udev,
|
||||
struct usb_host_interface *intf,
|
||||
struct usb_endpoint_descriptor *epd)
|
||||
{
|
||||
const struct usb_device_id *id;
|
||||
unsigned int address;
|
||||
|
||||
for (id = usb_endpoint_blacklist; id->match_flags; ++id) {
|
||||
for (id = usb_endpoint_ignore; id->match_flags; ++id) {
|
||||
if (!usb_match_device(udev, id))
|
||||
continue;
|
||||
|
||||
|
@ -486,7 +486,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
|
||||
case USB_ENDPOINT_XFER_INT:
|
||||
if (is_out)
|
||||
allowed |= URB_ZERO_PACKET;
|
||||
/* FALLTHROUGH */
|
||||
fallthrough;
|
||||
default: /* all non-iso endpoints */
|
||||
if (!is_out)
|
||||
allowed |= URB_SHORT_NOT_OK;
|
||||
@ -519,7 +519,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
|
||||
if ((urb->interval < 6)
|
||||
&& (xfertype == USB_ENDPOINT_XFER_INT))
|
||||
return -EINVAL;
|
||||
/* fall through */
|
||||
fallthrough;
|
||||
default:
|
||||
if (urb->interval <= 0)
|
||||
return -EINVAL;
|
||||
|
@ -19,9 +19,8 @@
|
||||
* just a collection of helper routines that implement the
|
||||
* generic USB things that the real drivers can use..
|
||||
*
|
||||
* Think of this as a "USB library" rather than anything else.
|
||||
* It should be considered a slave, with no callbacks. Callbacks
|
||||
* are evil.
|
||||
* Think of this as a "USB library" rather than anything else,
|
||||
* with no callbacks. Callbacks are evil.
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
|
@ -37,7 +37,7 @@ extern void usb_authorize_interface(struct usb_interface *);
|
||||
extern void usb_detect_quirks(struct usb_device *udev);
|
||||
extern void usb_detect_interface_quirks(struct usb_device *udev);
|
||||
extern void usb_release_quirk_list(void);
|
||||
extern bool usb_endpoint_is_blacklisted(struct usb_device *udev,
|
||||
extern bool usb_endpoint_is_ignored(struct usb_device *udev,
|
||||
struct usb_host_interface *intf,
|
||||
struct usb_endpoint_descriptor *epd);
|
||||
extern int usb_remove_device(struct usb_device *udev);
|
||||
|
@ -1036,7 +1036,7 @@ struct dwc2_hregs_backup {
|
||||
* @fifo_mem: Total internal RAM for FIFOs (bytes)
|
||||
* @fifo_map: Each bit intend for concrete fifo. If that bit is set,
|
||||
* then that fifo is used
|
||||
* @gadget: Represents a usb slave device
|
||||
* @gadget: Represents a usb gadget device
|
||||
* @connected: Used in slave mode. True if device connected with host
|
||||
* @eps_in: The IN endpoints being supplied to the gadget framework
|
||||
* @eps_out: The OUT endpoints being supplied to the gadget framework
|
||||
|
@ -37,15 +37,15 @@ static ssize_t testmode_write(struct file *file, const char __user *ubuf, size_t
|
||||
return -EFAULT;
|
||||
|
||||
if (!strncmp(buf, "test_j", 6))
|
||||
testmode = TEST_J;
|
||||
testmode = USB_TEST_J;
|
||||
else if (!strncmp(buf, "test_k", 6))
|
||||
testmode = TEST_K;
|
||||
testmode = USB_TEST_K;
|
||||
else if (!strncmp(buf, "test_se0_nak", 12))
|
||||
testmode = TEST_SE0_NAK;
|
||||
testmode = USB_TEST_SE0_NAK;
|
||||
else if (!strncmp(buf, "test_packet", 11))
|
||||
testmode = TEST_PACKET;
|
||||
testmode = USB_TEST_PACKET;
|
||||
else if (!strncmp(buf, "test_force_enable", 17))
|
||||
testmode = TEST_FORCE_EN;
|
||||
testmode = USB_TEST_FORCE_ENABLE;
|
||||
else
|
||||
testmode = 0;
|
||||
|
||||
@ -78,19 +78,19 @@ static int testmode_show(struct seq_file *s, void *unused)
|
||||
case 0:
|
||||
seq_puts(s, "no test\n");
|
||||
break;
|
||||
case TEST_J:
|
||||
case USB_TEST_J:
|
||||
seq_puts(s, "test_j\n");
|
||||
break;
|
||||
case TEST_K:
|
||||
case USB_TEST_K:
|
||||
seq_puts(s, "test_k\n");
|
||||
break;
|
||||
case TEST_SE0_NAK:
|
||||
case USB_TEST_SE0_NAK:
|
||||
seq_puts(s, "test_se0_nak\n");
|
||||
break;
|
||||
case TEST_PACKET:
|
||||
case USB_TEST_PACKET:
|
||||
seq_puts(s, "test_packet\n");
|
||||
break;
|
||||
case TEST_FORCE_EN:
|
||||
case USB_TEST_FORCE_ENABLE:
|
||||
seq_puts(s, "test_force_enable\n");
|
||||
break;
|
||||
default:
|
||||
|
@ -260,6 +260,7 @@ static void dwc2_gadget_wkup_alert_handler(struct dwc2_hsotg *hsotg)
|
||||
|
||||
gintsts2 = dwc2_readl(hsotg, GINTSTS2);
|
||||
gintmsk2 = dwc2_readl(hsotg, GINTMSK2);
|
||||
gintsts2 &= gintmsk2;
|
||||
|
||||
if (gintsts2 & GINTSTS2_WKUP_ALERT_INT) {
|
||||
dev_dbg(hsotg->dev, "%s: Wkup_Alert_Int\n", __func__);
|
||||
@@ -882,11 +883,10 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
 	struct dwc2_dma_desc *desc;
 	struct dwc2_hsotg *hsotg = hs_ep->parent;
 	u32 index;
-	u32 maxsize = 0;
 	u32 mask = 0;
 	u8 pid = 0;
 
-	maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask);
+	dwc2_gadget_get_desc_params(hs_ep, &mask);
 
 	index = hs_ep->next_desc;
 	desc = &hs_ep->desc_list[index];
@@ -1561,11 +1561,11 @@ int dwc2_hsotg_set_test_mode(struct dwc2_hsotg *hsotg, int testmode)
 
 	dctl &= ~DCTL_TSTCTL_MASK;
 	switch (testmode) {
-	case TEST_J:
-	case TEST_K:
-	case TEST_SE0_NAK:
-	case TEST_PACKET:
-	case TEST_FORCE_EN:
+	case USB_TEST_J:
+	case USB_TEST_K:
+	case USB_TEST_SE0_NAK:
+	case USB_TEST_PACKET:
+	case USB_TEST_FORCE_ENABLE:
 		dctl |= testmode << DCTL_TSTCTL_SHIFT;
 		break;
 	default:
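Because those selectors keep the 1..5 numbering the DWC cores expect, the driver can shift the selector straight into the DCTL test-control field, as the hunk above does. A hypothetical, generic sketch of that pattern (register layout invented for illustration):

	#include <linux/io.h>
	#include <linux/types.h>

	/*
	 * Hypothetical sketch: USB_TEST_J..USB_TEST_FORCE_ENABLE keep the 1..5
	 * numbering the controller expects, so the selector can be written
	 * straight into a DCTL-style test-control field.  The shift/mask here
	 * are made up; dwc2/dwc3 use their own register macros.
	 */
	static void example_program_test_mode(void __iomem *dctl, u32 sel, u32 shift, u32 mask)
	{
		u32 val = readl(dctl);

		val &= ~mask;		/* clear any previously selected test mode */
		val |= sel << shift;	/* selector value is used as the field value */
		writel(val, dctl);
	}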
@@ -2978,10 +2978,8 @@ static void dwc2_hsotg_epint(struct dwc2_hsotg *hsotg, unsigned int idx,
 	u32 epctl_reg = dir_in ? DIEPCTL(idx) : DOEPCTL(idx);
 	u32 epsiz_reg = dir_in ? DIEPTSIZ(idx) : DOEPTSIZ(idx);
 	u32 ints;
-	u32 ctrl;
 
 	ints = dwc2_gadget_read_ep_interrupts(hsotg, idx, dir_in);
-	ctrl = dwc2_readl(hsotg, epctl_reg);
 
 	/* Clear endpoint interrupts */
 	dwc2_writel(hsotg, ints, epint_reg);
@@ -3628,7 +3628,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
 			"SetPortFeature - USB_PORT_FEAT_SUSPEND\n");
 		if (windex != hsotg->otg_port)
 			goto error;
-		if (hsotg->params.power_down == 2)
+		if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_HIBERNATION)
 			dwc2_enter_hibernation(hsotg, 1);
 		else
 			dwc2_port_suspend(hsotg, windex);
@@ -3646,7 +3646,7 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
 		break;
 
 	case USB_PORT_FEAT_RESET:
-		if (hsotg->params.power_down == 2 &&
+		if (hsotg->params.power_down == DWC2_POWER_DOWN_PARAM_HIBERNATION &&
 		    hsotg->hibernated)
 			dwc2_exit_hibernation(hsotg, 0, 1, 1);
 		hprt0 = dwc2_read_hprt0(hsotg);
@@ -68,14 +68,14 @@ static void dwc2_set_his_params(struct dwc2_hsotg *hsotg)
 	p->ahbcfg = GAHBCFG_HBSTLEN_INCR16 <<
 		GAHBCFG_HBSTLEN_SHIFT;
 	p->change_speed_quirk = true;
-	p->power_down = false;
+	p->power_down = DWC2_POWER_DOWN_PARAM_NONE;
 }
 
 static void dwc2_set_s3c6400_params(struct dwc2_hsotg *hsotg)
 {
 	struct dwc2_core_params *p = &hsotg->params;
 
-	p->power_down = 0;
+	p->power_down = DWC2_POWER_DOWN_PARAM_NONE;
 	p->phy_utmi_width = 8;
 }
 
@@ -89,7 +89,7 @@ static void dwc2_set_rk_params(struct dwc2_hsotg *hsotg)
 	p->host_perio_tx_fifo_size = 256;
 	p->ahbcfg = GAHBCFG_HBSTLEN_INCR16 <<
 		GAHBCFG_HBSTLEN_SHIFT;
-	p->power_down = 0;
+	p->power_down = DWC2_POWER_DOWN_PARAM_NONE;
 }
 
 static void dwc2_set_ltq_params(struct dwc2_hsotg *hsotg)
@@ -319,11 +319,11 @@ static void dwc2_set_param_power_down(struct dwc2_hsotg *hsotg)
 	int val;
 
 	if (hsotg->hw_params.hibernation)
-		val = 2;
+		val = DWC2_POWER_DOWN_PARAM_HIBERNATION;
 	else if (hsotg->hw_params.power_optimized)
-		val = 1;
+		val = DWC2_POWER_DOWN_PARAM_PARTIAL;
 	else
-		val = 0;
+		val = DWC2_POWER_DOWN_PARAM_NONE;
 
 	hsotg->params.power_down = val;
 }
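The named constants replacing the 0/1/2 magic numbers appear to come from an enum in drivers/usb/dwc2/core.h along these lines (quoted from memory, so treat as a sketch):

	/*
	 * Sketch of the dwc2 power-down parameter; the ordering preserves the
	 * old magic numbers, so the comparisons in the hunks above keep their
	 * previous behaviour.  See drivers/usb/dwc2/core.h for the real enum.
	 */
	enum dwc2_power_down_param {
		DWC2_POWER_DOWN_PARAM_NONE,		/* was 0 */
		DWC2_POWER_DOWN_PARAM_PARTIAL,		/* was 1: partial power down */
		DWC2_POWER_DOWN_PARAM_HIBERNATION,	/* was 2: hibernation */
	};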
@@ -582,6 +582,7 @@ static int dwc2_driver_probe(struct platform_device *dev)
 	if (hsotg->gadget_enabled) {
 		retval = usb_add_gadget_udc(hsotg->dev, &hsotg->gadget);
 		if (retval) {
+			hsotg->gadget.udc = NULL;
 			dwc2_hsotg_remove(hsotg);
 			goto error_init;
 		}
@@ -593,7 +594,8 @@ error_init:
 	if (hsotg->params.activate_stm_id_vb_detection)
 		regulator_disable(hsotg->usb33d);
 error:
-	dwc2_lowlevel_hw_disable(hsotg);
+	if (hsotg->dr_mode != USB_DR_MODE_PERIPHERAL)
+		dwc2_lowlevel_hw_disable(hsotg);
 	return retval;
 }
 
@@ -2,7 +2,7 @@
 /**
  * core.c - DesignWare USB3 DRD Controller Core file
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -2,7 +2,7 @@
 /*
  * core.h - DesignWare USB3 DRD Core Header
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -2,7 +2,7 @@
 /**
  * debug.h - DesignWare USB3 DRD Controller Debug Header
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -2,7 +2,7 @@
 /**
  * debugfs.c - DesignWare USB3 DRD Controller DebugFS file
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -466,19 +466,19 @@ static int dwc3_testmode_show(struct seq_file *s, void *unused)
 	case 0:
 		seq_printf(s, "no test\n");
 		break;
-	case TEST_J:
+	case USB_TEST_J:
 		seq_printf(s, "test_j\n");
 		break;
-	case TEST_K:
+	case USB_TEST_K:
 		seq_printf(s, "test_k\n");
 		break;
-	case TEST_SE0_NAK:
+	case USB_TEST_SE0_NAK:
 		seq_printf(s, "test_se0_nak\n");
 		break;
-	case TEST_PACKET:
+	case USB_TEST_PACKET:
 		seq_printf(s, "test_packet\n");
 		break;
-	case TEST_FORCE_EN:
+	case USB_TEST_FORCE_ENABLE:
 		seq_printf(s, "test_force_enable\n");
 		break;
 	default:
@@ -506,15 +506,15 @@ static ssize_t dwc3_testmode_write(struct file *file,
 		return -EFAULT;
 
 	if (!strncmp(buf, "test_j", 6))
-		testmode = TEST_J;
+		testmode = USB_TEST_J;
 	else if (!strncmp(buf, "test_k", 6))
-		testmode = TEST_K;
+		testmode = USB_TEST_K;
 	else if (!strncmp(buf, "test_se0_nak", 12))
-		testmode = TEST_SE0_NAK;
+		testmode = USB_TEST_SE0_NAK;
 	else if (!strncmp(buf, "test_packet", 11))
-		testmode = TEST_PACKET;
+		testmode = USB_TEST_PACKET;
 	else if (!strncmp(buf, "test_force_enable", 17))
-		testmode = TEST_FORCE_EN;
+		testmode = USB_TEST_FORCE_ENABLE;
 	else
 		testmode = 0;
 
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * drd.c - DesignWare USB3 DRD Controller Dual-role support
  *
- * Copyright (C) 2017 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2017 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Roger Quadros <rogerq@ti.com>
  */
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * dwc3-haps.c - Synopsys HAPS PCI Specific glue layer
  *
  * Copyright (C) 2018 Synopsys, Inc.
@@ -2,7 +2,7 @@
 /**
  * dwc3-keystone.c - Keystone Specific Glue layer
  *
- * Copyright (C) 2010-2013 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2013 Texas Instruments Incorporated - https://www.ti.com
  *
  * Author: WingMan Kwok <w-kwok2@ti.com>
  */
@@ -737,13 +737,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
 		goto err_disable_clks;
 	}
 
-	ret = reset_control_reset(priv->reset);
+	ret = reset_control_deassert(priv->reset);
 	if (ret)
-		goto err_disable_clks;
+		goto err_assert_reset;
 
 	ret = dwc3_meson_g12a_get_phys(priv);
 	if (ret)
-		goto err_disable_clks;
+		goto err_assert_reset;
 
 	ret = priv->drvdata->setup_regmaps(priv, base);
 	if (ret)
@@ -752,7 +752,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
 	if (priv->vbus) {
 		ret = regulator_enable(priv->vbus);
 		if (ret)
-			goto err_disable_clks;
+			goto err_assert_reset;
 	}
 
 	/* Get dr_mode */
@@ -765,13 +765,13 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
 
 	ret = priv->drvdata->usb_init(priv);
 	if (ret)
-		goto err_disable_clks;
+		goto err_assert_reset;
 
 	/* Init PHYs */
 	for (i = 0 ; i < PHY_COUNT ; ++i) {
 		ret = phy_init(priv->phys[i]);
 		if (ret)
-			goto err_disable_clks;
+			goto err_assert_reset;
 	}
 
 	/* Set PHY Power */
@@ -809,6 +809,9 @@ err_phys_exit:
 	for (i = 0 ; i < PHY_COUNT ; ++i)
 		phy_exit(priv->phys[i]);
 
+err_assert_reset:
+	reset_control_assert(priv->reset);
+
 err_disable_clks:
 	clk_bulk_disable_unprepare(priv->drvdata->num_clks,
 				   priv->drvdata->clks);
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * dwc3-of-simple.c - OF glue layer for simple integrations
  *
- * Copyright (c) 2015 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (c) 2015 Texas Instruments Incorporated - https://www.ti.com
  *
  * Author: Felipe Balbi <balbi@ti.com>
  *
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * dwc3-omap.c - OMAP Specific Glue layer
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -457,8 +457,6 @@ static int dwc3_omap_probe(struct platform_device *pdev)
 	int			ret;
 	int			irq;
 
-	u32			reg;
-
 	void __iomem		*base;
 
 	if (!node) {
@@ -503,9 +501,6 @@ static int dwc3_omap_probe(struct platform_device *pdev)
 	dwc3_omap_map_offset(omap);
 	dwc3_omap_set_utmi_mode(omap);
 
-	/* check the DMA Status */
-	reg = dwc3_omap_readl(omap->base, USBOTGSS_SYSCONFIG);
-
 	ret = dwc3_omap_extcon_register(omap);
 	if (ret < 0)
 		goto err1;
@@ -2,7 +2,7 @@
 /**
  * dwc3-pci.c - PCI Specific glue layer
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -540,16 +540,6 @@ static int dwc3_qcom_of_register_core(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct dwc3_acpi_pdata sdm845_acpi_pdata = {
-	.qscratch_base_offset = SDM845_QSCRATCH_BASE_OFFSET,
-	.qscratch_base_size = SDM845_QSCRATCH_SIZE,
-	.dwc3_core_base_size = SDM845_DWC3_CORE_SIZE,
-	.hs_phy_irq_index = 1,
-	.dp_hs_phy_irq_index = 4,
-	.dm_hs_phy_irq_index = 3,
-	.ss_phy_irq_index = 2
-};
-
 static int dwc3_qcom_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
@@ -758,11 +748,23 @@ static const struct of_device_id dwc3_qcom_of_match[] = {
 };
 MODULE_DEVICE_TABLE(of, dwc3_qcom_of_match);
 
+#ifdef CONFIG_ACPI
+static const struct dwc3_acpi_pdata sdm845_acpi_pdata = {
+	.qscratch_base_offset = SDM845_QSCRATCH_BASE_OFFSET,
+	.qscratch_base_size = SDM845_QSCRATCH_SIZE,
+	.dwc3_core_base_size = SDM845_DWC3_CORE_SIZE,
+	.hs_phy_irq_index = 1,
+	.dp_hs_phy_irq_index = 4,
+	.dm_hs_phy_irq_index = 3,
+	.ss_phy_irq_index = 2
+};
+
 static const struct acpi_device_id dwc3_qcom_acpi_match[] = {
 	{ "QCOM2430", (unsigned long)&sdm845_acpi_pdata },
 	{ },
 };
 MODULE_DEVICE_TABLE(acpi, dwc3_qcom_acpi_match);
+#endif
 
 static struct platform_driver dwc3_qcom_driver = {
 	.probe		= dwc3_qcom_probe,
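Moving sdm845_acpi_pdata under the CONFIG_ACPI guard keeps it next to the only table that references it, so builds without ACPI never carry (or warn about) the unused data. A generic, hypothetical sketch of that guard pattern:

	#include <linux/acpi.h>
	#include <linux/mod_devicetable.h>
	#include <linux/module.h>
	#include <linux/platform_device.h>

	struct example_pdata { int variant; };	/* hypothetical per-SoC data */

	#ifdef CONFIG_ACPI
	/* Data only referenced from the ACPI table lives inside the same guard,
	 * so !CONFIG_ACPI builds never see an unused static. */
	static const struct example_pdata example_acpi_pdata = { .variant = 1 };

	static const struct acpi_device_id example_acpi_match[] = {
		{ "ABCD1234", (kernel_ulong_t)&example_acpi_pdata },	/* made-up HID */
		{ }
	};
	MODULE_DEVICE_TABLE(acpi, example_acpi_match);
	#endif

	static struct platform_driver example_driver = {
		.driver = {
			.name = "example",
			/* ACPI_PTR() evaluates to NULL when CONFIG_ACPI is off */
			.acpi_match_table = ACPI_PTR(example_acpi_match),
		},
	};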
@@ -206,8 +206,8 @@ static int st_dwc3_probe(struct platform_device *pdev)
 	if (!dwc3_data)
 		return -ENOMEM;
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg-glue");
-	dwc3_data->glue_base = devm_ioremap_resource(dev, res);
+	dwc3_data->glue_base =
+		devm_platform_ioremap_resource_byname(pdev, "reg-glue");
 	if (IS_ERR(dwc3_data->glue_base))
 		return PTR_ERR(dwc3_data->glue_base);
 
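This is the two-calls-into-one conversion applied across several drivers in this series: devm_platform_ioremap_resource_byname() combines platform_get_resource_byname() and devm_ioremap_resource(). A hypothetical before/after fragment (resource name made up):

	#include <linux/err.h>
	#include <linux/io.h>
	#include <linux/platform_device.h>

	/* Hypothetical probe fragment; "regs" is a made-up resource name. */
	static int example_probe(struct platform_device *pdev)
	{
		void __iomem *base;

		/*
		 * Old two-step form:
		 *   res  = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
		 *   base = devm_ioremap_resource(&pdev->dev, res);
		 */
		base = devm_platform_ioremap_resource_byname(pdev, "regs");
		if (IS_ERR(base))
			return PTR_ERR(base);

		return 0;
	}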
@@ -2,7 +2,7 @@
 /*
  * ep0.c - DesignWare USB3 DRD Controller Endpoint 0 Handling
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -425,11 +425,11 @@ static int dwc3_ep0_handle_test(struct dwc3 *dwc, enum usb_device_state state,
 		return -EINVAL;
 
 	switch (wIndex >> 8) {
-	case TEST_J:
-	case TEST_K:
-	case TEST_SE0_NAK:
-	case TEST_PACKET:
-	case TEST_FORCE_EN:
+	case USB_TEST_J:
+	case USB_TEST_K:
+	case USB_TEST_SE0_NAK:
+	case USB_TEST_PACKET:
+	case USB_TEST_FORCE_ENABLE:
 		dwc->test_mode_nr = wIndex >> 8;
 		dwc->test_mode = true;
 		break;
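The switch keys on wIndex >> 8 because, for SET_FEATURE(TEST_MODE), the selector is carried in the high byte of wIndex. An illustrative fragment (not taken from the driver):

	#include <linux/usb/ch9.h>

	/* Illustrative only: extract the selector from a SET_FEATURE(TEST_MODE)
	 * control request. */
	static u8 example_test_selector(const struct usb_ctrlrequest *ctrl)
	{
		/*
		 * High byte of wIndex carries the selector (USB_TEST_J ...
		 * USB_TEST_FORCE_ENABLE); for a device-directed request the
		 * low byte is zero (hubs put the port number there instead).
		 */
		return le16_to_cpu(ctrl->wIndex) >> 8;
	}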
@@ -2,7 +2,7 @@
 /*
  * gadget.c - DesignWare USB3 DRD Controller Gadget Framework Link
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -46,11 +46,11 @@ int dwc3_gadget_set_test_mode(struct dwc3 *dwc, int mode)
 	reg &= ~DWC3_DCTL_TSTCTRL_MASK;
 
 	switch (mode) {
-	case TEST_J:
-	case TEST_K:
-	case TEST_SE0_NAK:
-	case TEST_PACKET:
-	case TEST_FORCE_EN:
+	case USB_TEST_J:
+	case USB_TEST_K:
+	case USB_TEST_SE0_NAK:
+	case USB_TEST_PACKET:
+	case USB_TEST_FORCE_ENABLE:
 		reg |= mode << 1;
 		break;
 	default:
@@ -1403,7 +1403,7 @@ static int dwc3_gadget_start_isoc_quirk(struct dwc3_ep *dep)
 	 * Check if we can start isoc transfer on the next interval or
 	 * 4 uframes in the future with BIT[15:14] as dep->combo_num
 	 */
-	test_frame_number = dep->frame_number & 0x3fff;
+	test_frame_number = dep->frame_number & DWC3_FRNUMBER_MASK;
 	test_frame_number |= dep->combo_num << 14;
 	test_frame_number += max_t(u32, 4, dep->interval);
 
@@ -1450,7 +1450,7 @@ static int dwc3_gadget_start_isoc_quirk(struct dwc3_ep *dep)
 	else if (test0 && test1)
 		dep->combo_num = 0;
 
-	dep->frame_number &= 0x3fff;
+	dep->frame_number &= DWC3_FRNUMBER_MASK;
 	dep->frame_number |= dep->combo_num << 14;
 	dep->frame_number += max_t(u32, 4, dep->interval);
 
@@ -1463,6 +1463,7 @@ static int dwc3_gadget_start_isoc_quirk(struct dwc3_ep *dep)
 
 static int __dwc3_gadget_start_isoc(struct dwc3_ep *dep)
 {
+	const struct usb_endpoint_descriptor *desc = dep->endpoint.desc;
 	struct dwc3 *dwc = dep->dwc;
 	int ret;
 	int i;
@@ -1480,6 +1481,27 @@ static int __dwc3_gadget_start_isoc(struct dwc3_ep *dep)
 		return dwc3_gadget_start_isoc_quirk(dep);
 	}
 
+	if (desc->bInterval <= 14 &&
+	    dwc->gadget.speed >= USB_SPEED_HIGH) {
+		u32 frame = __dwc3_gadget_get_frame(dwc);
+		bool rollover = frame <
+				(dep->frame_number & DWC3_FRNUMBER_MASK);
+
+		/*
+		 * frame_number is set from XferNotReady and may be already
+		 * out of date. DSTS only provides the lower 14 bit of the
+		 * current frame number. So add the upper two bits of
+		 * frame_number and handle a possible rollover.
+		 * This will provide the correct frame_number unless more than
+		 * rollover has happened since XferNotReady.
+		 */
+
+		dep->frame_number = (dep->frame_number & ~DWC3_FRNUMBER_MASK) |
+				    frame;
+		if (rollover)
+			dep->frame_number += BIT(14);
+	}
+
 	for (i = 0; i < DWC3_ISOC_MAX_RETRIES; i++) {
 		dep->frame_number = DWC3_ALIGN_FRAME(dep, i + 1);
 
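The wrap-around correction above is easier to follow with concrete numbers; a worked example under the same masking scheme (values invented for illustration):

	/*
	 * Worked example of the rollover correction (illustrative values):
	 *
	 *   dep->frame_number (stored)      = 0x7ffd   -> lower 14 bits = 0x3ffd
	 *   __dwc3_gadget_get_frame() (now) = 0x0002   -> only 14 bits from DSTS
	 *
	 *   frame (0x0002) < stored lower bits (0x3ffd)  => the 14-bit counter wrapped
	 *
	 *   merged    = (0x7ffd & ~0x3fff) | 0x0002 = 0x4002
	 *   corrected = merged + BIT(14)            = 0x8002
	 *
	 * Without the rollover check the result would stay at 0x4002, i.e.
	 * roughly one full 14-bit period in the past.
	 */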
@@ -2716,7 +2738,9 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
 	if (dep->flags & DWC3_EP_END_TRANSFER_PENDING)
 		goto out;
 
-	if (status == -EXDEV && list_empty(&dep->started_list))
+	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+	    list_empty(&dep->started_list) &&
+	    (list_empty(&dep->pending_list) || status == -EXDEV))
 		dwc3_stop_active_transfer(dep, true, true);
 	else if (dwc3_gadget_ep_should_continue(dep))
 		if (__dwc3_gadget_kick_transfer(dep) == 0)
@@ -2,7 +2,7 @@
 /*
  * gadget.h - DesignWare USB3 DRD Gadget Header
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
@@ -54,6 +54,8 @@ struct dwc3;
 /* U2 Device exit Latency */
 #define DWC3_DEFAULT_U2_DEV_EXIT_LAT	0x1FF	/* Less then 511 microsec */
 
+/* Frame/Microframe Number Mask */
+#define DWC3_FRNUMBER_MASK	0x3fff
 /* -------------------------------------------------------------------------- */
 
 #define to_dwc3_request(r)	(container_of(r, struct dwc3_request, request))
@@ -2,7 +2,7 @@
 /*
  * host.c - DesignWare USB3 DRD Controller Host Glue
  *
- * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  */
@@ -2,7 +2,7 @@
 /**
  * io.h - DesignWare USB3 DRD IO Header
  *
- * Copyright (C) 2010-2011 Texas Instruments Incorporated - http://www.ti.com
+ * Copyright (C) 2010-2011 Texas Instruments Incorporated - https://www.ti.com
  *
  * Authors: Felipe Balbi <balbi@ti.com>,
  *	    Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Some files were not shown because too many files have changed in this diff.