Merge branch 'Thunderbolt-networking'
Mika Westerberg says:

====================
Thunderbolt networking

In addition to tunneling PCIe, Display Port and USB traffic, Thunderbolt
allows connecting two hosts (domains) over a Thunderbolt cable. It is
possible to tunnel arbitrary data packets over such a connection using
the high-speed DMA rings available in the Thunderbolt host controller.

In order to discover which Thunderbolt services the other host supports,
there is a software protocol running on top of the automatically
configured control channel (ring 0). This protocol is called the XDomain
discovery protocol and it uses XDomain properties to describe the host
(domain) and the services it supports. Once both sides have agreed on
what services are supported, they can enable high-speed DMA rings to
transfer data over the cable.

This series adds support for the XDomain protocol so that we expose each
remote connection as a Thunderbolt XDomain device and each service as a
Thunderbolt service device. On top of that we create an API that allows
writing drivers for these services, and finally we provide an example
Thunderbolt service driver that creates a virtual ethernet interface
that allows tunneling networking packets over a Thunderbolt cable. The
API could be used for creating other future Thunderbolt services, such
as tunneling SCSI over Thunderbolt, for example.

The XDomain protocol and networking support is also available in macOS
and Windows, so this makes it possible to connect Linux to macOS and
Windows as well.

The patches are based on a previous Thunderbolt networking patch series
by Amir Levy and Michael Jamet, which can be found here:

  https://lwn.net/Articles/705998/

The main difference to that patch series is that we have the XDomain
protocol running in the kernel now, so there is no need for a separate
userspace daemon.

Note this does not affect the existing functionality, so security levels
and NVM firmware upgrade continue to work as before (with the small
exception that now sysfs also shows the XDomain connections and services
in addition to normal Thunderbolt devices). It is also possible to
connect up to 5 Thunderbolt devices and then another host, and the
network driver works exactly the same.

This is the third version of the patch series. The previous versions can
be found here:

  v2: https://lkml.org/lkml/2017/9/25/225
  v1: https://lwn.net/Articles/734019/

Changes from the v2:

  * Add comment regarding calculation of interrupt throttling value
  * Add UUIDs as strings in comments on top of each declaration
  * Add a patch removing __packed from existing ICM messages. They are
    all 32-bit aligned and should pack fine without the __packed.
  * Move adding MAINTAINERS entries to separate patches
  * Added Michael and Yehezkel to be maintainers of the network driver
  * Remove __packed from the new ICM messages. They should pack fine as
    well without it.
  * Call register_netdev() after all other initialization is done in the
    network driver.
  * Use build_skb() instead of copying. We allocate an order 1 page here
    to leave room for the SKB shared info required by build_skb().
    However, we do not leave room for full NET_SKB_PAD because the NHI
    hardware does not cope well if a frame crosses a 4kB boundary.
    According to comments in __build_skb() that should still be fine.
  * Added Reviewed-by tag from Andy.
Changes from the v1:

  * Add include/linux/thunderbolt.h to MAINTAINERS
  * Correct Linux version and date of new sysfs entries in
    Documentation/ABI/testing/sysfs-bus-thunderbolt
  * Move network driver from drivers/thunderbolt/net.c to
    drivers/net/thunderbolt.c and update it to follow coding style in
    drivers/net/*.
  * Add MAINTAINERS entry for the network driver
  * Minor cleanups

In case someone wants to try this out, the last patch adds documentation
on how the networking driver can be used. In short, if you connect Linux
to a macOS or Windows host, everything is done automatically (as those
systems have the networking service enabled by default). For a Linux to
Linux connection one host needs to load the networking driver first (so
that the other side can locate the networking service and load the
corresponding driver).
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
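To give an idea of what the new service API looks like from a driver's
point of view, here is a rough sketch of a hypothetical service driver
built on the interfaces this series adds in <linux/thunderbolt.h>
(struct tb_service_driver, tb_register_service_driver() and the tbsvc
id-table matching). The "example" protocol key, the driver name and the
empty probe/remove bodies are placeholders, not part of the series:

/*
 * Minimal sketch of a Thunderbolt service driver.  The per-service
 * setup that a real driver would do (allocating DMA rings, state) is
 * only hinted at in comments.
 */
#include <linux/module.h>
#include <linux/thunderbolt.h>

static int example_probe(struct tb_service *svc,
			 const struct tb_service_id *id)
{
	/* Allocate per-service state and set up DMA rings here */
	return 0;
}

static void example_remove(struct tb_service *svc)
{
	/* Stop rings and release the state allocated in probe */
}

/* Bind to the remote XDomain property directory with key "example", id 1 */
static const struct tb_service_id example_ids[] = {
	{ TB_SERVICE("example", 1) },
	{ },
};
MODULE_DEVICE_TABLE(tbsvc, example_ids);

static struct tb_service_driver example_driver = {
	.driver = {
		.owner = THIS_MODULE,
		.name = "example-service",
	},
	.probe = example_probe,
	.remove = example_remove,
	.id_table = example_ids,
};

static int __init example_init(void)
{
	return tb_register_service_driver(&example_driver);
}
module_init(example_init);

static void __exit example_exit(void)
{
	tb_unregister_service_driver(&example_driver);
}
module_exit(example_exit);

MODULE_LICENSE("GPL");

Once such a driver is registered, the Thunderbolt bus core added in this
series matches it against the services exposed by the remote host, and
module autoloading works through the tbsvc modalias described in the
sysfs documentation below.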
commit c4b3630aff
@@ -110,3 +110,51 @@ Description:	When new NVM image is written to the non-active NVM
		is directly the status value from the DMA configuration
		based mailbox before the device is power cycled. Writing
		0 here clears the status.

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/key
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	This contains name of the property directory the XDomain
		service exposes. This entry describes the protocol in
		question. Following directories are already reserved by
		the Apple XDomain specification:

		network:  IP/ethernet over Thunderbolt
		targetdm: Target disk mode protocol over Thunderbolt
		extdisp:  External display mode protocol over Thunderbolt

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/modalias
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	Stores the same MODALIAS value emitted by uevent for
		the XDomain service. Format: tbtsvc:kSpNvNrN

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcid
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	This contains XDomain protocol identifier the XDomain
		service supports.

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcvers
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	This contains XDomain protocol version the XDomain
		service supports.

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcrevs
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	This contains XDomain software version the XDomain
		service supports.

What:		/sys/bus/thunderbolt/devices/<xdomain>.<service>/prtcstns
Date:		Jan 2018
KernelVersion:	4.15
Contact:	thunderbolt-software@lists.01.org
Description:	This contains XDomain service specific settings as
		bitmask. Format: %x
@@ -197,3 +197,27 @@ information is missing.

To recover from this mode, one needs to flash a valid NVM image to the
host controller in the same way it is done in the previous chapter.

Networking over Thunderbolt cable
---------------------------------
Thunderbolt technology allows software communication across two hosts
connected by a Thunderbolt cable.

It is possible to tunnel any kind of traffic over Thunderbolt link but
currently we only support Apple ThunderboltIP protocol.

If the other host is running Windows or macOS only thing you need to
do is to connect Thunderbolt cable between the two hosts, the
``thunderbolt-net`` is loaded automatically. If the other host is also
Linux you should load ``thunderbolt-net`` manually on one host (it does
not matter which one)::

  # modprobe thunderbolt-net

This triggers module load on the other host automatically. If the driver
is built-in to the kernel image, there is no need to do anything.

The driver will create one virtual ethernet interface per Thunderbolt
port which are named like ``thunderbolt0`` and so on. From this point
you can either use standard userspace tools like ``ifconfig`` to
configure the interface or let your GUI to handle it automatically.
@@ -13278,6 +13278,15 @@ M:	Mika Westerberg <mika.westerberg@linux.intel.com>
M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
S:	Maintained
F:	drivers/thunderbolt/
F:	include/linux/thunderbolt.h

THUNDERBOLT NETWORK DRIVER
M:	Michael Jamet <michael.jamet@intel.com>
M:	Mika Westerberg <mika.westerberg@linux.intel.com>
M:	Yehezkel Bernat <yehezkel.bernat@intel.com>
L:	netdev@vger.kernel.org
S:	Maintained
F:	drivers/net/thunderbolt.c

THUNDERX GPIO DRIVER
M:	David Daney <david.daney@cavium.com>
@@ -483,6 +483,18 @@ config FUJITSU_ES
	  This driver provides support for Extended Socket network device
	  on Extended Partitioning of FUJITSU PRIMEQUEST 2000 E2 series.

config THUNDERBOLT_NET
	tristate "Networking over Thunderbolt cable"
	depends on THUNDERBOLT && INET
	help
	  Select this if you want to create network between two
	  computers over a Thunderbolt cable. The driver supports Apple
	  ThunderboltIP protocol and allows communication with any host
	  supporting the same protocol including Windows and macOS.

	  To compile this driver as a module, choose M here. The module will
	  be called thunderbolt-net.

source "drivers/net/hyperv/Kconfig"

endif # NETDEVICES
@@ -74,3 +74,6 @@ obj-$(CONFIG_HYPERV_NET) += hyperv/
obj-$(CONFIG_NTB_NETDEV) += ntb_netdev.o

obj-$(CONFIG_FUJITSU_ES) += fjes/

thunderbolt-net-y += thunderbolt.o
obj-$(CONFIG_THUNDERBOLT_NET) += thunderbolt-net.o
drivers/net/thunderbolt.c (new file, 1362 lines; diff suppressed because it is too large)
@@ -1,3 +1,3 @@
obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o
@@ -289,20 +289,6 @@ static void tb_cfg_print_error(struct tb_ctl *ctl,
	}
}

static void cpu_to_be32_array(__be32 *dst, const u32 *src, size_t len)
{
	int i;
	for (i = 0; i < len; i++)
		dst[i] = cpu_to_be32(src[i]);
}

static void be32_to_cpu_array(u32 *dst, __be32 *src, size_t len)
{
	int i;
	for (i = 0; i < len; i++)
		dst[i] = be32_to_cpu(src[i]);
}

static __be32 tb_crc(const void *data, size_t len)
{
	return cpu_to_be32(~__crc32c_le(~0, data, len));
@@ -373,7 +359,7 @@ static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
	cpu_to_be32_array(pkg->buffer, data, len / 4);
	*(__be32 *) (pkg->buffer + len) = tb_crc(pkg->buffer, len);

	res = ring_tx(ctl->tx, &pkg->frame);
	res = tb_ring_tx(ctl->tx, &pkg->frame);
	if (res) /* ring is stopped */
		tb_ctl_pkg_free(pkg);
	return res;
@@ -382,15 +368,15 @@ static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
/**
 * tb_ctl_handle_event() - acknowledge a plug event, invoke ctl->callback
 */
static void tb_ctl_handle_event(struct tb_ctl *ctl, enum tb_cfg_pkg_type type,
static bool tb_ctl_handle_event(struct tb_ctl *ctl, enum tb_cfg_pkg_type type,
				struct ctl_pkg *pkg, size_t size)
{
	ctl->callback(ctl->callback_data, type, pkg->buffer, size);
	return ctl->callback(ctl->callback_data, type, pkg->buffer, size);
}

static void tb_ctl_rx_submit(struct ctl_pkg *pkg)
{
	ring_rx(pkg->ctl->rx, &pkg->frame); /*
	tb_ring_rx(pkg->ctl->rx, &pkg->frame); /*
					       * We ignore failures during stop.
					       * All rx packets are referenced
					       * from ctl->rx_packets, so we do
@@ -458,6 +444,8 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
		break;

	case TB_CFG_PKG_EVENT:
	case TB_CFG_PKG_XDOMAIN_RESP:
	case TB_CFG_PKG_XDOMAIN_REQ:
		if (*(__be32 *)(pkg->buffer + frame->size) != crc32) {
			tb_ctl_err(pkg->ctl,
				   "RX: checksum mismatch, dropping packet\n");
@@ -465,8 +453,9 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
		}
		/* Fall through */
	case TB_CFG_PKG_ICM_EVENT:
		tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size);
		goto rx;
		if (tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size))
			goto rx;
		break;

	default:
		break;
@@ -625,11 +614,12 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data)
	if (!ctl->frame_pool)
		goto err;

	ctl->tx = ring_alloc_tx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
	ctl->tx = tb_ring_alloc_tx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
	if (!ctl->tx)
		goto err;

	ctl->rx = ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND);
	ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff,
				   0xffff, NULL, NULL);
	if (!ctl->rx)
		goto err;

@@ -662,9 +652,9 @@ void tb_ctl_free(struct tb_ctl *ctl)
		return;

	if (ctl->rx)
		ring_free(ctl->rx);
		tb_ring_free(ctl->rx);
	if (ctl->tx)
		ring_free(ctl->tx);
		tb_ring_free(ctl->tx);

	/* free RX packets */
	for (i = 0; i < TB_CTL_RX_PKG_COUNT; i++)
@@ -683,8 +673,8 @@ void tb_ctl_start(struct tb_ctl *ctl)
{
	int i;
	tb_ctl_info(ctl, "control channel starting...\n");
	ring_start(ctl->tx); /* is used to ack hotplug packets, start first */
	ring_start(ctl->rx);
	tb_ring_start(ctl->tx); /* is used to ack hotplug packets, start first */
	tb_ring_start(ctl->rx);
	for (i = 0; i < TB_CTL_RX_PKG_COUNT; i++)
		tb_ctl_rx_submit(ctl->rx_packets[i]);

@@ -705,8 +695,8 @@ void tb_ctl_stop(struct tb_ctl *ctl)
	ctl->running = false;
	mutex_unlock(&ctl->request_queue_lock);

	ring_stop(ctl->rx);
	ring_stop(ctl->tx);
	tb_ring_stop(ctl->rx);
	tb_ring_stop(ctl->tx);

	if (!list_empty(&ctl->request_queue))
		tb_ctl_WARN(ctl, "dangling request in request_queue\n");
@@ -8,6 +8,7 @@
#define _TB_CFG

#include <linux/kref.h>
#include <linux/thunderbolt.h>

#include "nhi.h"
#include "tb_msgs.h"
@@ -15,7 +16,7 @@
/* control channel */
struct tb_ctl;

typedef void (*event_cb)(void *data, enum tb_cfg_pkg_type type,
typedef bool (*event_cb)(void *data, enum tb_cfg_pkg_type type,
			 const void *buf, size_t size);

struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data);
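The callback now tells the control layer whether it consumed the packet;
returning true makes tb_ctl_rx_callback() resubmit the RX buffer right
away. As a rough sketch of a handler that follows this contract (it
essentially mirrors the tb_domain_event_cb() change further down in this
merge; the function name is illustrative only):

/*
 * Illustrative event callback matching the new bool-returning event_cb
 * typedef.  XDomain request/response packets are routed to the XDomain
 * protocol layer; everything else goes to the connection manager.
 */
static bool example_event_cb(void *data, enum tb_cfg_pkg_type type,
			     const void *buf, size_t size)
{
	struct tb *tb = data;

	if (!tb->cm_ops->handle_event) {
		tb_warn(tb, "domain does not have event handler\n");
		return true;
	}

	switch (type) {
	case TB_CFG_PKG_XDOMAIN_REQ:
	case TB_CFG_PKG_XDOMAIN_RESP:
		/* Queued for XDomain protocol work; buffer can be reused */
		return tb_xdomain_handle_request(tb, type, buf, size);
	default:
		tb->cm_ops->handle_event(tb, type, buf, size);
		return true;	/* consumed; control layer resubmits the frame */
	}
}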
@ -20,6 +20,98 @@
|
||||
|
||||
static DEFINE_IDA(tb_domain_ida);
|
||||
|
||||
static bool match_service_id(const struct tb_service_id *id,
|
||||
const struct tb_service *svc)
|
||||
{
|
||||
if (id->match_flags & TBSVC_MATCH_PROTOCOL_KEY) {
|
||||
if (strcmp(id->protocol_key, svc->key))
|
||||
return false;
|
||||
}
|
||||
|
||||
if (id->match_flags & TBSVC_MATCH_PROTOCOL_ID) {
|
||||
if (id->protocol_id != svc->prtcid)
|
||||
return false;
|
||||
}
|
||||
|
||||
if (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {
|
||||
if (id->protocol_version != svc->prtcvers)
|
||||
return false;
|
||||
}
|
||||
|
||||
if (id->match_flags & TBSVC_MATCH_PROTOCOL_VERSION) {
|
||||
if (id->protocol_revision != svc->prtcrevs)
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static const struct tb_service_id *__tb_service_match(struct device *dev,
|
||||
struct device_driver *drv)
|
||||
{
|
||||
struct tb_service_driver *driver;
|
||||
const struct tb_service_id *ids;
|
||||
struct tb_service *svc;
|
||||
|
||||
svc = tb_to_service(dev);
|
||||
if (!svc)
|
||||
return NULL;
|
||||
|
||||
driver = container_of(drv, struct tb_service_driver, driver);
|
||||
if (!driver->id_table)
|
||||
return NULL;
|
||||
|
||||
for (ids = driver->id_table; ids->match_flags != 0; ids++) {
|
||||
if (match_service_id(ids, svc))
|
||||
return ids;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int tb_service_match(struct device *dev, struct device_driver *drv)
|
||||
{
|
||||
return !!__tb_service_match(dev, drv);
|
||||
}
|
||||
|
||||
static int tb_service_probe(struct device *dev)
|
||||
{
|
||||
struct tb_service *svc = tb_to_service(dev);
|
||||
struct tb_service_driver *driver;
|
||||
const struct tb_service_id *id;
|
||||
|
||||
driver = container_of(dev->driver, struct tb_service_driver, driver);
|
||||
id = __tb_service_match(dev, &driver->driver);
|
||||
|
||||
return driver->probe(svc, id);
|
||||
}
|
||||
|
||||
static int tb_service_remove(struct device *dev)
|
||||
{
|
||||
struct tb_service *svc = tb_to_service(dev);
|
||||
struct tb_service_driver *driver;
|
||||
|
||||
driver = container_of(dev->driver, struct tb_service_driver, driver);
|
||||
if (driver->remove)
|
||||
driver->remove(svc);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void tb_service_shutdown(struct device *dev)
|
||||
{
|
||||
struct tb_service_driver *driver;
|
||||
struct tb_service *svc;
|
||||
|
||||
svc = tb_to_service(dev);
|
||||
if (!svc || !dev->driver)
|
||||
return;
|
||||
|
||||
driver = container_of(dev->driver, struct tb_service_driver, driver);
|
||||
if (driver->shutdown)
|
||||
driver->shutdown(svc);
|
||||
}
|
||||
|
||||
static const char * const tb_security_names[] = {
|
||||
[TB_SECURITY_NONE] = "none",
|
||||
[TB_SECURITY_USER] = "user",
|
||||
@ -52,6 +144,10 @@ static const struct attribute_group *domain_attr_groups[] = {
|
||||
|
||||
struct bus_type tb_bus_type = {
|
||||
.name = "thunderbolt",
|
||||
.match = tb_service_match,
|
||||
.probe = tb_service_probe,
|
||||
.remove = tb_service_remove,
|
||||
.shutdown = tb_service_shutdown,
|
||||
};
|
||||
|
||||
static void tb_domain_release(struct device *dev)
|
||||
@ -128,17 +224,26 @@ err_free:
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
|
||||
static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
|
||||
const void *buf, size_t size)
|
||||
{
|
||||
struct tb *tb = data;
|
||||
|
||||
if (!tb->cm_ops->handle_event) {
|
||||
tb_warn(tb, "domain does not have event handler\n");
|
||||
return;
|
||||
return true;
|
||||
}
|
||||
|
||||
tb->cm_ops->handle_event(tb, type, buf, size);
|
||||
switch (type) {
|
||||
case TB_CFG_PKG_XDOMAIN_REQ:
|
||||
case TB_CFG_PKG_XDOMAIN_RESP:
|
||||
return tb_xdomain_handle_request(tb, type, buf, size);
|
||||
|
||||
default:
|
||||
tb->cm_ops->handle_event(tb, type, buf, size);
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -443,9 +548,92 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
|
||||
return tb->cm_ops->disconnect_pcie_paths(tb);
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_domain_approve_xdomain_paths() - Enable DMA paths for XDomain
|
||||
* @tb: Domain enabling the DMA paths
|
||||
* @xd: XDomain DMA paths are created to
|
||||
*
|
||||
* Calls connection manager specific method to enable DMA paths to the
|
||||
* XDomain in question.
|
||||
*
|
||||
* Return: 0% in case of success and negative errno otherwise. In
|
||||
* particular returns %-ENOTSUPP if the connection manager
|
||||
* implementation does not support XDomains.
|
||||
*/
|
||||
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
|
||||
{
|
||||
if (!tb->cm_ops->approve_xdomain_paths)
|
||||
return -ENOTSUPP;
|
||||
|
||||
return tb->cm_ops->approve_xdomain_paths(tb, xd);
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_domain_disconnect_xdomain_paths() - Disable DMA paths for XDomain
|
||||
* @tb: Domain disabling the DMA paths
|
||||
* @xd: XDomain whose DMA paths are disconnected
|
||||
*
|
||||
* Calls connection manager specific method to disconnect DMA paths to
|
||||
* the XDomain in question.
|
||||
*
|
||||
* Return: 0% in case of success and negative errno otherwise. In
|
||||
* particular returns %-ENOTSUPP if the connection manager
|
||||
* implementation does not support XDomains.
|
||||
*/
|
||||
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
|
||||
{
|
||||
if (!tb->cm_ops->disconnect_xdomain_paths)
|
||||
return -ENOTSUPP;
|
||||
|
||||
return tb->cm_ops->disconnect_xdomain_paths(tb, xd);
|
||||
}
|
||||
|
||||
static int disconnect_xdomain(struct device *dev, void *data)
|
||||
{
|
||||
struct tb_xdomain *xd;
|
||||
struct tb *tb = data;
|
||||
int ret = 0;
|
||||
|
||||
xd = tb_to_xdomain(dev);
|
||||
if (xd && xd->tb == tb)
|
||||
ret = tb_xdomain_disable_paths(xd);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_domain_disconnect_all_paths() - Disconnect all paths for the domain
|
||||
* @tb: Domain whose paths are disconnected
|
||||
*
|
||||
* This function can be used to disconnect all paths (PCIe, XDomain) for
|
||||
* example in preparation for host NVM firmware upgrade. After this is
|
||||
* called the paths cannot be established without resetting the switch.
|
||||
*
|
||||
* Return: %0 in case of success and negative errno otherwise.
|
||||
*/
|
||||
int tb_domain_disconnect_all_paths(struct tb *tb)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = tb_domain_disconnect_pcie_paths(tb);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return bus_for_each_dev(&tb_bus_type, NULL, tb, disconnect_xdomain);
|
||||
}
|
||||
|
||||
int tb_domain_init(void)
|
||||
{
|
||||
return bus_register(&tb_bus_type);
|
||||
int ret;
|
||||
|
||||
ret = tb_xdomain_init();
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = bus_register(&tb_bus_type);
|
||||
if (ret)
|
||||
tb_xdomain_exit();
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
void tb_domain_exit(void)
|
||||
@ -453,4 +641,5 @@ void tb_domain_exit(void)
|
||||
bus_unregister(&tb_bus_type);
|
||||
ida_destroy(&tb_domain_ida);
|
||||
tb_switch_exit();
|
||||
tb_xdomain_exit();
|
||||
}
|
||||
|
@ -60,6 +60,8 @@
|
||||
* @get_route: Find a route string for given switch
|
||||
* @device_connected: Handle device connected ICM message
|
||||
* @device_disconnected: Handle device disconnected ICM message
|
||||
* @xdomain_connected - Handle XDomain connected ICM message
|
||||
* @xdomain_disconnected - Handle XDomain disconnected ICM message
|
||||
*/
|
||||
struct icm {
|
||||
struct mutex request_lock;
|
||||
@ -74,6 +76,10 @@ struct icm {
|
||||
const struct icm_pkg_header *hdr);
|
||||
void (*device_disconnected)(struct tb *tb,
|
||||
const struct icm_pkg_header *hdr);
|
||||
void (*xdomain_connected)(struct tb *tb,
|
||||
const struct icm_pkg_header *hdr);
|
||||
void (*xdomain_disconnected)(struct tb *tb,
|
||||
const struct icm_pkg_header *hdr);
|
||||
};
|
||||
|
||||
struct icm_notification {
|
||||
@ -89,7 +95,10 @@ static inline struct tb *icm_to_tb(struct icm *icm)
|
||||
|
||||
static inline u8 phy_port_from_route(u64 route, u8 depth)
|
||||
{
|
||||
return tb_switch_phy_port_from_link(route >> ((depth - 1) * 8));
|
||||
u8 link;
|
||||
|
||||
link = depth ? route >> ((depth - 1) * 8) : route;
|
||||
return tb_phy_port_from_link(link);
|
||||
}
|
||||
|
||||
static inline u8 dual_link_from_link(u8 link)
|
||||
@ -320,6 +329,51 @@ static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int icm_fr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
|
||||
{
|
||||
struct icm_fr_pkg_approve_xdomain_response reply;
|
||||
struct icm_fr_pkg_approve_xdomain request;
|
||||
int ret;
|
||||
|
||||
memset(&request, 0, sizeof(request));
|
||||
request.hdr.code = ICM_APPROVE_XDOMAIN;
|
||||
request.link_info = xd->depth << ICM_LINK_INFO_DEPTH_SHIFT | xd->link;
|
||||
memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid));
|
||||
|
||||
request.transmit_path = xd->transmit_path;
|
||||
request.transmit_ring = xd->transmit_ring;
|
||||
request.receive_path = xd->receive_path;
|
||||
request.receive_ring = xd->receive_ring;
|
||||
|
||||
memset(&reply, 0, sizeof(reply));
|
||||
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
|
||||
1, ICM_TIMEOUT);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (reply.hdr.flags & ICM_FLAGS_ERROR)
|
||||
return -EIO;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int icm_fr_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd)
|
||||
{
|
||||
u8 phy_port;
|
||||
u8 cmd;
|
||||
|
||||
phy_port = tb_phy_port_from_link(xd->link);
|
||||
if (phy_port == 0)
|
||||
cmd = NHI_MAILBOX_DISCONNECT_PA;
|
||||
else
|
||||
cmd = NHI_MAILBOX_DISCONNECT_PB;
|
||||
|
||||
nhi_mailbox_cmd(tb->nhi, cmd, 1);
|
||||
usleep_range(10, 50);
|
||||
nhi_mailbox_cmd(tb->nhi, cmd, 2);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void remove_switch(struct tb_switch *sw)
|
||||
{
|
||||
struct tb_switch *parent_sw;
|
||||
@ -475,6 +529,141 @@ icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
|
||||
tb_switch_put(sw);
|
||||
}
|
||||
|
||||
static void remove_xdomain(struct tb_xdomain *xd)
|
||||
{
|
||||
struct tb_switch *sw;
|
||||
|
||||
sw = tb_to_switch(xd->dev.parent);
|
||||
tb_port_at(xd->route, sw)->xdomain = NULL;
|
||||
tb_xdomain_remove(xd);
|
||||
}
|
||||
|
||||
static void
|
||||
icm_fr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr)
|
||||
{
|
||||
const struct icm_fr_event_xdomain_connected *pkg =
|
||||
(const struct icm_fr_event_xdomain_connected *)hdr;
|
||||
struct tb_xdomain *xd;
|
||||
struct tb_switch *sw;
|
||||
u8 link, depth;
|
||||
bool approved;
|
||||
u64 route;
|
||||
|
||||
/*
|
||||
* After NVM upgrade adding root switch device fails because we
|
||||
* initiated reset. During that time ICM might still send
|
||||
* XDomain connected message which we ignore here.
|
||||
*/
|
||||
if (!tb->root_switch)
|
||||
return;
|
||||
|
||||
link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
|
||||
depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
|
||||
ICM_LINK_INFO_DEPTH_SHIFT;
|
||||
approved = pkg->link_info & ICM_LINK_INFO_APPROVED;
|
||||
|
||||
if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
|
||||
tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
|
||||
return;
|
||||
}
|
||||
|
||||
route = get_route(pkg->local_route_hi, pkg->local_route_lo);
|
||||
|
||||
xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);
|
||||
if (xd) {
|
||||
u8 xd_phy_port, phy_port;
|
||||
|
||||
xd_phy_port = phy_port_from_route(xd->route, xd->depth);
|
||||
phy_port = phy_port_from_route(route, depth);
|
||||
|
||||
if (xd->depth == depth && xd_phy_port == phy_port) {
|
||||
xd->link = link;
|
||||
xd->route = route;
|
||||
xd->is_unplugged = false;
|
||||
tb_xdomain_put(xd);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* If we find an existing XDomain connection remove it
|
||||
* now. We need to go through login handshake and
|
||||
* everything anyway to be able to re-establish the
|
||||
* connection.
|
||||
*/
|
||||
remove_xdomain(xd);
|
||||
tb_xdomain_put(xd);
|
||||
}
|
||||
|
||||
/*
|
||||
* Look if there already exists an XDomain in the same place
|
||||
* than the new one and in that case remove it because it is
|
||||
* most likely another host that got disconnected.
|
||||
*/
|
||||
xd = tb_xdomain_find_by_link_depth(tb, link, depth);
|
||||
if (!xd) {
|
||||
u8 dual_link;
|
||||
|
||||
dual_link = dual_link_from_link(link);
|
||||
if (dual_link)
|
||||
xd = tb_xdomain_find_by_link_depth(tb, dual_link,
|
||||
depth);
|
||||
}
|
||||
if (xd) {
|
||||
remove_xdomain(xd);
|
||||
tb_xdomain_put(xd);
|
||||
}
|
||||
|
||||
/*
|
||||
* If the user disconnected a switch during suspend and
|
||||
* connected another host to the same port, remove the switch
|
||||
* first.
|
||||
*/
|
||||
sw = get_switch_at_route(tb->root_switch, route);
|
||||
if (sw)
|
||||
remove_switch(sw);
|
||||
|
||||
sw = tb_switch_find_by_link_depth(tb, link, depth);
|
||||
if (!sw) {
|
||||
tb_warn(tb, "no switch exists at %u.%u, ignoring\n", link,
|
||||
depth);
|
||||
return;
|
||||
}
|
||||
|
||||
xd = tb_xdomain_alloc(sw->tb, &sw->dev, route,
|
||||
&pkg->local_uuid, &pkg->remote_uuid);
|
||||
if (!xd) {
|
||||
tb_switch_put(sw);
|
||||
return;
|
||||
}
|
||||
|
||||
xd->link = link;
|
||||
xd->depth = depth;
|
||||
|
||||
tb_port_at(route, sw)->xdomain = xd;
|
||||
|
||||
tb_xdomain_add(xd);
|
||||
tb_switch_put(sw);
|
||||
}
|
||||
|
||||
static void
|
||||
icm_fr_xdomain_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
|
||||
{
|
||||
const struct icm_fr_event_xdomain_disconnected *pkg =
|
||||
(const struct icm_fr_event_xdomain_disconnected *)hdr;
|
||||
struct tb_xdomain *xd;
|
||||
|
||||
/*
|
||||
* If the connection is through one or multiple devices, the
|
||||
* XDomain device is removed along with them so it is fine if we
|
||||
* cannot find it here.
|
||||
*/
|
||||
xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid);
|
||||
if (xd) {
|
||||
remove_xdomain(xd);
|
||||
tb_xdomain_put(xd);
|
||||
}
|
||||
}
|
||||
|
||||
static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
|
||||
{
|
||||
struct pci_dev *parent;
|
||||
@ -594,6 +783,12 @@ static void icm_handle_notification(struct work_struct *work)
|
||||
case ICM_EVENT_DEVICE_DISCONNECTED:
|
||||
icm->device_disconnected(tb, n->pkg);
|
||||
break;
|
||||
case ICM_EVENT_XDOMAIN_CONNECTED:
|
||||
icm->xdomain_connected(tb, n->pkg);
|
||||
break;
|
||||
case ICM_EVENT_XDOMAIN_DISCONNECTED:
|
||||
icm->xdomain_disconnected(tb, n->pkg);
|
||||
break;
|
||||
}
|
||||
|
||||
mutex_unlock(&tb->lock);
|
||||
@ -927,6 +1122,10 @@ static void icm_unplug_children(struct tb_switch *sw)
|
||||
|
||||
if (tb_is_upstream_port(port))
|
||||
continue;
|
||||
if (port->xdomain) {
|
||||
port->xdomain->is_unplugged = true;
|
||||
continue;
|
||||
}
|
||||
if (!port->remote)
|
||||
continue;
|
||||
|
||||
@ -943,6 +1142,13 @@ static void icm_free_unplugged_children(struct tb_switch *sw)
|
||||
|
||||
if (tb_is_upstream_port(port))
|
||||
continue;
|
||||
|
||||
if (port->xdomain && port->xdomain->is_unplugged) {
|
||||
tb_xdomain_remove(port->xdomain);
|
||||
port->xdomain = NULL;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!port->remote)
|
||||
continue;
|
||||
|
||||
@ -1009,8 +1215,10 @@ static int icm_start(struct tb *tb)
|
||||
tb->root_switch->no_nvm_upgrade = x86_apple_machine;
|
||||
|
||||
ret = tb_switch_add(tb->root_switch);
|
||||
if (ret)
|
||||
if (ret) {
|
||||
tb_switch_put(tb->root_switch);
|
||||
tb->root_switch = NULL;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
@ -1042,6 +1250,8 @@ static const struct tb_cm_ops icm_fr_ops = {
|
||||
.add_switch_key = icm_fr_add_switch_key,
|
||||
.challenge_switch_key = icm_fr_challenge_switch_key,
|
||||
.disconnect_pcie_paths = icm_disconnect_pcie_paths,
|
||||
.approve_xdomain_paths = icm_fr_approve_xdomain_paths,
|
||||
.disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths,
|
||||
};
|
||||
|
||||
struct tb *icm_probe(struct tb_nhi *nhi)
|
||||
@ -1064,6 +1274,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)
|
||||
icm->get_route = icm_fr_get_route;
|
||||
icm->device_connected = icm_fr_device_connected;
|
||||
icm->device_disconnected = icm_fr_device_disconnected;
|
||||
icm->xdomain_connected = icm_fr_xdomain_connected;
|
||||
icm->xdomain_disconnected = icm_fr_xdomain_disconnected;
|
||||
tb->cm_ops = &icm_fr_ops;
|
||||
break;
|
||||
|
||||
@ -1077,6 +1289,8 @@ struct tb *icm_probe(struct tb_nhi *nhi)
|
||||
icm->get_route = icm_ar_get_route;
|
||||
icm->device_connected = icm_fr_device_connected;
|
||||
icm->device_disconnected = icm_fr_device_disconnected;
|
||||
icm->xdomain_connected = icm_fr_xdomain_connected;
|
||||
icm->xdomain_disconnected = icm_fr_xdomain_disconnected;
|
||||
tb->cm_ops = &icm_fr_ops;
|
||||
break;
|
||||
}
|
||||
|
@ -21,6 +21,14 @@
|
||||
|
||||
#define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring")
|
||||
|
||||
/*
|
||||
* Used to enable end-to-end workaround for missing RX packets. Do not
|
||||
* use this ring for anything else.
|
||||
*/
|
||||
#define RING_E2E_UNUSED_HOPID 2
|
||||
/* HopIDs 0-7 are reserved by the Thunderbolt protocol */
|
||||
#define RING_FIRST_USABLE_HOPID 8
|
||||
|
||||
/*
|
||||
* Minimal number of vectors when we use MSI-X. Two for control channel
|
||||
* Rx/Tx and the rest four are for cross domain DMA paths.
|
||||
@ -206,8 +214,10 @@ static void ring_work(struct work_struct *work)
|
||||
struct tb_ring *ring = container_of(work, typeof(*ring), work);
|
||||
struct ring_frame *frame;
|
||||
bool canceled = false;
|
||||
unsigned long flags;
|
||||
LIST_HEAD(done);
|
||||
mutex_lock(&ring->lock);
|
||||
|
||||
spin_lock_irqsave(&ring->lock, flags);
|
||||
|
||||
if (!ring->running) {
|
||||
/* Move all frames to done and mark them as canceled. */
|
||||
@ -229,30 +239,14 @@ static void ring_work(struct work_struct *work)
|
||||
frame->eof = ring->descriptors[ring->tail].eof;
|
||||
frame->sof = ring->descriptors[ring->tail].sof;
|
||||
frame->flags = ring->descriptors[ring->tail].flags;
|
||||
if (frame->sof != 0)
|
||||
dev_WARN(&ring->nhi->pdev->dev,
|
||||
"%s %d got unexpected SOF: %#x\n",
|
||||
RING_TYPE(ring), ring->hop,
|
||||
frame->sof);
|
||||
/*
|
||||
* known flags:
|
||||
* raw not enabled, interupt not set: 0x2=0010
|
||||
* raw enabled: 0xa=1010
|
||||
* raw not enabled: 0xb=1011
|
||||
* partial frame (>MAX_FRAME_SIZE): 0xe=1110
|
||||
*/
|
||||
if (frame->flags != 0xa)
|
||||
dev_WARN(&ring->nhi->pdev->dev,
|
||||
"%s %d got unexpected flags: %#x\n",
|
||||
RING_TYPE(ring), ring->hop,
|
||||
frame->flags);
|
||||
}
|
||||
ring->tail = (ring->tail + 1) % ring->size;
|
||||
}
|
||||
ring_write_descriptors(ring);
|
||||
|
||||
invoke_callback:
|
||||
mutex_unlock(&ring->lock); /* allow callbacks to schedule new work */
|
||||
/* allow callbacks to schedule new work */
|
||||
spin_unlock_irqrestore(&ring->lock, flags);
|
||||
while (!list_empty(&done)) {
|
||||
frame = list_first_entry(&done, typeof(*frame), list);
|
||||
/*
|
||||
@ -260,29 +254,128 @@ invoke_callback:
|
||||
* Do not hold on to it.
|
||||
*/
|
||||
list_del_init(&frame->list);
|
||||
frame->callback(ring, frame, canceled);
|
||||
if (frame->callback)
|
||||
frame->callback(ring, frame, canceled);
|
||||
}
|
||||
}
|
||||
|
||||
int __ring_enqueue(struct tb_ring *ring, struct ring_frame *frame)
|
||||
int __tb_ring_enqueue(struct tb_ring *ring, struct ring_frame *frame)
|
||||
{
|
||||
unsigned long flags;
|
||||
int ret = 0;
|
||||
mutex_lock(&ring->lock);
|
||||
|
||||
spin_lock_irqsave(&ring->lock, flags);
|
||||
if (ring->running) {
|
||||
list_add_tail(&frame->list, &ring->queue);
|
||||
ring_write_descriptors(ring);
|
||||
} else {
|
||||
ret = -ESHUTDOWN;
|
||||
}
|
||||
mutex_unlock(&ring->lock);
|
||||
spin_unlock_irqrestore(&ring->lock, flags);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(__tb_ring_enqueue);
|
||||
|
||||
/**
|
||||
* tb_ring_poll() - Poll one completed frame from the ring
|
||||
* @ring: Ring to poll
|
||||
*
|
||||
* This function can be called when @start_poll callback of the @ring
|
||||
* has been called. It will read one completed frame from the ring and
|
||||
* return it to the caller. Returns %NULL if there is no more completed
|
||||
* frames.
|
||||
*/
|
||||
struct ring_frame *tb_ring_poll(struct tb_ring *ring)
|
||||
{
|
||||
struct ring_frame *frame = NULL;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&ring->lock, flags);
|
||||
if (!ring->running)
|
||||
goto unlock;
|
||||
if (ring_empty(ring))
|
||||
goto unlock;
|
||||
|
||||
if (ring->descriptors[ring->tail].flags & RING_DESC_COMPLETED) {
|
||||
frame = list_first_entry(&ring->in_flight, typeof(*frame),
|
||||
list);
|
||||
list_del_init(&frame->list);
|
||||
|
||||
if (!ring->is_tx) {
|
||||
frame->size = ring->descriptors[ring->tail].length;
|
||||
frame->eof = ring->descriptors[ring->tail].eof;
|
||||
frame->sof = ring->descriptors[ring->tail].sof;
|
||||
frame->flags = ring->descriptors[ring->tail].flags;
|
||||
}
|
||||
|
||||
ring->tail = (ring->tail + 1) % ring->size;
|
||||
}
|
||||
|
||||
unlock:
|
||||
spin_unlock_irqrestore(&ring->lock, flags);
|
||||
return frame;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_poll);
|
||||
|
||||
static void __ring_interrupt_mask(struct tb_ring *ring, bool mask)
|
||||
{
|
||||
int idx = ring_interrupt_index(ring);
|
||||
int reg = REG_RING_INTERRUPT_BASE + idx / 32 * 4;
|
||||
int bit = idx % 32;
|
||||
u32 val;
|
||||
|
||||
val = ioread32(ring->nhi->iobase + reg);
|
||||
if (mask)
|
||||
val &= ~BIT(bit);
|
||||
else
|
||||
val |= BIT(bit);
|
||||
iowrite32(val, ring->nhi->iobase + reg);
|
||||
}
|
||||
|
||||
/* Both @nhi->lock and @ring->lock should be held */
|
||||
static void __ring_interrupt(struct tb_ring *ring)
|
||||
{
|
||||
if (!ring->running)
|
||||
return;
|
||||
|
||||
if (ring->start_poll) {
|
||||
__ring_interrupt_mask(ring, false);
|
||||
ring->start_poll(ring->poll_data);
|
||||
} else {
|
||||
schedule_work(&ring->work);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_ring_poll_complete() - Re-start interrupt for the ring
|
||||
* @ring: Ring to re-start the interrupt
|
||||
*
|
||||
* This will re-start (unmask) the ring interrupt once the user is done
|
||||
* with polling.
|
||||
*/
|
||||
void tb_ring_poll_complete(struct tb_ring *ring)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&ring->nhi->lock, flags);
|
||||
spin_lock(&ring->lock);
|
||||
if (ring->start_poll)
|
||||
__ring_interrupt_mask(ring, false);
|
||||
spin_unlock(&ring->lock);
|
||||
spin_unlock_irqrestore(&ring->nhi->lock, flags);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_poll_complete);
|
||||
|
||||
static irqreturn_t ring_msix(int irq, void *data)
|
||||
{
|
||||
struct tb_ring *ring = data;
|
||||
|
||||
schedule_work(&ring->work);
|
||||
spin_lock(&ring->nhi->lock);
|
||||
spin_lock(&ring->lock);
|
||||
__ring_interrupt(ring);
|
||||
spin_unlock(&ring->lock);
|
||||
spin_unlock(&ring->nhi->lock);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
@ -320,30 +413,81 @@ static void ring_release_msix(struct tb_ring *ring)
|
||||
ring->irq = 0;
|
||||
}
|
||||
|
||||
static struct tb_ring *ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
|
||||
bool transmit, unsigned int flags)
|
||||
static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
spin_lock_irq(&nhi->lock);
|
||||
|
||||
if (ring->hop < 0) {
|
||||
unsigned int i;
|
||||
|
||||
/*
|
||||
* Automatically allocate HopID from the non-reserved
|
||||
* range 8 .. hop_count - 1.
|
||||
*/
|
||||
for (i = RING_FIRST_USABLE_HOPID; i < nhi->hop_count; i++) {
|
||||
if (ring->is_tx) {
|
||||
if (!nhi->tx_rings[i]) {
|
||||
ring->hop = i;
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
if (!nhi->rx_rings[i]) {
|
||||
ring->hop = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (ring->hop < 0 || ring->hop >= nhi->hop_count) {
|
||||
dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
|
||||
ret = -EINVAL;
|
||||
goto err_unlock;
|
||||
}
|
||||
if (ring->is_tx && nhi->tx_rings[ring->hop]) {
|
||||
dev_warn(&nhi->pdev->dev, "TX hop %d already allocated\n",
|
||||
ring->hop);
|
||||
ret = -EBUSY;
|
||||
goto err_unlock;
|
||||
} else if (!ring->is_tx && nhi->rx_rings[ring->hop]) {
|
||||
dev_warn(&nhi->pdev->dev, "RX hop %d already allocated\n",
|
||||
ring->hop);
|
||||
ret = -EBUSY;
|
||||
goto err_unlock;
|
||||
}
|
||||
|
||||
if (ring->is_tx)
|
||||
nhi->tx_rings[ring->hop] = ring;
|
||||
else
|
||||
nhi->rx_rings[ring->hop] = ring;
|
||||
|
||||
err_unlock:
|
||||
spin_unlock_irq(&nhi->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
|
||||
bool transmit, unsigned int flags,
|
||||
u16 sof_mask, u16 eof_mask,
|
||||
void (*start_poll)(void *),
|
||||
void *poll_data)
|
||||
{
|
||||
struct tb_ring *ring = NULL;
|
||||
dev_info(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
|
||||
transmit ? "TX" : "RX", hop, size);
|
||||
|
||||
mutex_lock(&nhi->lock);
|
||||
if (hop >= nhi->hop_count) {
|
||||
dev_WARN(&nhi->pdev->dev, "invalid hop: %d\n", hop);
|
||||
goto err;
|
||||
}
|
||||
if (transmit && nhi->tx_rings[hop]) {
|
||||
dev_WARN(&nhi->pdev->dev, "TX hop %d already allocated\n", hop);
|
||||
goto err;
|
||||
} else if (!transmit && nhi->rx_rings[hop]) {
|
||||
dev_WARN(&nhi->pdev->dev, "RX hop %d already allocated\n", hop);
|
||||
goto err;
|
||||
}
|
||||
/* Tx Ring 2 is reserved for E2E workaround */
|
||||
if (transmit && hop == RING_E2E_UNUSED_HOPID)
|
||||
return NULL;
|
||||
|
||||
ring = kzalloc(sizeof(*ring), GFP_KERNEL);
|
||||
if (!ring)
|
||||
goto err;
|
||||
return NULL;
|
||||
|
||||
mutex_init(&ring->lock);
|
||||
spin_lock_init(&ring->lock);
|
||||
INIT_LIST_HEAD(&ring->queue);
|
||||
INIT_LIST_HEAD(&ring->in_flight);
|
||||
INIT_WORK(&ring->work, ring_work);
|
||||
@ -353,55 +497,88 @@ static struct tb_ring *ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
|
||||
ring->is_tx = transmit;
|
||||
ring->size = size;
|
||||
ring->flags = flags;
|
||||
ring->sof_mask = sof_mask;
|
||||
ring->eof_mask = eof_mask;
|
||||
ring->head = 0;
|
||||
ring->tail = 0;
|
||||
ring->running = false;
|
||||
|
||||
if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND))
|
||||
goto err;
|
||||
ring->start_poll = start_poll;
|
||||
ring->poll_data = poll_data;
|
||||
|
||||
ring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,
|
||||
size * sizeof(*ring->descriptors),
|
||||
&ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
|
||||
if (!ring->descriptors)
|
||||
goto err;
|
||||
goto err_free_ring;
|
||||
|
||||
if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND))
|
||||
goto err_free_descs;
|
||||
|
||||
if (nhi_alloc_hop(nhi, ring))
|
||||
goto err_release_msix;
|
||||
|
||||
if (transmit)
|
||||
nhi->tx_rings[hop] = ring;
|
||||
else
|
||||
nhi->rx_rings[hop] = ring;
|
||||
mutex_unlock(&nhi->lock);
|
||||
return ring;
|
||||
|
||||
err:
|
||||
if (ring)
|
||||
mutex_destroy(&ring->lock);
|
||||
err_release_msix:
|
||||
ring_release_msix(ring);
|
||||
err_free_descs:
|
||||
dma_free_coherent(&ring->nhi->pdev->dev,
|
||||
ring->size * sizeof(*ring->descriptors),
|
||||
ring->descriptors, ring->descriptors_dma);
|
||||
err_free_ring:
|
||||
kfree(ring);
|
||||
mutex_unlock(&nhi->lock);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
|
||||
unsigned int flags)
|
||||
/**
|
||||
* tb_ring_alloc_tx() - Allocate DMA ring for transmit
|
||||
* @nhi: Pointer to the NHI the ring is to be allocated
|
||||
* @hop: HopID (ring) to allocate
|
||||
* @size: Number of entries in the ring
|
||||
* @flags: Flags for the ring
|
||||
*/
|
||||
struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
|
||||
unsigned int flags)
|
||||
{
|
||||
return ring_alloc(nhi, hop, size, true, flags);
|
||||
}
|
||||
|
||||
struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
|
||||
unsigned int flags)
|
||||
{
|
||||
return ring_alloc(nhi, hop, size, false, flags);
|
||||
return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
|
||||
|
||||
/**
|
||||
* ring_start() - enable a ring
|
||||
*
|
||||
* Must not be invoked in parallel with ring_stop().
|
||||
* tb_ring_alloc_rx() - Allocate DMA ring for receive
|
||||
* @nhi: Pointer to the NHI the ring is to be allocated
|
||||
* @hop: HopID (ring) to allocate. Pass %-1 for automatic allocation.
|
||||
* @size: Number of entries in the ring
|
||||
* @flags: Flags for the ring
|
||||
* @sof_mask: Mask of PDF values that start a frame
|
||||
* @eof_mask: Mask of PDF values that end a frame
|
||||
* @start_poll: If not %NULL the ring will call this function when an
|
||||
* interrupt is triggered and masked, instead of callback
|
||||
* in each Rx frame.
|
||||
* @poll_data: Optional data passed to @start_poll
|
||||
*/
|
||||
void ring_start(struct tb_ring *ring)
|
||||
struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
|
||||
unsigned int flags, u16 sof_mask, u16 eof_mask,
|
||||
void (*start_poll)(void *), void *poll_data)
|
||||
{
|
||||
mutex_lock(&ring->nhi->lock);
|
||||
mutex_lock(&ring->lock);
|
||||
return tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask,
|
||||
start_poll, poll_data);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_alloc_rx);
|
||||
|
||||
/**
|
||||
* tb_ring_start() - enable a ring
|
||||
*
|
||||
* Must not be invoked in parallel with tb_ring_stop().
|
||||
*/
|
||||
void tb_ring_start(struct tb_ring *ring)
|
||||
{
|
||||
u16 frame_size;
|
||||
u32 flags;
|
||||
|
||||
spin_lock_irq(&ring->nhi->lock);
|
||||
spin_lock(&ring->lock);
|
||||
if (ring->nhi->going_away)
|
||||
goto err;
|
||||
if (ring->running) {
|
||||
@ -411,43 +588,65 @@ void ring_start(struct tb_ring *ring)
|
||||
dev_info(&ring->nhi->pdev->dev, "starting %s %d\n",
|
||||
RING_TYPE(ring), ring->hop);
|
||||
|
||||
if (ring->flags & RING_FLAG_FRAME) {
|
||||
/* Means 4096 */
|
||||
frame_size = 0;
|
||||
flags = RING_FLAG_ENABLE;
|
||||
} else {
|
||||
frame_size = TB_FRAME_SIZE;
|
||||
flags = RING_FLAG_ENABLE | RING_FLAG_RAW;
|
||||
}
|
||||
|
||||
if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
|
||||
u32 hop;
|
||||
|
||||
/*
|
||||
* In order not to lose Rx packets we enable end-to-end
|
||||
* workaround which transfers Rx credits to an unused Tx
|
||||
* HopID.
|
||||
*/
|
||||
hop = RING_E2E_UNUSED_HOPID << REG_RX_OPTIONS_E2E_HOP_SHIFT;
|
||||
hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
|
||||
flags |= hop | RING_FLAG_E2E_FLOW_CONTROL;
|
||||
}
|
||||
|
||||
ring_iowrite64desc(ring, ring->descriptors_dma, 0);
|
||||
if (ring->is_tx) {
|
||||
ring_iowrite32desc(ring, ring->size, 12);
|
||||
ring_iowrite32options(ring, 0, 4); /* time releated ? */
|
||||
ring_iowrite32options(ring,
|
||||
RING_FLAG_ENABLE | RING_FLAG_RAW, 0);
|
||||
ring_iowrite32options(ring, flags, 0);
|
||||
} else {
|
||||
ring_iowrite32desc(ring,
|
||||
(TB_FRAME_SIZE << 16) | ring->size, 12);
|
||||
ring_iowrite32options(ring, 0xffffffff, 4); /* SOF EOF mask */
|
||||
ring_iowrite32options(ring,
|
||||
RING_FLAG_ENABLE | RING_FLAG_RAW, 0);
|
||||
u32 sof_eof_mask = ring->sof_mask << 16 | ring->eof_mask;
|
||||
|
||||
ring_iowrite32desc(ring, (frame_size << 16) | ring->size, 12);
|
||||
ring_iowrite32options(ring, sof_eof_mask, 4);
|
||||
ring_iowrite32options(ring, flags, 0);
|
||||
}
|
||||
ring_interrupt_active(ring, true);
|
||||
ring->running = true;
|
||||
err:
|
||||
mutex_unlock(&ring->lock);
|
||||
mutex_unlock(&ring->nhi->lock);
|
||||
spin_unlock(&ring->lock);
|
||||
spin_unlock_irq(&ring->nhi->lock);
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL_GPL(tb_ring_start);
|
||||
|
||||
/**
|
||||
* ring_stop() - shutdown a ring
|
||||
* tb_ring_stop() - shutdown a ring
|
||||
*
|
||||
* Must not be invoked from a callback.
|
||||
*
|
||||
* This method will disable the ring. Further calls to ring_tx/ring_rx will
|
||||
* return -ESHUTDOWN until ring_stop has been called.
|
||||
* This method will disable the ring. Further calls to
|
||||
* tb_ring_tx/tb_ring_rx will return -ESHUTDOWN until ring_stop has been
|
||||
* called.
|
||||
*
|
||||
* All enqueued frames will be canceled and their callbacks will be executed
|
||||
* with frame->canceled set to true (on the callback thread). This method
|
||||
* returns only after all callback invocations have finished.
|
||||
*/
|
||||
void ring_stop(struct tb_ring *ring)
|
||||
void tb_ring_stop(struct tb_ring *ring)
|
||||
{
|
||||
mutex_lock(&ring->nhi->lock);
|
||||
mutex_lock(&ring->lock);
|
||||
spin_lock_irq(&ring->nhi->lock);
|
||||
spin_lock(&ring->lock);
|
||||
dev_info(&ring->nhi->pdev->dev, "stopping %s %d\n",
|
||||
RING_TYPE(ring), ring->hop);
|
||||
if (ring->nhi->going_away)
|
||||
@ -468,8 +667,8 @@ void ring_stop(struct tb_ring *ring)
|
||||
ring->running = false;
|
||||
|
||||
err:
|
||||
mutex_unlock(&ring->lock);
|
||||
mutex_unlock(&ring->nhi->lock);
|
||||
spin_unlock(&ring->lock);
|
||||
spin_unlock_irq(&ring->nhi->lock);
|
||||
|
||||
/*
|
||||
* schedule ring->work to invoke callbacks on all remaining frames.
|
||||
@ -477,9 +676,10 @@ err:
|
||||
schedule_work(&ring->work);
|
||||
flush_work(&ring->work);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_stop);
|
||||
|
||||
/*
|
||||
* ring_free() - free ring
|
||||
* tb_ring_free() - free ring
|
||||
*
|
||||
* When this method returns all invocations of ring->callback will have
|
||||
* finished.
|
||||
@ -488,9 +688,9 @@ err:
|
||||
*
|
||||
* Must NOT be called from ring_frame->callback!
|
||||
*/
|
||||
void ring_free(struct tb_ring *ring)
|
||||
void tb_ring_free(struct tb_ring *ring)
|
||||
{
|
||||
mutex_lock(&ring->nhi->lock);
|
||||
spin_lock_irq(&ring->nhi->lock);
|
||||
/*
|
||||
* Dissociate the ring from the NHI. This also ensures that
|
||||
* nhi_interrupt_work cannot reschedule ring->work.
|
||||
@ -504,6 +704,7 @@ void ring_free(struct tb_ring *ring)
|
||||
dev_WARN(&ring->nhi->pdev->dev, "%s %d still running\n",
|
||||
RING_TYPE(ring), ring->hop);
|
||||
}
|
||||
spin_unlock_irq(&ring->nhi->lock);
|
||||
|
||||
ring_release_msix(ring);
|
||||
|
||||
@ -520,16 +721,15 @@ void ring_free(struct tb_ring *ring)
|
||||
RING_TYPE(ring),
|
||||
ring->hop);
|
||||
|
||||
mutex_unlock(&ring->nhi->lock);
|
||||
/**
|
||||
* ring->work can no longer be scheduled (it is scheduled only
|
||||
* by nhi_interrupt_work, ring_stop and ring_msix). Wait for it
|
||||
* to finish before freeing the ring.
|
||||
*/
|
||||
flush_work(&ring->work);
|
||||
mutex_destroy(&ring->lock);
|
||||
kfree(ring);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_ring_free);
|
||||
|
||||
/**
|
||||
* nhi_mailbox_cmd() - Send a command through NHI mailbox
|
||||
@ -595,7 +795,7 @@ static void nhi_interrupt_work(struct work_struct *work)
|
||||
int type = 0; /* current interrupt type 0: TX, 1: RX, 2: RX overflow */
|
||||
struct tb_ring *ring;
|
||||
|
||||
mutex_lock(&nhi->lock);
|
||||
spin_lock_irq(&nhi->lock);
|
||||
|
||||
/*
|
||||
* Starting at REG_RING_NOTIFY_BASE there are three status bitfields
|
||||
@ -630,10 +830,12 @@ static void nhi_interrupt_work(struct work_struct *work)
|
||||
hop);
|
||||
continue;
|
||||
}
|
||||
/* we do not check ring->running, this is done in ring->work */
|
||||
schedule_work(&ring->work);
|
||||
|
||||
spin_lock(&ring->lock);
|
||||
__ring_interrupt(ring);
|
||||
spin_unlock(&ring->lock);
|
||||
}
|
||||
mutex_unlock(&nhi->lock);
|
||||
spin_unlock_irq(&nhi->lock);
|
||||
}
|
||||
|
||||
static irqreturn_t nhi_msi(int irq, void *data)
|
||||
@ -651,6 +853,22 @@ static int nhi_suspend_noirq(struct device *dev)
|
||||
return tb_domain_suspend_noirq(tb);
|
||||
}
|
||||
|
||||
static void nhi_enable_int_throttling(struct tb_nhi *nhi)
|
||||
{
|
||||
/* Throttling is specified in 256ns increments */
|
||||
u32 throttle = DIV_ROUND_UP(128 * NSEC_PER_USEC, 256);
|
||||
unsigned int i;
|
||||
|
||||
/*
|
||||
* Configure interrupt throttling for all vectors even if we
|
||||
* only use few.
|
||||
*/
|
||||
for (i = 0; i < MSIX_MAX_VECS; i++) {
|
||||
u32 reg = REG_INT_THROTTLING_RATE + i * 4;
|
||||
iowrite32(throttle, nhi->iobase + reg);
|
||||
}
|
||||
}
|
||||
|
||||
static int nhi_resume_noirq(struct device *dev)
|
||||
{
|
||||
struct pci_dev *pdev = to_pci_dev(dev);
|
||||
@ -663,6 +881,8 @@ static int nhi_resume_noirq(struct device *dev)
|
||||
*/
|
||||
if (!pci_device_is_present(pdev))
|
||||
tb->nhi->going_away = true;
|
||||
else
|
||||
nhi_enable_int_throttling(tb->nhi);
|
||||
|
||||
return tb_domain_resume_noirq(tb);
|
||||
}
|
||||
@ -705,7 +925,6 @@ static void nhi_shutdown(struct tb_nhi *nhi)
|
||||
devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
|
||||
flush_work(&nhi->interrupt_work);
|
||||
}
|
||||
mutex_destroy(&nhi->lock);
|
||||
ida_destroy(&nhi->msix_ida);
|
||||
}
|
||||
|
||||
@ -717,6 +936,8 @@ static int nhi_init_msi(struct tb_nhi *nhi)
|
||||
/* In case someone left them on. */
|
||||
nhi_disable_interrupts(nhi);
|
||||
|
||||
nhi_enable_int_throttling(nhi);
|
||||
|
||||
ida_init(&nhi->msix_ida);
|
||||
|
||||
/*
|
||||
@ -792,13 +1013,10 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
|
||||
return res;
|
||||
}
|
||||
|
||||
mutex_init(&nhi->lock);
|
||||
spin_lock_init(&nhi->lock);
|
||||
|
||||
pci_set_master(pdev);
|
||||
|
||||
/* magic value - clock related? */
|
||||
iowrite32(3906250 / 10000, nhi->iobase + 0x38c00);
|
||||
|
||||
tb = icm_probe(nhi);
|
||||
if (!tb)
|
||||
tb = tb_probe(nhi);
|
||||
|
@ -7,144 +7,7 @@
|
||||
#ifndef DSL3510_H_
|
||||
#define DSL3510_H_
|
||||
|
||||
#include <linux/idr.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/workqueue.h>
|
||||
|
||||
/**
|
||||
* struct tb_nhi - thunderbolt native host interface
|
||||
* @lock: Must be held during ring creation/destruction. Is acquired by
|
||||
* interrupt_work when dispatching interrupts to individual rings.
|
||||
* @pdev: Pointer to the PCI device
|
||||
* @iobase: MMIO space of the NHI
|
||||
* @tx_rings: All Tx rings available on this host controller
|
||||
* @rx_rings: All Rx rings available on this host controller
|
||||
* @msix_ida: Used to allocate MSI-X vectors for rings
|
||||
* @going_away: The host controller device is about to disappear so when
|
||||
* this flag is set, avoid touching the hardware anymore.
|
||||
* @interrupt_work: Work scheduled to handle ring interrupt when no
|
||||
* MSI-X is used.
|
||||
* @hop_count: Number of rings (end point hops) supported by NHI.
|
||||
*/
|
||||
struct tb_nhi {
|
||||
struct mutex lock;
|
||||
struct pci_dev *pdev;
|
||||
void __iomem *iobase;
|
||||
struct tb_ring **tx_rings;
|
||||
struct tb_ring **rx_rings;
|
||||
struct ida msix_ida;
|
||||
bool going_away;
|
||||
struct work_struct interrupt_work;
|
||||
u32 hop_count;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tb_ring - thunderbolt TX or RX ring associated with a NHI
|
||||
* @lock: Lock serializing actions to this ring. Must be acquired after
|
||||
* nhi->lock.
|
||||
* @nhi: Pointer to the native host controller interface
|
||||
* @size: Size of the ring
|
||||
* @hop: Hop (DMA channel) associated with this ring
|
||||
* @head: Head of the ring (write next descriptor here)
|
||||
* @tail: Tail of the ring (complete next descriptor here)
|
||||
* @descriptors: Allocated descriptors for this ring
|
||||
* @queue: Queue holding frames to be transferred over this ring
|
||||
* @in_flight: Queue holding frames that are currently in flight
|
||||
* @work: Interrupt work structure
|
||||
* @is_tx: Is the ring Tx or Rx
|
||||
* @running: Is the ring running
|
||||
* @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise.
|
||||
* @vector: MSI-X vector number the ring uses (only set if @irq is > 0)
|
||||
* @flags: Ring specific flags
|
||||
*/
|
||||
struct tb_ring {
|
||||
struct mutex lock;
|
||||
struct tb_nhi *nhi;
|
||||
int size;
|
||||
int hop;
|
||||
int head;
|
||||
int tail;
|
||||
struct ring_desc *descriptors;
|
||||
dma_addr_t descriptors_dma;
|
||||
struct list_head queue;
|
||||
struct list_head in_flight;
|
||||
struct work_struct work;
|
||||
bool is_tx:1;
|
||||
bool running:1;
|
||||
int irq;
|
||||
u8 vector;
|
||||
unsigned int flags;
|
||||
};
|
||||
|
||||
/* Leave ring interrupt enabled on suspend */
#define RING_FLAG_NO_SUSPEND BIT(0)

struct ring_frame;
typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled);

/**
 * struct ring_frame - for use with ring_rx/ring_tx
 */
struct ring_frame {
	dma_addr_t buffer_phy;
	ring_cb callback;
	struct list_head list;
	u32 size:12;	/* TX: in, RX: out */
	u32 flags:12;	/* RX: out */
	u32 eof:4;	/* TX: in, RX: out */
	u32 sof:4;	/* TX: in, RX: out */
};

#define TB_FRAME_SIZE 0x100	/* minimum size for ring_rx */

struct tb_ring *ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
			      unsigned int flags);
struct tb_ring *ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
			      unsigned int flags);
void ring_start(struct tb_ring *ring);
void ring_stop(struct tb_ring *ring);
void ring_free(struct tb_ring *ring);

int __ring_enqueue(struct tb_ring *ring, struct ring_frame *frame);

/**
 * ring_rx() - enqueue a frame on an RX ring
 *
 * frame->buffer, frame->buffer_phy and frame->callback have to be set. The
 * buffer must contain at least TB_FRAME_SIZE bytes.
 *
 * frame->callback will be invoked with frame->size, frame->flags, frame->eof,
 * frame->sof set once the frame has been received.
 *
 * If ring_stop is called after the packet has been enqueued frame->callback
 * will be called with canceled set to true.
 *
 * Return: Returns ESHUTDOWN if ring_stop has been called. Zero otherwise.
 */
static inline int ring_rx(struct tb_ring *ring, struct ring_frame *frame)
{
	WARN_ON(ring->is_tx);
	return __ring_enqueue(ring, frame);
}

/**
 * ring_tx() - enqueue a frame on a TX ring
 *
 * frame->buffer, frame->buffer_phy, frame->callback, frame->size, frame->eof
 * and frame->sof have to be set.
 *
 * frame->callback will be invoked once the frame has been transmitted.
 *
 * If ring_stop is called after the packet has been enqueued frame->callback
 * will be called with canceled set to true.
 *
 * Return: Returns ESHUTDOWN if ring_stop has been called. Zero otherwise.
 */
static inline int ring_tx(struct tb_ring *ring, struct ring_frame *frame)
{
	WARN_ON(!ring->is_tx);
	return __ring_enqueue(ring, frame);
}

#include <linux/thunderbolt.h>

enum nhi_fw_mode {
	NHI_FW_SAFE_MODE,
@@ -157,6 +20,8 @@ enum nhi_mailbox_cmd {
	NHI_MAILBOX_SAVE_DEVS = 0x05,
	NHI_MAILBOX_DISCONNECT_PCIE_PATHS = 0x06,
	NHI_MAILBOX_DRV_UNLOADS = 0x07,
	NHI_MAILBOX_DISCONNECT_PA = 0x10,
	NHI_MAILBOX_DISCONNECT_PB = 0x11,
	NHI_MAILBOX_ALLOW_ALL_DEVS = 0x23,
};

@@ -17,13 +17,6 @@ enum ring_flags {
	RING_FLAG_ENABLE = 1 << 31,
};

enum ring_desc_flags {
	RING_DESC_ISOCH = 0x1,		/* TX only? */
	RING_DESC_COMPLETED = 0x2,	/* set by NHI */
	RING_DESC_POSTED = 0x4,		/* always set this */
	RING_DESC_INTERRUPT = 0x8,	/* request an interrupt on completion */
};

/**
 * struct ring_desc - TX/RX ring entry
 *
@@ -77,6 +70,8 @@ struct ring_desc {
 * ..: unknown
 */
#define REG_RX_OPTIONS_BASE	0x29800
#define REG_RX_OPTIONS_E2E_HOP_MASK	GENMASK(22, 12)
#define REG_RX_OPTIONS_E2E_HOP_SHIFT	12

/*
 * three bitfields: tx, rx, rx overflow
@@ -95,6 +90,8 @@ struct ring_desc {
#define REG_RING_INTERRUPT_BASE	0x38200
#define RING_INTERRUPT_REG_COUNT(nhi) ((31 + 2 * nhi->hop_count) / 32)

#define REG_INT_THROTTLING_RATE	0x38c00

/* Interrupt Vector Allocation */
#define REG_INT_VEC_ALLOC_BASE	0x38c40
#define REG_INT_VEC_ALLOC_BITS	4

drivers/thunderbolt/property.c (new file, 670 lines)
@@ -0,0 +1,670 @@
|
||||
/*
|
||||
* Thunderbolt XDomain property support
|
||||
*
|
||||
* Copyright (C) 2017, Intel Corporation
|
||||
* Authors: Michael Jamet <michael.jamet@intel.com>
|
||||
* Mika Westerberg <mika.westerberg@linux.intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*/
|
||||
|
||||
#include <linux/err.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/string.h>
|
||||
#include <linux/uuid.h>
|
||||
#include <linux/thunderbolt.h>
|
||||
|
||||
struct tb_property_entry {
|
||||
u32 key_hi;
|
||||
u32 key_lo;
|
||||
u16 length;
|
||||
u8 reserved;
|
||||
u8 type;
|
||||
u32 value;
|
||||
};
|
||||
|
||||
struct tb_property_rootdir_entry {
|
||||
u32 magic;
|
||||
u32 length;
|
||||
struct tb_property_entry entries[];
|
||||
};
|
||||
|
||||
struct tb_property_dir_entry {
|
||||
u32 uuid[4];
|
||||
struct tb_property_entry entries[];
|
||||
};
|
||||
|
||||
#define TB_PROPERTY_ROOTDIR_MAGIC 0x55584401
|
||||
|
||||
static struct tb_property_dir *__tb_property_parse_dir(const u32 *block,
|
||||
size_t block_len, unsigned int dir_offset, size_t dir_len,
|
||||
bool is_root);
|
||||
|
||||
static inline void parse_dwdata(void *dst, const void *src, size_t dwords)
|
||||
{
|
||||
be32_to_cpu_array(dst, src, dwords);
|
||||
}
|
||||
|
||||
static inline void format_dwdata(void *dst, const void *src, size_t dwords)
|
||||
{
|
||||
cpu_to_be32_array(dst, src, dwords);
|
||||
}
|
||||
|
||||
static bool tb_property_entry_valid(const struct tb_property_entry *entry,
|
||||
size_t block_len)
|
||||
{
|
||||
switch (entry->type) {
|
||||
case TB_PROPERTY_TYPE_DIRECTORY:
|
||||
case TB_PROPERTY_TYPE_DATA:
|
||||
case TB_PROPERTY_TYPE_TEXT:
|
||||
if (entry->length > block_len)
|
||||
return false;
|
||||
if (entry->value + entry->length > block_len)
|
||||
return false;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_VALUE:
|
||||
if (entry->length != 1)
|
||||
return false;
|
||||
break;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool tb_property_key_valid(const char *key)
|
||||
{
|
||||
return key && strlen(key) <= TB_PROPERTY_KEY_SIZE;
|
||||
}
|
||||
|
||||
static struct tb_property *
|
||||
tb_property_alloc(const char *key, enum tb_property_type type)
|
||||
{
|
||||
struct tb_property *property;
|
||||
|
||||
property = kzalloc(sizeof(*property), GFP_KERNEL);
|
||||
if (!property)
|
||||
return NULL;
|
||||
|
||||
strcpy(property->key, key);
|
||||
property->type = type;
|
||||
INIT_LIST_HEAD(&property->list);
|
||||
|
||||
return property;
|
||||
}
|
||||
|
||||
static struct tb_property *tb_property_parse(const u32 *block, size_t block_len,
|
||||
const struct tb_property_entry *entry)
|
||||
{
|
||||
char key[TB_PROPERTY_KEY_SIZE + 1];
|
||||
struct tb_property *property;
|
||||
struct tb_property_dir *dir;
|
||||
|
||||
if (!tb_property_entry_valid(entry, block_len))
|
||||
return NULL;
|
||||
|
||||
parse_dwdata(key, entry, 2);
|
||||
key[TB_PROPERTY_KEY_SIZE] = '\0';
|
||||
|
||||
property = tb_property_alloc(key, entry->type);
|
||||
if (!property)
|
||||
return NULL;
|
||||
|
||||
property->length = entry->length;
|
||||
|
||||
switch (property->type) {
|
||||
case TB_PROPERTY_TYPE_DIRECTORY:
|
||||
dir = __tb_property_parse_dir(block, block_len, entry->value,
|
||||
entry->length, false);
|
||||
if (!dir) {
|
||||
kfree(property);
|
||||
return NULL;
|
||||
}
|
||||
property->value.dir = dir;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_DATA:
|
||||
property->value.data = kcalloc(property->length, sizeof(u32),
|
||||
GFP_KERNEL);
|
||||
if (!property->value.data) {
|
||||
kfree(property);
|
||||
return NULL;
|
||||
}
|
||||
parse_dwdata(property->value.data, block + entry->value,
|
||||
entry->length);
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_TEXT:
|
||||
property->value.text = kcalloc(property->length, sizeof(u32),
|
||||
GFP_KERNEL);
|
||||
if (!property->value.text) {
|
||||
kfree(property);
|
||||
return NULL;
|
||||
}
|
||||
parse_dwdata(property->value.text, block + entry->value,
|
||||
entry->length);
|
||||
/* Force null termination */
|
||||
property->value.text[property->length * 4 - 1] = '\0';
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_VALUE:
|
||||
property->value.immediate = entry->value;
|
||||
break;
|
||||
|
||||
default:
|
||||
property->type = TB_PROPERTY_TYPE_UNKNOWN;
|
||||
break;
|
||||
}
|
||||
|
||||
return property;
|
||||
}
|
||||
|
||||
static struct tb_property_dir *__tb_property_parse_dir(const u32 *block,
|
||||
size_t block_len, unsigned int dir_offset, size_t dir_len, bool is_root)
|
||||
{
|
||||
const struct tb_property_entry *entries;
|
||||
size_t i, content_len, nentries;
|
||||
unsigned int content_offset;
|
||||
struct tb_property_dir *dir;
|
||||
|
||||
dir = kzalloc(sizeof(*dir), GFP_KERNEL);
|
||||
if (!dir)
|
||||
return NULL;
|
||||
|
||||
if (is_root) {
|
||||
content_offset = dir_offset + 2;
|
||||
content_len = dir_len;
|
||||
} else {
|
||||
dir->uuid = kmemdup(&block[dir_offset], sizeof(*dir->uuid),
|
||||
GFP_KERNEL);
|
||||
content_offset = dir_offset + 4;
|
||||
content_len = dir_len - 4; /* Length includes UUID */
|
||||
}
|
||||
|
||||
entries = (const struct tb_property_entry *)&block[content_offset];
|
||||
nentries = content_len / (sizeof(*entries) / 4);
|
||||
|
||||
INIT_LIST_HEAD(&dir->properties);
|
||||
|
||||
for (i = 0; i < nentries; i++) {
|
||||
struct tb_property *property;
|
||||
|
||||
property = tb_property_parse(block, block_len, &entries[i]);
|
||||
if (!property) {
|
||||
tb_property_free_dir(dir);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
list_add_tail(&property->list, &dir->properties);
|
||||
}
|
||||
|
||||
return dir;
|
||||
}
|
||||
|
||||
/**
 * tb_property_parse_dir() - Parses properties from given property block
 * @block: Property block to parse
 * @block_len: Number of dword elements in the property block
 *
 * This function parses the XDomain properties data block into a format that
 * can be traversed using the helper functions provided by this module.
 * Upon success returns the parsed directory. In case of error returns
 * %NULL. The resulting &struct tb_property_dir needs to be released by
 * calling tb_property_free_dir() when not needed anymore.
 *
 * The @block is expected to be root directory.
 */
struct tb_property_dir *tb_property_parse_dir(const u32 *block,
					      size_t block_len)
{
	const struct tb_property_rootdir_entry *rootdir =
		(const struct tb_property_rootdir_entry *)block;

	if (rootdir->magic != TB_PROPERTY_ROOTDIR_MAGIC)
		return NULL;
	if (rootdir->length > block_len)
		return NULL;

	return __tb_property_parse_dir(block, block_len, 0, rootdir->length,
				       true);
}

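For illustration only, a minimal sketch of how a receiver of a property block
might use these helpers; the "devname" key used here is a made-up example,
not one defined by this series:

	struct tb_property_dir *dir;
	struct tb_property *p;

	/* block/block_len hold a complete received XDomain property block */
	dir = tb_property_parse_dir(block, block_len);
	if (!dir)
		return -ENOMEM;

	/* Look up a text property by a (hypothetical) key */
	p = tb_property_find(dir, "devname", TB_PROPERTY_TYPE_TEXT);
	if (p)
		pr_info("remote calls itself %s\n", p->value.text);

	tb_property_free_dir(dir);
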
/**
|
||||
* tb_property_create_dir() - Creates new property directory
|
||||
* @uuid: UUID used to identify the particular directory
|
||||
*
|
||||
* Creates new, empty property directory. If @uuid is %NULL then the
|
||||
* directory is assumed to be root directory.
|
||||
*/
|
||||
struct tb_property_dir *tb_property_create_dir(const uuid_t *uuid)
|
||||
{
|
||||
struct tb_property_dir *dir;
|
||||
|
||||
dir = kzalloc(sizeof(*dir), GFP_KERNEL);
|
||||
if (!dir)
|
||||
return NULL;
|
||||
|
||||
INIT_LIST_HEAD(&dir->properties);
|
||||
if (uuid) {
|
||||
dir->uuid = kmemdup(uuid, sizeof(*dir->uuid), GFP_KERNEL);
|
||||
if (!dir->uuid) {
|
||||
kfree(dir);
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
|
||||
return dir;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_create_dir);
|
||||
|
||||
static void tb_property_free(struct tb_property *property)
|
||||
{
|
||||
switch (property->type) {
|
||||
case TB_PROPERTY_TYPE_DIRECTORY:
|
||||
tb_property_free_dir(property->value.dir);
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_DATA:
|
||||
kfree(property->value.data);
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_TEXT:
|
||||
kfree(property->value.text);
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
kfree(property);
|
||||
}
|
||||
|
||||
/**
|
||||
* tb_property_free_dir() - Release memory allocated for property directory
|
||||
* @dir: Directory to release
|
||||
*
|
||||
* This will release all the memory the directory occupies including all
|
||||
* descendants. It is OK to pass %NULL @dir, then the function does
|
||||
* nothing.
|
||||
*/
|
||||
void tb_property_free_dir(struct tb_property_dir *dir)
|
||||
{
|
||||
struct tb_property *property, *tmp;
|
||||
|
||||
if (!dir)
|
||||
return;
|
||||
|
||||
list_for_each_entry_safe(property, tmp, &dir->properties, list) {
|
||||
list_del(&property->list);
|
||||
tb_property_free(property);
|
||||
}
|
||||
kfree(dir->uuid);
|
||||
kfree(dir);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_free_dir);
|
||||
|
||||
static size_t tb_property_dir_length(const struct tb_property_dir *dir,
|
||||
bool recurse, size_t *data_len)
|
||||
{
|
||||
const struct tb_property *property;
|
||||
size_t len = 0;
|
||||
|
||||
if (dir->uuid)
|
||||
len += sizeof(*dir->uuid) / 4;
|
||||
else
|
||||
len += sizeof(struct tb_property_rootdir_entry) / 4;
|
||||
|
||||
list_for_each_entry(property, &dir->properties, list) {
|
||||
len += sizeof(struct tb_property_entry) / 4;
|
||||
|
||||
switch (property->type) {
|
||||
case TB_PROPERTY_TYPE_DIRECTORY:
|
||||
if (recurse) {
|
||||
len += tb_property_dir_length(
|
||||
property->value.dir, recurse, data_len);
|
||||
}
|
||||
/* Reserve dword padding after each directory */
|
||||
if (data_len)
|
||||
*data_len += 1;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_DATA:
|
||||
case TB_PROPERTY_TYPE_TEXT:
|
||||
if (data_len)
|
||||
*data_len += property->length;
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
static ssize_t __tb_property_format_dir(const struct tb_property_dir *dir,
|
||||
u32 *block, unsigned int start_offset, size_t block_len)
|
||||
{
|
||||
unsigned int data_offset, dir_end;
|
||||
const struct tb_property *property;
|
||||
struct tb_property_entry *entry;
|
||||
size_t dir_len, data_len = 0;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
	 * The structure of the property block looks like the following. Leaf
|
||||
* data/text is included right after the directory and each
|
||||
* directory follows each other (even nested ones).
|
||||
*
|
||||
* +----------+ <-- start_offset
|
||||
* | header | <-- root directory header
|
||||
* +----------+ ---
|
||||
* | entry 0 | -^--------------------.
|
||||
* +----------+ | |
|
||||
* | entry 1 | -|--------------------|--.
|
||||
* +----------+ | | |
|
||||
* | entry 2 | -|-----------------. | |
|
||||
* +----------+ | | | |
|
||||
* : : | dir_len | | |
|
||||
* . . | | | |
|
||||
* : : | | | |
|
||||
* +----------+ | | | |
|
||||
* | entry n | v | | |
|
||||
* +----------+ <-- data_offset | | |
|
||||
* | data 0 | <------------------|--' |
|
||||
* +----------+ | |
|
||||
* | data 1 | <------------------|-----'
|
||||
* +----------+ |
|
||||
* | 00000000 | padding |
|
||||
* +----------+ <-- dir_end <------'
|
||||
* | UUID | <-- directory UUID (child directory)
|
||||
* +----------+
|
||||
* | entry 0 |
|
||||
* +----------+
|
||||
* | entry 1 |
|
||||
* +----------+
|
||||
* : :
|
||||
* . .
|
||||
* : :
|
||||
* +----------+
|
||||
* | entry n |
|
||||
* +----------+
|
||||
* | data 0 |
|
||||
* +----------+
|
||||
*
|
||||
* We use dir_end to hold pointer to the end of the directory. It
|
||||
* will increase as we add directories and each directory should be
|
||||
* added starting from previous dir_end.
|
||||
*/
|
||||
dir_len = tb_property_dir_length(dir, false, &data_len);
|
||||
data_offset = start_offset + dir_len;
|
||||
dir_end = start_offset + data_len + dir_len;
|
||||
|
||||
if (data_offset > dir_end)
|
||||
return -EINVAL;
|
||||
if (dir_end > block_len)
|
||||
return -EINVAL;
|
||||
|
||||
/* Write headers first */
|
||||
if (dir->uuid) {
|
||||
struct tb_property_dir_entry *pe;
|
||||
|
||||
pe = (struct tb_property_dir_entry *)&block[start_offset];
|
||||
memcpy(pe->uuid, dir->uuid, sizeof(pe->uuid));
|
||||
entry = pe->entries;
|
||||
} else {
|
||||
struct tb_property_rootdir_entry *re;
|
||||
|
||||
re = (struct tb_property_rootdir_entry *)&block[start_offset];
|
||||
re->magic = TB_PROPERTY_ROOTDIR_MAGIC;
|
||||
re->length = dir_len - sizeof(*re) / 4;
|
||||
entry = re->entries;
|
||||
}
|
||||
|
||||
list_for_each_entry(property, &dir->properties, list) {
|
||||
const struct tb_property_dir *child;
|
||||
|
||||
format_dwdata(entry, property->key, 2);
|
||||
entry->type = property->type;
|
||||
|
||||
switch (property->type) {
|
||||
case TB_PROPERTY_TYPE_DIRECTORY:
|
||||
child = property->value.dir;
|
||||
ret = __tb_property_format_dir(child, block, dir_end,
|
||||
block_len);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
entry->length = tb_property_dir_length(child, false,
|
||||
NULL);
|
||||
entry->value = dir_end;
|
||||
dir_end = ret;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_DATA:
|
||||
format_dwdata(&block[data_offset], property->value.data,
|
||||
property->length);
|
||||
entry->length = property->length;
|
||||
entry->value = data_offset;
|
||||
data_offset += entry->length;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_TEXT:
|
||||
format_dwdata(&block[data_offset], property->value.text,
|
||||
property->length);
|
||||
entry->length = property->length;
|
||||
entry->value = data_offset;
|
||||
data_offset += entry->length;
|
||||
break;
|
||||
|
||||
case TB_PROPERTY_TYPE_VALUE:
|
||||
entry->length = property->length;
|
||||
entry->value = property->value.immediate;
|
||||
break;
|
||||
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
entry++;
|
||||
}
|
||||
|
||||
return dir_end;
|
||||
}
|
||||
|
||||
/**
 * tb_property_format_dir() - Formats directory to the packed XDomain format
 * @dir: Directory to format
 * @block: Property block where the packed data is placed
 * @block_len: Length of the property block
 *
 * This function formats the directory to the packed format that can then
 * be sent over the Thunderbolt fabric to the receiving host. Returns %0 in
 * case of success and negative errno on failure. Passing %NULL in @block
 * returns number of entries the block takes.
 */
ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
			       size_t block_len)
{
	ssize_t ret;

	if (!block) {
		size_t dir_len, data_len = 0;

		dir_len = tb_property_dir_length(dir, true, &data_len);
		return dir_len + data_len;
	}

	ret = __tb_property_format_dir(dir, block, 0, block_len);
	return ret < 0 ? ret : 0;
}

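A sketch of the intended two-pass usage, assuming @dir has already been
populated with the add helpers below:

	u32 *block;
	ssize_t len;

	/* First pass: ask how many dwords the packed block will need */
	len = tb_property_format_dir(dir, NULL, 0);

	block = kcalloc(len, sizeof(u32), GFP_KERNEL);
	if (!block)
		return -ENOMEM;

	/* Second pass: pack the directory into the allocated block */
	if (tb_property_format_dir(dir, block, len))
		goto err_free;	/* hypothetical error label */
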
/**
|
||||
* tb_property_add_immediate() - Add immediate property to directory
|
||||
* @parent: Directory to add the property
|
||||
* @key: Key for the property
|
||||
* @value: Immediate value to store with the property
|
||||
*/
|
||||
int tb_property_add_immediate(struct tb_property_dir *parent, const char *key,
|
||||
u32 value)
|
||||
{
|
||||
struct tb_property *property;
|
||||
|
||||
if (!tb_property_key_valid(key))
|
||||
return -EINVAL;
|
||||
|
||||
property = tb_property_alloc(key, TB_PROPERTY_TYPE_VALUE);
|
||||
if (!property)
|
||||
return -ENOMEM;
|
||||
|
||||
property->length = 1;
|
||||
property->value.immediate = value;
|
||||
|
||||
list_add_tail(&property->list, &parent->properties);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_add_immediate);
|
||||
|
||||
/**
|
||||
* tb_property_add_data() - Adds arbitrary data property to directory
|
||||
* @parent: Directory to add the property
|
||||
* @key: Key for the property
|
||||
* @buf: Data buffer to add
|
||||
* @buflen: Number of bytes in the data buffer
|
||||
*
|
||||
* Function takes a copy of @buf and adds it to the directory.
|
||||
*/
|
||||
int tb_property_add_data(struct tb_property_dir *parent, const char *key,
|
||||
const void *buf, size_t buflen)
|
||||
{
|
||||
/* Need to pad to dword boundary */
|
||||
size_t size = round_up(buflen, 4);
|
||||
struct tb_property *property;
|
||||
|
||||
if (!tb_property_key_valid(key))
|
||||
return -EINVAL;
|
||||
|
||||
property = tb_property_alloc(key, TB_PROPERTY_TYPE_DATA);
|
||||
if (!property)
|
||||
return -ENOMEM;
|
||||
|
||||
property->length = size / 4;
|
||||
property->value.data = kzalloc(size, GFP_KERNEL);
|
||||
memcpy(property->value.data, buf, buflen);
|
||||
|
||||
list_add_tail(&property->list, &parent->properties);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_add_data);
|
||||
|
||||
/**
|
||||
* tb_property_add_text() - Adds string property to directory
|
||||
* @parent: Directory to add the property
|
||||
* @key: Key for the property
|
||||
* @text: String to add
|
||||
*
|
||||
* Function takes a copy of @text and adds it to the directory.
|
||||
*/
|
||||
int tb_property_add_text(struct tb_property_dir *parent, const char *key,
|
||||
const char *text)
|
||||
{
|
||||
/* Need to pad to dword boundary */
|
||||
size_t size = round_up(strlen(text) + 1, 4);
|
||||
struct tb_property *property;
|
||||
|
||||
if (!tb_property_key_valid(key))
|
||||
return -EINVAL;
|
||||
|
||||
property = tb_property_alloc(key, TB_PROPERTY_TYPE_TEXT);
|
||||
if (!property)
|
||||
return -ENOMEM;
|
||||
|
||||
property->length = size / 4;
|
||||
property->value.data = kzalloc(size, GFP_KERNEL);
|
||||
strcpy(property->value.text, text);
|
||||
|
||||
list_add_tail(&property->list, &parent->properties);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_add_text);
|
||||
|
||||
/**
|
||||
* tb_property_add_dir() - Adds a directory to the parent directory
|
||||
* @parent: Directory to add the property
|
||||
* @key: Key for the property
|
||||
* @dir: Directory to add
|
||||
*/
|
||||
int tb_property_add_dir(struct tb_property_dir *parent, const char *key,
|
||||
struct tb_property_dir *dir)
|
||||
{
|
||||
struct tb_property *property;
|
||||
|
||||
if (!tb_property_key_valid(key))
|
||||
return -EINVAL;
|
||||
|
||||
property = tb_property_alloc(key, TB_PROPERTY_TYPE_DIRECTORY);
|
||||
if (!property)
|
||||
return -ENOMEM;
|
||||
|
||||
property->value.dir = dir;
|
||||
|
||||
list_add_tail(&property->list, &parent->properties);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_add_dir);
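
Taken together, the add helpers let a service describe itself. A hedged
example of building such a directory (the key names, values and example_uuid
are illustrative only, not values defined by this series):

	struct tb_property_dir *root, *svc;

	root = tb_property_create_dir(NULL);		/* root directory */
	svc = tb_property_create_dir(&example_uuid);	/* example_uuid: hypothetical */
	if (!root || !svc)
		return -ENOMEM;	/* unwinding omitted in this sketch */

	tb_property_add_text(root, "deviceid", "Linux host");
	tb_property_add_immediate(svc, "prtcid", 1);
	tb_property_add_dir(root, "network", svc);
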
|
||||
|
||||
/**
|
||||
* tb_property_remove() - Removes property from a parent directory
|
||||
* @property: Property to remove
|
||||
*
|
||||
* Note memory for @property is released as well so it is not allowed to
|
||||
* touch the object after call to this function.
|
||||
*/
|
||||
void tb_property_remove(struct tb_property *property)
|
||||
{
|
||||
list_del(&property->list);
|
||||
kfree(property);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_remove);
|
||||
|
||||
/**
|
||||
* tb_property_find() - Find a property from a directory
|
||||
* @dir: Directory where the property is searched
|
||||
* @key: Key to look for
|
||||
* @type: Type of the property
|
||||
*
|
||||
* Finds and returns property from the given directory. Does not recurse
|
||||
* into sub-directories. Returns %NULL if the property was not found.
|
||||
*/
|
||||
struct tb_property *tb_property_find(struct tb_property_dir *dir,
|
||||
const char *key, enum tb_property_type type)
|
||||
{
|
||||
struct tb_property *property;
|
||||
|
||||
list_for_each_entry(property, &dir->properties, list) {
|
||||
if (property->type == type && !strcmp(property->key, key))
|
||||
return property;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_find);
|
||||
|
||||
/**
|
||||
* tb_property_get_next() - Get next property from directory
|
||||
* @dir: Directory holding properties
|
||||
* @prev: Previous property in the directory (%NULL returns the first)
|
||||
*/
|
||||
struct tb_property *tb_property_get_next(struct tb_property_dir *dir,
|
||||
struct tb_property *prev)
|
||||
{
|
||||
if (prev) {
|
||||
if (list_is_last(&prev->list, &dir->properties))
|
||||
return NULL;
|
||||
return list_next_entry(prev, list);
|
||||
}
|
||||
return list_first_entry_or_null(&dir->properties, struct tb_property,
|
||||
list);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tb_property_get_next);
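
Directories can also be walked without knowing the keys in advance, for
example with the tb_property_for_each() helper declared in
include/linux/thunderbolt.h:

	struct tb_property *property;

	tb_property_for_each(dir, property) {
		if (property->type == TB_PROPERTY_TYPE_VALUE)
			pr_info("%s = %u\n", property->key,
				property->value.immediate);
	}
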
|
@ -171,11 +171,11 @@ static int nvm_authenticate_host(struct tb_switch *sw)
|
||||
|
||||
/*
|
||||
* Root switch NVM upgrade requires that we disconnect the
|
||||
* existing PCIe paths first (in case it is not in safe mode
|
||||
* existing paths first (in case it is not in safe mode
|
||||
* already).
|
||||
*/
|
||||
if (!sw->safe_mode) {
|
||||
ret = tb_domain_disconnect_pcie_paths(sw->tb);
|
||||
ret = tb_domain_disconnect_all_paths(sw->tb);
|
||||
if (ret)
|
||||
return ret;
|
||||
/*
|
||||
@ -1363,6 +1363,9 @@ void tb_switch_remove(struct tb_switch *sw)
|
||||
if (sw->ports[i].remote)
|
||||
tb_switch_remove(sw->ports[i].remote->sw);
|
||||
sw->ports[i].remote = NULL;
|
||||
if (sw->ports[i].xdomain)
|
||||
tb_xdomain_remove(sw->ports[i].xdomain);
|
||||
sw->ports[i].xdomain = NULL;
|
||||
}
|
||||
|
||||
if (!sw->is_unplugged)
|
||||
|
@ -9,6 +9,7 @@
|
||||
|
||||
#include <linux/nvmem-provider.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/thunderbolt.h>
|
||||
#include <linux/uuid.h>
|
||||
|
||||
#include "tb_regs.h"
|
||||
@ -39,23 +40,7 @@ struct tb_switch_nvm {
|
||||
bool authenticating;
|
||||
};
|
||||
|
||||
/**
|
||||
* enum tb_security_level - Thunderbolt security level
|
||||
* @TB_SECURITY_NONE: No security, legacy mode
|
||||
* @TB_SECURITY_USER: User approval required at minimum
|
||||
* @TB_SECURITY_SECURE: One time saved key required at minimum
|
||||
* @TB_SECURITY_DPONLY: Only tunnel Display port (and USB)
|
||||
*/
|
||||
enum tb_security_level {
|
||||
TB_SECURITY_NONE,
|
||||
TB_SECURITY_USER,
|
||||
TB_SECURITY_SECURE,
|
||||
TB_SECURITY_DPONLY,
|
||||
};
|
||||
|
||||
#define TB_SWITCH_KEY_SIZE 32
|
||||
/* Each physical port contains 2 links on modern controllers */
|
||||
#define TB_SWITCH_LINKS_PER_PHY_PORT 2
|
||||
|
||||
/**
|
||||
* struct tb_switch - a thunderbolt switch
|
||||
@ -125,14 +110,25 @@ struct tb_switch {
|
||||
|
||||
/**
|
||||
* struct tb_port - a thunderbolt port, part of a tb_switch
|
||||
* @config: Cached port configuration read from registers
|
||||
* @sw: Switch the port belongs to
|
||||
* @remote: Remote port (%NULL if not connected)
|
||||
* @xdomain: Remote host (%NULL if not connected)
|
||||
* @cap_phy: Offset, zero if not found
|
||||
* @port: Port number on switch
|
||||
* @disabled: Disabled by eeprom
|
||||
* @dual_link_port: If the switch is connected using two ports, points
|
||||
* to the other port.
|
||||
* @link_nr: Is this primary or secondary port on the dual_link.
|
||||
*/
|
||||
struct tb_port {
|
||||
struct tb_regs_port_header config;
|
||||
struct tb_switch *sw;
|
||||
struct tb_port *remote; /* remote port, NULL if not connected */
|
||||
int cap_phy; /* offset, zero if not found */
|
||||
u8 port; /* port number on switch */
|
||||
bool disabled; /* disabled by eeprom */
|
||||
struct tb_port *remote;
|
||||
struct tb_xdomain *xdomain;
|
||||
int cap_phy;
|
||||
u8 port;
|
||||
bool disabled;
|
||||
struct tb_port *dual_link_port;
|
||||
u8 link_nr:1;
|
||||
};
|
||||
@ -205,6 +201,8 @@ struct tb_path {
|
||||
* @add_switch_key: Add key to switch
|
||||
* @challenge_switch_key: Challenge switch using key
|
||||
* @disconnect_pcie_paths: Disconnects PCIe paths before NVM update
|
||||
* @approve_xdomain_paths: Approve (establish) XDomain DMA paths
|
||||
* @disconnect_xdomain_paths: Disconnect XDomain DMA paths
|
||||
*/
|
||||
struct tb_cm_ops {
|
||||
int (*driver_ready)(struct tb *tb);
|
||||
@ -221,33 +219,8 @@ struct tb_cm_ops {
|
||||
int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
|
||||
const u8 *challenge, u8 *response);
|
||||
int (*disconnect_pcie_paths)(struct tb *tb);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tb - main thunderbolt bus structure
|
||||
* @dev: Domain device
|
||||
* @lock: Big lock. Must be held when accessing any struct
|
||||
* tb_switch / struct tb_port.
|
||||
* @nhi: Pointer to the NHI structure
|
||||
* @ctl: Control channel for this domain
|
||||
* @wq: Ordered workqueue for all domain specific work
|
||||
* @root_switch: Root switch of this domain
|
||||
* @cm_ops: Connection manager specific operations vector
|
||||
* @index: Linux assigned domain number
|
||||
* @security_level: Current security level
|
||||
* @privdata: Private connection manager specific data
|
||||
*/
|
||||
struct tb {
|
||||
struct device dev;
|
||||
struct mutex lock;
|
||||
struct tb_nhi *nhi;
|
||||
struct tb_ctl *ctl;
|
||||
struct workqueue_struct *wq;
|
||||
struct tb_switch *root_switch;
|
||||
const struct tb_cm_ops *cm_ops;
|
||||
int index;
|
||||
enum tb_security_level security_level;
|
||||
unsigned long privdata[0];
|
||||
int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
|
||||
int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd);
|
||||
};
|
||||
|
||||
static inline void *tb_priv(struct tb *tb)
|
||||
@ -368,13 +341,14 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
|
||||
struct tb *icm_probe(struct tb_nhi *nhi);
|
||||
struct tb *tb_probe(struct tb_nhi *nhi);
|
||||
|
||||
extern struct bus_type tb_bus_type;
|
||||
extern struct device_type tb_domain_type;
|
||||
extern struct device_type tb_switch_type;
|
||||
|
||||
int tb_domain_init(void);
|
||||
void tb_domain_exit(void);
|
||||
void tb_switch_exit(void);
|
||||
int tb_xdomain_init(void);
|
||||
void tb_xdomain_exit(void);
|
||||
|
||||
struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize);
|
||||
int tb_domain_add(struct tb *tb);
|
||||
@ -387,6 +361,9 @@ int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
|
||||
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
|
||||
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
|
||||
int tb_domain_disconnect_pcie_paths(struct tb *tb);
|
||||
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
|
||||
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd);
|
||||
int tb_domain_disconnect_all_paths(struct tb *tb);
|
||||
|
||||
static inline void tb_domain_put(struct tb *tb)
|
||||
{
|
||||
@ -409,11 +386,6 @@ struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
|
||||
u8 depth);
|
||||
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid);
|
||||
|
||||
static inline unsigned int tb_switch_phy_port_from_link(unsigned int link)
|
||||
{
|
||||
return (link - 1) / TB_SWITCH_LINKS_PER_PHY_PORT;
|
||||
}
|
||||
|
||||
static inline void tb_switch_put(struct tb_switch *sw)
|
||||
{
|
||||
put_device(&sw->dev);
|
||||
@ -471,4 +443,14 @@ static inline u64 tb_downstream_route(struct tb_port *port)
|
||||
| ((u64) port->port << (port->sw->config.depth * 8));
|
||||
}
|
||||
|
||||
bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type,
|
||||
const void *buf, size_t size);
|
||||
struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
|
||||
u64 route, const uuid_t *local_uuid,
|
||||
const uuid_t *remote_uuid);
|
||||
void tb_xdomain_add(struct tb_xdomain *xd);
|
||||
void tb_xdomain_remove(struct tb_xdomain *xd);
|
||||
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
|
||||
u8 depth);
|
||||
|
||||
#endif
|
||||
|
@ -15,23 +15,6 @@
|
||||
#include <linux/types.h>
|
||||
#include <linux/uuid.h>
|
||||
|
||||
enum tb_cfg_pkg_type {
|
||||
TB_CFG_PKG_READ = 1,
|
||||
TB_CFG_PKG_WRITE = 2,
|
||||
TB_CFG_PKG_ERROR = 3,
|
||||
TB_CFG_PKG_NOTIFY_ACK = 4,
|
||||
TB_CFG_PKG_EVENT = 5,
|
||||
TB_CFG_PKG_XDOMAIN_REQ = 6,
|
||||
TB_CFG_PKG_XDOMAIN_RESP = 7,
|
||||
TB_CFG_PKG_OVERRIDE = 8,
|
||||
TB_CFG_PKG_RESET = 9,
|
||||
TB_CFG_PKG_ICM_EVENT = 10,
|
||||
TB_CFG_PKG_ICM_CMD = 11,
|
||||
TB_CFG_PKG_ICM_RESP = 12,
|
||||
TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,
|
||||
|
||||
};
|
||||
|
||||
enum tb_cfg_space {
|
||||
TB_CFG_HOPS = 0,
|
||||
TB_CFG_PORT = 1,
|
||||
@ -118,11 +101,14 @@ enum icm_pkg_code {
|
||||
ICM_CHALLENGE_DEVICE = 0x5,
|
||||
ICM_ADD_DEVICE_KEY = 0x6,
|
||||
ICM_GET_ROUTE = 0xa,
|
||||
ICM_APPROVE_XDOMAIN = 0x10,
|
||||
};
|
||||
|
||||
enum icm_event_code {
|
||||
ICM_EVENT_DEVICE_CONNECTED = 3,
|
||||
ICM_EVENT_DEVICE_DISCONNECTED = 4,
|
||||
ICM_EVENT_XDOMAIN_CONNECTED = 6,
|
||||
ICM_EVENT_XDOMAIN_DISCONNECTED = 7,
|
||||
};
|
||||
|
||||
struct icm_pkg_header {
|
||||
@ -130,7 +116,7 @@ struct icm_pkg_header {
|
||||
u8 flags;
|
||||
u8 packet_id;
|
||||
u8 total_packets;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
#define ICM_FLAGS_ERROR BIT(0)
|
||||
#define ICM_FLAGS_NO_KEY BIT(1)
|
||||
@ -139,20 +125,20 @@ struct icm_pkg_header {
|
||||
|
||||
struct icm_pkg_driver_ready {
|
||||
struct icm_pkg_header hdr;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_pkg_driver_ready_response {
|
||||
struct icm_pkg_header hdr;
|
||||
u8 romver;
|
||||
u8 ramver;
|
||||
u16 security_level;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
/* Falcon Ridge & Alpine Ridge common messages */
|
||||
|
||||
struct icm_fr_pkg_get_topology {
|
||||
struct icm_pkg_header hdr;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
#define ICM_GET_TOPOLOGY_PACKETS 14
|
||||
|
||||
@ -167,7 +153,7 @@ struct icm_fr_pkg_get_topology_response {
|
||||
u32 reserved[2];
|
||||
u32 ports[16];
|
||||
u32 port_hop_info[16];
|
||||
} __packed;
|
||||
};
|
||||
|
||||
#define ICM_SWITCH_USED BIT(0)
|
||||
#define ICM_SWITCH_UPSTREAM_PORT_MASK GENMASK(7, 1)
|
||||
@ -184,7 +170,7 @@ struct icm_fr_event_device_connected {
|
||||
u8 connection_id;
|
||||
u16 link_info;
|
||||
u32 ep_name[55];
|
||||
} __packed;
|
||||
};
|
||||
|
||||
#define ICM_LINK_INFO_LINK_MASK 0x7
|
||||
#define ICM_LINK_INFO_DEPTH_SHIFT 4
|
||||
@ -197,13 +183,32 @@ struct icm_fr_pkg_approve_device {
|
||||
u8 connection_key;
|
||||
u8 connection_id;
|
||||
u16 reserved;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_event_device_disconnected {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_event_xdomain_connected {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
uuid_t remote_uuid;
|
||||
uuid_t local_uuid;
|
||||
u32 local_route_hi;
|
||||
u32 local_route_lo;
|
||||
u32 remote_route_hi;
|
||||
u32 remote_route_lo;
|
||||
};
|
||||
|
||||
struct icm_fr_event_xdomain_disconnected {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
uuid_t remote_uuid;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_add_device_key {
|
||||
struct icm_pkg_header hdr;
|
||||
@ -212,7 +217,7 @@ struct icm_fr_pkg_add_device_key {
|
||||
u8 connection_id;
|
||||
u16 reserved;
|
||||
u32 key[8];
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_add_device_key_response {
|
||||
struct icm_pkg_header hdr;
|
||||
@ -220,7 +225,7 @@ struct icm_fr_pkg_add_device_key_response {
|
||||
u8 connection_key;
|
||||
u8 connection_id;
|
||||
u16 reserved;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_challenge_device {
|
||||
struct icm_pkg_header hdr;
|
||||
@ -229,7 +234,7 @@ struct icm_fr_pkg_challenge_device {
|
||||
u8 connection_id;
|
||||
u16 reserved;
|
||||
u32 challenge[8];
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_challenge_device_response {
|
||||
struct icm_pkg_header hdr;
|
||||
@ -239,7 +244,29 @@ struct icm_fr_pkg_challenge_device_response {
|
||||
u16 reserved;
|
||||
u32 challenge[8];
|
||||
u32 response[8];
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_approve_xdomain {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
uuid_t remote_uuid;
|
||||
u16 transmit_path;
|
||||
u16 transmit_ring;
|
||||
u16 receive_path;
|
||||
u16 receive_ring;
|
||||
};
|
||||
|
||||
struct icm_fr_pkg_approve_xdomain_response {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
uuid_t remote_uuid;
|
||||
u16 transmit_path;
|
||||
u16 transmit_ring;
|
||||
u16 receive_path;
|
||||
u16 receive_ring;
|
||||
};
|
||||
|
||||
/* Alpine Ridge only messages */
|
||||
|
||||
@ -247,7 +274,7 @@ struct icm_ar_pkg_get_route {
|
||||
struct icm_pkg_header hdr;
|
||||
u16 reserved;
|
||||
u16 link_info;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
struct icm_ar_pkg_get_route_response {
|
||||
struct icm_pkg_header hdr;
|
||||
@ -255,6 +282,85 @@ struct icm_ar_pkg_get_route_response {
|
||||
u16 link_info;
|
||||
u32 route_hi;
|
||||
u32 route_lo;
|
||||
} __packed;
|
||||
};
|
||||
|
||||
/* XDomain messages */
|
||||
|
||||
struct tb_xdomain_header {
|
||||
u32 route_hi;
|
||||
u32 route_lo;
|
||||
u32 length_sn;
|
||||
};
|
||||
|
||||
#define TB_XDOMAIN_LENGTH_MASK GENMASK(5, 0)
|
||||
#define TB_XDOMAIN_SN_MASK GENMASK(28, 27)
|
||||
#define TB_XDOMAIN_SN_SHIFT 27
|
||||
|
||||
enum tb_xdp_type {
|
||||
UUID_REQUEST_OLD = 1,
|
||||
UUID_RESPONSE = 2,
|
||||
PROPERTIES_REQUEST,
|
||||
PROPERTIES_RESPONSE,
|
||||
PROPERTIES_CHANGED_REQUEST,
|
||||
PROPERTIES_CHANGED_RESPONSE,
|
||||
ERROR_RESPONSE,
|
||||
UUID_REQUEST = 12,
|
||||
};
|
||||
|
||||
struct tb_xdp_header {
|
||||
struct tb_xdomain_header xd_hdr;
|
||||
uuid_t uuid;
|
||||
u32 type;
|
||||
};
|
||||
|
||||
struct tb_xdp_properties {
|
||||
struct tb_xdp_header hdr;
|
||||
uuid_t src_uuid;
|
||||
uuid_t dst_uuid;
|
||||
u16 offset;
|
||||
u16 reserved;
|
||||
};
|
||||
|
||||
struct tb_xdp_properties_response {
|
||||
struct tb_xdp_header hdr;
|
||||
uuid_t src_uuid;
|
||||
uuid_t dst_uuid;
|
||||
u16 offset;
|
||||
u16 data_length;
|
||||
u32 generation;
|
||||
u32 data[0];
|
||||
};
|
||||
|
||||
/*
|
||||
* Max length of data array single XDomain property response is allowed
|
||||
* to carry.
|
||||
*/
|
||||
#define TB_XDP_PROPERTIES_MAX_DATA_LENGTH \
|
||||
(((256 - 4 - sizeof(struct tb_xdp_properties_response))) / 4)
|
||||
|
||||
/* Maximum size of the total property block in dwords we allow */
|
||||
#define TB_XDP_PROPERTIES_MAX_LENGTH 500
|
||||
|
||||
struct tb_xdp_properties_changed {
|
||||
struct tb_xdp_header hdr;
|
||||
uuid_t src_uuid;
|
||||
};
|
||||
|
||||
struct tb_xdp_properties_changed_response {
|
||||
struct tb_xdp_header hdr;
|
||||
};
|
||||
|
||||
enum tb_xdp_error {
|
||||
ERROR_SUCCESS,
|
||||
ERROR_UNKNOWN_PACKET,
|
||||
ERROR_UNKNOWN_DOMAIN,
|
||||
ERROR_NOT_SUPPORTED,
|
||||
ERROR_NOT_READY,
|
||||
};
|
||||
|
||||
struct tb_xdp_error_response {
|
||||
struct tb_xdp_header hdr;
|
||||
u32 error;
|
||||
};
|
||||
|
||||
#endif
|
||||
|
drivers/thunderbolt/xdomain.c (new file, 1576 lines; diff omitted because of its size)

@@ -170,4 +170,20 @@ static inline void be64_add_cpu(__be64 *var, u64 val)
|
||||
*var = cpu_to_be64(be64_to_cpu(*var) + val);
|
||||
}
|
||||
|
||||
static inline void cpu_to_be32_array(__be32 *dst, const u32 *src, size_t len)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < len; i++)
|
||||
dst[i] = cpu_to_be32(src[i]);
|
||||
}
|
||||
|
||||
static inline void be32_to_cpu_array(u32 *dst, const __be32 *src, size_t len)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < len; i++)
|
||||
dst[i] = be32_to_cpu(src[i]);
|
||||
}
|
||||
|
||||
#endif /* _LINUX_BYTEORDER_GENERIC_H */
|
||||
|
@ -683,5 +683,31 @@ struct fsl_mc_device_id {
|
||||
const char obj_type[16];
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tb_service_id - Thunderbolt service identifiers
|
||||
* @match_flags: Flags used to match the structure
|
||||
* @protocol_key: Protocol key the service supports
|
||||
* @protocol_id: Protocol id the service supports
|
||||
* @protocol_version: Version of the protocol
|
||||
* @protocol_revision: Revision of the protocol software
|
||||
* @driver_data: Driver specific data
|
||||
*
|
||||
* Thunderbolt XDomain services are exposed as devices where each device
|
||||
* carries the protocol information the service supports. Thunderbolt
|
||||
* XDomain service drivers match against that information.
|
||||
*/
|
||||
struct tb_service_id {
|
||||
__u32 match_flags;
|
||||
char protocol_key[8 + 1];
|
||||
__u32 protocol_id;
|
||||
__u32 protocol_version;
|
||||
__u32 protocol_revision;
|
||||
kernel_ulong_t driver_data;
|
||||
};
|
||||
|
||||
#define TBSVC_MATCH_PROTOCOL_KEY 0x0001
|
||||
#define TBSVC_MATCH_PROTOCOL_ID 0x0002
|
||||
#define TBSVC_MATCH_PROTOCOL_VERSION 0x0004
|
||||
#define TBSVC_MATCH_PROTOCOL_REVISION 0x0008
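
For instance, a service driver that only cares about the protocol key and ID
could use an ID table along these lines (the key and ID values are
illustrative placeholders):

static const struct tb_service_id example_ids[] = {
	{
		.match_flags = TBSVC_MATCH_PROTOCOL_KEY |
			       TBSVC_MATCH_PROTOCOL_ID,
		.protocol_key = "network",
		.protocol_id = 1,
	},
	{ },
};
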
|
||||
|
||||
#endif /* LINUX_MOD_DEVICETABLE_H */
|
||||
|
include/linux/thunderbolt.h (new file, 598 lines)
@@ -0,0 +1,598 @@
|
||||
/*
|
||||
* Thunderbolt service API
|
||||
*
|
||||
* Copyright (C) 2014 Andreas Noever <andreas.noever@gmail.com>
|
||||
* Copyright (C) 2017, Intel Corporation
|
||||
* Authors: Michael Jamet <michael.jamet@intel.com>
|
||||
* Mika Westerberg <mika.westerberg@linux.intel.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*/
|
||||
|
||||
#ifndef THUNDERBOLT_H_
|
||||
#define THUNDERBOLT_H_
|
||||
|
||||
#include <linux/device.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/mod_devicetable.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/uuid.h>
|
||||
#include <linux/workqueue.h>
|
||||
|
||||
enum tb_cfg_pkg_type {
|
||||
TB_CFG_PKG_READ = 1,
|
||||
TB_CFG_PKG_WRITE = 2,
|
||||
TB_CFG_PKG_ERROR = 3,
|
||||
TB_CFG_PKG_NOTIFY_ACK = 4,
|
||||
TB_CFG_PKG_EVENT = 5,
|
||||
TB_CFG_PKG_XDOMAIN_REQ = 6,
|
||||
TB_CFG_PKG_XDOMAIN_RESP = 7,
|
||||
TB_CFG_PKG_OVERRIDE = 8,
|
||||
TB_CFG_PKG_RESET = 9,
|
||||
TB_CFG_PKG_ICM_EVENT = 10,
|
||||
TB_CFG_PKG_ICM_CMD = 11,
|
||||
TB_CFG_PKG_ICM_RESP = 12,
|
||||
TB_CFG_PKG_PREPARE_TO_SLEEP = 13,
|
||||
};
|
||||
|
||||
/**
|
||||
* enum tb_security_level - Thunderbolt security level
|
||||
* @TB_SECURITY_NONE: No security, legacy mode
|
||||
* @TB_SECURITY_USER: User approval required at minimum
|
||||
* @TB_SECURITY_SECURE: One time saved key required at minimum
|
||||
* @TB_SECURITY_DPONLY: Only tunnel Display port (and USB)
|
||||
*/
|
||||
enum tb_security_level {
|
||||
TB_SECURITY_NONE,
|
||||
TB_SECURITY_USER,
|
||||
TB_SECURITY_SECURE,
|
||||
TB_SECURITY_DPONLY,
|
||||
};
|
||||
|
||||
/**
|
||||
* struct tb - main thunderbolt bus structure
|
||||
* @dev: Domain device
|
||||
* @lock: Big lock. Must be held when accessing any struct
|
||||
* tb_switch / struct tb_port.
|
||||
* @nhi: Pointer to the NHI structure
|
||||
* @ctl: Control channel for this domain
|
||||
* @wq: Ordered workqueue for all domain specific work
|
||||
* @root_switch: Root switch of this domain
|
||||
* @cm_ops: Connection manager specific operations vector
|
||||
* @index: Linux assigned domain number
|
||||
* @security_level: Current security level
|
||||
* @privdata: Private connection manager specific data
|
||||
*/
|
||||
struct tb {
|
||||
struct device dev;
|
||||
struct mutex lock;
|
||||
struct tb_nhi *nhi;
|
||||
struct tb_ctl *ctl;
|
||||
struct workqueue_struct *wq;
|
||||
struct tb_switch *root_switch;
|
||||
const struct tb_cm_ops *cm_ops;
|
||||
int index;
|
||||
enum tb_security_level security_level;
|
||||
unsigned long privdata[0];
|
||||
};
|
||||
|
||||
extern struct bus_type tb_bus_type;
|
||||
extern struct device_type tb_service_type;
|
||||
extern struct device_type tb_xdomain_type;
|
||||
|
||||
#define TB_LINKS_PER_PHY_PORT 2
|
||||
|
||||
static inline unsigned int tb_phy_port_from_link(unsigned int link)
|
||||
{
|
||||
return (link - 1) / TB_LINKS_PER_PHY_PORT;
|
||||
}
|
||||
|
||||
/**
|
||||
* struct tb_property_dir - XDomain property directory
|
||||
* @uuid: Directory UUID or %NULL if root directory
|
||||
* @properties: List of properties in this directory
|
||||
*
|
||||
* User needs to provide serialization if needed.
|
||||
*/
|
||||
struct tb_property_dir {
|
||||
const uuid_t *uuid;
|
||||
struct list_head properties;
|
||||
};
|
||||
|
||||
enum tb_property_type {
|
||||
TB_PROPERTY_TYPE_UNKNOWN = 0x00,
|
||||
TB_PROPERTY_TYPE_DIRECTORY = 0x44,
|
||||
TB_PROPERTY_TYPE_DATA = 0x64,
|
||||
TB_PROPERTY_TYPE_TEXT = 0x74,
|
||||
TB_PROPERTY_TYPE_VALUE = 0x76,
|
||||
};
|
||||
|
||||
#define TB_PROPERTY_KEY_SIZE 8
|
||||
|
||||
/**
|
||||
* struct tb_property - XDomain property
|
||||
* @list: Used to link properties together in a directory
|
||||
* @key: Key for the property (always terminated).
|
||||
* @type: Type of the property
|
||||
* @length: Length of the property data in dwords
|
||||
* @value: Property value
|
||||
*
|
||||
* Users use @type to determine which field in @value is filled.
|
||||
*/
|
||||
struct tb_property {
|
||||
struct list_head list;
|
||||
char key[TB_PROPERTY_KEY_SIZE + 1];
|
||||
enum tb_property_type type;
|
||||
size_t length;
|
||||
union {
|
||||
struct tb_property_dir *dir;
|
||||
u8 *data;
|
||||
char *text;
|
||||
u32 immediate;
|
||||
} value;
|
||||
};
|
||||
|
||||
struct tb_property_dir *tb_property_parse_dir(const u32 *block,
|
||||
size_t block_len);
|
||||
ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
|
||||
size_t block_len);
|
||||
struct tb_property_dir *tb_property_create_dir(const uuid_t *uuid);
|
||||
void tb_property_free_dir(struct tb_property_dir *dir);
|
||||
int tb_property_add_immediate(struct tb_property_dir *parent, const char *key,
|
||||
u32 value);
|
||||
int tb_property_add_data(struct tb_property_dir *parent, const char *key,
|
||||
const void *buf, size_t buflen);
|
||||
int tb_property_add_text(struct tb_property_dir *parent, const char *key,
|
||||
const char *text);
|
||||
int tb_property_add_dir(struct tb_property_dir *parent, const char *key,
|
||||
struct tb_property_dir *dir);
|
||||
void tb_property_remove(struct tb_property *tb_property);
|
||||
struct tb_property *tb_property_find(struct tb_property_dir *dir,
|
||||
const char *key, enum tb_property_type type);
|
||||
struct tb_property *tb_property_get_next(struct tb_property_dir *dir,
|
||||
struct tb_property *prev);
|
||||
|
||||
#define tb_property_for_each(dir, property) \
|
||||
for (property = tb_property_get_next(dir, NULL); \
|
||||
property; \
|
||||
property = tb_property_get_next(dir, property))
|
||||
|
||||
int tb_register_property_dir(const char *key, struct tb_property_dir *dir);
|
||||
void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
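
A sketch of how a service could publish its own properties to remote hosts
with this pair of calls; the key "example" and example_dir_uuid are
placeholders, not values defined by this series:

	struct tb_property_dir *dir;
	int ret;

	dir = tb_property_create_dir(&example_dir_uuid);
	if (!dir)
		return -ENOMEM;

	tb_property_add_immediate(dir, "prtcid", 1);

	ret = tb_register_property_dir("example", dir);
	if (ret)
		tb_property_free_dir(dir);
	/* on driver unload: tb_unregister_property_dir("example", dir); */
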
|
||||
|
||||
/**
|
||||
* struct tb_xdomain - Cross-domain (XDomain) connection
|
||||
* @dev: XDomain device
|
||||
* @tb: Pointer to the domain
|
||||
* @remote_uuid: UUID of the remote domain (host)
|
||||
* @local_uuid: Cached local UUID
|
||||
* @route: Route string the other domain can be reached
|
||||
* @vendor: Vendor ID of the remote domain
|
||||
 * @device: Device ID of the remote domain
|
||||
* @lock: Lock to serialize access to the following fields of this structure
|
||||
* @vendor_name: Name of the vendor (or %NULL if not known)
|
||||
* @device_name: Name of the device (or %NULL if not known)
|
||||
* @is_unplugged: The XDomain is unplugged
|
||||
* @resume: The XDomain is being resumed
|
||||
* @transmit_path: HopID which the remote end expects us to transmit
|
||||
* @transmit_ring: Local ring (hop) where outgoing packets are pushed
|
||||
* @receive_path: HopID which we expect the remote end to transmit
|
||||
* @receive_ring: Local ring (hop) where incoming packets arrive
|
||||
* @service_ids: Used to generate IDs for the services
|
||||
* @properties: Properties exported by the remote domain
|
||||
* @property_block_gen: Generation of @properties
|
||||
* @properties_lock: Lock protecting @properties.
|
||||
* @get_properties_work: Work used to get remote domain properties
|
||||
* @properties_retries: Number of times left to read properties
|
||||
* @properties_changed_work: Work used to notify the remote domain that
|
||||
* our properties have changed
|
||||
* @properties_changed_retries: Number of times left to send properties
|
||||
* changed notification
|
||||
* @link: Root switch link the remote domain is connected (ICM only)
|
||||
* @depth: Depth in the chain the remote domain is connected (ICM only)
|
||||
*
|
||||
* This structure represents connection across two domains (hosts).
|
||||
* Each XDomain contains zero or more services which are exposed as
|
||||
* &struct tb_service objects.
|
||||
*
|
||||
* Service drivers may access this structure if they need to enumerate
|
||||
* non-standard properties but they need hold @lock when doing so
|
||||
* because properties can be changed asynchronously in response to
|
||||
* changes in the remote domain.
|
||||
*/
|
||||
struct tb_xdomain {
|
||||
struct device dev;
|
||||
struct tb *tb;
|
||||
uuid_t *remote_uuid;
|
||||
const uuid_t *local_uuid;
|
||||
u64 route;
|
||||
u16 vendor;
|
||||
u16 device;
|
||||
struct mutex lock;
|
||||
const char *vendor_name;
|
||||
const char *device_name;
|
||||
bool is_unplugged;
|
||||
bool resume;
|
||||
u16 transmit_path;
|
||||
u16 transmit_ring;
|
||||
u16 receive_path;
|
||||
u16 receive_ring;
|
||||
struct ida service_ids;
|
||||
struct tb_property_dir *properties;
|
||||
u32 property_block_gen;
|
||||
struct delayed_work get_properties_work;
|
||||
int properties_retries;
|
||||
struct delayed_work properties_changed_work;
|
||||
int properties_changed_retries;
|
||||
u8 link;
|
||||
u8 depth;
|
||||
};
|
||||
|
||||
int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path,
|
||||
u16 transmit_ring, u16 receive_path,
|
||||
u16 receive_ring);
|
||||
int tb_xdomain_disable_paths(struct tb_xdomain *xd);
|
||||
struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid);
|
||||
|
||||
static inline struct tb_xdomain *
|
||||
tb_xdomain_find_by_uuid_locked(struct tb *tb, const uuid_t *uuid)
|
||||
{
|
||||
struct tb_xdomain *xd;
|
||||
|
||||
mutex_lock(&tb->lock);
|
||||
xd = tb_xdomain_find_by_uuid(tb, uuid);
|
||||
mutex_unlock(&tb->lock);
|
||||
|
||||
return xd;
|
||||
}
|
||||
|
||||
static inline struct tb_xdomain *tb_xdomain_get(struct tb_xdomain *xd)
|
||||
{
|
||||
if (xd)
|
||||
get_device(&xd->dev);
|
||||
return xd;
|
||||
}
|
||||
|
||||
static inline void tb_xdomain_put(struct tb_xdomain *xd)
|
||||
{
|
||||
if (xd)
|
||||
put_device(&xd->dev);
|
||||
}
|
||||
|
||||
static inline bool tb_is_xdomain(const struct device *dev)
|
||||
{
|
||||
return dev->type == &tb_xdomain_type;
|
||||
}
|
||||
|
||||
static inline struct tb_xdomain *tb_to_xdomain(struct device *dev)
|
||||
{
|
||||
if (tb_is_xdomain(dev))
|
||||
return container_of(dev, struct tb_xdomain, dev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int tb_xdomain_response(struct tb_xdomain *xd, const void *response,
|
||||
size_t size, enum tb_cfg_pkg_type type);
|
||||
int tb_xdomain_request(struct tb_xdomain *xd, const void *request,
|
||||
size_t request_size, enum tb_cfg_pkg_type request_type,
|
||||
void *response, size_t response_size,
|
||||
enum tb_cfg_pkg_type response_type,
|
||||
unsigned int timeout_msec);
|
||||
|
||||
/**
|
||||
* tb_protocol_handler - Protocol specific handler
|
||||
* @uuid: XDomain messages with this UUID are dispatched to this handler
|
||||
* @callback: Callback called with the XDomain message. Returning %1
|
||||
* here tells the XDomain core that the message was handled
|
||||
 *	      by this handler and should not be forwarded to other
|
||||
* handlers.
|
||||
* @data: Data passed with the callback
|
||||
* @list: Handlers are linked using this
|
||||
*
|
||||
* Thunderbolt services can hook into incoming XDomain requests by
|
||||
* registering protocol handler. Only limitation is that the XDomain
|
||||
* discovery protocol UUID cannot be registered since it is handled by
|
||||
* the core XDomain code.
|
||||
*
|
||||
* The @callback must check that the message is really directed to the
|
||||
* service the driver implements.
|
||||
*/
|
||||
struct tb_protocol_handler {
|
||||
const uuid_t *uuid;
|
||||
int (*callback)(const void *buf, size_t size, void *data);
|
||||
void *data;
|
||||
struct list_head list;
|
||||
};
|
||||
|
||||
int tb_register_protocol_handler(struct tb_protocol_handler *handler);
|
||||
void tb_unregister_protocol_handler(struct tb_protocol_handler *handler);
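
A minimal, illustrative sketch of registering such a handler; the UUID
variable and the callback body are placeholders:

static int example_xdp_cb(const void *buf, size_t size, void *data)
{
	/* Verify the message is really meant for this service first */
	return 0;	/* 0: not handled, let other handlers see it */
}

static struct tb_protocol_handler example_handler = {
	.uuid = &example_proto_uuid,	/* hypothetical service UUID */
	.callback = example_xdp_cb,
	.data = NULL,
};

/* typically called from the service driver's init path: */
/*	tb_register_protocol_handler(&example_handler); */
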
|
||||
|
||||
/**
|
||||
* struct tb_service - Thunderbolt service
|
||||
* @dev: XDomain device
|
||||
* @id: ID of the service (shown in sysfs)
|
||||
* @key: Protocol key from the properties directory
|
||||
* @prtcid: Protocol ID from the properties directory
|
||||
* @prtcvers: Protocol version from the properties directory
|
||||
* @prtcrevs: Protocol software revision from the properties directory
|
||||
* @prtcstns: Protocol settings mask from the properties directory
|
||||
*
|
||||
* Each domain exposes set of services it supports as collection of
|
||||
* properties. For each service there will be one corresponding
|
||||
* &struct tb_service. Service drivers are bound to these.
|
||||
*/
|
||||
struct tb_service {
|
||||
struct device dev;
|
||||
int id;
|
||||
const char *key;
|
||||
u32 prtcid;
|
||||
u32 prtcvers;
|
||||
u32 prtcrevs;
|
||||
u32 prtcstns;
|
||||
};
|
||||
|
||||
static inline struct tb_service *tb_service_get(struct tb_service *svc)
|
||||
{
|
||||
if (svc)
|
||||
get_device(&svc->dev);
|
||||
return svc;
|
||||
}
|
||||
|
||||
static inline void tb_service_put(struct tb_service *svc)
|
||||
{
|
||||
if (svc)
|
||||
put_device(&svc->dev);
|
||||
}
|
||||
|
||||
static inline bool tb_is_service(const struct device *dev)
|
||||
{
|
||||
return dev->type == &tb_service_type;
|
||||
}
|
||||
|
||||
static inline struct tb_service *tb_to_service(struct device *dev)
|
||||
{
|
||||
if (tb_is_service(dev))
|
||||
return container_of(dev, struct tb_service, dev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
 * tb_service_driver - Thunderbolt service driver
 * @driver: Driver structure
 * @probe: Called when the driver is probed
 * @remove: Called when the driver is removed (optional)
 * @shutdown: Called at shutdown time to stop the service (optional)
 * @id_table: Table of service identifiers the driver supports
 */
struct tb_service_driver {
	struct device_driver driver;
	int (*probe)(struct tb_service *svc, const struct tb_service_id *id);
	void (*remove)(struct tb_service *svc);
	void (*shutdown)(struct tb_service *svc);
	const struct tb_service_id *id_table;
};

#define TB_SERVICE(key, id)				\
	.match_flags = TBSVC_MATCH_PROTOCOL_KEY |	\
		       TBSVC_MATCH_PROTOCOL_ID,		\
	.protocol_key = (key),				\
	.protocol_id = (id)

int tb_register_service_driver(struct tb_service_driver *drv);
void tb_unregister_service_driver(struct tb_service_driver *drv);
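
Pulling these pieces together, a hedged skeleton of a service driver
registration (all names are illustrative and error handling is omitted):

static int example_probe(struct tb_service *svc,
			 const struct tb_service_id *id)
{
	struct tb_xdomain *xd = tb_service_parent(svc);

	/* negotiate paths/rings using xd, then stash driver state */
	tb_service_set_drvdata(svc, xd);
	return 0;
}

static void example_remove(struct tb_service *svc)
{
	tb_service_set_drvdata(svc, NULL);
}

static const struct tb_service_id example_ids[] = {
	{ TB_SERVICE("example", 1) },	/* hypothetical key/ID */
	{ },
};

static struct tb_service_driver example_driver = {
	.driver = {
		.name = "example-service",
	},
	.probe = example_probe,
	.remove = example_remove,
	.id_table = example_ids,
};

/* module init/exit would then call: */
/*	tb_register_service_driver(&example_driver); */
/*	tb_unregister_service_driver(&example_driver); */
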
|
||||
|
||||
static inline void *tb_service_get_drvdata(const struct tb_service *svc)
|
||||
{
|
||||
return dev_get_drvdata(&svc->dev);
|
||||
}
|
||||
|
||||
static inline void tb_service_set_drvdata(struct tb_service *svc, void *data)
|
||||
{
|
||||
dev_set_drvdata(&svc->dev, data);
|
||||
}
|
||||
|
||||
static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
|
||||
{
|
||||
return tb_to_xdomain(svc->dev.parent);
|
||||
}
|
||||
|
||||
/**
|
||||
* struct tb_nhi - thunderbolt native host interface
|
||||
* @lock: Must be held during ring creation/destruction. Is acquired by
|
||||
* interrupt_work when dispatching interrupts to individual rings.
|
||||
* @pdev: Pointer to the PCI device
|
||||
* @iobase: MMIO space of the NHI
|
||||
* @tx_rings: All Tx rings available on this host controller
|
||||
* @rx_rings: All Rx rings available on this host controller
|
||||
* @msix_ida: Used to allocate MSI-X vectors for rings
|
||||
* @going_away: The host controller device is about to disappear so when
|
||||
* this flag is set, avoid touching the hardware anymore.
|
||||
* @interrupt_work: Work scheduled to handle ring interrupt when no
|
||||
* MSI-X is used.
|
||||
* @hop_count: Number of rings (end point hops) supported by NHI.
|
||||
*/
|
||||
struct tb_nhi {
|
||||
spinlock_t lock;
|
||||
struct pci_dev *pdev;
|
||||
void __iomem *iobase;
|
||||
struct tb_ring **tx_rings;
|
||||
struct tb_ring **rx_rings;
|
||||
struct ida msix_ida;
|
||||
bool going_away;
|
||||
struct work_struct interrupt_work;
|
||||
u32 hop_count;
|
||||
};
|
||||
|
/**
 * struct tb_ring - thunderbolt TX or RX ring associated with a NHI
 * @lock: Lock serializing actions to this ring. Must be acquired after
 *	  nhi->lock.
 * @nhi: Pointer to the native host controller interface
 * @size: Size of the ring
 * @hop: Hop (DMA channel) associated with this ring
 * @head: Head of the ring (write next descriptor here)
 * @tail: Tail of the ring (complete next descriptor here)
 * @descriptors: Allocated descriptors for this ring
 * @descriptors_dma: DMA address of the descriptors
 * @queue: Queue holding frames to be transferred over this ring
 * @in_flight: Queue holding frames that are currently in flight
 * @work: Interrupt work structure
 * @is_tx: Is the ring Tx or Rx
 * @running: Is the ring running
 * @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise.
 * @vector: MSI-X vector number the ring uses (only set if @irq is > 0)
 * @flags: Ring specific flags
 * @sof_mask: Bit mask used to detect start of frame PDF
 * @eof_mask: Bit mask used to detect end of frame PDF
 * @start_poll: Called when ring interrupt is triggered to start
 *		polling. Passing %NULL keeps the ring in interrupt mode.
 * @poll_data: Data passed to @start_poll
 */
struct tb_ring {
	spinlock_t lock;
	struct tb_nhi *nhi;
	int size;
	int hop;
	int head;
	int tail;
	struct ring_desc *descriptors;
	dma_addr_t descriptors_dma;
	struct list_head queue;
	struct list_head in_flight;
	struct work_struct work;
	bool is_tx:1;
	bool running:1;
	int irq;
	u8 vector;
	unsigned int flags;
	u16 sof_mask;
	u16 eof_mask;
	void (*start_poll)(void *data);
	void *poll_data;
};

/* Leave ring interrupt enabled on suspend */
#define RING_FLAG_NO_SUSPEND	BIT(0)
/* Configure the ring to be in frame mode */
#define RING_FLAG_FRAME		BIT(1)
/* Enable end-to-end flow control */
#define RING_FLAG_E2E		BIT(2)

struct ring_frame;
typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled);

/**
 * enum ring_desc_flags - Flags for DMA ring descriptor
 * %RING_DESC_ISOCH: Enable isochronous DMA (Tx only)
 * %RING_DESC_CRC_ERROR: In frame mode CRC check failed for the frame (Rx only)
 * %RING_DESC_COMPLETED: Descriptor completed (set by NHI)
 * %RING_DESC_POSTED: Always set this
 * %RING_DESC_BUFFER_OVERRUN: RX buffer overrun
 * %RING_DESC_INTERRUPT: Request an interrupt on completion
 */
enum ring_desc_flags {
	RING_DESC_ISOCH = 0x1,
	RING_DESC_CRC_ERROR = 0x1,
	RING_DESC_COMPLETED = 0x2,
	RING_DESC_POSTED = 0x4,
	RING_DESC_BUFFER_OVERRUN = 0x04,
	RING_DESC_INTERRUPT = 0x8,
};

/**
 * struct ring_frame - For use with tb_ring_rx()/tb_ring_tx()
 * @buffer_phy: DMA mapped address of the frame
 * @callback: Callback called when the frame is finished (optional)
 * @list: Frame is linked to a queue using this
 * @size: Size of the frame in bytes (%0 means %4096)
 * @flags: Flags for the frame (see &enum ring_desc_flags)
 * @eof: End of frame protocol defined field
 * @sof: Start of frame protocol defined field
 */
struct ring_frame {
	dma_addr_t buffer_phy;
	ring_cb callback;
	struct list_head list;
	u32 size:12;
	u32 flags:12;
	u32 eof:4;
	u32 sof:4;
};

/* Minimum size for tb_ring_rx() */
#define TB_FRAME_SIZE		0x100

struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
				 unsigned int flags);
struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
				 unsigned int flags, u16 sof_mask, u16 eof_mask,
				 void (*start_poll)(void *), void *poll_data);
void tb_ring_start(struct tb_ring *ring);
void tb_ring_stop(struct tb_ring *ring);
void tb_ring_free(struct tb_ring *ring);

int __tb_ring_enqueue(struct tb_ring *ring, struct ring_frame *frame);

/**
 * tb_ring_rx() - enqueue a frame on an RX ring
 * @ring: Ring to enqueue the frame
 * @frame: Frame to enqueue
 *
 * @frame->buffer, @frame->buffer_phy have to be set. The buffer must
 * contain at least %TB_FRAME_SIZE bytes.
 *
 * @frame->callback will be invoked with @frame->size, @frame->flags,
 * @frame->eof, @frame->sof set once the frame has been received.
 *
 * If tb_ring_stop() is called after the packet has been enqueued
 * @frame->callback will be called with canceled set to true.
 *
 * Return: Returns %-ESHUTDOWN if tb_ring_stop() has been called. Zero otherwise.
 */
static inline int tb_ring_rx(struct tb_ring *ring, struct ring_frame *frame)
{
	WARN_ON(ring->is_tx);
	return __tb_ring_enqueue(ring, frame);
}

/**
 * tb_ring_tx() - enqueue a frame on a TX ring
 * @ring: Ring to enqueue the frame
 * @frame: Frame to enqueue
 *
 * @frame->buffer, @frame->buffer_phy, @frame->size, @frame->eof and
 * @frame->sof have to be set.
 *
 * @frame->callback will be invoked once the frame has been transmitted.
 *
 * If tb_ring_stop() is called after the packet has been enqueued
 * @frame->callback will be called with canceled set to true.
 *
 * Return: Returns %-ESHUTDOWN if tb_ring_stop() has been called. Zero otherwise.
 */
static inline int tb_ring_tx(struct tb_ring *ring, struct ring_frame *frame)
{
	WARN_ON(!ring->is_tx);
	return __tb_ring_enqueue(ring, frame);
}

/* Used only when the ring is in polling mode */
struct ring_frame *tb_ring_poll(struct tb_ring *ring);
void tb_ring_poll_complete(struct tb_ring *ring);

/**
 * tb_ring_dma_device() - Return device used for DMA mapping
 * @ring: Ring whose DMA device is retrieved
 *
 * Use this function when you are mapping DMA for buffers that are
 * passed to the ring for sending/receiving.
 */
static inline struct device *tb_ring_dma_device(struct tb_ring *ring)
{
	return &ring->nhi->pdev->dev;
}

#endif /* THUNDERBOLT_H_ */
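
For context, here is a minimal sketch of how a Thunderbolt service driver could plug into the API declared above. It is illustrative only and not part of this series: the "example" protocol key, the protocol id value and every function name below are invented; the in-tree user of this API is the networking driver added later in the series (drivers/net/thunderbolt.c).

/*
 * Hypothetical skeleton of a Thunderbolt service driver. All names and the
 * "example"/1 protocol key/id are made up for illustration.
 */
#include <linux/module.h>
#include <linux/thunderbolt.h>

static int example_probe(struct tb_service *svc, const struct tb_service_id *id)
{
	struct tb_xdomain *xd = tb_service_parent(svc);

	/*
	 * A real driver would typically allocate its DMA rings here with
	 * tb_ring_alloc_tx()/tb_ring_alloc_rx(), map buffers against
	 * tb_ring_dma_device() and kick things off with tb_ring_start().
	 */
	tb_service_set_drvdata(svc, xd);
	return 0;
}

static void example_remove(struct tb_service *svc)
{
	/* Stop and free any rings allocated in probe, then drop the state */
	tb_service_set_drvdata(svc, NULL);
}

/* Match a service advertising the (hypothetical) protocol key "example", id 1 */
static const struct tb_service_id example_ids[] = {
	{ TB_SERVICE("example", 1) },
	{ },
};
MODULE_DEVICE_TABLE(tbsvc, example_ids);

static struct tb_service_driver example_driver = {
	.driver.name = "tb-example",
	.probe = example_probe,
	.remove = example_remove,
	.id_table = example_ids,
};

static int __init example_init(void)
{
	return tb_register_service_driver(&example_driver);
}
module_init(example_init);

static void __exit example_exit(void)
{
	tb_unregister_service_driver(&example_driver);
}
module_exit(example_exit);

MODULE_LICENSE("GPL");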
@@ -206,5 +206,12 @@ int main(void)
	DEVID_FIELD(fsl_mc_device_id, vendor);
	DEVID_FIELD(fsl_mc_device_id, obj_type);

	DEVID(tb_service_id);
	DEVID_FIELD(tb_service_id, match_flags);
	DEVID_FIELD(tb_service_id, protocol_key);
	DEVID_FIELD(tb_service_id, protocol_id);
	DEVID_FIELD(tb_service_id, protocol_version);
	DEVID_FIELD(tb_service_id, protocol_revision);

	return 0;
}

@@ -1301,6 +1301,31 @@ static int do_fsl_mc_entry(const char *filename, void *symval,
}
ADD_TO_DEVTABLE("fslmc", fsl_mc_device_id, do_fsl_mc_entry);

/* Looks like: tbsvc:kSpNvNrN */
static int do_tbsvc_entry(const char *filename, void *symval, char *alias)
{
	DEF_FIELD(symval, tb_service_id, match_flags);
	DEF_FIELD_ADDR(symval, tb_service_id, protocol_key);
	DEF_FIELD(symval, tb_service_id, protocol_id);
	DEF_FIELD(symval, tb_service_id, protocol_version);
	DEF_FIELD(symval, tb_service_id, protocol_revision);

	strcpy(alias, "tbsvc:");
	if (match_flags & TBSVC_MATCH_PROTOCOL_KEY)
		sprintf(alias + strlen(alias), "k%s", *protocol_key);
	else
		strcat(alias + strlen(alias), "k*");
	ADD(alias, "p", match_flags & TBSVC_MATCH_PROTOCOL_ID, protocol_id);
	ADD(alias, "v", match_flags & TBSVC_MATCH_PROTOCOL_VERSION,
	    protocol_version);
	ADD(alias, "r", match_flags & TBSVC_MATCH_PROTOCOL_REVISION,
	    protocol_revision);

	add_wildcard(alias);
	return 1;
}
ADD_TO_DEVTABLE("tbsvc", tb_service_id, do_tbsvc_entry);

/* Does namelen bytes of name exactly match the symbol? */
static bool sym_is(const char *name, unsigned namelen, const char *symbol)
{