License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
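For illustration, the two comment styles in question follow the kernel's
SPDX convention: C source files take a C99-style comment while headers
keep a block comment, e.g.:
  // SPDX-License-Identifier: GPL-2.0      (in .c files)
  /* SPDX-License-Identifier: GPL-2.0 */   (in headers)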
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Thunderbolt driver - bus logic (NHI independent)
 *
 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
 * Copyright (C) 2018, Intel Corporation
 */

#ifndef TB_H_
#define TB_H_

#include <linux/nvmem-provider.h>
#include <linux/pci.h>
#include <linux/thunderbolt.h>
#include <linux/uuid.h>
#include <linux/bitfield.h>

#include "tb_regs.h"
#include "ctl.h"
#include "dma_port.h"

#define NVM_MIN_SIZE		SZ_32K
#define NVM_MAX_SIZE		SZ_512K
#define NVM_DATA_DWORDS		16

/* Intel specific NVM offsets */
#define NVM_DEVID		0x05
#define NVM_VERSION		0x08
#define NVM_FLASH_SIZE		0x45

/**
 * struct tb_nvm - Structure holding NVM information
 * @dev: Owner of the NVM
 * @major: Major version number of the active NVM portion
 * @minor: Minor version number of the active NVM portion
 * @id: Identifier used with both NVM portions
 * @active: Active portion NVMem device
 * @non_active: Non-active portion NVMem device
 * @buf: Buffer where the NVM image is stored before it is written to
 *       the actual NVM flash device
 * @buf_data_size: Number of bytes actually consumed by the new NVM
 *                 image
 * @authenticating: The device is authenticating the new NVM
 * @flushed: The image has been flushed to the storage area
 *
 * The user of this structure needs to handle serialization of possible
 * concurrent access.
 */
struct tb_nvm {
        struct device *dev;
        u8 major;
        u8 minor;
        int id;
        struct nvmem_device *active;
        struct nvmem_device *non_active;
        void *buf;
        size_t buf_data_size;
        bool authenticating;
        bool flushed;
};

enum tb_nvm_write_ops {
        WRITE_AND_AUTHENTICATE = 1,
        WRITE_ONLY = 2,
        AUTHENTICATE_ONLY = 3,
};

#define TB_SWITCH_KEY_SIZE		32
#define TB_SWITCH_MAX_DEPTH		6
#define USB4_SWITCH_MAX_DEPTH		5

/**
 * enum tb_switch_tmu_rate - TMU refresh rate
 * @TB_SWITCH_TMU_RATE_OFF: %0 (Disable Time Sync handshake)
 * @TB_SWITCH_TMU_RATE_HIFI: %16 us time interval between successive
 *                           transmission of the Delay Request TSNOS
 *                           (Time Sync Notification Ordered Set) on a Link
 * @TB_SWITCH_TMU_RATE_NORMAL: %1 ms time interval between successive
 *                             transmission of the Delay Request TSNOS on
 *                             a Link
 */
enum tb_switch_tmu_rate {
        TB_SWITCH_TMU_RATE_OFF = 0,
        TB_SWITCH_TMU_RATE_HIFI = 16,
        TB_SWITCH_TMU_RATE_NORMAL = 1000,
};

/**
 * struct tb_switch_tmu - Structure holding switch TMU configuration
 * @cap: Offset to the TMU capability (%0 if not found)
 * @has_ucap: Does the switch support uni-directional mode
 * @rate: TMU refresh rate related to upstream switch. In case of root
 *        switch this holds the domain rate. Reflects the HW setting.
 * @unidirectional: Is the TMU in uni-directional or bi-directional mode
 *                  related to upstream switch. Don't care for root switch.
 *                  Reflects the HW setting.
 * @unidirectional_request: Is the new TMU mode: uni-directional or
 *                          bi-directional that is requested to be set.
 *                          Related to upstream switch. Don't care for root
 *                          switch.
 * @rate_request: TMU new refresh rate related to upstream switch that is
 *                requested to be set. In case of root switch, this holds
 *                the new domain rate that is requested to be set.
 */
struct tb_switch_tmu {
        int cap;
        bool has_ucap;
        enum tb_switch_tmu_rate rate;
        bool unidirectional;
        bool unidirectional_request;
        enum tb_switch_tmu_rate rate_request;
};

enum tb_clx {
        TB_CLX_DISABLE,
        /* CL0s and CL1 are enabled and supported together */
        TB_CL1,
        TB_CL2,
};

/**
 * struct tb_switch - a thunderbolt switch
 * @dev: Device for the switch
 * @config: Switch configuration
 * @ports: Ports in this switch
 * @dma_port: If the switch has port supporting DMA configuration based
 *            mailbox this will hold the pointer to that (%NULL
 *            otherwise). If set it also means the switch has
 *            upgradeable NVM.
 * @tmu: The switch TMU configuration
 * @tb: Pointer to the domain the switch belongs to
 * @uid: Unique ID of the switch
 * @uuid: UUID of the switch (or %NULL if not supported)
 * @vendor: Vendor ID of the switch
 * @device: Device ID of the switch
 * @vendor_name: Name of the vendor (or %NULL if not known)
 * @device_name: Name of the device (or %NULL if not known)
 * @link_speed: Speed of the link in Gb/s
 * @link_width: Width of the link (1 or 2)
 * @link_usb4: Upstream link is USB4
 * @generation: Switch Thunderbolt generation
 * @cap_plug_events: Offset to the plug events capability (%0 if not found)
 * @cap_vsec_tmu: Offset to the TMU vendor specific capability (%0 if not found)
 * @cap_lc: Offset to the link controller capability (%0 if not found)
 * @cap_lp: Offset to the low power (CLx for TBT) capability (%0 if not found)
 * @is_unplugged: The switch is going away
 * @drom: DROM of the switch (%NULL if not found)
 * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise)
 * @no_nvm_upgrade: Prevent NVM upgrade of this switch
 * @safe_mode: The switch is in safe-mode
 * @boot: Whether the switch was already authorized on boot or not
 * @rpm: The switch supports runtime PM
 * @authorized: Whether the switch is authorized by user or policy
 * @security_level: Switch supported security level
 * @debugfs_dir: Pointer to the debugfs structure
 * @key: Contains the key used to challenge the device or %NULL if not
 *       supported. Size of the key is %TB_SWITCH_KEY_SIZE.
 * @connection_id: Connection ID used with ICM messaging
 * @connection_key: Connection key used with ICM messaging
 * @link: Root switch link this switch is connected (ICM only)
 * @depth: Depth in the chain this switch is connected (ICM only)
 * @rpm_complete: Completion used to wait for runtime resume to
 *                complete (ICM only)
 * @quirks: Quirks used for this Thunderbolt switch
 * @credit_allocation: Are the below buffer allocation parameters valid
 * @max_usb3_credits: Router preferred number of buffers for USB 3.x
 * @min_dp_aux_credits: Router preferred minimum number of buffers for DP AUX
 * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
 * @max_pcie_credits: Router preferred number of buffers for PCIe
 * @max_dma_credits: Router preferred number of buffers for DMA/P2P
 * @clx: CLx state on the upstream link of the router
 *
 * When the switch is being added or removed to the domain (other
 * switches) you need to have domain lock held.
 *
 * In USB4 terminology this structure represents a router.
 */
struct tb_switch {
        struct device dev;
        struct tb_regs_switch_header config;
        struct tb_port *ports;
        struct tb_dma_port *dma_port;
        struct tb_switch_tmu tmu;
        struct tb *tb;
        u64 uid;
        uuid_t *uuid;
        u16 vendor;
        u16 device;
        const char *vendor_name;
        const char *device_name;
        unsigned int link_speed;
        unsigned int link_width;
        bool link_usb4;
        unsigned int generation;
        int cap_plug_events;
        int cap_vsec_tmu;
        int cap_lc;
        int cap_lp;
        bool is_unplugged;
        u8 *drom;
        struct tb_nvm *nvm;
        bool no_nvm_upgrade;
        bool safe_mode;
        bool boot;
        bool rpm;
        unsigned int authorized;
        enum tb_security_level security_level;
        struct dentry *debugfs_dir;
        u8 *key;
        u8 connection_id;
        u8 connection_key;
        u8 link;
        u8 depth;
        struct completion rpm_complete;
        unsigned long quirks;
        bool credit_allocation;
        unsigned int max_usb3_credits;
        unsigned int min_dp_aux_credits;
        unsigned int min_dp_main_credits;
        unsigned int max_pcie_credits;
        unsigned int max_dma_credits;
        enum tb_clx clx;
};

/**
 * struct tb_port - a thunderbolt port, part of a tb_switch
 * @config: Cached port configuration read from registers
 * @sw: Switch the port belongs to
 * @remote: Remote port (%NULL if not connected)
 * @xdomain: Remote host (%NULL if not connected)
 * @cap_phy: Offset, zero if not found
 * @cap_tmu: Offset of the adapter specific TMU capability (%0 if not present)
 * @cap_adap: Offset of the adapter specific capability (%0 if not present)
 * @cap_usb4: Offset to the USB4 port capability (%0 if not present)
 * @usb4: Pointer to the USB4 port structure (only if @cap_usb4 is != %0)
 * @port: Port number on switch
 * @disabled: Disabled by eeprom or enabled but not implemented
 * @bonded: true if the port is bonded (two lanes combined as one)
 * @dual_link_port: If the switch is connected using two ports, points
 *                  to the other port.
 * @link_nr: Is this primary or secondary port on the dual_link.
 * @in_hopids: Currently allocated input HopIDs
 * @out_hopids: Currently allocated output HopIDs
 * @list: Used to link ports to DP resources list
 * @total_credits: Total number of buffers available for this port
 * @ctl_credits: Buffers reserved for control path
 * @dma_credits: Number of credits allocated for DMA tunneling for all
 *               DMA paths through this port.
 *
 * In USB4 terminology this structure represents an adapter (protocol or
 * lane adapter).
 */
struct tb_port {
        struct tb_regs_port_header config;
        struct tb_switch *sw;
        struct tb_port *remote;
        struct tb_xdomain *xdomain;
        int cap_phy;
        int cap_tmu;
        int cap_adap;
        int cap_usb4;
        struct usb4_port *usb4;
        u8 port;
        bool disabled;
        bool bonded;
        struct tb_port *dual_link_port;
        u8 link_nr:1;
        struct ida in_hopids;
        struct ida out_hopids;
        struct list_head list;
        unsigned int total_credits;
        unsigned int ctl_credits;
        unsigned int dma_credits;
};

/**
 * struct usb4_port - USB4 port device
 * @dev: Device for the port
 * @port: Pointer to the lane 0 adapter
 * @can_offline: Does the port have necessary platform support to move
 *               it into offline mode and back
 * @offline: The port is currently in offline mode
 */
struct usb4_port {
        struct device dev;
        struct tb_port *port;
        bool can_offline;
        bool offline;
};

/**
 * struct tb_retimer - Thunderbolt retimer
 * @dev: Device for the retimer
 * @tb: Pointer to the domain the retimer belongs to
 * @index: Retimer index facing the router USB4 port
 * @vendor: Vendor ID of the retimer
 * @device: Device ID of the retimer
 * @port: Pointer to the lane 0 adapter
 * @nvm: Pointer to the NVM if the retimer has one (%NULL otherwise)
 * @auth_status: Status of last NVM authentication
 */
struct tb_retimer {
        struct device dev;
        struct tb *tb;
        u8 index;
        u32 vendor;
        u32 device;
        struct tb_port *port;
        struct tb_nvm *nvm;
        u32 auth_status;
};

/**
 * struct tb_path_hop - routing information for a tb_path
 * @in_port: Ingress port of a switch
 * @out_port: Egress port of a switch where the packet is routed out
 *            (must be on the same switch as @in_port)
 * @in_hop_index: HopID where the path configuration entry is placed in
 *                the path config space of @in_port.
 * @in_counter_index: Used counter index (not used in the driver
 *                    currently, %-1 to disable)
 * @next_hop_index: HopID of the packet when it is routed out from @out_port
 * @initial_credits: Number of initial flow control credits allocated for
 *                   the path
 * @nfc_credits: Number of non-flow controlled buffers allocated for the
 *               @in_port.
 *
 * Hop configuration is always done on the IN port of a switch.
 * in_port and out_port have to be on the same switch. Packets arriving on
 * in_port with "hop" = in_hop_index will get routed out through out_port.
 * The next hop to take (on out_port->remote) is determined by
 * next_hop_index. When routing a packet to another switch (out->remote is
 * set) the @next_hop_index must match the @in_hop_index of that next
 * hop to make routing possible.
 *
 * in_counter_index is the index of a counter (in TB_CFG_COUNTERS) on the in
 * port.
 */
struct tb_path_hop {
        struct tb_port *in_port;
        struct tb_port *out_port;
        int in_hop_index;
        int in_counter_index;
        int next_hop_index;
        unsigned int initial_credits;
        unsigned int nfc_credits;
};
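
/*
 * Illustrative example (the ports and HopID values below are hypothetical,
 * not taken from this driver): a two hop path through routers A and B. The
 * packet enters router A with HopID 8 and leaves with HopID 9, so the hop
 * programmed on router B (reached through a_out->remote) must use
 * in_hop_index 9; the final next_hop_index is whatever HopID the
 * destination adapter expects.
 *
 *        struct tb_path_hop hops[2] = {
 *                { .in_port = a_in, .out_port = a_out, .in_hop_index = 8,
 *                  .next_hop_index = 9, .in_counter_index = -1 },
 *                { .in_port = b_in, .out_port = b_out, .in_hop_index = 9,
 *                  .next_hop_index = 8, .in_counter_index = -1 },
 *        };
 */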

/**
 * enum tb_path_port - path options mask
 * @TB_PATH_NONE: Do not activate on any hop on path
 * @TB_PATH_SOURCE: Activate on the first hop (out of src)
 * @TB_PATH_INTERNAL: Activate on the intermediate hops (not the first/last)
 * @TB_PATH_DESTINATION: Activate on the last hop (into dst)
 * @TB_PATH_ALL: Activate on all hops on the path
 */
enum tb_path_port {
        TB_PATH_NONE = 0,
        TB_PATH_SOURCE = 1,
        TB_PATH_INTERNAL = 2,
        TB_PATH_DESTINATION = 4,
        TB_PATH_ALL = 7,
};

/**
 * struct tb_path - a unidirectional path between two ports
 * @tb: Pointer to the domain structure
 * @name: Name of the path (used for debugging)
 * @ingress_shared_buffer: Shared buffering used for ingress ports on the path
 * @egress_shared_buffer: Shared buffering used for egress ports on the path
 * @ingress_fc_enable: Flow control for ingress ports on the path
 * @egress_fc_enable: Flow control for egress ports on the path
 * @priority: Priority group of the path
 * @weight: Weight of the path inside the priority group
 * @drop_packages: Drop packages from queue tail or head
 * @activated: Is the path active
 * @clear_fc: Clear all flow control from the path config space entries
 *            when deactivating this path
 * @hops: Path hops
 * @path_length: How many hops the path uses
 * @alloc_hopid: Does this path consume port HopID
 *
 * A path consists of a number of hops (see &struct tb_path_hop). To
 * establish a PCIe tunnel two paths have to be created between the two
 * PCIe ports.
 */
struct tb_path {
        struct tb *tb;
        const char *name;
        enum tb_path_port ingress_shared_buffer;
        enum tb_path_port egress_shared_buffer;
        enum tb_path_port ingress_fc_enable;
        enum tb_path_port egress_fc_enable;
        unsigned int priority:3;
        int weight:4;
        bool drop_packages;
        bool activated;
        bool clear_fc;
        struct tb_path_hop *hops;
        int path_length;
        bool alloc_hopid;
};

/* HopIDs 0-7 are reserved by the Thunderbolt protocol */
#define TB_PATH_MIN_HOPID	8
/*
 * Support paths from the farthest (depth 6) router to the host and back
 * to the same level (not necessarily to the same router).
 */
#define TB_PATH_MAX_HOPS	(7 * 2)
2017-02-19 17:57:27 +03:00
2019-12-06 19:36:07 +03:00
/* Possible wake types */
# define TB_WAKE_ON_CONNECT BIT(0)
# define TB_WAKE_ON_DISCONNECT BIT(1)
# define TB_WAKE_ON_USB4 BIT(2)
# define TB_WAKE_ON_USB3 BIT(3)
# define TB_WAKE_ON_PCIE BIT(4)
2021-01-14 17:44:17 +03:00
# define TB_WAKE_ON_DP BIT(5)
2019-12-06 19:36:07 +03:00

/**
 * struct tb_cm_ops - Connection manager specific operations vector
 * @driver_ready: Called right after control channel is started. Used by
 *                ICM to send driver ready message to the firmware.
 * @start: Starts the domain
 * @stop: Stops the domain
 * @suspend_noirq: Connection manager specific suspend_noirq
 * @resume_noirq: Connection manager specific resume_noirq
 * @suspend: Connection manager specific suspend
 * @freeze_noirq: Connection manager specific freeze_noirq
 * @thaw_noirq: Connection manager specific thaw_noirq
 * @complete: Connection manager specific complete
 * @runtime_suspend: Connection manager specific runtime_suspend
 * @runtime_resume: Connection manager specific runtime_resume
 * @runtime_suspend_switch: Runtime suspend a switch
 * @runtime_resume_switch: Runtime resume a switch
 * @handle_event: Handle thunderbolt event
 * @get_boot_acl: Get boot ACL list
 * @set_boot_acl: Set boot ACL list
 * @disapprove_switch: Disapprove switch (disconnect PCIe tunnel)
 * @approve_switch: Approve switch
 * @add_switch_key: Add key to switch
 * @challenge_switch_key: Challenge switch using key
 * @disconnect_pcie_paths: Disconnects PCIe paths before NVM update
 * @approve_xdomain_paths: Approve (establish) XDomain DMA paths
 * @disconnect_xdomain_paths: Disconnect XDomain DMA paths
 * @usb4_switch_op: Optional proxy for USB4 router operations. If set
 *                  this will be called whenever a USB4 router operation is
 *                  performed. If this returns %-EOPNOTSUPP then the
 *                  native USB4 router operation is called.
 * @usb4_switch_nvm_authenticate_status: Optional callback that the CM
 *                                       implementation can use to return
 *                                       the status of the USB4 NVM_AUTH
 *                                       router operation.
 */
struct tb_cm_ops {
        int (*driver_ready)(struct tb *tb);
        int (*start)(struct tb *tb);
        void (*stop)(struct tb *tb);
        int (*suspend_noirq)(struct tb *tb);
        int (*resume_noirq)(struct tb *tb);
        int (*suspend)(struct tb *tb);
        int (*freeze_noirq)(struct tb *tb);
        int (*thaw_noirq)(struct tb *tb);
        void (*complete)(struct tb *tb);
        int (*runtime_suspend)(struct tb *tb);
        int (*runtime_resume)(struct tb *tb);
        int (*runtime_suspend_switch)(struct tb_switch *sw);
        int (*runtime_resume_switch)(struct tb_switch *sw);
        void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type,
                             const void *buf, size_t size);
        int (*get_boot_acl)(struct tb *tb, uuid_t *uuids, size_t nuuids);
        int (*set_boot_acl)(struct tb *tb, const uuid_t *uuids, size_t nuuids);
        int (*disapprove_switch)(struct tb *tb, struct tb_switch *sw);
        int (*approve_switch)(struct tb *tb, struct tb_switch *sw);
        int (*add_switch_key)(struct tb *tb, struct tb_switch *sw);
        int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
                                    const u8 *challenge, u8 *response);
        int (*disconnect_pcie_paths)(struct tb *tb);
        int (*approve_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
                                     int transmit_path, int transmit_ring,
                                     int receive_path, int receive_ring);
        int (*disconnect_xdomain_paths)(struct tb *tb, struct tb_xdomain *xd,
                                        int transmit_path, int transmit_ring,
                                        int receive_path, int receive_ring);
        int (*usb4_switch_op)(struct tb_switch *sw, u16 opcode, u32 *metadata,
                              u8 *status, const void *tx_data, size_t tx_data_len,
                              void *rx_data, size_t rx_data_len);
        int (*usb4_switch_nvm_authenticate_status)(struct tb_switch *sw,
                                                   u32 *status);
};
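
/*
 * Minimal sketch of a connection manager filling in this vector (the
 * example_* callbacks are hypothetical names, not part of this driver):
 * only the operations it implements are set.
 *
 *        static const struct tb_cm_ops example_cm_ops = {
 *                .start = example_start,
 *                .stop = example_stop,
 *                .handle_event = example_handle_event,
 *        };
 */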

static inline void *tb_priv(struct tb *tb)
{
        return (void *)tb->privdata;
}

#define TB_AUTOSUSPEND_DELAY		15000 /* ms */
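
/*
 * Sketch of how a connection manager can carry private data with the
 * domain (the example_cm structure and the timeout value are illustrative
 * only): the extra bytes are reserved at allocation time and retrieved
 * later with tb_priv().
 *
 *        struct example_cm {
 *                struct mutex lock;
 *        };
 *
 *        struct tb *tb = tb_domain_alloc(nhi, 100, sizeof(struct example_cm));
 *        struct example_cm *cm = tb_priv(tb);
 */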

/* helper functions & macros */

/**
 * tb_upstream_port() - return the upstream port of a switch
 *
 * Every switch has an upstream port (for the root switch it is the NHI).
 *
 * During switch alloc/init tb_upstream_port()->remote may be NULL, even for
 * non root switches (on the NHI port remote is always NULL).
 *
 * Return: Returns the upstream port of the switch.
 */
static inline struct tb_port *tb_upstream_port(struct tb_switch *sw)
{
        return &sw->ports[sw->config.upstream_port_number];
}

/**
 * tb_is_upstream_port() - Is the port upstream facing
 * @port: Port to check
 *
 * Returns true if @port is upstream facing port. In case of dual link
 * ports both return true.
 */
static inline bool tb_is_upstream_port(const struct tb_port *port)
{
        const struct tb_port *upstream_port = tb_upstream_port(port->sw);

        return port == upstream_port || port->dual_link_port == upstream_port;
}

static inline u64 tb_route(const struct tb_switch *sw)
{
        return ((u64)sw->config.route_hi) << 32 | sw->config.route_lo;
}

static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw)
{
        u8 port;

        port = route >> (sw->config.depth * 8);
        if (WARN_ON(port > sw->config.max_port_number))
                return NULL;
        return &sw->ports[port];
}

/**
 * tb_port_has_remote() - Does the port have a switch connected downstream
 * @port: Port to check
 *
 * Returns true only when the port is the primary port and has remote set.
 */
static inline bool tb_port_has_remote(const struct tb_port *port)
{
        if (tb_is_upstream_port(port))
                return false;
        if (!port->remote)
                return false;
        if (port->dual_link_port && port->link_nr)
                return false;
        return true;
}

static inline bool tb_port_is_null(const struct tb_port *port)
{
        return port && port->port && port->config.type == TB_TYPE_PORT;
}

static inline bool tb_port_is_nhi(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_NHI;
}

static inline bool tb_port_is_pcie_down(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_PCIE_DOWN;
}

static inline bool tb_port_is_pcie_up(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_PCIE_UP;
}

static inline bool tb_port_is_dpin(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_DP_HDMI_IN;
}

static inline bool tb_port_is_dpout(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_DP_HDMI_OUT;
}

static inline bool tb_port_is_usb3_down(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_USB3_DOWN;
}

static inline bool tb_port_is_usb3_up(const struct tb_port *port)
{
        return port && port->config.type == TB_TYPE_USB3_UP;
}

static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
                             enum tb_cfg_space space, u32 offset, u32 length)
{
        if (sw->is_unplugged)
                return -ENODEV;
        return tb_cfg_read(sw->tb->ctl,
                           buffer,
                           tb_route(sw),
                           0,
                           space,
                           offset,
                           length);
}

static inline int tb_sw_write(struct tb_switch *sw, const void *buffer,
                              enum tb_cfg_space space, u32 offset, u32 length)
{
        if (sw->is_unplugged)
                return -ENODEV;
        return tb_cfg_write(sw->tb->ctl,
                            buffer,
                            tb_route(sw),
                            0,
                            space,
                            offset,
                            length);
}

static inline int tb_port_read(struct tb_port *port, void *buffer,
                               enum tb_cfg_space space, u32 offset, u32 length)
{
        if (port->sw->is_unplugged)
                return -ENODEV;
        return tb_cfg_read(port->sw->tb->ctl,
                           buffer,
                           tb_route(port->sw),
                           port->port,
                           space,
                           offset,
                           length);
}

static inline int tb_port_write(struct tb_port *port, const void *buffer,
                                enum tb_cfg_space space, u32 offset, u32 length)
{
        if (port->sw->is_unplugged)
                return -ENODEV;
        return tb_cfg_write(port->sw->tb->ctl,
                            buffer,
                            tb_route(port->sw),
                            port->port,
                            space,
                            offset,
                            length);
}
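
/*
 * Example of the read helpers above (the offset 0 and dword count are
 * arbitrary illustration values): read one dword from the beginning of
 * the port configuration space.
 *
 *        u32 val;
 *        int ret;
 *
 *        ret = tb_port_read(port, &val, TB_CFG_PORT, 0, 1);
 *        if (ret)
 *                return ret;
 */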

#define tb_err(tb, fmt, arg...) dev_err(&(tb)->nhi->pdev->dev, fmt, ## arg)
#define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## arg)
#define tb_warn(tb, fmt, arg...) dev_warn(&(tb)->nhi->pdev->dev, fmt, ## arg)
#define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## arg)
#define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg)

#define __TB_SW_PRINT(level, sw, fmt, arg...)                   \
        do {                                                    \
                const struct tb_switch *__sw = (sw);            \
                level(__sw->tb, "%llx: " fmt,                   \
                      tb_route(__sw), ## arg);                  \
        } while (0)
#define tb_sw_WARN(sw, fmt, arg...) __TB_SW_PRINT(tb_WARN, sw, fmt, ##arg)
#define tb_sw_warn(sw, fmt, arg...) __TB_SW_PRINT(tb_warn, sw, fmt, ##arg)
#define tb_sw_info(sw, fmt, arg...) __TB_SW_PRINT(tb_info, sw, fmt, ##arg)
#define tb_sw_dbg(sw, fmt, arg...) __TB_SW_PRINT(tb_dbg, sw, fmt, ##arg)

#define __TB_PORT_PRINT(level, _port, fmt, arg...)                      \
        do {                                                            \
                const struct tb_port *__port = (_port);                 \
                level(__port->sw->tb, "%llx:%u: " fmt,                  \
                      tb_route(__port->sw), __port->port, ## arg);      \
        } while (0)
#define tb_port_WARN(port, fmt, arg...) \
        __TB_PORT_PRINT(tb_WARN, port, fmt, ##arg)
#define tb_port_warn(port, fmt, arg...) \
        __TB_PORT_PRINT(tb_warn, port, fmt, ##arg)
#define tb_port_info(port, fmt, arg...) \
        __TB_PORT_PRINT(tb_info, port, fmt, ##arg)
#define tb_port_dbg(port, fmt, arg...) \
        __TB_PORT_PRINT(tb_dbg, port, fmt, ##arg)

struct tb *icm_probe(struct tb_nhi *nhi);
struct tb *tb_probe(struct tb_nhi *nhi);

extern struct device_type tb_domain_type;
extern struct device_type tb_retimer_type;
extern struct device_type tb_switch_type;
extern struct device_type usb4_port_device_type;

int tb_domain_init(void);
void tb_domain_exit(void);
int tb_xdomain_init(void);
void tb_xdomain_exit(void);

struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize);
int tb_domain_add(struct tb *tb);
void tb_domain_remove(struct tb *tb);
int tb_domain_suspend_noirq(struct tb *tb);
int tb_domain_resume_noirq(struct tb *tb);
int tb_domain_suspend(struct tb *tb);
int tb_domain_freeze_noirq(struct tb *tb);
int tb_domain_thaw_noirq(struct tb *tb);
void tb_domain_complete(struct tb *tb);
int tb_domain_runtime_suspend(struct tb *tb);
int tb_domain_runtime_resume(struct tb *tb);
int tb_domain_disapprove_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_disconnect_pcie_paths(struct tb *tb);
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
                                    int transmit_path, int transmit_ring,
                                    int receive_path, int receive_ring);
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
                                       int transmit_path, int transmit_ring,
                                       int receive_path, int receive_ring);
int tb_domain_disconnect_all_paths(struct tb *tb);

static inline struct tb *tb_domain_get(struct tb *tb)
{
        if (tb)
                get_device(&tb->dev);
        return tb;
}

static inline void tb_domain_put(struct tb *tb)
{
        put_device(&tb->dev);
}

struct tb_nvm *tb_nvm_alloc(struct device *dev);
int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read);
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
                     size_t bytes);
int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
                          nvmem_reg_write_t reg_write);
void tb_nvm_free(struct tb_nvm *nvm);
void tb_nvm_exit(void);

typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
typedef int (*write_block_fn)(void *, unsigned int, const void *, size_t);

int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
                     unsigned int retries, read_block_fn read_block,
                     void *read_block_data);
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
                      unsigned int retries, write_block_fn write_next_block,
                      void *write_block_data);

struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
                                  u64 route);
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
                                            struct device *parent, u64 route);
int tb_switch_configure(struct tb_switch *sw);
int tb_switch_add(struct tb_switch *sw);
void tb_switch_remove(struct tb_switch *sw);
void tb_switch_suspend(struct tb_switch *sw, bool runtime);
int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb_switch *sw);
int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
                           u32 value, int timeout_msec);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
                                    enum tb_port_type type);
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
                                               u8 depth);
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid);
struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route);

/**
 * tb_switch_for_each_port() - Iterate over each switch port
 * @sw: Switch whose ports to iterate
 * @p: Port used as iterator
 *
 * Iterates over each switch port skipping the control port (port %0).
 */
#define tb_switch_for_each_port(sw, p)                                  \
        for ((p) = &(sw)->ports[1];                                     \
             (p) <= &(sw)->ports[(sw)->config.max_port_number]; (p)++)
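
/*
 * Example use of the iterator above (illustrative only): count the lane
 * adapters (null ports) of a router.
 *
 *        struct tb_port *port;
 *        int nlanes = 0;
 *
 *        tb_switch_for_each_port(sw, port) {
 *                if (tb_port_is_null(port))
 *                        nlanes++;
 *        }
 */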

static inline struct tb_switch *tb_switch_get(struct tb_switch *sw)
{
        if (sw)
                get_device(&sw->dev);
        return sw;
}

static inline void tb_switch_put(struct tb_switch *sw)
{
        put_device(&sw->dev);
}

static inline bool tb_is_switch(const struct device *dev)
{
        return dev->type == &tb_switch_type;
}

static inline struct tb_switch *tb_to_switch(struct device *dev)
{
        if (tb_is_switch(dev))
                return container_of(dev, struct tb_switch, dev);
        return NULL;
}

static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
{
        return tb_to_switch(sw->dev.parent);
}

static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
{
        return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
               sw->config.device_id == PCI_DEVICE_ID_INTEL_LIGHT_RIDGE;
}

static inline bool tb_switch_is_eagle_ridge(const struct tb_switch *sw)
{
        return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
               sw->config.device_id == PCI_DEVICE_ID_INTEL_EAGLE_RIDGE;
}

static inline bool tb_switch_is_cactus_ridge(const struct tb_switch *sw)
{
        if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
                switch (sw->config.device_id) {
                case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C:
                case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
                        return true;
                }
        }
        return false;
}

static inline bool tb_switch_is_falcon_ridge(const struct tb_switch *sw)
{
        if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
                switch (sw->config.device_id) {
                case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE:
                        return true;
                }
        }
        return false;
}

static inline bool tb_switch_is_alpine_ridge(const struct tb_switch *sw)
{
        if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
                switch (sw->config.device_id) {
                case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
                case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
                        return true;
                }
        }
        return false;
}

static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
{
        if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
                switch (sw->config.device_id) {
                case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE:
                case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE:
                        return true;
                }
        }
        return false;
}

static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
{
        if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
                switch (sw->config.device_id) {
                case PCI_DEVICE_ID_INTEL_TGL_NHI0:
                case PCI_DEVICE_ID_INTEL_TGL_NHI1:
                case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
                case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
                        return true;
                }
        }
        return false;
}

/**
 * tb_switch_is_usb4() - Is the switch USB4 compliant
 * @sw: Switch to check
 *
 * Returns true if the @sw is USB4 compliant router, false otherwise.
 */
static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
{
        return sw->config.thunderbolt_version == USB4_VERSION_1_0;
}

/**
 * tb_switch_is_icm() - Is the switch handled by ICM firmware
 * @sw: Switch to check
 *
 * In case there is a need to differentiate whether ICM firmware or SW CM
 * is handling @sw this function can be called. It is valid to call this
 * after tb_switch_alloc() and tb_switch_configure() have been called
 * (the latter only for the SW CM case).
 */
static inline bool tb_switch_is_icm(const struct tb_switch *sw)
{
        return !sw->config.enabled;
}

int tb_switch_lane_bonding_enable(struct tb_switch *sw);
void tb_switch_lane_bonding_disable(struct tb_switch *sw);
int tb_switch_configure_link(struct tb_switch *sw);
void tb_switch_unconfigure_link(struct tb_switch *sw);

bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);

int tb_switch_tmu_init(struct tb_switch *sw);
int tb_switch_tmu_post_time(struct tb_switch *sw);
int tb_switch_tmu_disable(struct tb_switch *sw);
int tb_switch_tmu_enable(struct tb_switch *sw);
void tb_switch_tmu_configure(struct tb_switch *sw,
                             enum tb_switch_tmu_rate rate,
                             bool unidirectional);
void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
                                    enum tb_switch_tmu_rate rate);

/**
 * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
 * @sw: Router whose TMU mode to check
 * @unidirectional: If uni-directional (bi-directional otherwise)
 *
 * Returns true if the hardware TMU configuration matches the one passed in
 * as parameter, that is HiFi/Normal rate and either uni-directional or
 * bi-directional mode.
 */
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw,
                                            bool unidirectional)
{
        return sw->tmu.rate == sw->tmu.rate_request &&
               sw->tmu.unidirectional == unidirectional;
}

static inline const char *tb_switch_clx_name(enum tb_clx clx)
{
        switch (clx) {
        /* CL0s and CL1 are enabled and supported together */
        case TB_CL1:
                return "CL0s/CL1";
        default:
                return "unknown";
        }
}

int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx);
int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx);

/**
 * tb_switch_is_clx_enabled() - Checks if the CLx is enabled
 * @sw: Router to check for the CLx
 * @clx: The CLx state to check for
 *
 * Checks if the specified CLx is enabled on the router upstream link.
 * Not applicable for a host router.
 */
static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw,
                                            enum tb_clx clx)
{
        return sw->clx == clx;
}

/**
 * tb_switch_is_clx_supported() - Is CLx supported on this type of router
 * @sw: The router to check CLx support for
 */
static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw)
{
        return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}

int tb_switch_mask_clx_objections(struct tb_switch *sw);

int tb_switch_pcie_l1_enable(struct tb_switch *sw);

int tb_switch_xhci_connect(struct tb_switch *sw);
void tb_switch_xhci_disconnect(struct tb_switch *sw);

int tb_port_state(struct tb_port *port);
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
int tb_port_unlock(struct tb_port *port);
int tb_port_enable(struct tb_port *port);
int tb_port_disable(struct tb_port *port);
int tb_port_alloc_in_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_in_hopid(struct tb_port *port, int hopid);
int tb_port_alloc_out_hopid(struct tb_port *port, int hopid, int max_hopid);
void tb_port_release_out_hopid(struct tb_port *port, int hopid);
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
                                     struct tb_port *prev);

static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
{
        return tb_port_is_null(port) && port->sw->credit_allocation;
}

/**
 * tb_for_each_port_on_path() - Iterate over each port on path
 * @src: Source port
 * @dst: Destination port
 * @p: Port used as iterator
 *
 * Walks over each port on path from @src to @dst.
 */
#define tb_for_each_port_on_path(src, dst, p)                           \
        for ((p) = tb_next_port_on_path((src), (dst), NULL); (p);       \
             (p) = tb_next_port_on_path((src), (dst), (p)))
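
/*
 * Example use of the iterator above (illustrative only): log every
 * adapter between two ports, for instance while debugging a tunnel.
 *
 *        struct tb_port *p;
 *
 *        tb_for_each_port_on_path(src, dst, p)
 *                tb_port_dbg(p, "on the path\n");
 */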

int tb_port_get_link_speed(struct tb_port *port);
int tb_port_get_link_width(struct tb_port *port);
int tb_port_set_link_width(struct tb_port *port, unsigned int width);
int tb_port_set_lane_bonding(struct tb_port *port, bool bonding);
int tb_port_lane_bonding_enable(struct tb_port *port);
void tb_port_lane_bonding_disable(struct tb_port *port);
int tb_port_wait_for_link_width(struct tb_port *port, int width,
                                int timeout_msec);
int tb_port_update_credits(struct tb_port *port);

int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
int tb_switch_next_cap(struct tb_switch *sw, unsigned int offset);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
int tb_port_next_cap(struct tb_port *port, unsigned int offset);
bool tb_port_is_enabled(struct tb_port *port);

bool tb_usb3_port_is_enabled(struct tb_port *port);
int tb_usb3_port_enable(struct tb_port *port, bool enable);

bool tb_pci_port_is_enabled(struct tb_port *port);
int tb_pci_port_enable(struct tb_port *port, bool enable);

int tb_dp_port_hpd_is_active(struct tb_port *port);
int tb_dp_port_hpd_clear(struct tb_port *port);
int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
                        unsigned int aux_tx, unsigned int aux_rx);
bool tb_dp_port_is_enabled(struct tb_port *port);
int tb_dp_port_enable(struct tb_port *port, bool enable);

struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
                                 struct tb_port *dst, int dst_hopid,
                                 struct tb_port **last, const char *name,
                                 bool alloc_hopid);
struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
                              struct tb_port *dst, int dst_hopid, int link_nr,
                              const char *name);
void tb_path_free(struct tb_path *path);
int tb_path_activate(struct tb_path *path);
void tb_path_deactivate(struct tb_path *path);
bool tb_path_is_invalid(struct tb_path *path);
bool tb_path_port_on_path(const struct tb_path *path,
                          const struct tb_port *port);

/**
 * tb_path_for_each_hop() - Iterate over each hop on path
 * @path: Path whose hops to iterate
 * @hop: Hop used as iterator
 *
 * Iterates over each hop on path.
 */
#define tb_path_for_each_hop(path, hop)                                 \
        for ((hop) = &(path)->hops[0];                                  \
             (hop) <= &(path)->hops[(path)->path_length - 1]; (hop)++)
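
/*
 * Example use of the iterator above (illustrative only): sum the initial
 * flow control credits programmed for a path.
 *
 *        struct tb_path_hop *hop;
 *        unsigned int credits = 0;
 *
 *        tb_path_for_each_hop(path, hop)
 *                credits += hop->initial_credits;
 */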
2014-06-13 01:11:46 +04:00
int tb_drom_read ( struct tb_switch * sw ) ;
int tb_drom_read_uid_only ( struct tb_switch * sw , u64 * uid ) ;
2014-06-04 00:04:11 +04:00
2019-01-09 17:42:12 +03:00
int tb_lc_read_uuid ( struct tb_switch * sw , u32 * uuid ) ;
2020-04-02 12:42:44 +03:00
int tb_lc_configure_port ( struct tb_port * port ) ;
void tb_lc_unconfigure_port ( struct tb_port * port ) ;
2020-04-09 14:23:32 +03:00
int tb_lc_configure_xdomain ( struct tb_port * port ) ;
void tb_lc_unconfigure_xdomain ( struct tb_port * port ) ;
2020-11-26 12:52:43 +03:00
int tb_lc_start_lane_initialization ( struct tb_port * port ) ;
2021-12-17 04:16:43 +03:00
bool tb_lc_is_clx_supported ( struct tb_port * port ) ;
2022-01-07 14:00:47 +03:00
bool tb_lc_is_usb_plugged ( struct tb_port * port ) ;
bool tb_lc_is_xhci_connected ( struct tb_port * port ) ;
int tb_lc_xhci_connect ( struct tb_port * port ) ;
void tb_lc_xhci_disconnect ( struct tb_port * port ) ;
2019-12-06 19:36:07 +03:00
int tb_lc_set_wake ( struct tb_switch * sw , unsigned int flags ) ;
2019-01-09 18:25:43 +03:00
int tb_lc_set_sleep ( struct tb_switch * sw ) ;
2019-03-21 20:03:00 +03:00
bool tb_lc_lane_bonding_possible ( struct tb_switch * sw ) ;
2019-03-26 15:52:30 +03:00
bool tb_lc_dp_sink_query ( struct tb_switch * sw , struct tb_port * in ) ;
int tb_lc_dp_sink_alloc ( struct tb_switch * sw , struct tb_port * in ) ;
int tb_lc_dp_sink_dealloc ( struct tb_switch * sw , struct tb_port * in ) ;
2020-06-23 19:14:29 +03:00
int tb_lc_force_power ( struct tb_switch * sw ) ;
2014-06-04 00:04:02 +04:00
static inline int tb_route_length ( u64 route )
{
return ( fls64 ( route ) + TB_ROUTE_SHIFT - 1 ) / TB_ROUTE_SHIFT ;
}
2014-06-04 00:04:05 +04:00

/**
 * tb_downstream_route() - get route to downstream switch
 * @port: Port on the parent switch behind which the switch sits
 *
 * Port must not be the upstream port (otherwise a loop is created).
 *
 * Return: Returns a route to the switch behind @port.
 */
static inline u64 tb_downstream_route(struct tb_port *port)
{
	return tb_route(port->sw)
	       | ((u64) port->port << (port->sw->config.depth * 8));
}
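
/*
 * Worked example: for a switch at depth 1 reached over route 0x3, calling
 * this on its port 5 gives 0x3 | (5ULL << 8) = 0x503, i.e. the port number
 * is appended as the next byte of the route string.
 */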

bool tb_is_xdomain_enabled(void);
bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type,
			       const void *buf, size_t size);
struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
				    u64 route, const uuid_t *local_uuid,
				    const uuid_t *remote_uuid);
void tb_xdomain_add(struct tb_xdomain *xd);
void tb_xdomain_remove(struct tb_xdomain *xd);
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
						 u8 depth);

int tb_retimer_scan(struct tb_port *port, bool add);
void tb_retimer_remove_all(struct tb_port *port);

static inline bool tb_is_retimer(const struct device *dev)
{
	return dev->type == &tb_retimer_type;
}

static inline struct tb_retimer *tb_to_retimer(struct device *dev)
{
	if (tb_is_retimer(dev))
		return container_of(dev, struct tb_retimer, dev);
	return NULL;
}
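
/*
 * Usage sketch (the callback and helper names are hypothetical):
 * tb_to_retimer() returns NULL for anything that is not a retimer device,
 * so it can filter devices while walking a port's children, for example
 * in a device_for_each_child() callback:
 *
 *	static int handle_one_retimer(struct device *dev, void *data)
 *	{
 *		struct tb_retimer *rt = tb_to_retimer(dev);
 *
 *		if (rt)
 *			do_something_with(rt);
 *		return 0;
 *	}
 */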

int usb4_switch_setup(struct tb_switch *sw);
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
			  size_t size);
bool usb4_switch_lane_bonding_possible(struct tb_switch *sw);
int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags);
int usb4_switch_set_sleep(struct tb_switch *sw);
int usb4_switch_nvm_sector_size(struct tb_switch *sw);
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
			 size_t size);
int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address);
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
			  const void *buf, size_t size);
int usb4_switch_nvm_authenticate(struct tb_switch *sw);
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status);
int usb4_switch_credits_init(struct tb_switch *sw);
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in);
struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
					  const struct tb_port *port);
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
					  const struct tb_port *port);
int usb4_switch_add_ports(struct tb_switch *sw);
void usb4_switch_remove_ports(struct tb_switch *sw);

int usb4_port_unlock(struct tb_port *port);
int usb4_port_configure(struct tb_port *port);
void usb4_port_unconfigure(struct tb_port *port);
int usb4_port_configure_xdomain(struct tb_port *port);
void usb4_port_unconfigure_xdomain(struct tb_port *port);
int usb4_port_router_offline(struct tb_port *port);
int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
bool usb4_port_clx_supported(struct tb_port *port);

int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
			   u8 size);
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
			    const void *buf, u8 size);
int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
				     unsigned int address);
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
				unsigned int address, const void *buf,
				size_t size);
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
					      u32 *status);
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
			       unsigned int address, void *buf, size_t size);

int usb4_usb3_port_max_link_rate(struct tb_port *port);
int usb4_usb3_port_actual_link_rate(struct tb_port *port);
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
				       int *downstream_bw);
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
				      int *downstream_bw);
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
				     int *downstream_bw);

static inline bool tb_is_usb4_port_device(const struct device *dev)
{
	return dev->type == &usb4_port_device_type;
}

static inline struct usb4_port *tb_to_usb4_port_device(struct device *dev)
{
	if (tb_is_usb4_port_device(dev))
		return container_of(dev, struct usb4_port, dev);
	return NULL;
}

struct usb4_port *usb4_port_device_add(struct tb_port *port);
void usb4_port_device_remove(struct usb4_port *usb4);
int usb4_port_device_resume(struct usb4_port *usb4);

/* Keep link controller awake during update */
#define QUIRK_FORCE_POWER_LINK_CONTROLLER	BIT(0)

void tb_check_quirks(struct tb_switch *sw);
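
/*
 * Sketch of how a quirk bit would typically be consulted (assuming the
 * flags end up in a "quirks" field of struct tb_switch, populated by
 * tb_check_quirks()):
 *
 *	if (sw->quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER)
 *		tb_lc_force_power(sw);
 */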

#ifdef CONFIG_ACPI
void tb_acpi_add_links(struct tb_nhi *nhi);

bool tb_acpi_is_native(void);
bool tb_acpi_may_tunnel_usb3(void);
bool tb_acpi_may_tunnel_dp(void);
bool tb_acpi_may_tunnel_pcie(void);
bool tb_acpi_is_xdomain_allowed(void);

int tb_acpi_init(void);
void tb_acpi_exit(void);
int tb_acpi_power_on_retimers(struct tb_port *port);
int tb_acpi_power_off_retimers(struct tb_port *port);
#else
static inline void tb_acpi_add_links(struct tb_nhi *nhi) { }

static inline bool tb_acpi_is_native(void) { return true; }
static inline bool tb_acpi_may_tunnel_usb3(void) { return true; }
static inline bool tb_acpi_may_tunnel_dp(void) { return true; }
static inline bool tb_acpi_may_tunnel_pcie(void) { return true; }
static inline bool tb_acpi_is_xdomain_allowed(void) { return true; }

static inline int tb_acpi_init(void) { return 0; }
static inline void tb_acpi_exit(void) { }
static inline int tb_acpi_power_on_retimers(struct tb_port *port) { return 0; }
static inline int tb_acpi_power_off_retimers(struct tb_port *port) { return 0; }
#endif

#ifdef CONFIG_DEBUG_FS
void tb_debugfs_init(void);
void tb_debugfs_exit(void);
void tb_switch_debugfs_init(struct tb_switch *sw);
void tb_switch_debugfs_remove(struct tb_switch *sw);
void tb_service_debugfs_init(struct tb_service *svc);
void tb_service_debugfs_remove(struct tb_service *svc);
#else
static inline void tb_debugfs_init(void) { }
static inline void tb_debugfs_exit(void) { }
static inline void tb_switch_debugfs_init(struct tb_switch *sw) { }
static inline void tb_switch_debugfs_remove(struct tb_switch *sw) { }
static inline void tb_service_debugfs_init(struct tb_service *svc) { }
static inline void tb_service_debugfs_remove(struct tb_service *svc) { }
#endif

#endif