License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them (even
  if <5 lines) were also included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top level
  COPYING file license applied.
  For non */uapi/* files that summary was:

    SPDX license identifier                            # files
    ---------------------------------------------------|-------
    GPL-2.0                                              11139

  and resulted in the first patch in this series.

  If that file was a */uapi/* path one, it was "GPL-2.0 WITH
  Linux-syscall-note" otherwise it was "GPL-2.0". The results of that were:

    SPDX license identifier                            # files
    ---------------------------------------------------|-------
    GPL-2.0 WITH Linux-syscall-note                        930

  and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
    SPDX license identifier                              # files
    -----------------------------------------------------|------
    GPL-2.0 WITH Linux-syscall-note                         270
    GPL-2.0+ WITH Linux-syscall-note                        169
    ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
    ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
    LGPL-2.1+ WITH Linux-syscall-note                        15
    GPL-1.0+ WITH Linux-syscall-note                         14
    ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
    LGPL-2.0+ WITH Linux-syscall-note                         4
    LGPL-2.1 WITH Linux-syscall-note                          3
    ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
    ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe and Thomas, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment styles, as illustrated below). Finally, Greg ran the script using
the .csv files to generate the patches.
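
For illustration only (this example is not part of the original patch
text): the tag the script emits is the same in both cases, only the
comment style differs between C sources and headers, e.g.

    // SPDX-License-Identifier: GPL-2.0        (first line of a .c file)
    /* SPDX-License-Identifier: GPL-2.0 */     (first line of a .h file)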
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
/*
 * Thunderbolt driver - switch/port utility functions
 *
 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
 * Copyright (C) 2018, Intel Corporation
 */

#include <linux/delay.h>
#include <linux/idr.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/pm_runtime.h>
#include <linux/sched/signal.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/string_helpers.h>

#include "tb.h"

/* Switch NVM support */

struct nvm_auth_status {
	struct list_head list;
	uuid_t uuid;
	u32 status;
};

/*
 * Hold NVM authentication failure status per switch This information
 * needs to stay around even when the switch gets power cycled so we
 * keep it separately.
 */
static LIST_HEAD(nvm_auth_status_cache);
static DEFINE_MUTEX(nvm_auth_status_lock);

static struct nvm_auth_status *__nvm_get_auth_status(const struct tb_switch *sw)
{
	struct nvm_auth_status *st;

	list_for_each_entry(st, &nvm_auth_status_cache, list) {
		if (uuid_equal(&st->uuid, sw->uuid))
			return st;
	}

	return NULL;
}

static void nvm_get_auth_status(const struct tb_switch *sw, u32 *status)
{
	struct nvm_auth_status *st;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);
	mutex_unlock(&nvm_auth_status_lock);

	*status = st ? st->status : 0;
}

static void nvm_set_auth_status(const struct tb_switch *sw, u32 status)
{
	struct nvm_auth_status *st;

	if (WARN_ON(!sw->uuid))
		return;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);

	if (!st) {
		st = kzalloc(sizeof(*st), GFP_KERNEL);
		if (!st)
			goto unlock;

		memcpy(&st->uuid, sw->uuid, sizeof(st->uuid));
		INIT_LIST_HEAD(&st->list);
		list_add_tail(&st->list, &nvm_auth_status_cache);
	}

	st->status = status;
unlock:
	mutex_unlock(&nvm_auth_status_lock);
}

static void nvm_clear_auth_status(const struct tb_switch *sw)
{
	struct nvm_auth_status *st;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);
	if (st) {
		list_del(&st->list);
		kfree(st);
	}
	mutex_unlock(&nvm_auth_status_lock);
}
static int nvm_validate_and_write(struct tb_switch *sw)
{
	unsigned int image_size;
	const u8 *buf;
	int ret;

	ret = tb_nvm_validate(sw->nvm);
	if (ret)
		return ret;

	ret = tb_nvm_write_headers(sw->nvm);
	if (ret)
		return ret;

	buf = sw->nvm->buf_data_start;
	image_size = sw->nvm->buf_data_size;

	if (tb_switch_is_usb4(sw))
		ret = usb4_switch_nvm_write(sw, 0, buf, image_size);
	else
		ret = dma_port_flash_write(sw->dma_port, 0, buf, image_size);
	if (ret)
		return ret;

	sw->nvm->flushed = true;
	return 0;
}
static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
{
	int ret = 0;

	/*
	 * Root switch NVM upgrade requires that we disconnect the
	 * existing paths first (in case it is not in safe mode
	 * already).
	 */
	if (!sw->safe_mode) {
		u32 status;

		ret = tb_domain_disconnect_all_paths(sw->tb);
		if (ret)
			return ret;
		/*
		 * The host controller goes away pretty soon after this if
		 * everything goes well so getting timeout is expected.
		 */
		ret = dma_port_flash_update_auth(sw->dma_port);
		if (!ret || ret == -ETIMEDOUT)
			return 0;

		/*
		 * Any error from update auth operation requires power
		 * cycling of the host router.
		 */
		tb_sw_warn(sw, "failed to authenticate NVM, power cycling\n");
		if (dma_port_flash_update_auth_status(sw->dma_port, &status) > 0)
			nvm_set_auth_status(sw, status);
	}

	/*
	 * From safe mode we can get out by just power cycling the
	 * switch.
	 */
	dma_port_power_cycle(sw->dma_port);
	return ret;
}
static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
{
	int ret, retries = 10;

	ret = dma_port_flash_update_auth(sw->dma_port);
	switch (ret) {
	case 0:
	case -ETIMEDOUT:
	case -EACCES:
	case -EINVAL:
		/* Power cycle is required */
		break;
	default:
		return ret;
	}

	/*
	 * Poll here for the authentication status. It takes some time
	 * for the device to respond (we get timeout for a while). Once
	 * we get response the device needs to be power cycled in order
	 * to the new NVM to be taken into use.
	 */
	do {
		u32 status;

		ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
		if (ret < 0 && ret != -ETIMEDOUT)
			return ret;
		if (ret > 0) {
			if (status) {
				tb_sw_warn(sw, "failed to authenticate NVM\n");
				nvm_set_auth_status(sw, status);
			}

			tb_sw_info(sw, "power cycling the switch now\n");
			dma_port_power_cycle(sw->dma_port);
			return 0;
		}

		msleep(500);
	} while (--retries);

	return -ETIMEDOUT;
}
static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
{
	struct pci_dev *root_port;

	/*
	 * During host router NVM upgrade we should not allow root port to
	 * go into D3cold because some root ports cannot trigger PME
	 * itself. To be on the safe side keep the root port in D0 during
	 * the whole upgrade process.
	 */
	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
	if (root_port)
		pm_runtime_get_noresume(&root_port->dev);
}

static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
{
	struct pci_dev *root_port;

	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
	if (root_port)
		pm_runtime_put(&root_port->dev);
}

static inline bool nvm_readable(struct tb_switch *sw)
{
	if (tb_switch_is_usb4(sw)) {
		/*
		 * USB4 devices must support NVM operations but it is
		 * optional for hosts. Therefore we query the NVM sector
		 * size here and if it is supported assume NVM
		 * operations are implemented.
		 */
		return usb4_switch_nvm_sector_size(sw) > 0;
	}

	/* Thunderbolt 2 and 3 devices support NVM through DMA port */
	return !!sw->dma_port;
}

static inline bool nvm_upgradeable(struct tb_switch *sw)
{
	if (sw->no_nvm_upgrade)
		return false;
	return nvm_readable(sw);
}
static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
{
	int ret;

	if (tb_switch_is_usb4(sw)) {
		if (auth_only) {
			ret = usb4_switch_nvm_set_offset(sw, 0);
			if (ret)
				return ret;
		}
		sw->nvm->authenticating = true;
		return usb4_switch_nvm_authenticate(sw);
	}

	if (auth_only)
		return -EOPNOTSUPP;

	sw->nvm->authenticating = true;
	if (!tb_route(sw)) {
		nvm_authenticate_start_dma_port(sw);
		ret = nvm_authenticate_host_dma_port(sw);
	} else {
		ret = nvm_authenticate_device_dma_port(sw);
	}

	return ret;
}
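
/*
 * Overview of how the NVM helpers above fit together (a rough sketch, not
 * part of the original source comments): userspace writes the new image
 * through nvm_write() into the locally cached buffer, then
 * nvm_validate_and_write() validates and flushes that buffer to the
 * hardware, and finally nvm_authenticate() asks the router to
 * authenticate and take the new image into use (power cycling the router
 * where required).
 */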

/**
 * tb_switch_nvm_read() - Read router NVM
 * @sw: Router whose NVM to read
 * @address: Start address on the NVM
 * @buf: Buffer where the read data is copied
 * @size: Size of the buffer in bytes
 *
 * Reads from router NVM and returns the requested data in @buf. Locking
 * is up to the caller. Returns %0 in success and negative errno in case
 * of failure.
 */
int tb_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
		       size_t size)
{
	if (tb_switch_is_usb4(sw))
		return usb4_switch_nvm_read(sw, address, buf, size);
	return dma_port_flash_read(sw->dma_port, address, buf, size);
}
static int nvm_read(void *priv, unsigned int offset, void *val, size_t bytes)
{
	struct tb_nvm *nvm = priv;
	struct tb_switch *sw = tb_to_switch(nvm->dev);
	int ret;

	pm_runtime_get_sync(&sw->dev);

	if (!mutex_trylock(&sw->tb->lock)) {
		ret = restart_syscall();
		goto out;
	}

	ret = tb_switch_nvm_read(sw, offset, val, bytes);
	mutex_unlock(&sw->tb->lock);

out:
	pm_runtime_mark_last_busy(&sw->dev);
	pm_runtime_put_autosuspend(&sw->dev);

	return ret;
}

static int nvm_write(void *priv, unsigned int offset, void *val, size_t bytes)
{
	struct tb_nvm *nvm = priv;
	struct tb_switch *sw = tb_to_switch(nvm->dev);
	int ret;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	/*
	 * Since writing the NVM image might require some special steps,
	 * for example when CSS headers are written, we cache the image
	 * locally here and handle the special cases when the user asks
	 * us to authenticate the image.
	 */
	ret = tb_nvm_write_buf(nvm, offset, val, bytes);
	mutex_unlock(&sw->tb->lock);

	return ret;
}
static int tb_switch_nvm_add(struct tb_switch *sw)
{
	struct tb_nvm *nvm;
	int ret;

	if (!nvm_readable(sw))
		return 0;

	nvm = tb_nvm_alloc(&sw->dev);
	if (IS_ERR(nvm)) {
		ret = PTR_ERR(nvm) == -EOPNOTSUPP ? 0 : PTR_ERR(nvm);
		goto err_nvm;
	}

	ret = tb_nvm_read_version(nvm);
	if (ret)
		goto err_nvm;

	/*
	 * If the switch is in safe-mode the only accessible portion of
	 * the NVM is the non-active one where userspace is expected to
	 * write new functional NVM.
	 */
	if (!sw->safe_mode) {
		ret = tb_nvm_add_active(nvm, nvm_read);
		if (ret)
			goto err_nvm;
		tb_sw_dbg(sw, "NVM version %x.%x\n", nvm->major, nvm->minor);
	}

	if (!sw->no_nvm_upgrade) {
		ret = tb_nvm_add_non_active(nvm, nvm_write);
		if (ret)
			goto err_nvm;
	}

	sw->nvm = nvm;
	return 0;

err_nvm:
	tb_sw_dbg(sw, "NVM upgrade disabled\n");
	sw->no_nvm_upgrade = true;
	if (!IS_ERR(nvm))
		tb_nvm_free(nvm);

	return ret;
}

static void tb_switch_nvm_remove(struct tb_switch *sw)
{
	struct tb_nvm *nvm;

	nvm = sw->nvm;
	sw->nvm = NULL;

	if (!nvm)
		return;

	/* Remove authentication status in case the switch is unplugged */
	if (!nvm->authenticating)
		nvm_clear_auth_status(sw);

	tb_nvm_free(nvm);
}

/* port utility functions */

static const char *tb_port_type(const struct tb_regs_port_header *port)
{
	switch (port->type >> 16) {
	case 0:
		switch ((u8)port->type) {
		case 0:
			return "Inactive";
		case 1:
			return "Port";
		case 2:
			return "NHI";
		default:
			return "unknown";
		}
	case 0x2:
		return "Ethernet";
	case 0x8:
		return "SATA";
	case 0xe:
		return "DP/HDMI";
	case 0x10:
		return "PCIe";
	case 0x20:
		return "USB";
	default:
		return "unknown";
	}
}

static void tb_dump_port(struct tb *tb, const struct tb_port *port)
{
	const struct tb_regs_port_header *regs = &port->config;

	tb_dbg(tb,
	       " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n",
	       regs->port_number, regs->vendor_id, regs->device_id,
	       regs->revision, regs->thunderbolt_version, tb_port_type(regs),
	       regs->type);
	tb_dbg(tb, "  Max hop id (in/out): %d/%d\n",
	       regs->max_in_hop_id, regs->max_out_hop_id);
	tb_dbg(tb, "  Max counters: %d\n", regs->max_counters);
	tb_dbg(tb, "  NFC Credits: %#x\n", regs->nfc_credits);
	tb_dbg(tb, "  Credits (total/control): %u/%u\n", port->total_credits,
	       port->ctl_credits);
}

/**
 * tb_port_state() - get connectedness state of a port
 * @port: the port to check
 *
 * The port must have a TB_CAP_PHY (i.e. it should be a real port).
 *
 * Return: Returns an enum tb_port_state on success or an error code on failure.
 */
int tb_port_state(struct tb_port *port)
{
	struct tb_cap_phy phy;
	int res;

	if (port->cap_phy == 0) {
		tb_port_WARN(port, "does not have a PHY\n");
		return -EINVAL;
	}
	res = tb_port_read(port, &phy, TB_CFG_PORT, port->cap_phy, 2);
	if (res)
		return res;
	return phy.state;
}

/**
 * tb_wait_for_port() - wait for a port to become ready
 * @port: Port to wait
 * @wait_if_unplugged: Wait also when port is unplugged
 *
 * Wait up to 1 second for a port to reach state TB_PORT_UP. If
 * wait_if_unplugged is set then we also wait if the port is in state
 * TB_PORT_UNPLUGGED (it takes a while for the device to be registered after
 * switch resume). Otherwise we only wait if a device is registered but the link
 * has not yet been established.
 *
 * Return: Returns an error code on failure. Returns 0 if the port is not
 * connected or failed to reach state TB_PORT_UP within one second. Returns 1
 * if the port is connected and in state TB_PORT_UP.
 */
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
{
	int retries = 10;
	int state;

	if (!port->cap_phy) {
		tb_port_WARN(port, "does not have PHY\n");
		return -EINVAL;
	}
	if (tb_is_upstream_port(port)) {
		tb_port_WARN(port, "is the upstream port\n");
		return -EINVAL;
	}

	while (retries--) {
		state = tb_port_state(port);
		switch (state) {
		case TB_PORT_DISABLED:
			tb_port_dbg(port, "is disabled (state: 0)\n");
			return 0;

		case TB_PORT_UNPLUGGED:
			if (wait_if_unplugged) {
				/* used during resume */
				tb_port_dbg(port,
					    "is unplugged (state: 7), retrying...\n");
				msleep(100);
				break;
			}
			tb_port_dbg(port, "is unplugged (state: 7)\n");
			return 0;

		case TB_PORT_UP:
		case TB_PORT_TX_CL0S:
		case TB_PORT_RX_CL0S:
		case TB_PORT_CL1:
		case TB_PORT_CL2:
			tb_port_dbg(port, "is connected, link is up (state: %d)\n", state);
			return 1;

		default:
			if (state < 0)
				return state;

			/*
			 * After plug-in the state is TB_PORT_CONNECTING. Give it some
			 * time.
			 */
			tb_port_dbg(port,
				    "is connected, link is not up (state: %d), retrying...\n",
				    state);
			msleep(100);
		}
	}
	tb_port_warn(port,
		     "failed to reach state TB_PORT_UP. Ignoring port...\n");
	return 0;
}

/**
 * tb_port_add_nfc_credits() - add/remove non flow controlled credits to port
 * @port: Port to add/remove NFC credits
 * @credits: Credits to add/remove
 *
 * Change the number of NFC credits allocated to @port by @credits. To remove
 * NFC credits pass a negative amount of credits.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
int tb_port_add_nfc_credits(struct tb_port *port, int credits)
{
	u32 nfc_credits;

	if (credits == 0 || port->sw->is_unplugged)
		return 0;

	/*
	 * USB4 restricts programming NFC buffers to lane adapters only
	 * so skip other ports.
	 */
	if (tb_switch_is_usb4(port->sw) && !tb_port_is_null(port))
		return 0;

	nfc_credits = port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
	if (credits < 0)
		credits = max_t(int, -nfc_credits, credits);

	nfc_credits += credits;

	tb_port_dbg(port, "adding %d NFC credits to %lu", credits,
		    port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK);

	port->config.nfc_credits &= ~ADP_CS_4_NFC_BUFFERS_MASK;
	port->config.nfc_credits |= nfc_credits;

	return tb_port_write(port, &port->config.nfc_credits,
			     TB_CFG_PORT, ADP_CS_4, 1);
}

/**
 * tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER
 * @port: Port whose counters to clear
 * @counter: Counter index to clear
 *
 * Return: Returns 0 on success or an error code on failure.
 */
int tb_port_clear_counter(struct tb_port *port, int counter)
{
	u32 zero[3] = { 0, 0, 0 };

	tb_port_dbg(port, "clearing counter %d\n", counter);

	return tb_port_write(port, zero, TB_CFG_COUNTERS, 3 * counter, 3);
}

/**
 * tb_port_unlock() - Unlock downstream port
 * @port: Port to unlock
 *
 * Needed for USB4 but can be called for any CIO/USB4 ports. Makes the
 * downstream router accessible for CM.
 */
int tb_port_unlock(struct tb_port *port)
{
	if (tb_switch_is_icm(port->sw))
		return 0;
	if (!tb_port_is_null(port))
		return -EINVAL;
	if (tb_switch_is_usb4(port->sw))
		return usb4_port_unlock(port);
	return 0;
}

static int __tb_port_enable(struct tb_port *port, bool enable)
{
	int ret;
	u32 phy;

	if (!tb_port_is_null(port))
		return -EINVAL;

	ret = tb_port_read(port, &phy, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	if (enable)
		phy &= ~LANE_ADP_CS_1_LD;
	else
		phy |= LANE_ADP_CS_1_LD;

	ret = tb_port_write(port, &phy, TB_CFG_PORT,
			    port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	tb_port_dbg(port, "lane %s\n", str_enabled_disabled(enable));
	return 0;
}

/**
 * tb_port_enable() - Enable lane adapter
 * @port: Port to enable (can be %NULL)
 *
 * This is used for lane 0 and 1 adapters to enable it.
 */
int tb_port_enable(struct tb_port *port)
{
	return __tb_port_enable(port, true);
}

/**
 * tb_port_disable() - Disable lane adapter
 * @port: Port to disable (can be %NULL)
 *
 * This is used for lane 0 and 1 adapters to disable it.
 */
int tb_port_disable(struct tb_port *port)
{
	return __tb_port_enable(port, false);
}

/*
 * tb_init_port() - initialize a port
 *
 * This is a helper method for tb_switch_alloc. Does not check or initialize
 * any downstream switches.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
static int tb_init_port(struct tb_port *port)
{
	int res;
	int cap;

	INIT_LIST_HEAD(&port->list);

	/* Control adapter does not have configuration space */
	if (!port->port)
		return 0;

	res = tb_port_read(port, &port->config, TB_CFG_PORT, 0, 8);
	if (res) {
		if (res == -ENODEV) {
			tb_dbg(port->sw->tb, " Port %d: not implemented\n",
			       port->port);
			port->disabled = true;
			return 0;
		}
		return res;
	}

	/* Port 0 is the switch itself and has no PHY. */
	if (port->config.type == TB_TYPE_PORT) {
		cap = tb_port_find_cap(port, TB_PORT_CAP_PHY);
		if (cap > 0)
			port->cap_phy = cap;
		else
			tb_port_WARN(port, "non switch port without a PHY\n");

		cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
		if (cap > 0)
			port->cap_usb4 = cap;

		/*
		 * USB4 ports the buffers allocated for the control path
		 * can be read from the path config space. Legacy
		 * devices we use hard-coded value.
		 */
		if (port->cap_usb4) {
			struct tb_regs_hop hop;

			if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))
				port->ctl_credits = hop.initial_credits;
		}
		if (!port->ctl_credits)
			port->ctl_credits = 2;

	} else {
		cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
		if (cap > 0)
			port->cap_adap = cap;
	}

	port->total_credits =
		(port->config.nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
		ADP_CS_4_TOTAL_BUFFERS_SHIFT;

	tb_dump_port(port->sw->tb, port);
	return 0;
}

static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
			       int max_hopid)
{
	int port_max_hopid;
	struct ida *ida;

	if (in) {
		port_max_hopid = port->config.max_in_hop_id;
		ida = &port->in_hopids;
	} else {
		port_max_hopid = port->config.max_out_hop_id;
		ida = &port->out_hopids;
	}

	/*
	 * NHI can use HopIDs 1-max for other adapters HopIDs 0-7 are
	 * reserved.
	 */
	if (!tb_port_is_nhi(port) && min_hopid < TB_PATH_MIN_HOPID)
		min_hopid = TB_PATH_MIN_HOPID;

	if (max_hopid < 0 || max_hopid > port_max_hopid)
		max_hopid = port_max_hopid;

	return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL);
}

/**
 * tb_port_alloc_in_hopid() - Allocate input HopID from port
 * @port: Port to allocate HopID for
 * @min_hopid: Minimum acceptable input HopID
 * @max_hopid: Maximum acceptable input HopID
 *
 * Return: HopID between @min_hopid and @max_hopid or negative errno in
 * case of error.
 */
int tb_port_alloc_in_hopid(struct tb_port *port, int min_hopid, int max_hopid)
{
	return tb_port_alloc_hopid(port, true, min_hopid, max_hopid);
}

/**
 * tb_port_alloc_out_hopid() - Allocate output HopID from port
 * @port: Port to allocate HopID for
 * @min_hopid: Minimum acceptable output HopID
 * @max_hopid: Maximum acceptable output HopID
 *
 * Return: HopID between @min_hopid and @max_hopid or negative errno in
 * case of error.
 */
int tb_port_alloc_out_hopid(struct tb_port *port, int min_hopid, int max_hopid)
{
	return tb_port_alloc_hopid(port, false, min_hopid, max_hopid);
}

/**
 * tb_port_release_in_hopid() - Release allocated input HopID from port
 * @port: Port whose HopID to release
 * @hopid: HopID to release
 */
void tb_port_release_in_hopid(struct tb_port *port, int hopid)
{
	ida_simple_remove(&port->in_hopids, hopid);
}

/**
 * tb_port_release_out_hopid() - Release allocated output HopID from port
 * @port: Port whose HopID to release
 * @hopid: HopID to release
 */
void tb_port_release_out_hopid(struct tb_port *port, int hopid)
{
	ida_simple_remove(&port->out_hopids, hopid);
}

static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
					  const struct tb_switch *sw)
{
	u64 mask = (1ULL << parent->config.depth * 8) - 1;
	return (tb_route(parent) & mask) == (tb_route(sw) & mask);
}

/**
 * tb_next_port_on_path() - Return next port for given port on a path
 * @start: Start port of the walk
 * @end: End port of the walk
 * @prev: Previous port (%NULL if this is the first)
 *
 * This function can be used to walk from one port to another if they
 * are connected through zero or more switches. If the @prev is dual
 * link port, the function follows that link and returns another end on
 * that same link.
 *
 * If the @end port has been reached, return %NULL.
 *
 * Domain tb->lock must be held when this function is called.
 */
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
				     struct tb_port *prev)
{
	struct tb_port *next;

	if (!prev)
		return start;

	if (prev->sw == end->sw) {
		if (prev == end)
			return NULL;
		return end;
	}

	if (tb_switch_is_reachable(prev->sw, end->sw)) {
		next = tb_port_at(tb_route(end->sw), prev->sw);
		/* Walk down the topology if next == prev */
		if (prev->remote &&
		    (next == prev || next->dual_link_port == prev))
			next = prev->remote;
	} else {
		if (tb_is_upstream_port(prev)) {
			next = prev->remote;
		} else {
			next = tb_upstream_port(prev->sw);
			/*
			 * Keep the same link if prev and next are both
			 * dual link ports.
			 */
			if (next->dual_link_port &&
			    next->link_nr != prev->link_nr) {
				next = next->dual_link_port;
			}
		}
	}

	return next != prev ? next : NULL;
}
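
/*
 * A minimal usage sketch for tb_next_port_on_path() (illustrative only;
 * src and dst are assumed to be ports of the same path, and the driver
 * wraps this same pattern in a tb_for_each_port_on_path() helper in tb.h):
 *
 *	struct tb_port *p = NULL;
 *
 *	while ((p = tb_next_port_on_path(src, dst, p)))
 *		handle_port(p);
 */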

/**
 * tb_port_get_link_speed() - Get current link speed
 * @port: Port to check (USB4 or CIO)
 *
 * Returns link speed in Gb/s or negative errno in case of failure.
 */
int tb_port_get_link_speed(struct tb_port *port)
{
	u32 val, speed;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	speed = (val & LANE_ADP_CS_1_CURRENT_SPEED_MASK) >>
		LANE_ADP_CS_1_CURRENT_SPEED_SHIFT;

	switch (speed) {
	case LANE_ADP_CS_1_CURRENT_SPEED_GEN4:
		return 40;
	case LANE_ADP_CS_1_CURRENT_SPEED_GEN3:
		return 20;
	default:
		return 10;
	}
}

/**
 * tb_port_get_link_generation() - Returns link generation
 * @port: Lane adapter
 *
 * Returns link generation as number or negative errno in case of
 * failure. Does not distinguish between Thunderbolt 1 and Thunderbolt 2
 * links so for those always returns 2.
 */
int tb_port_get_link_generation(struct tb_port *port)
{
	int ret;

	ret = tb_port_get_link_speed(port);
	if (ret < 0)
		return ret;

	switch (ret) {
	case 40:
		return 4;
	case 20:
		return 3;
	default:
		return 2;
	}
}

static const char *width_name(enum tb_link_width width)
{
	switch (width) {
	case TB_LINK_WIDTH_SINGLE:
		return "symmetric, single lane";
	case TB_LINK_WIDTH_DUAL:
		return "symmetric, dual lanes";
	case TB_LINK_WIDTH_ASYM_TX:
		return "asymmetric, 3 transmitters, 1 receiver";
	case TB_LINK_WIDTH_ASYM_RX:
		return "asymmetric, 3 receivers, 1 transmitter";
	default:
		return "unknown";
	}
}

/**
 * tb_port_get_link_width() - Get current link width
 * @port: Port to check (USB4 or CIO)
 *
 * Returns link width. Return the link width as encoded in &enum
 * tb_link_width or negative errno in case of failure.
 */
int tb_port_get_link_width(struct tb_port *port)
{
	u32 val;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	/* Matches the values in enum tb_link_width */
	return (val & LANE_ADP_CS_1_CURRENT_WIDTH_MASK) >>
		LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
}

/**
 * tb_port_width_supported() - Is the given link width supported
 * @port: Port to check
 * @width: Widths to check (bitmask)
 *
 * Can be called to any lane adapter. Checks if given @width is
 * supported by the hardware and returns %true if it is.
 */
bool tb_port_width_supported(struct tb_port *port, unsigned int width)
{
	u32 phy, widths;
	int ret;

	if (!port->cap_phy)
		return false;

	if (width & (TB_LINK_WIDTH_ASYM_TX | TB_LINK_WIDTH_ASYM_RX)) {
		if (tb_port_get_link_generation(port) < 4 ||
		    !usb4_port_asym_supported(port))
			return false;
	}

	ret = tb_port_read(port, &phy, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_0, 1);
	if (ret)
		return false;

	/*
	 * The field encoding is the same as &enum tb_link_width (which is
	 * passed to @width).
	 */
	widths = FIELD_GET(LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK, phy);
	return widths & width;
}

/**
 * tb_port_set_link_width() - Set target link width of the lane adapter
 * @port: Lane adapter
 * @width: Target link width
 *
 * Sets the target link width of the lane adapter to @width. Does not
 * enable/disable lane bonding. For that call tb_port_set_lane_bonding().
 *
 * Return: %0 in case of success and negative errno in case of error
 */
int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
{
	u32 val;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	val &= ~LANE_ADP_CS_1_TARGET_WIDTH_MASK;
	switch (width) {
	case TB_LINK_WIDTH_SINGLE:
		/* Gen 4 link cannot be single */
		if (tb_port_get_link_generation(port) >= 4)
			return -EOPNOTSUPP;
		val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
		break;

	case TB_LINK_WIDTH_DUAL:
		if (tb_port_get_link_generation(port) >= 4)
			return usb4_port_asym_set_link_width(port, width);
		val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
		break;

	case TB_LINK_WIDTH_ASYM_TX:
	case TB_LINK_WIDTH_ASYM_RX:
		return usb4_port_asym_set_link_width(port, width);

	default:
		return -EINVAL;
	}

	return tb_port_write(port, &val, TB_CFG_PORT,
			     port->cap_phy + LANE_ADP_CS_1, 1);
}

/**
 * tb_port_set_lane_bonding() - Enable/disable lane bonding
 * @port: Lane adapter
 * @bonding: enable/disable bonding
 *
 * Enables or disables lane bonding. This should be called after target
 * link width has been set (tb_port_set_link_width()). Note in most
 * cases one should use tb_port_lane_bonding_enable() instead to enable
 * lane bonding.
 *
 * Return: %0 in case of success and negative errno in case of error
 */
static int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
{
	u32 val;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	if (bonding)
		val |= LANE_ADP_CS_1_LB;
	else
		val &= ~LANE_ADP_CS_1_LB;

	return tb_port_write(port, &val, TB_CFG_PORT,
			     port->cap_phy + LANE_ADP_CS_1, 1);
}

/**
 * tb_port_lane_bonding_enable() - Enable bonding on port
 * @port: port to enable
 *
 * Enable bonding by setting the link width of the port and the other
 * port in case of dual link port. Does not wait for the link to
 * actually reach the bonded state so caller needs to call
 * tb_port_wait_for_link_width() before enabling any paths through the
 * link to make sure the link is in expected state.
 *
 * Return: %0 in case of success and negative errno in case of error
 */
int tb_port_lane_bonding_enable(struct tb_port *port)
{
	enum tb_link_width width;
	int ret;

	/*
	 * Enable lane bonding for both links if not already enabled by
	 * for example the boot firmware.
	 */
	width = tb_port_get_link_width(port);
	if (width == TB_LINK_WIDTH_SINGLE) {
		ret = tb_port_set_link_width(port, TB_LINK_WIDTH_DUAL);
		if (ret)
			goto err_lane0;
	}

	width = tb_port_get_link_width(port->dual_link_port);
	if (width == TB_LINK_WIDTH_SINGLE) {
		ret = tb_port_set_link_width(port->dual_link_port,
					     TB_LINK_WIDTH_DUAL);
		if (ret)
			goto err_lane0;
	}

	/*
	 * Only set bonding if the link was not already bonded. This
	 * avoids the lane adapter to re-enter bonding state.
	 */
	if (width == TB_LINK_WIDTH_SINGLE && !tb_is_upstream_port(port)) {
		ret = tb_port_set_lane_bonding(port, true);
		if (ret)
			goto err_lane1;
	}

	/*
	 * When lane 0 bonding is set it will affect lane 1 too so
	 * update both.
	 */
	port->bonded = true;
	port->dual_link_port->bonded = true;

	return 0;

err_lane1:
	tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
err_lane0:
	tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);

	return ret;
}

/**
 * tb_port_lane_bonding_disable() - Disable bonding on port
 * @port: port to disable
 *
 * Disable bonding by setting the link width of the port and the
 * other port in case of dual link port.
 */
void tb_port_lane_bonding_disable(struct tb_port *port)
{
	tb_port_set_lane_bonding(port, false);
	tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
	tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);
	port->dual_link_port->bonded = false;
	port->bonded = false;
}

/**
 * tb_port_wait_for_link_width() - Wait until link reaches specific width
 * @port: Port to wait for
 * @width: Expected link width (bitmask)
 * @timeout_msec: Timeout in ms how long to wait
 *
 * Should be used after both ends of the link have been bonded (or
 * bonding has been disabled) to wait until the link actually reaches
 * the expected state. Returns %-ETIMEDOUT if the width was not reached
 * within the given timeout, %0 if it did. Can be passed a mask of
 * expected widths and succeeds if any of the widths is reached.
 */
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width,
				int timeout_msec)
{
	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
	int ret;

	/* Gen 4 link does not support single lane */
	if ((width & TB_LINK_WIDTH_SINGLE) &&
	    tb_port_get_link_generation(port) >= 4)
		return -EOPNOTSUPP;

	do {
		ret = tb_port_get_link_width(port);
		if (ret < 0) {
			/*
			 * Sometimes we get port locked error when
			 * polling the lanes so we can ignore it and
			 * retry.
			 */
			if (ret != -EACCES)
				return ret;
		} else if (ret & width) {
			return 0;
		}

		usleep_range(1000, 2000);
	} while (ktime_before(ktime_get(), timeout));

	return -ETIMEDOUT;
}
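
/*
 * Rough sketch of how the lane bonding helpers above are typically
 * combined (illustration only; the exact call sites, port variables and
 * timeout value here are assumptions, not taken from this file):
 *
 *	tb_port_lane_bonding_enable(up);
 *	tb_port_lane_bonding_enable(down);
 *	tb_port_wait_for_link_width(down, TB_LINK_WIDTH_DUAL, 100);
 *	tb_port_update_credits(down);
 */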

static int tb_port_do_update_credits(struct tb_port *port)
{
	u32 nfc_credits;
	int ret;

	ret = tb_port_read(port, &nfc_credits, TB_CFG_PORT, ADP_CS_4, 1);
	if (ret)
		return ret;

	if (nfc_credits != port->config.nfc_credits) {
		u32 total;

		total = (nfc_credits & ADP_CS_4_TOTAL_BUFFERS_MASK) >>
			ADP_CS_4_TOTAL_BUFFERS_SHIFT;

		tb_port_dbg(port, "total credits changed %u -> %u\n",
			    port->total_credits, total);

		port->config.nfc_credits = nfc_credits;
		port->total_credits = total;
	}

	return 0;
}

/**
 * tb_port_update_credits() - Re-read port total credits
 * @port: Port to update
 *
 * After the link is bonded (or bonding was disabled) the port total
 * credits may change, so this function needs to be called to re-read
 * the credits. Updates also the second lane adapter.
 */
int tb_port_update_credits(struct tb_port *port)
{
	int ret;

	ret = tb_port_do_update_credits(port);
	if (ret)
		return ret;
	return tb_port_do_update_credits(port->dual_link_port);
}

static int tb_port_start_lane_initialization(struct tb_port *port)
{
	int ret;

	if (tb_switch_is_usb4(port->sw))
		return 0;

	ret = tb_lc_start_lane_initialization(port);
	return ret == -EINVAL ? 0 : ret;
}

/*
 * Returns true if the port had something (router, XDomain) connected
 * before suspend.
 */
static bool tb_port_resume(struct tb_port *port)
{
	bool has_remote = tb_port_has_remote(port);

	if (port->usb4) {
		usb4_port_device_resume(port->usb4);
	} else if (!has_remote) {
		/*
		 * For disconnected downstream lane adapters start lane
		 * initialization now so we detect future connects.
		 *
		 * For XDomain start the lane initialization now so the
		 * link gets re-established.
		 *
		 * This is only needed for non-USB4 ports.
		 */
		if (!tb_is_upstream_port(port) || port->xdomain)
			tb_port_start_lane_initialization(port);
	}

	return has_remote || port->xdomain;
}

/**
 * tb_port_is_enabled() - Is the adapter port enabled
 * @port: Port to check
 */
bool tb_port_is_enabled(struct tb_port *port)
{
	switch (port->config.type) {
	case TB_TYPE_PCIE_UP:
	case TB_TYPE_PCIE_DOWN:
		return tb_pci_port_is_enabled(port);

	case TB_TYPE_DP_HDMI_IN:
	case TB_TYPE_DP_HDMI_OUT:
		return tb_dp_port_is_enabled(port);

	case TB_TYPE_USB3_UP:
	case TB_TYPE_USB3_DOWN:
		return tb_usb3_port_is_enabled(port);

	default:
		return false;
	}
}

/**
 * tb_usb3_port_is_enabled() - Is the USB3 adapter port enabled
 * @port: USB3 adapter port to check
 */
bool tb_usb3_port_is_enabled(struct tb_port *port)
{
	u32 data;

	if (tb_port_read(port, &data, TB_CFG_PORT,
			 port->cap_adap + ADP_USB3_CS_0, 1))
		return false;

	return !!(data & ADP_USB3_CS_0_PE);
}

/**
 * tb_usb3_port_enable() - Enable USB3 adapter port
 * @port: USB3 adapter port to enable
 * @enable: Enable/disable the USB3 adapter
 */
int tb_usb3_port_enable(struct tb_port *port, bool enable)
{
	u32 word = enable ? (ADP_USB3_CS_0_PE | ADP_USB3_CS_0_V)
			  : ADP_USB3_CS_0_V;

	if (!port->cap_adap)
		return -ENXIO;
	return tb_port_write(port, &word, TB_CFG_PORT,
			     port->cap_adap + ADP_USB3_CS_0, 1);
}

/**
 * tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
 * @port: PCIe port to check
 */
bool tb_pci_port_is_enabled(struct tb_port *port)
{
	u32 data;

	if (tb_port_read(port, &data, TB_CFG_PORT,
			 port->cap_adap + ADP_PCIE_CS_0, 1))
		return false;

	return !!(data & ADP_PCIE_CS_0_PE);
}

/**
 * tb_pci_port_enable() - Enable PCIe adapter port
 * @port: PCIe port to enable
 * @enable: Enable/disable the PCIe adapter
 */
int tb_pci_port_enable(struct tb_port *port, bool enable)
{
	u32 word = enable ? ADP_PCIE_CS_0_PE : 0x0;

	if (!port->cap_adap)
		return -ENXIO;
	return tb_port_write(port, &word, TB_CFG_PORT,
			     port->cap_adap + ADP_PCIE_CS_0, 1);
}

/**
 * tb_dp_port_hpd_is_active() - Is HPD already active
 * @port: DP out port to check
 *
 * Checks if the DP OUT adapter port has HPD bit already set.
 */
int tb_dp_port_hpd_is_active(struct tb_port *port)
{
	u32 data;
	int ret;

	ret = tb_port_read(port, &data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_2, 1);
	if (ret)
		return ret;

	return !!(data & ADP_DP_CS_2_HPD);
}

/**
 * tb_dp_port_hpd_clear() - Clear HPD from DP IN port
 * @port: Port to clear HPD
 *
 * If the DP IN port has HPD set, this function can be used to clear it.
 */
int tb_dp_port_hpd_clear(struct tb_port *port)
{
	u32 data;
	int ret;

	ret = tb_port_read(port, &data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_3, 1);
	if (ret)
		return ret;

	data |= ADP_DP_CS_3_HPDC;
	return tb_port_write(port, &data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_3, 1);
}

/**
 * tb_dp_port_set_hops() - Set video/aux Hop IDs for DP port
 * @port: DP IN/OUT port to set hops
 * @video: Video Hop ID
 * @aux_tx: AUX TX Hop ID
 * @aux_rx: AUX RX Hop ID
 *
 * Programs specified Hop IDs for DP IN/OUT port. Can be called for USB4
 * router DP adapters too but does not program the values as the fields
 * are read-only.
 */
int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
			unsigned int aux_tx, unsigned int aux_rx)
{
	u32 data[2];
	int ret;

	if (tb_switch_is_usb4(port->sw))
		return 0;

	ret = tb_port_read(port, data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
	if (ret)
		return ret;

	data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;

	data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
		ADP_DP_CS_0_VIDEO_HOPID_MASK;
	data[1] |= aux_tx & ADP_DP_CS_1_AUX_TX_HOPID_MASK;
	data[1] |= (aux_rx << ADP_DP_CS_1_AUX_RX_HOPID_SHIFT) &
		ADP_DP_CS_1_AUX_RX_HOPID_MASK;

	return tb_port_write(port, data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}

/**
 * tb_dp_port_is_enabled() - Is DP adapter port enabled
 * @port: DP adapter port to check
 */
bool tb_dp_port_is_enabled(struct tb_port *port)
{
	u32 data[2];

	if (tb_port_read(port, data, TB_CFG_PORT, port->cap_adap + ADP_DP_CS_0,
			 ARRAY_SIZE(data)))
		return false;

	return !!(data[0] & (ADP_DP_CS_0_VE | ADP_DP_CS_0_AE));
}

/**
 * tb_dp_port_enable() - Enables/disables DP paths of a port
 * @port: DP IN/OUT port
 * @enable: Enable/disable DP path
 *
 * Once Hop IDs are programmed DP paths can be enabled or disabled by
 * calling this function.
 */
int tb_dp_port_enable(struct tb_port *port, bool enable)
{
	u32 data[2];
	int ret;

	ret = tb_port_read(port, data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
	if (ret)
		return ret;

	if (enable)
		data[0] |= ADP_DP_CS_0_VE | ADP_DP_CS_0_AE;
	else
		data[0] &= ~(ADP_DP_CS_0_VE | ADP_DP_CS_0_AE);

	return tb_port_write(port, data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}

/* switch utility functions */

static const char *tb_switch_generation_name(const struct tb_switch *sw)
{
	switch (sw->generation) {
	case 1:
		return "Thunderbolt 1";
	case 2:
		return "Thunderbolt 2";
	case 3:
		return "Thunderbolt 3";
	case 4:
		return "USB4";
	default:
		return "Unknown";
	}
}

static void tb_dump_switch(const struct tb *tb, const struct tb_switch *sw)
{
	const struct tb_regs_switch_header *regs = &sw->config;

	tb_dbg(tb, "%s Switch: %x:%x (Revision: %d, TB Version: %d)\n",
	       tb_switch_generation_name(sw), regs->vendor_id, regs->device_id,
	       regs->revision, regs->thunderbolt_version);
	tb_dbg(tb, "  Max Port Number: %d\n", regs->max_port_number);
	tb_dbg(tb, "  Config:\n");
	tb_dbg(tb,
	       "   Upstream Port Number: %d Depth: %d Route String: %#llx Enabled: %d, PlugEventsDelay: %dms\n",
	       regs->upstream_port_number, regs->depth,
	       (((u64)regs->route_hi) << 32) | regs->route_lo,
	       regs->enabled, regs->plug_events_delay);
	tb_dbg(tb, "   unknown1: %#x unknown4: %#x\n",
	       regs->__unknown1, regs->__unknown4);
}

/**
 * tb_switch_reset() - reconfigure route, enable and send TB_CFG_PKG_RESET
 * @sw: Switch to reset
 *
 * Return: Returns 0 on success or an error code on failure.
 */
int tb_switch_reset(struct tb_switch *sw)
{
	struct tb_cfg_result res;

	if (sw->generation > 1)
		return 0;

	tb_sw_dbg(sw, "resetting switch\n");

	res.err = tb_sw_write(sw, ((u32 *) &sw->config) + 2,
			      TB_CFG_SWITCH, 2, 2);
	if (res.err)
		return res.err;
	res = tb_cfg_reset(sw->tb->ctl, tb_route(sw));
	if (res.err > 0)
		return -EIO;
	return res.err;
}

/**
 * tb_switch_wait_for_bit() - Wait for specified value of bits in offset
 * @sw: Router to read the offset value from
 * @offset: Offset in the router config space to read from
 * @bit: Bit mask in the offset to wait for
 * @value: Value of the bits to wait for
 * @timeout_msec: Timeout in ms how long to wait
 *
 * Wait till the specified bits in specified offset reach specified value.
 * Returns %0 in case of success, %-ETIMEDOUT if the @value was not reached
 * within the given timeout or a negative errno in case of failure.
 */
int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
			   u32 value, int timeout_msec)
{
	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);

	do {
		u32 val;
		int ret;

		ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
		if (ret)
			return ret;

		if ((val & bit) == value)
			return 0;

		usleep_range(50, 100);
	} while (ktime_before(ktime_get(), timeout));

	return -ETIMEDOUT;
}
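
/*
 * Illustrative sketch (not part of the driver): a caller typically polls a
 * router config space register with tb_switch_wait_for_bit(). The offset
 * and bit below are hypothetical placeholders; real callers pass offsets
 * from the router register definitions:
 *
 *	static int example_wait_router_ready(struct tb_switch *sw)
 *	{
 *		const u32 offset = 0x5;		// hypothetical offset
 *		const u32 ready = BIT(31);	// hypothetical "ready" bit
 *
 *		// Wait up to 100 ms for the bit to read back as 1
 *		return tb_switch_wait_for_bit(sw, offset, ready, ready, 100);
 *	}
 */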

/*
 * tb_plug_events_active() - enable/disable plug events on a switch
 *
 * Also configures a sane plug_events_delay of 255ms.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
static int tb_plug_events_active(struct tb_switch *sw, bool active)
{
	u32 data;
	int res;

	if (tb_switch_is_icm(sw) || tb_switch_is_usb4(sw))
		return 0;

	sw->config.plug_events_delay = 0xff;
	res = tb_sw_write(sw, ((u32 *) &sw->config) + 4, TB_CFG_SWITCH, 4, 1);
	if (res)
		return res;

	res = tb_sw_read(sw, &data, TB_CFG_SWITCH, sw->cap_plug_events + 1, 1);
	if (res)
		return res;

	if (active) {
		data = data & 0xFFFFFF83;
		switch (sw->config.device_id) {
		case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
		case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE:
		case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
			break;
		default:
			/*
			 * Skip Alpine Ridge, it needs to have vendor
			 * specific USB hotplug event enabled for the
			 * internal xHCI to work.
			 */
			if (!tb_switch_is_alpine_ridge(sw))
				data |= TB_PLUG_EVENTS_USB_DISABLE;
		}
	} else {
		data = data | 0x7c;
	}
	return tb_sw_write(sw, &data, TB_CFG_SWITCH,
			   sw->cap_plug_events + 1, 1);
}

static ssize_t authorized_show(struct device *dev,
			       struct device_attribute *attr,
			       char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sysfs_emit(buf, "%u\n", sw->authorized);
}

static int disapprove_switch(struct device *dev, void *not_used)
{
	char *envp[] = { "AUTHORIZED=0", NULL };
	struct tb_switch *sw;

	sw = tb_to_switch(dev);
	if (sw && sw->authorized) {
		int ret;

		/* First children */
		ret = device_for_each_child_reverse(&sw->dev, NULL, disapprove_switch);
		if (ret)
			return ret;

		ret = tb_domain_disapprove_switch(sw->tb, sw);
		if (ret)
			return ret;

		sw->authorized = 0;
		kobject_uevent_env(&sw->dev.kobj, KOBJ_CHANGE, envp);
	}

	return 0;
}

static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
{
	char envp_string[13];
	int ret = -EINVAL;
	char *envp[] = { envp_string, NULL };

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	if (!!sw->authorized == !!val)
		goto unlock;

	switch (val) {
	/* Disapprove switch */
	case 0:
		if (tb_route(sw)) {
			ret = disapprove_switch(&sw->dev, NULL);
			goto unlock;
		}
		break;

	/* Approve switch */
	case 1:
		if (sw->key)
			ret = tb_domain_approve_switch_key(sw->tb, sw);
		else
			ret = tb_domain_approve_switch(sw->tb, sw);
		break;

	/* Challenge switch */
	case 2:
		if (sw->key)
			ret = tb_domain_challenge_switch_key(sw->tb, sw);
		break;

	default:
		break;
	}

	if (!ret) {
		sw->authorized = val;
		/*
		 * Notify status change to the userspace, informing the new
		 * value of /sys/bus/thunderbolt/devices/.../authorized.
		 */
		sprintf(envp_string, "AUTHORIZED=%u", sw->authorized);
		kobject_uevent_env(&sw->dev.kobj, KOBJ_CHANGE, envp);
	}

unlock:
	mutex_unlock(&sw->tb->lock);
	return ret;
}

static ssize_t authorized_store(struct device *dev,
				struct device_attribute *attr,
				const char *buf, size_t count)
{
	struct tb_switch *sw = tb_to_switch(dev);
	unsigned int val;
	ssize_t ret;

	ret = kstrtouint(buf, 0, &val);
	if (ret)
		return ret;
	if (val > 2)
		return -EINVAL;

	pm_runtime_get_sync(&sw->dev);
	ret = tb_switch_set_authorized(sw, val);
	pm_runtime_mark_last_busy(&sw->dev);
	pm_runtime_put_autosuspend(&sw->dev);

	return ret ? ret : count;
}
static DEVICE_ATTR_RW(authorized);
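
/*
 * Illustrative sketch (not part of the driver): userspace drives the
 * attribute above by writing "0" (disapprove), "1" (approve) or "2"
 * (challenge using the stored key) to the authorized file. The device
 * name 0-1 below is a hypothetical example:
 *
 *	#include <stdio.h>
 *
 *	static int approve_device(void)
 *	{
 *		FILE *f = fopen("/sys/bus/thunderbolt/devices/0-1/authorized", "w");
 *
 *		if (!f)
 *			return -1;
 *		fputs("1", f);	// 1 == approve the device
 *		return fclose(f);
 *	}
 */
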
static ssize_t boot_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %u \n " , sw - > boot ) ;
2018-01-22 13:50:09 +03:00
}
static DEVICE_ATTR_RO ( boot ) ;
2017-06-06 15:25:01 +03:00
static ssize_t device_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2014-06-04 00:04:04 +04:00
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %#x \n " , sw - > device ) ;
2017-06-06 15:25:01 +03:00
}
static DEVICE_ATTR_RO ( device ) ;
2017-06-06 15:25:05 +03:00
static ssize_t
device_name_show ( struct device * dev , struct device_attribute * attr , char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %s \n " , sw - > device_name ? : " " ) ;
2017-06-06 15:25:05 +03:00
}
static DEVICE_ATTR_RO ( device_name ) ;
2019-10-03 20:32:40 +03:00
static ssize_t
generation_show ( struct device * dev , struct device_attribute * attr , char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %u \n " , sw - > generation ) ;
2019-10-03 20:32:40 +03:00
}
static DEVICE_ATTR_RO ( generation ) ;
2017-06-06 15:25:16 +03:00
static ssize_t key_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
ssize_t ret ;
2019-03-19 17:48:41 +03:00
if ( ! mutex_trylock ( & sw - > tb - > lock ) )
return restart_syscall ( ) ;
2017-06-06 15:25:16 +03:00
if ( sw - > key )
2022-09-22 17:32:39 +03:00
ret = sysfs_emit ( buf , " %*phN \n " , TB_SWITCH_KEY_SIZE , sw - > key ) ;
2017-06-06 15:25:16 +03:00
else
2022-09-22 17:32:39 +03:00
ret = sysfs_emit ( buf , " \n " ) ;
2017-06-06 15:25:16 +03:00
2019-03-19 17:48:41 +03:00
mutex_unlock ( & sw - > tb - > lock ) ;
2017-06-06 15:25:16 +03:00
return ret ;
}
static ssize_t key_store ( struct device * dev , struct device_attribute * attr ,
const char * buf , size_t count )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
u8 key [ TB_SWITCH_KEY_SIZE ] ;
ssize_t ret = count ;
2017-08-15 08:19:20 +03:00
bool clear = false ;
2017-06-06 15:25:16 +03:00
2017-08-15 08:19:20 +03:00
if ( ! strcmp ( buf , " \n " ) )
clear = true ;
else if ( hex2bin ( key , buf , sizeof ( key ) ) )
2017-06-06 15:25:16 +03:00
return - EINVAL ;
2019-03-19 17:48:41 +03:00
if ( ! mutex_trylock ( & sw - > tb - > lock ) )
return restart_syscall ( ) ;
2017-06-06 15:25:16 +03:00
if ( sw - > authorized ) {
ret = - EBUSY ;
} else {
kfree ( sw - > key ) ;
2017-08-15 08:19:20 +03:00
if ( clear ) {
sw - > key = NULL ;
} else {
sw - > key = kmemdup ( key , sizeof ( key ) , GFP_KERNEL ) ;
if ( ! sw - > key )
ret = - ENOMEM ;
}
2017-06-06 15:25:16 +03:00
}
2019-03-19 17:48:41 +03:00
mutex_unlock ( & sw - > tb - > lock ) ;
2017-06-06 15:25:16 +03:00
return ret ;
}
2017-08-15 08:19:12 +03:00
static DEVICE_ATTR ( key , 0600 , key_show , key_store ) ;
2017-06-06 15:25:16 +03:00
2019-03-21 20:03:00 +03:00
static ssize_t speed_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %u.0 Gb/s \n " , sw - > link_speed ) ;
2019-03-21 20:03:00 +03:00
}

/*
 * Currently all lanes must run at the same speed but we expose here
 * both directions to allow possible asymmetric links in the future.
 */
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);

static ssize_t rx_lanes_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
2019-03-21 20:03:00 +03:00
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-29 13:00:09 +03:00
unsigned int width ;
switch ( sw - > link_width ) {
case TB_LINK_WIDTH_SINGLE :
case TB_LINK_WIDTH_ASYM_TX :
width = 1 ;
break ;
case TB_LINK_WIDTH_DUAL :
width = 2 ;
break ;
case TB_LINK_WIDTH_ASYM_RX :
width = 3 ;
break ;
default :
WARN_ON_ONCE ( 1 ) ;
return - EINVAL ;
}
2019-03-21 20:03:00 +03:00
2022-09-29 13:00:09 +03:00
return sysfs_emit ( buf , " %u \n " , width ) ;
2019-03-21 20:03:00 +03:00
}
2022-09-29 13:00:09 +03:00
static DEVICE_ATTR ( rx_lanes , 0444 , rx_lanes_show , NULL ) ;
2019-03-21 20:03:00 +03:00
2022-09-29 13:00:09 +03:00
static ssize_t tx_lanes_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
unsigned int width ;
switch ( sw - > link_width ) {
case TB_LINK_WIDTH_SINGLE :
case TB_LINK_WIDTH_ASYM_RX :
width = 1 ;
break ;
case TB_LINK_WIDTH_DUAL :
width = 2 ;
break ;
case TB_LINK_WIDTH_ASYM_TX :
width = 3 ;
break ;
default :
WARN_ON_ONCE ( 1 ) ;
return - EINVAL ;
}
return sysfs_emit ( buf , " %u \n " , width ) ;
}
static DEVICE_ATTR ( tx_lanes , 0444 , tx_lanes_show , NULL ) ;
2019-03-21 20:03:00 +03:00
2017-06-06 15:25:17 +03:00
static ssize_t nvm_authenticate_show ( struct device * dev ,
struct device_attribute * attr , char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
u32 status ;
nvm_get_auth_status ( sw , & status ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %#x \n " , status ) ;
2017-06-06 15:25:17 +03:00
}
2020-06-23 19:14:29 +03:00
static ssize_t nvm_authenticate_sysfs ( struct device * dev , const char * buf ,
bool disconnect )
2017-06-06 15:25:17 +03:00
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2021-04-12 15:25:08 +03:00
int val , ret ;
2017-06-06 15:25:17 +03:00
2019-05-28 18:56:20 +03:00
pm_runtime_get_sync ( & sw - > dev ) ;
if ( ! mutex_trylock ( & sw - > tb - > lock ) ) {
ret = restart_syscall ( ) ;
goto exit_rpm ;
}
2017-06-06 15:25:17 +03:00
2022-09-02 12:40:08 +03:00
if ( sw - > no_nvm_upgrade ) {
ret = - EOPNOTSUPP ;
goto exit_unlock ;
}
2017-06-06 15:25:17 +03:00
/* If NVMem devices are not yet added */
if ( ! sw - > nvm ) {
ret = - EAGAIN ;
goto exit_unlock ;
}
2020-06-23 19:14:28 +03:00
ret = kstrtoint ( buf , 10 , & val ) ;
2017-06-06 15:25:17 +03:00
if ( ret )
goto exit_unlock ;
/* Always clear the authentication status */
nvm_clear_auth_status ( sw ) ;
2020-06-23 19:14:28 +03:00
if ( val > 0 ) {
2021-04-12 15:25:08 +03:00
if ( val = = AUTHENTICATE_ONLY ) {
if ( disconnect )
2020-06-23 19:14:28 +03:00
ret = - EINVAL ;
2021-04-12 15:25:08 +03:00
else
ret = nvm_authenticate ( sw , true ) ;
} else {
if ( ! sw - > nvm - > flushed ) {
if ( ! sw - > nvm - > buf ) {
ret = - EINVAL ;
goto exit_unlock ;
}
ret = nvm_validate_and_write ( sw ) ;
if ( ret | | val = = WRITE_ONLY )
goto exit_unlock ;
2020-06-23 19:14:28 +03:00
}
2021-04-12 15:25:08 +03:00
if ( val = = WRITE_AND_AUTHENTICATE ) {
if ( disconnect )
ret = tb_lc_force_power ( sw ) ;
else
ret = nvm_authenticate ( sw , false ) ;
2020-06-23 19:14:29 +03:00
}
2020-06-23 19:14:28 +03:00
}
2017-06-06 15:25:17 +03:00
}
exit_unlock :
2019-03-19 17:48:41 +03:00
mutex_unlock ( & sw - > tb - > lock ) ;
2019-05-28 18:56:20 +03:00
exit_rpm :
pm_runtime_mark_last_busy ( & sw - > dev ) ;
pm_runtime_put_autosuspend ( & sw - > dev ) ;
2017-06-06 15:25:17 +03:00
2020-06-23 19:14:29 +03:00
return ret ;
}
static ssize_t nvm_authenticate_store ( struct device * dev ,
struct device_attribute * attr , const char * buf , size_t count )
{
int ret = nvm_authenticate_sysfs ( dev , buf , false ) ;
2017-06-06 15:25:17 +03:00
if ( ret )
return ret ;
return count ;
}
static DEVICE_ATTR_RW ( nvm_authenticate ) ;
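
/*
 * Illustrative sketch (not part of the driver): nvm_authenticate_store()
 * above parses a decimal value and, depending on the WRITE_ONLY,
 * WRITE_AND_AUTHENTICATE and AUTHENTICATE_ONLY constants defined earlier
 * in this file, writes the staged NVM image and/or starts authentication.
 * A hypothetical in-kernel caller mirroring the write-and-authenticate
 * path could look like this (locking and error details trimmed):
 *
 *	static int example_flash_and_authenticate(struct tb_switch *sw)
 *	{
 *		int ret;
 *
 *		// The image is expected to be staged in sw->nvm->buf first
 *		ret = nvm_validate_and_write(sw);
 *		if (ret)
 *			return ret;
 *		// Same call the WRITE_AND_AUTHENTICATE path above makes
 *		return nvm_authenticate(sw, false);
 *	}
 */
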
static ssize_t nvm_authenticate_on_disconnect_show ( struct device * dev ,
struct device_attribute * attr , char * buf )
{
return nvm_authenticate_show ( dev , attr , buf ) ;
}
static ssize_t nvm_authenticate_on_disconnect_store ( struct device * dev ,
struct device_attribute * attr , const char * buf , size_t count )
{
int ret ;
ret = nvm_authenticate_sysfs ( dev , buf , true ) ;
return ret ? ret : count ;
}
static DEVICE_ATTR_RW ( nvm_authenticate_on_disconnect ) ;
2017-06-06 15:25:17 +03:00
static ssize_t nvm_version_show ( struct device * dev ,
struct device_attribute * attr , char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
int ret ;
2019-03-19 17:48:41 +03:00
if ( ! mutex_trylock ( & sw - > tb - > lock ) )
return restart_syscall ( ) ;
2017-06-06 15:25:17 +03:00
if ( sw - > safe_mode )
ret = - ENODATA ;
else if ( ! sw - > nvm )
ret = - EAGAIN ;
else
2022-09-22 17:32:39 +03:00
ret = sysfs_emit ( buf , " %x.%x \n " , sw - > nvm - > major , sw - > nvm - > minor ) ;
2017-06-06 15:25:17 +03:00
2019-03-19 17:48:41 +03:00
mutex_unlock ( & sw - > tb - > lock ) ;
2017-06-06 15:25:17 +03:00
return ret ;
}
static DEVICE_ATTR_RO ( nvm_version ) ;
2017-06-06 15:25:01 +03:00
static ssize_t vendor_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
2014-06-04 00:04:02 +04:00
{
2017-06-06 15:25:01 +03:00
struct tb_switch * sw = tb_to_switch ( dev ) ;
2014-06-04 00:04:02 +04:00
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %#x \n " , sw - > vendor ) ;
2017-06-06 15:25:01 +03:00
}
static DEVICE_ATTR_RO ( vendor ) ;
2017-06-06 15:25:05 +03:00
static ssize_t
vendor_name_show ( struct device * dev , struct device_attribute * attr , char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %s \n " , sw - > vendor_name ? : " " ) ;
2017-06-06 15:25:05 +03:00
}
static DEVICE_ATTR_RO ( vendor_name ) ;
2017-06-06 15:25:01 +03:00
static ssize_t unique_id_show ( struct device * dev , struct device_attribute * attr ,
char * buf )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2022-09-22 17:32:39 +03:00
return sysfs_emit ( buf , " %pUb \n " , sw - > uuid ) ;
2017-06-06 15:25:01 +03:00
}
static DEVICE_ATTR_RO ( unique_id ) ;
static struct attribute * switch_attrs [ ] = {
2017-06-06 15:25:16 +03:00
& dev_attr_authorized . attr ,
2018-01-22 13:50:09 +03:00
& dev_attr_boot . attr ,
2017-06-06 15:25:01 +03:00
& dev_attr_device . attr ,
2017-06-06 15:25:05 +03:00
& dev_attr_device_name . attr ,
2019-10-03 20:32:40 +03:00
& dev_attr_generation . attr ,
2017-06-06 15:25:16 +03:00
& dev_attr_key . attr ,
2017-06-06 15:25:17 +03:00
& dev_attr_nvm_authenticate . attr ,
2020-06-23 19:14:29 +03:00
& dev_attr_nvm_authenticate_on_disconnect . attr ,
2017-06-06 15:25:17 +03:00
& dev_attr_nvm_version . attr ,
2019-03-21 20:03:00 +03:00
& dev_attr_rx_speed . attr ,
& dev_attr_rx_lanes . attr ,
& dev_attr_tx_speed . attr ,
& dev_attr_tx_lanes . attr ,
2017-06-06 15:25:01 +03:00
& dev_attr_vendor . attr ,
2017-06-06 15:25:05 +03:00
& dev_attr_vendor_name . attr ,
2017-06-06 15:25:01 +03:00
& dev_attr_unique_id . attr ,
NULL ,
} ;
2017-06-06 15:25:16 +03:00
static umode_t switch_attr_is_visible ( struct kobject * kobj ,
struct attribute * attr , int n )
{
2020-09-01 11:27:17 +03:00
struct device * dev = kobj_to_dev ( kobj ) ;
2017-06-06 15:25:16 +03:00
struct tb_switch * sw = tb_to_switch ( dev ) ;
2020-09-03 13:13:21 +03:00
if ( attr = = & dev_attr_authorized . attr ) {
if ( sw - > tb - > security_level = = TB_SECURITY_NOPCIE | |
2021-07-27 17:25:01 +03:00
sw - > tb - > security_level = = TB_SECURITY_DPONLY )
2020-09-03 13:13:21 +03:00
return 0 ;
} else if ( attr = = & dev_attr_device . attr ) {
2018-09-11 15:34:23 +03:00
if ( ! sw - > device )
return 0 ;
} else if ( attr = = & dev_attr_device_name . attr ) {
if ( ! sw - > device_name )
return 0 ;
} else if ( attr = = & dev_attr_vendor . attr ) {
if ( ! sw - > vendor )
return 0 ;
} else if ( attr = = & dev_attr_vendor_name . attr ) {
if ( ! sw - > vendor_name )
return 0 ;
} else if ( attr = = & dev_attr_key . attr ) {
2017-06-06 15:25:16 +03:00
if ( tb_route ( sw ) & &
sw - > tb - > security_level = = TB_SECURITY_SECURE & &
sw - > security_level = = TB_SECURITY_SECURE )
return attr - > mode ;
return 0 ;
2019-03-21 20:03:00 +03:00
} else if ( attr = = & dev_attr_rx_speed . attr | |
attr = = & dev_attr_rx_lanes . attr | |
attr = = & dev_attr_tx_speed . attr | |
attr = = & dev_attr_tx_lanes . attr ) {
if ( tb_route ( sw ) )
return attr - > mode ;
return 0 ;
2019-02-05 12:51:40 +03:00
} else if ( attr = = & dev_attr_nvm_authenticate . attr ) {
2019-12-17 15:33:40 +03:00
if ( nvm_upgradeable ( sw ) )
2019-02-05 12:51:40 +03:00
return attr - > mode ;
return 0 ;
} else if ( attr = = & dev_attr_nvm_version . attr ) {
2019-12-17 15:33:40 +03:00
if ( nvm_readable ( sw ) )
2017-06-06 15:25:17 +03:00
return attr - > mode ;
return 0 ;
2018-01-22 13:50:09 +03:00
} else if ( attr = = & dev_attr_boot . attr ) {
if ( tb_route ( sw ) )
return attr - > mode ;
return 0 ;
2020-06-23 19:14:29 +03:00
} else if ( attr = = & dev_attr_nvm_authenticate_on_disconnect . attr ) {
if ( sw - > quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER )
return attr - > mode ;
return 0 ;
2017-06-06 15:25:16 +03:00
}
2017-06-06 15:25:17 +03:00
return sw - > safe_mode ? 0 : attr - > mode ;
2017-06-06 15:25:16 +03:00
}
2021-01-09 02:09:19 +03:00
static const struct attribute_group switch_group = {
2017-06-06 15:25:16 +03:00
. is_visible = switch_attr_is_visible ,
2017-06-06 15:25:01 +03:00
. attrs = switch_attrs ,
} ;
2014-06-04 00:04:04 +04:00
2017-06-06 15:25:01 +03:00
static const struct attribute_group * switch_groups [ ] = {
& switch_group ,
NULL ,
} ;
static void tb_switch_release ( struct device * dev )
{
struct tb_switch * sw = tb_to_switch ( dev ) ;
2019-09-30 14:07:22 +03:00
struct tb_port * port ;
2017-06-06 15:25:01 +03:00
2017-06-06 15:25:14 +03:00
dma_port_free ( sw - > dma_port ) ;
2019-09-30 14:07:22 +03:00
tb_switch_for_each_port ( sw , port ) {
2021-02-10 17:06:33 +03:00
ida_destroy ( & port - > in_hopids ) ;
ida_destroy ( & port - > out_hopids ) ;
2017-02-19 17:57:27 +03:00
}
2017-06-06 15:25:01 +03:00
kfree ( sw - > uuid ) ;
2017-06-06 15:25:05 +03:00
kfree ( sw - > device_name ) ;
kfree ( sw - > vendor_name ) ;
2014-06-04 00:04:02 +04:00
kfree ( sw - > ports ) ;
2014-06-13 01:11:47 +04:00
kfree ( sw - > drom ) ;
2017-06-06 15:25:16 +03:00
kfree ( sw - > key ) ;
2014-06-04 00:04:02 +04:00
kfree ( sw ) ;
}
2023-01-11 14:30:07 +03:00
static int tb_switch_uevent ( const struct device * dev , struct kobj_uevent_env * env )
2021-03-02 16:51:44 +03:00
{
2023-01-11 14:30:07 +03:00
const struct tb_switch * sw = tb_to_switch ( dev ) ;
2021-03-02 16:51:44 +03:00
const char * type ;
2022-09-23 01:30:43 +03:00
if ( tb_switch_is_usb4 ( sw ) ) {
if ( add_uevent_var ( env , " USB4_VERSION=%u.0 " ,
usb4_switch_version ( sw ) ) )
2021-03-02 16:51:44 +03:00
return - ENOMEM ;
}
if ( ! tb_route ( sw ) ) {
type = " host " ;
} else {
const struct tb_port * port ;
bool hub = false ;
/* Device is hub if it has any downstream ports */
tb_switch_for_each_port ( sw , port ) {
if ( ! port - > disabled & & ! tb_is_upstream_port ( port ) & &
tb_port_is_null ( port ) ) {
hub = true ;
break ;
}
}
type = hub ? " hub " : " device " ;
}
if ( add_uevent_var ( env , " USB4_TYPE=%s " , type ) )
return - ENOMEM ;
return 0 ;
}

/*
 * Currently we only need to provide the callbacks. Everything else is
 * handled in the connection manager.
 */
static int __maybe_unused tb_switch_runtime_suspend ( struct device * dev )
{
2019-05-28 18:56:20 +03:00
struct tb_switch * sw = tb_to_switch ( dev ) ;
const struct tb_cm_ops * cm_ops = sw - > tb - > cm_ops ;
if ( cm_ops - > runtime_suspend_switch )
return cm_ops - > runtime_suspend_switch ( sw ) ;
2018-07-25 11:48:39 +03:00
return 0 ;
}
static int __maybe_unused tb_switch_runtime_resume ( struct device * dev )
{
2019-05-28 18:56:20 +03:00
struct tb_switch * sw = tb_to_switch ( dev ) ;
const struct tb_cm_ops * cm_ops = sw - > tb - > cm_ops ;
if ( cm_ops - > runtime_resume_switch )
return cm_ops - > runtime_resume_switch ( sw ) ;
2018-07-25 11:48:39 +03:00
return 0 ;
}
static const struct dev_pm_ops tb_switch_pm_ops = {
SET_RUNTIME_PM_OPS ( tb_switch_runtime_suspend , tb_switch_runtime_resume ,
NULL )
} ;
2017-06-06 15:25:01 +03:00
struct device_type tb_switch_type = {
. name = " thunderbolt_device " ,
. release = tb_switch_release ,
2021-03-02 16:51:44 +03:00
. uevent = tb_switch_uevent ,
2018-07-25 11:48:39 +03:00
. pm = & tb_switch_pm_ops ,
2017-06-06 15:25:01 +03:00
} ;
2017-06-06 15:25:13 +03:00
static int tb_switch_get_generation ( struct tb_switch * sw )
{
2022-12-28 10:45:35 +03:00
if ( tb_switch_is_usb4 ( sw ) )
return 4 ;
2017-06-06 15:25:13 +03:00
2022-12-28 10:45:35 +03:00
if ( sw - > config . vendor_id = = PCI_VENDOR_ID_INTEL ) {
switch ( sw - > config . device_id ) {
case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE :
case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE :
case PCI_DEVICE_ID_INTEL_LIGHT_PEAK :
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C :
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C :
case PCI_DEVICE_ID_INTEL_PORT_RIDGE :
case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_4C_BRIDGE :
return 1 ;
2019-12-17 15:33:40 +03:00
2022-12-28 10:45:35 +03:00
case PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE :
return 2 ;
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE :
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE :
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE :
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE :
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE :
case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE :
case PCI_DEVICE_ID_INTEL_ICL_NHI0 :
case PCI_DEVICE_ID_INTEL_ICL_NHI1 :
return 3 ;
}
2017-06-06 15:25:13 +03:00
}
2022-12-28 10:45:35 +03:00
/*
* For unknown switches assume generation to be 1 to be on the
* safe side .
*/
tb_sw_warn ( sw , " unsupported switch device id %#x \n " ,
sw - > config . device_id ) ;
return 1 ;
2017-06-06 15:25:13 +03:00
}
2019-12-17 15:33:40 +03:00
static bool tb_switch_exceeds_max_depth ( const struct tb_switch * sw , int depth )
{
int max_depth ;
if ( tb_switch_is_usb4 ( sw ) | |
( sw - > tb - > root_switch & & tb_switch_is_usb4 ( sw - > tb - > root_switch ) ) )
max_depth = USB4_SWITCH_MAX_DEPTH ;
else
max_depth = TB_SWITCH_MAX_DEPTH ;
return depth > max_depth ;
}

/**
 * tb_switch_alloc() - allocate a switch
 * @tb: Pointer to the owning domain
 * @parent: Parent device for this switch
 * @route: Route string for this switch
 *
 * Allocates and initializes a switch. Will not upload configuration to
 * the switch. For that you need to call tb_switch_configure()
 * separately. The returned switch should be released by calling
 * tb_switch_put().
 *
 * Return: Pointer to the allocated switch or ERR_PTR() in case of
 * failure.
 */
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
				  u64 route)
{
struct tb_switch * sw ;
2018-12-30 13:14:46 +03:00
int upstream_port ;
2018-12-30 13:17:52 +03:00
int i , ret , depth ;
2018-12-30 13:14:46 +03:00
2019-12-17 15:33:40 +03:00
/* Unlock the downstream port so we can access the switch below */
if ( route ) {
struct tb_switch * parent_sw = tb_to_switch ( parent ) ;
struct tb_port * down ;
down = tb_port_at ( route , parent_sw ) ;
tb_port_unlock ( down ) ;
}
2018-12-30 13:14:46 +03:00
depth = tb_route_length ( route ) ;
upstream_port = tb_cfg_get_upstream_port ( tb - > ctl , route ) ;
2014-06-04 00:04:02 +04:00
if ( upstream_port < 0 )
2018-12-30 13:17:52 +03:00
return ERR_PTR ( upstream_port ) ;
2014-06-04 00:04:02 +04:00
sw = kzalloc ( sizeof ( * sw ) , GFP_KERNEL ) ;
if ( ! sw )
2018-12-30 13:17:52 +03:00
return ERR_PTR ( - ENOMEM ) ;
2014-06-04 00:04:02 +04:00
sw - > tb = tb ;
2018-12-30 13:17:52 +03:00
ret = tb_cfg_read ( tb - > ctl , & sw - > config , route , 0 , TB_CFG_SWITCH , 0 , 5 ) ;
if ( ret )
2017-06-06 15:25:01 +03:00
goto err_free_sw_ports ;
2019-12-17 15:33:40 +03:00
sw - > generation = tb_switch_get_generation ( sw ) ;
2018-10-01 12:31:19 +03:00
tb_dbg ( tb , " current switch config: \n " ) ;
2019-12-17 15:33:40 +03:00
tb_dump_switch ( tb , sw ) ;
2014-06-04 00:04:02 +04:00
/* configure switch */
sw - > config . upstream_port_number = upstream_port ;
2018-12-30 13:14:46 +03:00
sw - > config . depth = depth ;
sw - > config . route_hi = upper_32_bits ( route ) ;
sw - > config . route_lo = lower_32_bits ( route ) ;
2017-06-06 15:25:01 +03:00
sw - > config . enabled = 0 ;
2014-06-04 00:04:02 +04:00
2019-12-17 15:33:40 +03:00
/* Make sure we do not exceed maximum topology limit */
2019-12-21 01:05:26 +03:00
if ( tb_switch_exceeds_max_depth ( sw , depth ) ) {
ret = - EADDRNOTAVAIL ;
goto err_free_sw_ports ;
}
2019-12-17 15:33:40 +03:00
2014-06-04 00:04:02 +04:00
/* initialize ports */
sw - > ports = kcalloc ( sw - > config . max_port_number + 1 , sizeof ( * sw - > ports ) ,
2014-06-13 01:11:47 +04:00
GFP_KERNEL ) ;
2018-12-30 13:17:52 +03:00
if ( ! sw - > ports ) {
ret = - ENOMEM ;
2017-06-06 15:25:01 +03:00
goto err_free_sw_ports ;
2018-12-30 13:17:52 +03:00
}
2014-06-04 00:04:02 +04:00
for ( i = 0 ; i < = sw - > config . max_port_number ; i + + ) {
2014-06-13 01:11:47 +04:00
/* minimum setup for tb_find_cap and tb_drom_read to work */
sw - > ports [ i ] . sw = sw ;
sw - > ports [ i ] . port = i ;
2021-02-10 17:06:33 +03:00
/* Control port does not need HopID allocation */
if ( i ) {
ida_init ( & sw - > ports [ i ] . in_hopids ) ;
ida_init ( & sw - > ports [ i ] . out_hopids ) ;
}
2014-06-04 00:04:02 +04:00
}
2018-12-30 13:17:52 +03:00
ret = tb_switch_find_vse_cap ( sw , TB_VSE_CAP_PLUG_EVENTS ) ;
2019-12-17 15:33:40 +03:00
if ( ret > 0 )
sw - > cap_plug_events = ret ;
2014-06-04 00:04:04 +04:00
2021-12-17 04:16:41 +03:00
ret = tb_switch_find_vse_cap ( sw , TB_VSE_CAP_TIME2 ) ;
if ( ret > 0 )
sw - > cap_vsec_tmu = ret ;
2018-12-30 13:17:52 +03:00
ret = tb_switch_find_vse_cap ( sw , TB_VSE_CAP_LINK_CONTROLLER ) ;
if ( ret > 0 )
sw - > cap_lc = ret ;
2019-01-09 17:42:12 +03:00
2021-12-17 04:16:43 +03:00
ret = tb_switch_find_vse_cap ( sw , TB_VSE_CAP_CP_LP ) ;
if ( ret > 0 )
sw - > cap_lp = ret ;
2017-06-06 15:25:16 +03:00
/* Root switch is always authorized */
if ( ! route )
sw - > authorized = true ;
2017-06-06 15:25:01 +03:00
device_initialize ( & sw - > dev ) ;
sw - > dev . parent = parent ;
sw - > dev . bus = & tb_bus_type ;
sw - > dev . type = & tb_switch_type ;
sw - > dev . groups = switch_groups ;
dev_set_name ( & sw - > dev , " %u-%llx " , tb - > index , tb_route ( sw ) ) ;
return sw ;
err_free_sw_ports :
kfree ( sw - > ports ) ;
kfree ( sw ) ;
2018-12-30 13:17:52 +03:00
return ERR_PTR ( ret ) ;
2017-06-06 15:25:01 +03:00
}
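
/*
 * Illustrative sketch (not part of the driver): tb_switch_alloc() follows
 * the usual ERR_PTR() convention, so callers check the result with
 * IS_ERR() instead of comparing against NULL:
 *
 *	sw = tb_switch_alloc(tb, parent, route);
 *	if (IS_ERR(sw))
 *		return PTR_ERR(sw);
 *	// tb_switch_configure() and tb_switch_add() follow; the reference
 *	// is dropped with tb_switch_put() when the switch is not needed.
 */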

/**
 * tb_switch_alloc_safe_mode() - allocate a switch that is in safe mode
 * @tb: Pointer to the owning domain
 * @parent: Parent device for this switch
 * @route: Route string for this switch
 *
 * This creates a switch in safe mode. This means the switch pretty much
 * lacks all capabilities except DMA configuration port before it is
 * flashed with a valid NVM firmware.
 *
 * The returned switch must be released by calling tb_switch_put().
 *
 * Return: Pointer to the allocated switch or ERR_PTR() in case of failure
 */
struct tb_switch *
tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
{
struct tb_switch * sw ;
sw = kzalloc ( sizeof ( * sw ) , GFP_KERNEL ) ;
if ( ! sw )
2018-12-30 13:17:52 +03:00
return ERR_PTR ( - ENOMEM ) ;
2017-06-06 15:25:17 +03:00
sw - > tb = tb ;
sw - > config . depth = tb_route_length ( route ) ;
sw - > config . route_hi = upper_32_bits ( route ) ;
sw - > config . route_lo = lower_32_bits ( route ) ;
sw - > safe_mode = true ;
device_initialize ( & sw - > dev ) ;
sw - > dev . parent = parent ;
sw - > dev . bus = & tb_bus_type ;
sw - > dev . type = & tb_switch_type ;
sw - > dev . groups = switch_groups ;
dev_set_name ( & sw - > dev , " %u-%llx " , tb - > index , tb_route ( sw ) ) ;
return sw ;
}

/**
 * tb_switch_configure() - Uploads configuration to the switch
 * @sw: Switch to configure
 *
 * Call this function before the switch is added to the system. It will
 * upload the configuration to the switch and make it available for the
 * connection manager to use. Can be called for the switch again after
 * resume from low power states to re-initialize it.
 *
 * Return: %0 in case of success and negative errno in case of failure
 */
int tb_switch_configure(struct tb_switch *sw)
{
struct tb * tb = sw - > tb ;
u64 route ;
int ret ;
route = tb_route ( sw ) ;
2019-12-17 15:33:40 +03:00
tb_dbg ( tb , " %s Switch at %#llx (depth: %d, up port: %d) \n " ,
2019-12-06 19:36:07 +03:00
sw - > config . enabled ? " restoring " : " initializing " , route ,
2019-12-17 15:33:40 +03:00
tb_route_length ( route ) , sw - > config . upstream_port_number ) ;
2017-06-06 15:25:01 +03:00
sw - > config . enabled = 1 ;
2019-12-17 15:33:40 +03:00
if ( tb_switch_is_usb4 ( sw ) ) {
/*
* For USB4 devices , we need to program the CM version
* accordingly so that it knows to expose all the
2022-09-29 13:17:24 +03:00
* additional capabilities . Program it according to USB4
* version to avoid changing existing ( v1 ) routers behaviour .
2019-12-17 15:33:40 +03:00
*/
2022-09-29 13:17:24 +03:00
if ( usb4_switch_version ( sw ) < 2 )
sw - > config . cmuv = ROUTER_CS_4_CMUV_V1 ;
else
sw - > config . cmuv = ROUTER_CS_4_CMUV_V2 ;
2022-09-21 17:54:32 +03:00
sw - > config . plug_events_delay = 0xa ;
2019-12-17 15:33:40 +03:00
/* Enumerate the switch */
ret = tb_sw_write ( sw , ( u32 * ) & sw - > config + 1 , TB_CFG_SWITCH ,
ROUTER_CS_1 , 4 ) ;
if ( ret )
return ret ;
2017-06-06 15:25:01 +03:00
2019-12-17 15:33:40 +03:00
ret = usb4_switch_setup ( sw ) ;
} else {
if ( sw - > config . vendor_id ! = PCI_VENDOR_ID_INTEL )
tb_sw_warn ( sw , " unknown switch vendor id %#x \n " ,
sw - > config . vendor_id ) ;
if ( ! sw - > cap_plug_events ) {
tb_sw_warn ( sw , " cannot find TB_VSE_CAP_PLUG_EVENTS aborting \n " ) ;
return - ENODEV ;
}
/* Enumerate the switch */
ret = tb_sw_write ( sw , ( u32 * ) & sw - > config + 1 , TB_CFG_SWITCH ,
ROUTER_CS_1 , 3 ) ;
}
2018-10-11 12:33:08 +03:00
if ( ret )
return ret ;
2017-06-06 15:25:01 +03:00
return tb_plug_events_active ( sw , true ) ;
}

/**
 * tb_switch_configuration_valid() - Set the tunneling configuration to be valid
 * @sw: Router to configure
 *
 * Needs to be called before any tunnels can be set up through the
 * router. Can be called for any router.
 *
 * Returns %0 in case of success and negative errno otherwise.
 */
int tb_switch_configuration_valid(struct tb_switch *sw)
{
if ( tb_switch_is_usb4 ( sw ) )
return usb4_switch_configuration_valid ( sw ) ;
return 0 ;
}
2019-03-20 18:57:54 +03:00
static int tb_switch_set_uuid ( struct tb_switch * sw )
2017-06-06 15:25:01 +03:00
{
2019-12-17 15:33:40 +03:00
bool uid = false ;
2017-06-06 15:25:01 +03:00
u32 uuid [ 4 ] ;
2019-01-09 17:42:12 +03:00
int ret ;
2017-06-06 15:25:01 +03:00
if ( sw - > uuid )
2019-01-09 17:42:12 +03:00
return 0 ;
2017-06-06 15:25:01 +03:00
2019-12-17 15:33:40 +03:00
if ( tb_switch_is_usb4 ( sw ) ) {
ret = usb4_switch_read_uid ( sw , & sw - > uid ) ;
if ( ret )
return ret ;
uid = true ;
} else {
/*
* The newer controllers include fused UUID as part of
* link controller specific registers
*/
ret = tb_lc_read_uuid ( sw , uuid ) ;
if ( ret ) {
if ( ret ! = - EINVAL )
return ret ;
uid = true ;
}
}
if ( uid ) {
2017-06-06 15:25:01 +03:00
/*
* ICM generates UUID based on UID and fills the upper
* two words with ones . This is not strictly following
* UUID format but we want to be compatible with it so
* we do the same here .
*/
uuid [ 0 ] = sw - > uid & 0xffffffff ;
uuid [ 1 ] = ( sw - > uid > > 32 ) & 0xffffffff ;
uuid [ 2 ] = 0xffffffff ;
uuid [ 3 ] = 0xffffffff ;
}
sw - > uuid = kmemdup ( uuid , sizeof ( uuid ) , GFP_KERNEL ) ;
2019-03-20 18:57:54 +03:00
if ( ! sw - > uuid )
2019-01-09 17:42:12 +03:00
return - ENOMEM ;
return 0 ;
2017-06-06 15:25:01 +03:00
}
2017-06-06 15:25:17 +03:00
static int tb_switch_add_dma_port ( struct tb_switch * sw )
2017-06-06 15:25:14 +03:00
{
2017-06-06 15:25:17 +03:00
u32 status ;
int ret ;
2017-06-06 15:25:14 +03:00
switch ( sw - > generation ) {
case 2 :
/* Only root switch can be upgraded */
if ( tb_route ( sw ) )
2017-06-06 15:25:17 +03:00
return 0 ;
2019-11-11 13:25:44 +03:00
2020-08-24 01:36:59 +03:00
fallthrough ;
2019-11-11 13:25:44 +03:00
case 3 :
2020-11-10 11:34:07 +03:00
case 4 :
2019-11-11 13:25:44 +03:00
ret = tb_switch_set_uuid ( sw ) ;
if ( ret )
return ret ;
2017-06-06 15:25:14 +03:00
break ;
default :
2017-06-06 15:25:17 +03:00
/*
* DMA port is the only thing available when the switch
* is in safe mode .
*/
if ( ! sw - > safe_mode )
return 0 ;
break ;
2017-06-06 15:25:14 +03:00
}
2020-11-10 11:34:07 +03:00
if ( sw - > no_nvm_upgrade )
return 0 ;
if ( tb_switch_is_usb4 ( sw ) ) {
ret = usb4_switch_nvm_authenticate_status ( sw , & status ) ;
if ( ret )
return ret ;
if ( status ) {
tb_sw_info ( sw , " switch flash authentication failed \n " ) ;
nvm_set_auth_status ( sw , status ) ;
}
return 0 ;
}
2019-02-05 12:51:40 +03:00
/* Root switch DMA port requires running firmware */
2019-06-25 15:10:01 +03:00
if ( ! tb_route ( sw ) & & ! tb_switch_is_icm ( sw ) )
2017-06-06 15:25:17 +03:00
return 0 ;
2017-06-06 15:25:14 +03:00
sw - > dma_port = dma_port_alloc ( sw ) ;
2017-06-06 15:25:17 +03:00
if ( ! sw - > dma_port )
return 0 ;
2019-11-11 13:25:44 +03:00
	/*
	 * If there is status already set then authentication failed
	 * when the dma_port_flash_update_auth() returned. Power cycling
	 * is not needed (it was done already) so the only thing we do
	 * here is to unblock runtime PM of the root port.
	 */
nvm_get_auth_status ( sw , & status ) ;
if ( status ) {
if ( ! tb_route ( sw ) )
2019-12-17 15:33:40 +03:00
nvm_authenticate_complete_dma_port ( sw ) ;
2019-11-11 13:25:44 +03:00
return 0 ;
}
2017-06-06 15:25:17 +03:00
/*
* Check status of the previous flash authentication . If there
* is one we need to power cycle the switch in any case to make
* it functional again .
*/
ret = dma_port_flash_update_auth_status ( sw - > dma_port , & status ) ;
if ( ret < = 0 )
return ret ;
2018-11-26 12:47:46 +03:00
/* Now we can allow root port to suspend again */
if ( ! tb_route ( sw ) )
2019-12-17 15:33:40 +03:00
nvm_authenticate_complete_dma_port ( sw ) ;
2018-11-26 12:47:46 +03:00
2017-06-06 15:25:17 +03:00
if ( status ) {
tb_sw_info ( sw , " switch flash authentication failed \n " ) ;
nvm_set_auth_status ( sw , status ) ;
}
tb_sw_info ( sw , " power cycling the switch now \n " ) ;
dma_port_power_cycle ( sw - > dma_port ) ;
	/*
	 * We return an error here which causes adding of the switch to fail.
	 * It should appear back after the power cycle is complete.
	 */
return - ESHUTDOWN ;
2017-06-06 15:25:14 +03:00
}
2019-08-26 18:19:33 +03:00
static void tb_switch_default_link_ports ( struct tb_switch * sw )
{
int i ;
2021-08-03 15:34:56 +03:00
for ( i = 1 ; i < = sw - > config . max_port_number ; i + + ) {
2019-08-26 18:19:33 +03:00
struct tb_port * port = & sw - > ports [ i ] ;
struct tb_port * subordinate ;
if ( ! tb_port_is_null ( port ) )
continue ;
/* Check for the subordinate port */
if ( i = = sw - > config . max_port_number | |
! tb_port_is_null ( & sw - > ports [ i + 1 ] ) )
continue ;
/* Link them if not already done so (by DROM) */
subordinate = & sw - > ports [ i + 1 ] ;
if ( ! port - > dual_link_port & & ! subordinate - > dual_link_port ) {
port - > link_nr = 0 ;
port - > dual_link_port = subordinate ;
subordinate - > link_nr = 1 ;
subordinate - > dual_link_port = port ;
tb_sw_dbg ( sw , " linked ports %d <-> %d \n " ,
port - > port , subordinate - > port ) ;
}
}
}
2019-03-21 20:03:00 +03:00
static bool tb_switch_lane_bonding_possible ( struct tb_switch * sw )
{
const struct tb_port * up = tb_upstream_port ( sw ) ;
if ( ! up - > dual_link_port | | ! up - > dual_link_port - > remote )
return false ;
2019-12-17 15:33:40 +03:00
if ( tb_switch_is_usb4 ( sw ) )
return usb4_switch_lane_bonding_possible ( sw ) ;
2019-03-21 20:03:00 +03:00
return tb_lc_lane_bonding_possible ( sw ) ;
}
static int tb_switch_update_link_attributes ( struct tb_switch * sw )
{
struct tb_port * up ;
bool change = false ;
int ret ;
if ( ! tb_route ( sw ) | | tb_switch_is_icm ( sw ) )
return 0 ;
up = tb_upstream_port ( sw ) ;
ret = tb_port_get_link_speed ( up ) ;
if ( ret < 0 )
return ret ;
if ( sw - > link_speed ! = ret )
change = true ;
sw - > link_speed = ret ;
ret = tb_port_get_link_width ( up ) ;
if ( ret < 0 )
return ret ;
if ( sw - > link_width ! = ret )
change = true ;
sw - > link_width = ret ;
/* Notify userspace that there is possible link attribute change */
if ( device_is_registered ( & sw - > dev ) & & change )
kobject_uevent ( & sw - > dev . kobj , KOBJ_CHANGE ) ;
return 0 ;
}
2023-08-10 22:37:15 +03:00
/* Must be called after tb_switch_update_link_attributes() */
static void tb_switch_link_init ( struct tb_switch * sw )
{
struct tb_port * up , * down ;
bool bonded ;
if ( ! tb_route ( sw ) | | tb_switch_is_icm ( sw ) )
return ;
tb_sw_dbg ( sw , " current link speed %u.0 Gb/s \n " , sw - > link_speed ) ;
tb_sw_dbg ( sw , " current link width %s \n " , width_name ( sw - > link_width ) ) ;
bonded = sw - > link_width > = TB_LINK_WIDTH_DUAL ;
/*
* Gen 4 links come up as bonded so update the port structures
* accordingly .
*/
up = tb_upstream_port ( sw ) ;
down = tb_switch_downstream_port ( sw ) ;
up - > bonded = bonded ;
if ( up - > dual_link_port )
up - > dual_link_port - > bonded = bonded ;
tb_port_update_credits ( up ) ;
down - > bonded = bonded ;
if ( down - > dual_link_port )
down - > dual_link_port - > bonded = bonded ;
tb_port_update_credits ( down ) ;
}

/**
 * tb_switch_lane_bonding_enable() - Enable lane bonding
 * @sw: Switch to enable lane bonding
 *
 * Connection manager can call this function to enable lane bonding of a
 * switch. If conditions are correct and both switches support the feature,
 * lanes are bonded. It is safe to call this for any switch.
 */
static int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
struct tb_port * up , * down ;
2023-08-10 22:37:15 +03:00
unsigned int width ;
2019-03-21 20:03:00 +03:00
int ret ;
if ( ! tb_switch_lane_bonding_possible ( sw ) )
return 0 ;
up = tb_upstream_port ( sw ) ;
2022-09-23 01:30:38 +03:00
down = tb_switch_downstream_port ( sw ) ;
2019-03-21 20:03:00 +03:00
2023-08-10 22:37:15 +03:00
if ( ! tb_port_width_supported ( up , TB_LINK_WIDTH_DUAL ) | |
! tb_port_width_supported ( down , TB_LINK_WIDTH_DUAL ) )
2019-03-21 20:03:00 +03:00
return 0 ;

	/*
	 * Both lanes need to be in CL0. Here we assume lane 0 is already in
	 * CL0 and check just for lane 1.
	 */
	if (tb_wait_for_port(down->dual_link_port, false) <= 0)
		return -ENOTCONN;
2019-03-21 20:03:00 +03:00
ret = tb_port_lane_bonding_enable ( up ) ;
if ( ret ) {
tb_port_warn ( up , " failed to enable lane bonding \n " ) ;
return ret ;
}
ret = tb_port_lane_bonding_enable ( down ) ;
if ( ret ) {
tb_port_warn ( down , " failed to enable lane bonding \n " ) ;
tb_port_lane_bonding_disable ( up ) ;
return ret ;
}

	/* Any of these widths means the link is bonded */
	width = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
		TB_LINK_WIDTH_ASYM_RX;

	return tb_port_wait_for_link_width(down, width, 100);
2019-03-21 20:03:00 +03:00
}

/**
 * tb_switch_lane_bonding_disable() - Disable lane bonding
 * @sw: Switch whose lane bonding to disable
 *
 * Disables lane bonding between @sw and parent. This can be called even
 * if lanes were not bonded originally.
 */
static int tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
struct tb_port * up , * down ;
2022-09-29 13:00:09 +03:00
int ret ;
2019-03-21 20:03:00 +03:00
up = tb_upstream_port ( sw ) ;
if ( ! up - > bonded )
2023-08-10 22:37:15 +03:00
return 0 ;
2019-03-21 20:03:00 +03:00
2023-08-10 22:37:15 +03:00
/*
* If the link is Gen 4 there is no way to switch the link to
* two single lane links so avoid that here . Also don ' t bother
* if the link is not up anymore ( sw is unplugged ) .
*/
ret = tb_port_get_link_generation ( up ) ;
if ( ret < 0 )
return ret ;
if ( ret > = 4 )
return - EOPNOTSUPP ;
2019-03-21 20:03:00 +03:00
2023-08-10 22:37:15 +03:00
down = tb_switch_downstream_port ( sw ) ;
2019-03-21 20:03:00 +03:00
tb_port_lane_bonding_disable ( up ) ;
tb_port_lane_bonding_disable ( down ) ;
2021-03-22 17:54:54 +03:00
/*
* It is fine if we get other errors as the router might have
* been unplugged .
*/
2023-08-10 22:37:15 +03:00
return tb_port_wait_for_link_width ( down , TB_LINK_WIDTH_SINGLE , 100 ) ;
}

/* Note: sw->link_width is updated in tb_switch_update_link_attributes() */
static int tb_switch_asym_enable ( struct tb_switch * sw , enum tb_link_width width )
{
struct tb_port * up , * down , * port ;
enum tb_link_width down_width ;
int ret ;
up = tb_upstream_port ( sw ) ;
down = tb_switch_downstream_port ( sw ) ;
if ( width = = TB_LINK_WIDTH_ASYM_TX ) {
down_width = TB_LINK_WIDTH_ASYM_RX ;
port = down ;
} else {
down_width = TB_LINK_WIDTH_ASYM_TX ;
port = up ;
}
ret = tb_port_set_link_width ( up , width ) ;
if ( ret )
return ret ;
ret = tb_port_set_link_width ( down , down_width ) ;
if ( ret )
return ret ;
/*
* Initiate the change in the router that one of its TX lanes is
* changing to RX but do so only if there is an actual change .
*/
if ( sw - > link_width ! = width ) {
ret = usb4_port_asym_start ( port ) ;
if ( ret )
return ret ;
ret = tb_port_wait_for_link_width ( up , width , 100 ) ;
if ( ret )
return ret ;
}
return 0 ;
}

/* Note: sw->link_width is updated in tb_switch_update_link_attributes() */
static int tb_switch_asym_disable ( struct tb_switch * sw )
{
struct tb_port * up , * down ;
int ret ;
up = tb_upstream_port ( sw ) ;
down = tb_switch_downstream_port ( sw ) ;
ret = tb_port_set_link_width ( up , TB_LINK_WIDTH_DUAL ) ;
if ( ret )
return ret ;
ret = tb_port_set_link_width ( down , TB_LINK_WIDTH_DUAL ) ;
if ( ret )
return ret ;
/*
* Initiate the change in the router that has three TX lanes and
* is changing one of its TX lanes to RX but only if there is a
* change in the link width .
*/
if ( sw - > link_width > TB_LINK_WIDTH_DUAL ) {
if ( sw - > link_width = = TB_LINK_WIDTH_ASYM_TX )
ret = usb4_port_asym_start ( up ) ;
else
ret = usb4_port_asym_start ( down ) ;
if ( ret )
return ret ;
ret = tb_port_wait_for_link_width ( up , TB_LINK_WIDTH_DUAL , 100 ) ;
if ( ret )
return ret ;
}
return 0 ;
}

/**
 * tb_switch_set_link_width() - Configure router link width
 * @sw: Router to configure
 * @width: The new link width
 *
 * Set device router link width to @width from router upstream port
 * perspective. Also supports asymmetric links if the routers on both
 * sides of the link support it.
 *
 * Does nothing for host router.
 *
 * Returns %0 in case of success, negative errno otherwise.
 */
int tb_switch_set_link_width(struct tb_switch *sw, enum tb_link_width width)
{
struct tb_port * up , * down ;
int ret = 0 ;
if ( ! tb_route ( sw ) )
return 0 ;
up = tb_upstream_port ( sw ) ;
down = tb_switch_downstream_port ( sw ) ;
switch ( width ) {
case TB_LINK_WIDTH_SINGLE :
ret = tb_switch_lane_bonding_disable ( sw ) ;
break ;
case TB_LINK_WIDTH_DUAL :
if ( sw - > link_width = = TB_LINK_WIDTH_ASYM_TX | |
sw - > link_width = = TB_LINK_WIDTH_ASYM_RX ) {
ret = tb_switch_asym_disable ( sw ) ;
if ( ret )
break ;
}
ret = tb_switch_lane_bonding_enable ( sw ) ;
break ;
case TB_LINK_WIDTH_ASYM_TX :
case TB_LINK_WIDTH_ASYM_RX :
ret = tb_switch_asym_enable ( sw , width ) ;
break ;
}
switch ( ret ) {
case 0 :
break ;
case - ETIMEDOUT :
tb_sw_warn ( sw , " timeout changing link width \n " ) ;
return ret ;
case - ENOTCONN :
case - EOPNOTSUPP :
case - ENODEV :
return ret ;
default :
tb_sw_dbg ( sw , " failed to change link width: %d \n " , ret ) ;
return ret ;
}
2021-03-22 17:54:54 +03:00
2021-03-22 18:01:59 +03:00
tb_port_update_credits ( down ) ;
tb_port_update_credits ( up ) ;
2023-08-10 22:37:15 +03:00
2019-03-21 20:03:00 +03:00
tb_switch_update_link_attributes ( sw ) ;
2021-03-22 18:01:59 +03:00
2023-08-10 22:37:15 +03:00
tb_sw_dbg ( sw , " link width set to %s \n " , width_name ( width ) ) ;
return ret ;
2019-03-21 20:03:00 +03:00
}
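
/*
 * Illustrative sketch (not part of the driver): a connection manager that
 * wants to change the width of a device router link calls this on the
 * router whose upstream link should change, e.g.:
 *
 *	// Bond the two lanes of the upstream link (symmetric x2)
 *	ret = tb_switch_set_link_width(sw, TB_LINK_WIDTH_DUAL);
 *	if (ret)
 *		return ret;
 *
 * Asymmetric widths (TB_LINK_WIDTH_ASYM_TX / TB_LINK_WIDTH_ASYM_RX) are
 * requested the same way when both ends of the link support them.
 */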

/**
 * tb_switch_configure_link() - Set link configured
 * @sw: Switch whose link is configured
 *
 * Sets the link upstream from @sw configured (from both ends) so that
 * it will not be disconnected when the domain exits sleep. Can be
 * called for any switch.
 *
 * It is recommended that this is called after lane bonding is enabled.
 *
 * Returns %0 on success and negative errno in case of error.
 */
int tb_switch_configure_link(struct tb_switch *sw)
{
2020-04-02 12:42:44 +03:00
struct tb_port * up , * down ;
int ret ;
2020-04-02 14:50:52 +03:00
if ( ! tb_route ( sw ) | | tb_switch_is_icm ( sw ) )
return 0 ;
2020-04-02 12:42:44 +03:00
up = tb_upstream_port ( sw ) ;
if ( tb_switch_is_usb4 ( up - > sw ) )
ret = usb4_port_configure ( up ) ;
else
ret = tb_lc_configure_port ( up ) ;
if ( ret )
return ret ;
down = up - > remote ;
if ( tb_switch_is_usb4 ( down - > sw ) )
return usb4_port_configure ( down ) ;
return tb_lc_configure_port ( down ) ;
2020-04-02 14:50:52 +03:00
}

/**
 * tb_switch_unconfigure_link() - Unconfigure link
 * @sw: Switch whose link is unconfigured
 *
 * Sets the link unconfigured so the @sw will be disconnected if the
 * domain exits sleep.
 */
void tb_switch_unconfigure_link(struct tb_switch *sw)
{
2020-04-02 12:42:44 +03:00
struct tb_port * up , * down ;
2020-04-02 14:50:52 +03:00
if ( sw - > is_unplugged )
return ;
if ( ! tb_route ( sw ) | | tb_switch_is_icm ( sw ) )
return ;
2020-04-02 12:42:44 +03:00
up = tb_upstream_port ( sw ) ;
if ( tb_switch_is_usb4 ( up - > sw ) )
usb4_port_unconfigure ( up ) ;
else
tb_lc_unconfigure_port ( up ) ;
down = up - > remote ;
if ( tb_switch_is_usb4 ( down - > sw ) )
usb4_port_unconfigure ( down ) ;
2020-04-02 14:50:52 +03:00
else
2020-04-02 12:42:44 +03:00
tb_lc_unconfigure_port ( down ) ;
2020-04-02 14:50:52 +03:00
}
2021-03-10 14:34:12 +03:00
static void tb_switch_credits_init ( struct tb_switch * sw )
{
if ( tb_switch_is_icm ( sw ) )
return ;
if ( ! tb_switch_is_usb4 ( sw ) )
return ;
if ( usb4_switch_credits_init ( sw ) )
tb_sw_info ( sw , " failed to determine preferred buffer allocation, using defaults \n " ) ;
}
2022-09-26 17:33:50 +03:00
static int tb_switch_port_hotplug_enable ( struct tb_switch * sw )
{
struct tb_port * port ;
if ( tb_switch_is_icm ( sw ) )
return 0 ;
tb_switch_for_each_port ( sw , port ) {
int res ;
if ( ! port - > cap_usb4 )
continue ;
res = usb4_port_hotplug_enable ( port ) ;
if ( res )
return res ;
}
return 0 ;
}

/**
 * tb_switch_add() - Add a switch to the domain
 * @sw: Switch to add
 *
 * This is the last step in adding a switch to the domain. It will read
 * identification information from the DROM and initialize ports so that
 * they can be used to connect other switches. The switch will be
 * exposed to the userspace when this function successfully returns. To
 * remove and release the switch, call tb_switch_remove().
 *
 * Return: %0 in case of success and negative errno in case of failure
 */
int tb_switch_add(struct tb_switch *sw)
{
int i , ret ;

	/*
	 * Initialize DMA control port now before we read DROM. Recent
	 * host controllers have more complete DROM on NVM that includes
	 * vendor and model identification strings which we then expose
	 * to the userspace. NVM can be accessed through DMA
	 * configuration based mailbox.
	 */
ret = tb_switch_add_dma_port ( sw ) ;
2019-08-27 15:18:20 +03:00
if ( ret ) {
dev_err ( & sw - > dev , " failed to add DMA port \n " ) ;
2017-06-06 15:25:02 +03:00
return ret ;
2019-08-27 15:18:20 +03:00
}
2014-06-13 01:11:47 +04:00
2017-06-06 15:25:17 +03:00
if ( ! sw - > safe_mode ) {
2021-03-10 14:34:12 +03:00
tb_switch_credits_init ( sw ) ;
2017-06-06 15:25:17 +03:00
/* read drom */
ret = tb_drom_read ( sw ) ;
2022-03-03 16:13:26 +03:00
if ( ret )
dev_warn ( & sw - > dev , " reading DROM failed: %d \n " , ret ) ;
2018-10-01 12:31:19 +03:00
tb_sw_dbg ( sw , " uid: %#llx \n " , sw - > uid ) ;
2017-06-06 15:25:01 +03:00
2019-03-20 18:57:54 +03:00
ret = tb_switch_set_uuid ( sw ) ;
2019-08-27 15:18:20 +03:00
if ( ret ) {
dev_err ( & sw - > dev , " failed to set UUID \n " ) ;
2019-03-20 18:57:54 +03:00
return ret ;
2019-08-27 15:18:20 +03:00
}
2017-06-06 15:25:17 +03:00
for ( i = 0 ; i < = sw - > config . max_port_number ; i + + ) {
if ( sw - > ports [ i ] . disabled ) {
2018-10-01 12:31:19 +03:00
tb_port_dbg ( & sw - > ports [ i ] , " disabled by eeprom \n " ) ;
2017-06-06 15:25:17 +03:00
continue ;
}
ret = tb_init_port ( & sw - > ports [ i ] ) ;
2019-08-27 15:18:20 +03:00
if ( ret ) {
dev_err ( & sw - > dev , " failed to initialize port %d \n " , i ) ;
2017-06-06 15:25:17 +03:00
return ret ;
2019-08-27 15:18:20 +03:00
}
2014-06-13 01:11:47 +04:00
}
2019-03-21 20:03:00 +03:00
2023-02-03 16:55:41 +03:00
tb_check_quirks ( sw ) ;
2019-08-26 18:19:33 +03:00
tb_switch_default_link_ports ( sw ) ;
2019-03-21 20:03:00 +03:00
ret = tb_switch_update_link_attributes ( sw ) ;
if ( ret )
return ret ;
2019-12-17 15:33:43 +03:00
2023-08-10 22:37:15 +03:00
tb_switch_link_init ( sw ) ;
2023-05-24 13:33:57 +03:00
ret = tb_switch_clx_init ( sw ) ;
if ( ret )
return ret ;
2019-12-17 15:33:43 +03:00
ret = tb_switch_tmu_init ( sw ) ;
if ( ret )
return ret ;
2014-06-13 01:11:47 +04:00
}
2022-09-26 17:33:50 +03:00
ret = tb_switch_port_hotplug_enable ( sw ) ;
if ( ret )
return ret ;
2017-06-06 15:25:17 +03:00
ret = device_add ( & sw - > dev ) ;
2019-08-27 15:18:20 +03:00
if ( ret ) {
dev_err ( & sw - > dev , " failed to add device: %d \n " , ret ) ;
2017-06-06 15:25:17 +03:00
return ret ;
2019-08-27 15:18:20 +03:00
}
2017-06-06 15:25:17 +03:00
2018-10-01 12:31:20 +03:00
if ( tb_route ( sw ) ) {
dev_info ( & sw - > dev , " new device found, vendor=%#x device=%#x \n " ,
sw - > vendor , sw - > device ) ;
if ( sw - > vendor_name & & sw - > device_name )
dev_info ( & sw - > dev , " %s %s \n " , sw - > vendor_name ,
sw - > device_name ) ;
}
2021-04-01 17:34:20 +03:00
ret = usb4_switch_add_ports ( sw ) ;
if ( ret ) {
dev_err ( & sw - > dev , " failed to add USB4 ports \n " ) ;
goto err_del ;
}
2017-06-06 15:25:17 +03:00
ret = tb_switch_nvm_add ( sw ) ;
2018-07-25 11:48:39 +03:00
if ( ret ) {
2019-08-27 15:18:20 +03:00
dev_err ( & sw - > dev , " failed to add NVM devices \n " ) ;
2021-04-01 17:34:20 +03:00
goto err_ports ;
2018-07-25 11:48:39 +03:00
}
2017-06-06 15:25:17 +03:00
2019-12-06 19:36:07 +03:00
/*
* Thunderbolt routers do not generate wakeups themselves but
* they forward wakeups from tunneled protocols , so enable it
* here .
*/
device_init_wakeup ( & sw - > dev , true ) ;
2018-07-25 11:48:39 +03:00
pm_runtime_set_active ( & sw - > dev ) ;
if ( sw - > rpm ) {
pm_runtime_set_autosuspend_delay ( & sw - > dev , TB_AUTOSUSPEND_DELAY ) ;
pm_runtime_use_autosuspend ( & sw - > dev ) ;
pm_runtime_mark_last_busy ( & sw - > dev ) ;
pm_runtime_enable ( & sw - > dev ) ;
pm_request_autosuspend ( & sw - > dev ) ;
}
2020-06-29 20:30:52 +03:00
tb_switch_debugfs_init ( sw ) ;
2018-07-25 11:48:39 +03:00
return 0 ;
2021-04-01 17:34:20 +03:00
err_ports :
usb4_switch_remove_ports ( sw ) ;
err_del :
device_del ( & sw - > dev ) ;
return ret ;
2017-06-06 15:25:01 +03:00
}
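
/*
 * Illustrative sketch (not part of the driver): once a router has been
 * allocated and configured, a connection manager exposes it and later
 * tears it down roughly like this:
 *
 *	ret = tb_switch_add(sw);
 *	if (ret) {
 *		tb_switch_put(sw);	// add failed, drop the reference
 *		return ret;
 *	}
 *	...
 *	// on unplug, remove the router and everything connected below it
 *	tb_switch_remove(sw);
 */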

/**
 * tb_switch_remove() - Remove and release a switch
 * @sw: Switch to remove
 *
 * This will remove the switch from the domain and release it after the
 * last reference count drops to zero. If there are switches connected
 * below this switch, they will be removed as well.
 */
void tb_switch_remove(struct tb_switch *sw)
{
2019-09-30 14:07:22 +03:00
struct tb_port * port ;
2014-06-04 00:04:04 +04:00
2020-06-29 20:30:52 +03:00
tb_switch_debugfs_remove ( sw ) ;
2018-07-25 11:48:39 +03:00
if ( sw - > rpm ) {
pm_runtime_get_sync ( & sw - > dev ) ;
pm_runtime_disable ( & sw - > dev ) ;
}
2017-06-06 15:25:01 +03:00
/* port 0 is the switch itself and never has a remote */
2019-09-30 14:07:22 +03:00
tb_switch_for_each_port ( sw , port ) {
if ( tb_port_has_remote ( port ) ) {
tb_switch_remove ( port - > remote - > sw ) ;
port - > remote = NULL ;
} else if ( port - > xdomain ) {
tb_xdomain_remove ( port - > xdomain ) ;
port - > xdomain = NULL ;
2019-03-07 16:26:45 +03:00
}
2020-03-05 17:39:58 +03:00
/* Remove any downstream retimers */
tb_retimer_remove_all ( port ) ;
2017-06-06 15:25:01 +03:00
}
if ( ! sw - > is_unplugged )
tb_plug_events_active ( sw , false ) ;
2019-12-17 15:33:40 +03:00
2017-06-06 15:25:17 +03:00
tb_switch_nvm_remove ( sw ) ;
2021-04-01 17:34:20 +03:00
usb4_switch_remove_ports ( sw ) ;
2018-10-01 12:31:20 +03:00
if ( tb_route ( sw ) )
dev_info ( & sw - > dev , " device disconnected \n " ) ;
2017-06-06 15:25:01 +03:00
device_unregister ( & sw - > dev ) ;
2014-06-04 00:04:02 +04:00
}

/**
 * tb_sw_set_unplugged() - set is_unplugged on switch and downstream switches
 * @sw: Router to mark unplugged
 */
void tb_sw_set_unplugged(struct tb_switch *sw)
{
2019-09-30 14:07:22 +03:00
struct tb_port * port ;
2014-06-04 00:04:06 +04:00
if ( sw = = sw - > tb - > root_switch ) {
tb_sw_WARN ( sw , " cannot unplug root switch \n " ) ;
return ;
}
if ( sw - > is_unplugged ) {
tb_sw_WARN ( sw , " is_unplugged already set \n " ) ;
return ;
}
sw - > is_unplugged = true ;
2019-09-30 14:07:22 +03:00
tb_switch_for_each_port ( sw , port ) {
if ( tb_port_has_remote ( port ) )
tb_sw_set_unplugged ( port - > remote - > sw ) ;
else if ( port - > xdomain )
port - > xdomain - > is_unplugged = true ;
2014-06-04 00:04:06 +04:00
}
}
2019-12-06 19:36:07 +03:00
static int tb_switch_set_wake ( struct tb_switch * sw , unsigned int flags )
{
if ( flags )
tb_sw_dbg ( sw , " enabling wakeup: %#x \n " , flags ) ;
else
tb_sw_dbg ( sw , " disabling wakeup \n " ) ;
if ( tb_switch_is_usb4 ( sw ) )
return usb4_switch_set_wake ( sw , flags ) ;
return tb_lc_set_wake ( sw , flags ) ;
}
int tb_switch_resume(struct tb_switch *sw)
{
	struct tb_port *port;
	int err;

	tb_sw_dbg(sw, "resuming switch\n");

	/*
	 * Check for UID of the connected switches except for root
	 * switch which we assume cannot be removed.
	 */
	if (tb_route(sw)) {
		u64 uid;

		/*
		 * Check first that we can still read the switch config
		 * space. It may be that there is now another domain
		 * connected.
		 */
		err = tb_cfg_get_upstream_port(sw->tb->ctl, tb_route(sw));
		if (err < 0) {
			tb_sw_info(sw, "switch not present anymore\n");
			return err;
		}

		/* We don't have any way to confirm this was the same device */
		if (!sw->uid)
			return -ENODEV;

		if (tb_switch_is_usb4(sw))
			err = usb4_switch_read_uid(sw, &uid);
		else
			err = tb_drom_read_uid_only(sw, &uid);
		if (err) {
			tb_sw_warn(sw, "uid read failed\n");
			return err;
		}
		if (sw->uid != uid) {
			tb_sw_info(sw,
				"changed while suspended (uid %#llx -> %#llx)\n",
				sw->uid, uid);
			return -ENODEV;
		}
	}

	err = tb_switch_configure(sw);
	if (err)
		return err;

	/* Disable wakes */
	tb_switch_set_wake(sw, 0);

	err = tb_switch_tmu_init(sw);
	if (err)
		return err;

	/* check for surviving downstream switches */
	tb_switch_for_each_port(sw, port) {
		if (!tb_port_is_null(port))
			continue;

		if (!tb_port_resume(port))
			continue;

		if (tb_wait_for_port(port, true) <= 0) {
			tb_port_warn(port,
				     "lost during suspend, disconnecting\n");
			if (tb_port_has_remote(port))
				tb_sw_set_unplugged(port->remote->sw);
			else if (port->xdomain)
				port->xdomain->is_unplugged = true;
		} else {
			/*
			 * Always unlock the port so the downstream
			 * switch/domain is accessible.
			 */
			if (tb_port_unlock(port))
				tb_port_warn(port, "failed to unlock port\n");
			if (port->remote && tb_switch_resume(port->remote->sw)) {
				tb_port_warn(port,
					     "lost during suspend, disconnecting\n");
				tb_sw_set_unplugged(port->remote->sw);
			}
		}
	}
	return 0;
}

/**
 * tb_switch_suspend() - Put a switch to sleep
 * @sw: Switch to suspend
 * @runtime: Is this runtime suspend or system sleep
 *
 * Suspends the router and all its children. Enables wakes according to
 * the value of @runtime and then sets the sleep bit for the router. If
 * @sw is the host router, the domain is ready to go to sleep once this
 * function returns.
 */
void tb_switch_suspend(struct tb_switch *sw, bool runtime)
{
	unsigned int flags = 0;
	struct tb_port *port;
	int err;

	tb_sw_dbg(sw, "suspending switch\n");

	/*
	 * Actually only needed for Titan Ridge but for simplicity can be
	 * done for USB4 device too as CLx is re-enabled at resume.
	 */
	tb_switch_clx_disable(sw);

	err = tb_plug_events_active(sw, false);
	if (err)
		return;

	tb_switch_for_each_port(sw, port) {
		if (tb_port_has_remote(port))
			tb_switch_suspend(port->remote->sw, runtime);
	}

	if (runtime) {
		/* Trigger wake when something is plugged in/out */
		flags |= TB_WAKE_ON_CONNECT | TB_WAKE_ON_DISCONNECT;
		flags |= TB_WAKE_ON_USB4;
		flags |= TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE | TB_WAKE_ON_DP;
	} else if (device_may_wakeup(&sw->dev)) {
		flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
	}

	tb_switch_set_wake(sw, flags);

	if (tb_switch_is_usb4(sw))
		usb4_switch_set_sleep(sw);
	else
		tb_lc_set_sleep(sw);
}

/**
 * tb_switch_query_dp_resource() - Query availability of DP resource
 * @sw: Switch whose DP resource is queried
 * @in: DP IN port
 *
 * Queries availability of DP resource for DP tunneling using switch
 * specific means. Returns %true if resource is available.
 */
bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
	if (tb_switch_is_usb4(sw))
		return usb4_switch_query_dp_resource(sw, in);
	return tb_lc_dp_sink_query(sw, in);
}

/**
 * tb_switch_alloc_dp_resource() - Allocate available DP resource
 * @sw: Switch whose DP resource is allocated
 * @in: DP IN port
 *
 * Allocates DP resource for DP tunneling. The resource must be
 * available for this to succeed (see tb_switch_query_dp_resource()).
 * Returns %0 on success and negative errno otherwise.
 */
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
	int ret;

	if (tb_switch_is_usb4(sw))
		ret = usb4_switch_alloc_dp_resource(sw, in);
	else
		ret = tb_lc_dp_sink_alloc(sw, in);

	if (ret)
		tb_sw_warn(sw, "failed to allocate DP resource for port %d\n",
			   in->port);
	else
		tb_sw_dbg(sw, "allocated DP resource for port %d\n", in->port);

	return ret;
}

/**
 * tb_switch_dealloc_dp_resource() - De-allocate DP resource
 * @sw: Switch whose DP resource is de-allocated
 * @in: DP IN port
 *
 * De-allocates DP resource that was previously allocated for DP
 * tunneling.
 */
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
	int ret;

	if (tb_switch_is_usb4(sw))
		ret = usb4_switch_dealloc_dp_resource(sw, in);
	else
		ret = tb_lc_dp_sink_dealloc(sw, in);

	if (ret)
		tb_sw_warn(sw,
			   "failed to de-allocate DP resource for port %d\n",
			   in->port);
	else
		tb_sw_dbg(sw, "released DP resource for port %d\n", in->port);
}

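/*
 * Lookup key used with bus_find_device() when searching for a router
 * by UUID, route string or link/depth position.
 */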
struct tb_sw_lookup {
	struct tb *tb;
	u8 link;
	u8 depth;
	const uuid_t *uuid;
	u64 route;
};

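/*
 * bus_find_device() match callback: matches by UUID if set, then by
 * route string, and finally by link/depth (the root switch is matched
 * by depth 0 only).
 */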
static int tb_switch_match(struct device *dev, const void *data)
{
	struct tb_switch *sw = tb_to_switch(dev);
	const struct tb_sw_lookup *lookup = data;

	if (!sw)
		return 0;
	if (sw->tb != lookup->tb)
		return 0;

	if (lookup->uuid)
		return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid));

	if (lookup->route) {
		return sw->config.route_lo == lower_32_bits(lookup->route) &&
		       sw->config.route_hi == upper_32_bits(lookup->route);
	}

	/* Root switch is matched only by depth */
	if (!lookup->depth)
		return !sw->depth;

	return sw->link == lookup->link && sw->depth == lookup->depth;
}

/**
 * tb_switch_find_by_link_depth() - Find switch by link and depth
 * @tb: Domain the switch belongs to
 * @link: Link number the switch is connected to
 * @depth: Depth of the switch in the link
 *
 * The returned switch has its reference count increased so the caller
 * needs to call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.link = link;
	lookup.depth = depth;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}

/**
 * tb_switch_find_by_uuid() - Find switch by UUID
 * @tb: Domain the switch belongs to
 * @uuid: UUID to look for
 *
 * The returned switch has its reference count increased so the caller
 * needs to call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.uuid = uuid;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}

/**
 * tb_switch_find_by_route() - Find switch by route string
 * @tb: Domain the switch belongs to
 * @route: Route string to look for
 *
 * The returned switch has its reference count increased so the caller
 * needs to call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	if (!route)
		return tb_switch_get(tb->root_switch);

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.route = route;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}

/**
 * tb_switch_find_port() - return the first port of @type on @sw or NULL
 * @sw: Switch to find the port from
 * @type: Port type to look for
 */
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
				    enum tb_port_type type)
{
	struct tb_port *port;

	tb_switch_for_each_port(sw, port) {
		if (port->config.type == type)
			return port;
	}

	return NULL;
}

/*
 * Can be used to read/write a specified PCIe bridge of any Thunderbolt 3
 * device. For now used only for Titan Ridge.
 */
static int tb_switch_pcie_bridge_write(struct tb_switch *sw, unsigned int bridge,
				       unsigned int pcie_offset, u32 value)
{
	u32 offset, command, val;
	int ret;

	if (sw->generation != 3)
		return -EOPNOTSUPP;

	offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_WR_DATA;
	ret = tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1);
	if (ret)
		return ret;

	command = pcie_offset & TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK;
	command |= BIT(bridge + TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT);
	command |= TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK;
	command |= TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL
			<< TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT;
	command |= TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK;

	offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_CMD;

	ret = tb_sw_write(sw, &command, TB_CFG_SWITCH, offset, 1);
	if (ret)
		return ret;

	ret = tb_switch_wait_for_bit(sw, offset,
				     TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK, 0, 100);
	if (ret)
		return ret;

	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
	if (ret)
		return ret;

	if (val & TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK)
		return -ETIMEDOUT;

	return 0;
}

/**
 * tb_switch_pcie_l1_enable() - Enable PCIe link to enter L1 state
 * @sw: Router to enable PCIe L1
 *
 * For a Titan Ridge switch to enter the CLx state, its PCIe bridges shall
 * enable entry to the PCIe L1 state. Shall be called after the upstream
 * PCIe tunnel has been configured. Due to an Intel platform limitation,
 * shall be called only for the first hop switch.
 */
int tb_switch_pcie_l1_enable(struct tb_switch *sw)
{
	struct tb_switch *parent = tb_switch_parent(sw);
	int ret;

	if (!tb_route(sw))
		return 0;

	if (!tb_switch_is_titan_ridge(sw))
		return 0;

	/* Enable PCIe L1 only for the first hop router (depth = 1) */
	if (tb_route(parent))
		return 0;

	/* Write to downstream PCIe bridge #5 aka Dn4 */
	ret = tb_switch_pcie_bridge_write(sw, 5, 0x143, 0x0c7806b1);
	if (ret)
		return ret;

	/* Write to Upstream PCIe bridge #0 aka Up0 */
	return tb_switch_pcie_bridge_write(sw, 0, 0x143, 0x0c5806b1);
}

/**
 * tb_switch_xhci_connect() - Connect internal xHCI
 * @sw: Router whose xHCI to connect
 *
 * Can be called for any router. For Alpine Ridge and Titan Ridge this
 * performs the special flows that make the xHCI functional for any
 * device connected to the Type-C port. Call only after the PCIe tunnel
 * has been established. The function only does the connect if it has
 * not been done already, so it can be called several times for the
 * same router.
 */
int tb_switch_xhci_connect(struct tb_switch *sw)
{
	struct tb_port *port1, *port3;
	int ret;

	if (sw->generation != 3)
		return 0;

	port1 = &sw->ports[1];
	port3 = &sw->ports[3];

	if (tb_switch_is_alpine_ridge(sw)) {
		bool usb_port1, usb_port3, xhci_port1, xhci_port3;

		usb_port1 = tb_lc_is_usb_plugged(port1);
		usb_port3 = tb_lc_is_usb_plugged(port3);
		xhci_port1 = tb_lc_is_xhci_connected(port1);
		xhci_port3 = tb_lc_is_xhci_connected(port3);

		/* Figure out correct USB port to connect */
		if (usb_port1 && !xhci_port1) {
			ret = tb_lc_xhci_connect(port1);
			if (ret)
				return ret;
		}

		if (usb_port3 && !xhci_port3)
			return tb_lc_xhci_connect(port3);
	} else if (tb_switch_is_titan_ridge(sw)) {
		ret = tb_lc_xhci_connect(port1);
		if (ret)
			return ret;
		return tb_lc_xhci_connect(port3);
	}

	return 0;
}

/**
 * tb_switch_xhci_disconnect() - Disconnect internal xHCI
 * @sw: Router whose xHCI to disconnect
 *
 * The opposite of tb_switch_xhci_connect(). Disconnects xHCI on both
 * ports.
 */
void tb_switch_xhci_disconnect(struct tb_switch *sw)
{
	if (sw->generation == 3) {
		struct tb_port *port1 = &sw->ports[1];
		struct tb_port *port3 = &sw->ports[3];

		tb_lc_xhci_disconnect(port1);
		tb_port_dbg(port1, "disconnected xHCI\n");
		tb_lc_xhci_disconnect(port3);
		tb_port_dbg(port3, "disconnected xHCI\n");
	}
}