License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be
applied to a file was done in a spreadsheet of side-by-side results from
the output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
the base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them were
included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
/*
 * Thunderbolt driver - switch/port utility functions
 *
 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
 * Copyright (C) 2018, Intel Corporation
 */

#include <linux/delay.h>
#include <linux/idr.h>
#include <linux/nvmem-provider.h>
#include <linux/pm_runtime.h>
#include <linux/sched/signal.h>
#include <linux/sizes.h>
#include <linux/slab.h>

#include "tb.h"
/* Switch NVM support */

#define NVM_CSS			0x10

struct nvm_auth_status {
	struct list_head list;
	uuid_t uuid;
	u32 status;
};

enum nvm_write_ops {
	WRITE_AND_AUTHENTICATE = 1,
	WRITE_ONLY = 2,
};
/*
 * Hold NVM authentication failure status per switch. This information
 * needs to stay around even when the switch gets power cycled so we
 * keep it separately.
 */
static LIST_HEAD(nvm_auth_status_cache);
static DEFINE_MUTEX(nvm_auth_status_lock);

static struct nvm_auth_status *__nvm_get_auth_status(const struct tb_switch *sw)
{
	struct nvm_auth_status *st;

	list_for_each_entry(st, &nvm_auth_status_cache, list) {
		if (uuid_equal(&st->uuid, sw->uuid))
			return st;
	}

	return NULL;
}
static void nvm_get_auth_status(const struct tb_switch *sw, u32 *status)
{
	struct nvm_auth_status *st;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);
	mutex_unlock(&nvm_auth_status_lock);

	*status = st ? st->status : 0;
}

static void nvm_set_auth_status(const struct tb_switch *sw, u32 status)
{
	struct nvm_auth_status *st;

	if (WARN_ON(!sw->uuid))
		return;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);

	if (!st) {
		st = kzalloc(sizeof(*st), GFP_KERNEL);
		if (!st)
			goto unlock;

		memcpy(&st->uuid, sw->uuid, sizeof(st->uuid));
		INIT_LIST_HEAD(&st->list);
		list_add_tail(&st->list, &nvm_auth_status_cache);
	}

	st->status = status;
unlock:
	mutex_unlock(&nvm_auth_status_lock);
}

static void nvm_clear_auth_status(const struct tb_switch *sw)
{
	struct nvm_auth_status *st;

	mutex_lock(&nvm_auth_status_lock);
	st = __nvm_get_auth_status(sw);
	if (st) {
		list_del(&st->list);
		kfree(st);
	}
	mutex_unlock(&nvm_auth_status_lock);
}
static int nvm_validate_and_write(struct tb_switch *sw)
{
	unsigned int image_size, hdr_size;
	const u8 *buf = sw->nvm->buf;
	u16 ds_size;
	int ret;

	if (!buf)
		return -EINVAL;

	image_size = sw->nvm->buf_data_size;
	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
		return -EINVAL;

	/*
	 * FARB pointer must point inside the image and must at least
	 * contain parts of the digital section we will be reading here.
	 */
	hdr_size = (*(u32 *)buf) & 0xffffff;
	if (hdr_size + NVM_DEVID + 2 >= image_size)
		return -EINVAL;

	/* Digital section start should be aligned to 4k page */
	if (!IS_ALIGNED(hdr_size, SZ_4K))
		return -EINVAL;

	/*
	 * Read digital section size and check that it also fits inside
	 * the image.
	 */
	ds_size = *(u16 *)(buf + hdr_size);
	if (ds_size >= image_size)
		return -EINVAL;

	if (!sw->safe_mode) {
		u16 device_id;

		/*
		 * Make sure the device ID in the image matches the one
		 * we read from the switch config space.
		 */
		device_id = *(u16 *)(buf + hdr_size + NVM_DEVID);
		if (device_id != sw->config.device_id)
			return -EINVAL;

		if (sw->generation < 3) {
			/* Write CSS headers first */
			ret = dma_port_flash_write(sw->dma_port,
				DMA_PORT_CSS_ADDRESS, buf + NVM_CSS,
				DMA_PORT_CSS_MAX_SIZE);
			if (ret)
				return ret;
		}

		/* Skip headers in the image */
		buf += hdr_size;
		image_size -= hdr_size;
	}

	if (tb_switch_is_usb4(sw))
		ret = usb4_switch_nvm_write(sw, 0, buf, image_size);
	else
		ret = dma_port_flash_write(sw->dma_port, 0, buf, image_size);
	if (!ret)
		sw->nvm->flushed = true;

	return ret;
}
static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
{
	int ret = 0;

	/*
	 * Root switch NVM upgrade requires that we disconnect the
	 * existing paths first (in case it is not in safe mode
	 * already).
	 */
	if (!sw->safe_mode) {
		u32 status;

		ret = tb_domain_disconnect_all_paths(sw->tb);
		if (ret)
			return ret;
		/*
		 * The host controller goes away pretty soon after this if
		 * everything goes well so getting timeout is expected.
		 */
		ret = dma_port_flash_update_auth(sw->dma_port);
		if (!ret || ret == -ETIMEDOUT)
			return 0;
		/*
		 * Any error from update auth operation requires power
		 * cycling of the host router.
		 */
		tb_sw_warn(sw, "failed to authenticate NVM, power cycling\n");
		if (dma_port_flash_update_auth_status(sw->dma_port, &status) > 0)
			nvm_set_auth_status(sw, status);
	}

	/*
	 * From safe mode we can get out by just power cycling the
	 * switch.
	 */
	dma_port_power_cycle(sw->dma_port);
	return ret;
}
static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
{
	int ret, retries = 10;

	ret = dma_port_flash_update_auth(sw->dma_port);
	switch (ret) {
	case 0:
	case -ETIMEDOUT:
	case -EACCES:
	case -EINVAL:
		/* Power cycle is required */
		break;
	default:
		return ret;
	}

	/*
	 * Poll here for the authentication status. It takes some time
	 * for the device to respond (we get timeout for a while). Once
	 * we get a response the device needs to be power cycled in
	 * order for the new NVM to be taken into use.
	 */
	do {
		u32 status;

		ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
		if (ret < 0 && ret != -ETIMEDOUT)
			return ret;
		if (ret > 0) {
			if (status) {
				tb_sw_warn(sw, "failed to authenticate NVM\n");
				nvm_set_auth_status(sw, status);
			}

			tb_sw_info(sw, "power cycling the switch now\n");
			dma_port_power_cycle(sw->dma_port);
			return 0;
		}

		msleep(500);
	} while (--retries);

	return -ETIMEDOUT;
}
static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
{
	struct pci_dev *root_port;

	/*
	 * During host router NVM upgrade we should not allow the root
	 * port to go into D3cold because some root ports cannot trigger
	 * PME themselves. To be on the safe side keep the root port in
	 * D0 during the whole upgrade process.
	 */
	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
	if (root_port)
		pm_runtime_get_noresume(&root_port->dev);
}

static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
{
	struct pci_dev *root_port;

	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
	if (root_port)
		pm_runtime_put(&root_port->dev);
}
static inline bool nvm_readable(struct tb_switch *sw)
{
	if (tb_switch_is_usb4(sw)) {
		/*
		 * USB4 devices must support NVM operations but it is
		 * optional for hosts. Therefore we query the NVM sector
		 * size here and if it is supported assume NVM
		 * operations are implemented.
		 */
		return usb4_switch_nvm_sector_size(sw) > 0;
	}

	/* Thunderbolt 2 and 3 devices support NVM through DMA port */
	return !!sw->dma_port;
}

static inline bool nvm_upgradeable(struct tb_switch *sw)
{
	if (sw->no_nvm_upgrade)
		return false;
	return nvm_readable(sw);
}
static inline int nvm_read(struct tb_switch *sw, unsigned int address,
			   void *buf, size_t size)
{
	if (tb_switch_is_usb4(sw))
		return usb4_switch_nvm_read(sw, address, buf, size);
	return dma_port_flash_read(sw->dma_port, address, buf, size);
}

static int nvm_authenticate(struct tb_switch *sw)
{
	int ret;

	if (tb_switch_is_usb4(sw))
		return usb4_switch_nvm_authenticate(sw);

	if (!tb_route(sw)) {
		nvm_authenticate_start_dma_port(sw);
		ret = nvm_authenticate_host_dma_port(sw);
	} else {
		ret = nvm_authenticate_device_dma_port(sw);
	}

	return ret;
}
static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
			      size_t bytes)
{
	struct tb_nvm *nvm = priv;
	struct tb_switch *sw = tb_to_switch(nvm->dev);
	int ret;

	pm_runtime_get_sync(&sw->dev);

	if (!mutex_trylock(&sw->tb->lock)) {
		ret = restart_syscall();
		goto out;
	}

	ret = nvm_read(sw, offset, val, bytes);
	mutex_unlock(&sw->tb->lock);

out:
	pm_runtime_mark_last_busy(&sw->dev);
	pm_runtime_put_autosuspend(&sw->dev);

	return ret;
}
static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
			       size_t bytes)
{
	struct tb_nvm *nvm = priv;
	struct tb_switch *sw = tb_to_switch(nvm->dev);
	int ret;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	/*
	 * Since writing the NVM image might require some special steps,
	 * for example when CSS headers are written, we cache the image
	 * locally here and handle the special cases when the user asks
	 * us to authenticate the image.
	 */
	ret = tb_nvm_write_buf(nvm, offset, val, bytes);
	mutex_unlock(&sw->tb->lock);

	return ret;
}
static int tb_switch_nvm_add(struct tb_switch *sw)
{
	struct tb_nvm *nvm;
	u32 val;
	int ret;

	if (!nvm_readable(sw))
		return 0;

	/*
	 * The NVM format of non-Intel hardware is not known so
	 * currently restrict NVM upgrade for Intel hardware. We may
	 * relax this in the future when we learn other NVM formats.
	 */
	if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL &&
	    sw->config.vendor_id != 0x8087) {
		dev_info(&sw->dev,
			 "NVM format of vendor %#x is not known, disabling NVM upgrade\n",
			 sw->config.vendor_id);
		return 0;
	}

	nvm = tb_nvm_alloc(&sw->dev);
	if (IS_ERR(nvm))
		return PTR_ERR(nvm);

	/*
	 * If the switch is in safe-mode the only accessible portion of
	 * the NVM is the non-active one where userspace is expected to
	 * write new functional NVM.
	 */
	if (!sw->safe_mode) {
		u32 nvm_size, hdr_size;

		ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
		if (ret)
			goto err_nvm;

		hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
		nvm_size = (SZ_1M << (val & 7)) / 8;
		nvm_size = (nvm_size - hdr_size) / 2;

		ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val));
		if (ret)
			goto err_nvm;

		nvm->major = val >> 16;
		nvm->minor = val >> 8;

		ret = tb_nvm_add_active(nvm, nvm_size, tb_switch_nvm_read);
		if (ret)
			goto err_nvm;
	}

	if (!sw->no_nvm_upgrade) {
		ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE,
					    tb_switch_nvm_write);
		if (ret)
			goto err_nvm;
	}

	sw->nvm = nvm;
	return 0;

err_nvm:
	tb_nvm_free(nvm);
	return ret;
}
static void tb_switch_nvm_remove(struct tb_switch *sw)
{
	struct tb_nvm *nvm;

	nvm = sw->nvm;
	sw->nvm = NULL;

	if (!nvm)
		return;

	/* Remove authentication status in case the switch is unplugged */
	if (!nvm->authenticating)
		nvm_clear_auth_status(sw);

	tb_nvm_free(nvm);
}
/* port utility functions */

static const char *tb_port_type(struct tb_regs_port_header *port)
{
	switch (port->type >> 16) {
	case 0:
		switch ((u8)port->type) {
		case 0:
			return "Inactive";
		case 1:
			return "Port";
		case 2:
			return "NHI";
		default:
			return "unknown";
		}
	case 0x2:
		return "Ethernet";
	case 0x8:
		return "SATA";
	case 0xe:
		return "DP/HDMI";
	case 0x10:
		return "PCIe";
	case 0x20:
		return "USB";
	default:
		return "unknown";
	}
}
static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port)
{
	tb_dbg(tb,
	       " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n",
	       port->port_number, port->vendor_id, port->device_id,
	       port->revision, port->thunderbolt_version, tb_port_type(port),
	       port->type);
	tb_dbg(tb, "  Max hop id (in/out): %d/%d\n",
	       port->max_in_hop_id, port->max_out_hop_id);
	tb_dbg(tb, "  Max counters: %d\n", port->max_counters);
	tb_dbg(tb, "  NFC Credits: %#x\n", port->nfc_credits);
}
/**
 * tb_port_state() - get connectedness state of a port
 * @port: the port to check
 *
 * The port must have a TB_CAP_PHY (i.e. it should be a real port).
 *
 * Return: Returns an enum tb_port_state on success or an error code on failure.
 */
int tb_port_state(struct tb_port *port)
{
	struct tb_cap_phy phy;
	int res;

	if (port->cap_phy == 0) {
		tb_port_WARN(port, "does not have a PHY\n");
		return -EINVAL;
	}
	res = tb_port_read(port, &phy, TB_CFG_PORT, port->cap_phy, 2);
	if (res)
		return res;
	return phy.state;
}
/**
 * tb_wait_for_port() - wait for a port to become ready
 * @port: Port to wait
 * @wait_if_unplugged: Wait also when port is unplugged
 *
 * Wait up to 1 second for a port to reach state TB_PORT_UP. If
 * wait_if_unplugged is set then we also wait if the port is in state
 * TB_PORT_UNPLUGGED (it takes a while for the device to be registered after
 * switch resume). Otherwise we only wait if a device is registered but the link
 * has not yet been established.
 *
 * Return: Returns an error code on failure. Returns 0 if the port is not
 * connected or failed to reach state TB_PORT_UP within one second. Returns 1
 * if the port is connected and in state TB_PORT_UP.
 */
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
{
	int retries = 10;
	int state;

	if (!port->cap_phy) {
		tb_port_WARN(port, "does not have PHY\n");
		return -EINVAL;
	}
	if (tb_is_upstream_port(port)) {
		tb_port_WARN(port, "is the upstream port\n");
		return -EINVAL;
	}

	while (retries--) {
		state = tb_port_state(port);
		if (state < 0)
			return state;
		if (state == TB_PORT_DISABLED) {
			tb_port_dbg(port, "is disabled (state: 0)\n");
			return 0;
		}
		if (state == TB_PORT_UNPLUGGED) {
			if (wait_if_unplugged) {
				/* used during resume */
				tb_port_dbg(port,
					    "is unplugged (state: 7), retrying...\n");
				msleep(100);
				continue;
			}
			tb_port_dbg(port, "is unplugged (state: 7)\n");
			return 0;
		}
		if (state == TB_PORT_UP) {
			tb_port_dbg(port, "is connected, link is up (state: 2)\n");
			return 1;
		}

		/*
		 * After plug-in the state is TB_PORT_CONNECTING. Give it some
		 * time.
		 */
		tb_port_dbg(port,
			    "is connected, link is not up (state: %d), retrying...\n",
			    state);
		msleep(100);
	}
	tb_port_warn(port,
		     "failed to reach state TB_PORT_UP. Ignoring port...\n");
	return 0;
}
/**
 * tb_port_add_nfc_credits() - add/remove non flow controlled credits to port
 * @port: Port to add/remove NFC credits
 * @credits: Credits to add/remove
 *
 * Change the number of NFC credits allocated to @port by @credits. To remove
 * NFC credits pass a negative amount of credits.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
int tb_port_add_nfc_credits(struct tb_port *port, int credits)
{
	u32 nfc_credits;

	if (credits == 0 || port->sw->is_unplugged)
		return 0;

	/*
	 * USB4 restricts programming NFC buffers to lane adapters only
	 * so skip other ports.
	 */
	if (tb_switch_is_usb4(port->sw) && !tb_port_is_null(port))
		return 0;

	nfc_credits = port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
	nfc_credits += credits;

	tb_port_dbg(port, "adding %d NFC credits to %lu", credits,
		    port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK);

	port->config.nfc_credits &= ~ADP_CS_4_NFC_BUFFERS_MASK;
	port->config.nfc_credits |= nfc_credits;

	return tb_port_write(port, &port->config.nfc_credits,
			     TB_CFG_PORT, ADP_CS_4, 1);
}
/**
 * tb_port_clear_counter() - clear a counter in TB_CFG_COUNTER
 * @port: Port whose counters to clear
 * @counter: Counter index to clear
 *
 * Return: Returns 0 on success or an error code on failure.
 */
int tb_port_clear_counter(struct tb_port *port, int counter)
{
	u32 zero[3] = { 0, 0, 0 };

	tb_port_dbg(port, "clearing counter %d\n", counter);
	return tb_port_write(port, zero, TB_CFG_COUNTERS, 3 * counter, 3);
}
/**
 * tb_port_unlock() - Unlock downstream port
 * @port: Port to unlock
 *
 * Needed for USB4 but can be called for any CIO/USB4 ports. Makes the
 * downstream router accessible for CM.
 */
int tb_port_unlock(struct tb_port *port)
{
	if (tb_switch_is_icm(port->sw))
		return 0;
	if (!tb_port_is_null(port))
		return -EINVAL;
	if (tb_switch_is_usb4(port->sw))
		return usb4_port_unlock(port);
	return 0;
}
static int __tb_port_enable(struct tb_port *port, bool enable)
{
	int ret;
	u32 phy;

	if (!tb_port_is_null(port))
		return -EINVAL;

	ret = tb_port_read(port, &phy, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	if (enable)
		phy &= ~LANE_ADP_CS_1_LD;
	else
		phy |= LANE_ADP_CS_1_LD;

	return tb_port_write(port, &phy, TB_CFG_PORT,
			     port->cap_phy + LANE_ADP_CS_1, 1);
}

/**
 * tb_port_enable() - Enable lane adapter
 * @port: Port to enable (can be %NULL)
 *
 * This is used for lane 0 and 1 adapters to enable it.
 */
int tb_port_enable(struct tb_port *port)
{
	return __tb_port_enable(port, true);
}

/**
 * tb_port_disable() - Disable lane adapter
 * @port: Port to disable (can be %NULL)
 *
 * This is used for lane 0 and 1 adapters to disable it.
 */
int tb_port_disable(struct tb_port *port)
{
	return __tb_port_enable(port, false);
}
/*
 * tb_init_port() - initialize a port
 *
 * This is a helper method for tb_switch_alloc. Does not check or initialize
 * any downstream switches.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
static int tb_init_port(struct tb_port *port)
{
	int res;
	int cap;

	res = tb_port_read(port, &port->config, TB_CFG_PORT, 0, 8);
	if (res) {
		if (res == -ENODEV) {
			tb_dbg(port->sw->tb, " Port %d: not implemented\n",
			       port->port);
			port->disabled = true;
			return 0;
		}
		return res;
	}

	/* Port 0 is the switch itself and has no PHY. */
	if (port->config.type == TB_TYPE_PORT && port->port != 0) {
		cap = tb_port_find_cap(port, TB_PORT_CAP_PHY);

		if (cap > 0)
			port->cap_phy = cap;
		else
			tb_port_WARN(port, "non switch port without a PHY\n");

		cap = tb_port_find_cap(port, TB_PORT_CAP_USB4);
		if (cap > 0)
			port->cap_usb4 = cap;
	} else if (port->port != 0) {
		cap = tb_port_find_cap(port, TB_PORT_CAP_ADAP);
		if (cap > 0)
			port->cap_adap = cap;
	}

	tb_dump_port(port->sw->tb, &port->config);

	INIT_LIST_HEAD(&port->list);
	return 0;
}
static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
			       int max_hopid)
{
	int port_max_hopid;
	struct ida *ida;

	if (in) {
		port_max_hopid = port->config.max_in_hop_id;
		ida = &port->in_hopids;
	} else {
		port_max_hopid = port->config.max_out_hop_id;
		ida = &port->out_hopids;
	}

	/*
	 * NHI can use HopIDs 1-max; for other adapters HopIDs 0-7 are
	 * reserved.
	 */
	if (!tb_port_is_nhi(port) && min_hopid < TB_PATH_MIN_HOPID)
		min_hopid = TB_PATH_MIN_HOPID;

	if (max_hopid < 0 || max_hopid > port_max_hopid)
		max_hopid = port_max_hopid;

	return ida_simple_get(ida, min_hopid, max_hopid + 1, GFP_KERNEL);
}
/**
 * tb_port_alloc_in_hopid() - Allocate input HopID from port
 * @port: Port to allocate HopID for
 * @min_hopid: Minimum acceptable input HopID
 * @max_hopid: Maximum acceptable input HopID
 *
 * Return: HopID between @min_hopid and @max_hopid or negative errno in
 * case of error.
 */
int tb_port_alloc_in_hopid(struct tb_port *port, int min_hopid, int max_hopid)
{
	return tb_port_alloc_hopid(port, true, min_hopid, max_hopid);
}

/**
 * tb_port_alloc_out_hopid() - Allocate output HopID from port
 * @port: Port to allocate HopID for
 * @min_hopid: Minimum acceptable output HopID
 * @max_hopid: Maximum acceptable output HopID
 *
 * Return: HopID between @min_hopid and @max_hopid or negative errno in
 * case of error.
 */
int tb_port_alloc_out_hopid(struct tb_port *port, int min_hopid, int max_hopid)
{
	return tb_port_alloc_hopid(port, false, min_hopid, max_hopid);
}

/**
 * tb_port_release_in_hopid() - Release allocated input HopID from port
 * @port: Port whose HopID to release
 * @hopid: HopID to release
 */
void tb_port_release_in_hopid(struct tb_port *port, int hopid)
{
	ida_simple_remove(&port->in_hopids, hopid);
}

/**
 * tb_port_release_out_hopid() - Release allocated output HopID from port
 * @port: Port whose HopID to release
 * @hopid: HopID to release
 */
void tb_port_release_out_hopid(struct tb_port *port, int hopid)
{
	ida_simple_remove(&port->out_hopids, hopid);
}
static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
					  const struct tb_switch *sw)
{
	u64 mask = (1ULL << parent->config.depth * 8) - 1;
	return (tb_route(parent) & mask) == (tb_route(sw) & mask);
}
/**
 * tb_next_port_on_path() - Return next port for given port on a path
 * @start: Start port of the walk
 * @end: End port of the walk
 * @prev: Previous port (%NULL if this is the first)
 *
 * This function can be used to walk from one port to another if they
 * are connected through zero or more switches. If the @prev is dual
 * link port, the function follows that link and returns another end on
 * that same link.
 *
 * If the @end port has been reached, return %NULL.
 *
 * Domain tb->lock must be held when this function is called.
 */
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
				     struct tb_port *prev)
{
	struct tb_port *next;

	if (!prev)
		return start;

	if (prev->sw == end->sw) {
		if (prev == end)
			return NULL;
		return end;
	}

	if (tb_switch_is_reachable(prev->sw, end->sw)) {
		next = tb_port_at(tb_route(end->sw), prev->sw);
		/* Walk down the topology if next == prev */
		if (prev->remote &&
		    (next == prev || next->dual_link_port == prev))
			next = prev->remote;
	} else {
		if (tb_is_upstream_port(prev)) {
			next = prev->remote;
		} else {
			next = tb_upstream_port(prev->sw);
			/*
			 * Keep the same link if prev and next are both
			 * dual link ports.
			 */
			if (next->dual_link_port &&
			    next->link_nr != prev->link_nr) {
				next = next->dual_link_port;
			}
		}
	}

	return next != prev ? next : NULL;
}
/**
 * tb_port_get_link_speed() - Get current link speed
 * @port: Port to check (USB4 or CIO)
 *
 * Returns link speed in Gb/s or negative errno in case of failure.
 */
int tb_port_get_link_speed(struct tb_port *port)
{
	u32 val, speed;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	speed = (val & LANE_ADP_CS_1_CURRENT_SPEED_MASK) >>
		LANE_ADP_CS_1_CURRENT_SPEED_SHIFT;
	return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
}
/**
 * tb_port_get_link_width() - Get current link width
 * @port: Port to check (USB4 or CIO)
 *
 * Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane)
 * or negative errno in case of failure.
 */
int tb_port_get_link_width(struct tb_port *port)
{
	u32 val;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	return (val & LANE_ADP_CS_1_CURRENT_WIDTH_MASK) >>
		LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
}

static bool tb_port_is_width_supported(struct tb_port *port, int width)
{
	u32 phy, widths;
	int ret;

	if (!port->cap_phy)
		return false;

	ret = tb_port_read(port, &phy, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_0, 1);
	if (ret)
		return false;

	widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
		LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;

	return !!(widths & width);
}

static int tb_port_set_link_width(struct tb_port *port, unsigned int width)
{
	u32 val;
	int ret;

	if (!port->cap_phy)
		return -EINVAL;

	ret = tb_port_read(port, &val, TB_CFG_PORT,
			   port->cap_phy + LANE_ADP_CS_1, 1);
	if (ret)
		return ret;

	val &= ~LANE_ADP_CS_1_TARGET_WIDTH_MASK;
	switch (width) {
	case 1:
		val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
		break;
	case 2:
		val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
		break;
	default:
		return -EINVAL;
	}

	val |= LANE_ADP_CS_1_LB;

	return tb_port_write(port, &val, TB_CFG_PORT,
			     port->cap_phy + LANE_ADP_CS_1, 1);
}
/**
 * tb_port_lane_bonding_enable() - Enable bonding on port
 * @port: port to enable
 *
 * Enable bonding by setting the link width of the port and the
 * other port in case of dual link port.
 *
 * Return: %0 in case of success and negative errno in case of error
 */
int tb_port_lane_bonding_enable(struct tb_port *port)
{
	int ret;

	/*
	 * Enable lane bonding for both links if not already enabled by,
	 * for example, the boot firmware.
	 */
	ret = tb_port_get_link_width(port);
	if (ret == 1) {
		ret = tb_port_set_link_width(port, 2);
		if (ret)
			return ret;
	}

	ret = tb_port_get_link_width(port->dual_link_port);
	if (ret == 1) {
		ret = tb_port_set_link_width(port->dual_link_port, 2);
		if (ret) {
			tb_port_set_link_width(port, 1);
			return ret;
		}
	}

	port->bonded = true;
	port->dual_link_port->bonded = true;

	return 0;
}
/**
 * tb_port_lane_bonding_disable() - Disable bonding on port
 * @port: port to disable
 *
 * Disable bonding by setting the link width of the port and the
 * other port in case of dual link port.
 */
void tb_port_lane_bonding_disable(struct tb_port *port)
{
	port->dual_link_port->bonded = false;
	port->bonded = false;

	tb_port_set_link_width(port->dual_link_port, 1);
	tb_port_set_link_width(port, 1);
}

static int tb_port_start_lane_initialization(struct tb_port *port)
{
	int ret;

	if (tb_switch_is_usb4(port->sw))
		return 0;

	ret = tb_lc_start_lane_initialization(port);
	return ret == -EINVAL ? 0 : ret;
}
/**
 * tb_port_is_enabled() - Is the adapter port enabled
 * @port: Port to check
 */
bool tb_port_is_enabled(struct tb_port *port)
{
	switch (port->config.type) {
	case TB_TYPE_PCIE_UP:
	case TB_TYPE_PCIE_DOWN:
		return tb_pci_port_is_enabled(port);

	case TB_TYPE_DP_HDMI_IN:
	case TB_TYPE_DP_HDMI_OUT:
		return tb_dp_port_is_enabled(port);

	case TB_TYPE_USB3_UP:
	case TB_TYPE_USB3_DOWN:
		return tb_usb3_port_is_enabled(port);

	default:
		return false;
	}
}

/**
 * tb_usb3_port_is_enabled() - Is the USB3 adapter port enabled
 * @port: USB3 adapter port to check
 */
bool tb_usb3_port_is_enabled(struct tb_port *port)
{
	u32 data;

	if (tb_port_read(port, &data, TB_CFG_PORT,
			 port->cap_adap + ADP_USB3_CS_0, 1))
		return false;

	return !!(data & ADP_USB3_CS_0_PE);
}

/**
 * tb_usb3_port_enable() - Enable USB3 adapter port
 * @port: USB3 adapter port to enable
 * @enable: Enable/disable the USB3 adapter
 */
int tb_usb3_port_enable(struct tb_port *port, bool enable)
{
	u32 word = enable ? (ADP_USB3_CS_0_PE | ADP_USB3_CS_0_V)
			  : ADP_USB3_CS_0_V;

	if (!port->cap_adap)
		return -ENXIO;
	return tb_port_write(port, &word, TB_CFG_PORT,
			     port->cap_adap + ADP_USB3_CS_0, 1);
}
/**
 * tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
 * @port: PCIe port to check
 */
bool tb_pci_port_is_enabled(struct tb_port *port)
{
	u32 data;

	if (tb_port_read(port, &data, TB_CFG_PORT,
			 port->cap_adap + ADP_PCIE_CS_0, 1))
		return false;

	return !!(data & ADP_PCIE_CS_0_PE);
}

/**
 * tb_pci_port_enable() - Enable PCIe adapter port
 * @port: PCIe port to enable
 * @enable: Enable/disable the PCIe adapter
 */
int tb_pci_port_enable(struct tb_port *port, bool enable)
{
	u32 word = enable ? ADP_PCIE_CS_0_PE : 0x0;

	if (!port->cap_adap)
		return -ENXIO;
	return tb_port_write(port, &word, TB_CFG_PORT,
			     port->cap_adap + ADP_PCIE_CS_0, 1);
}
/**
 * tb_dp_port_hpd_is_active() - Is HPD already active
 * @port: DP out port to check
 *
 * Checks if the DP OUT adapter port has HDP bit already set.
 */
int tb_dp_port_hpd_is_active(struct tb_port *port)
{
	u32 data;
	int ret;

	ret = tb_port_read(port, &data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_2, 1);
	if (ret)
		return ret;

	return !!(data & ADP_DP_CS_2_HDP);
}

/**
 * tb_dp_port_hpd_clear() - Clear HPD from DP IN port
 * @port: Port to clear HPD
 *
 * If the DP IN port has HDP set, this function can be used to clear it.
 */
int tb_dp_port_hpd_clear(struct tb_port *port)
{
	u32 data;
	int ret;

	ret = tb_port_read(port, &data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_3, 1);
	if (ret)
		return ret;

	data |= ADP_DP_CS_3_HDPC;
	return tb_port_write(port, &data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_3, 1);
}
/**
 * tb_dp_port_set_hops() - Set video/aux Hop IDs for DP port
 * @port: DP IN/OUT port to set hops
 * @video: Video Hop ID
 * @aux_tx: AUX TX Hop ID
 * @aux_rx: AUX RX Hop ID
 *
 * Programs specified Hop IDs for DP IN/OUT port.
 */
int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
			unsigned int aux_tx, unsigned int aux_rx)
{
	u32 data[2];
	int ret;

	ret = tb_port_read(port, data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
	if (ret)
		return ret;

	data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK;
	data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK;
	data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK;

	data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) &
		ADP_DP_CS_0_VIDEO_HOPID_MASK;
	data[1] |= aux_tx & ADP_DP_CS_1_AUX_TX_HOPID_MASK;
	data[1] |= (aux_rx << ADP_DP_CS_1_AUX_RX_HOPID_SHIFT) &
		ADP_DP_CS_1_AUX_RX_HOPID_MASK;

	return tb_port_write(port, data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}
/**
 * tb_dp_port_is_enabled() - Is DP adapter port enabled
 * @port: DP adapter port to check
 */
bool tb_dp_port_is_enabled(struct tb_port *port)
{
	u32 data[2];

	if (tb_port_read(port, data, TB_CFG_PORT, port->cap_adap + ADP_DP_CS_0,
			 ARRAY_SIZE(data)))
		return false;

	return !!(data[0] & (ADP_DP_CS_0_VE | ADP_DP_CS_0_AE));
}

/**
 * tb_dp_port_enable() - Enables/disables DP paths of a port
 * @port: DP IN/OUT port
 * @enable: Enable/disable DP path
 *
 * Once Hop IDs are programmed DP paths can be enabled or disabled by
 * calling this function.
 */
int tb_dp_port_enable(struct tb_port *port, bool enable)
{
	u32 data[2];
	int ret;

	ret = tb_port_read(port, data, TB_CFG_PORT,
			   port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
	if (ret)
		return ret;

	if (enable)
		data[0] |= ADP_DP_CS_0_VE | ADP_DP_CS_0_AE;
	else
		data[0] &= ~(ADP_DP_CS_0_VE | ADP_DP_CS_0_AE);

	return tb_port_write(port, data, TB_CFG_PORT,
			     port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
}
/* switch utility functions */

static const char *tb_switch_generation_name(const struct tb_switch *sw)
{
	switch (sw->generation) {
	case 1:
		return "Thunderbolt 1";
	case 2:
		return "Thunderbolt 2";
	case 3:
		return "Thunderbolt 3";
	case 4:
		return "USB4";
	default:
		return "Unknown";
	}
}

static void tb_dump_switch(const struct tb *tb, const struct tb_switch *sw)
{
	const struct tb_regs_switch_header *regs = &sw->config;

	tb_dbg(tb, "%s Switch: %x:%x (Revision: %d, TB Version: %d)\n",
	       tb_switch_generation_name(sw), regs->vendor_id, regs->device_id,
	       regs->revision, regs->thunderbolt_version);
	tb_dbg(tb, "  Max Port Number: %d\n", regs->max_port_number);
	tb_dbg(tb, "  Config:\n");
	tb_dbg(tb,
		"   Upstream Port Number: %d Depth: %d Route String: %#llx Enabled: %d, PlugEventsDelay: %dms\n",
	       regs->upstream_port_number, regs->depth,
	       (((u64)regs->route_hi) << 32) | regs->route_lo,
	       regs->enabled, regs->plug_events_delay);
	tb_dbg(tb, "   unknown1: %#x unknown4: %#x\n",
	       regs->__unknown1, regs->__unknown4);
}
2014-06-04 00:04:12 +04:00
/**
2021-01-27 14:25:54 +03:00
* tb_switch_reset ( ) - reconfigure route , enable and send TB_CFG_PKG_RESET
2019-09-19 15:25:30 +03:00
* @ sw : Switch to reset
2014-06-04 00:04:12 +04:00
*
* Return : Returns 0 on success or an error code on failure .
*/
2019-09-19 15:25:30 +03:00
int tb_switch_reset ( struct tb_switch * sw )
2014-06-04 00:04:12 +04:00
{
struct tb_cfg_result res ;
2019-09-19 15:25:30 +03:00
if ( sw - > generation > 1 )
return 0 ;
tb_sw_dbg ( sw , " resetting switch \n " ) ;
res . err = tb_sw_write ( sw , ( ( u32 * ) & sw - > config ) + 2 ,
TB_CFG_SWITCH , 2 , 2 ) ;
2014-06-04 00:04:12 +04:00
if ( res . err )
return res . err ;
2020-12-22 14:40:31 +03:00
res = tb_cfg_reset ( sw - > tb - > ctl , tb_route ( sw ) ) ;
2014-06-04 00:04:12 +04:00
if ( res . err > 0 )
return - EIO ;
return res . err ;
}
/*
 * tb_plug_events_active() - enable/disable plug events on a switch
 *
 * Also configures a sane plug_events_delay of 255ms.
 *
 * Return: Returns 0 on success or an error code on failure.
 */
static int tb_plug_events_active(struct tb_switch *sw, bool active)
{
	u32 data;
	int res;

	if (tb_switch_is_icm(sw) || tb_switch_is_usb4(sw))
		return 0;

	sw->config.plug_events_delay = 0xff;
	res = tb_sw_write(sw, ((u32 *) &sw->config) + 4, TB_CFG_SWITCH, 4, 1);
	if (res)
		return res;

	res = tb_sw_read(sw, &data, TB_CFG_SWITCH, sw->cap_plug_events + 1, 1);
	if (res)
		return res;

	if (active) {
		data = data & 0xFFFFFF83;
		switch (sw->config.device_id) {
		case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
		case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE:
		case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
			break;
		default:
			data |= 4;
		}
	} else {
		data = data | 0x7c;
	}
	return tb_sw_write(sw, &data, TB_CFG_SWITCH,
			   sw->cap_plug_events + 1, 1);
}
static ssize_t authorized_show(struct device *dev,
			       struct device_attribute *attr,
			       char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%u\n", sw->authorized);
}

static int disapprove_switch(struct device *dev, void *not_used)
{
	struct tb_switch *sw;

	sw = tb_to_switch(dev);
	if (sw && sw->authorized) {
		int ret;

		/* First children */
		ret = device_for_each_child_reverse(&sw->dev, NULL, disapprove_switch);
		if (ret)
			return ret;

		ret = tb_domain_disapprove_switch(sw->tb, sw);
		if (ret)
			return ret;

		sw->authorized = 0;
		kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
	}

	return 0;
}
static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
{
	int ret = -EINVAL;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	if (!!sw->authorized == !!val)
		goto unlock;

	switch (val) {
	/* Disapprove switch */
	case 0:
		if (tb_route(sw)) {
			ret = disapprove_switch(&sw->dev, NULL);
			goto unlock;
		}
		break;

	/* Approve switch */
	case 1:
		if (sw->key)
			ret = tb_domain_approve_switch_key(sw->tb, sw);
		else
			ret = tb_domain_approve_switch(sw->tb, sw);
		break;

	/* Challenge switch */
	case 2:
		if (sw->key)
			ret = tb_domain_challenge_switch_key(sw->tb, sw);
		break;

	default:
		break;
	}

	if (!ret) {
		sw->authorized = val;
		/* Notify status change to the userspace */
		kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
	}

unlock:
	mutex_unlock(&sw->tb->lock);
	return ret;
}

static ssize_t authorized_store(struct device *dev,
				struct device_attribute *attr,
				const char *buf, size_t count)
{
	struct tb_switch *sw = tb_to_switch(dev);
	unsigned int val;
	ssize_t ret;

	ret = kstrtouint(buf, 0, &val);
	if (ret)
		return ret;
	if (val > 2)
		return -EINVAL;

	pm_runtime_get_sync(&sw->dev);
	ret = tb_switch_set_authorized(sw, val);
	pm_runtime_mark_last_busy(&sw->dev);
	pm_runtime_put_autosuspend(&sw->dev);

	return ret ? ret : count;
}
static DEVICE_ATTR_RW(authorized);
static ssize_t boot_show(struct device *dev, struct device_attribute *attr,
			 char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%u\n", sw->boot);
}
static DEVICE_ATTR_RO(boot);
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%#x\n", sw->device);
}
static DEVICE_ATTR_RO(device);

static ssize_t
device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%s\n", sw->device_name ? sw->device_name : "");
}
static DEVICE_ATTR_RO(device_name);

static ssize_t
generation_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%u\n", sw->generation);
}
static DEVICE_ATTR_RO(generation);
static ssize_t key_show(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);
	ssize_t ret;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	if (sw->key)
		ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key);
	else
		ret = sprintf(buf, "\n");

	mutex_unlock(&sw->tb->lock);
	return ret;
}

static ssize_t key_store(struct device *dev, struct device_attribute *attr,
			 const char *buf, size_t count)
{
	struct tb_switch *sw = tb_to_switch(dev);
	u8 key[TB_SWITCH_KEY_SIZE];
	ssize_t ret = count;
	bool clear = false;

	if (!strcmp(buf, "\n"))
		clear = true;
	else if (hex2bin(key, buf, sizeof(key)))
		return -EINVAL;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	if (sw->authorized) {
		ret = -EBUSY;
	} else {
		kfree(sw->key);
		if (clear) {
			sw->key = NULL;
		} else {
			sw->key = kmemdup(key, sizeof(key), GFP_KERNEL);
			if (!sw->key)
				ret = -ENOMEM;
		}
	}

	mutex_unlock(&sw->tb->lock);
	return ret;
}
static DEVICE_ATTR(key, 0600, key_show, key_store);
static ssize_t speed_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%u.0 Gb/s\n", sw->link_speed);
}

/*
 * Currently all lanes must run at the same speed but we expose here
 * both directions to allow possible asymmetric links in the future.
 */
static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);

static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%u\n", sw->link_width);
}

/*
 * Currently link has same amount of lanes both directions (1 or 2) but
 * expose them separately to allow possible asymmetric links in the future.
 */
static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
static ssize_t nvm_authenticate_show(struct device *dev,
	struct device_attribute *attr, char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);
	u32 status;

	nvm_get_auth_status(sw, &status);
	return sprintf(buf, "%#x\n", status);
}
static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
				      bool disconnect)
{
	struct tb_switch *sw = tb_to_switch(dev);
	int val;
	int ret;

	pm_runtime_get_sync(&sw->dev);

	if (!mutex_trylock(&sw->tb->lock)) {
		ret = restart_syscall();
		goto exit_rpm;
	}

	/* If NVMem devices are not yet added */
	if (!sw->nvm) {
		ret = -EAGAIN;
		goto exit_unlock;
	}

	ret = kstrtoint(buf, 10, &val);
	if (ret)
		goto exit_unlock;

	/* Always clear the authentication status */
	nvm_clear_auth_status(sw);

	if (val > 0) {
		if (!sw->nvm->flushed) {
			if (!sw->nvm->buf) {
				ret = -EINVAL;
				goto exit_unlock;
			}

			ret = nvm_validate_and_write(sw);
			if (ret || val == WRITE_ONLY)
				goto exit_unlock;
		}
		if (val == WRITE_AND_AUTHENTICATE) {
			if (disconnect) {
				ret = tb_lc_force_power(sw);
			} else {
				sw->nvm->authenticating = true;
				ret = nvm_authenticate(sw);
			}
		}
	}

exit_unlock:
	mutex_unlock(&sw->tb->lock);
exit_rpm:
	pm_runtime_mark_last_busy(&sw->dev);
	pm_runtime_put_autosuspend(&sw->dev);

	return ret;
}

static ssize_t nvm_authenticate_store(struct device *dev,
	struct device_attribute *attr, const char *buf, size_t count)
{
	int ret = nvm_authenticate_sysfs(dev, buf, false);

	if (ret)
		return ret;
	return count;
}
static DEVICE_ATTR_RW(nvm_authenticate);
static ssize_t nvm_authenticate_on_disconnect_show(struct device *dev,
	struct device_attribute *attr, char *buf)
{
	return nvm_authenticate_show(dev, attr, buf);
}

static ssize_t nvm_authenticate_on_disconnect_store(struct device *dev,
	struct device_attribute *attr, const char *buf, size_t count)
{
	int ret;

	ret = nvm_authenticate_sysfs(dev, buf, true);
	return ret ? ret : count;
}
static DEVICE_ATTR_RW(nvm_authenticate_on_disconnect);
static ssize_t nvm_version_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);
	int ret;

	if (!mutex_trylock(&sw->tb->lock))
		return restart_syscall();

	if (sw->safe_mode)
		ret = -ENODATA;
	else if (!sw->nvm)
		ret = -EAGAIN;
	else
		ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor);

	mutex_unlock(&sw->tb->lock);

	return ret;
}
static DEVICE_ATTR_RO(nvm_version);
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%#x\n", sw->vendor);
}
static DEVICE_ATTR_RO(vendor);

static ssize_t
vendor_name_show(struct device *dev, struct device_attribute *attr, char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%s\n", sw->vendor_name ? sw->vendor_name : "");
}
static DEVICE_ATTR_RO(vendor_name);

static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
			      char *buf)
{
	struct tb_switch *sw = tb_to_switch(dev);

	return sprintf(buf, "%pUb\n", sw->uuid);
}
static DEVICE_ATTR_RO(unique_id);
static struct attribute *switch_attrs[] = {
	&dev_attr_authorized.attr,
	&dev_attr_boot.attr,
	&dev_attr_device.attr,
	&dev_attr_device_name.attr,
	&dev_attr_generation.attr,
	&dev_attr_key.attr,
	&dev_attr_nvm_authenticate.attr,
	&dev_attr_nvm_authenticate_on_disconnect.attr,
	&dev_attr_nvm_version.attr,
	&dev_attr_rx_speed.attr,
	&dev_attr_rx_lanes.attr,
	&dev_attr_tx_speed.attr,
	&dev_attr_tx_lanes.attr,
	&dev_attr_vendor.attr,
	&dev_attr_vendor_name.attr,
	&dev_attr_unique_id.attr,
	NULL,
};
static bool has_port(const struct tb_switch *sw, enum tb_port_type type)
{
	const struct tb_port *port;

	tb_switch_for_each_port(sw, port) {
		if (!port->disabled && port->config.type == type)
			return true;
	}
	return false;
}
static umode_t switch_attr_is_visible(struct kobject *kobj,
				      struct attribute *attr, int n)
{
	struct device *dev = kobj_to_dev(kobj);
	struct tb_switch *sw = tb_to_switch(dev);

	if (attr == &dev_attr_authorized.attr) {
		if (sw->tb->security_level == TB_SECURITY_NOPCIE ||
		    sw->tb->security_level == TB_SECURITY_DPONLY ||
		    !has_port(sw, TB_TYPE_PCIE_UP))
			return 0;
	} else if (attr == &dev_attr_device.attr) {
		if (!sw->device)
			return 0;
	} else if (attr == &dev_attr_device_name.attr) {
		if (!sw->device_name)
			return 0;
	} else if (attr == &dev_attr_vendor.attr) {
		if (!sw->vendor)
			return 0;
	} else if (attr == &dev_attr_vendor_name.attr) {
		if (!sw->vendor_name)
			return 0;
	} else if (attr == &dev_attr_key.attr) {
		if (tb_route(sw) &&
		    sw->tb->security_level == TB_SECURITY_SECURE &&
		    sw->security_level == TB_SECURITY_SECURE)
			return attr->mode;
		return 0;
	} else if (attr == &dev_attr_rx_speed.attr ||
		   attr == &dev_attr_rx_lanes.attr ||
		   attr == &dev_attr_tx_speed.attr ||
		   attr == &dev_attr_tx_lanes.attr) {
		if (tb_route(sw))
			return attr->mode;
		return 0;
	} else if (attr == &dev_attr_nvm_authenticate.attr) {
		if (nvm_upgradeable(sw))
			return attr->mode;
		return 0;
	} else if (attr == &dev_attr_nvm_version.attr) {
		if (nvm_readable(sw))
			return attr->mode;
		return 0;
	} else if (attr == &dev_attr_boot.attr) {
		if (tb_route(sw))
			return attr->mode;
		return 0;
	} else if (attr == &dev_attr_nvm_authenticate_on_disconnect.attr) {
		if (sw->quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER)
			return attr->mode;
		return 0;
	}

	return sw->safe_mode ? 0 : attr->mode;
}
static const struct attribute_group switch_group = {
	.is_visible = switch_attr_is_visible,
	.attrs = switch_attrs,
};

static const struct attribute_group *switch_groups[] = {
	&switch_group,
	NULL,
};
static void tb_switch_release(struct device *dev)
{
	struct tb_switch *sw = tb_to_switch(dev);
	struct tb_port *port;

	dma_port_free(sw->dma_port);

	tb_switch_for_each_port(sw, port) {
		ida_destroy(&port->in_hopids);
		ida_destroy(&port->out_hopids);
	}

	kfree(sw->uuid);
	kfree(sw->device_name);
	kfree(sw->vendor_name);
	kfree(sw->ports);
	kfree(sw->drom);
	kfree(sw->key);
	kfree(sw);
}
static int tb_switch_uevent(struct device *dev, struct kobj_uevent_env *env)
{
	struct tb_switch *sw = tb_to_switch(dev);
	const char *type;

	if (sw->config.thunderbolt_version == USB4_VERSION_1_0) {
		if (add_uevent_var(env, "USB4_VERSION=1.0"))
			return -ENOMEM;
	}

	if (!tb_route(sw)) {
		type = "host";
	} else {
		const struct tb_port *port;
		bool hub = false;

		/* Device is hub if it has any downstream ports */
		tb_switch_for_each_port(sw, port) {
			if (!port->disabled && !tb_is_upstream_port(port) &&
			    tb_port_is_null(port)) {
				hub = true;
				break;
			}
		}

		type = hub ? "hub" : "device";
	}

	if (add_uevent_var(env, "USB4_TYPE=%s", type))
		return -ENOMEM;
	return 0;
}
/*
 * Currently only need to provide the callbacks. Everything else is handled
 * in the connection manager.
 */
static int __maybe_unused tb_switch_runtime_suspend(struct device *dev)
{
	struct tb_switch *sw = tb_to_switch(dev);
	const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;

	if (cm_ops->runtime_suspend_switch)
		return cm_ops->runtime_suspend_switch(sw);

	return 0;
}

static int __maybe_unused tb_switch_runtime_resume(struct device *dev)
{
	struct tb_switch *sw = tb_to_switch(dev);
	const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;

	if (cm_ops->runtime_resume_switch)
		return cm_ops->runtime_resume_switch(sw);
	return 0;
}

static const struct dev_pm_ops tb_switch_pm_ops = {
	SET_RUNTIME_PM_OPS(tb_switch_runtime_suspend, tb_switch_runtime_resume,
			   NULL)
};
struct device_type tb_switch_type = {
	.name = "thunderbolt_device",
	.release = tb_switch_release,
	.uevent = tb_switch_uevent,
	.pm = &tb_switch_pm_ops,
};
static int tb_switch_get_generation(struct tb_switch *sw)
{
	switch (sw->config.device_id) {
	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
	case PCI_DEVICE_ID_INTEL_EAGLE_RIDGE:
	case PCI_DEVICE_ID_INTEL_LIGHT_PEAK:
	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_2C:
	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
	case PCI_DEVICE_ID_INTEL_PORT_RIDGE:
	case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_REDWOOD_RIDGE_4C_BRIDGE:
		return 1;

	case PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_BRIDGE:
		return 2;

	case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
	case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE:
	case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE:
	case PCI_DEVICE_ID_INTEL_ICL_NHI0:
	case PCI_DEVICE_ID_INTEL_ICL_NHI1:
		return 3;

	default:
		if (tb_switch_is_usb4(sw))
			return 4;

		/*
		 * For unknown switches assume generation to be 1 to be
		 * on the safe side.
		 */
		tb_sw_warn(sw, "unsupported switch device id %#x\n",
			   sw->config.device_id);
		return 1;
	}
}
static bool tb_switch_exceeds_max_depth(const struct tb_switch *sw, int depth)
{
	int max_depth;

	if (tb_switch_is_usb4(sw) ||
	    (sw->tb->root_switch && tb_switch_is_usb4(sw->tb->root_switch)))
		max_depth = USB4_SWITCH_MAX_DEPTH;
	else
		max_depth = TB_SWITCH_MAX_DEPTH;

	return depth > max_depth;
}
/**
 * tb_switch_alloc() - allocate a switch
 * @tb: Pointer to the owning domain
 * @parent: Parent device for this switch
 * @route: Route string for this switch
 *
 * Allocates and initializes a switch. Will not upload configuration to
 * the switch. For that you need to call tb_switch_configure()
 * separately. The returned switch should be released by calling
 * tb_switch_put().
 *
 * Return: Pointer to the allocated switch or ERR_PTR() in case of
 * failure.
 */
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
				  u64 route)
{
	struct tb_switch *sw;
	int upstream_port;
	int i, ret, depth;

	/* Unlock the downstream port so we can access the switch below */
	if (route) {
		struct tb_switch *parent_sw = tb_to_switch(parent);
		struct tb_port *down;

		down = tb_port_at(route, parent_sw);
		tb_port_unlock(down);
	}

	depth = tb_route_length(route);

	upstream_port = tb_cfg_get_upstream_port(tb->ctl, route);
	if (upstream_port < 0)
		return ERR_PTR(upstream_port);

	sw = kzalloc(sizeof(*sw), GFP_KERNEL);
	if (!sw)
		return ERR_PTR(-ENOMEM);

	sw->tb = tb;
	ret = tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5);
	if (ret)
		goto err_free_sw_ports;

	sw->generation = tb_switch_get_generation(sw);

	tb_dbg(tb, "current switch config:\n");
	tb_dump_switch(tb, sw);

	/* configure switch */
	sw->config.upstream_port_number = upstream_port;
	sw->config.depth = depth;
	sw->config.route_hi = upper_32_bits(route);
	sw->config.route_lo = lower_32_bits(route);
	sw->config.enabled = 0;

	/* Make sure we do not exceed maximum topology limit */
	if (tb_switch_exceeds_max_depth(sw, depth)) {
		ret = -EADDRNOTAVAIL;
		goto err_free_sw_ports;
	}

	/* initialize ports */
	sw->ports = kcalloc(sw->config.max_port_number + 1, sizeof(*sw->ports),
			    GFP_KERNEL);
	if (!sw->ports) {
		ret = -ENOMEM;
		goto err_free_sw_ports;
	}

	for (i = 0; i <= sw->config.max_port_number; i++) {
		/* minimum setup for tb_find_cap and tb_drom_read to work */
		sw->ports[i].sw = sw;
		sw->ports[i].port = i;

		/* Control port does not need HopID allocation */
		if (i) {
			ida_init(&sw->ports[i].in_hopids);
			ida_init(&sw->ports[i].out_hopids);
		}
	}

	ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
	if (ret > 0)
		sw->cap_plug_events = ret;

	ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
	if (ret > 0)
		sw->cap_lc = ret;

	/* Root switch is always authorized */
	if (!route)
		sw->authorized = true;

	device_initialize(&sw->dev);
	sw->dev.parent = parent;
	sw->dev.bus = &tb_bus_type;
	sw->dev.type = &tb_switch_type;
	sw->dev.groups = switch_groups;
	dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw));

	return sw;

err_free_sw_ports:
	kfree(sw->ports);
	kfree(sw);

	return ERR_PTR(ret);
}
/**
 * tb_switch_alloc_safe_mode() - allocate a switch that is in safe mode
 * @tb: Pointer to the owning domain
 * @parent: Parent device for this switch
 * @route: Route string for this switch
 *
 * This creates a switch in safe mode. This means the switch pretty much
 * lacks all capabilities except DMA configuration port before it is
 * flashed with a valid NVM firmware.
 *
 * The returned switch must be released by calling tb_switch_put().
 *
 * Return: Pointer to the allocated switch or ERR_PTR() in case of failure
 */
struct tb_switch *
tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
{
	struct tb_switch *sw;

	sw = kzalloc(sizeof(*sw), GFP_KERNEL);
	if (!sw)
		return ERR_PTR(-ENOMEM);

	sw->tb = tb;
	sw->config.depth = tb_route_length(route);
	sw->config.route_hi = upper_32_bits(route);
	sw->config.route_lo = lower_32_bits(route);
	sw->safe_mode = true;

	device_initialize(&sw->dev);
	sw->dev.parent = parent;
	sw->dev.bus = &tb_bus_type;
	sw->dev.type = &tb_switch_type;
	sw->dev.groups = switch_groups;
	dev_set_name(&sw->dev, "%u-%llx", tb->index, tb_route(sw));

	return sw;
}
/**
 * tb_switch_configure() - Uploads configuration to the switch
 * @sw: Switch to configure
 *
 * Call this function before the switch is added to the system. It will
 * upload configuration to the switch and makes it available for the
 * connection manager to use. Can be called for the switch again after
 * resume from low power states to re-initialize it.
 *
 * Return: %0 in case of success and negative errno in case of failure
 */
int tb_switch_configure(struct tb_switch *sw)
{
	struct tb *tb = sw->tb;
	u64 route;
	int ret;

	route = tb_route(sw);

	tb_dbg(tb, "%s Switch at %#llx (depth: %d, up port: %d)\n",
	       sw->config.enabled ? "restoring" : "initializing", route,
	       tb_route_length(route), sw->config.upstream_port_number);

	sw->config.enabled = 1;

	if (tb_switch_is_usb4(sw)) {
		/*
		 * For USB4 devices, we need to program the CM version
		 * accordingly so that it knows to expose all the
		 * additional capabilities.
		 */
		sw->config.cmuv = USB4_VERSION_1_0;

		/* Enumerate the switch */
		ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
				  ROUTER_CS_1, 4);
		if (ret)
			return ret;

		ret = usb4_switch_setup(sw);
	} else {
		if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL)
			tb_sw_warn(sw, "unknown switch vendor id %#x\n",
				   sw->config.vendor_id);

		if (!sw->cap_plug_events) {
			tb_sw_warn(sw, "cannot find TB_VSE_CAP_PLUG_EVENTS aborting\n");
			return -ENODEV;
		}

		/* Enumerate the switch */
		ret = tb_sw_write(sw, (u32 *)&sw->config + 1, TB_CFG_SWITCH,
				  ROUTER_CS_1, 3);
	}
	if (ret)
		return ret;

	return tb_plug_events_active(sw, true);
}
static int tb_switch_set_uuid(struct tb_switch *sw)
{
	bool uid = false;
	u32 uuid[4];
	int ret;

	if (sw->uuid)
		return 0;

	if (tb_switch_is_usb4(sw)) {
		ret = usb4_switch_read_uid(sw, &sw->uid);
		if (ret)
			return ret;
		uid = true;
	} else {
		/*
		 * The newer controllers include fused UUID as part of
		 * link controller specific registers
		 */
		ret = tb_lc_read_uuid(sw, uuid);
		if (ret) {
			if (ret != -EINVAL)
				return ret;
			uid = true;
		}
	}

	if (uid) {
		/*
		 * ICM generates UUID based on UID and fills the upper
		 * two words with ones. This is not strictly following
		 * UUID format but we want to be compatible with it so
		 * we do the same here.
		 */
		uuid[0] = sw->uid & 0xffffffff;
		uuid[1] = (sw->uid >> 32) & 0xffffffff;
		uuid[2] = 0xffffffff;
		uuid[3] = 0xffffffff;
	}

	sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
	if (!sw->uuid)
		return -ENOMEM;
	return 0;
}
static int tb_switch_add_dma_port(struct tb_switch *sw)
{
	u32 status;
	int ret;

	switch (sw->generation) {
	case 2:
		/* Only root switch can be upgraded */
		if (tb_route(sw))
			return 0;

		fallthrough;
	case 3:
	case 4:
		ret = tb_switch_set_uuid(sw);
		if (ret)
			return ret;
		break;

	default:
		/*
		 * DMA port is the only thing available when the switch
		 * is in safe mode.
		 */
		if (!sw->safe_mode)
			return 0;
		break;
	}

	if (sw->no_nvm_upgrade)
		return 0;

	if (tb_switch_is_usb4(sw)) {
		ret = usb4_switch_nvm_authenticate_status(sw, &status);
		if (ret)
			return ret;

		if (status) {
			tb_sw_info(sw, "switch flash authentication failed\n");
			nvm_set_auth_status(sw, status);
		}

		return 0;
	}

	/* Root switch DMA port requires running firmware */
	if (!tb_route(sw) && !tb_switch_is_icm(sw))
		return 0;

	sw->dma_port = dma_port_alloc(sw);
	if (!sw->dma_port)
		return 0;

	/*
	 * If there is status already set then authentication failed
	 * when the dma_port_flash_update_auth() returned. Power cycling
	 * is not needed (it was done already) so only thing we do here
	 * is to unblock runtime PM of the root port.
	 */
	nvm_get_auth_status(sw, &status);
	if (status) {
		if (!tb_route(sw))
			nvm_authenticate_complete_dma_port(sw);
		return 0;
	}

	/*
	 * Check status of the previous flash authentication. If there
	 * is one we need to power cycle the switch in any case to make
	 * it functional again.
	 */
	ret = dma_port_flash_update_auth_status(sw->dma_port, &status);
	if (ret <= 0)
		return ret;

	/* Now we can allow root port to suspend again */
	if (!tb_route(sw))
		nvm_authenticate_complete_dma_port(sw);

	if (status) {
		tb_sw_info(sw, "switch flash authentication failed\n");
		nvm_set_auth_status(sw, status);
	}

	tb_sw_info(sw, "power cycling the switch now\n");
	dma_port_power_cycle(sw->dma_port);

	/*
	 * We return error here which causes the switch adding failure.
	 * It should appear back after power cycle is complete.
	 */
	return -ESHUTDOWN;
}
static void tb_switch_default_link_ports(struct tb_switch *sw)
{
	int i;

	for (i = 1; i <= sw->config.max_port_number; i += 2) {
		struct tb_port *port = &sw->ports[i];
		struct tb_port *subordinate;

		if (!tb_port_is_null(port))
			continue;

		/* Check for the subordinate port */
		if (i == sw->config.max_port_number ||
		    !tb_port_is_null(&sw->ports[i + 1]))
			continue;

		/* Link them if not already done so (by DROM) */
		subordinate = &sw->ports[i + 1];
		if (!port->dual_link_port && !subordinate->dual_link_port) {
			port->link_nr = 0;
			port->dual_link_port = subordinate;
			subordinate->link_nr = 1;
			subordinate->dual_link_port = port;

			tb_sw_dbg(sw, "linked ports %d <-> %d\n",
				  port->port, subordinate->port);
		}
	}
}
static bool tb_switch_lane_bonding_possible(struct tb_switch *sw)
{
	const struct tb_port *up = tb_upstream_port(sw);

	if (!up->dual_link_port || !up->dual_link_port->remote)
		return false;

	if (tb_switch_is_usb4(sw))
		return usb4_switch_lane_bonding_possible(sw);
	return tb_lc_lane_bonding_possible(sw);
}
static int tb_switch_update_link_attributes(struct tb_switch *sw)
{
	struct tb_port *up;
	bool change = false;
	int ret;

	if (!tb_route(sw) || tb_switch_is_icm(sw))
		return 0;

	up = tb_upstream_port(sw);

	ret = tb_port_get_link_speed(up);
	if (ret < 0)
		return ret;
	if (sw->link_speed != ret)
		change = true;
	sw->link_speed = ret;

	ret = tb_port_get_link_width(up);
	if (ret < 0)
		return ret;
	if (sw->link_width != ret)
		change = true;
	sw->link_width = ret;

	/* Notify userspace that there is possible link attribute change */
	if (device_is_registered(&sw->dev) && change)
		kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);

	return 0;
}
/**
 * tb_switch_lane_bonding_enable() - Enable lane bonding
 * @sw: Switch to enable lane bonding
 *
 * Connection manager can call this function to enable lane bonding of a
 * switch. If conditions are correct and both switches support the feature,
 * lanes are bonded. It is safe to call this to any switch.
 */
int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
	struct tb_port *up, *down;
	u64 route = tb_route(sw);
	int ret;

	if (!route)
		return 0;

	if (!tb_switch_lane_bonding_possible(sw))
		return 0;

	up = tb_upstream_port(sw);
	down = tb_port_at(route, parent);

	if (!tb_port_is_width_supported(up, 2) ||
	    !tb_port_is_width_supported(down, 2))
		return 0;

	ret = tb_port_lane_bonding_enable(up);
	if (ret) {
		tb_port_warn(up, "failed to enable lane bonding\n");
		return ret;
	}

	ret = tb_port_lane_bonding_enable(down);
	if (ret) {
		tb_port_warn(down, "failed to enable lane bonding\n");
		tb_port_lane_bonding_disable(up);
		return ret;
	}

	tb_switch_update_link_attributes(sw);

	tb_sw_dbg(sw, "lane bonding enabled\n");
	return ret;
}
/**
 * tb_switch_lane_bonding_disable() - Disable lane bonding
 * @sw: Switch whose lane bonding to disable
 *
 * Disables lane bonding between @sw and parent. This can be called even
 * if lanes were not bonded originally.
 */
void tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
	struct tb_port *up, *down;

	if (!tb_route(sw))
		return;

	up = tb_upstream_port(sw);
	if (!up->bonded)
		return;

	down = tb_port_at(tb_route(sw), parent);

	tb_port_lane_bonding_disable(up);
	tb_port_lane_bonding_disable(down);

	tb_switch_update_link_attributes(sw);
	tb_sw_dbg(sw, "lane bonding disabled\n");
}
/**
 * tb_switch_configure_link() - Set link configured
 * @sw: Switch whose link is configured
 *
 * Sets the link upstream from @sw configured (from both ends) so that
 * it will not be disconnected when the domain exits sleep. Can be
 * called for any switch.
 *
 * It is recommended that this is called after lane bonding is enabled.
 *
 * Returns %0 on success and negative errno in case of error.
 */
int tb_switch_configure_link(struct tb_switch *sw)
{
	struct tb_port *up, *down;
	int ret;

	if (!tb_route(sw) || tb_switch_is_icm(sw))
		return 0;

	up = tb_upstream_port(sw);
	if (tb_switch_is_usb4(up->sw))
		ret = usb4_port_configure(up);
	else
		ret = tb_lc_configure_port(up);
	if (ret)
		return ret;

	down = up->remote;
	if (tb_switch_is_usb4(down->sw))
		return usb4_port_configure(down);
	return tb_lc_configure_port(down);
}
/**
 * tb_switch_unconfigure_link() - Unconfigure link
 * @sw: Switch whose link is unconfigured
 *
 * Sets the link unconfigured so the @sw will be disconnected if the
 * domain exits sleep.
 */
void tb_switch_unconfigure_link(struct tb_switch *sw)
{
	struct tb_port *up, *down;

	if (sw->is_unplugged)
		return;
	if (!tb_route(sw) || tb_switch_is_icm(sw))
		return;

	up = tb_upstream_port(sw);
	if (tb_switch_is_usb4(up->sw))
		usb4_port_unconfigure(up);
	else
		tb_lc_unconfigure_port(up);

	down = up->remote;
	if (tb_switch_is_usb4(down->sw))
		usb4_port_unconfigure(down);
	else
		tb_lc_unconfigure_port(down);
}
/**
 * tb_switch_add() - Add a switch to the domain
 * @sw: Switch to add
 *
 * This is the last step in adding switch to the domain. It will read
 * identification information from DROM and initializes ports so that
 * they can be used to connect other switches. The switch will be
 * exposed to the userspace when this function successfully returns. To
 * remove and release the switch, call tb_switch_remove().
 *
 * Return: %0 in case of success and negative errno in case of failure
 */
int tb_switch_add(struct tb_switch *sw)
{
	int i, ret;

	/*
	 * Initialize DMA control port now before we read DROM. Recent
	 * host controllers have more complete DROM on NVM that includes
	 * vendor and model identification strings which we then expose
	 * to the userspace. NVM can be accessed through DMA
	 * configuration based mailbox.
	 */
	ret = tb_switch_add_dma_port(sw);
	if (ret) {
		dev_err(&sw->dev, "failed to add DMA port\n");
		return ret;
	}

	if (!sw->safe_mode) {
		/* read drom */
		ret = tb_drom_read(sw);
		if (ret) {
			dev_err(&sw->dev, "reading DROM failed\n");
			return ret;
		}
		tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);

		tb_check_quirks(sw);

		ret = tb_switch_set_uuid(sw);
		if (ret) {
			dev_err(&sw->dev, "failed to set UUID\n");
			return ret;
		}

		for (i = 0; i <= sw->config.max_port_number; i++) {
			if (sw->ports[i].disabled) {
				tb_port_dbg(&sw->ports[i], "disabled by eeprom\n");
				continue;
			}
			ret = tb_init_port(&sw->ports[i]);
			if (ret) {
				dev_err(&sw->dev, "failed to initialize port %d\n", i);
				return ret;
			}
		}

		tb_switch_default_link_ports(sw);

		ret = tb_switch_update_link_attributes(sw);
		if (ret)
			return ret;

		ret = tb_switch_tmu_init(sw);
		if (ret)
			return ret;
	}

	ret = device_add(&sw->dev);
	if (ret) {
		dev_err(&sw->dev, "failed to add device: %d\n", ret);
		return ret;
	}

	if (tb_route(sw)) {
		dev_info(&sw->dev, "new device found, vendor=%#x device=%#x\n",
			 sw->vendor, sw->device);
		if (sw->vendor_name && sw->device_name)
			dev_info(&sw->dev, "%s %s\n", sw->vendor_name,
				 sw->device_name);
	}

	ret = tb_switch_nvm_add(sw);
	if (ret) {
		dev_err(&sw->dev, "failed to add NVM devices\n");
		device_del(&sw->dev);
		return ret;
	}

	/*
	 * Thunderbolt routers do not generate wakeups themselves but
	 * they forward wakeups from tunneled protocols, so enable it
	 * here.
	 */
	device_init_wakeup(&sw->dev, true);

	pm_runtime_set_active(&sw->dev);
	if (sw->rpm) {
		pm_runtime_set_autosuspend_delay(&sw->dev, TB_AUTOSUSPEND_DELAY);
		pm_runtime_use_autosuspend(&sw->dev);
		pm_runtime_mark_last_busy(&sw->dev);
		pm_runtime_enable(&sw->dev);
		pm_request_autosuspend(&sw->dev);
	}

	tb_switch_debugfs_init(sw);
	return 0;
}
/**
 * tb_switch_remove() - Remove and release a switch
 * @sw: Switch to remove
 *
 * This will remove the switch from the domain and release it after last
 * reference count drops to zero. If there are switches connected below
 * this switch, they will be removed as well.
 */
void tb_switch_remove(struct tb_switch *sw)
{
	struct tb_port *port;

	tb_switch_debugfs_remove(sw);

	if (sw->rpm) {
		pm_runtime_get_sync(&sw->dev);
		pm_runtime_disable(&sw->dev);
	}

	/* port 0 is the switch itself and never has a remote */
	tb_switch_for_each_port(sw, port) {
		if (tb_port_has_remote(port)) {
			tb_switch_remove(port->remote->sw);
			port->remote = NULL;
		} else if (port->xdomain) {
			tb_xdomain_remove(port->xdomain);
			port->xdomain = NULL;
		}

		/* Remove any downstream retimers */
		tb_retimer_remove_all(port);
	}

	if (!sw->is_unplugged)
		tb_plug_events_active(sw, false);

	tb_switch_nvm_remove(sw);

	if (tb_route(sw))
		dev_info(&sw->dev, "device disconnected\n");
	device_unregister(&sw->dev);
}
/**
 * tb_sw_set_unplugged() - set is_unplugged on switch and downstream switches
 * @sw: Router to mark unplugged
 */
void tb_sw_set_unplugged(struct tb_switch *sw)
{
	struct tb_port *port;

	if (sw == sw->tb->root_switch) {
		tb_sw_WARN(sw, "cannot unplug root switch\n");
		return;
	}
	if (sw->is_unplugged) {
		tb_sw_WARN(sw, "is_unplugged already set\n");
		return;
	}
	sw->is_unplugged = true;
	tb_switch_for_each_port(sw, port) {
		if (tb_port_has_remote(port))
			tb_sw_set_unplugged(port->remote->sw);
		else if (port->xdomain)
			port->xdomain->is_unplugged = true;
	}
}
static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags)
{
	if (flags)
		tb_sw_dbg(sw, "enabling wakeup: %#x\n", flags);
	else
		tb_sw_dbg(sw, "disabling wakeup\n");

	if (tb_switch_is_usb4(sw))
		return usb4_switch_set_wake(sw, flags);
	return tb_lc_set_wake(sw, flags);
}
int tb_switch_resume(struct tb_switch *sw)
{
	struct tb_port *port;
	int err;

	tb_sw_dbg(sw, "resuming switch\n");

	/*
	 * Check for UID of the connected switches except for root
	 * switch which we assume cannot be removed.
	 */
	if (tb_route(sw)) {
		u64 uid;

		/*
		 * Check first that we can still read the switch config
		 * space. It may be that there is now another domain
		 * connected.
		 */
		err = tb_cfg_get_upstream_port(sw->tb->ctl, tb_route(sw));
		if (err < 0) {
			tb_sw_info(sw, "switch not present anymore\n");
			return err;
		}

		if (tb_switch_is_usb4(sw))
			err = usb4_switch_read_uid(sw, &uid);
		else
			err = tb_drom_read_uid_only(sw, &uid);
		if (err) {
			tb_sw_warn(sw, "uid read failed\n");
			return err;
		}
		if (sw->uid != uid) {
			tb_sw_info(sw,
				"changed while suspended (uid %#llx -> %#llx)\n",
				sw->uid, uid);
			return -ENODEV;
		}
	}

	err = tb_switch_configure(sw);
	if (err)
		return err;

	/* Disable wakes */
	tb_switch_set_wake(sw, 0);

	err = tb_switch_tmu_init(sw);
	if (err)
		return err;

	/* check for surviving downstream switches */
	tb_switch_for_each_port(sw, port) {
		if (!tb_port_has_remote(port) && !port->xdomain) {
			/*
			 * For disconnected downstream lane adapters
			 * start lane initialization now so we detect
			 * future connects.
			 */
			if (!tb_is_upstream_port(port) && tb_port_is_null(port))
				tb_port_start_lane_initialization(port);
			continue;
		} else if (port->xdomain) {
			/*
			 * Start lane initialization for XDomain so the
			 * link gets re-established.
			 */
			tb_port_start_lane_initialization(port);
		}

		if (tb_wait_for_port(port, true) <= 0) {
			tb_port_warn(port,
				     "lost during suspend, disconnecting\n");
			if (tb_port_has_remote(port))
				tb_sw_set_unplugged(port->remote->sw);
			else if (port->xdomain)
				port->xdomain->is_unplugged = true;
		} else if (tb_port_has_remote(port) || port->xdomain) {
			/*
			 * Always unlock the port so the downstream
			 * switch/domain is accessible.
			 */
			if (tb_port_unlock(port))
				tb_port_warn(port, "failed to unlock port\n");
			if (port->remote && tb_switch_resume(port->remote->sw)) {
				tb_port_warn(port,
					     "lost during suspend, disconnecting\n");
				tb_sw_set_unplugged(port->remote->sw);
			}
		}
	}
	return 0;
}
/**
 * tb_switch_suspend() - Put a switch to sleep
 * @sw: Switch to suspend
 * @runtime: Is this runtime suspend or system sleep
 *
 * Suspends router and all its children. Enables wakes according to
 * value of @runtime and then sets sleep bit for the router. If @sw is
 * host router the domain is ready to go to sleep once this function
 * returns.
 */
void tb_switch_suspend(struct tb_switch *sw, bool runtime)
{
	unsigned int flags = 0;
	struct tb_port *port;
	int err;

	tb_sw_dbg(sw, "suspending switch\n");

	err = tb_plug_events_active(sw, false);
	if (err)
		return;

	tb_switch_for_each_port(sw, port) {
		if (tb_port_has_remote(port))
			tb_switch_suspend(port->remote->sw, runtime);
	}

	if (runtime) {
		/* Trigger wake when something is plugged in/out */
		flags |= TB_WAKE_ON_CONNECT | TB_WAKE_ON_DISCONNECT;
		flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
	} else if (device_may_wakeup(&sw->dev)) {
		flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE;
	}

	tb_switch_set_wake(sw, flags);

	if (tb_switch_is_usb4(sw))
		usb4_switch_set_sleep(sw);
	else
		tb_lc_set_sleep(sw);
}
/**
* tb_switch_query_dp_resource ( ) - Query availability of DP resource
* @ sw : Switch whose DP resource is queried
* @ in : DP IN port
*
* Queries availability of DP resource for DP tunneling using switch
* specific means . Returns % true if resource is available .
*/
bool tb_switch_query_dp_resource ( struct tb_switch * sw , struct tb_port * in )
{
2019-12-17 15:33:40 +03:00
if ( tb_switch_is_usb4 ( sw ) )
return usb4_switch_query_dp_resource ( sw , in ) ;
2019-03-26 15:52:30 +03:00
return tb_lc_dp_sink_query ( sw , in ) ;
}
/**
 * tb_switch_alloc_dp_resource() - Allocate available DP resource
 * @sw: Switch whose DP resource is allocated
 * @in: DP IN port
 *
 * Allocates DP resource for DP tunneling. The resource must be
 * available for this to succeed (see tb_switch_query_dp_resource()).
 * Returns %0 on success and negative errno otherwise.
 */
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
	if (tb_switch_is_usb4(sw))
		return usb4_switch_alloc_dp_resource(sw, in);
	return tb_lc_dp_sink_alloc(sw, in);
}

/**
 * tb_switch_dealloc_dp_resource() - De-allocate DP resource
 * @sw: Switch whose DP resource is de-allocated
 * @in: DP IN port
 *
 * De-allocates DP resource that was previously allocated for DP
 * tunneling.
 */
void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
	int ret;

	if (tb_switch_is_usb4(sw))
		ret = usb4_switch_dealloc_dp_resource(sw, in);
	else
		ret = tb_lc_dp_sink_dealloc(sw, in);
	if (ret)
		tb_sw_warn(sw, "failed to de-allocate DP resource for port %d\n",
			   in->port);
}

struct tb_sw_lookup {
	struct tb *tb;
	u8 link;
	u8 depth;
	const uuid_t *uuid;
	u64 route;
};

static int tb_switch_match(struct device *dev, const void *data)
{
	struct tb_switch *sw = tb_to_switch(dev);
	const struct tb_sw_lookup *lookup = data;

	if (!sw)
		return 0;
	if (sw->tb != lookup->tb)
		return 0;

	if (lookup->uuid)
		return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid));

	if (lookup->route) {
		return sw->config.route_lo == lower_32_bits(lookup->route) &&
		       sw->config.route_hi == upper_32_bits(lookup->route);
	}

	/* Root switch is matched only by depth */
	if (!lookup->depth)
		return !sw->depth;

	return sw->link == lookup->link && sw->depth == lookup->depth;
}

/**
 * tb_switch_find_by_link_depth() - Find switch by link and depth
 * @tb: Domain the switch belongs to
 * @link: Link number the switch is connected to
 * @depth: Depth of the switch in link
 *
 * Returned switch has reference count increased so the caller needs to
 * call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.link = link;
	lookup.depth = depth;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}

/**
 * tb_switch_find_by_uuid() - Find switch by UUID
 * @tb: Domain the switch belongs to
 * @uuid: UUID to look for
 *
 * Returned switch has reference count increased so the caller needs to
 * call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.uuid = uuid;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}

/**
 * tb_switch_find_by_route() - Find switch by route string
 * @tb: Domain the switch belongs to
 * @route: Route string to look for
 *
 * Returned switch has reference count increased so the caller needs to
 * call tb_switch_put() when done with the switch.
 */
struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route)
{
	struct tb_sw_lookup lookup;
	struct device *dev;

	if (!route)
		return tb_switch_get(tb->root_switch);

	memset(&lookup, 0, sizeof(lookup));
	lookup.tb = tb;
	lookup.route = route;

	dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
	if (dev)
		return tb_to_switch(dev);

	return NULL;
}
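
/*
 * Illustrative usage sketch, not part of the driver: the kernel-doc above
 * notes that a successful lookup raises the reference count of the
 * returned switch, so every hit must be balanced with tb_switch_put():
 *
 *	struct tb_switch *sw;
 *
 *	sw = tb_switch_find_by_route(tb, route);
 *	if (sw) {
 *		... access sw safely here ...
 *		tb_switch_put(sw);
 *	}
 */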

/**
 * tb_switch_find_port() - return the first port of @type on @sw or NULL
 * @sw: Switch to find the port from
 * @type: Port type to look for
 */
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
				    enum tb_port_type type)
{
	struct tb_port *port;

	tb_switch_for_each_port(sw, port) {
		if (port->config.type == type)
			return port;
	}

	return NULL;
}
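
/*
 * Illustrative usage sketch, not part of the driver: assuming the port
 * type constants defined elsewhere in the driver (e.g. TB_TYPE_USB3_DOWN),
 * the first matching adapter of a switch could be looked up as:
 *
 *	struct tb_port *down;
 *
 *	down = tb_switch_find_port(sw, TB_TYPE_USB3_DOWN);
 *	if (down)
 *		... use the port ...
 */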