/* SPDX-License-Identifier: GPL-2.0 */
/*
 * System Control and Management Interface (SCMI) Message Protocol
 * driver common header file containing some definitions, structures
 * and function prototypes used in all the different SCMI protocols.
 *
 * Copyright (C) 2018-2022 ARM Ltd.
 */

#ifndef _SCMI_COMMON_H
#define _SCMI_COMMON_H

#include <linux/bitfield.h>
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/hashtable.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/refcount.h>
#include <linux/scmi_protocol.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#include <asm/unaligned.h>

#include "protocols.h"
#include "notify.h"

#define MSG_ID_MASK		GENMASK(7, 0)
#define MSG_XTRACT_ID(hdr)	FIELD_GET(MSG_ID_MASK, (hdr))
#define MSG_TYPE_MASK		GENMASK(9, 8)
#define MSG_XTRACT_TYPE(hdr)	FIELD_GET(MSG_TYPE_MASK, (hdr))
#define MSG_TYPE_COMMAND	0
#define MSG_TYPE_DELAYED_RESP	2
#define MSG_TYPE_NOTIFICATION	3
#define MSG_PROTOCOL_ID_MASK	GENMASK(17, 10)
#define MSG_XTRACT_PROT_ID(hdr)	FIELD_GET(MSG_PROTOCOL_ID_MASK, (hdr))
#define MSG_TOKEN_ID_MASK	GENMASK(27, 18)
#define MSG_XTRACT_TOKEN(hdr)	FIELD_GET(MSG_TOKEN_ID_MASK, (hdr))
#define MSG_TOKEN_MAX		(MSG_XTRACT_TOKEN(MSG_TOKEN_ID_MASK) + 1)
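
/*
 * A worked example of the layout above (illustrative only): a received
 * header word of 0x000C4013 decodes with the helpers as
 *
 *	MSG_XTRACT_ID(0x000C4013)	== 0x13	message id
 *	MSG_XTRACT_TYPE(0x000C4013)	== 0	MSG_TYPE_COMMAND
 *	MSG_XTRACT_PROT_ID(0x000C4013)	== 0x10	protocol id (Base protocol)
 *	MSG_XTRACT_TOKEN(0x000C4013)	== 0x3	sequence number token
 */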

/*
 * Tokens are sequence numbers embedded in each SCMI message header: they
 * are used to correlate commands with responses (and delayed responses),
 * but their usage and selection policy is entirely up to the caller
 * (usually the OSPM agent), while they are completely opaque to the
 * callee (i.e. the SCMI platform), which merely copies them back from the
 * command into the response message header.
 * This also means that the platform does not, cannot and should not
 * enforce any kind of policy on received messages depending on the
 * contained sequence number: the platform can perfectly well handle
 * concurrent requests carrying the same identifying token if that should
 * happen. Moreover, the platform is not required to produce in-order
 * responses to agent requests; the only constraint in this regard is
 * that, for an asynchronous command, the delayed response must be sent
 * after the immediate response for the synchronous part of the
 * transaction.
 * Since no state is shared between the agent and the platform to let the
 * platform know when a token and its associated message have timed out,
 * an early-reuse token policy can easily lead to a spurious or late
 * received response (or delayed response), related to an old stale and
 * timed-out transaction, being wrongly associated with a newer valid
 * in-flight xfer that just happens to have reused the same token: such
 * misbehaviour is more easily exposed on transports that naturally
 * process multiple concurrent in-flight messages with a higher level of
 * parallelism.
 * For this reason each new command transfer gets the next available,
 * monotonically increasing token, until tokens are exhausted and the
 * counter rolls over. Tokens are thereby reused as late as possible,
 * which makes late/spurious responses to stale timed-out transactions
 * much easier to identify, and also simplifies the specific transport
 * implementations, since stale transport messages can be identified and
 * discarded early in the RX path without cross-checking their actual
 * state with the core transport layer. This mitigation is even more
 * effective when, as is usually the case, the maximum number of pending
 * messages is capped by the platform to a much lower number than the
 * whole possible range of token values (2^10).
 */
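
/*
 * A minimal sketch, NOT the actual implementation, of how such a
 * monotonically increasing token could be picked, assuming a hypothetical
 * @token_map bitmap of MSG_TOKEN_MAX bits and a @transfer_last_id counter
 * held in some xfers-tracking structure:
 *
 *	u16 next = find_next_zero_bit(token_map, MSG_TOKEN_MAX,
 *				      transfer_last_id & (MSG_TOKEN_MAX - 1));
 *	if (next >= MSG_TOKEN_MAX)	// exhausted upwards: roll over
 *		next = find_first_zero_bit(token_map, MSG_TOKEN_MAX);
 *	set_bit(next, token_map);	// mark token as in-flight
 *	transfer_last_id = next + 1;
 */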

/*
 * Size of @pending_xfers hashtable included in @scmi_xfers_info; ideally,
 * in order to minimize space and collisions, this should equal max_msg,
 * i.e. the maximum number of in-flight messages on a specific platform,
 * but such a value is only available at runtime while kernel hashtables
 * are statically sized: pick instead, as a fixed static size, the maximum
 * number of entries that can fit the whole table into one 4k page.
 */
#define SCMI_PENDING_XFERS_HT_ORDER_SZ		9
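
/*
 * Sanity check of the above choice: order 9 gives 2^9 = 512 buckets of
 * struct hlist_head (one pointer, i.e. 8 bytes on 64-bit), and
 * 512 * 8 = 4096 bytes, exactly one 4k page.
 */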
2020-01-31 10:58:12 +05:30
/**
* pack_scmi_header ( ) - packs and returns 32 - bit header
*
* @ hdr : pointer to header containing all the information on message id ,
2021-08-03 14:10:10 +01:00
* protocol id , sequence id and type .
2020-01-31 10:58:12 +05:30
*
* Return : 32 - bit packed message header to be sent to the platform .
*/
static inline u32 pack_scmi_header ( struct scmi_msg_hdr * hdr )
{
return FIELD_PREP ( MSG_ID_MASK , hdr - > id ) |
2021-08-03 14:10:10 +01:00
FIELD_PREP ( MSG_TYPE_MASK , hdr - > type ) |
2020-01-31 10:58:12 +05:30
FIELD_PREP ( MSG_TOKEN_ID_MASK , hdr - > seq ) |
FIELD_PREP ( MSG_PROTOCOL_ID_MASK , hdr - > protocol_id ) ;
}
/**
* unpack_scmi_header ( ) - unpacks and records message and protocol id
*
* @ msg_hdr : 32 - bit packed message header sent from the platform
* @ hdr : pointer to header to fetch message and protocol id .
*/
static inline void unpack_scmi_header ( u32 msg_hdr , struct scmi_msg_hdr * hdr )
{
hdr - > id = MSG_XTRACT_ID ( msg_hdr ) ;
hdr - > protocol_id = MSG_XTRACT_PROT_ID ( msg_hdr ) ;
2021-08-03 14:10:10 +01:00
hdr - > type = MSG_XTRACT_TYPE ( msg_hdr ) ;
2020-01-31 10:58:12 +05:30
}
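
/*
 * Usage sketch (hypothetical surrounding code): a transport packs the
 * header once when preparing a command for transmission and unpacks it
 * again when a message is received:
 *
 *	u32 wire_hdr = pack_scmi_header(&xfer->hdr);	// TX path
 *	...
 *	unpack_scmi_header(wire_hdr, &xfer->hdr);	// RX path
 */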

/*
 * A helper macro to lookup an xfer from the @pending_xfers hashtable
 * using the message sequence number token as a key.
 */
#define XFER_FIND(__ht, __k)					\
({								\
	typeof(__k) k_ = __k;					\
	struct scmi_xfer *xfer_ = NULL;				\
								\
	hash_for_each_possible((__ht), xfer_, node, k_)		\
		if (xfer_->hdr.seq == k_)			\
			break;					\
	xfer_;							\
})
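
/*
 * Usage sketch, assuming a hypothetical tracking structure with a
 * DECLARE_HASHTABLE(pending_xfers, SCMI_PENDING_XFERS_HT_ORDER_SZ) member
 * and a token freshly extracted from a received header:
 *
 *	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
 *	struct scmi_xfer *xfer = XFER_FIND(minfo->pending_xfers, xfer_id);
 *
 *	if (!xfer)	// no matching in-flight xfer: stale or spurious
 *		return;
 */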

struct scmi_revision_info *
scmi_revision_area_get(const struct scmi_protocol_handle *ph);
int scmi_handle_put(const struct scmi_handle *handle);
void scmi_device_link_add(struct device *consumer, struct device *supplier);
struct scmi_handle *scmi_handle_get(struct device *dev);
void scmi_set_handle(struct scmi_device *scmi_dev);
void scmi_setup_protocol_implemented(const struct scmi_protocol_handle *ph,
				     u8 *prot_imp);

int __init scmi_bus_init(void);
void __exit scmi_bus_exit(void);

const struct scmi_protocol *scmi_protocol_get(int protocol_id);
void scmi_protocol_put(int protocol_id);

int scmi_protocol_acquire(const struct scmi_handle *handle, u8 protocol_id);
void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);

/* SCMI Transport */
/**
 * struct scmi_chan_info - Structure representing a SCMI channel information
 *
 * @dev: Reference to device in the SCMI hierarchy corresponding to this
 *	 channel
 * @rx_timeout_ms: The configured RX timeout in milliseconds.
 * @handle: Pointer to SCMI entity handle
 * @no_completion_irq: Flag to indicate that this channel has no completion
 *		       interrupt mechanism for synchronous commands.
 *		       This can be dynamically set by transports at run-time
 *		       inside their provided .chan_setup().
 * @transport_info: Transport layer related information
 */
struct scmi_chan_info {
	struct device *dev;
	unsigned int rx_timeout_ms;
	struct scmi_handle *handle;
	bool no_completion_irq;
	void *transport_info;
};

/**
 * struct scmi_transport_ops - Structure representing a SCMI transport ops
 *
 * @link_supplier: Optional callback to add link to a supplier device
 * @chan_available: Callback to check if channel is available or not
 * @chan_setup: Callback to allocate and setup a channel
 * @chan_free: Callback to free a channel
 * @get_max_msg: Optional callback to provide max_msg dynamically
 *		 Returns the maximum number of messages for the channel type
 *		 (tx or rx) that can be pending simultaneously in the system
 * @send_message: Callback to send a message
 * @mark_txdone: Callback to mark tx as done
 * @fetch_response: Callback to fetch response
 * @fetch_notification: Callback to fetch notification
 * @clear_channel: Callback to clear a channel
 * @poll_done: Callback to poll transfer status
 */
struct scmi_transport_ops {
	int (*link_supplier)(struct device *dev);
	bool (*chan_available)(struct device_node *of_node, int idx);
	int (*chan_setup)(struct scmi_chan_info *cinfo, struct device *dev,
			  bool tx);
	int (*chan_free)(int id, void *p, void *data);
	unsigned int (*get_max_msg)(struct scmi_chan_info *base_cinfo);
	int (*send_message)(struct scmi_chan_info *cinfo,
			    struct scmi_xfer *xfer);
	void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret,
			    struct scmi_xfer *xfer);
	void (*fetch_response)(struct scmi_chan_info *cinfo,
			       struct scmi_xfer *xfer);
	void (*fetch_notification)(struct scmi_chan_info *cinfo,
				   size_t max_len, struct scmi_xfer *xfer);
	void (*clear_channel)(struct scmi_chan_info *cinfo);
	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
};
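
/*
 * A hypothetical transport would wire up its ops roughly as sketched
 * below (names are illustrative only; optional callbacks such as
 * .link_supplier and .get_max_msg may simply be left unset):
 *
 *	static const struct scmi_transport_ops scmi_foo_ops = {
 *		.chan_available = foo_chan_available,
 *		.chan_setup = foo_chan_setup,
 *		.chan_free = foo_chan_free,
 *		.send_message = foo_send_message,
 *		.mark_txdone = foo_mark_txdone,
 *		.fetch_response = foo_fetch_response,
 *		.clear_channel = foo_clear_channel,
 *		.poll_done = foo_poll_done,
 *	};
 */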

int scmi_protocol_device_request(const struct scmi_device_id *id_table);
void scmi_protocol_device_unrequest(const struct scmi_device_id *id_table);
struct scmi_device *scmi_child_dev_find(struct device *parent,
					int prot_id, const char *name);

/**
 * struct scmi_desc - Description of SoC integration
 *
 * @transport_init: An optional function that a transport can provide to
 *		    initialize some transport-specific setup during SCMI core
 *		    initialization, so ahead of SCMI core probing.
 * @transport_exit: An optional function that a transport can provide to
 *		    de-initialize some transport-specific setup during SCMI
 *		    core de-initialization, so after SCMI core removal.
 * @ops: Pointer to the transport specific ops structure
 * @max_rx_timeout_ms: Timeout for communication with SoC (in milliseconds)
 * @max_msg: Maximum number of messages for a channel type (tx or rx) that can
 *	be pending simultaneously in the system. May be overridden by the
 *	get_max_msg op.
 * @max_msg_size: Maximum size of data per message that can be handled.
 * @force_polling: Flag to force this whole transport to use the SCMI core
 *		   polling mechanism instead of completion interrupts, even
 *		   if available.
 * @sync_cmds_completed_on_ret: Flag to indicate that the transport assures
 *				synchronous-command messages are atomically
 *				completed on .send_message: no need to poll
 *				actively waiting for a response.
 *				Used by core internally only when polling is
 *				selected as the waiting-for-reply method: i.e.
 *				if a completion irq was found, use that anyway.
 * @atomic_enabled: Flag to indicate that this transport, which is assured not
 *		    to sleep anywhere on the TX path, can be used in atomic
 *		    mode when requested.
 */
struct scmi_desc {
	int (*transport_init)(void);
	void (*transport_exit)(void);
	const struct scmi_transport_ops *ops;
	int max_rx_timeout_ms;
	int max_msg;
	int max_msg_size;
	const bool force_polling;
	const bool sync_cmds_completed_on_ret;
	const bool atomic_enabled;
};
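
/*
 * Continuing the hypothetical transport sketched earlier, its descriptor
 * would be defined along these lines (all values illustrative only):
 *
 *	const struct scmi_desc scmi_foo_desc = {
 *		.ops = &scmi_foo_ops,
 *		.max_rx_timeout_ms = 30,
 *		.max_msg = 20,
 *		.max_msg_size = 128,
 *	};
 */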

#ifdef CONFIG_ARM_SCMI_TRANSPORT_MAILBOX
extern const struct scmi_desc scmi_mailbox_desc;
#endif
#ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC
extern const struct scmi_desc scmi_smc_desc;
#endif
#ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO
extern const struct scmi_desc scmi_virtio_desc;
#endif
#ifdef CONFIG_ARM_SCMI_TRANSPORT_OPTEE
extern const struct scmi_desc scmi_optee_desc;
#endif

void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr, void *priv);

/* shmem related declarations */
struct scmi_shared_mem;

void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
		      struct scmi_xfer *xfer, struct scmi_chan_info *cinfo);
u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
			  struct scmi_xfer *xfer);
void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
			      size_t max_len, struct scmi_xfer *xfer);
void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
		     struct scmi_xfer *xfer);

/* declarations for message passing transports */
struct scmi_msg_payld;

/* Maximum overhead of message w.r.t. struct scmi_desc.max_msg_size */
#define SCMI_MSG_MAX_PROT_OVERHEAD	(2 * sizeof(__le32))

size_t msg_response_size(struct scmi_xfer *xfer);
size_t msg_command_size(struct scmi_xfer *xfer);
void msg_tx_prepare(struct scmi_msg_payld *msg, struct scmi_xfer *xfer);
u32 msg_read_header(struct scmi_msg_payld *msg);
void msg_fetch_response(struct scmi_msg_payld *msg, size_t len,
			struct scmi_xfer *xfer);
void msg_fetch_notification(struct scmi_msg_payld *msg, size_t len,
			    size_t max_len, struct scmi_xfer *xfer);

void scmi_notification_instance_data_set(const struct scmi_handle *handle,
					 void *priv);
void *scmi_notification_instance_data_get(const struct scmi_handle *handle);

#endif /* _SCMI_COMMON_H */