/*
 * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.
 * Copyright (c) 2016-2017, Dave Watson <davejwatson@fb.com>. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
#include <linux/module.h>

#include <net/tcp.h>
#include <net/inet_common.h>
#include <linux/highmem.h>
#include <linux/netdevice.h>
#include <linux/sched/signal.h>
#include <linux/inetdevice.h>
#include <linux/inet_diag.h>

#include <net/snmp.h>
#include <net/tls.h>
#include <net/tls_toe.h>
MODULE_AUTHOR("Mellanox Technologies");
MODULE_DESCRIPTION("Transport Layer Security Support");
MODULE_LICENSE("Dual BSD/GPL");
tcp, ulp: add alias for all ulp modules

Let's not turn the TCP ULP lookup into an arbitrary module loader, as
we only intend to load ULP modules through this mechanism, not other
unrelated kernel modules:

  [root@bar]# cat foo.c
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <linux/tcp.h>
  #include <linux/in.h>

  int main(void)
  {
          int sock = socket(PF_INET, SOCK_STREAM, 0);
          setsockopt(sock, IPPROTO_TCP, TCP_ULP, "sctp", sizeof("sctp"));
          return 0;
  }

  [root@bar]# gcc foo.c -O2 -Wall
  [root@bar]# lsmod | grep sctp
  [root@bar]# ./a.out
  [root@bar]# lsmod | grep sctp
  sctp                 1077248  4
  libcrc32c              16384  3 nf_conntrack,nf_nat,sctp
  [root@bar]#

Fix it by adding a module alias to TCP ULP modules, so that probing a
module via request_module() is limited to tcp-ulp-[name]. Existing
modules like kTLS will load fine given the tcp-ulp-tls alias, but
others will fail to load:

  [root@bar]# lsmod | grep sctp
  [root@bar]# ./a.out
  [root@bar]# lsmod | grep sctp
  [root@bar]#

Sockmap is not affected by this since it is either built-in or not.

Fixes: 734942cc4ea6 ("tcp: ULP infrastructure")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
MODULE_ALIAS_TCP_ULP("tls");

enum {
	TLSV4,
	TLSV6,
	TLS_NUM_PROTS,
};

static const struct proto *saved_tcpv6_prot;
static DEFINE_MUTEX(tcpv6_prot_mutex);
static const struct proto *saved_tcpv4_prot;
static DEFINE_MUTEX(tcpv4_prot_mutex);
static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];

tls: RX path for ktls

Add rx path for tls software implementation.

recvmsg, splice_read, and poll implemented.

An additional sockopt TLS_RX is added, with the same interface as
TLS_TX. Either TLS_RX or TLS_TX may be provided separately, or
together (with two different setsockopt calls with appropriate keys).

Control messages are passed via CMSG in a similar way to transmit.
If no cmsg buffer is passed, then only application data records
will be passed to userspace, and EIO is returned for other types of
alerts.

EBADMSG is returned for decryption errors, EMSGSIZE for framing that
is too big, and EBADMSG for framing that is too small (matching
openssl semantics). EINVAL is returned for TLS versions that do not
match the original setsockopt call. All are unrecoverable.

strparser is used to parse TLS framing. Decryption is done directly
into userspace buffers if they are large enough to support it; otherwise
sk_cow_data is called (similar to ipsec), and buffers are decrypted in
place and copied. splice_read always decrypts in place, since no
buffers are provided to decrypt into.

sk_poll is overridden, and only returns POLLIN if a full TLS message is
received. Otherwise we wait for strparser to finish reading a full frame.
Actual decryption is only done during recvmsg or splice_read calls.

Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
static struct proto_ops tls_sw_proto_ops;

static void build_protos(struct proto prot[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
			 const struct proto *base);

void update_sk_prot(struct sock *sk, struct tls_context *ctx)
{
	int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;

	WRITE_ONCE(sk->sk_prot,
		   &tls_prots[ip_ver][ctx->tx_conf][ctx->rx_conf]);
}

int wait_on_pending_writer(struct sock *sk, long *timeo)
{
	int rc = 0;
	DEFINE_WAIT_FUNC(wait, woken_wake_function);

	add_wait_queue(sk_sleep(sk), &wait);
	while (1) {
		if (!*timeo) {
			rc = -EAGAIN;
			break;
		}

		if (signal_pending(current)) {
			rc = sock_intr_errno(*timeo);
			break;
		}

		if (sk_wait_event(sk, timeo, !sk->sk_write_pending, &wait))
			break;
	}
	remove_wait_queue(sk_sleep(sk), &wait);
	return rc;
}

int tls_push_sg(struct sock *sk,
		struct tls_context *ctx,
		struct scatterlist *sg,
		u16 first_offset,
		int flags)
{
	int sendpage_flags = flags | MSG_SENDPAGE_NOTLAST;
	int ret = 0;
	struct page *p;
	size_t size;
	int offset = first_offset;

	size = sg->length - offset;
	offset += sg->offset;

	ctx->in_tcp_sendpages = true;
	while (1) {
		if (sg_is_last(sg))
			sendpage_flags = flags;

		/* is sending application-limited? */
		tcp_rate_check_app_limited(sk);
		p = sg_page(sg);
retry:
		ret = do_tcp_sendpages(sk, p, offset, size, sendpage_flags);

		if (ret != size) {
			if (ret > 0) {
				offset += ret;
				size -= ret;
				goto retry;
			}

			offset -= sg->offset;
			ctx->partially_sent_offset = offset;
			ctx->partially_sent_record = (void *)sg;
			ctx->in_tcp_sendpages = false;
			return ret;
		}

		put_page(p);
		sk_mem_uncharge(sk, sg->length);
		sg = sg_next(sg);
		if (!sg)
			break;

		offset = sg->offset;
		size = sg->length;
	}

	ctx->in_tcp_sendpages = false;

	return 0;
}

static int tls_handle_open_record(struct sock *sk, int flags)
{
	struct tls_context *ctx = tls_get_ctx(sk);

	if (tls_is_pending_open_record(ctx))
		return ctx->push_pending_record(sk, flags);

	return 0;
}

int tls_proccess_cmsg(struct sock *sk, struct msghdr *msg,
		      unsigned char *record_type)
{
	struct cmsghdr *cmsg;
	int rc = -EINVAL;

	for_each_cmsghdr(cmsg, msg) {
		if (!CMSG_OK(msg, cmsg))
			return -EINVAL;
		if (cmsg->cmsg_level != SOL_TLS)
			continue;

		switch (cmsg->cmsg_type) {
		case TLS_SET_RECORD_TYPE:
			if (cmsg->cmsg_len < CMSG_LEN(sizeof(*record_type)))
				return -EINVAL;

			if (msg->msg_flags & MSG_MORE)
				return -EINVAL;

			rc = tls_handle_open_record(sk, msg->msg_flags);
			if (rc)
				return rc;

			*record_type = *(unsigned char *)CMSG_DATA(cmsg);
			rc = 0;
			break;
		default:
			return -EINVAL;
		}
	}

	return rc;
}

int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
			    int flags)
{
	struct scatterlist *sg;
	u16 offset;

	sg = ctx->partially_sent_record;
	offset = ctx->partially_sent_offset;

	ctx->partially_sent_record = NULL;
	return tls_push_sg(sk, ctx, sg, offset, flags);
}

void tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
{
	struct scatterlist *sg;

	for (sg = ctx->partially_sent_record; sg; sg = sg_next(sg)) {
		put_page(sg_page(sg));
		sk_mem_uncharge(sk, sg->length);
	}
	ctx->partially_sent_record = NULL;
}

static void tls_write_space(struct sock *sk)
{
	struct tls_context *ctx = tls_get_ctx(sk);

	/* If in_tcp_sendpages is set, call the lower protocol's write space
	 * handler to ensure we wake up any operations waiting there, for
	 * example if do_tcp_sendpages were to call sk_wait_event.
	 */
	if (ctx->in_tcp_sendpages) {
		ctx->sk_write_space(sk);
		return;
	}

#ifdef CONFIG_TLS_DEVICE
	if (ctx->tx_conf == TLS_HW)
		tls_device_write_space(sk, ctx);
	else
#endif
		tls_sw_write_space(sk, ctx);

	ctx->sk_write_space(sk);
}

/**
 * tls_ctx_free() - free TLS ULP context
 * @sk:  socket to which @ctx is attached
 * @ctx: TLS context structure
 *
 * Free TLS context. If @sk is %NULL, the caller guarantees that the socket
 * to which @ctx was attached has no outstanding references.
 */
void tls_ctx_free(struct sock *sk, struct tls_context *ctx)
{
	if (!ctx)
		return;

	memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));
	memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));
	mutex_destroy(&ctx->tx_lock);

	if (sk)
		kfree_rcu(ctx, rcu);
	else
		kfree(ctx);
}

static void tls_sk_proto_cleanup(struct sock *sk,
				 struct tls_context *ctx, long timeo)
{
	if (unlikely(sk->sk_write_pending) &&
	    !wait_on_pending_writer(sk, &timeo))
		tls_handle_open_record(sk, 0);

	/* We need these for tls_sw_fallback handling of other packets */
	if (ctx->tx_conf == TLS_SW) {
		kfree(ctx->tx.rec_seq);
		kfree(ctx->tx.iv);
		tls_sw_release_resources_tx(sk);
		TLS_DEC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXSW);
	} else if (ctx->tx_conf == TLS_HW) {
		tls_device_free_resources_tx(sk);
		TLS_DEC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXDEVICE);
	}

	if (ctx->rx_conf == TLS_SW) {
		tls_sw_release_resources_rx(sk);
		TLS_DEC_STATS(sock_net(sk), LINUX_MIB_TLSCURRRXSW);
	} else if (ctx->rx_conf == TLS_HW) {
		tls_device_offload_cleanup_rx(sk);
		TLS_DEC_STATS(sock_net(sk), LINUX_MIB_TLSCURRRXDEVICE);
	}
}

static void tls_sk_proto_close(struct sock *sk, long timeout)
{
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tls_context *ctx = tls_get_ctx(sk);
	long timeo = sock_sndtimeo(sk, 0);
	bool free_ctx;

	if (ctx->tx_conf == TLS_SW)
		tls_sw_cancel_work_tx(ctx);

	lock_sock(sk);
	free_ctx = ctx->tx_conf != TLS_HW && ctx->rx_conf != TLS_HW;

	if (ctx->tx_conf != TLS_BASE || ctx->rx_conf != TLS_BASE)
		tls_sk_proto_cleanup(sk, ctx, timeo);

	write_lock_bh(&sk->sk_callback_lock);
	if (free_ctx)
		rcu_assign_pointer(icsk->icsk_ulp_data, NULL);
	WRITE_ONCE(sk->sk_prot, ctx->sk_proto);
	if (sk->sk_write_space == tls_write_space)
		sk->sk_write_space = ctx->sk_write_space;
	write_unlock_bh(&sk->sk_callback_lock);
	release_sock(sk);
	if (ctx->tx_conf == TLS_SW)
		tls_sw_free_ctx_tx(ctx);
	if (ctx->rx_conf == TLS_SW || ctx->rx_conf == TLS_HW)
		tls_sw_strparser_done(ctx);
	if (ctx->rx_conf == TLS_SW)
		tls_sw_free_ctx_rx(ctx);
	ctx->sk_proto->close(sk, timeout);

	if (free_ctx)
		tls_ctx_free(sk, ctx);
}

static int do_tls_getsockopt_conf(struct sock *sk, char __user *optval,
				  int __user *optlen, int tx)
{
	int rc = 0;
	struct tls_context *ctx = tls_get_ctx(sk);
	struct tls_crypto_info *crypto_info;
	struct cipher_context *cctx;
	int len;

	if (get_user(len, optlen))
		return -EFAULT;

	if (!optval || (len < sizeof(*crypto_info))) {
		rc = -EINVAL;
		goto out;
	}

	if (!ctx) {
		rc = -EBUSY;
		goto out;
	}

	/* get user crypto info */
	if (tx) {
		crypto_info = &ctx->crypto_send.info;
		cctx = &ctx->tx;
	} else {
		crypto_info = &ctx->crypto_recv.info;
		cctx = &ctx->rx;
	}

	if (!TLS_CRYPTO_INFO_READY(crypto_info)) {
		rc = -EBUSY;
		goto out;
	}

	if (len == sizeof(*crypto_info)) {
		if (copy_to_user(optval, crypto_info, sizeof(*crypto_info)))
			rc = -EFAULT;
		goto out;
	}

	switch (crypto_info->cipher_type) {
	case TLS_CIPHER_AES_GCM_128: {
		struct tls12_crypto_info_aes_gcm_128 *
		  crypto_info_aes_gcm_128 =
		  container_of(crypto_info,
			       struct tls12_crypto_info_aes_gcm_128,
			       info);

		if (len != sizeof(*crypto_info_aes_gcm_128)) {
			rc = -EINVAL;
			goto out;
		}
		lock_sock(sk);
		memcpy(crypto_info_aes_gcm_128->iv,
		       cctx->iv + TLS_CIPHER_AES_GCM_128_SALT_SIZE,
		       TLS_CIPHER_AES_GCM_128_IV_SIZE);
		memcpy(crypto_info_aes_gcm_128->rec_seq, cctx->rec_seq,
		       TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);
		release_sock(sk);
		if (copy_to_user(optval,
				 crypto_info_aes_gcm_128,
				 sizeof(*crypto_info_aes_gcm_128)))
			rc = -EFAULT;
		break;
	}
	case TLS_CIPHER_AES_GCM_256: {
		struct tls12_crypto_info_aes_gcm_256 *
		  crypto_info_aes_gcm_256 =
		  container_of(crypto_info,
			       struct tls12_crypto_info_aes_gcm_256,
			       info);

		if (len != sizeof(*crypto_info_aes_gcm_256)) {
			rc = -EINVAL;
			goto out;
		}
		lock_sock(sk);
		memcpy(crypto_info_aes_gcm_256->iv,
		       cctx->iv + TLS_CIPHER_AES_GCM_256_SALT_SIZE,
		       TLS_CIPHER_AES_GCM_256_IV_SIZE);
		memcpy(crypto_info_aes_gcm_256->rec_seq, cctx->rec_seq,
		       TLS_CIPHER_AES_GCM_256_REC_SEQ_SIZE);
		release_sock(sk);
		if (copy_to_user(optval,
				 crypto_info_aes_gcm_256,
				 sizeof(*crypto_info_aes_gcm_256)))
			rc = -EFAULT;
		break;
	}
	default:
		rc = -EINVAL;
	}

out:
	return rc;
}

static int do_tls_getsockopt(struct sock *sk, int optname,
			     char __user *optval, int __user *optlen)
{
	int rc = 0;

	switch (optname) {
	case TLS_TX:
	case TLS_RX:
		rc = do_tls_getsockopt_conf(sk, optval, optlen,
					    optname == TLS_TX);
		break;
	default:
		rc = -ENOPROTOOPT;
		break;
	}
	return rc;
}

static int tls_getsockopt(struct sock *sk, int level, int optname,
			  char __user *optval, int __user *optlen)
{
	struct tls_context *ctx = tls_get_ctx(sk);

	if (level != SOL_TLS)
		return ctx->sk_proto->getsockopt(sk, level,
						 optname, optval, optlen);

	return do_tls_getsockopt(sk, optname, optval, optlen);
}

static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
				  unsigned int optlen, int tx)
{
	struct tls_crypto_info *crypto_info;
	struct tls_crypto_info *alt_crypto_info;
	struct tls_context *ctx = tls_get_ctx(sk);
	size_t optsize;
	int rc = 0;
	int conf;

	if (sockptr_is_null(optval) || (optlen < sizeof(*crypto_info))) {
		rc = -EINVAL;
		goto out;
	}

	if (tx) {
		crypto_info = &ctx->crypto_send.info;
		alt_crypto_info = &ctx->crypto_recv.info;
	} else {
		crypto_info = &ctx->crypto_recv.info;
		alt_crypto_info = &ctx->crypto_send.info;
	}

	/* Currently we don't support setting crypto info more than one time */
	if (TLS_CRYPTO_INFO_READY(crypto_info)) {
		rc = -EBUSY;
		goto out;
	}

	rc = copy_from_sockptr(crypto_info, optval, sizeof(*crypto_info));
	if (rc) {
		rc = -EFAULT;
		goto err_crypto_info;
	}

	/* check version */
	if (crypto_info->version != TLS_1_2_VERSION &&
	    crypto_info->version != TLS_1_3_VERSION) {
		rc = -EINVAL;
		goto err_crypto_info;
	}

	/* Ensure that TLS version and ciphers are same in both directions */
	if (TLS_CRYPTO_INFO_READY(alt_crypto_info)) {
		if (alt_crypto_info->version != crypto_info->version ||
		    alt_crypto_info->cipher_type != crypto_info->cipher_type) {
			rc = -EINVAL;
			goto err_crypto_info;
		}
	}

	switch (crypto_info->cipher_type) {
	case TLS_CIPHER_AES_GCM_128:
		optsize = sizeof(struct tls12_crypto_info_aes_gcm_128);
		break;
	case TLS_CIPHER_AES_GCM_256:
		optsize = sizeof(struct tls12_crypto_info_aes_gcm_256);
		break;
	case TLS_CIPHER_AES_CCM_128:
		optsize = sizeof(struct tls12_crypto_info_aes_ccm_128);
		break;
	case TLS_CIPHER_CHACHA20_POLY1305:
		optsize = sizeof(struct tls12_crypto_info_chacha20_poly1305);
		break;
	default:
		rc = -EINVAL;

tls: reset crypto_info when do_tls_setsockopt_tx fails

The current code copies directly from userspace to ctx->crypto_send, but
doesn't always reinitialize it to 0 on failure. This causes any
subsequent attempt to use this setsockopt to fail because of the
TLS_CRYPTO_INFO_READY check, even though crypto_info is not actually
ready.

This should result in a correctly set up socket after the 3rd call, but
currently it does not:

  size_t s = sizeof(struct tls12_crypto_info_aes_gcm_128);
  struct tls12_crypto_info_aes_gcm_128 crypto_good = {
          .info.version = TLS_1_2_VERSION,
          .info.cipher_type = TLS_CIPHER_AES_GCM_128,
  };

  struct tls12_crypto_info_aes_gcm_128 crypto_bad_type = crypto_good;
  crypto_bad_type.info.cipher_type = 42;

  setsockopt(sock, SOL_TLS, TLS_TX, &crypto_bad_type, s);
  setsockopt(sock, SOL_TLS, TLS_TX, &crypto_good, s - 1);
  setsockopt(sock, SOL_TLS, TLS_TX, &crypto_good, s);

Fixes: 3c4d7559159b ("tls: kernel TLS support")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
		goto err_crypto_info;
	}

	if (optlen != optsize) {
		rc = -EINVAL;
		goto err_crypto_info;
	}

	rc = copy_from_sockptr_offset(crypto_info + 1, optval,
				      sizeof(*crypto_info),
				      optlen - sizeof(*crypto_info));
	if (rc) {
		rc = -EFAULT;
		goto err_crypto_info;
	}

	if (tx) {
		rc = tls_set_device_offload(sk, ctx);
		conf = TLS_HW;
		if (!rc) {
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSTXDEVICE);
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXDEVICE);
		} else {
			rc = tls_set_sw_offload(sk, ctx, 1);
			if (rc)
				goto err_crypto_info;
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSTXSW);
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXSW);
			conf = TLS_SW;
		}
	} else {
		rc = tls_set_device_offload_rx(sk, ctx);
		conf = TLS_HW;
		if (!rc) {
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXDEVICE);
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRRXDEVICE);
		} else {
			rc = tls_set_sw_offload(sk, ctx, 0);
			if (rc)
				goto err_crypto_info;
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXSW);
			TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRRXSW);
			conf = TLS_SW;
		}
		tls_sw_strparser_arm(sk, ctx);
	}

	if (tx)
		ctx->tx_conf = conf;
	else
		ctx->rx_conf = conf;
	update_sk_prot(sk, ctx);
	if (tx) {
		ctx->sk_write_space = sk->sk_write_space;
		sk->sk_write_space = tls_write_space;
	} else {
		sk->sk_socket->ops = &tls_sw_proto_ops;
	}
	goto out;

err_crypto_info:
	memzero_explicit(crypto_info, sizeof(union tls_crypto_context));
out:
	return rc;
}
static int do_tls_setsockopt(struct sock *sk, int optname, sockptr_t optval,
			     unsigned int optlen)
{
	int rc = 0;

	switch (optname) {
	case TLS_TX:
	case TLS_RX:
		lock_sock(sk);
		rc = do_tls_setsockopt_conf(sk, optval, optlen,
					    optname == TLS_TX);
		release_sock(sk);
		break;
	default:
		rc = -ENOPROTOOPT;
		break;
	}
	return rc;
}
static int tls_setsockopt(struct sock *sk, int level, int optname,
			  sockptr_t optval, unsigned int optlen)
{
	struct tls_context *ctx = tls_get_ctx(sk);

	if (level != SOL_TLS)
		return ctx->sk_proto->setsockopt(sk, level, optname, optval,
						 optlen);

	return do_tls_setsockopt(sk, optname, optval, optlen);
}
struct tls_context *tls_ctx_create(struct sock *sk)
{
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct tls_context *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
	if (!ctx)
		return NULL;

	mutex_init(&ctx->tx_lock);
	rcu_assign_pointer(icsk->icsk_ulp_data, ctx);
	ctx->sk_proto = READ_ONCE(sk->sk_prot);
	ctx->sk = sk;
	return ctx;
}
static void tls_build_proto(struct sock *sk)
{
	int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
	struct proto *prot = READ_ONCE(sk->sk_prot);

	/* Build IPv6 TLS whenever the address of tcpv6_prot changes */
	if (ip_ver == TLSV6 &&
	    unlikely(prot != smp_load_acquire(&saved_tcpv6_prot))) {
		mutex_lock(&tcpv6_prot_mutex);
		if (likely(prot != saved_tcpv6_prot)) {
			build_protos(tls_prots[TLSV6], prot);
			smp_store_release(&saved_tcpv6_prot, prot);
		}
		mutex_unlock(&tcpv6_prot_mutex);
	}

	if (ip_ver == TLSV4 &&
	    unlikely(prot != smp_load_acquire(&saved_tcpv4_prot))) {
		mutex_lock(&tcpv4_prot_mutex);
		if (likely(prot != saved_tcpv4_prot)) {
			build_protos(tls_prots[TLSV4], prot);
			smp_store_release(&saved_tcpv4_prot, prot);
		}
		mutex_unlock(&tcpv4_prot_mutex);
	}
}
static void build_protos(struct proto prot[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
			 const struct proto *base)
{
	prot[TLS_BASE][TLS_BASE] = *base;
	prot[TLS_BASE][TLS_BASE].setsockopt = tls_setsockopt;
	prot[TLS_BASE][TLS_BASE].getsockopt = tls_getsockopt;
	prot[TLS_BASE][TLS_BASE].close = tls_sk_proto_close;

	prot[TLS_SW][TLS_BASE] = prot[TLS_BASE][TLS_BASE];
	prot[TLS_SW][TLS_BASE].sendmsg = tls_sw_sendmsg;
	prot[TLS_SW][TLS_BASE].sendpage = tls_sw_sendpage;

	prot[TLS_BASE][TLS_SW] = prot[TLS_BASE][TLS_BASE];
	prot[TLS_BASE][TLS_SW].recvmsg = tls_sw_recvmsg;
	prot[TLS_BASE][TLS_SW].stream_memory_read = tls_sw_stream_read;
	prot[TLS_BASE][TLS_SW].close = tls_sk_proto_close;

	prot[TLS_SW][TLS_SW] = prot[TLS_SW][TLS_BASE];
	prot[TLS_SW][TLS_SW].recvmsg = tls_sw_recvmsg;
	prot[TLS_SW][TLS_SW].stream_memory_read = tls_sw_stream_read;
	prot[TLS_SW][TLS_SW].close = tls_sk_proto_close;

#ifdef CONFIG_TLS_DEVICE
	prot[TLS_HW][TLS_BASE] = prot[TLS_BASE][TLS_BASE];
	prot[TLS_HW][TLS_BASE].sendmsg = tls_device_sendmsg;
	prot[TLS_HW][TLS_BASE].sendpage = tls_device_sendpage;

	prot[TLS_HW][TLS_SW] = prot[TLS_BASE][TLS_SW];
	prot[TLS_HW][TLS_SW].sendmsg = tls_device_sendmsg;
	prot[TLS_HW][TLS_SW].sendpage = tls_device_sendpage;

	prot[TLS_BASE][TLS_HW] = prot[TLS_BASE][TLS_SW];
	prot[TLS_SW][TLS_HW] = prot[TLS_SW][TLS_SW];
	prot[TLS_HW][TLS_HW] = prot[TLS_HW][TLS_SW];
#endif
#ifdef CONFIG_TLS_TOE
	prot[TLS_HW_RECORD][TLS_HW_RECORD] = *base;
	prot[TLS_HW_RECORD][TLS_HW_RECORD].hash = tls_toe_hash;
	prot[TLS_HW_RECORD][TLS_HW_RECORD].unhash = tls_toe_unhash;
#endif
}
static int tls_init(struct sock *sk)
{
	struct tls_context *ctx;
	int rc = 0;

	tls_build_proto(sk);

#ifdef CONFIG_TLS_TOE
	if (tls_toe_bypass(sk))
		return 0;
#endif

	/* The TLS ulp is currently supported only for TCP sockets
	 * in ESTABLISHED state.
	 * Supporting sockets in LISTEN state will require us
	 * to modify the accept implementation to clone rather than
	 * share the ulp context.
	 */
	if (sk->sk_state != TCP_ESTABLISHED)
		return -ENOTCONN;

	/* allocate tls context */
	write_lock_bh(&sk->sk_callback_lock);
	ctx = tls_ctx_create(sk);
	if (!ctx) {
		rc = -ENOMEM;
		goto out;
	}

	ctx->tx_conf = TLS_BASE;
	ctx->rx_conf = TLS_BASE;
	update_sk_prot(sk, ctx);
out:
	write_unlock_bh(&sk->sk_callback_lock);
	return rc;
}
static void tls_update(struct sock *sk, struct proto *p,
		       void (*write_space)(struct sock *sk))
{
	struct tls_context *ctx;

	ctx = tls_get_ctx(sk);
	if (likely(ctx)) {
		ctx->sk_write_space = write_space;
		ctx->sk_proto = p;
	} else {
		/* Pairs with lockless read in sk_clone_lock(). */
		WRITE_ONCE(sk->sk_prot, p);
		sk->sk_write_space = write_space;
	}
}
static int tls_get_info(const struct sock *sk, struct sk_buff *skb)
{
	u16 version, cipher_type;
	struct tls_context *ctx;
	struct nlattr *start;
	int err;

	start = nla_nest_start_noflag(skb, INET_ULP_INFO_TLS);
	if (!start)
		return -EMSGSIZE;

	rcu_read_lock();
	ctx = rcu_dereference(inet_csk(sk)->icsk_ulp_data);
	if (!ctx) {
		err = 0;
		goto nla_failure;
	}
	version = ctx->prot_info.version;
	if (version) {
		err = nla_put_u16(skb, TLS_INFO_VERSION, version);
		if (err)
			goto nla_failure;
	}
	cipher_type = ctx->prot_info.cipher_type;
	if (cipher_type) {
		err = nla_put_u16(skb, TLS_INFO_CIPHER, cipher_type);
		if (err)
			goto nla_failure;
	}
	err = nla_put_u16(skb, TLS_INFO_TXCONF, tls_user_config(ctx, true));
	if (err)
		goto nla_failure;

	err = nla_put_u16(skb, TLS_INFO_RXCONF, tls_user_config(ctx, false));
	if (err)
		goto nla_failure;

	rcu_read_unlock();
	nla_nest_end(skb, start);
	return 0;

nla_failure:
	rcu_read_unlock();
	nla_nest_cancel(skb, start);
	return err;
}
static size_t tls_get_info_size(const struct sock *sk)
{
	size_t size = 0;

	size += nla_total_size(0) +		/* INET_ULP_INFO_TLS */
		nla_total_size(sizeof(u16)) +	/* TLS_INFO_VERSION */
		nla_total_size(sizeof(u16)) +	/* TLS_INFO_CIPHER */
		nla_total_size(sizeof(u16)) +	/* TLS_INFO_RXCONF */
		nla_total_size(sizeof(u16)) +	/* TLS_INFO_TXCONF */
		0;

	return size;
}
static int __net_init tls_init_net(struct net *net)
{
	int err;

	net->mib.tls_statistics = alloc_percpu(struct linux_tls_mib);
	if (!net->mib.tls_statistics)
		return -ENOMEM;

	err = tls_proc_init(net);
	if (err)
		goto err_free_stats;

	return 0;
err_free_stats:
	free_percpu(net->mib.tls_statistics);
	return err;
}

static void __net_exit tls_exit_net(struct net *net)
{
	tls_proc_fini(net);
	free_percpu(net->mib.tls_statistics);
}

static struct pernet_operations tls_proc_ops = {
	.init = tls_init_net,
	.exit = tls_exit_net,
};
static struct tcp_ulp_ops tcp_tls_ulp_ops __read_mostly = {
	.name			= "tls",
	.owner			= THIS_MODULE,
	.init			= tls_init,
	.update			= tls_update,
	.get_info		= tls_get_info,
	.get_info_size		= tls_get_info_size,
};

static int __init tls_register(void)
{
	int err;

	err = register_pernet_subsys(&tls_proc_ops);
	if (err)
		return err;
	tls_sw_proto_ops = inet_stream_ops;
	tls_sw_proto_ops.splice_read = tls_sw_splice_read;
	tls_sw_proto_ops.sendpage_locked = tls_sw_sendpage_locked;
	tls_device_init();
	tcp_register_ulp(&tcp_tls_ulp_ops);

	return 0;
}

static void __exit tls_unregister(void)
{
	tcp_unregister_ulp(&tcp_tls_ulp_ops);
	tls_device_cleanup();
	unregister_pernet_subsys(&tls_proc_ops);
}

module_init(tls_register);
module_exit(tls_unregister);