io_uring: add set of tracing events

To trace io_uring activity one can get information from workqueue and
io trace events, but some parts can be hard to identify via this
approach. Making what happens inside io_uring more transparent is
important for reasoning about many aspects of it, hence introduce a set
of tracing events.

All such events can be roughly divided into two categories:

* those that help to understand correctness (from both the kernel and
  an application point of view), e.g. ring creation, file registration,
  or waiting for an available CQE. The proposed approach is to get a
  pointer to the original structure of interest (ring context or
  request) and then find relevant events. io_uring_queue_async_work
  also exposes a pointer to the work_struct, to be able to track down
  corresponding workqueue events.

* those that provide performance-related information. This is mostly
  about events that change the flow of requests, e.g. whether an async
  work was queued or delayed due to some dependencies. Another
  important case is how io_uring optimizations (e.g. registered files)
  are utilized.

Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 19:02:01 +02:00
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM io_uring

#if !defined(_TRACE_IO_URING_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IO_URING_H

#include <linux/tracepoint.h>
#include <uapi/linux/io_uring.h>
#include <linux/io_uring_types.h>
#include <linux/io_uring.h>

struct io_wq_work;

/**
 * io_uring_create - called after a new io_uring context was prepared
 *
 * @fd:			corresponding file descriptor
 * @ctx:		pointer to a ring context structure
 * @sq_entries:		actual SQ size
 * @cq_entries:		actual CQ size
 * @flags:		SQ ring flags, provided to io_uring_setup(2)
 *
 * Allows to trace io_uring creation and provide pointer to a context, that can
 * be used later to find correlated events.
 */
TRACE_EVENT(io_uring_create,

	TP_PROTO(int fd, void *ctx, u32 sq_entries, u32 cq_entries, u32 flags),

	TP_ARGS(fd, ctx, sq_entries, cq_entries, flags),

	TP_STRUCT__entry(
		__field(int,		fd)
		__field(void *,		ctx)
		__field(u32,		sq_entries)
		__field(u32,		cq_entries)
		__field(u32,		flags)
	),

	TP_fast_assign(
		__entry->fd		= fd;
		__entry->ctx		= ctx;
		__entry->sq_entries	= sq_entries;
		__entry->cq_entries	= cq_entries;
		__entry->flags		= flags;
	),

	TP_printk("ring %p, fd %d sq size %d, cq size %d, flags 0x%x",
		  __entry->ctx, __entry->fd, __entry->sq_entries,
		  __entry->cq_entries, __entry->flags)
);
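As a rough sketch of how this event can be observed from userspace (assuming tracefs is mounted at its usual location and the kernel was built with these tracepoints), one can enable it and read the trace buffer; the emitted lines follow the TP_printk format above:

```shell
# Enable the io_uring_create tracepoint (requires root; the path assumes
# tracefs mounted at /sys/kernel/tracing).
echo 1 > /sys/kernel/tracing/events/io_uring/io_uring_create/enable

# Run an io_uring-using workload, then read the accumulated events; each
# line carries the ring pointer, fd, queue sizes and setup flags.
cat /sys/kernel/tracing/trace
```

The ring context pointer printed here is the key for correlating later events (registration, file gets, async work) belonging to the same ring.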

/**
 * io_uring_register - called after a buffer/file/eventfd was successfully
 *			 registered for a ring
 *
 * @ctx:		pointer to a ring context structure
 * @opcode:		describes which operation to perform
 * @nr_user_files:	number of registered files
 * @nr_user_bufs:	number of registered buffers
 * @ret:		return code
 *
 * Allows to trace fixed files/buffers, that could be registered to
 * avoid an overhead of getting references to them for every operation. This
 * event, together with io_uring_file_get, can provide a full picture of how
 * much overhead one can reduce via fixing.
 */
TRACE_EVENT(io_uring_register,

	TP_PROTO(void *ctx, unsigned opcode, unsigned nr_files,
		 unsigned nr_bufs, long ret),

	TP_ARGS(ctx, opcode, nr_files, nr_bufs, ret),

	TP_STRUCT__entry(
		__field(void *,		ctx)
		__field(unsigned,	opcode)
		__field(unsigned,	nr_files)
		__field(unsigned,	nr_bufs)
		__field(long,		ret)
	),

	TP_fast_assign(
		__entry->ctx		= ctx;
		__entry->opcode		= opcode;
		__entry->nr_files	= nr_files;
		__entry->nr_bufs	= nr_bufs;
		__entry->ret		= ret;
	),

	TP_printk("ring %p, opcode %d, nr_user_files %d, nr_user_bufs %d, "
		  "ret %ld",
		  __entry->ctx, __entry->opcode, __entry->nr_files,
		  __entry->nr_bufs, __entry->ret)
);

/**
 * io_uring_file_get - called before getting references to an SQE file
 *
 * @req:	pointer to a submitted request
 * @fd:		SQE file descriptor
 *
 * Allows to trace out how often an SQE file reference is obtained, which can
 * help figuring out if it makes sense to use fixed files, or check that fixed
 * files are used correctly.
 */
TRACE_EVENT(io_uring_file_get,

	TP_PROTO(struct io_kiocb *req, int fd),

	TP_ARGS(req, fd),

	TP_STRUCT__entry(
		__field(void *,		ctx)
		__field(void *,		req)
		__field(u64,		user_data)
		__field(int,		fd)
	),

	TP_fast_assign(
		__entry->ctx		= req->ctx;
		__entry->req		= req;
		__entry->user_data	= req->cqe.user_data;
		__entry->fd		= fd;
	),

	TP_printk("ring %p, req %p, user_data 0x%llx, fd %d",
		  __entry->ctx, __entry->req, __entry->user_data, __entry->fd)
);
/**
 * io_uring_queue_async_work - called before submitting a new async work
 *
 * @req:	pointer to a submitted request
 * @rw:		type of workqueue, hashed or normal
 *
 * Allows tracing asynchronous work submission.
 */
TRACE_EVENT(io_uring_queue_async_work,

	TP_PROTO(struct io_kiocb *req, int rw),

	TP_ARGS(req, rw),
	TP_STRUCT__entry (
		__field(  void *,			ctx		)
		__field(  void *,			req		)
		__field(  u64,				user_data	)
		__field(  u8,				opcode		)
		__field(  unsigned int,			flags		)
		__field(  struct io_wq_work *,		work		)
		__field(  int,				rw		)

		__string( op_str, io_uring_get_opcode(req->opcode)	)
	),

	TP_fast_assign(
		__entry->ctx		= req->ctx;
		__entry->req		= req;
		__entry->user_data	= req->cqe.user_data;
		__entry->flags		= req->flags;
		__entry->opcode		= req->opcode;
		__entry->work		= &req->work;
		__entry->rw		= rw;

		__assign_str(op_str, io_uring_get_opcode(req->opcode));
	),

	TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, flags 0x%x, %s queue, work %p",
		__entry->ctx, __entry->req, __entry->user_data,
		__get_str(op_str),
		__entry->flags, __entry->rw ? "hashed" : "normal", __entry->work)
);
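/*
 * Illustrative usage note (assumes the standard tracefs mount point; not
 * part of the original header): the event above can be enabled from
 * userspace, e.g.:
 *
 *	# echo 1 > /sys/kernel/tracing/events/io_uring/io_uring_queue_async_work/enable
 *	# cat /sys/kernel/tracing/trace_pipe
 *
 * The emitted "work %p" pointer can then be matched against workqueue
 * trace events to follow a request into its async worker.
 */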
/**
 * io_uring_defer - called when an io_uring request is deferred
 *
 * @req:	pointer to a deferred request
 *
 * Allows tracking deferred requests, to get insight into which requests are
 * not started immediately.
 */
TRACE_EVENT(io_uring_defer,

	TP_PROTO(struct io_kiocb *req),

	TP_ARGS(req),
	TP_STRUCT__entry (
		__field(  void *,		ctx	)
		__field(  void *,		req	)
		__field(  unsigned long long,	data	)
		__field(  u8,			opcode	)

		__string( op_str, io_uring_get_opcode(req->opcode) )
	),

	TP_fast_assign(
		__entry->ctx	= req->ctx;
		__entry->req	= req;
		__entry->data	= req->cqe.user_data;
		__entry->opcode	= req->opcode;

		__assign_str(op_str, io_uring_get_opcode(req->opcode));
	),

	TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s",
		__entry->ctx, __entry->req, __entry->data,
		__get_str(op_str))
);

/**
 * io_uring_link - called before an io_uring request is added into the
 *		   link_list of another request
 *
 * @req:		pointer to a linked request
 * @target_req:	pointer to a previous request, the one that will contain @req
 *
 * Allows tracking linked requests, to understand dependencies between
 * requests and how they influence the execution flow.
 */
TRACE_EVENT(io_uring_link,

	TP_PROTO(struct io_kiocb *req, struct io_kiocb *target_req),

	TP_ARGS(req, target_req),
	TP_STRUCT__entry (
		__field(  void *,	ctx		)
		__field(  void *,	req		)
		__field(  void *,	target_req	)
	),

	TP_fast_assign(
		__entry->ctx		= req->ctx;
		__entry->req		= req;
		__entry->target_req	= target_req;
	),

	TP_printk("ring %p, request %p linked after %p",
		  __entry->ctx, __entry->req, __entry->target_req)
);
/**
 * io_uring_cqring_wait - called before starting to wait for an available CQE
 *
 * @ctx:		pointer to a ring context structure
 * @min_events:	minimal number of events to wait for
 *
 * Allows tracking waiting for a CQE, so that we can e.g. troubleshoot
 * situations when an application waits for an event that never comes.
 */
TRACE_EVENT(io_uring_cqring_wait,

	TP_PROTO(void *ctx, int min_events),

	TP_ARGS(ctx, min_events),

	TP_STRUCT__entry (
		__field(  void *,	ctx		)
		__field(  int,		min_events	)
	),

	TP_fast_assign(
		__entry->ctx		= ctx;
		__entry->min_events	= min_events;
	),

	TP_printk("ring %p, min_events %d", __entry->ctx, __entry->min_events)
);
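/*
 * Illustrative usage note (assumes the standard tracefs mount point; not
 * part of the original header): waits can be filtered on the requested
 * event count, e.g. to record only waits for more than one completion:
 *
 *	# echo 'min_events > 1' > \
 *		/sys/kernel/tracing/events/io_uring/io_uring_cqring_wait/filter
 */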
/**
 * io_uring_fail_link - called before failing a linked request
 *
 * @req:	request whose links were cancelled
 * @link:	cancelled link
 *
 * Allows tracking linked request cancellation, to see not only that some
 * work was cancelled, but also which request was the reason.
 */
TRACE_EVENT(io_uring_fail_link,

	TP_PROTO(struct io_kiocb *req, struct io_kiocb *link),
	TP_ARGS(req, link),
	TP_STRUCT__entry(
		__field(void *, ctx)
		__field(void *, req)
		__field(unsigned long long, user_data)
		__field(u8, opcode)
		__field(void *, link)

		__string(op_str, io_uring_get_opcode(req->opcode))
	),

	TP_fast_assign(
		__entry->ctx = req->ctx;
		__entry->req = req;
		__entry->user_data = req->cqe.user_data;
		__entry->opcode = req->opcode;
		__entry->link = link;

		__assign_str(op_str, io_uring_get_opcode(req->opcode));
	),

	TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, link %p",
		  __entry->ctx, __entry->req, __entry->user_data,
		  __get_str(op_str), __entry->link)
);
/**
 * io_uring_complete - called when completing an SQE
 *
 * @ctx:		pointer to a ring context structure
 * @req:		pointer to a submitted request
 * @user_data:		user data associated with the request
 * @res:		result of the request
 * @cflags:		completion flags
 * @extra1:		extra 64-bit data for CQE32
 * @extra2:		extra 64-bit data for CQE32
 */
TRACE_EVENT(io_uring_complete,

	TP_PROTO(void *ctx, void *req, u64 user_data, int res, unsigned cflags,
		 u64 extra1, u64 extra2),

	TP_ARGS(ctx, req, user_data, res, cflags, extra1, extra2),
	TP_STRUCT__entry(
		__field(void *, ctx)
		__field(void *, req)
		__field(u64, user_data)
		/* int, not long: io_uring results are ints internally, and a
		 * negative errno stored in a wider unsigned slot would print
		 * as a huge positive value */
		__field(int, res)
		__field(unsigned, cflags)
		__field(u64, extra1)
		__field(u64, extra2)
	),
	TP_fast_assign(
		__entry->ctx = ctx;
		__entry->req = req;
		__entry->user_data = user_data;
		__entry->res = res;
		__entry->cflags = cflags;
		__entry->extra1 = extra1;
		__entry->extra2 = extra2;
	),

	TP_printk("ring %p, req %p, user_data 0x%llx, result %d, cflags 0x%x "
		  "extra1 %llu extra2 %llu",
		  __entry->ctx, __entry->req, __entry->user_data,
		  __entry->res, __entry->cflags,
		  (unsigned long long) __entry->extra1,
		  (unsigned long long) __entry->extra2)
);
/**
 * io_uring_submit_sqe - called before submitting one SQE
 *
 * @req:		pointer to a submitted request
 * @force_nonblock:	whether the context is blocking or not
 *
 * Allows tracking SQE submission, to understand what was the source of it,
 * the SQ thread or an io_uring_enter call.
 */
TRACE_EVENT(io_uring_submit_sqe,

	TP_PROTO(struct io_kiocb *req, bool force_nonblock),
	TP_ARGS(req, force_nonblock),
	TP_STRUCT__entry(
		__field(void *, ctx)
		__field(void *, req)
		__field(unsigned long long, user_data)
		__field(u8, opcode)
		__field(u32, flags)
		__field(bool, force_nonblock)
		__field(bool, sq_thread)

		__string(op_str, io_uring_get_opcode(req->opcode))
	),

	TP_fast_assign(
		__entry->ctx = req->ctx;
		__entry->req = req;
		__entry->user_data = req->cqe.user_data;
		__entry->opcode = req->opcode;
		__entry->flags = req->flags;
		__entry->force_nonblock = force_nonblock;
		__entry->sq_thread = req->ctx->flags & IORING_SETUP_SQPOLL;

		__assign_str(op_str, io_uring_get_opcode(req->opcode));
	),

	TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, flags 0x%x, "
		  "non block %d, sq_thread %d", __entry->ctx, __entry->req,
		  __entry->user_data, __get_str(op_str),
		  __entry->flags, __entry->force_nonblock, __entry->sq_thread)
);
/*
 * io_uring_poll_arm - called after arming a poll wait if successful
 *
 * @req:	pointer to the armed request
 * @mask:	request poll events mask
 * @events:	registered events of interest
 *
 * Allows tracking which fds are waited on and what events are of
 * interest.
 */
TRACE_EVENT(io_uring_poll_arm,
	TP_PROTO(struct io_kiocb *req, int mask, int events),
	TP_ARGS(req, mask, events),
	TP_STRUCT__entry(
		__field(void *, ctx)
		__field(void *, req)
		__field(unsigned long long, user_data)
		__field(u8, opcode)
		__field(int, mask)
		__field(int, events)

		__string(op_str, io_uring_get_opcode(req->opcode))
	),

	TP_fast_assign(
		__entry->ctx = req->ctx;
		__entry->req = req;
		__entry->user_data = req->cqe.user_data;
		__entry->opcode = req->opcode;
		__entry->mask = mask;
		__entry->events = events;

		__assign_str(op_str, io_uring_get_opcode(req->opcode));
	),

	TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, mask 0x%x, events 0x%x",
		  __entry->ctx, __entry->req, __entry->user_data,
		  __get_str(op_str),
		  __entry->mask, __entry->events)
);
/*
 * io_uring_task_add - called after adding a task
 *
 * @req:	pointer to request
 * @mask:	request poll events mask
 */
TRACE_EVENT(io_uring_task_add,
TP_PROTO ( struct io_kiocb * req , int mask ) ,
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-14 22:23:12 -07:00
2022-06-16 13:57:20 +01:00
TP_ARGS ( req , mask ) ,
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-14 22:23:12 -07:00
TP_STRUCT__entry (
2022-02-14 10:04:30 -08:00
__field ( void * , ctx )
__field ( void * , req )
__field ( unsigned long long , user_data )
__field ( u8 , opcode )
__field ( int , mask )
2022-06-23 01:37:43 -07:00
2022-06-16 13:57:20 +01:00
__string ( op_str , io_uring_get_opcode ( req - > opcode ) )
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-14 22:23:12 -07:00
) ,
TP_fast_assign (
2022-06-16 13:57:20 +01:00
__entry - > ctx = req - > ctx ;
2021-05-31 02:36:37 -04:00
__entry - > req = req ;
2022-06-16 13:57:20 +01:00
__entry - > user_data = req - > cqe . user_data ;
__entry - > opcode = req - > opcode ;
2022-02-14 10:04:30 -08:00
__entry - > mask = mask ;
2022-06-23 01:37:43 -07:00
2022-06-16 13:57:20 +01:00
__assign_str ( op_str , io_uring_get_opcode ( req - > opcode ) ) ;
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-14 22:23:12 -07:00
) ,
2022-04-26 01:29:07 -07:00
TP_printk ( " ring %p, req %p, user_data 0x%llx, opcode %s, mask %x " ,
__entry - > ctx , __entry - > req , __entry - > user_data ,
2022-06-23 01:37:43 -07:00
__get_str ( op_str ) ,
2022-02-14 10:04:30 -08:00
__entry - > mask )
io_uring: use poll driven retry for files that support it
Currently io_uring tries any request in a non-blocking manner, if it can,
and then retries from a worker thread if we get -EAGAIN. Now that we have
a new and fancy poll based retry backend, use that to retry requests if
the file supports it.
This means that, for example, an IORING_OP_RECVMSG on a socket no longer
requires an async thread to complete the IO. If we get -EAGAIN reading
from the socket in a non-blocking manner, we arm a poll handler for
notification on when the socket becomes readable. When it does, the
pending read is executed directly by the task again, through the io_uring
task work handlers. Not only is this faster and more efficient, it also
means we're not generating potentially tons of async threads that just
sit and block, waiting for the IO to complete.
The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
pollable IO is fast, and that poll<link>other_op is fast as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-14 22:23:12 -07:00
) ;
/*
 * io_uring_req_failed - called when an sqe is errored during submission
 *
 * @sqe:   pointer to the io_uring_sqe that failed
 * @req:   pointer to request
 * @error: error it failed with
 *
 * Allows easier diagnosing of malformed requests in production systems.
 */
TRACE_EVENT(io_uring_req_failed,

	TP_PROTO(const struct io_uring_sqe *sqe, struct io_kiocb *req, int error),

	TP_ARGS(sqe, req, error),

	TP_STRUCT__entry (
		__field(  void *,		ctx		)
		__field(  void *,		req		)
		__field(  unsigned long long,	user_data	)
		__field(  u8,			opcode		)
		__field(  u8,			flags		)
		__field(  u8,			ioprio		)
		__field(  u64,			off		)
		__field(  u64,			addr		)
		__field(  u32,			len		)
		__field(  u32,			op_flags	)
		__field(  u16,			buf_index	)
		__field(  u16,			personality	)
		__field(  u32,			file_index	)
		__field(  u64,			pad1		)
		__field(  u64,			addr3		)
		__field(  int,			error		)

		__string( op_str, io_uring_get_opcode(sqe->opcode) )
	),

	TP_fast_assign(
		__entry->ctx		= req->ctx;
		__entry->req		= req;
		__entry->user_data	= sqe->user_data;
		__entry->opcode		= sqe->opcode;
		__entry->flags		= sqe->flags;
		__entry->ioprio		= sqe->ioprio;
		__entry->off		= sqe->off;
		__entry->addr		= sqe->addr;
		__entry->len		= sqe->len;
		__entry->op_flags	= sqe->poll32_events;
		__entry->buf_index	= sqe->buf_index;
		__entry->personality	= sqe->personality;
		__entry->file_index	= sqe->file_index;
		__entry->pad1		= sqe->__pad2[0];
		__entry->addr3		= sqe->addr3;
		__entry->error		= error;

		__assign_str(op_str, io_uring_get_opcode(sqe->opcode));
	),

	TP_printk("ring %p, req %p, user_data 0x%llx, "
		  "opcode %s, flags 0x%x, prio=%d, off=%llu, addr=%llu, "
		  "len=%u, rw_flags=0x%x, buf_index=%d, "
		  "personality=%d, file_index=%d, pad=0x%llx, addr3=%llx, "
		  "error=%d",
		  __entry->ctx, __entry->req, __entry->user_data,
		  __get_str(op_str),
		  __entry->flags, __entry->ioprio,
		  (unsigned long long)__entry->off,
		  (unsigned long long) __entry->addr, __entry->len,
		  __entry->op_flags,
		  __entry->buf_index, __entry->personality, __entry->file_index,
		  (unsigned long long) __entry->pad1,
		  (unsigned long long) __entry->addr3, __entry->error)
);
/*
 * io_uring_cqe_overflow - a CQE overflowed
 *
 * @ctx:       pointer to a ring context structure
 * @user_data: user data associated with the request
 * @res:       CQE result
 * @cflags:    CQE flags
 * @ocqe:      pointer to the overflow cqe (if available)
 *
 */
TRACE_EVENT(io_uring_cqe_overflow,

	TP_PROTO(void *ctx, unsigned long long user_data, s32 res, u32 cflags,
		 void *ocqe),

	TP_ARGS(ctx, user_data, res, cflags, ocqe),

	TP_STRUCT__entry (
		__field(  void *,		ctx		)
		__field(  unsigned long long,	user_data	)
		__field(  s32,			res		)
		__field(  u32,			cflags		)
		__field(  void *,		ocqe		)
	),

	TP_fast_assign(
		__entry->ctx		= ctx;
		__entry->user_data	= user_data;
		__entry->res		= res;
		__entry->cflags		= cflags;
		__entry->ocqe		= ocqe;
	),

	TP_printk("ring %p, user_data 0x%llx, res %d, cflags 0x%x, "
		  "overflow_cqe %p",
		  __entry->ctx, __entry->user_data, __entry->res,
		  __entry->cflags, __entry->ocqe)
);
/*
 * io_uring_task_work_run - ran task work
 *
 * @tctx:  pointer to a io_uring_task
 * @count: how many functions it ran
 * @loops: how many loops it ran
 *
 */
TRACE_EVENT(io_uring_task_work_run,

	TP_PROTO(void *tctx, unsigned int count, unsigned int loops),

	TP_ARGS(tctx, count, loops),

	TP_STRUCT__entry (
		__field(  void *,		tctx		)
		__field(  unsigned int,		count		)
		__field(  unsigned int,		loops		)
	),

	TP_fast_assign(
		__entry->tctx		= tctx;
		__entry->count		= count;
		__entry->loops		= loops;
	),

	TP_printk("tctx %p, count %u, loops %u",
		  __entry->tctx, __entry->count, __entry->loops)
);
/*
 * io_uring_short_write - a buffered write completed short
 *
 * @ctx:    pointer to a ring context structure
 * @fpos:   file position of the write
 * @wanted: number of bytes that were meant to be written
 * @got:    number of bytes that were actually written
 */
TRACE_EVENT(io_uring_short_write,

	TP_PROTO(void *ctx, u64 fpos, u64 wanted, u64 got),

	TP_ARGS(ctx, fpos, wanted, got),

	TP_STRUCT__entry(
		__field( void *,	ctx	)
		__field( u64,		fpos	)
		__field( u64,		wanted	)
		__field( u64,		got	)
	),

	TP_fast_assign(
		__entry->ctx	= ctx;
		__entry->fpos	= fpos;
		__entry->wanted	= wanted;
		__entry->got	= got;
	),

	TP_printk("ring %p, fpos %lld, wanted %lld, got %lld",
		  __entry->ctx, __entry->fpos,
		  __entry->wanted, __entry->got)
);
#endif /* _TRACE_IO_URING_H */

/* This part must be outside protection */
#include <trace/define_trace.h>