Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time; this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO, the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
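As a rough illustration (not part of the patch itself), driving the two system
calls directly from an application could look something like the sketch below.
It assumes the uapi definitions from <linux/io_uring.h> and that
__NR_io_uring_setup/__NR_io_uring_enter are available; error handling for the
mmap() calls and the actual sqe setup are elided.

	#include <linux/io_uring.h>
	#include <sys/syscall.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <string.h>

	static int ring_setup(unsigned entries, struct io_uring_params *p)
	{
		return syscall(__NR_io_uring_setup, entries, p);
	}

	static int ring_enter(int fd, unsigned to_submit, unsigned min_complete,
			      unsigned flags)
	{
		return syscall(__NR_io_uring_enter, fd, to_submit, min_complete,
			       flags, NULL, 0);
	}

	int main(void)
	{
		struct io_uring_params p;
		void *sq_ring, *cq_ring, *sqes;
		int fd;

		memset(&p, 0, sizeof(p));
		fd = ring_setup(8, &p);		/* io_uring_setup(entries, params) */
		if (fd < 0)
			return 1;

		/* map the SQ ring, the CQ ring and the io_uring_sqe array */
		sq_ring = mmap(NULL, p.sq_off.array + p.sq_entries * sizeof(__u32),
			       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			       fd, IORING_OFF_SQ_RING);
		cq_ring = mmap(NULL, p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe),
			       PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			       fd, IORING_OFF_CQ_RING);
		sqes = mmap(NULL, p.sq_entries * sizeof(struct io_uring_sqe),
			    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
			    fd, IORING_OFF_SQES);
		(void)sq_ring; (void)cq_ring; (void)sqes;

		/* ... fill in sqes and publish them via the SQ ring tail ... */

		/* submit what was queued and wait for at least one completion */
		ring_enter(fd, 1, 1, IORING_ENTER_GETEVENTS);

		close(fd);
		return 0;
	}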
// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side.
 *
 * After the application reads the CQ ring tail, it must use an
 * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
 * before writing the tail (using smp_load_acquire to read the tail will
 * do). It also needs a smp_mb() before updating CQ head (ordering the
 * entry load(s) with the head store), pairing with an implicit barrier
 * through a control-dependency in io_get_cqring (smp_store_release to
 * store head will do). Failure to do so could lead to reading invalid
 * CQ entries.
 *
 * Likewise, the application must use an appropriate smp_wmb() before
 * writing the SQ tail (ordering SQ entry stores with the tail store),
 * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
 * to store the tail will do). And it needs a barrier ordering the SQ
 * head load before writing new SQ entries (smp_load_acquire to read
 * head will do).
 *
 * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
 * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
 * updating the SQ tail; a full memory barrier smp_mb() is needed
 * between.
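 *
 * For example (an illustrative sketch, not a verbatim quote of application
 * code), an application using IORING_SETUP_SQPOLL would guard its
 * io_uring_enter() call roughly like:
 *
 *	smp_mb();
 *	if (READ_ONCE(*sq_ring_flags) & IORING_SQ_NEED_WAKEUP)
 *		io_uring_enter(ring_fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
 *
 * rather than calling it unconditionally.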
 *
 * Also see the examples in the liburing library:
 *
 *	git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
 */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <linux/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>
#include <linux/bits.h>
#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/mmu_context.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/kthread.h>
#include <linux/blkdev.h>
#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <net/af_unix.h>
#include <net/scm.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/hugetlb.h>
#include <linux/highmem.h>
#include <linux/namei.h>
#include <linux/fsnotify.h>
#include <linux/fadvise.h>
#define CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>
#include <uapi/linux/io_uring.h>

#include "internal.h"
#include "io-wq.h"
#define IORING_MAX_ENTRIES	32768
#define IORING_MAX_CQ_ENTRIES	(2 * IORING_MAX_ENTRIES)

/*
 * Shift of 9 is 512 entries, or exactly one page on 64-bit archs
 */
#define IORING_FILE_TABLE_SHIFT	9
#define IORING_MAX_FILES_TABLE	(1U << IORING_FILE_TABLE_SHIFT)
#define IORING_FILE_TABLE_MASK	(IORING_MAX_FILES_TABLE - 1)
#define IORING_MAX_FIXED_FILES	(64 * IORING_MAX_FILES_TABLE)
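/*
 * Illustrative arithmetic (not from the original source): 512 entries times
 * sizeof(struct file *) is 512 * 8 = 4096 bytes on 64-bit, i.e. one page
 * per table.
 */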
struct io_uring {
	u32 head ____cacheline_aligned_in_smp;
	u32 tail ____cacheline_aligned_in_smp;
};
/*
 * This data is shared with the application through the mmap at offsets
 * IORING_OFF_SQ_RING and IORING_OFF_CQ_RING.
 *
 * The offsets to the member fields are published through struct
 * io_sqring_offsets when calling io_uring_setup.
 */
struct io_rings {
	/*
	 * Head and tail offsets into the ring; the offsets need to be
	 * masked to get valid indices.
	 *
	 * The kernel controls head of the sq ring and the tail of the cq ring,
	 * and the application controls tail of the sq ring and the head of the
	 * cq ring.
	 */
	struct io_uring		sq, cq;
	/*
	 * Bitmasks to apply to head and tail offsets (constant, equals
	 * ring_entries - 1)
	 */
	u32			sq_ring_mask, cq_ring_mask;
	/* Ring sizes (constant, power of 2) */
	u32			sq_ring_entries, cq_ring_entries;
	/*
	 * Number of invalid entries dropped by the kernel due to
	 * invalid index stored in array
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application (i.e. get number of "new events" by comparing to
	 * cached value).
	 *
	 * After a new SQ head value was read by the application this
	 * counter includes all submissions that were dropped reaching
	 * the new SQ head (and possibly more).
	 */
	u32			sq_dropped;
	/*
	 * Runtime flags
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application.
	 *
	 * The application needs a full memory barrier before checking
	 * for IORING_SQ_NEED_WAKEUP after updating the sq tail.
	 */
	u32			sq_flags;
	/*
	 * Number of completion events lost because the queue was full;
	 * this should be avoided by the application by making sure
	 * there are not more requests pending than there is space in
	 * the completion queue.
	 *
	 * Written by the kernel, shouldn't be modified by the
	 * application (i.e. get number of "new events" by comparing to
	 * cached value).
	 *
	 * As completion events come in out of order this counter is not
	 * ordered with any other data.
	 */
	u32			cq_overflow;
	/*
	 * Ring buffer of completion events.
	 *
	 * The kernel writes completion events fresh every time they are
	 * produced, so the application is allowed to modify pending
	 * entries.
	 */
	struct io_uring_cqe	cqes[] ____cacheline_aligned_in_smp;
};
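/*
 * Illustrative sketch (not from the original source): the head/tail values
 * above are free-running counters, so a ring slot is reached by masking,
 * e.g.
 *
 *	struct io_uring_cqe *cqe = &rings->cqes[head & rings->cq_ring_mask];
 *
 * which works because the ring sizes are powers of 2 and the masks equal
 * ring_entries - 1.
 */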
struct io_mapped_ubuf {
	u64			ubuf;
	size_t			len;
	struct bio_vec		*bvec;
	unsigned int		nr_bvecs;
};

struct fixed_file_table {
	struct file		**files;
};

enum {
	FFD_F_ATOMIC,
};

struct fixed_file_data {
	struct fixed_file_table	*table;
	struct io_ring_ctx	*ctx;

	struct percpu_ref	refs;
	struct llist_head	put_llist;
	unsigned long		state;
	struct work_struct	ref_work;
	struct completion	done;
};
struct io_ring_ctx {
	struct {
		struct percpu_ref	refs;
	} ____cacheline_aligned_in_smp;

	struct {
		unsigned int		flags;
		int			compat: 1;
		int			account_mem: 1;
		int			cq_overflow_flushed: 1;
		int			drain_next: 1;
		int			eventfd_async: 1;
		/*
		 * Ring buffer of indices into array of io_uring_sqe, which is
		 * mmapped by the application using the IORING_OFF_SQES offset.
		 *
		 * This indirection could e.g. be used to assign fixed
		 * io_uring_sqe entries to operations and only submit them to
		 * the queue when needed.
		 *
		 * The kernel modifies neither the indices array nor the entries
		 * array.
		 */
		u32			*sq_array;
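		/*
		 * Illustrative sketch (not from the original source): with
		 * this indirection, picking up the next submission roughly
		 * amounts to
		 *
		 *	idx = READ_ONCE(sq_array[head & sq_mask]);
		 *	sqe = &sq_sqes[idx];
		 *
		 * where 'head' is the free-running SQ head counter.
		 */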
		unsigned		cached_sq_head;
		unsigned		sq_entries;
		unsigned		sq_mask;
		unsigned		sq_thread_idle;
		unsigned		cached_sq_dropped;
		atomic_t		cached_cq_overflow;
		unsigned long		sq_check_overflow;

		struct list_head	defer_list;
		struct list_head	timeout_list;
		struct list_head	cq_overflow_list;

		wait_queue_head_t	inflight_wait;

		struct io_uring_sqe	*sq_sqes;
} ____cacheline_aligned_in_smp;
2019-11-08 04:27:42 +03:00
struct io_rings *rings;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
/* IO offload */
2019-10-24 16:25:42 +03:00
struct io_wq *io_wq;
io_uring: add submission polling
2019-01-10 21:22:30 +03:00
struct task_struct *sqo_thread; /* if using sq thread polling */
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
struct mm_struct *sqo_mm;
io_uring: add submission polling
2019-01-10 21:22:30 +03:00
wait_queue_head_t sqo_wait;
2019-08-26 20:23:46 +03:00
2019-01-11 08:13:58 +03:00
/*
 * If used, fixed file set. Writers must ensure that ->refs is dead,
 * readers must ensure that ->refs is alive as long as the file* is
 * used. Only updated through io_uring_register(2).
 */
2019-12-09 21:22:50 +03:00
struct fixed_file_data *file_data;
2019-01-11 08:13:58 +03:00
unsigned nr_user_files;
2020-01-17 04:45:59 +03:00
int ring_fd;
struct file *ring_file;
2019-01-11 08:13:58 +03:00
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
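A minimal sketch of the registration step, assuming __NR_io_uring_register and <linux/io_uring.h> are available (the helper name and the anonymous mapping are illustrative only): register one buffer so it becomes fixed buffer index 0 for later IORING_OP_READ_FIXED/IORING_OP_WRITE_FIXED use.
#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

static int register_fixed_buffer(int ring_fd, size_t len)
{
	struct iovec iov;

	iov.iov_base = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (iov.iov_base == MAP_FAILED)
		return -1;
	iov.iov_len = len;

	/* one iovec registered -> fixed buffer index 0 */
	return (int)syscall(__NR_io_uring_register, ring_fd,
			    IORING_REGISTER_BUFFERS, &iov, 1);
}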
/* if used, fixed mapped user buffers */
unsigned nr_user_bufs;
struct io_mapped_ubuf *user_bufs;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
struct user_struct *user;
2019-12-02 18:50:00 +03:00
const struct cred *creds;
2019-11-25 18:52:30 +03:00
2019-11-08 04:27:42 +03:00
/* 0 is for ctx quiesce/reinit/free, 1 is for sqo_thread started */
struct completion *completions;
2019-11-08 18:52:53 +03:00
/* if all else fails... */
struct io_kiocb *fallback_req;
2019-11-08 04:27:42 +03:00
#if defined(CONFIG_UNIX)
struct socket *ring_sock;
#endif
2020-01-28 20:04:42 +03:00
struct idr personality_idr;
2019-11-08 04:27:42 +03:00
struct {
unsigned cached_cq_tail;
unsigned cq_entries;
unsigned cq_mask;
atomic_t cq_timeouts;
2019-12-19 03:12:20 +03:00
unsigned long cq_check_overflow;
2019-11-08 04:27:42 +03:00
struct wait_queue_head cq_wait;
struct fasync_struct *cq_fasync;
struct eventfd_ctx *cq_ev_fd;
} ____cacheline_aligned_in_smp;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
struct {
struct mutex uring_lock;
wait_queue_head_t wait;
} ____cacheline_aligned_in_smp;
struct {
spinlock_t completion_lock;
2019-12-19 22:06:02 +03:00
struct llist_head poll_llist;
2019-01-09 18:59:42 +03:00
/*
 * ->poll_list is protected by the ctx->uring_lock for
 * io_uring instances that don't use IORING_SETUP_SQPOLL.
 * For SQPOLL, only the single threaded io_sq_thread() will
 * manipulate the list, hence no extra locking is needed there.
 */
struct list_head poll_list;
2019-12-05 05:56:40 +03:00
struct hlist_head *cancel_hash;
unsigned cancel_hash_bits;
2019-12-19 22:06:02 +03:00
bool poll_multi_file;
2019-01-19 08:56:34 +03:00
2019-10-24 21:39:47 +03:00
spinlock_t inflight_lock;
struct list_head inflight_list;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
} ____cacheline_aligned_in_smp;
};
2019-03-13 21:39:28 +03:00
/*
 * First field must be the file pointer in all the
 * iocb unions! See also 'struct kiocb' in <linux/fs.h>
 */
2019-01-17 19:41:58 +03:00
struct io_poll_iocb {
struct file *file;
2019-12-18 04:40:57 +03:00
union {
struct wait_queue_head *head;
u64 addr;
};
2019-01-17 19:41:58 +03:00
__poll_t events;
io_uring: fix poll races
This is a straight port of Al's fix for the aio poll implementation,
since the io_uring version is heavily based on that. The below
description is almost straight from that patch, just modified to
fit the io_uring situation.
io_poll() has to cope with several unpleasant problems:
* requests that might stay around indefinitely need to
be made visible for io_cancel(2); that must not be done to
a request already completed, though.
* in cases when ->poll() has placed us on a waitqueue,
wakeup might have happened (and request completed) before ->poll()
returns.
* worse, in some early wakeup cases request might end
up re-added into the queue later - we can't treat "woken up and
currently not in the queue" as "it's not going to stick around
indefinitely"
* ... moreover, ->poll() might have decided not to
put it on any queues to start with, and that needs to be distinguished
from the previous case
* ->poll() might have tried to put us on more than one queue.
Only the first will succeed for io poll, so we might end up missing
wakeups. OTOH, we might very well notice that only after the
wakeup hits and request gets completed (all before ->poll() gets
around to the second poll_wait()). In that case it's too late to
decide that we have an error.
req->woken was an attempt to deal with that. Unfortunately, it was
broken. What we need to keep track of is not that wakeup has happened -
the thing might come back after that. It's that async reference is
already gone and won't come back, so we can't (and needn't) put the
request on the list of cancellables.
The easiest case is "request hadn't been put on any waitqueues"; we
can tell by seeing NULL apt.head, and in that case there won't be
anything async. We should either complete the request ourselves
(if vfs_poll() reports anything of interest) or return an error.
In all other cases we get exclusion with wakeups by grabbing the
queue lock.
If request is currently on queue and we have something interesting
from vfs_poll(), we can steal it and complete the request ourselves.
If it's on queue and vfs_poll() has not reported anything interesting,
we either put it on the cancellable list, or, if we know that it
hadn't been put on all queues ->poll() wanted it on, we steal it and
return an error.
If it's _not_ on queue, it's either been already dealt with (in which
case we do nothing), or there's io_poll_complete_work() about to be
executed. In that case we either put it on the cancellable list,
or, if we know it hadn't been put on all queues ->poll() wanted it on,
simulate what cancel would've done.
Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-03-13 00:48:16 +03:00
bool done;
2019-01-17 19:41:58 +03:00
bool canceled;
2019-12-10 03:52:20 +03:00
struct wait_queue_entry wait;
2019-01-17 19:41:58 +03:00
};
2019-12-12 00:02:38 +03:00
struct io_close {
struct file *file;
struct file *put_file;
int fd;
};
2019-11-15 18:49:11 +03:00
struct io_timeout_data {
struct io_kiocb *req;
struct hrtimer timer;
struct timespec64 ts;
enum hrtimer_mode mode;
2019-11-25 23:14:38 +03:00
u32 seq_offset;
2019-11-15 18:49:11 +03:00
};
2019-12-16 21:55:28 +03:00
struct io_accept {
struct file *file;
struct sockaddr __user *addr;
int __user *addr_len;
int flags;
};
struct io_sync {
struct file *file;
loff_t len;
loff_t off;
int flags;
2019-12-10 20:38:56 +03:00
int mode;
2019-12-16 21:55:28 +03:00
};
2019-12-18 04:45:56 +03:00
struct io_cancel {
struct file *file;
u64 addr;
};
2019-12-18 04:50:29 +03:00
struct io_timeout {
struct file *file;
u64 addr;
int flags;
2019-12-20 19:02:01 +03:00
unsigned count;
2019-12-18 04:50:29 +03:00
};
2019-12-20 18:45:55 +03:00
struct io_rw {
/* NOTE: kiocb has the file as the first member, so don't do it here */
struct kiocb kiocb;
u64 addr;
u64 len;
};
2019-12-20 18:51:52 +03:00
struct io_connect {
struct file *file;
struct sockaddr __user *addr;
int addr_len;
};
2019-12-20 18:58:21 +03:00
struct io_sr_msg {
struct file *file;
2020-01-05 06:19:44 +03:00
union {
struct user_msghdr __user *msg;
void __user *buf;
};
2019-12-20 18:58:21 +03:00
int msg_flags;
2020-01-05 06:19:44 +03:00
size_t len;
2019-12-20 18:58:21 +03:00
};
2019-12-11 21:20:36 +03:00
struct io_open {
struct file *file;
int dfd;
2019-12-14 07:18:10 +03:00
union {
unsigned mask;
};
2019-12-11 21:20:36 +03:00
struct filename *filename;
2019-12-14 07:18:10 +03:00
struct statx __user *buffer;
2020-01-09 03:41:21 +03:00
struct open_how how;
2019-12-11 21:20:36 +03:00
};
2019-12-09 21:22:50 +03:00
struct io_files_update {
struct file *file;
u64 arg;
u32 nr_args;
u32 offset;
};
2019-12-26 08:03:45 +03:00
struct io_fadvise {
struct file *file;
u64 offset;
u32 len;
u32 advice;
};
2019-12-26 08:18:28 +03:00
struct io_madvise {
struct file *file;
u64 addr;
u32 len;
u32 advice;
};
2019-12-03 02:28:46 +03:00
struct io_async_connect {
struct sockaddr_storage address;
};
2019-12-03 04:50:25 +03:00
struct io_async_msghdr {
struct iovec fast_iov[UIO_FASTIOV];
struct iovec *iov;
struct sockaddr __user *uaddr;
struct msghdr msg;
};
2019-12-02 21:03:47 +03:00
struct io_async_rw {
struct iovec fast_iov[UIO_FASTIOV];
struct iovec *iov;
ssize_t nr_segs;
ssize_t size;
};
2019-12-11 21:20:36 +03:00
struct io_async_open {
struct filename *filename;
};
2019-12-02 20:33:15 +03:00
struct io_async_ctx {
2019-12-02 21:03:47 +03:00
union {
struct io_async_rw rw;
2019-12-03 04:50:25 +03:00
struct io_async_msghdr msg;
2019-12-03 02:28:46 +03:00
struct io_async_connect connect;
2019-12-04 21:08:05 +03:00
struct io_timeout_data timeout;
2019-12-11 21:20:36 +03:00
struct io_async_open open;
2019-12-02 21:03:47 +03:00
};
2019-12-02 20:33:15 +03:00
};
2020-01-18 20:22:41 +03:00
enum {
REQ_F_FIXED_FILE_BIT = IOSQE_FIXED_FILE_BIT,
REQ_F_IO_DRAIN_BIT = IOSQE_IO_DRAIN_BIT,
REQ_F_LINK_BIT = IOSQE_IO_LINK_BIT,
REQ_F_HARDLINK_BIT = IOSQE_IO_HARDLINK_BIT,
REQ_F_FORCE_ASYNC_BIT = IOSQE_ASYNC_BIT,
REQ_F_LINK_NEXT_BIT,
REQ_F_FAIL_LINK_BIT,
REQ_F_INFLIGHT_BIT,
REQ_F_CUR_POS_BIT,
REQ_F_NOWAIT_BIT,
REQ_F_IOPOLL_COMPLETED_BIT,
REQ_F_LINK_TIMEOUT_BIT,
REQ_F_TIMEOUT_BIT,
REQ_F_ISREG_BIT,
REQ_F_MUST_PUNT_BIT,
REQ_F_TIMEOUT_NOSEQ_BIT,
REQ_F_COMP_LOCKED_BIT,
};
enum {
/* ctx owns file */
REQ_F_FIXED_FILE = BIT(REQ_F_FIXED_FILE_BIT),
/* drain existing IO first */
REQ_F_IO_DRAIN = BIT(REQ_F_IO_DRAIN_BIT),
/* linked sqes */
REQ_F_LINK = BIT(REQ_F_LINK_BIT),
/* doesn't sever on completion < 0 */
REQ_F_HARDLINK = BIT(REQ_F_HARDLINK_BIT),
/* IOSQE_ASYNC */
REQ_F_FORCE_ASYNC = BIT(REQ_F_FORCE_ASYNC_BIT),
/* already grabbed next link */
REQ_F_LINK_NEXT = BIT(REQ_F_LINK_NEXT_BIT),
/* fail rest of links */
REQ_F_FAIL_LINK = BIT(REQ_F_FAIL_LINK_BIT),
/* on inflight list */
REQ_F_INFLIGHT = BIT(REQ_F_INFLIGHT_BIT),
/* read/write uses file position */
REQ_F_CUR_POS = BIT(REQ_F_CUR_POS_BIT),
/* must not punt to workers */
REQ_F_NOWAIT = BIT(REQ_F_NOWAIT_BIT),
/* polled IO has completed */
REQ_F_IOPOLL_COMPLETED = BIT(REQ_F_IOPOLL_COMPLETED_BIT),
/* has linked timeout */
REQ_F_LINK_TIMEOUT = BIT(REQ_F_LINK_TIMEOUT_BIT),
/* timeout request */
REQ_F_TIMEOUT = BIT(REQ_F_TIMEOUT_BIT),
/* regular file */
REQ_F_ISREG = BIT(REQ_F_ISREG_BIT),
/* must be punted even for NONBLOCK */
REQ_F_MUST_PUNT = BIT(REQ_F_MUST_PUNT_BIT),
/* no timeout sequence */
REQ_F_TIMEOUT_NOSEQ = BIT(REQ_F_TIMEOUT_NOSEQ_BIT),
/* completion under lock */
REQ_F_COMP_LOCKED = BIT(REQ_F_COMP_LOCKED_BIT),
};
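A hedged illustration of why the first REQ_F_* bits reuse the IOSQE_* bit numbers (the helper below is not taken from the file, it only demonstrates the consequence): user-visible sqe flags can be carried into the request's flag word with a plain mask, no per-flag translation.
/* illustration only: mask sqe->flags straight into request flags */
static inline unsigned int io_sqe_flags_to_req_flags(unsigned int sqe_flags)
{
	return sqe_flags & (REQ_F_FIXED_FILE | REQ_F_IO_DRAIN | REQ_F_LINK |
			    REQ_F_HARDLINK | REQ_F_FORCE_ASYNC);
}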
2019-03-13 21:39:28 +03:00
/*
 * NOTE! Each of the iocb union members has the file pointer
 * as the first entry in their struct definition. So you can
 * access the file pointer through any of the sub-structs,
 * or directly as just 'ki_filp' in this struct.
 */
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
struct io_kiocb {
2019-01-17 19:41:58 +03:00
union {
2019-03-13 21:39:28 +03:00
struct file *file;
2019-12-20 18:45:55 +03:00
struct io_rw rw;
2019-01-17 19:41:58 +03:00
struct io_poll_iocb poll;
2019-12-16 21:55:28 +03:00
struct io_accept accept;
struct io_sync sync;
2019-12-18 04:45:56 +03:00
struct io_cancel cancel;
2019-12-18 04:50:29 +03:00
struct io_timeout timeout;
2019-12-20 18:51:52 +03:00
struct io_connect connect;
2019-12-20 18:58:21 +03:00
struct io_sr_msg sr_msg;
2019-12-11 21:20:36 +03:00
struct io_open open;
2019-12-12 00:02:38 +03:00
struct io_close close;
2019-12-09 21:22:50 +03:00
struct io_files_update files_update;
2019-12-26 08:03:45 +03:00
struct io_fadvise fadvise;
2019-12-26 08:18:28 +03:00
struct io_madvise madvise;
2019-01-17 19:41:58 +03:00
};
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
2019-12-02 20:33:15 +03:00
struct io_async_ctx *io;
2020-01-17 04:45:59 +03:00
/*
 * llist_node is only used for poll deferred completions
 */
struct llist_node llist_node;
2019-11-25 23:14:39 +03:00
bool has_user;
bool in_async;
bool needs_fixed_file;
2019-12-18 05:53:05 +03:00
u8 opcode;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
struct io_ring_ctx *ctx;
2019-11-14 22:09:58 +03:00
union {
struct list_head list;
2019-12-05 05:56:40 +03:00
struct hlist_node hash_node;
2019-11-14 22:09:58 +03:00
};
2019-05-11 01:07:28 +03:00
struct list_head link_list;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
unsigned int flags;
2019-01-17 18:39:48 +03:00
refcount_t refs;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
u64 user_data;
2019-05-11 01:07:28 +03:00
u32 result;
2019-04-07 06:51:27 +03:00
u32 sequence;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
2019-10-24 21:39:47 +03:00
struct list_head inflight_entry;
2019-10-24 16:25:42 +03:00
struct io_wq_work work;
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
};
#define IO_PLUG_THRESHOLD 2
2019-01-09 18:59:42 +03:00
#define IO_IOPOLL_BATCH 8
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
2019-01-09 19:06:50 +03:00
struct io_submit_state {
struct blk_plug plug;
2019-01-09 19:10:43 +03:00
/*
 * io_kiocb alloc cache
 */
void *reqs[IO_IOPOLL_BATCH];
unsigned int free_reqs;
unsigned int cur_req;
2019-01-09 19:06:50 +03:00
/*
 * File reference cache
 */
struct file *file;
unsigned int fd;
unsigned int has_refs;
unsigned int used_refs;
unsigned int ios_left;
};
2019-12-18 19:50:26 +03:00
struct io_op_def {
/* needs req->io allocated for deferral/async */
unsigned async_ctx : 1;
/* needs current->mm setup, does mm access */
unsigned needs_mm : 1;
/* needs req->file assigned */
unsigned needs_file : 1;
/* needs req->file assigned IFF fd is >= 0 */
unsigned fd_non_neg : 1;
/* hash wq insertion if file is a regular file */
unsigned hash_reg_file : 1;
/* unbound wq insertion if file is a non-regular file */
unsigned unbound_nonreg_file : 1;
2020-01-17 01:36:52 +03:00
/* opcode is not supported by this kernel */
unsigned not_supported : 1;
2019-12-18 19:50:26 +03:00
};
static const struct io_op_def io_op_defs[] = {
2020-01-18 21:35:38 +03:00
[IORING_OP_NOP] = {},
[IORING_OP_READV] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_WRITEV] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
.needs_file = 1,
.hash_reg_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_FSYNC] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_READ_FIXED] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_WRITE_FIXED] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
.hash_reg_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_POLL_ADD] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_POLL_REMOVE] = {},
[IORING_OP_SYNC_FILE_RANGE] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_SENDMSG] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_RECVMSG] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_TIMEOUT] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_TIMEOUT_REMOVE] = {},
[IORING_OP_ACCEPT] = {
2019-12-18 19:50:26 +03:00
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_ASYNC_CANCEL] = {},
[IORING_OP_LINK_TIMEOUT] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_CONNECT] = {
2019-12-18 19:50:26 +03:00
.async_ctx = 1,
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_FALLOCATE] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_OPENAT] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
.fd_non_neg = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_CLOSE] = {
2019-12-18 19:50:26 +03:00
.needs_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_FILES_UPDATE] = {
2019-12-18 19:50:26 +03:00
.needs_mm = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_STATX] = {
2019-12-18 19:50:26 +03:00
.needs_mm = 1,
.needs_file = 1,
.fd_non_neg = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_READ] = {
2019-12-23 01:19:35 +03:00
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_WRITE] = {
2019-12-23 01:19:35 +03:00
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_FADVISE] = {
2019-12-26 08:03:45 +03:00
.needs_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_MADVISE] = {
2019-12-26 08:18:28 +03:00
.needs_mm = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_SEND] = {
2020-01-05 06:19:44 +03:00
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_RECV] = {
2020-01-05 06:19:44 +03:00
.needs_mm = 1,
.needs_file = 1,
.unbound_nonreg_file = 1,
},
2020-01-18 21:35:38 +03:00
[IORING_OP_OPENAT2] = {
2020-01-09 03:59:24 +03:00
.needs_file = 1,
.fd_non_neg = 1,
},
2019-12-18 19:50:26 +03:00
};
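A hedged illustration (not the file's actual helpers) of how a per-opcode table like io_op_defs[] gets consulted, so generic code can query opcode properties instead of open-coding switch statements:
/* illustration only: bounds-check the opcode, then read the table */
static inline bool io_opcode_supported(u8 opcode)
{
	return opcode < ARRAY_SIZE(io_op_defs) &&
	       !io_op_defs[opcode].not_supported;
}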
2019-10-24 16:25:42 +03:00
static void io_wq_submit_work(struct io_wq_work **workptr);
2019-11-07 01:21:34 +03:00
static void io_cqring_fill_event(struct io_kiocb *req, long res);
2019-11-08 18:50:36 +03:00
static void io_put_req(struct io_kiocb *req);
2019-11-15 08:39:04 +03:00
static void __io_double_put_req(struct io_kiocb *req);
2019-11-15 05:39:52 +03:00
static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req);
static void io_queue_linked_timeout(struct io_kiocb *req);
2019-12-09 21:22:50 +03:00
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
				 struct io_uring_files_update *ip,
				 unsigned nr_args);
2019-04-07 06:51:27 +03:00
Add io_uring IO interface
2019-01-07 20:46:33 +03:00
static struct kmem_cache *req_cachep;

static const struct file_operations io_uring_fops;

struct sock *io_uring_get_socket(struct file *file)
{
#if defined(CONFIG_UNIX)
	if (file->f_op == &io_uring_fops) {
		struct io_ring_ctx *ctx = file->private_data;

		return ctx->ring_sock->sk;
	}
#endif
	return NULL;
}
EXPORT_SYMBOL(io_uring_get_socket);

static void io_ring_ctx_ref_free(struct percpu_ref *ref)
{
	struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);

2019-11-08 04:27:42 +03:00
	complete(&ctx->completions[0]);
2019-01-07 20:46:33 +03:00
}
static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
{
	struct io_ring_ctx *ctx;
2019-12-05 05:56:40 +03:00
	int hash_bits;
2019-01-07 20:46:33 +03:00
	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;
2019-11-08 18:52:53 +03:00
	ctx->fallback_req = kmem_cache_alloc(req_cachep, GFP_KERNEL);
	if (!ctx->fallback_req)
		goto err;
2019-11-08 04:27:42 +03:00
	ctx->completions = kmalloc(2 * sizeof(struct completion), GFP_KERNEL);
	if (!ctx->completions)
		goto err;
2019-12-05 05:56:40 +03:00
	/*
	 * Use 5 bits less than the max cq entries, that should give us around
	 * 32 entries per hash list if totally full and uniformly spread.
	 */
	hash_bits = ilog2(p->cq_entries);
	hash_bits -= 5;
	if (hash_bits <= 0)
		hash_bits = 1;
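	/*
	 * Worked example of the sizing above: with p->cq_entries == 4096,
	 * ilog2() gives 12, hash_bits ends up as 7, i.e. 128 hash buckets
	 * and 4096 / 128 == 32 entries per list when completely full.
	 */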
	ctx->cancel_hash_bits = hash_bits;
	ctx->cancel_hash = kmalloc((1U << hash_bits) * sizeof(struct hlist_head),
					GFP_KERNEL);
	if (!ctx->cancel_hash)
		goto err;
	__hash_init(ctx->cancel_hash, 1U << hash_bits);
2019-05-07 20:01:48 +03:00
	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
2019-11-08 04:27:42 +03:00
			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
		goto err;
2019-01-07 20:46:33 +03:00
	ctx->flags = p->flags;
	init_waitqueue_head(&ctx->cq_wait);
io_uring: add support for backlogged CQ ring
Currently we drop completion events, if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them, they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 21:31:17 +03:00
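From userspace, the backpressure described above shows up as an -EBUSY return
from io_uring_enter(2). A minimal sketch of how a submitter might react
(assumptions: __NR_io_uring_enter is available via <sys/syscall.h>, and
reap_cqes() is a hypothetical application helper that drains the mmap'ed CQ
ring):

	#include <errno.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	void reap_cqes(void);	/* hypothetical: drain the mmap'ed CQ ring */

	static int submit_with_backpressure(int ring_fd, unsigned int to_submit)
	{
		for (;;) {
			int ret = syscall(__NR_io_uring_enter, ring_fd,
					  to_submit, 0, 0, NULL, 0);
			if (ret >= 0)
				return ret;	/* number of SQEs consumed */
			if (errno != EBUSY)
				return -errno;
			/*
			 * Backlogged CQEs were flushed into the ring if there
			 * was room; reap them before submitting again.
			 */
			reap_cqes();
		}
	}

Per the note above, the backlogged events can be reaped directly from the CQ
ring without another system call.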
	INIT_LIST_HEAD(&ctx->cq_overflow_list);
2019-11-08 04:27:42 +03:00
	init_completion(&ctx->completions[0]);
	init_completion(&ctx->completions[1]);
2020-01-28 20:04:42 +03:00
	idr_init(&ctx->personality_idr);
2019-01-07 20:46:33 +03:00
	mutex_init(&ctx->uring_lock);
	init_waitqueue_head(&ctx->wait);
	spin_lock_init(&ctx->completion_lock);
2019-12-19 22:06:02 +03:00
	init_llist_head(&ctx->poll_llist);
2019-01-09 18:59:42 +03:00
	INIT_LIST_HEAD(&ctx->poll_list);
2019-04-07 06:51:27 +03:00
	INIT_LIST_HEAD(&ctx->defer_list);
2019-09-17 21:26:57 +03:00
	INIT_LIST_HEAD(&ctx->timeout_list);
2019-10-24 21:39:47 +03:00
	init_waitqueue_head(&ctx->inflight_wait);
	spin_lock_init(&ctx->inflight_lock);
	INIT_LIST_HEAD(&ctx->inflight_list);
2019-01-07 20:46:33 +03:00
	return ctx;
2019-11-08 04:27:42 +03:00
err:
2019-11-08 18:52:53 +03:00
	if (ctx->fallback_req)
		kmem_cache_free(req_cachep, ctx->fallback_req);
2019-11-08 04:27:42 +03:00
	kfree(ctx->completions);
2019-12-05 05:56:40 +03:00
	kfree(ctx->cancel_hash);
2019-11-08 04:27:42 +03:00
	kfree(ctx);
	return NULL;
2019-01-07 20:46:33 +03:00
}
2019-11-13 13:06:25 +03:00
static inline bool __req_need_defer(struct io_kiocb *req)
2019-10-11 06:42:58 +03:00
{
2019-11-08 18:09:12 +03:00
	struct io_ring_ctx *ctx = req->ctx;
2019-10-25 19:04:25 +03:00
	return req->sequence != ctx->cached_cq_tail + ctx->cached_sq_dropped
			+ atomic_read(&ctx->cached_cq_overflow);
2019-10-11 06:42:58 +03:00
}
2019-11-13 13:06:25 +03:00
static inline bool req_need_defer(struct io_kiocb *req)
2019-04-07 06:51:27 +03:00
{
2020-01-18 01:22:30 +03:00
	if (unlikely(req->flags & REQ_F_IO_DRAIN))
2019-11-13 13:06:25 +03:00
		return __req_need_defer(req);
2019-04-07 06:51:27 +03:00
2019-11-13 13:06:25 +03:00
	return false;
2019-04-07 06:51:27 +03:00
}
2019-10-11 06:42:58 +03:00
static struct io_kiocb *io_get_deferred_req(struct io_ring_ctx *ctx)
2019-04-07 06:51:27 +03:00
{
	struct io_kiocb *req;
2019-10-11 06:42:58 +03:00
	req = list_first_entry_or_null(&ctx->defer_list, struct io_kiocb, list);
2019-11-13 13:06:25 +03:00
	if (req && !req_need_defer(req)) {
2019-04-07 06:51:27 +03:00
		list_del_init(&req->list);
		return req;
	}
	return NULL;
}
2019-09-17 21:26:57 +03:00
static struct io_kiocb *io_get_timeout_req(struct io_ring_ctx *ctx)
{
2019-10-11 06:42:58 +03:00
	struct io_kiocb *req;
	req = list_first_entry_or_null(&ctx->timeout_list, struct io_kiocb, list);
2019-11-12 09:34:31 +03:00
	if (req) {
		if (req->flags & REQ_F_TIMEOUT_NOSEQ)
			return NULL;
Merge tag 'for-5.5/io_uring-20191121' of git://git.kernel.dk/linux-block
Pull io_uring updates from Jens Axboe:
"A lot of stuff has been going on this cycle, with improving the
support for networked IO (and hence unbounded request completion
times) being one of the major themes. There's been a set of fixes done
this week, I'll send those out as well once we're certain we're fully
happy with them.
This contains:
- Unification of the "normal" submit path and the SQPOLL path (Pavel)
- Support for sparse (and bigger) file sets, and updating of those
file sets without needing to unregister/register again.
- Independently sized CQ ring, instead of just making it always 2x
the SQ ring size. This makes it more flexible for networked
applications.
- Support for overflowed CQ ring, never dropping events but providing
backpressure on submits.
- Add support for absolute timeouts, not just relative ones.
- Support for generic cancellations. This divorces io_uring from
workqueues as well, which additionally gets us one step closer to
generic async system call support.
- With cancellations, we can support grabbing the process file table
as well, just like we do mm context. This allows support for system
calls that create file descriptors, like accept4() support that's
built on top of that.
- Support for io_uring tracing (Dmitrii)
- Support for linked timeouts. These abort an operation if it isn't
completed by the time noted in the linked timeout.
- Speedup tracking of poll requests
- Various cleanups making the code easier to follow (Jackie, Pavel,
Bob, YueHaibing, me)
- Update MAINTAINERS with new io_uring list"
* tag 'for-5.5/io_uring-20191121' of git://git.kernel.dk/linux-block: (64 commits)
io_uring: make POLL_ADD/POLL_REMOVE scale better
io-wq: remove now redundant struct io_wq_nulls_list
io_uring: Fix getting file for non-fd opcodes
io_uring: introduce req_need_defer()
io_uring: clean up io_uring_cancel_files()
io-wq: ensure free/busy list browsing see all items
io-wq: ensure we have a stable view of ->cur_work for cancellations
io_wq: add get/put_work handlers to io_wq_create()
io_uring: check for validity of ->rings in teardown
io_uring: fix potential deadlock in io_poll_wake()
io_uring: use correct "is IO worker" helper
io_uring: fix -ENOENT issue with linked timer with short timeout
io_uring: don't do flush cancel under inflight_lock
io_uring: flag SQPOLL busy condition to userspace
io_uring: make ASYNC_CANCEL work with poll and timeout
io_uring: provide fallback request for OOM situations
io_uring: convert accept4() -ERESTARTSYS into -EINTR
io_uring: fix error clear of ->file_table in io_sqe_files_register()
io_uring: separate the io_free_req and io_free_req_find_next interface
io_uring: keep io_put_req only responsible for release and put req
...
2019-11-25 21:40:27 +03:00
		if (!__req_need_defer(req)) {
2019-11-12 09:34:31 +03:00
			list_del_init(&req->list);
			return req;
		}
2019-10-11 06:42:58 +03:00
	}
	return NULL;
2019-09-17 21:26:57 +03:00
}
2019-04-07 06:51:27 +03:00
static void __io_commit_cqring(struct io_ring_ctx *ctx)
2019-01-07 20:46:33 +03:00
{
2019-08-26 20:23:46 +03:00
	struct io_rings *rings = ctx->rings;
2019-01-07 20:46:33 +03:00
2020-01-17 03:52:46 +03:00
	/* order cqe stores with ring update */
	smp_store_release(&rings->cq.tail, ctx->cached_cq_tail);
2019-01-07 20:46:33 +03:00
2020-01-17 03:52:46 +03:00
	if (wq_has_sleeper(&ctx->cq_wait)) {
		wake_up_interruptible(&ctx->cq_wait);
		kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
2019-01-07 20:46:33 +03:00
}
}
2020-01-28 02:34:48 +03:00
static inline void io_req_work_grab_env(struct io_kiocb *req,
					const struct io_op_def *def)
{
	if (!req->work.mm && def->needs_mm) {
		mmgrab(current->mm);
		req->work.mm = current->mm;
	}
	if (!req->work.creds)
		req->work.creds = get_current_cred();
}

static inline void io_req_work_drop_env(struct io_kiocb *req)
{
	if (req->work.mm) {
		mmdrop(req->work.mm);
		req->work.mm = NULL;
	}
	if (req->work.creds) {
		put_cred(req->work.creds);
		req->work.creds = NULL;
	}
}
2019-11-15 05:39:52 +03:00
static inline bool io_prep_async_work(struct io_kiocb *req,
				      struct io_kiocb **link)
2019-09-10 18:13:05 +03:00
{
2019-12-18 19:50:26 +03:00
	const struct io_op_def *def = &io_op_defs[req->opcode];
2019-10-24 16:25:42 +03:00
	bool do_hashed = false;
2019-09-10 18:15:04 +03:00
2019-12-18 19:50:26 +03:00
	if (req->flags & REQ_F_ISREG) {
		if (def->hash_reg_file)
2019-12-20 04:24:38 +03:00
			do_hashed = true;
2019-12-18 19:50:26 +03:00
	} else {
		if (def->unbound_nonreg_file)
2019-12-20 04:24:38 +03:00
			req->work.flags |= IO_WQ_WORK_UNBOUND;
2019-09-10 18:15:04 +03:00
	}
2020-01-28 02:34:48 +03:00
	io_req_work_grab_env(req, def);
2019-09-10 18:15:04 +03:00
2019-11-15 05:39:52 +03:00
	*link = io_prep_linked_timeout(req);
2019-10-24 16:25:42 +03:00
	return do_hashed;
}
2019-11-08 18:09:12 +03:00
static inline void io_queue_async_work(struct io_kiocb *req)
2019-10-24 16:25:42 +03:00
{
2019-11-08 18:09:12 +03:00
	struct io_ring_ctx *ctx = req->ctx;
2019-11-15 05:39:52 +03:00
	struct io_kiocb *link;
	bool do_hashed;

	do_hashed = io_prep_async_work(req, &link);
2019-10-24 16:25:42 +03:00
	trace_io_uring_queue_async_work(ctx, do_hashed, req, &req->work,
					req->flags);
	if (!do_hashed) {
		io_wq_enqueue(ctx->io_wq, &req->work);
	} else {
		io_wq_enqueue_hashed(ctx->io_wq, &req->work,
					file_inode(req->file));
	}
2019-11-15 05:39:52 +03:00
	if (link)
		io_queue_linked_timeout(link);
2019-09-10 18:13:05 +03:00
}
2019-09-17 21:26:57 +03:00
static void io_kill_timeout(struct io_kiocb *req)
{
	int ret;

2019-12-04 21:08:05 +03:00
	ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
2019-09-17 21:26:57 +03:00
	if (ret != -1) {
		atomic_inc(&req->ctx->cq_timeouts);
2019-10-29 21:34:10 +03:00
		list_del_init(&req->list);
2019-11-07 01:21:34 +03:00
		io_cqring_fill_event(req, 0);
2019-11-08 18:50:36 +03:00
		io_put_req(req);
2019-09-17 21:26:57 +03:00
	}
}
static void io_kill_timeouts(struct io_ring_ctx *ctx)
{
	struct io_kiocb *req, *tmp;

	spin_lock_irq(&ctx->completion_lock);
	list_for_each_entry_safe(req, tmp, &ctx->timeout_list, list)
		io_kill_timeout(req);
	spin_unlock_irq(&ctx->completion_lock);
}
2019-04-07 06:51:27 +03:00
static void io_commit_cqring(struct io_ring_ctx *ctx)
{
	struct io_kiocb *req;

2019-09-17 21:26:57 +03:00
	while ((req = io_get_timeout_req(ctx)) != NULL)
		io_kill_timeout(req);
2019-04-07 06:51:27 +03:00
	__io_commit_cqring(ctx);
2020-01-18 01:22:30 +03:00
	while ((req = io_get_deferred_req(ctx)) != NULL)
2019-11-08 18:09:12 +03:00
		io_queue_async_work(req);
2019-04-07 06:51:27 +03:00
}
2019-01-07 20:46:33 +03:00
static struct io_uring_cqe *io_get_cqring(struct io_ring_ctx *ctx)
{
2019-08-26 20:23:46 +03:00
	struct io_rings *rings = ctx->rings;
2019-01-07 20:46:33 +03:00
	unsigned tail;

	tail = ctx->cached_cq_tail;
2019-04-25 00:54:18 +03:00
	/*
	 * writes to the cq entry need to come after reading head; the
	 * control dependency is enough as we're using WRITE_ONCE to
	 * fill the cq entry
	 */
2019-08-26 20:23:46 +03:00
	if (tail - READ_ONCE(rings->cq.head) == rings->cq_ring_entries)
2019-01-07 20:46:33 +03:00
		return NULL;

	ctx->cached_cq_tail++;
2019-08-26 20:23:46 +03:00
	return &rings->cqes[tail & ctx->cq_mask];
2019-01-07 20:46:33 +03:00
}
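On the application side, the mirror-image ordering applies when reaping
completions: load the CQ tail with acquire semantics (pairing with the
kernel's smp_store_release() of cq.tail above), read the entries, then
publish the new head with a release store. A minimal C11 sketch, not part of
this file (assumptions: cq_head/cq_tail point at the head and tail words of
the mmap'ed CQ ring, cqes at the CQE array, and cq_mask is the CQ ring mask
exposed at setup time):

	#include <stdatomic.h>
	#include <linux/io_uring.h>

	static unsigned int drain_cq_ring(_Atomic unsigned int *cq_head,
					  _Atomic unsigned int *cq_tail,
					  struct io_uring_cqe *cqes,
					  unsigned int cq_mask)
	{
		unsigned int head = atomic_load_explicit(cq_head, memory_order_relaxed);
		/* acquire pairs with the kernel's release store of cq.tail */
		unsigned int tail = atomic_load_explicit(cq_tail, memory_order_acquire);
		unsigned int seen = 0;

		while (head != tail) {
			struct io_uring_cqe *cqe = &cqes[head & cq_mask];

			/* cqe->user_data and cqe->res are valid to read here */
			(void)cqe;
			head++;
			seen++;
		}
		/* release: order the entry loads before the head update */
		atomic_store_explicit(cq_head, head, memory_order_release);
		return seen;
	}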
2020-01-08 21:04:00 +03:00
static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
{
	if (!ctx->eventfd_async)
		return true;
	return io_wq_current_is_worker() || in_interrupt();
}
2019-11-06 21:31:17 +03:00
static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
{
	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);
	if (waitqueue_active(&ctx->sqo_wait))
		wake_up(&ctx->sqo_wait);
2020-01-08 21:04:00 +03:00
	if (ctx->cq_ev_fd && io_should_trigger_evfd(ctx))
2019-11-06 21:31:17 +03:00
		eventfd_signal(ctx->cq_ev_fd, 1);
}

2019-11-22 07:01:26 +03:00
/* Returns true if there are no backlogged entries after the flush */
static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
2019-11-06 21:31:17 +03:00
{
	struct io_rings *rings = ctx->rings;
	struct io_uring_cqe *cqe;
	struct io_kiocb *req;
	unsigned long flags;
	LIST_HEAD(list);

	if (!force) {
		if (list_empty_careful(&ctx->cq_overflow_list))
2019-11-22 07:01:26 +03:00
			return true;
2019-11-06 21:31:17 +03:00
		if ((ctx->cached_cq_tail - READ_ONCE(rings->cq.head) ==
		    rings->cq_ring_entries))
2019-11-22 07:01:26 +03:00
			return false;
2019-11-06 21:31:17 +03:00
	}

	spin_lock_irqsave(&ctx->completion_lock, flags);

	/* if force is set, the ring is going away. always drop after that */
	if (force)
2020-01-08 21:01:46 +03:00
		ctx->cq_overflow_flushed = 1;
2019-11-22 07:01:26 +03:00
	cqe = NULL;
	/*
	 * Move as many backlogged CQEs into the ring as will fit; anything
	 * that still doesn't fit is only dropped (and counted as overflow)
	 * when force is set.
	 */
	while (!list_empty(&ctx->cq_overflow_list)) {
		cqe = io_get_cqring(ctx);
		if (!cqe && !force)
			break;

		req = list_first_entry(&ctx->cq_overflow_list, struct io_kiocb,
						list);
		list_move(&req->list, &list);
		if (cqe) {
			WRITE_ONCE(cqe->user_data, req->user_data);
			WRITE_ONCE(cqe->res, req->result);
			WRITE_ONCE(cqe->flags, 0);
		} else {
			WRITE_ONCE(ctx->rings->cq_overflow,
				atomic_inc_return(&ctx->cached_cq_overflow));
		}
	}

	io_commit_cqring(ctx);
2019-12-19 03:12:20 +03:00
	if (cqe) {
		clear_bit(0, &ctx->sq_check_overflow);
		clear_bit(0, &ctx->cq_check_overflow);
	}
	spin_unlock_irqrestore(&ctx->completion_lock, flags);
	io_cqring_ev_posted(ctx);

	while (!list_empty(&list)) {
		req = list_first_entry(&list, struct io_kiocb, list);
		list_del(&req->list);
2019-11-08 18:50:36 +03:00
		io_put_req(req);
	}

2019-11-22 07:01:26 +03:00
	return cqe != NULL;
}
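The back pressure scheme described in the commit message above is visible to
userspace as an -EBUSY return from io_uring_enter(2). Below is a minimal,
illustrative userspace sketch of coping with it, assuming liburing is
available and with unrelated error handling trimmed: on -EBUSY the
application reaps whatever completions are already sitting in the shared CQ
ring (no system call is needed for that) and then retries the submission, at
which point the kernel can flush its backlog.

/* Illustrative only: handle -EBUSY back pressure (assumes liburing). */
#include <liburing.h>
#include <errno.h>

static int submit_with_backpressure(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	unsigned head, seen;
	int ret;

	for (;;) {
		ret = io_uring_submit(ring);
		if (ret != -EBUSY)
			return ret;	/* submitted count (>= 0) or other error */

		/* Reap completions straight from the shared CQ ring. */
		seen = 0;
		io_uring_for_each_cqe(ring, head, cqe) {
			/* ... consume cqe->user_data / cqe->res here ... */
			seen++;
		}
		io_uring_cq_advance(ring, seen);
	}
}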
2019-11-07 01:21:34 +03:00
static void io_cqring_fill_event(struct io_kiocb *req, long res)
{
2019-11-07 01:21:34 +03:00
	struct io_ring_ctx *ctx = req->ctx;
	struct io_uring_cqe *cqe;

2019-11-07 01:21:34 +03:00
	trace_io_uring_complete(ctx, req->user_data, res);
2019-11-03 16:52:50 +03:00
	/*
	 * If we can't get a cq entry, userspace overflowed the
	 * submission (by quite a lot). Increment the overflow count in
	 * the ring.
	 */
	cqe = io_get_cqring(ctx);
	if (likely(cqe)) {
2019-11-07 01:21:34 +03:00
		WRITE_ONCE(cqe->user_data, req->user_data);
		WRITE_ONCE(cqe->res, res);
2019-05-14 05:58:29 +03:00
		WRITE_ONCE(cqe->flags, 0);
	} else if (ctx->cq_overflow_flushed) {
2019-10-25 19:04:25 +03:00
		WRITE_ONCE(ctx->rings->cq_overflow,
				atomic_inc_return(&ctx->cached_cq_overflow));
	} else {
2019-12-19 03:12:20 +03:00
		if (list_empty(&ctx->cq_overflow_list)) {
			set_bit(0, &ctx->sq_check_overflow);
			set_bit(0, &ctx->cq_check_overflow);
		}
		/* stash the completion on the overflow backlog for a later flush */
		refcount_inc(&req->refs);
		req->result = res;
		list_add_tail(&req->list, &ctx->cq_overflow_list);
	}
}
2019-11-07 01:21:34 +03:00
static void io_cqring_add_event(struct io_kiocb *req, long res)
{
2019-11-07 01:21:34 +03:00
	struct io_ring_ctx *ctx = req->ctx;
	unsigned long flags;

	spin_lock_irqsave(&ctx->completion_lock, flags);
2019-11-07 01:21:34 +03:00
	io_cqring_fill_event(req, res);
	io_commit_cqring(ctx);
	spin_unlock_irqrestore(&ctx->completion_lock, flags);
io_uring: fix poll races
This is a straight port of Al's fix for the aio poll implementation,
since the io_uring version is heavily based on that. The below
description is almost straight from that patch, just modified to
fit the io_uring situation.
io_poll() has to cope with several unpleasant problems:
* requests that might stay around indefinitely need to
be made visible for io_cancel(2); that must not be done to
a request already completed, though.
* in cases when ->poll() has placed us on a waitqueue,
wakeup might have happened (and request completed) before ->poll()
returns.
* worse, in some early wakeup cases request might end
up re-added into the queue later - we can't treat "woken up and
currently not in the queue" as "it's not going to stick around
indefinitely"
* ... moreover, ->poll() might have decided not to
put it on any queues to start with, and that needs to be distinguished
from the previous case
* ->poll() might have tried to put us on more than one queue.
Only the first will succeed for io poll, so we might end up missing
wakeups. OTOH, we might very well notice that only after the
wakeup hits and request gets completed (all before ->poll() gets
around to the second poll_wait()). In that case it's too late to
decide that we have an error.
req->woken was an attempt to deal with that. Unfortunately, it was
broken. What we need to keep track of is not that wakeup has happened -
the thing might come back after that. It's that async reference is
already gone and won't come back, so we can't (and needn't) put the
request on the list of cancellables.
The easiest case is "request hadn't been put on any waitqueues"; we
can tell by seeing NULL apt.head, and in that case there won't be
anything async. We should either complete the request ourselves
(if vfs_poll() reports anything of interest) or return an error.
In all other cases we get exclusion with wakeups by grabbing the
queue lock.
If request is currently on queue and we have something interesting
from vfs_poll(), we can steal it and complete the request ourselves.
If it's on queue and vfs_poll() has not reported anything interesting,
we either put it on the cancellable list, or, if we know that it
hadn't been put on all queues ->poll() wanted it on, we steal it and
return an error.
If it's _not_ on queue, it's either been already dealt with (in which
case we do nothing), or there's io_poll_complete_work() about to be
executed. In that case we either put it on the cancellable list,
or, if we know it hadn't been put on all queues ->poll() wanted it on,
simulate what cancel would've done.
Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-03-13 00:48:16 +03:00
	io_cqring_ev_posted(ctx);
}
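For context on the poll path that the "io_uring: fix poll races" note above
hardens, here is a minimal userspace sketch of issuing a single
IORING_OP_POLL_ADD request. It assumes liburing, and the helper name
wait_readable() is made up for illustration; it is not part of this file.

/* Illustrative only: one-shot poll for readability via io_uring (liburing). */
#include <liburing.h>
#include <poll.h>

static int wait_readable(struct io_uring *ring, int fd)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	sqe = io_uring_get_sqe(ring);
	if (!sqe)
		return -EBUSY;
	io_uring_prep_poll_add(sqe, fd, POLLIN);

	ret = io_uring_submit_and_wait(ring, 1);
	if (ret < 0)
		return ret;

	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	ret = cqe->res;		/* poll mask on success, -errno on failure */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}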
2019-11-08 18:52:53 +03:00
static inline bool io_is_fallback_req(struct io_kiocb *req)
{
	return req == (struct io_kiocb *)
			((unsigned long) req->ctx->fallback_req & ~1UL);
}

static struct io_kiocb *io_get_fallback_req(struct io_ring_ctx *ctx)
{
	struct io_kiocb *req;

	req = ctx->fallback_req;
	if (!test_and_set_bit_lock(0, (unsigned long *) ctx->fallback_req))
		return req;

	return NULL;
}
2019-01-09 19:10:43 +03:00
static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
				   struct io_submit_state *state)
{
2019-03-15 01:30:06 +03:00
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
	struct io_kiocb *req;

2019-01-09 19:10:43 +03:00
	if (!state) {
2019-03-15 01:30:06 +03:00
		req = kmem_cache_alloc(req_cachep, gfp);
2019-01-09 19:10:43 +03:00
		if (unlikely(!req))
2019-11-08 18:52:53 +03:00
			goto fallback;
2019-01-09 19:10:43 +03:00
	} else if (!state->free_reqs) {
		size_t sz;
		int ret;

		sz = min_t(size_t, state->ios_left, ARRAY_SIZE(state->reqs));
2019-03-15 01:30:06 +03:00
		ret = kmem_cache_alloc_bulk(req_cachep, gfp, sz, state->reqs);

		/*
		 * Bulk alloc is all-or-nothing. If we fail to get a batch,
		 * retry single alloc to be on the safe side.
		 */
		if (unlikely(ret <= 0)) {
			state->reqs[0] = kmem_cache_alloc(req_cachep, gfp);
			if (!state->reqs[0])
2019-11-08 18:52:53 +03:00
				goto fallback;
2019-03-15 01:30:06 +03:00
			ret = 1;
		}
2019-01-09 19:10:43 +03:00
		state->free_reqs = ret - 1;
		state->cur_req = 1;
		req = state->reqs[0];
	} else {
		req = state->reqs[state->cur_req];
		state->free_reqs--;
		state->cur_req++;
	}

2019-11-08 18:52:53 +03:00
got_it:
2019-12-02 20:33:15 +03:00
	req->io = NULL;
2019-06-21 19:20:18 +03:00
	req->file = NULL;
2019-01-09 19:10:43 +03:00
	req->ctx = ctx;
	req->flags = 0;
2019-03-12 19:16:44 +03:00
	/* one is dropped after submission, the other at completion */
	refcount_set(&req->refs, 2);
2019-05-11 01:07:28 +03:00
	req->result = 0;
2019-10-24 16:25:42 +03:00
	INIT_IO_WORK(&req->work, io_wq_submit_work);
2019-01-09 19:10:43 +03:00
	return req;
2019-11-08 18:52:53 +03:00
fallback:
	req = io_get_fallback_req(ctx);
	if (req)
		goto got_it;
2019-10-08 02:18:42 +03:00
	percpu_ref_put(&ctx->refs);
	return NULL;
}

2019-12-28 14:13:03 +03:00
static void __io_req_do_free(struct io_kiocb *req)
{
	if (likely(!io_is_fallback_req(req)))
		kmem_cache_free(req_cachep, req);
	else
		clear_bit_unlock(0, (unsigned long *) req->ctx->fallback_req);
}
2019-12-28 22:11:08 +03:00
static void __io_req_aux_free(struct io_kiocb *req)
{
2019-10-24 21:39:47 +03:00
	struct io_ring_ctx *ctx = req->ctx;

2020-01-07 17:22:44 +03:00
	kfree(req->io);
2019-12-09 21:22:50 +03:00
	if (req->file) {
		if (req->flags & REQ_F_FIXED_FILE)
			percpu_ref_put(&ctx->file_data->refs);
		else
			fput(req->file);
	}
2020-01-28 02:34:48 +03:00
	io_req_work_drop_env(req);
2019-12-28 22:11:08 +03:00
}

static void __io_free_req(struct io_kiocb *req)
{
	__io_req_aux_free(req);

2019-10-24 21:39:47 +03:00
	if (req->flags & REQ_F_INFLIGHT) {
2019-12-28 22:11:08 +03:00
		struct io_ring_ctx *ctx = req->ctx;
2019-10-24 21:39:47 +03:00
		unsigned long flags;

		spin_lock_irqsave(&ctx->inflight_lock, flags);
		list_del(&req->inflight_entry);
		if (waitqueue_active(&ctx->inflight_wait))
			wake_up(&ctx->inflight_wait);
		spin_unlock_irqrestore(&ctx->inflight_lock, flags);
	}
2019-12-28 14:13:03 +03:00
	percpu_ref_put(&req->ctx->refs);
	__io_req_do_free(req);
2019-03-12 19:16:44 +03:00
}
struct req_batch {
	void *reqs[IO_IOPOLL_BATCH];
	int to_free;
	int need_iter;
};

static void io_free_req_many(struct io_ring_ctx *ctx, struct req_batch *rb)
{
	int fixed_refs = rb->to_free;

	if (!rb->to_free)
		return;
	if (rb->need_iter) {
		int i, inflight = 0;
		unsigned long flags;

		fixed_refs = 0;
		for (i = 0; i < rb->to_free; i++) {
			struct io_kiocb *req = rb->reqs[i];

			if (req->flags & REQ_F_FIXED_FILE) {
				req->file = NULL;
				fixed_refs++;
			}
			if (req->flags & REQ_F_INFLIGHT)
				inflight++;
			__io_req_aux_free(req);
		}
		if (!inflight)
			goto do_free;

		spin_lock_irqsave(&ctx->inflight_lock, flags);
		for (i = 0; i < rb->to_free; i++) {
			struct io_kiocb *req = rb->reqs[i];

			if (req->flags & REQ_F_INFLIGHT) {
				list_del(&req->inflight_entry);
				if (!--inflight)
					break;
			}
		}
		spin_unlock_irqrestore(&ctx->inflight_lock, flags);

		if (waitqueue_active(&ctx->inflight_wait))
			wake_up(&ctx->inflight_wait);
	}
do_free:
	kmem_cache_free_bulk(req_cachep, rb->to_free, rb->reqs);
	if (fixed_refs)
		percpu_ref_put_many(&ctx->file_data->refs, fixed_refs);
	percpu_ref_put_many(&ctx->refs, rb->to_free);
	rb->to_free = rb->need_iter = 0;
}
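The batch above exists to amortize teardown cost: completion paths collect requests into a req_batch and the kmem_cache and percpu_ref releases then happen in bulk. As a rough userspace illustration of the same pattern (a sketch, not the kernel code; the names and batch size are made up):

#include <stdlib.h>

#define BATCH_SIZE 16

struct free_batch {
	void *items[BATCH_SIZE];
	int to_free;
};

/* Release everything collected so far in one pass instead of per-item calls. */
static void batch_flush(struct free_batch *b)
{
	for (int i = 0; i < b->to_free; i++)
		free(b->items[i]);
	b->to_free = 0;
}

/* Queue an item for freeing; only flush when the batch fills up. */
static void batch_add(struct free_batch *b, void *item)
{
	b->items[b->to_free++] = item;
	if (b->to_free == BATCH_SIZE)
		batch_flush(b);
}

int main(void)
{
	struct free_batch b = { .to_free = 0 };

	for (int i = 0; i < 100; i++)
		batch_add(&b, malloc(32));
	batch_flush(&b);	/* drain the partial final batch */
	return 0;
}

The kernel version additionally folds the fixed-file and ctx reference drops into the same flush, which is why io_req_multi_free() below only needs to stash the request pointer.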
static bool io_link_cancel_timeout(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	int ret;

	ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
	if (ret != -1) {
		io_cqring_fill_event(req, -ECANCELED);
		io_commit_cqring(ctx);
		req->flags &= ~REQ_F_LINK;
		io_put_req(req);
		return true;
	}

	return false;
}
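io_link_cancel_timeout() is the kernel half of IORING_OP_LINK_TIMEOUT: a timeout SQE linked directly behind another request cancels that request if it does not complete in time, and when the request finishes first the timeout itself is cancelled and completed with -ECANCELED, as above. A sketch of arming one from the submission side, assuming 'sqes' points at two free slots in the application's mmapped SQE array (the fd and the guarded opcode are illustrative; field usage follows the uapi header as I understand it):

#include <poll.h>
#include <string.h>
#include <linux/io_uring.h>
#include <linux/time_types.h>

/* sqes[1] cancels sqes[0] unless sqes[0] completes within *ts. */
static void prep_timed_poll(struct io_uring_sqe *sqes, int fd,
			    struct __kernel_timespec *ts)
{
	memset(&sqes[0], 0, sizeof(sqes[0]));
	sqes[0].opcode = IORING_OP_POLL_ADD;
	sqes[0].fd = fd;
	sqes[0].poll_events = POLLIN;
	sqes[0].flags = IOSQE_IO_LINK;		/* the timeout guards this SQE */
	sqes[0].user_data = 1;

	memset(&sqes[1], 0, sizeof(sqes[1]));
	sqes[1].opcode = IORING_OP_LINK_TIMEOUT;
	sqes[1].addr = (unsigned long)ts;	/* one __kernel_timespec */
	sqes[1].len = 1;
	sqes[1].user_data = 2;
}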
static void io_req_link_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
{
	struct io_ring_ctx *ctx = req->ctx;
	bool wake_ev = false;

	/* Already got next link */
	if (req->flags & REQ_F_LINK_NEXT)
		return;

	/*
	 * The list should never be empty when we are called here. But could
	 * potentially happen if the chain is messed up, check to be on the
	 * safe side.
	 */
	while (!list_empty(&req->link_list)) {
		struct io_kiocb *nxt = list_first_entry(&req->link_list,
						struct io_kiocb, link_list);

		if (unlikely((req->flags & REQ_F_LINK_TIMEOUT) &&
			     (nxt->flags & REQ_F_TIMEOUT))) {
			list_del_init(&nxt->link_list);
			wake_ev |= io_link_cancel_timeout(nxt);
			req->flags &= ~REQ_F_LINK_TIMEOUT;
			continue;
		}

		list_del_init(&req->link_list);
		if (!list_empty(&nxt->link_list))
			nxt->flags |= REQ_F_LINK;
		*nxtptr = nxt;
		break;
	}

	req->flags |= REQ_F_LINK_NEXT;
	if (wake_ev)
		io_cqring_ev_posted(ctx);
}
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and io
trace events, but some parts are hard to identify via that approach alone.
Making what happens inside io_uring more transparent is important for
reasoning about many aspects of it, hence this set of tracing events.
The events fall roughly into two categories:
* those that help to understand correctness (from both a kernel and an
  application point of view). E.g. ring creation, file registration, or
  waiting for an available CQE. The approach is to take a pointer to the
  original structure of interest (ring context, or request) and then find
  the relevant events. io_uring_queue_async_work also exposes a pointer to
  the work_struct, so corresponding workqueue events can be tracked down.
* those that provide performance-related information. Mostly these are
  events that change the flow of requests, e.g. whether an async work was
  queued, or delayed due to some dependencies. Another important case is
  how io_uring optimizations (e.g. registered files) are utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
/*
 * Called if REQ_F_LINK is set, and we fail the head request
 */
static void io_fail_links(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	unsigned long flags;

	spin_lock_irqsave(&ctx->completion_lock, flags);

	while (!list_empty(&req->link_list)) {
		struct io_kiocb *link = list_first_entry(&req->link_list,
						struct io_kiocb, link_list);

		list_del_init(&link->link_list);
		trace_io_uring_fail_link(req, link);

		if ((req->flags & REQ_F_LINK_TIMEOUT) &&
		    link->opcode == IORING_OP_LINK_TIMEOUT) {
			io_link_cancel_timeout(link);
		} else {
			io_cqring_fill_event(link, -ECANCELED);
			__io_double_put_req(link);
		}
		req->flags &= ~REQ_F_LINK_TIMEOUT;
	}

	io_commit_cqring(ctx);
	spin_unlock_irqrestore(&ctx->completion_lock, flags);
	io_cqring_ev_posted(ctx);
}
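From userspace, the chain that io_fail_links() tears down is built by setting IOSQE_IO_LINK on every SQE except the last: if an earlier request in the chain fails, the remaining ones complete with -ECANCELED as shown above. A minimal sketch of preparing such a pair (fd, buffer and user_data values are illustrative; 'sqes' points into the mmapped SQE array):

#include <string.h>
#include <sys/uio.h>
#include <linux/io_uring.h>

/* The fsync in sqes[1] only runs if the readv in sqes[0] succeeds. */
static void prep_linked_pair(struct io_uring_sqe *sqes, int fd,
			     struct iovec *iov)
{
	memset(&sqes[0], 0, sizeof(sqes[0]));
	sqes[0].opcode = IORING_OP_READV;
	sqes[0].fd = fd;
	sqes[0].addr = (unsigned long)iov;	/* iovec array */
	sqes[0].len = 1;			/* number of iovecs */
	sqes[0].flags = IOSQE_IO_LINK;		/* chain to the next SQE */
	sqes[0].user_data = 1;

	memset(&sqes[1], 0, sizeof(sqes[1]));
	sqes[1].opcode = IORING_OP_FSYNC;
	sqes[1].fd = fd;
	sqes[1].user_data = 2;			/* completes -ECANCELED if #1 fails */
}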
static void io_req_find_next(struct io_kiocb *req, struct io_kiocb **nxt)
{
	if (likely(!(req->flags & REQ_F_LINK)))
		return;

	/*
	 * If LINK is set, we have dependent requests in this chain. If we
	 * didn't fail this request, queue the first one up, moving any other
	 * dependencies to the next request. In case of failure, fail the rest
	 * of the chain.
	 */
	if (req->flags & REQ_F_FAIL_LINK) {
		io_fail_links(req);
	} else if ((req->flags & (REQ_F_LINK_TIMEOUT | REQ_F_COMP_LOCKED)) ==
			REQ_F_LINK_TIMEOUT) {
		struct io_ring_ctx *ctx = req->ctx;
		unsigned long flags;

		/*
		 * If this is a timeout link, we could be racing with the
		 * timeout timer. Grab the completion lock for this case to
		 * protect against that.
		 */
		spin_lock_irqsave(&ctx->completion_lock, flags);
		io_req_link_next(req, nxt);
		spin_unlock_irqrestore(&ctx->completion_lock, flags);
	} else {
		io_req_link_next(req, nxt);
	}
}
static void io_free_req(struct io_kiocb *req)
{
	struct io_kiocb *nxt = NULL;

	io_req_find_next(req, &nxt);
	__io_free_req(req);

	if (nxt)
		io_queue_async_work(nxt);
}
/*
 * Drop reference to request, return next in chain (if there is one) if this
 * was the last reference to this request.
 */
__attribute__((nonnull))
static void io_put_req_find_next(struct io_kiocb *req, struct io_kiocb **nxtptr)
{
	io_req_find_next(req, nxtptr);

	if (refcount_dec_and_test(&req->refs))
		__io_free_req(req);
}
static void io_put_req(struct io_kiocb *req)
{
	if (refcount_dec_and_test(&req->refs))
		io_free_req(req);
}
/*
 * Must only be used if we don't need to care about links, usually from
 * within the completion handling itself.
 */
static void __io_double_put_req(struct io_kiocb *req)
{
	/* drop both submit and complete references */
	if (refcount_sub_and_test(2, &req->refs))
		__io_free_req(req);
}

static void io_double_put_req(struct io_kiocb *req)
{
	/* drop both submit and complete references */
	if (refcount_sub_and_test(2, &req->refs))
		io_free_req(req);
}
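Every request starts life with two references, one owned by the submission path and one by the completion path, which is why the helpers above can drop two at once. A tiny standalone illustration of the counting, with C11 atomics standing in for refcount_t (a sketch, not the kernel implementation):

#include <stdatomic.h>
#include <stdbool.h>

struct ref {
	atomic_int refs;	/* starts at 2: submit ref + complete ref */
};

/* Returns true if the caller dropped the last reference and must free. */
static bool ref_put_many(struct ref *r, int n)
{
	/* fetch_sub returns the old value; old == n means we just hit zero. */
	return atomic_fetch_sub_explicit(&r->refs, n, memory_order_acq_rel) == n;
}

/* Completion paths that still own the submit reference drop both at once. */
static bool ref_double_put(struct ref *r)
{
	return ref_put_many(r, 2);
}

Dropping both in a single subtraction is cheaper than two decrements and keeps the "who frees" decision in one place.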
io_uring: add support for backlogged CQ ring
Currently we drop completion events if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO, where request completion
times are generally unbounded. The same applies to POLL, which is also
unbounded.
After this patch, we never overflow the ring, we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have filled whatever
backlogged events into the CQ ring first, if there's room. This means
the application can safely reap events WITHOUT entering the kernel and
waiting for them, they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 21:31:17 +03:00
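In practice this means an application that gets -EBUSY back from io_uring_enter() should drain its CQ ring and then retry the submission. A minimal sketch of that loop, using a raw syscall wrapper (the reap callback stands in for the application's normal completion processing and is not part of any io_uring API):

#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Thin wrapper; __NR_io_uring_enter requires reasonably new kernel headers. */
static int sys_io_uring_enter(int fd, unsigned to_submit, unsigned min_complete,
			      unsigned flags)
{
	return syscall(__NR_io_uring_enter, fd, to_submit, min_complete,
		       flags, NULL, 0);
}

/* Submit, reaping backlogged completions whenever the kernel pushes back. */
static int submit_with_backpressure(int ring_fd, unsigned to_submit,
				    void (*reap_cqes)(void))
{
	for (;;) {
		int ret = sys_io_uring_enter(ring_fd, to_submit, 0, 0);

		if (ret >= 0)
			return ret;
		if (errno != EBUSY)
			return -errno;
		/*
		 * The kernel already flushed as much of the backlog into the
		 * CQ ring as would fit; consume it, then try submitting again.
		 */
		reap_cqes();
	}
}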
static unsigned io_cqring_events(struct io_ring_ctx *ctx, bool noflush)
{
	struct io_rings *rings = ctx->rings;

	if (test_bit(0, &ctx->cq_check_overflow)) {
		/*
		 * noflush == true is from the waitqueue handler, just ensure
		 * we wake up the task, and the next invocation will flush the
		 * entries. We cannot safely do it from here.
		 */
		if (noflush && !list_empty(&ctx->cq_overflow_list))
			return -1U;
		io_cqring_overflow_flush(ctx, false);
	}
2019-08-20 20:03:11 +03:00
/* See comment at the top of this file */
smp_rmb ( ) ;
2019-12-19 03:12:20 +03:00
return ctx - > cached_cq_tail - READ_ONCE ( rings - > cq . head ) ;
2019-08-20 20:03:11 +03:00
}
static inline unsigned int io_sqring_entries(struct io_ring_ctx *ctx)
{
	struct io_rings *rings = ctx->rings;

	/* make sure SQ entry isn't read before tail */
	return smp_load_acquire(&rings->sq.tail) - ctx->cached_sq_head;
}
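These two helpers are the kernel's side of the shared-ring protocol; the application performs the mirror-image operations on its mmapped copy, with the acquire/release pairing described at the top of the file. A rough userspace sketch of both directions, assuming head/tail/mask and the entry arrays already point into the mappings set up at io_uring_setup() time (GCC/Clang __atomic builtins provide the barriers):

#include <linux/io_uring.h>

struct app_ring {
	unsigned *sq_head, *sq_tail, sq_mask;
	unsigned *sq_array;		/* indices into the SQE array */
	unsigned *cq_head, *cq_tail, cq_mask;
	struct io_uring_cqe *cqes;
};

/* Completions ready right now, no system call needed (IRQ driven IO). */
static unsigned cq_ready(struct app_ring *r)
{
	/* acquire pairs with the kernel's release store of the CQ tail */
	return __atomic_load_n(r->cq_tail, __ATOMIC_ACQUIRE) - *r->cq_head;
}

/* Consume one CQE, if any, and advance the head for the kernel to reuse. */
static int cq_pop(struct app_ring *r, struct io_uring_cqe *out)
{
	unsigned head = *r->cq_head;

	if (head == __atomic_load_n(r->cq_tail, __ATOMIC_ACQUIRE))
		return -1;
	*out = r->cqes[head & r->cq_mask];
	__atomic_store_n(r->cq_head, head + 1, __ATOMIC_RELEASE);
	return 0;
}

/*
 * Publish one prepared SQE index; the release store pairs with the kernel's
 * smp_load_acquire() of the SQ tail in io_sqring_entries() above.
 */
static void sq_push(struct app_ring *r, unsigned sqe_index)
{
	unsigned tail = *r->sq_tail;

	r->sq_array[tail & r->sq_mask] = sqe_index;
	__atomic_store_n(r->sq_tail, tail + 1, __ATOMIC_RELEASE);
}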
static inline bool io_req_multi_free(struct req_batch *rb, struct io_kiocb *req)
{
	if ((req->flags & REQ_F_LINK) || io_is_fallback_req(req))
		return false;

	if (!(req->flags & REQ_F_FIXED_FILE) || req->io)
		rb->need_iter++;

	rb->reqs[rb->to_free++] = req;
	if (unlikely(rb->to_free == ARRAY_SIZE(rb->reqs)))
		io_free_req_many(req->ctx, rb);
	return true;
}
/*
 * Find and free completed poll iocbs
 */
static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events,
			       struct list_head *done)
{
	struct req_batch rb;
	struct io_kiocb *req;

	rb.to_free = rb.need_iter = 0;
	while (!list_empty(done)) {
		req = list_first_entry(done, struct io_kiocb, list);
		list_del(&req->list);

		io_cqring_fill_event(req, req->result);
		(*nr_events)++;

		if (refcount_dec_and_test(&req->refs) &&
		    !io_req_multi_free(&rb, req))
			io_free_req(req);
	}

	io_commit_cqring(ctx);
	io_free_req_many(ctx, &rb);
}
static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
			long min)
{
	struct io_kiocb *req, *tmp;
	LIST_HEAD(done);
	bool spin;
	int ret;

	/*
	 * Only spin for completions if we don't have multiple devices hanging
	 * off our complete list, and we're under the requested amount.
	 */
	spin = !ctx->poll_multi_file && *nr_events < min;

	ret = 0;
	list_for_each_entry_safe(req, tmp, &ctx->poll_list, list) {
		struct kiocb *kiocb = &req->rw.kiocb;

		/*
		 * Move completed entries to our local list. If we find a
		 * request that requires polling, break out and complete
		 * the done list first, if we have entries there.
		 */
		if (req->flags & REQ_F_IOPOLL_COMPLETED) {
			list_move_tail(&req->list, &done);
			continue;
		}
		if (!list_empty(&done))
			break;

		ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
		if (ret < 0)
			break;

		if (ret && spin)
			spin = false;
		ret = 0;
	}

	if (!list_empty(&done))
		io_iopoll_complete(ctx, nr_events, &done);

	return ret;
}
/*
 * Poll for a minimum of 'min' events. Note that if min == 0 we consider that a
 * non-spinning poll check - we'll still enter the driver poll loop, but only
 * as a non-spinning completion check.
 */
static int io_iopoll_getevents(struct io_ring_ctx *ctx, unsigned int *nr_events,
				long min)
{
	while (!list_empty(&ctx->poll_list) && !need_resched()) {
		int ret;

		ret = io_do_iopoll(ctx, nr_events, min);
		if (ret < 0)
			return ret;
		if (!min || *nr_events >= min)
			return 0;
	}

	return 1;
}
/*
 * We can't just wait for polled events to come to us, we have to actively
 * find and complete them.
 */
static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
{
	if (!(ctx->flags & IORING_SETUP_IOPOLL))
		return;

	mutex_lock(&ctx->uring_lock);
	while (!list_empty(&ctx->poll_list)) {
		unsigned int nr_events = 0;

		io_iopoll_getevents(ctx, &nr_events, 1);

		/*
		 * Ensure we allow local-to-the-cpu processing to take place,
		 * in this case we need to ensure that we reap all events.
		 */
		cond_resched();
	}
	mutex_unlock(&ctx->uring_lock);
}
static int __io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
			     long min)
{
	int iters = 0, ret = 0;

	do {
		int tmin = 0;

		/*
		 * Don't enter poll loop if we already have events pending.
		 * If we do, we can potentially be spinning for commands that
		 * already triggered a CQE (eg in error).
		 */
		if (io_cqring_events(ctx, false))
			break;

		/*
		 * If a submit got punted to a workqueue, we can have the
		 * application entering polling for a command before it gets
		 * issued. That app will hold the uring_lock for the duration
		 * of the poll right here, so we need to take a breather every
		 * now and then to ensure that the issue has a chance to add
		 * the poll to the issued list. Otherwise we can spin here
		 * forever, while the workqueue is stuck trying to acquire the
		 * very same mutex.
		 */
		if (!(++iters & 7)) {
			mutex_unlock(&ctx->uring_lock);
			mutex_lock(&ctx->uring_lock);
		}

		if (*nr_events < min)
			tmin = min - *nr_events;

		ret = io_iopoll_getevents(ctx, nr_events, tmin);
		if (ret <= 0)
			break;
		ret = 0;
	} while (min && !*nr_events && !need_resched());

	return ret;
}
static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
			   long min)
{
	int ret;

	/*
	 * We disallow the app entering submit/complete with polling, but we
	 * still need to lock the ring to prevent racing with polled issue
	 * that got punted to a workqueue.
	 */
	mutex_lock(&ctx->uring_lock);
	ret = __io_iopoll_check(ctx, nr_events, min);
	mutex_unlock(&ctx->uring_lock);

	return ret;
}
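Polled completions only exist if the ring was created with IORING_SETUP_IOPOLL; with no completion interrupts to fill the CQ ring, the application asks the kernel to poll by calling io_uring_enter() with IORING_ENTER_GETEVENTS. A minimal setup sketch using raw syscalls (error handling trimmed; the entry count is arbitrary):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int sys_io_uring_setup(unsigned entries, struct io_uring_params *p)
{
	return syscall(__NR_io_uring_setup, entries, p);
}

static int sys_io_uring_enter(int fd, unsigned to_submit, unsigned min_complete,
			      unsigned flags)
{
	return syscall(__NR_io_uring_enter, fd, to_submit, min_complete,
		       flags, NULL, 0);
}

int setup_polled_ring(void)
{
	struct io_uring_params p;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_IOPOLL;	/* completions via polling, not IRQs */

	return sys_io_uring_setup(8, &p);	/* fd to mmap() the rings from */
}

/* With IOPOLL, waiting for completions means asking the kernel to poll. */
int wait_for_polled_completions(int ring_fd, unsigned want)
{
	return sys_io_uring_enter(ring_fd, 0, want, IORING_ENTER_GETEVENTS);
}

Polled IO also generally requires files opened with O_DIRECT on a device whose driver implements the ->iopoll() hook used by io_do_iopoll() above.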
static void kiocb_end_write(struct io_kiocb *req)
{
	/*
	 * Tell lockdep we inherited freeze protection from submission
	 * thread.
	 */
	if (req->flags & REQ_F_ISREG) {
		struct inode *inode = file_inode(req->file);
		__sb_writers_acquired(inode->i_sb, SB_FREEZE_WRITE);
	}

	file_end_write(req->file);
}
static inline void req_set_fail_links(struct io_kiocb *req)
{
	if ((req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) == REQ_F_LINK)
		req->flags |= REQ_F_FAIL_LINK;
}
static void io_complete_rw_common(struct kiocb *kiocb, long res)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
	if (kiocb->ki_flags & IOCB_WRITE)
		kiocb_end_write(req);
	if (res != req->result)
		req_set_fail_links(req);
	io_cqring_add_event(req, res);
}
static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	io_complete_rw_common(kiocb, res);
	io_put_req(req);
}
static struct io_kiocb *__io_complete_rw(struct kiocb *kiocb, long res)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);
	struct io_kiocb *nxt = NULL;

	io_complete_rw_common(kiocb, res);
	io_put_req_find_next(req, &nxt);

	return nxt;
}
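The completion callbacks above only receive the embedded struct kiocb, and recover the owning io_kiocb with container_of(). The pattern is worth spelling out once, since it is what lets a generic callback get back to the request that embeds it; a standalone illustration, with a local copy of the macro since this is userspace code:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct inner {
	int res;
};

struct request {
	int id;
	struct inner rw;	/* embedded, like rw.kiocb inside io_kiocb */
};

/* The callback only gets the embedded member, yet recovers the whole request. */
static void complete(struct inner *in, int res)
{
	struct request *req = container_of(in, struct request, rw);

	in->res = res;
	printf("request %d completed with %d\n", req->id, in->res);
}

int main(void)
{
	struct request req = { .id = 42 };

	complete(&req.rw, 0);
	return 0;
}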
static void io_complete_rw_iopoll(struct kiocb *kiocb, long res, long res2)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	if (kiocb->ki_flags & IOCB_WRITE)
		kiocb_end_write(req);

	if (res != req->result)
		req_set_fail_links(req);
	req->result = res;
	if (res != -EAGAIN)
		req->flags |= REQ_F_IOPOLL_COMPLETED;
}
/*
 * After the iocb has been issued, it's safe to be found on the poll list.
 * Adding the kiocb to the list AFTER submission ensures that we don't
 * find it from a io_iopoll_getevents() thread before the issuer is done
 * accessing the kiocb cookie.
 */
static void io_iopoll_req_issued(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;

	/*
	 * Track whether we have multiple files in our lists. This will impact
	 * how we do polling eventually, not spinning if we're on potentially
	 * different devices.
	 */
	if (list_empty(&ctx->poll_list)) {
		ctx->poll_multi_file = false;
	} else if (!ctx->poll_multi_file) {
		struct io_kiocb *list_req;

		list_req = list_first_entry(&ctx->poll_list, struct io_kiocb,
						list);
		if (list_req->file != req->file)
			ctx->poll_multi_file = true;
	}

	/*
	 * For fast devices, IO may have already completed. If it has, add
	 * it to the front so we find it first.
	 */
	if (req->flags & REQ_F_IOPOLL_COMPLETED)
		list_add(&req->list, &ctx->poll_list);
	else
		list_add_tail(&req->list, &ctx->poll_list);
}
static void io_file_put(struct io_submit_state *state)
{
	if (state->file) {
		int diff = state->has_refs - state->used_refs;

		if (diff)
			fput_many(state->file, diff);
		state->file = NULL;
	}
}
/*
 * Get as many references to a file as we have IOs left in this submission,
 * assuming most submissions are for one file, or at least that each file
 * has more than one submission.
 */
static struct file *io_file_get(struct io_submit_state *state, int fd)
{
	if (!state)
		return fget(fd);

	if (state->file) {
		if (state->fd == fd) {
			state->used_refs++;
			state->ios_left--;
			return state->file;
		}
		io_file_put(state);
	}
	state->file = fget_many(fd, state->ios_left);
	if (!state->file)
		return NULL;

	state->fd = fd;
	state->has_refs = state->ios_left;
	state->used_refs = 1;
	state->ios_left--;
	return state->file;
}
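io_file_get() amortizes fget() by taking as many references up front as there are IOs left in the submission batch, then handing them out cheaply for repeated hits on the same fd. A simplified userspace sketch of the same caching idea (purely illustrative; the kernel works on struct file with fget_many()/fput_many(), and the get_many/put_many callbacks here are stand-ins supplied by the caller):

struct file_cache {
	int fd;			/* fd whose references we hold, -1 if none */
	int has_refs;		/* references acquired up front */
	int used_refs;		/* references handed out so far */
	int ios_left;		/* submissions remaining in this batch */
};

/* Drop whatever we over-acquired for an fd that was not reused. */
static void cache_put(struct file_cache *c, void (*put_many)(int fd, int n))
{
	int diff = c->has_refs - c->used_refs;

	if (c->fd >= 0 && diff)
		put_many(c->fd, diff);
	c->fd = -1;
	c->has_refs = c->used_refs = 0;
}

/* Reuse the cached reference on a repeat fd, otherwise re-prime the cache. */
static int cache_get(struct file_cache *c, int fd,
		     int (*get_many)(int fd, int n),
		     void (*put_many)(int fd, int n))
{
	if (c->fd == fd && c->has_refs) {
		c->used_refs++;
		c->ios_left--;
		return fd;
	}
	cache_put(c, put_many);
	if (get_many(fd, c->ios_left) < 0)	/* grab refs for the rest of the batch */
		return -1;
	c->fd = fd;
	c->has_refs = c->ios_left;
	c->used_refs = 1;
	c->ios_left--;
	return fd;
}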
/*
 * If we tracked the file through the SCM inflight mechanism, we could support
 * any file. For now, just ensure that anything potentially problematic is done
 * inline.
 */
static bool io_file_supports_async(struct file *file)
{
	umode_t mode = file_inode(file)->i_mode;

	if (S_ISBLK(mode) || S_ISCHR(mode) || S_ISSOCK(mode))
		return true;
	if (S_ISREG(mode) && file->f_op != &io_uring_fops)
		return true;

	return false;
}

static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
		      bool force_nonblock)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct kiocb *kiocb = &req->rw.kiocb;
	unsigned ioprio;
	int ret;

	if (!req->file)
		return -EBADF;

	if (S_ISREG(file_inode(req->file)->i_mode))
		req->flags |= REQ_F_ISREG;

	kiocb->ki_pos = READ_ONCE(sqe->off);
	if (kiocb->ki_pos == -1 && !(req->file->f_mode & FMODE_STREAM)) {
		req->flags |= REQ_F_CUR_POS;
		kiocb->ki_pos = req->file->f_pos;
	}
	kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
	kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));

	ioprio = READ_ONCE(sqe->ioprio);
	if (ioprio) {
		ret = ioprio_check_cap(ioprio);
		if (ret)
			return ret;
		kiocb->ki_ioprio = ioprio;
	} else
		kiocb->ki_ioprio = get_current_ioprio();

	ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
	if (unlikely(ret))
		return ret;

	/* don't allow async punt if RWF_NOWAIT was requested */
	if ((kiocb->ki_flags & IOCB_NOWAIT) ||
	    (req->file->f_flags & O_NONBLOCK))
		req->flags |= REQ_F_NOWAIT;

	if (force_nonblock)
		kiocb->ki_flags |= IOCB_NOWAIT;

	if (ctx->flags & IORING_SETUP_IOPOLL) {
		if (!(kiocb->ki_flags & IOCB_DIRECT) ||
		    !kiocb->ki_filp->f_op->iopoll)
			return -EOPNOTSUPP;
		kiocb->ki_flags |= IOCB_HIPRI;
		kiocb->ki_complete = io_complete_rw_iopoll;
		req->result = 0;
	} else {
		if (kiocb->ki_flags & IOCB_HIPRI)
			return -EINVAL;
		kiocb->ki_complete = io_complete_rw;
	}

	req->rw.addr = READ_ONCE(sqe->addr);
	req->rw.len = READ_ONCE(sqe->len);
	/* we own ->private, reuse it for the buffer index */
	req->rw.kiocb.private = (void *) (unsigned long)
					READ_ONCE(sqe->buf_index);
	return 0;
}

static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
{
	switch (ret) {
	case -EIOCBQUEUED:
		break;
	case -ERESTARTSYS:
	case -ERESTARTNOINTR:
	case -ERESTARTNOHAND:
	case -ERESTART_RESTARTBLOCK:
		/*
		 * We can't just restart the syscall, since previously
		 * submitted sqes may already be in progress. Just fail this
		 * IO with EINTR.
		 */
		ret = -EINTR;
		/* fall through */
	default:
		kiocb->ki_complete(kiocb, ret, 0);
	}
}

static void kiocb_done(struct kiocb *kiocb, ssize_t ret, struct io_kiocb **nxt,
		       bool in_async)
{
	struct io_kiocb *req = container_of(kiocb, struct io_kiocb, rw.kiocb);

	if (req->flags & REQ_F_CUR_POS)
		req->file->f_pos = kiocb->ki_pos;
	if (in_async && ret >= 0 && kiocb->ki_complete == io_complete_rw)
		*nxt = __io_complete_rw(kiocb, ret);
	else
		io_rw_done(kiocb, ret);
}
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->buf_index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
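A minimal userspace sketch of the flow this message describes (not part of
this file): it assumes 'ring_fd' came from io_uring_setup(), that 'sqe' points
at a free slot in the mmap'ed SQE array, and that the kernel/libc headers
define __NR_io_uring_register and struct io_uring_sqe. Ring setup, submission,
and error handling are omitted.

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <linux/io_uring.h>

/* Pin one user buffer up front via IORING_REGISTER_BUFFERS */
static int register_one_buffer(int ring_fd, void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, &iov, 1);
}

/* Prepare a fixed-buffer read; addr/len must fall inside the registered buffer */
static void prep_read_fixed(struct io_uring_sqe *sqe, int file_fd,
			    void *buf, unsigned int len, __u64 file_offset)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_READ_FIXED;
	sqe->fd = file_fd;
	sqe->addr = (unsigned long) buf;
	sqe->len = len;
	sqe->off = file_offset;
	sqe->buf_index = 0;	/* index into the registered iovec array */
}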
static ssize_t io_import_fixed(struct io_kiocb *req, int rw,
			       struct iov_iter *iter)
{
	struct io_ring_ctx *ctx = req->ctx;
	size_t len = req->rw.len;
	struct io_mapped_ubuf *imu;
	unsigned index, buf_index;
	size_t offset;
	u64 buf_addr;

	/* attempt to use fixed buffers without having provided iovecs */
	if (unlikely(!ctx->user_bufs))
		return -EFAULT;

	buf_index = (unsigned long) req->rw.kiocb.private;
	if (unlikely(buf_index >= ctx->nr_user_bufs))
		return -EFAULT;

	index = array_index_nospec(buf_index, ctx->nr_user_bufs);
	imu = &ctx->user_bufs[index];
	buf_addr = req->rw.addr;
	/* overflow */
	if (buf_addr + len < buf_addr)
		return -EFAULT;
	/* not inside the mapped region */
	if (buf_addr < imu->ubuf || buf_addr + len > imu->ubuf + imu->len)
		return -EFAULT;

	/*
	 * May not be a start of buffer, set size appropriately
	 * and advance us to the beginning.
	 */
	offset = buf_addr - imu->ubuf;
	iov_iter_bvec(iter, rw, imu->bvec, imu->nr_bvecs, offset + len);
io_uring: don't use iov_iter_advance() for fixed buffers
Hrvoje reports that when a large fixed buffer is registered and IO is
being done to the latter pages of said buffer, the IO submission time
is much worse:
reading to the start of the buffer: 11238 ns
reading to the end of the buffer: 1039879 ns
In fact, it's worse by two orders of magnitude. The reason for that is
how io_uring figures out how to setup the iov_iter. We point the iter
at the first bvec, and then use iov_iter_advance() to fast-forward to
the offset within that buffer we need.
However, that is abysmally slow, as it entails iterating the bvecs
that we setup as part of buffer registration. There's really no need
to use this generic helper, as we know it's a BVEC type iterator, and
we also know that each bvec is PAGE_SIZE in size, apart from possibly
the first and last. Hence we can just use a shift on the offset to
find the right index, and then adjust the iov_iter appropriately.
After this fix, the timings are:
reading to the start of the buffer: 10135 ns
reading to the end of the buffer: 1377 ns
Or about a 755x improvement for the tail page.
Reported-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Tested-by: Hrvoje Zeba <zeba.hrvoje@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
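	/*
	 * Worked example of the arithmetic below (illustrative numbers,
	 * assuming 4K pages and a 4096-byte first bvec): for
	 * offset == 3 MiB + 512, subtracting the first bvec length leaves
	 * 3142144, so seg_skip = 1 + (3142144 >> PAGE_SHIFT) = 768 and the
	 * leftover in-page offset is 3142144 & ~PAGE_MASK = 512.
	 */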
	if (offset) {
		/*
		 * Don't use iov_iter_advance() here, as it's really slow for
		 * using the latter parts of a big fixed buffer - it iterates
		 * over each segment manually. We can cheat a bit here, because
		 * we know that:
		 *
		 * 1) it's a BVEC iter, we set it up
		 * 2) all bvecs are PAGE_SIZE in size, except potentially the
		 *    first and last bvec
		 *
		 * So just find our index, and adjust the iterator afterwards.
		 * If the offset is within the first bvec (or the whole first
		 * bvec), just use iov_iter_advance(). This makes it easier
		 * since we can just skip the first segment, which may not
		 * be PAGE_SIZE aligned.
		 */
		const struct bio_vec *bvec = imu->bvec;

		if (offset <= bvec->bv_len) {
			iov_iter_advance(iter, offset);
		} else {
			unsigned long seg_skip;

			/* skip first vec */
			offset -= bvec->bv_len;
			seg_skip = 1 + (offset >> PAGE_SHIFT);

			iter->bvec = bvec + seg_skip;
			iter->nr_segs -= seg_skip;
			iter->count -= bvec->bv_len + offset;
			iter->iov_offset = offset & ~PAGE_MASK;
		}
	}

	return len;
}

static ssize_t io_import_iovec(int rw, struct io_kiocb *req,
			       struct iovec **iovec, struct iov_iter *iter)
{
	void __user *buf = u64_to_user_ptr(req->rw.addr);
	size_t sqe_len = req->rw.len;
	u8 opcode;

	opcode = req->opcode;
	if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
		*iovec = NULL;
		return io_import_fixed(req, rw, iter);
	}

	/* buffer index only valid with fixed read/write */
	if (req->rw.kiocb.private)
		return -EINVAL;

	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
		ssize_t ret;
		ret = import_single_range(rw, buf, sqe_len, *iovec, iter);
		*iovec = NULL;
		return ret;
	}

	if (req->io) {
		struct io_async_rw *iorw = &req->io->rw;

		*iovec = iorw->iov;
		iov_iter_init(iter, rw, *iovec, iorw->nr_segs, iorw->size);
		if (iorw->iov == iorw->fast_iov)
			*iovec = NULL;
		return iorw->size;
	}

	if (!req->has_user)
		return -EFAULT;

#ifdef CONFIG_COMPAT
	if (req->ctx->compat)
		return compat_import_iovec(rw, buf, sqe_len, UIO_FASTIOV,
						iovec, iter);
#endif

	return import_iovec(rw, buf, sqe_len, UIO_FASTIOV, iovec, iter);
}
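For reference, the import path above is what consumes the iovec array an application attaches to a readv/writev style SQE. A minimal userspace sketch, assuming the liburing helper library (not part of this file) and an already-open file descriptor; the helper name and buffer sizes are illustrative only:

#include <liburing.h>
#include <sys/uio.h>

/*
 * Hypothetical helper: submit one vectored read against 'fd' and wait for
 * its completion. The iovec array built here is what the kernel imports
 * via import_iovec()/compat_import_iovec() above.
 */
static int submit_readv_example(int fd)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        char buf1[512], buf2[512];
        struct iovec iovs[2] = {
                { .iov_base = buf1, .iov_len = sizeof(buf1) },
                { .iov_base = buf2, .iov_len = sizeof(buf2) },
        };
        int ret;

        ret = io_uring_queue_init(4, &ring, 0);         /* set up SQ/CQ rings */
        if (ret < 0)
                return ret;

        sqe = io_uring_get_sqe(&ring);                  /* grab a free SQE */
        io_uring_prep_readv(sqe, fd, iovs, 2, 0);       /* vectored read at offset 0 */

        io_uring_submit(&ring);                         /* one io_uring_enter() call */
        ret = io_uring_wait_cqe(&ring, &cqe);           /* wait for the completion */
        if (!ret) {
                ret = cqe->res;                         /* bytes read, or -errno */
                io_uring_cqe_seen(&ring, cqe);          /* advance the CQ ring head */
        }

        io_uring_queue_exit(&ring);
        return ret;
}

The same shape applies to vectored writes via io_uring_prep_writev().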
/*
 * For files that don't have ->read_iter() and ->write_iter(), handle them
 * by looping over ->read() or ->write() manually.
 */
static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
                            struct iov_iter *iter)
{
        ssize_t ret = 0;

        /*
         * Don't support polled IO through this interface, and we can't
         * support non-blocking either. For the latter, this just causes
         * the kiocb to be handled from an async context.
         */
        if (kiocb->ki_flags & IOCB_HIPRI)
                return -EOPNOTSUPP;
        if (kiocb->ki_flags & IOCB_NOWAIT)
                return -EAGAIN;

        while (iov_iter_count(iter)) {
                struct iovec iovec;
                ssize_t nr;

                if (!iov_iter_is_bvec(iter)) {
                        iovec = iov_iter_iovec(iter);
                } else {
                        /* fixed buffers import bvec */
                        iovec.iov_base = kmap(iter->bvec->bv_page)
                                                + iter->iov_offset;
                        iovec.iov_len = min(iter->count,
                                        iter->bvec->bv_len - iter->iov_offset);
                }

                if (rw == READ) {
                        nr = file->f_op->read(file, iovec.iov_base,
                                              iovec.iov_len, &kiocb->ki_pos);
                } else {
                        nr = file->f_op->write(file, iovec.iov_base,
                                               iovec.iov_len, &kiocb->ki_pos);
                }

                if (iov_iter_is_bvec(iter))
                        kunmap(iter->bvec->bv_page);

                if (nr < 0) {
                        if (!ret)
                                ret = nr;
                        break;
                }
                ret += nr;
                if (nr != iovec.iov_len)
                        break;
                iov_iter_advance(iter, nr);
        }

        return ret;
}
static void io_req_map_rw(struct io_kiocb *req, ssize_t io_size,
                          struct iovec *iovec, struct iovec *fast_iov,
                          struct iov_iter *iter)
{
        req->io->rw.nr_segs = iter->nr_segs;
        req->io->rw.size = io_size;
        req->io->rw.iov = iovec;
        if (!req->io->rw.iov) {
                req->io->rw.iov = req->io->rw.fast_iov;
                memcpy(req->io->rw.iov, fast_iov,
                        sizeof(struct iovec) * iter->nr_segs);
        }
}
static int io_alloc_async_ctx(struct io_kiocb *req)
{
        if (!io_op_defs[req->opcode].async_ctx)
                return 0;
        req->io = kmalloc(sizeof(*req->io), GFP_KERNEL);
        return req->io == NULL;
}

/*
 * Work handler for punted reads/writes: run the request, then free any
 * separately allocated iovec that was stashed for the retry.
 */
static void io_rw_async(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct iovec *iov = NULL;

        if (req->io->rw.iov != req->io->rw.fast_iov)
                iov = req->io->rw.iov;
        io_wq_submit_work(workptr);
        kfree(iov);
}

/*
 * Save the iovec/iter state in the async context so the request can be
 * retried from io-wq after returning -EAGAIN inline.
 */
static int io_setup_async_rw(struct io_kiocb *req, ssize_t io_size,
                             struct iovec *iovec, struct iovec *fast_iov,
                             struct iov_iter *iter)
{
        if (!io_op_defs[req->opcode].async_ctx)
                return 0;
        if (!req->io && io_alloc_async_ctx(req))
                return -ENOMEM;

        io_req_map_rw(req, io_size, iovec, fast_iov, iter);
        req->work.func = io_rw_async;
        return 0;
}
static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
                        bool force_nonblock)
{
        struct io_async_ctx *io;
        struct iov_iter iter;
        ssize_t ret;

        ret = io_prep_rw(req, sqe, force_nonblock);
        if (ret)
                return ret;

        if (unlikely(!(req->file->f_mode & FMODE_READ)))
                return -EBADF;

        if (!req->io)
                return 0;

        io = req->io;
        io->rw.iov = io->rw.fast_iov;
        req->io = NULL;
        ret = io_import_iovec(READ, req, &io->rw.iov, &iter);
        req->io = io;
        if (ret < 0)
                return ret;

        io_req_map_rw(req, ret, io->rw.iov, io->rw.fast_iov, &iter);
        return 0;
}
static int io_read(struct io_kiocb *req, struct io_kiocb **nxt,
                   bool force_nonblock)
{
        struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
        struct kiocb *kiocb = &req->rw.kiocb;
        struct iov_iter iter;
        size_t iov_count;
        ssize_t io_size, ret;

        ret = io_import_iovec(READ, req, &iovec, &iter);
        if (ret < 0)
                return ret;

        /* Ensure we clear previously set non-block flag */
        if (!force_nonblock)
                req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT;

        req->result = 0;
        io_size = ret;
        if (req->flags & REQ_F_LINK)
                req->result = io_size;

        /*
         * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so
         * we know to async punt it even if it was opened O_NONBLOCK
         */
        if (force_nonblock && !io_file_supports_async(req->file)) {
                req->flags |= REQ_F_MUST_PUNT;
                goto copy_iov;
        }

        iov_count = iov_iter_count(&iter);
        ret = rw_verify_area(READ, req->file, &kiocb->ki_pos, iov_count);
        if (!ret) {
                ssize_t ret2;

                if (req->file->f_op->read_iter)
                        ret2 = call_read_iter(req->file, kiocb, &iter);
                else
                        ret2 = loop_rw_iter(READ, req->file, kiocb, &iter);

                /* Catch -EAGAIN return for forced non-blocking submission */
                if (!force_nonblock || ret2 != -EAGAIN) {
                        kiocb_done(kiocb, ret2, nxt, req->in_async);
                } else {
copy_iov:
                        ret = io_setup_async_rw(req, io_size, iovec,
                                                inline_vecs, &iter);
                        if (ret)
                                goto out_free;
                        return -EAGAIN;
                }
        }
out_free:
        if (!io_wq_current_is_worker())
                kfree(iovec);
        return ret;
}

static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
                         bool force_nonblock)
{
        struct io_async_ctx *io;
        struct iov_iter iter;
        ssize_t ret;

        ret = io_prep_rw(req, sqe, force_nonblock);
        if (ret)
                return ret;

        if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
                return -EBADF;

        if (!req->io)
                return 0;

        io = req->io;
        io->rw.iov = io->rw.fast_iov;
        req->io = NULL;
        ret = io_import_iovec(WRITE, req, &io->rw.iov, &iter);
        req->io = io;
        if (ret < 0)
                return ret;

        io_req_map_rw(req, ret, io->rw.iov, io->rw.fast_iov, &iter);
        return 0;
}

static int io_write(struct io_kiocb *req, struct io_kiocb **nxt,
                    bool force_nonblock)
{
        struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
        struct kiocb *kiocb = &req->rw.kiocb;
        struct iov_iter iter;
        size_t iov_count;
        ssize_t ret, io_size;

        ret = io_import_iovec(WRITE, req, &iovec, &iter);
        if (ret < 0)
                return ret;

        /* Ensure we clear previously set non-block flag */
        if (!force_nonblock)
                req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT;

        req->result = 0;
        io_size = ret;
        if (req->flags & REQ_F_LINK)
                req->result = io_size;

        /*
         * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so
         * we know to async punt it even if it was opened O_NONBLOCK
         */
        if (force_nonblock && !io_file_supports_async(req->file)) {
                req->flags |= REQ_F_MUST_PUNT;
                goto copy_iov;
        }

        /* file path doesn't support NOWAIT for non-direct_IO */
        if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) &&
            (req->flags & REQ_F_ISREG))
                goto copy_iov;

        iov_count = iov_iter_count(&iter);
        ret = rw_verify_area(WRITE, req->file, &kiocb->ki_pos, iov_count);
        if (!ret) {
                ssize_t ret2;

                /*
                 * Open-code file_start_write here to grab freeze protection,
                 * which will be released by another thread in
                 * io_complete_rw(). Fool lockdep by telling it the lock got
                 * released so that it doesn't complain about the held lock when
                 * we return to userspace.
                 */
                if (req->flags & REQ_F_ISREG) {
                        __sb_start_write(file_inode(req->file)->i_sb,
                                                SB_FREEZE_WRITE, true);
                        __sb_writers_release(file_inode(req->file)->i_sb,
                                                SB_FREEZE_WRITE);
                }
                kiocb->ki_flags |= IOCB_WRITE;

                if (req->file->f_op->write_iter)
                        ret2 = call_write_iter(req->file, kiocb, &iter);
                else
                        ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter);

                if (!force_nonblock || ret2 != -EAGAIN) {
                        kiocb_done(kiocb, ret2, nxt, req->in_async);
                } else {
copy_iov:
                        ret = io_setup_async_rw(req, io_size, iovec,
                                                inline_vecs, &iter);
                        if (ret)
                                goto out_free;
                        return -EAGAIN;
                }
        }
out_free:
        if (!io_wq_current_is_worker())
                kfree(iovec);
        return ret;
}

/*
 * IORING_OP_NOP just posts a completion event, nothing else.
 */
static int io_nop(struct io_kiocb *req)
{
        struct io_ring_ctx *ctx = req->ctx;

        if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;

        io_cqring_add_event(req, 0);
        io_put_req(req);
        return 0;
}
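As a user-space sanity check of the ring plumbing, IORING_OP_NOP can be driven with a single submission and completion. The sketch below is not part of this patch; it assumes the liburing helper library and trims error handling.

/* Example (user space, not kernel code): submit one NOP and reap its CQE. */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;

        if (io_uring_queue_init(8, &ring, 0) < 0)
                return 1;

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_nop(sqe);
        io_uring_submit(&ring);

        if (io_uring_wait_cqe(&ring, &cqe) == 0) {
                printf("nop completed, res=%d\n", cqe->res);
                io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        return 0;
}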

static int io_prep_fsync(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_ring_ctx *ctx = req->ctx;

        if (!req->file)
                return -EBADF;

        if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
        if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
                return -EINVAL;

        req->sync.flags = READ_ONCE(sqe->fsync_flags);
        if (unlikely(req->sync.flags & ~IORING_FSYNC_DATASYNC))
                return -EINVAL;

        req->sync.off = READ_ONCE(sqe->off);
        req->sync.len = READ_ONCE(sqe->len);
        return 0;
}
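The pre-mapped buffer registration described in the commit message above reduces to a single io_uring_register() call from the application. A hedged user-space sketch follows; it assumes liburing and an arbitrary 4 KiB buffer, and is illustrative only.

/* Example (user space): register one fixed buffer, then read into it. */
#include <liburing.h>
#include <sys/uio.h>
#include <stdlib.h>

static int read_into_fixed_buffer(struct io_uring *ring, int fd)
{
        struct iovec iov = {
                .iov_base = malloc(4096),
                .iov_len  = 4096,
        };
        struct io_uring_sqe *sqe;

        if (!iov.iov_base)
                return -1;
        /* IORING_REGISTER_BUFFERS under the hood */
        if (io_uring_register_buffers(ring, &iov, 1) < 0)
                return -1;

        sqe = io_uring_get_sqe(ring);
        /* buf_index 0 selects the iovec registered above */
        io_uring_prep_read_fixed(sqe, fd, iov.iov_base, iov.iov_len, 0, 0);
        return io_uring_submit(ring);
}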

static bool io_req_cancelled(struct io_kiocb *req)
{
        if (req->work.flags & IO_WQ_WORK_CANCEL) {
                req_set_fail_links(req);
                io_cqring_add_event(req, -ECANCELED);
                io_put_req(req);
                return true;
        }

        return false;
}

static void io_link_work_cb(struct io_wq_work **workptr)
{
        struct io_wq_work *work = *workptr;
        struct io_kiocb *link = work->data;

        io_queue_linked_timeout(link);
        work->func = io_wq_submit_work;
}

static void io_wq_assign_next(struct io_wq_work **workptr, struct io_kiocb *nxt)
{
        struct io_kiocb *link;

        io_prep_async_work(nxt, &link);
        *workptr = &nxt->work;
        if (link) {
                nxt->work.flags |= IO_WQ_WORK_CB;
                nxt->work.func = io_link_work_cb;
                nxt->work.data = link;
        }
}

static void io_fsync_finish(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        loff_t end = req->sync.off + req->sync.len;
        struct io_kiocb *nxt = NULL;
        int ret;

        if (io_req_cancelled(req))
                return;

        ret = vfs_fsync_range(req->file, req->sync.off,
                                end > 0 ? end : LLONG_MAX,
                                req->sync.flags & IORING_FSYNC_DATASYNC);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, &nxt);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}

static int io_fsync(struct io_kiocb *req, struct io_kiocb **nxt,
                    bool force_nonblock)
{
        struct io_wq_work *work, *old_work;

        /* fsync always requires a blocking context */
        if (force_nonblock) {
                io_put_req(req);
                req->work.func = io_fsync_finish;
                return -EAGAIN;
        }

        work = old_work = &req->work;
        io_fsync_finish(&work);
        if (work && work != old_work)
                *nxt = container_of(work, struct io_kiocb, work);
        return 0;
}
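From user space this maps to IORING_OP_FSYNC with an optional IORING_FSYNC_DATASYNC flag. A minimal sketch, assuming liburing and eliding error handling:

/* Example (user space): queue a datasync-style fsync on 'fd'. */
#include <liburing.h>

static int queue_datasync(struct io_uring *ring, int fd)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
                return -1;
        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);
        return io_uring_submit(ring);
}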

static void io_fallocate_finish(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct io_kiocb *nxt = NULL;
        int ret;

        ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
                                req->sync.len);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, &nxt);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}

static int io_fallocate_prep(struct io_kiocb *req,
                             const struct io_uring_sqe *sqe)
{
        if (sqe->ioprio || sqe->buf_index || sqe->rw_flags)
                return -EINVAL;

        req->sync.off = READ_ONCE(sqe->off);
        req->sync.len = READ_ONCE(sqe->addr);
        req->sync.mode = READ_ONCE(sqe->len);
        return 0;
}
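Note the field packing in the prep handler above: the offset rides in sqe->off, the length in sqe->addr, and the fallocate mode in sqe->len. The raw-sqe sketch below mirrors that encoding from user space; it is a hedged example that assumes liburing for ring setup and a kernel with IORING_OP_FALLOCATE support.

/* Example (user space): encode an IORING_OP_FALLOCATE request by hand. */
#include <liburing.h>
#include <string.h>

static int queue_fallocate(struct io_uring *ring, int fd, long long len)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
                return -1;
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_FALLOCATE;
        sqe->fd = fd;
        sqe->off = 0;           /* file offset */
        sqe->addr = len;        /* length lives in ->addr */
        sqe->len = 0;           /* fallocate mode lives in ->len */
        return io_uring_submit(ring);
}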

static int io_fallocate(struct io_kiocb *req, struct io_kiocb **nxt,
                        bool force_nonblock)
{
        struct io_wq_work *work, *old_work;

        /* fallocate always requires a blocking context */
        if (force_nonblock) {
                io_put_req(req);
                req->work.func = io_fallocate_finish;
                return -EAGAIN;
        }

        work = old_work = &req->work;
        io_fallocate_finish(&work);
        if (work && work != old_work)
                *nxt = container_of(work, struct io_kiocb, work);
        return 0;
}

static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        const char __user *fname;
        int ret;

        if (sqe->ioprio || sqe->buf_index)
                return -EINVAL;

        req->open.dfd = READ_ONCE(sqe->fd);
        req->open.how.mode = READ_ONCE(sqe->len);
        fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
        req->open.how.flags = READ_ONCE(sqe->open_flags);

        req->open.filename = getname(fname);
        if (IS_ERR(req->open.filename)) {
                ret = PTR_ERR(req->open.filename);
                req->open.filename = NULL;
                return ret;
        }

        return 0;
}

static int io_openat2_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct open_how __user *how;
        const char __user *fname;
        size_t len;
        int ret;

        if (sqe->ioprio || sqe->buf_index)
                return -EINVAL;

        req->open.dfd = READ_ONCE(sqe->fd);
        fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
        how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
        len = READ_ONCE(sqe->len);

        if (len < OPEN_HOW_SIZE_VER0)
                return -EINVAL;

        ret = copy_struct_from_user(&req->open.how, sizeof(req->open.how), how,
                                        len);
        if (ret)
                return ret;

        if (!(req->open.how.flags & O_PATH) && force_o_largefile())
                req->open.how.flags |= O_LARGEFILE;

        req->open.filename = getname(fname);
        if (IS_ERR(req->open.filename)) {
                ret = PTR_ERR(req->open.filename);
                req->open.filename = NULL;
                return ret;
        }

        return 0;
}
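On the submission side, IORING_OP_OPENAT2 expects the pathname in sqe->addr, a pointer to struct open_how in sqe->addr2, and the size of that struct in sqe->len; liburing wraps this for you. A hedged sketch, with RESOLVE_IN_ROOT used purely as an illustrative resolve flag:

/* Example (user space): async openat2, error handling elided. */
#include <liburing.h>
#include <linux/openat2.h>
#include <fcntl.h>

static int queue_open(struct io_uring *ring, const char *path)
{
        struct open_how how = {
                .flags   = O_RDONLY,
                .resolve = RESOLVE_IN_ROOT,     /* illustrative flag */
        };
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        io_uring_prep_openat2(sqe, AT_FDCWD, path, &how);
        return io_uring_submit(ring);
}

The open_how struct is copied by the kernel during submission (see copy_struct_from_user() in the prep handler above), so a stack-allocated struct is sufficient here.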

static int io_openat2(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
        struct open_flags op;
        struct file *file;
        int ret;

        if (force_nonblock) {
                req->work.flags |= IO_WQ_WORK_NEEDS_FILES;
                return -EAGAIN;
        }

        ret = build_open_flags(&req->open.how, &op);
        if (ret)
                goto err;

        ret = get_unused_fd_flags(req->open.how.flags);
        if (ret < 0)
                goto err;

        file = do_filp_open(req->open.dfd, req->open.filename, &op);
        if (IS_ERR(file)) {
                put_unused_fd(ret);
                ret = PTR_ERR(file);
        } else {
                fsnotify_open(file);
                fd_install(ret, file);
        }
err:
        putname(req->open.filename);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
}

static int io_openat(struct io_kiocb *req, struct io_kiocb **nxt,
                     bool force_nonblock)
{
        req->open.how = build_open_how(req->open.how.flags, req->open.how.mode);
        return io_openat2(req, nxt, force_nonblock);
}

static int io_madvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
        if (sqe->ioprio || sqe->buf_index || sqe->off)
                return -EINVAL;

        req->madvise.addr = READ_ONCE(sqe->addr);
        req->madvise.len = READ_ONCE(sqe->len);
        req->madvise.advice = READ_ONCE(sqe->fadvise_advice);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

static int io_madvise(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
#if defined(CONFIG_ADVISE_SYSCALLS) && defined(CONFIG_MMU)
        struct io_madvise *ma = &req->madvise;
        int ret;

        if (force_nonblock)
                return -EAGAIN;

        ret = do_madvise(ma->addr, ma->len, ma->advice);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

static int io_fadvise_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        if (sqe->ioprio || sqe->buf_index || sqe->addr)
                return -EINVAL;

        req->fadvise.offset = READ_ONCE(sqe->off);
        req->fadvise.len = READ_ONCE(sqe->len);
        req->fadvise.advice = READ_ONCE(sqe->fadvise_advice);
        return 0;
}

static int io_fadvise(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
        struct io_fadvise *fa = &req->fadvise;
        int ret;

        /* DONTNEED may block, others _should_ not */
        if (fa->advice == POSIX_FADV_DONTNEED && force_nonblock)
                return -EAGAIN;

        ret = vfs_fadvise(req->file, fa->offset, fa->len, fa->advice);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
}
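Both advice opcodes reuse sqe->fadvise_advice for the advice value, as the prep handlers above read it. A hedged user-space sketch, assuming liburing and leaving error handling out:

/* Example (user space): hint access patterns for a file and a mapping. */
#include <liburing.h>
#include <fcntl.h>
#include <sys/mman.h>

static int queue_advice(struct io_uring *ring, int fd, void *buf, unsigned len)
{
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_fadvise(sqe, fd, 0, len, POSIX_FADV_SEQUENTIAL);

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_madvise(sqe, buf, len, MADV_WILLNEED);

        return io_uring_submit(ring);
}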

static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        const char __user *fname;
        unsigned lookup_flags;
        int ret;

        if (sqe->ioprio || sqe->buf_index)
                return -EINVAL;

        req->open.dfd = READ_ONCE(sqe->fd);
        req->open.mask = READ_ONCE(sqe->len);
        fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
        req->open.buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
        req->open.how.flags = READ_ONCE(sqe->statx_flags);

        if (vfs_stat_set_lookup_flags(&lookup_flags, req->open.how.flags))
                return -EINVAL;

        req->open.filename = getname_flags(fname, lookup_flags, NULL);
        if (IS_ERR(req->open.filename)) {
                ret = PTR_ERR(req->open.filename);
                req->open.filename = NULL;
                return ret;
        }

        return 0;
}

static int io_statx(struct io_kiocb *req, struct io_kiocb **nxt,
                    bool force_nonblock)
{
        struct io_open *ctx = &req->open;
        unsigned lookup_flags;
        struct path path;
        struct kstat stat;
        int ret;

        if (force_nonblock)
                return -EAGAIN;

        if (vfs_stat_set_lookup_flags(&lookup_flags, ctx->how.flags))
                return -EINVAL;

retry:
        /* filename_lookup() drops it, keep a reference */
        ctx->filename->refcnt++;

        ret = filename_lookup(ctx->dfd, ctx->filename, lookup_flags, &path,
                                NULL);
        if (ret)
                goto err;

        ret = vfs_getattr(&path, &stat, ctx->mask, ctx->how.flags);
        path_put(&path);
        if (retry_estale(ret, lookup_flags)) {
                lookup_flags |= LOOKUP_REVAL;
                goto retry;
        }
        if (!ret)
                ret = cp_statx(&stat, ctx->buffer);
err:
        putname(ctx->filename);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
}
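For this opcode the mask travels in sqe->len, the pathname in sqe->addr, the statx buffer in sqe->addr2, and the flags in sqe->statx_flags; liburing's io_uring_prep_statx() fills those fields. A hedged sketch, with the mask and path chosen arbitrarily:

/* Example (user space): async statx of a path.
 * 'path' and 'stx' must stay valid until the request completes.
 */
#include <liburing.h>
#include <fcntl.h>
#include <sys/stat.h>

static int queue_statx(struct io_uring *ring, const char *path,
                       struct statx *stx)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        io_uring_prep_statx(sqe, AT_FDCWD, path, 0,
                            STATX_SIZE | STATX_MTIME, stx);
        return io_uring_submit(ring);
}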

static int io_close_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        /*
         * If we queue this for async, it must not be cancellable. That would
         * leave the 'file' in an indeterminate state.
         */
        req->work.flags |= IO_WQ_WORK_NO_CANCEL;

        if (sqe->ioprio || sqe->off || sqe->addr || sqe->len ||
            sqe->rw_flags || sqe->buf_index)
                return -EINVAL;
        if (sqe->flags & IOSQE_FIXED_FILE)
                return -EINVAL;

        req->close.fd = READ_ONCE(sqe->fd);
        if (req->file->f_op == &io_uring_fops ||
            req->close.fd == req->ctx->ring_fd)
                return -EBADF;

        return 0;
}

static void io_close_finish(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct io_kiocb *nxt = NULL;

        /* Invoked with files, we need to do the close */
        if (req->work.files) {
                int ret;

                ret = filp_close(req->close.put_file, req->work.files);
                if (ret < 0)
                        req_set_fail_links(req);
                io_cqring_add_event(req, ret);
        }

        fput(req->close.put_file);

        /* we bypassed the re-issue, drop the submission reference */
        io_put_req(req);
        io_put_req_find_next(req, &nxt);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}

static int io_close(struct io_kiocb *req, struct io_kiocb **nxt,
                    bool force_nonblock)
{
        int ret;

        req->close.put_file = NULL;
        ret = __close_fd_get_file(req->close.fd, &req->close.put_file);
        if (ret < 0)
                return ret;

        /* if the file has a flush method, be safe and punt to async */
        if (req->close.put_file->f_op->flush && !io_wq_current_is_worker()) {
                req->work.flags |= IO_WQ_WORK_NEEDS_FILES;
                goto eagain;
        }

        /*
         * No ->flush(), safely close from here and just punt the
         * fput() to async context.
         */
        ret = filp_close(req->close.put_file, current->files);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);

        if (io_wq_current_is_worker()) {
                struct io_wq_work *old_work, *work;

                old_work = work = &req->work;
                io_close_finish(&work);
                if (work && work != old_work)
                        *nxt = container_of(work, struct io_kiocb, work);
                return 0;
        }

eagain:
        req->work.func = io_close_finish;
        return -EAGAIN;
}

static int io_prep_sfr(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_ring_ctx *ctx = req->ctx;

        if (!req->file)
                return -EBADF;

        if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;
        if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index))
                return -EINVAL;

        req->sync.off = READ_ONCE(sqe->off);
        req->sync.len = READ_ONCE(sqe->len);
        req->sync.flags = READ_ONCE(sqe->sync_range_flags);
        return 0;
}

static void io_sync_file_range_finish(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct io_kiocb *nxt = NULL;
        int ret;

        if (io_req_cancelled(req))
                return;

        ret = sync_file_range(req->file, req->sync.off, req->sync.len,
                                req->sync.flags);
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, &nxt);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}

static int io_sync_file_range(struct io_kiocb *req, struct io_kiocb **nxt,
                              bool force_nonblock)
{
        struct io_wq_work *work, *old_work;

        /* sync_file_range always requires a blocking context */
        if (force_nonblock) {
                io_put_req(req);
                req->work.func = io_sync_file_range_finish;
                return -EAGAIN;
        }

        work = old_work = &req->work;
        io_sync_file_range_finish(&work);
        if (work && work != old_work)
                *nxt = container_of(work, struct io_kiocb, work);
        return 0;
}

#if defined(CONFIG_NET)
static void io_sendrecv_async(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct iovec *iov = NULL;

        if (req->io->rw.iov != req->io->rw.fast_iov)
                iov = req->io->msg.iov;
        io_wq_submit_work(workptr);
        kfree(iov);
}
#endif

static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
#if defined(CONFIG_NET)
        struct io_sr_msg *sr = &req->sr_msg;
        struct io_async_ctx *io = req->io;

        sr->msg_flags = READ_ONCE(sqe->msg_flags);
        sr->msg = u64_to_user_ptr(READ_ONCE(sqe->addr));
        sr->len = READ_ONCE(sqe->len);

        if (!io || req->opcode == IORING_OP_SEND)
                return 0;

        io->msg.iov = io->msg.fast_iov;
        return sendmsg_copy_msghdr(&io->msg.msg, sr->msg, sr->msg_flags,
                                        &io->msg.iov);
#else
        return -EOPNOTSUPP;
#endif
}

static int io_sendmsg(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
#if defined(CONFIG_NET)
        struct io_async_msghdr *kmsg = NULL;
        struct socket *sock;
        int ret;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;

        sock = sock_from_file(req->file, &ret);
        if (sock) {
                struct io_async_ctx io;
                struct sockaddr_storage addr;
                unsigned flags;

                if (req->io) {
                        kmsg = &req->io->msg;
                        kmsg->msg.msg_name = &addr;
                        /* if iov is set, it's allocated already */
                        if (!kmsg->iov)
                                kmsg->iov = kmsg->fast_iov;
                        kmsg->msg.msg_iter.iov = kmsg->iov;
                } else {
                        struct io_sr_msg *sr = &req->sr_msg;

                        kmsg = &io.msg;
                        kmsg->msg.msg_name = &addr;

                        io.msg.iov = io.msg.fast_iov;
                        ret = sendmsg_copy_msghdr(&io.msg.msg, sr->msg,
                                        sr->msg_flags, &io.msg.iov);
                        if (ret)
                                return ret;
                }

                flags = req->sr_msg.msg_flags;
                if (flags & MSG_DONTWAIT)
                        req->flags |= REQ_F_NOWAIT;
                else if (force_nonblock)
                        flags |= MSG_DONTWAIT;

                ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
                if (force_nonblock && ret == -EAGAIN) {
                        if (req->io)
                                return -EAGAIN;
                        if (io_alloc_async_ctx(req))
                                return -ENOMEM;
                        memcpy(&req->io->msg, &io.msg, sizeof(io.msg));
                        req->work.func = io_sendrecv_async;
                        return -EAGAIN;
                }
                if (ret == -ERESTARTSYS)
                        ret = -EINTR;
        }

        if (!io_wq_current_is_worker() && kmsg && kmsg->iov != kmsg->fast_iov)
                kfree(kmsg->iov);
        io_cqring_add_event(req, ret);
        if (ret < 0)
                req_set_fail_links(req);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

static int io_send(struct io_kiocb *req, struct io_kiocb **nxt,
                   bool force_nonblock)
{
#if defined(CONFIG_NET)
        struct socket *sock;
        int ret;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;

        sock = sock_from_file(req->file, &ret);
        if (sock) {
                struct io_sr_msg *sr = &req->sr_msg;
                struct msghdr msg;
                struct iovec iov;
                unsigned flags;

                ret = import_single_range(WRITE, sr->buf, sr->len, &iov,
                                                &msg.msg_iter);
                if (ret)
                        return ret;

                msg.msg_name = NULL;
                msg.msg_control = NULL;
                msg.msg_controllen = 0;
                msg.msg_namelen = 0;

                flags = req->sr_msg.msg_flags;
                if (flags & MSG_DONTWAIT)
                        req->flags |= REQ_F_NOWAIT;
                else if (force_nonblock)
                        flags |= MSG_DONTWAIT;

                ret = __sys_sendmsg_sock(sock, &msg, flags);
                if (force_nonblock && ret == -EAGAIN)
                        return -EAGAIN;
                if (ret == -ERESTARTSYS)
                        ret = -EINTR;
        }

        io_cqring_add_event(req, ret);
        if (ret < 0)
                req_set_fail_links(req);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}
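IORING_OP_SEND and IORING_OP_RECV carry a single buffer in sqe->addr/sqe->len instead of a full msghdr, as the single-range import above shows. A hedged liburing sketch for a connected socket, assuming both opcodes are supported by the running kernel:

/* Example (user space): single-buffer send and receive on a socket. */
#include <liburing.h>

static int queue_echo(struct io_uring *ring, int sockfd,
                      const void *out, unsigned out_len,
                      void *in, unsigned in_len)
{
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_send(sqe, sockfd, out, out_len, 0);

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_recv(sqe, sockfd, in, in_len, 0);

        return io_uring_submit(ring);
}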

static int io_recvmsg_prep(struct io_kiocb *req,
                           const struct io_uring_sqe *sqe)
{
#if defined(CONFIG_NET)
        struct io_sr_msg *sr = &req->sr_msg;
        struct io_async_ctx *io = req->io;

        sr->msg_flags = READ_ONCE(sqe->msg_flags);
        sr->msg = u64_to_user_ptr(READ_ONCE(sqe->addr));

        if (!io || req->opcode == IORING_OP_RECV)
                return 0;

        io->msg.iov = io->msg.fast_iov;
        return recvmsg_copy_msghdr(&io->msg.msg, sr->msg, sr->msg_flags,
                                        &io->msg.uaddr, &io->msg.iov);
#else
        return -EOPNOTSUPP;
#endif
}

static int io_recvmsg(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
#if defined(CONFIG_NET)
        struct io_async_msghdr *kmsg = NULL;
        struct socket *sock;
        int ret;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;

        sock = sock_from_file(req->file, &ret);
        if (sock) {
                struct io_async_ctx io;
                struct sockaddr_storage addr;
                unsigned flags;

                if (req->io) {
                        kmsg = &req->io->msg;
                        kmsg->msg.msg_name = &addr;
                        /* if iov is set, it's allocated already */
                        if (!kmsg->iov)
                                kmsg->iov = kmsg->fast_iov;
                        kmsg->msg.msg_iter.iov = kmsg->iov;
                } else {
                        struct io_sr_msg *sr = &req->sr_msg;

                        kmsg = &io.msg;
                        kmsg->msg.msg_name = &addr;

                        io.msg.iov = io.msg.fast_iov;
                        ret = recvmsg_copy_msghdr(&io.msg.msg, sr->msg,
                                        sr->msg_flags, &io.msg.uaddr,
                                        &io.msg.iov);
                        if (ret)
                                return ret;
                }

                flags = req->sr_msg.msg_flags;
                if (flags & MSG_DONTWAIT)
                        req->flags |= REQ_F_NOWAIT;
                else if (force_nonblock)
                        flags |= MSG_DONTWAIT;

                ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.msg,
                                                kmsg->uaddr, flags);
                if (force_nonblock && ret == -EAGAIN) {
                        if (req->io)
                                return -EAGAIN;
                        if (io_alloc_async_ctx(req))
                                return -ENOMEM;
                        memcpy(&req->io->msg, &io.msg, sizeof(io.msg));
                        req->work.func = io_sendrecv_async;
                        return -EAGAIN;
                }
                if (ret == -ERESTARTSYS)
                        ret = -EINTR;
        }

        if (!io_wq_current_is_worker() && kmsg && kmsg->iov != kmsg->fast_iov)
                kfree(kmsg->iov);
        io_cqring_add_event(req, ret);
        if (ret < 0)
                req_set_fail_links(req);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

static int io_recv(struct io_kiocb *req, struct io_kiocb **nxt,
                   bool force_nonblock)
{
#if defined(CONFIG_NET)
        struct socket *sock;
        int ret;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;

        sock = sock_from_file(req->file, &ret);
        if (sock) {
                struct io_sr_msg *sr = &req->sr_msg;
                struct msghdr msg;
                struct iovec iov;
                unsigned flags;

                ret = import_single_range(READ, sr->buf, sr->len, &iov,
                                                &msg.msg_iter);
                if (ret)
                        return ret;

                msg.msg_name = NULL;
                msg.msg_control = NULL;
                msg.msg_controllen = 0;
                msg.msg_namelen = 0;
                msg.msg_iocb = NULL;
                msg.msg_flags = 0;

                flags = req->sr_msg.msg_flags;
                if (flags & MSG_DONTWAIT)
                        req->flags |= REQ_F_NOWAIT;
                else if (force_nonblock)
                        flags |= MSG_DONTWAIT;

                ret = __sys_recvmsg_sock(sock, &msg, NULL, NULL, flags);
                if (force_nonblock && ret == -EAGAIN)
                        return -EAGAIN;
                if (ret == -ERESTARTSYS)
                        ret = -EINTR;
        }

        io_cqring_add_event(req, ret);
        if (ret < 0)
                req_set_fail_links(req);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
#if defined(CONFIG_NET)
        struct io_accept *accept = &req->accept;

        if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
                return -EINVAL;
        if (sqe->ioprio || sqe->len || sqe->buf_index)
                return -EINVAL;

        accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
        accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));
        accept->flags = READ_ONCE(sqe->accept_flags);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}

#if defined(CONFIG_NET)
static int __io_accept(struct io_kiocb *req, struct io_kiocb **nxt,
                       bool force_nonblock)
{
        struct io_accept *accept = &req->accept;
        unsigned file_flags;
        int ret;

        file_flags = force_nonblock ? O_NONBLOCK : 0;
        ret = __sys_accept4_file(req->file, file_flags, accept->addr,
                                        accept->addr_len, accept->flags);
        if (ret == -EAGAIN && force_nonblock)
                return -EAGAIN;
        if (ret == -ERESTARTSYS)
                ret = -EINTR;
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
}

static void io_accept_finish(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct io_kiocb *nxt = NULL;

        if (io_req_cancelled(req))
                return;
        __io_accept(req, &nxt, false);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}
#endif

static int io_accept(struct io_kiocb *req, struct io_kiocb **nxt,
                     bool force_nonblock)
{
#if defined(CONFIG_NET)
        int ret;

        ret = __io_accept(req, nxt, force_nonblock);
        if (ret == -EAGAIN && force_nonblock) {
                req->work.func = io_accept_finish;
                req->work.flags |= IO_WQ_WORK_NEEDS_FILES;
                io_put_req(req);
                return -EAGAIN;
        }
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}
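IORING_OP_ACCEPT mirrors accept4(): the sockaddr pointer rides in sqe->addr and the addrlen pointer in sqe->addr2, exactly as the prep handler above decodes them. A hedged sketch, assuming liburing and a listening socket:

/* Example (user space): async accept.
 * 'peer' and 'peer_len' must remain valid until the CQE arrives;
 * on completion, cqe->res is the accepted file descriptor.
 */
#include <liburing.h>
#include <sys/socket.h>

static int queue_accept(struct io_uring *ring, int listen_fd,
                        struct sockaddr_storage *peer, socklen_t *peer_len)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        *peer_len = sizeof(*peer);
        io_uring_prep_accept(sqe, listen_fd, (struct sockaddr *)peer,
                             peer_len, SOCK_CLOEXEC);
        return io_uring_submit(ring);
}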

static int io_connect_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
#if defined(CONFIG_NET)
        struct io_connect *conn = &req->connect;
        struct io_async_ctx *io = req->io;

        if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
                return -EINVAL;
        if (sqe->ioprio || sqe->len || sqe->buf_index || sqe->rw_flags)
                return -EINVAL;

        conn->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));
        conn->addr_len = READ_ONCE(sqe->addr2);

        if (!io)
                return 0;

        return move_addr_to_kernel(conn->addr, conn->addr_len,
                                        &io->connect.address);
#else
        return -EOPNOTSUPP;
#endif
}

static int io_connect(struct io_kiocb *req, struct io_kiocb **nxt,
                      bool force_nonblock)
{
#if defined(CONFIG_NET)
        struct io_async_ctx __io, *io;
        unsigned file_flags;
        int ret;

        if (req->io) {
                io = req->io;
        } else {
                ret = move_addr_to_kernel(req->connect.addr,
                                                req->connect.addr_len,
                                                &__io.connect.address);
                if (ret)
                        goto out;
                io = &__io;
        }

        file_flags = force_nonblock ? O_NONBLOCK : 0;

        ret = __sys_connect_file(req->file, &io->connect.address,
                                        req->connect.addr_len, file_flags);
        if ((ret == -EAGAIN || ret == -EINPROGRESS) && force_nonblock) {
                if (req->io)
                        return -EAGAIN;
                if (io_alloc_async_ctx(req)) {
                        ret = -ENOMEM;
                        goto out;
                }
                memcpy(&req->io->connect, &__io.connect, sizeof(__io.connect));
                return -EAGAIN;
        }
        if (ret == -ERESTARTSYS)
                ret = -EINTR;
out:
        if (ret < 0)
                req_set_fail_links(req);
        io_cqring_add_event(req, ret);
        io_put_req_find_next(req, nxt);
        return 0;
#else
        return -EOPNOTSUPP;
#endif
}
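The matching submission passes the sockaddr pointer in sqe->addr and its length in sqe->addr2. A hedged liburing sketch, with the address assumed to be prepared by the caller:

/* Example (user space): async connect.
 * 'addr' should remain valid until the request completes;
 * cqe->res is 0 on success or a negative errno.
 */
#include <liburing.h>
#include <sys/socket.h>

static int queue_connect(struct io_uring *ring, int sockfd,
                         const struct sockaddr *addr, socklen_t addrlen)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        io_uring_prep_connect(sqe, sockfd, addr, addrlen);
        return io_uring_submit(ring);
}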

static void io_poll_remove_one(struct io_kiocb *req)
{
        struct io_poll_iocb *poll = &req->poll;

        spin_lock(&poll->head->lock);
        WRITE_ONCE(poll->canceled, true);
        if (!list_empty(&poll->wait.entry)) {
                list_del_init(&poll->wait.entry);
                io_queue_async_work(req);
        }
        spin_unlock(&poll->head->lock);
        hash_del(&req->hash_node);
}

static void io_poll_remove_all(struct io_ring_ctx *ctx)
{
        struct hlist_node *tmp;
        struct io_kiocb *req;
        int i;

        spin_lock_irq(&ctx->completion_lock);
        for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
                struct hlist_head *list;

                list = &ctx->cancel_hash[i];
                hlist_for_each_entry_safe(req, tmp, list, hash_node)
                        io_poll_remove_one(req);
        }
        spin_unlock_irq(&ctx->completion_lock);
}

static int io_poll_cancel(struct io_ring_ctx *ctx, __u64 sqe_addr)
{
        struct hlist_head *list;
        struct io_kiocb *req;

        list = &ctx->cancel_hash[hash_long(sqe_addr, ctx->cancel_hash_bits)];
        hlist_for_each_entry(req, list, hash_node) {
                if (sqe_addr == req->user_data) {
                        io_poll_remove_one(req);
                        return 0;
                }
        }

        return -ENOENT;
}

static int io_poll_remove_prep(struct io_kiocb *req,
                               const struct io_uring_sqe *sqe)
{
        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;
        if (sqe->ioprio || sqe->off || sqe->len || sqe->buf_index ||
            sqe->poll_events)
                return -EINVAL;

        req->poll.addr = READ_ONCE(sqe->addr);
        return 0;
}

/*
 * Find a running poll command that matches one specified in sqe->addr,
 * and remove it if found.
 */
static int io_poll_remove(struct io_kiocb *req)
{
        struct io_ring_ctx *ctx = req->ctx;
        u64 addr;
        int ret;

        addr = req->poll.addr;
        spin_lock_irq(&ctx->completion_lock);
        ret = io_poll_cancel(ctx, addr);
        spin_unlock_irq(&ctx->completion_lock);

        io_cqring_add_event(req, ret);
        if (ret < 0)
                req_set_fail_links(req);
        io_put_req(req);
        return 0;
}
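From user space, poll removal is keyed by the user_data of the original IORING_OP_POLL_ADD, passed back in sqe->addr as the cancel lookup above shows. A hedged sketch that arms a poll and then queues a matching remove; the 0x1234 tag is arbitrary and the raw sqe fill simply mirrors the kernel-side decoding:

/* Example (user space): arm a poll on 'fd', then cancel it by tag. */
#include <liburing.h>
#include <poll.h>
#include <string.h>

static int queue_poll_and_cancel(struct io_uring *ring, int fd)
{
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_poll_add(sqe, fd, POLLIN);
        io_uring_sqe_set_data(sqe, (void *)0x1234);     /* arbitrary tag */

        sqe = io_uring_get_sqe(ring);
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_POLL_REMOVE;
        /* the remove op looks the poll up by its user_data, via ->addr */
        sqe->addr = 0x1234;

        return io_uring_submit(ring);
}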

static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
{
        struct io_ring_ctx *ctx = req->ctx;

io_uring: fix poll races
This is a straight port of Al's fix for the aio poll implementation,
since the io_uring version is heavily based on that. The below
description is almost straight from that patch, just modified to
fit the io_uring situation.
io_poll() has to cope with several unpleasant problems:
* requests that might stay around indefinitely need to
be made visible for io_cancel(2); that must not be done to
a request already completed, though.
* in cases when ->poll() has placed us on a waitqueue,
wakeup might have happened (and request completed) before ->poll()
returns.
* worse, in some early wakeup cases request might end
up re-added into the queue later - we can't treat "woken up and
currently not in the queue" as "it's not going to stick around
indefinitely"
* ... moreover, ->poll() might have decided not to
put it on any queues to start with, and that needs to be distinguished
from the previous case
* ->poll() might have tried to put us on more than one queue.
Only the first will succeed for io poll, so we might end up missing
wakeups. OTOH, we might very well notice that only after the
wakeup hits and request gets completed (all before ->poll() gets
around to the second poll_wait()). In that case it's too late to
decide that we have an error.
req->woken was an attempt to deal with that. Unfortunately, it was
broken. What we need to keep track of is not that wakeup has happened -
the thing might come back after that. It's that async reference is
already gone and won't come back, so we can't (and needn't) put the
request on the list of cancellables.
The easiest case is "request hadn't been put on any waitqueues"; we
can tell by seeing NULL apt.head, and in that case there won't be
anything async. We should either complete the request ourselves
(if vfs_poll() reports anything of interest) or return an error.
In all other cases we get exclusion with wakeups by grabbing the
queue lock.
If request is currently on queue and we have something interesting
from vfs_poll(), we can steal it and complete the request ourselves.
If it's on queue and vfs_poll() has not reported anything interesting,
we either put it on the cancellable list, or, if we know that it
hadn't been put on all queues ->poll() wanted it on, we steal it and
return an error.
If it's _not_ on queue, it's either been already dealt with (in which
case we do nothing), or there's io_poll_complete_work() about to be
executed. In that case we either put it on the cancellable list,
or, if we know it hadn't been put on all queues ->poll() wanted it on,
simulate what cancel would've done.
Fixes: 221c5eb23382 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-03-13 00:48:16 +03:00
        req->poll.done = true;
        if (error)
                io_cqring_fill_event(req, error);
        else
                io_cqring_fill_event(req, mangle_poll(mask));
        io_commit_cqring(ctx);
}

static void io_poll_complete_work(struct io_wq_work **workptr)
{
        struct io_wq_work *work = *workptr;
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);
        struct io_poll_iocb *poll = &req->poll;
        struct poll_table_struct pt = { ._key = poll->events };
        struct io_ring_ctx *ctx = req->ctx;
        struct io_kiocb *nxt = NULL;
        __poll_t mask = 0;
        int ret = 0;

        if (work->flags & IO_WQ_WORK_CANCEL) {
                WRITE_ONCE(poll->canceled, true);
                ret = -ECANCELED;
        } else if (READ_ONCE(poll->canceled)) {
                ret = -ECANCELED;
        }

        if (ret != -ECANCELED)
                mask = vfs_poll(poll->file, &pt) & poll->events;

        /*
         * Note that ->ki_cancel callers also delete iocb from active_reqs after
         * calling ->ki_cancel. We need the ctx_lock roundtrip here to
         * synchronize with them. In the cancellation case the list_del_init
         * itself is not actually needed, but harmless so we keep it in to
         * avoid further branches in the fast path.
         */
        spin_lock_irq(&ctx->completion_lock);
        if (!mask && ret != -ECANCELED) {
                add_wait_queue(poll->head, &poll->wait);
                spin_unlock_irq(&ctx->completion_lock);
                return;
        }
        hash_del(&req->hash_node);
        io_poll_complete(req, mask, ret);
        spin_unlock_irq(&ctx->completion_lock);
        io_cqring_ev_posted(ctx);

        if (ret < 0)
                req_set_fail_links(req);
        io_put_req_find_next(req, &nxt);
        if (nxt)
                io_wq_assign_next(workptr, nxt);
}

static void __io_poll_flush(struct io_ring_ctx *ctx, struct llist_node *nodes)
{
        struct io_kiocb *req, *tmp;
        struct req_batch rb;

        rb.to_free = rb.need_iter = 0;
        spin_lock_irq(&ctx->completion_lock);
        llist_for_each_entry_safe(req, tmp, nodes, llist_node) {
                hash_del(&req->hash_node);
                io_poll_complete(req, req->result, 0);

                if (refcount_dec_and_test(&req->refs) &&
                    !io_req_multi_free(&rb, req)) {
                        req->flags |= REQ_F_COMP_LOCKED;
                        io_free_req(req);
                }
        }
        spin_unlock_irq(&ctx->completion_lock);

        io_cqring_ev_posted(ctx);
        io_free_req_many(ctx, &rb);
}

static void io_poll_flush(struct io_wq_work **workptr)
{
        struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
        struct llist_node *nodes;

        nodes = llist_del_all(&req->ctx->poll_llist);
        if (nodes)
                __io_poll_flush(req->ctx, nodes);
}

static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
                        void *key)
{
        struct io_poll_iocb *poll = wait->private;
        struct io_kiocb *req = container_of(poll, struct io_kiocb, poll);
        struct io_ring_ctx *ctx = req->ctx;
        __poll_t mask = key_to_poll(key);

        /* for instances that support it check for an event match first: */
        if (mask && !(mask & poll->events))
                return 0;

        list_del_init(&poll->wait.entry);

        /*
         * Run completion inline if we can. We're using trylock here because
         * we are violating the completion_lock -> poll wq lock ordering.
         * If we have a link timeout we're going to need the completion_lock
         * for finalizing the request, mark us as having grabbed that already.
         */
        if (mask) {
                unsigned long flags;

                if (llist_empty(&ctx->poll_llist) &&
                    spin_trylock_irqsave(&ctx->completion_lock, flags)) {
                        hash_del(&req->hash_node);
                        io_poll_complete(req, mask, 0);
                        req->flags |= REQ_F_COMP_LOCKED;
                        io_put_req(req);
                        spin_unlock_irqrestore(&ctx->completion_lock, flags);

                        io_cqring_ev_posted(ctx);
                        req = NULL;
                } else {
                        req->result = mask;
                        req->llist_node.next = NULL;
                        /* if the list wasn't empty, we're done */
                        if (!llist_add(&req->llist_node, &ctx->poll_llist))
                                req = NULL;
                        else
                                req->work.func = io_poll_flush;
                }
        }
        if (req)
                io_queue_async_work(req);

        return 1;
}

struct io_poll_table {
        struct poll_table_struct pt;
        struct io_kiocb *req;
        int error;
};

static void io_poll_queue_proc(struct file *file, struct wait_queue_head *head,
                               struct poll_table_struct *p)
{
        struct io_poll_table *pt = container_of(p, struct io_poll_table, pt);

        if (unlikely(pt->req->poll.head)) {
                pt->error = -EINVAL;
                return;
        }

        pt->error = 0;
        pt->req->poll.head = head;
        add_wait_queue(head, &pt->req->poll.wait);
}

static void io_poll_req_insert(struct io_kiocb *req)
{
        struct io_ring_ctx *ctx = req->ctx;
        struct hlist_head *list;

        list = &ctx->cancel_hash[hash_long(req->user_data, ctx->cancel_hash_bits)];
        hlist_add_head(&req->hash_node, list);
}

static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_poll_iocb *poll = &req->poll;
        u16 events;

        if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
                return -EINVAL;
        if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
                return -EINVAL;
        if (!poll->file)
                return -EBADF;

        events = READ_ONCE(sqe->poll_events);
        poll->events = demangle_poll(events) | EPOLLERR | EPOLLHUP;
        return 0;
}

static int io_poll_add(struct io_kiocb *req, struct io_kiocb **nxt)
{
        struct io_poll_iocb *poll = &req->poll;
        struct io_ring_ctx *ctx = req->ctx;
        struct io_poll_table ipt;
        bool cancel = false;
        __poll_t mask;

        INIT_IO_WORK(&req->work, io_poll_complete_work);
        INIT_HLIST_NODE(&req->hash_node);

        poll->head = NULL;
	poll->done = false;
	poll->canceled = false;

	ipt.pt._qproc = io_poll_queue_proc;
	ipt.pt._key = poll->events;
	ipt.req = req;
	ipt.error = -EINVAL; /* same as no support for IOCB_CMD_POLL */

	/* initialize the list so that we can do list_empty checks */
	INIT_LIST_HEAD(&poll->wait.entry);
	init_waitqueue_func_entry(&poll->wait, io_poll_wake);
	poll->wait.private = poll;

	INIT_LIST_HEAD(&req->list);

	mask = vfs_poll(poll->file, &ipt.pt) & poll->events;

	spin_lock_irq(&ctx->completion_lock);
	if (likely(poll->head)) {
		spin_lock(&poll->head->lock);
		if (unlikely(list_empty(&poll->wait.entry))) {
			if (ipt.error)
				cancel = true;
			ipt.error = 0;
			mask = 0;
		}
		if (mask || ipt.error)
			list_del_init(&poll->wait.entry);
		else if (cancel)
			WRITE_ONCE(poll->canceled, true);
		else if (!poll->done) /* actually waiting for an event */
			io_poll_req_insert(req);
		spin_unlock(&poll->head->lock);
	}
	if (mask) { /* no async, we'd stolen it */
		ipt.error = 0;
		io_poll_complete(req, mask, 0);
	}
	spin_unlock_irq(&ctx->completion_lock);
	if (mask) {
		io_cqring_ev_posted(ctx);
		io_put_req_find_next(req, nxt);
	}
	return ipt.error;
}
static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
{
	struct io_timeout_data *data = container_of(timer,
						struct io_timeout_data, timer);
	struct io_kiocb *req = data->req;
	struct io_ring_ctx *ctx = req->ctx;
	unsigned long flags;

	atomic_inc(&ctx->cq_timeouts);

	spin_lock_irqsave(&ctx->completion_lock, flags);
	/*
	 * We could be racing with timeout deletion. If the list is empty,
	 * then timeout lookup already found it and will be handling it.
	 */
	if (!list_empty(&req->list)) {
		struct io_kiocb *prev;

		/*
		 * Adjust the reqs sequence before the current one because it
		 * will consume a slot in the cq_ring and the cq_tail
		 * pointer will be increased, otherwise other timeout reqs may
		 * return in advance without waiting for enough wait_nr.
		 */
		prev = req;
		list_for_each_entry_continue_reverse(prev, &ctx->timeout_list, list)
			prev->sequence++;
		list_del_init(&req->list);
	}

	io_cqring_fill_event(req, -ETIME);
	io_commit_cqring(ctx);
	spin_unlock_irqrestore(&ctx->completion_lock, flags);

	io_cqring_ev_posted(ctx);
	req_set_fail_links(req);
	io_put_req(req);
	return HRTIMER_NORESTART;
}
static int io_timeout_cancel(struct io_ring_ctx *ctx, __u64 user_data)
{
	struct io_kiocb *req;
	int ret = -ENOENT;

	list_for_each_entry(req, &ctx->timeout_list, list) {
		if (user_data == req->user_data) {
			list_del_init(&req->list);
			ret = 0;
			break;
		}
	}

	if (ret == -ENOENT)
		return ret;

	ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
	if (ret == -1)
		return -EALREADY;

	req_set_fail_links(req);
	io_cqring_fill_event(req, -ECANCELED);
	io_put_req(req);
	return 0;
}
static int io_timeout_remove_prep(struct io_kiocb *req,
				  const struct io_uring_sqe *sqe)
{
	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;
	if (sqe->flags || sqe->ioprio || sqe->buf_index || sqe->len)
		return -EINVAL;

	req->timeout.addr = READ_ONCE(sqe->addr);
	req->timeout.flags = READ_ONCE(sqe->timeout_flags);
	if (req->timeout.flags)
		return -EINVAL;

	return 0;
}
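From userspace, a pending timeout is addressed by the user_data it was originally submitted with: sqe->addr carries that value and timeout_flags must be zero. A minimal sketch, where prep_timeout_remove() is illustrative rather than part of the kernel interface:

#include <string.h>
#include <linux/io_uring.h>

/* Cancel a pending timeout identified by the user_data it was queued with. */
static void prep_timeout_remove(struct io_uring_sqe *sqe, __u64 target_user_data)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_TIMEOUT_REMOVE;
	sqe->fd = -1;			/* no file is involved */
	sqe->addr = target_user_data;	/* matched against req->user_data */
	sqe->user_data = 0xdeadbeef;	/* tags this remove request's own CQE */
}

The remove request's own completion then carries 0 on success, -ENOENT if no matching timeout was found, or -EALREADY if the timer had already fired, mirroring io_timeout_cancel() above.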
/*
 * Remove or update an existing timeout command
 */
static int io_timeout_remove(struct io_kiocb *req)
{
	struct io_ring_ctx *ctx = req->ctx;
	int ret;

	spin_lock_irq(&ctx->completion_lock);
	ret = io_timeout_cancel(ctx, req->timeout.addr);

	io_cqring_fill_event(req, ret);
	io_commit_cqring(ctx);
	spin_unlock_irq(&ctx->completion_lock);
	io_cqring_ev_posted(ctx);
	if (ret < 0)
		req_set_fail_links(req);
	io_put_req(req);
	return 0;
}
static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
			   bool is_timeout_link)
{
	struct io_timeout_data *data;
	unsigned flags;

	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;
	if (sqe->ioprio || sqe->buf_index || sqe->len != 1)
		return -EINVAL;
	if (sqe->off && is_timeout_link)
		return -EINVAL;
	flags = READ_ONCE(sqe->timeout_flags);
	if (flags & ~IORING_TIMEOUT_ABS)
		return -EINVAL;

	req->timeout.count = READ_ONCE(sqe->off);

	if (!req->io && io_alloc_async_ctx(req))
		return -ENOMEM;

	data = &req->io->timeout;
	data->req = req;
	req->flags |= REQ_F_TIMEOUT;

	if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
		return -EFAULT;

	if (flags & IORING_TIMEOUT_ABS)
		data->mode = HRTIMER_MODE_ABS;
	else
		data->mode = HRTIMER_MODE_REL;

	hrtimer_init(&data->timer, CLOCK_MONOTONIC, data->mode);
	return 0;
}
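Putting the prep checks in application terms: addr points at a struct __kernel_timespec, len must be exactly 1, off optionally holds a completion count, and timeout_flags may carry IORING_TIMEOUT_ABS. A minimal sketch; prep_timeout() is illustrative, and the timespec must remain valid until the kernel has consumed the SQE:

#include <string.h>
#include <linux/io_uring.h>
#include <linux/time_types.h>

/* Queue a relative timeout that fires after 'ts', or earlier once
 * 'count' completions have been posted (0 == pure timeout). */
static void prep_timeout(struct io_uring_sqe *sqe,
			 const struct __kernel_timespec *ts, unsigned count)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_TIMEOUT;
	sqe->fd = -1;
	sqe->addr = (__u64)(unsigned long) ts;	/* read via get_timespec64() */
	sqe->len = 1;				/* exactly one timespec */
	sqe->off = count;			/* becomes req->timeout.count */
	sqe->timeout_flags = 0;			/* or IORING_TIMEOUT_ABS */
}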
static int io_timeout(struct io_kiocb *req)
{
	unsigned count;
	struct io_ring_ctx *ctx = req->ctx;
	struct io_timeout_data *data;
	struct list_head *entry;
	unsigned span = 0;

	data = &req->io->timeout;

	/*
	 * sqe->off holds how many events need to occur for this timeout
	 * to be satisfied. If it isn't set, then this is a pure timeout
	 * request and the sequence isn't used.
	 */
	count = req->timeout.count;
	if (!count) {
		req->flags |= REQ_F_TIMEOUT_NOSEQ;
		spin_lock_irq(&ctx->completion_lock);
		entry = ctx->timeout_list.prev;
		goto add;
	}

	req->sequence = ctx->cached_sq_head + count - 1;
	data->seq_offset = count;

	/*
	 * Insertion sort, ensuring the first entry in the list is always
	 * the one we need first.
	 */
	spin_lock_irq(&ctx->completion_lock);
	list_for_each_prev(entry, &ctx->timeout_list) {
		struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
		unsigned nxt_sq_head;
		long long tmp, tmp_nxt;
		u32 nxt_offset = nxt->io->timeout.seq_offset;

		if (nxt->flags & REQ_F_TIMEOUT_NOSEQ)
			continue;

		/*
		 * Since cached_sq_head + count - 1 can overflow, use type
		 * long long to store it.
		 */
		tmp = (long long)ctx->cached_sq_head + count - 1;
		nxt_sq_head = nxt->sequence - nxt_offset + 1;
		tmp_nxt = (long long)nxt_sq_head + nxt_offset - 1;

		/*
		 * cached_sq_head may overflow, but it cannot overflow twice
		 * while a queued timeout req is still valid.
		 */
		if (ctx->cached_sq_head < nxt_sq_head)
			tmp += UINT_MAX;

		if (tmp > tmp_nxt)
			break;

		/*
		 * The sequence of the reqs after the insertion point, and of
		 * this req itself, must be adjusted because each timeout req
		 * consumes a CQ slot.
		 */
		span++;
		nxt->sequence++;
	}
	req->sequence -= span;
add:
	list_add(&req->list, entry);
	data->timer.function = io_timeout_fn;
	hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode);
	spin_unlock_irq(&ctx->completion_lock);
	return 0;
}
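To put concrete numbers on the overflow handling above (an illustrative walk-through, not taken from the code): suppose an earlier timeout was queued when the SQ head was 0xfffffff0 with a count of 0x20, so its effective target is 0xfffffff0 + 0x20 - 1 = 0x10000000f. A new timeout is then queued after the head has wrapped to 0x10 with a count of 5, giving tmp = 0x14. Because cached_sq_head (0x10) is smaller than nxt_sq_head (0xfffffff0), UINT_MAX is added and tmp becomes 0x100000013. That compares greater than 0x10000000f, so the loop breaks and the new request is inserted after the earlier one, preserving the intended firing order across the wrap.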
static bool io_cancel_cb(struct io_wq_work *work, void *data)
{
	struct io_kiocb *req = container_of(work, struct io_kiocb, work);

	return req->user_data == (unsigned long) data;
}

static int io_async_cancel_one(struct io_ring_ctx *ctx, void *sqe_addr)
{
	enum io_wq_cancel cancel_ret;
	int ret = 0;

	cancel_ret = io_wq_cancel_cb(ctx->io_wq, io_cancel_cb, sqe_addr);
	switch (cancel_ret) {
	case IO_WQ_CANCEL_OK:
		ret = 0;
		break;
	case IO_WQ_CANCEL_RUNNING:
		ret = -EALREADY;
		break;
	case IO_WQ_CANCEL_NOTFOUND:
		ret = -ENOENT;
		break;
	}

	return ret;
}
static void io_async_find_and_cancel(struct io_ring_ctx *ctx,
				     struct io_kiocb *req, __u64 sqe_addr,
				     struct io_kiocb **nxt, int success_ret)
{
	unsigned long flags;
	int ret;

	ret = io_async_cancel_one(ctx, (void *) (unsigned long) sqe_addr);
	if (ret != -ENOENT) {
		spin_lock_irqsave(&ctx->completion_lock, flags);
		goto done;
	}

	spin_lock_irqsave(&ctx->completion_lock, flags);
	ret = io_timeout_cancel(ctx, sqe_addr);
	if (ret != -ENOENT)
		goto done;
	ret = io_poll_cancel(ctx, sqe_addr);
done:
	if (!ret)
		ret = success_ret;
	io_cqring_fill_event(req, ret);
	io_commit_cqring(ctx);
	spin_unlock_irqrestore(&ctx->completion_lock, flags);
	io_cqring_ev_posted(ctx);

	if (ret < 0)
		req_set_fail_links(req);
	io_put_req_find_next(req, nxt);
}
static int io_async_cancel_prep(struct io_kiocb *req,
				const struct io_uring_sqe *sqe)
{
	if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
		return -EINVAL;
	if (sqe->flags || sqe->ioprio || sqe->off || sqe->len ||
	    sqe->cancel_flags)
		return -EINVAL;

	req->cancel.addr = READ_ONCE(sqe->addr);
	return 0;
}

static int io_async_cancel(struct io_kiocb *req, struct io_kiocb **nxt)
{
	struct io_ring_ctx *ctx = req->ctx;

	io_async_find_and_cancel(ctx, req, req->cancel.addr, nxt, 0);
	return 0;
}
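Seen from userspace, cancellation names its victim by user_data; the kernel then tries the io-wq worker, pending timeouts and pending polls in turn, as io_async_find_and_cancel() shows. A minimal sketch, with prep_cancel() being illustrative:

#include <string.h>
#include <linux/io_uring.h>

/* Ask the kernel to cancel the in-flight request submitted with
 * 'target_user_data'. The CQE result is 0 on success, -ENOENT if the
 * request was not found, or -EALREADY if it is already running. */
static void prep_cancel(struct io_uring_sqe *sqe, __u64 target_user_data)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_ASYNC_CANCEL;
	sqe->fd = -1;
	sqe->addr = target_user_data;	/* matched against req->user_data */
	sqe->cancel_flags = 0;		/* must be zero, see the prep above */
	sqe->user_data = 1;		/* tags the cancel request's own CQE */
}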
static int io_files_update_prep(struct io_kiocb *req,
				const struct io_uring_sqe *sqe)
{
	if (sqe->flags || sqe->ioprio || sqe->rw_flags)
		return -EINVAL;

	req->files_update.offset = READ_ONCE(sqe->off);
	req->files_update.nr_args = READ_ONCE(sqe->len);
	if (!req->files_update.nr_args)
		return -EINVAL;
	req->files_update.arg = READ_ONCE(sqe->addr);
	return 0;
}

static int io_files_update(struct io_kiocb *req, bool force_nonblock)
{
	struct io_ring_ctx *ctx = req->ctx;
	struct io_uring_files_update up;
	int ret;

	if (force_nonblock) {
		req->work.flags |= IO_WQ_WORK_NEEDS_FILES;
		return -EAGAIN;
	}

	up.offset = req->files_update.offset;
	up.fds = req->files_update.arg;

	mutex_lock(&ctx->uring_lock);
	ret = __io_sqe_files_update(ctx, &up, req->files_update.nr_args);
	mutex_unlock(&ctx->uring_lock);

	if (ret < 0)
		req_set_fail_links(req);
	io_cqring_add_event(req, ret);
	io_put_req(req);
	return 0;
}
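In SQE terms, a files update rewrites a window of the registered file table without re-registering everything: off is the first slot to touch, addr points at an array of file descriptors, and len is the number of entries in that array (it must be non-zero). A minimal sketch; prep_files_update() is illustrative, and the array must stay valid until the request completes:

#include <string.h>
#include <linux/io_uring.h>

/* Replace 'nr' slots of the registered file table, starting at 'offset',
 * with the descriptors in 'fds'. */
static void prep_files_update(struct io_uring_sqe *sqe, const int *fds,
			      unsigned nr, unsigned offset)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_FILES_UPDATE;
	sqe->fd = -1;
	sqe->addr = (__u64)(unsigned long) fds;	/* becomes files_update.arg */
	sqe->len = nr;				/* must be non-zero */
	sqe->off = offset;			/* first slot to rewrite */
	sqe->user_data = 2;
}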
static int io_req_defer_prep(struct io_kiocb *req,
			     const struct io_uring_sqe *sqe)
{
	ssize_t ret = 0;

	io_req_work_grab_env(req, &io_op_defs[req->opcode]);

	switch (req->opcode) {
	case IORING_OP_NOP:
		break;
	case IORING_OP_READV:
	case IORING_OP_READ_FIXED:
	case IORING_OP_READ:
		ret = io_read_prep(req, sqe, true);
		break;
	case IORING_OP_WRITEV:
	case IORING_OP_WRITE_FIXED:
	case IORING_OP_WRITE:
		ret = io_write_prep(req, sqe, true);
		break;
	case IORING_OP_POLL_ADD:
		ret = io_poll_add_prep(req, sqe);
		break;
	case IORING_OP_POLL_REMOVE:
		ret = io_poll_remove_prep(req, sqe);
		break;
	case IORING_OP_FSYNC:
		ret = io_prep_fsync(req, sqe);
		break;
	case IORING_OP_SYNC_FILE_RANGE:
		ret = io_prep_sfr(req, sqe);
		break;
	case IORING_OP_SENDMSG:
	case IORING_OP_SEND:
		ret = io_sendmsg_prep(req, sqe);
		break;
	case IORING_OP_RECVMSG:
	case IORING_OP_RECV:
		ret = io_recvmsg_prep(req, sqe);
		break;
	case IORING_OP_CONNECT:
		ret = io_connect_prep(req, sqe);
		break;
	case IORING_OP_TIMEOUT:
		ret = io_timeout_prep(req, sqe, false);
		break;
	case IORING_OP_TIMEOUT_REMOVE:
		ret = io_timeout_remove_prep(req, sqe);
		break;
	case IORING_OP_ASYNC_CANCEL:
		ret = io_async_cancel_prep(req, sqe);
		break;
	case IORING_OP_LINK_TIMEOUT:
		ret = io_timeout_prep(req, sqe, true);
		break;
	case IORING_OP_ACCEPT:
		ret = io_accept_prep(req, sqe);
		break;
	case IORING_OP_FALLOCATE:
		ret = io_fallocate_prep(req, sqe);
		break;
	case IORING_OP_OPENAT:
		ret = io_openat_prep(req, sqe);
		break;
	case IORING_OP_CLOSE:
		ret = io_close_prep(req, sqe);
		break;
	case IORING_OP_FILES_UPDATE:
		ret = io_files_update_prep(req, sqe);
		break;
	case IORING_OP_STATX:
		ret = io_statx_prep(req, sqe);
		break;
	case IORING_OP_FADVISE:
		ret = io_fadvise_prep(req, sqe);
		break;
	case IORING_OP_MADVISE:
		ret = io_madvise_prep(req, sqe);
		break;
	case IORING_OP_OPENAT2:
		ret = io_openat2_prep(req, sqe);
		break;
	default:
		printk_once(KERN_WARNING "io_uring: unhandled opcode %d\n",
			    req->opcode);
		ret = -EINVAL;
		break;
	}

	return ret;
}
static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
	struct io_ring_ctx *ctx = req->ctx;
	int ret;

	/* Still need defer if there is pending req in defer list. */
	if (!req_need_defer(req) && list_empty(&ctx->defer_list))
		return 0;

	if (!req->io && io_alloc_async_ctx(req))
		return -EAGAIN;

	ret = io_req_defer_prep(req, sqe);
	if (ret < 0)
		return ret;

	spin_lock_irq(&ctx->completion_lock);
	if (!req_need_defer(req) && list_empty(&ctx->defer_list)) {
		spin_unlock_irq(&ctx->completion_lock);
		return 0;
	}

	trace_io_uring_defer(ctx, req, req->user_data);
	list_add_tail(&req->list, &ctx->defer_list);
	spin_unlock_irq(&ctx->completion_lock);
	return -EIOCBQUEUED;
}
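This defer path is what backs the ordering guarantee of IOSQE_IO_DRAIN: a drained request, or anything submitted while the defer list is non-empty, is parked on ctx->defer_list until everything ahead of it has completed. A minimal userspace sketch of marking a request as a drain barrier; prep_fsync_drain() is illustrative:

#include <string.h>
#include <linux/io_uring.h>

/* Submit an fsync that acts as a barrier: it will not start until all
 * previously submitted requests have completed, courtesy of the defer
 * list handled by io_req_defer(). */
static void prep_fsync_drain(struct io_uring_sqe *sqe, int fd)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_FSYNC;
	sqe->flags = IOSQE_IO_DRAIN;
	sqe->fd = fd;
	sqe->user_data = 3;
}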
static int io_issue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
			struct io_kiocb **nxt, bool force_nonblock)
{
	struct io_ring_ctx *ctx = req->ctx;
	int ret;
	switch (req->opcode) {
	case IORING_OP_NOP:
		ret = io_nop(req);
		break;
	case IORING_OP_READV:
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
	case IORING_OP_READ_FIXED:
	case IORING_OP_READ:
		if (sqe) {
			ret = io_read_prep(req, sqe, force_nonblock);
			if (ret < 0)
				break;
		}
		ret = io_read(req, nxt, force_nonblock);
		break;
	case IORING_OP_WRITEV:
	case IORING_OP_WRITE_FIXED:
	case IORING_OP_WRITE:
		if (sqe) {
			ret = io_write_prep(req, sqe, force_nonblock);
			if (ret < 0)
				break;
		}
		ret = io_write(req, nxt, force_nonblock);
		break;
	case IORING_OP_FSYNC:
		if (sqe) {
			ret = io_prep_fsync(req, sqe);
			if (ret < 0)
				break;
		}
		ret = io_fsync(req, nxt, force_nonblock);
		break;
	case IORING_OP_POLL_ADD:
		if (sqe) {
			ret = io_poll_add_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_poll_add(req, nxt);
		break;
	case IORING_OP_POLL_REMOVE:
		if (sqe) {
			ret = io_poll_remove_prep(req, sqe);
			if (ret < 0)
				break;
		}
		ret = io_poll_remove(req);
		break;
	case IORING_OP_SYNC_FILE_RANGE:
		if (sqe) {
			ret = io_prep_sfr(req, sqe);
			if (ret < 0)
				break;
		}
		ret = io_sync_file_range(req, nxt, force_nonblock);
		break;
	case IORING_OP_SENDMSG:
	case IORING_OP_SEND:
		if (sqe) {
			ret = io_sendmsg_prep(req, sqe);
			if (ret < 0)
				break;
		}
		if (req->opcode == IORING_OP_SENDMSG)
			ret = io_sendmsg(req, nxt, force_nonblock);
		else
			ret = io_send(req, nxt, force_nonblock);
		break;
	case IORING_OP_RECVMSG:
	case IORING_OP_RECV:
		if (sqe) {
			ret = io_recvmsg_prep(req, sqe);
			if (ret)
				break;
		}
		if (req->opcode == IORING_OP_RECVMSG)
			ret = io_recvmsg(req, nxt, force_nonblock);
		else
			ret = io_recv(req, nxt, force_nonblock);
		break;
	case IORING_OP_TIMEOUT:
		if (sqe) {
			ret = io_timeout_prep(req, sqe, false);
			if (ret)
				break;
		}
		ret = io_timeout(req);
		break;
	case IORING_OP_TIMEOUT_REMOVE:
		if (sqe) {
			ret = io_timeout_remove_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_timeout_remove(req);
		break;
	case IORING_OP_ACCEPT:
		if (sqe) {
			ret = io_accept_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_accept(req, nxt, force_nonblock);
		break;
	case IORING_OP_CONNECT:
		if (sqe) {
			ret = io_connect_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_connect(req, nxt, force_nonblock);
		break;
	case IORING_OP_ASYNC_CANCEL:
		if (sqe) {
			ret = io_async_cancel_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_async_cancel(req, nxt);
		break;
	case IORING_OP_FALLOCATE:
		if (sqe) {
			ret = io_fallocate_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_fallocate(req, nxt, force_nonblock);
		break;
	case IORING_OP_OPENAT:
		if (sqe) {
			ret = io_openat_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_openat(req, nxt, force_nonblock);
		break;
	case IORING_OP_CLOSE:
		if (sqe) {
			ret = io_close_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_close(req, nxt, force_nonblock);
		break;
	case IORING_OP_FILES_UPDATE:
		if (sqe) {
			ret = io_files_update_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_files_update(req, force_nonblock);
		break;
	case IORING_OP_STATX:
		if (sqe) {
			ret = io_statx_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_statx(req, nxt, force_nonblock);
		break;
	case IORING_OP_FADVISE:
		if (sqe) {
			ret = io_fadvise_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_fadvise(req, nxt, force_nonblock);
		break;
	case IORING_OP_MADVISE:
		if (sqe) {
			ret = io_madvise_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_madvise(req, nxt, force_nonblock);
		break;
	case IORING_OP_OPENAT2:
		if (sqe) {
			ret = io_openat2_prep(req, sqe);
			if (ret)
				break;
		}
		ret = io_openat2(req, nxt, force_nonblock);
		break;
        default:
                ret = -EINVAL;
                break;
        }

        if (ret)
                return ret;

        if (ctx->flags & IORING_SETUP_IOPOLL) {
                const bool in_async = io_wq_current_is_worker();

                if (req->result == -EAGAIN)
                        return -EAGAIN;

                /* workqueue context doesn't hold uring_lock, grab it now */
                if (in_async)
                        mutex_lock(&ctx->uring_lock);

                io_iopoll_req_issued(req);

                if (in_async)
                        mutex_unlock(&ctx->uring_lock);
        }

        return 0;
}
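
/*
 * io-wq worker callback: run a request that was punted to async context.
 * Polled IO can still return -EAGAIN here because we cannot wait for
 * request slots, so submission is retried until it goes through. On
 * failure a CQE carrying the error is posted; if a dependent linked
 * request became runnable it is handed back to the worker.
 */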
static void io_wq_submit_work(struct io_wq_work **workptr)
{
        struct io_wq_work *work = *workptr;
        struct io_kiocb *req = container_of(work, struct io_kiocb, work);
        struct io_kiocb *nxt = NULL;
        int ret = 0;

        /* if NO_CANCEL is set, we must still run the work */
        if ((work->flags & (IO_WQ_WORK_CANCEL|IO_WQ_WORK_NO_CANCEL)) ==
                                IO_WQ_WORK_CANCEL) {
                ret = -ECANCELED;
        }

        if (!ret) {
                req->has_user = (work->flags & IO_WQ_WORK_HAS_MM) != 0;
                req->in_async = true;
                do {
                        ret = io_issue_sqe(req, NULL, &nxt, false);
                        /*
                         * We can get EAGAIN for polled IO even though we're
                         * forcing a sync submission from here, since we can't
                         * wait for request slots on the block side.
                         */
                        if (ret != -EAGAIN)
                                break;
                        cond_resched();
                } while (1);
        }

        /* drop submission reference */
        io_put_req(req);

        if (ret) {
                req_set_fail_links(req);
                io_cqring_add_event(req, ret);
                io_put_req(req);
        }

        /* if a dependent link is ready, pass it back */
        if (!ret && nxt)
                io_wq_assign_next(workptr, nxt);
}

static int io_req_needs_file(struct io_kiocb *req, int fd)
{
        if (!io_op_defs[req->opcode].needs_file)
                return 0;
        if (fd == -1 && io_op_defs[req->opcode].fd_non_neg)
                return 0;
        return 1;
}

static inline struct file *io_file_from_index(struct io_ring_ctx *ctx,
                                              int index)
{
        struct fixed_file_table *table;

        table = &ctx->file_data->table[index >> IORING_FILE_TABLE_SHIFT];
        return table->files[index & IORING_FILE_TABLE_MASK];
}
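
/*
 * Resolve the file this request will operate on. IOSQE_FIXED_FILE requests
 * are looked up in the registered file table; everything else goes through
 * a normal fd lookup, batched via the submission state.
 */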
static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
                           const struct io_uring_sqe *sqe)
{
        struct io_ring_ctx *ctx = req->ctx;
        unsigned flags;
        int fd;

        flags = READ_ONCE(sqe->flags);
        fd = READ_ONCE(sqe->fd);

        if (!io_req_needs_file(req, fd))
                return 0;

        if (flags & IOSQE_FIXED_FILE) {
                if (unlikely(!ctx->file_data ||
                    (unsigned) fd >= ctx->nr_user_files))
                        return -EBADF;
                fd = array_index_nospec(fd, ctx->nr_user_files);
                req->file = io_file_from_index(ctx, fd);
                if (!req->file)
                        return -EBADF;
                req->flags |= REQ_F_FIXED_FILE;
                percpu_ref_get(&ctx->file_data->refs);
        } else {
                if (req->needs_fixed_file)
                        return -EBADF;
                trace_io_uring_file_get(ctx, fd);
                req->file = io_file_get(state, fd);
                if (unlikely(!req->file))
                        return -EBADF;
        }

        return 0;
}
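
/*
 * Pin the submitter's files for a request that needs them, and track the
 * request on the ctx inflight list so ->flush() on the ring fd can cancel
 * this work if those files go away.
 */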
static int io_grab_files(struct io_kiocb *req)
{
        int ret = -EBADF;
        struct io_ring_ctx *ctx = req->ctx;

        if (!ctx->ring_file)
                return -EBADF;

        rcu_read_lock();
        spin_lock_irq(&ctx->inflight_lock);
        /*
         * We use the f_ops->flush() handler to ensure that we can flush
         * out work accessing these files if the fd is closed. Check if
         * the fd has changed since we started down this path, and disallow
         * this operation if it has.
         */
        if (fcheck(ctx->ring_fd) == ctx->ring_file) {
                list_add(&req->inflight_entry, &ctx->inflight_list);
                req->flags |= REQ_F_INFLIGHT;
                req->work.files = current->files;
                ret = 0;
        }
        spin_unlock_irq(&ctx->inflight_lock);
        rcu_read_unlock();

        return ret;
}
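
/*
 * hrtimer callback for a linked timeout: if the request it is linked to has
 * not completed yet, cancel that request with -ETIME; otherwise just post
 * -ETIME for the timeout request itself.
 */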
static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
{
        struct io_timeout_data *data = container_of(timer,
                                                struct io_timeout_data, timer);
        struct io_kiocb *req = data->req;
        struct io_ring_ctx *ctx = req->ctx;
        struct io_kiocb *prev = NULL;
        unsigned long flags;

        spin_lock_irqsave(&ctx->completion_lock, flags);

        /*
         * We don't expect the list to be empty, that will only happen if we
         * race with the completion of the linked work.
         */
        if (!list_empty(&req->link_list)) {
                prev = list_entry(req->link_list.prev, struct io_kiocb,
                                  link_list);
                if (refcount_inc_not_zero(&prev->refs)) {
                        list_del_init(&req->link_list);
                        prev->flags &= ~REQ_F_LINK_TIMEOUT;
                } else
                        prev = NULL;
        }

        spin_unlock_irqrestore(&ctx->completion_lock, flags);

        if (prev) {
                req_set_fail_links(prev);
                io_async_find_and_cancel(ctx, req, prev->user_data, NULL,
                                                -ETIME);
                io_put_req(prev);
        } else {
                io_cqring_add_event(req, -ETIME);
                io_put_req(req);
        }
        return HRTIMER_NORESTART;
}
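
/*
 * Arm the hrtimer for a linked timeout once the request it guards has been
 * submitted. If that request already finished, the link list is empty and
 * no timer is started.
 */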
static void io_queue_linked_timeout(struct io_kiocb *req)
{
        struct io_ring_ctx *ctx = req->ctx;

        /*
         * If the list is now empty, then our linked request finished before
         * we got a chance to setup the timer
         */
        spin_lock_irq(&ctx->completion_lock);
        if (!list_empty(&req->link_list)) {
                struct io_timeout_data *data = &req->io->timeout;

                data->timer.function = io_link_timeout_fn;
                hrtimer_start(&data->timer, timespec64_to_ktime(data->ts),
                                data->mode);
        }
        spin_unlock_irq(&ctx->completion_lock);

        /* drop submission reference */
        io_put_req(req);
}
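
/*
 * If the next request in this link chain is an IORING_OP_LINK_TIMEOUT, flag
 * the current request so the timeout gets armed right after it is issued.
 */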
static struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
{
        struct io_kiocb *nxt;

        if (!(req->flags & REQ_F_LINK))
                return NULL;

        nxt = list_first_entry_or_null(&req->link_list, struct io_kiocb,
                                        link_list);
        if (!nxt || nxt->opcode != IORING_OP_LINK_TIMEOUT)
                return NULL;

        req->flags |= REQ_F_LINK_TIMEOUT;
        return nxt;
}
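
/*
 * Try to issue a request inline. If it would block and the file does not
 * support non-blocking attempts, punt it to io-wq instead. A pending linked
 * timeout is armed once the submission reference is dropped, and any
 * dependent request that becomes ready is issued next.
 */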
static void __io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        struct io_kiocb *linked_timeout;
        struct io_kiocb *nxt = NULL;
        int ret;

again:
        linked_timeout = io_prep_linked_timeout(req);

        ret = io_issue_sqe(req, sqe, &nxt, true);

        /*
         * We async punt it if the file wasn't marked NOWAIT, or if the file
         * doesn't support non-blocking read/write attempts
         */
        if (ret == -EAGAIN && (!(req->flags & REQ_F_NOWAIT) ||
            (req->flags & REQ_F_MUST_PUNT))) {
punt:
                if (req->work.flags & IO_WQ_WORK_NEEDS_FILES) {
                        ret = io_grab_files(req);
                        if (ret)
                                goto err;
                }

                /*
                 * Queued up for async execution, worker will release
                 * submit reference when the iocb is actually submitted.
                 */
                io_queue_async_work(req);
                goto done_req;
        }

err:
        /* drop submission reference */
        io_put_req(req);

        if (linked_timeout) {
                if (!ret)
                        io_queue_linked_timeout(linked_timeout);
                else
                        io_put_req(linked_timeout);
        }

        /* and drop final reference, if we failed */
        if (ret) {
                io_cqring_add_event(req, ret);
                req_set_fail_links(req);
                io_put_req(req);
        }
done_req:
        if (nxt) {
                req = nxt;
                nxt = NULL;

                if (req->flags & REQ_F_FORCE_ASYNC)
                        goto punt;
                goto again;
        }
}
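
/*
 * Queue one SQE: requests that must be drained or deferred go through
 * io_req_defer(); IOSQE_ASYNC requests are prepped and sent straight to
 * io-wq; everything else is attempted inline via __io_queue_sqe().
 */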
static void io_queue_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
        int ret;

        ret = io_req_defer(req, sqe);
        if (ret) {
                if (ret != -EIOCBQUEUED) {
fail_req:
                        io_cqring_add_event(req, ret);
                        req_set_fail_links(req);
                        io_double_put_req(req);
                }
        } else if (req->flags & REQ_F_FORCE_ASYNC) {
                ret = io_req_defer_prep(req, sqe);
                if (unlikely(ret < 0))
                        goto fail_req;
                /*
                 * Never try inline submit if IOSQE_ASYNC is set, go straight
                 * to async execution.
                 */
                req->work.flags |= IO_WQ_WORK_CONCURRENT;
                io_queue_async_work(req);
        } else {
                __io_queue_sqe(req, sqe);
        }
}

static inline void io_queue_link_head(struct io_kiocb *req)
{
        if (unlikely(req->flags & REQ_F_FAIL_LINK)) {
                io_cqring_add_event(req, -ECANCELED);
                io_double_put_req(req);
        } else
                io_queue_sqe(req, NULL);
}

#define SQE_VALID_FLAGS (IOSQE_FIXED_FILE|IOSQE_IO_DRAIN|IOSQE_IO_LINK| \
                                IOSQE_IO_HARDLINK | IOSQE_ASYNC)
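
/*
 * Prepare and submit one request from an SQE: validate the flags, apply a
 * registered personality if one was requested, resolve the file, and then
 * either add the request to an existing link chain, start a new chain, or
 * queue it directly. Returns false if submission should stop.
 */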
static bool io_submit_sqe(struct io_kiocb *req, const struct io_uring_sqe *sqe,
                          struct io_submit_state *state, struct io_kiocb **link)
{
        const struct cred *old_creds = NULL;
        struct io_ring_ctx *ctx = req->ctx;
        unsigned int sqe_flags;
        int ret, id;

        sqe_flags = READ_ONCE(sqe->flags);

        /* enforce forwards compatibility on users */
        if (unlikely(sqe_flags & ~SQE_VALID_FLAGS)) {
                ret = -EINVAL;
                goto err_req;
        }

        id = READ_ONCE(sqe->personality);
        if (id) {
                const struct cred *personality_creds;

                personality_creds = idr_find(&ctx->personality_idr, id);
                if (unlikely(!personality_creds)) {
                        ret = -EINVAL;
                        goto err_req;
                }
                old_creds = override_creds(personality_creds);
        }

        /* same numerical values with corresponding REQ_F_*, safe to copy */
        req->flags |= sqe_flags & (IOSQE_IO_DRAIN|IOSQE_IO_HARDLINK|
                                        IOSQE_ASYNC);

        ret = io_req_set_file(state, req, sqe);
        if (unlikely(ret)) {
err_req:
                io_cqring_add_event(req, ret);
                io_double_put_req(req);
                if (old_creds)
                        revert_creds(old_creds);
                return false;
        }

        /*
         * If we already have a head request, queue this one for async
         * submittal once the head completes. If we don't have a head but
         * IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
         * submitted sync once the chain is complete. If none of those
         * conditions are true (normal request), then just queue it.
         */
        if (*link) {
                struct io_kiocb *head = *link;

                /*
                 * Taking sequential execution of a link, draining both sides
                 * of the link also fulfils IOSQE_IO_DRAIN semantics for all
                 * requests in the link. So, it drains the head and the
                 * next after the link request. The last one is done via
                 * drain_next flag to persist the effect across calls.
                 */
                if (sqe_flags & IOSQE_IO_DRAIN) {
                        head->flags |= REQ_F_IO_DRAIN;
                        ctx->drain_next = 1;
                }
                if (io_alloc_async_ctx(req)) {
                        ret = -EAGAIN;
                        goto err_req;
                }

                ret = io_req_defer_prep(req, sqe);
                if (ret) {
                        /* fail even hard links since we don't submit */
                        head->flags |= REQ_F_FAIL_LINK;
                        goto err_req;
                }
                trace_io_uring_link(ctx, req, head);
                list_add_tail(&req->link_list, &head->link_list);

                /* last request of a link, enqueue the link */
                if (!(sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK))) {
                        io_queue_link_head(head);
                        *link = NULL;
                }
        } else {
                if (unlikely(ctx->drain_next)) {
                        req->flags |= REQ_F_IO_DRAIN;
                        req->ctx->drain_next = 0;
                }
                if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {
                        req->flags |= REQ_F_LINK;
                        INIT_LIST_HEAD(&req->link_list);
                        ret = io_req_defer_prep(req, sqe);
                        if (ret)
                                req->flags |= REQ_F_FAIL_LINK;
                        *link = req;
                } else {
                        io_queue_sqe(req, sqe);
                }
        }

        if (old_creds)
                revert_creds(old_creds);
        return true;
}

/*
 * Batched submission is done, ensure local IO is flushed out.
 */
static void io_submit_state_end(struct io_submit_state *state)
{
        blk_finish_plug(&state->plug);
        io_file_put(state);
        if (state->free_reqs)
                kmem_cache_free_bulk(req_cachep, state->free_reqs,
                                        &state->reqs[state->cur_req]);
}

/*
 * Start submission side cache.
 */
static void io_submit_state_start(struct io_submit_state *state,
                                  unsigned int max_ios)
{
        blk_start_plug(&state->plug);
        state->free_reqs = 0;
        state->file = NULL;
        state->ios_left = max_ios;
}
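
/*
 * Commit consumed SQ entries back to the shared ring so the application
 * can see which submissions have been picked up and reuse those slots.
 */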
static void io_commit_sqring(struct io_ring_ctx *ctx)
{
        struct io_rings *rings = ctx->rings;

        /*
         * Ensure any loads from the SQEs are done at this point,
         * since once we write the new head, the application could
         * write new data to them.
         */
        smp_store_release(&rings->sq.head, ctx->cached_sq_head);
}
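
/*
 * Illustrative sketch of the matching application side (not part of this
 * file): before reusing SQE slots, userspace reads the head the kernel
 * just published with an acquire load, e.g.
 *
 *      head = atomic_load_explicit(sq_head_ptr, memory_order_acquire);
 *      free_slots = sq_entries - (local_sq_tail - head);
 *
 * so that the kernel's SQE loads above are ordered before the application
 * overwrites those slots with new submissions (sq_head_ptr, local_sq_tail
 * and sq_entries are assumed names for the mmap'ed ring fields).
 */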
/*
 * Fetch an sqe, if one is available. Note that sqe_ptr will point to memory
 * that is mapped by userspace. This means that care needs to be taken to
 * ensure that reads are stable, as we cannot rely on userspace always
 * being a good citizen. If members of the sqe are validated and then later
 * used, it's important that those reads are done through READ_ONCE() to
 * prevent a re-load down the line.
 */
static bool io_get_sqring(struct io_ring_ctx *ctx, struct io_kiocb *req,
                          const struct io_uring_sqe **sqe_ptr)
{
        u32 *sq_array = ctx->sq_array;
        unsigned head;

        /*
         * The cached sq head (or cq tail) serves two purposes:
         *
         * 1) allows us to batch the cost of updating the user visible
         *    head updates.
         * 2) allows the kernel side to track the head on its own, even
         *    though the application is the one updating it.
         */
        head = READ_ONCE(sq_array[ctx->cached_sq_head & ctx->sq_mask]);
        if (likely(head < ctx->sq_entries)) {
                /*
                 * All io needs to record the previous position, if LINK vs DRAIN,
                 * it can be used to mark the position of the first IO in the
                 * link list.
                 */
                req->sequence = ctx->cached_sq_head;
                *sqe_ptr = &ctx->sq_sqes[head];
                req->opcode = READ_ONCE((*sqe_ptr)->opcode);
                req->user_data = READ_ONCE((*sqe_ptr)->user_data);
                ctx->cached_sq_head++;
                return true;
        }

        /* drop invalid entries */
        ctx->cached_sq_head++;
        ctx->cached_sq_dropped++;
        WRITE_ONCE(ctx->rings->sq_dropped, ctx->cached_sq_dropped);
        return false;
}
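
/*
 * Illustrative note on the READ_ONCE() rule from the comment above: sqe
 * fields live in memory that userspace can rewrite at any time, so a
 * validate-then-use pattern must load each field exactly once, e.g.
 *
 *      len = READ_ONCE(sqe->len);      // single load into a local
 *      if (!len)
 *              return -EINVAL;
 *      req->work_len = len;            // uses the validated value
 *
 * Re-reading sqe->len after the check could observe a different value
 * than the one that was validated ("work_len" is just a stand-in field
 * for illustration).
 */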
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
        io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 21:22:30 +03:00

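For illustration, a slightly fleshed-out version of that guard is sketched
below. It is not part of this patch: it assumes the SQ ring has already been
mmap'ed and that sq_flags points at the ring's flags word, and it issues the
syscall raw since there is no libc wrapper (whether __NR_io_uring_enter is
defined depends on your headers).

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Kick the SQPOLL thread only when it has flagged itself as idle. */
static void enter_if_sq_thread_idle(int ring_fd, const unsigned *sq_flags)
{
        __atomic_thread_fence(__ATOMIC_SEQ_CST);        /* the read_barrier() above */
        if (*(volatile const unsigned *)sq_flags & IORING_SQ_NEED_WAKEUP)
                syscall(__NR_io_uring_enter, ring_fd, 0, 0,
                        IORING_ENTER_SQ_WAKEUP, NULL, 0);
}

The fence before the flags load mirrors the kernel's smp_mb() after it sets
IORING_SQ_NEED_WAKEUP, so either the SQPOLL thread sees the newly written SQ
tail or the application sees the wakeup flag.
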
static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
                          struct file *ring_file, int ring_fd,
                          struct mm_struct **mm, bool async)
{
        struct io_submit_state state, *statep = NULL;
        struct io_kiocb *link = NULL;
        int i, submitted = 0;
        bool mm_fault = false;

        /* if we have a backlog and couldn't flush it all, return BUSY */
        if (test_bit(0, &ctx->sq_check_overflow)) {
                if (!list_empty(&ctx->cq_overflow_list) &&
                    !io_cqring_overflow_flush(ctx, false))
                        return -EBUSY;
        }
        /* make sure SQ entry isn't read before tail */
        nr = min3(nr, ctx->sq_entries, io_sqring_entries(ctx));

        if (!percpu_ref_tryget_many(&ctx->refs, nr))
                return -EAGAIN;
        if (nr > IO_PLUG_THRESHOLD) {
                io_submit_state_start(&state, nr);
                statep = &state;
        }

        ctx->ring_fd = ring_fd;
        ctx->ring_file = ring_file;
        for (i = 0; i < nr; i++) {
                const struct io_uring_sqe *sqe;
                struct io_kiocb *req;

                req = io_get_req(ctx, statep);
                if (unlikely(!req)) {
                        if (!submitted)
                                submitted = -EAGAIN;
                        break;
                }
                if (!io_get_sqring(ctx, req, &sqe)) {
                        __io_req_do_free(req);
                        break;
                }

                /* will complete beyond this point, count as submitted */
                submitted++;

                if (unlikely(req->opcode >= IORING_OP_LAST)) {
                        io_cqring_add_event(req, -EINVAL);
                        io_double_put_req(req);
                        break;
                }

                if (io_op_defs[req->opcode].needs_mm && !*mm) {
                        mm_fault = mm_fault || !mmget_not_zero(ctx->sqo_mm);
                        if (!mm_fault) {
                                use_mm(ctx->sqo_mm);
                                *mm = ctx->sqo_mm;
                        }
                }

                req->has_user = *mm != NULL;
                req->in_async = async;
                req->needs_fixed_file = async;
                trace_io_uring_submit_sqe(ctx, req->opcode, req->user_data,
                                                true, async);
                if (!io_submit_sqe(req, sqe, statep, &link))
                        break;
        }

        if (unlikely(submitted != nr)) {
                int ref_used = (submitted == -EAGAIN) ? 0 : submitted;

                percpu_ref_put_many(&ctx->refs, nr - ref_used);
        }
        if (link)
                io_queue_link_head(link);
        if (statep)
                io_submit_state_end(&state);

        /* Commit SQ ring head once we've consumed and submitted all SQEs */
        io_commit_sqring(ctx);
        return submitted;
}

static int io_sq_thread(void *data)
{
        struct io_ring_ctx *ctx = data;
        struct mm_struct *cur_mm = NULL;
        const struct cred *old_cred;
        mm_segment_t old_fs;
        DEFINE_WAIT(wait);
        unsigned inflight;
        unsigned long timeout;
        int ret;

        complete(&ctx->completions[1]);

        old_fs = get_fs();
        set_fs(USER_DS);
        old_cred = override_creds(ctx->creds);

        ret = timeout = inflight = 0;

io_uring: fix infinite wait in kthread_park() on io_finish_async()
This fixes a couple of races which lead to an infinite wait for park completion
with the following backtraces:
[20801.303319] Call Trace:
[20801.303321] ? __schedule+0x284/0x650
[20801.303323] schedule+0x33/0xc0
[20801.303324] schedule_timeout+0x1bc/0x210
[20801.303326] ? schedule+0x3d/0xc0
[20801.303327] ? schedule_timeout+0x1bc/0x210
[20801.303329] ? preempt_count_add+0x79/0xb0
[20801.303330] wait_for_completion+0xa5/0x120
[20801.303331] ? wake_up_q+0x70/0x70
[20801.303333] kthread_park+0x48/0x80
[20801.303335] io_finish_async+0x2c/0x70
[20801.303336] io_ring_ctx_wait_and_kill+0x95/0x180
[20801.303338] io_uring_release+0x1c/0x20
[20801.303339] __fput+0xad/0x210
[20801.303341] task_work_run+0x8f/0xb0
[20801.303342] exit_to_usermode_loop+0xa0/0xb0
[20801.303343] do_syscall_64+0xe0/0x100
[20801.303349] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[20801.303380] Call Trace:
[20801.303383] ? __schedule+0x284/0x650
[20801.303384] schedule+0x33/0xc0
[20801.303386] io_sq_thread+0x38a/0x410
[20801.303388] ? __switch_to_asm+0x40/0x70
[20801.303390] ? wait_woken+0x80/0x80
[20801.303392] ? _raw_spin_lock_irqsave+0x17/0x40
[20801.303394] ? io_submit_sqes+0x120/0x120
[20801.303395] kthread+0x112/0x130
[20801.303396] ? kthread_create_on_node+0x60/0x60
[20801.303398] ret_from_fork+0x35/0x40
o kthread_park() waits for park completion, so the io_sq_thread() loop
should check kthread_should_park() along with kthread_should_stop(),
otherwise if kthread_park() is called before prepare_to_wait()
the following schedule() never returns:
CPU#0 CPU#1
io_sq_thread_stop(): io_sq_thread():
while(!kthread_should_stop() && !ctx->sqo_stop) {
ctx->sqo_stop = 1;
kthread_park()
prepare_to_wait();
if (kthread_should_stop() {
}
schedule(); <<< nobody checks park flag,
<<< so schedule and never return
o if the flag ctx->sqo_stop is observed by the io_sq_thread() loop,
it is quite possible that the kthread_should_park() check and the
following kthread_parkme() are never called, because kthread_park()
has not yet been called; a few moments later it is called and
waits there for park completion, which never happens, because
the kthread has already exited:
CPU#0 CPU#1
io_sq_thread_stop(): io_sq_thread():
ctx->sqo_stop = 1;
while(!kthread_should_stop() && !ctx->sqo_stop) {
<<< observe sqo_stop and exit the loop
}
if (kthread_should_park())
kthread_parkme(); <<< never called, since was
<<< never parked
kthread_park() <<< waits forever for park completion
In the current patch we quit the loop only on the kthread_should_park()
check (kthread_park() is synchronous, so kthread_should_stop() is
never observed), and we abandon the ->sqo_stop flag, since it is racy.
At the end of io_sq_thread() we unconditionally call kthread_parkme(),
since we've exited the loop via the park flag.
Signed-off-by: Roman Penyaev <rpenyaev@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-05-16 11:53:57 +03:00
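        /*
         * Condensed, illustrative sketch of the park-aware loop shape the
         * change above describes (the real loop below also carries the
         * submission and polling state; "no work pending" is a stand-in
         * for the idle checks):
         *
         *      while (!kthread_should_park()) {
         *              do submission / polling work;
         *              if (no work pending) {
         *                      prepare_to_wait(&ctx->sqo_wait, &wait, TASK_INTERRUPTIBLE);
         *                      if (kthread_should_park()) {
         *                              finish_wait(&ctx->sqo_wait, &wait);
         *                              break;
         *                      }
         *                      schedule();
         *                      finish_wait(&ctx->sqo_wait, &wait);
         *              }
         *      }
         *      kthread_parkme();
         */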
        while (!kthread_should_park()) {
                unsigned int to_submit;

                if (inflight) {
                        unsigned nr_events = 0;

                        if (ctx->flags & IORING_SETUP_IOPOLL) {
                                /*
                                 * inflight is the count of the maximum possible
                                 * entries we submitted, but it can be smaller
                                 * if we dropped some of them. If we don't have
                                 * poll entries available, then we know that we
                                 * have nothing left to poll for. Reset the
                                 * inflight count to zero in that case.
                                 */
                                mutex_lock(&ctx->uring_lock);
                                if (!list_empty(&ctx->poll_list))
                                        __io_iopoll_check(ctx, &nr_events, 0);
                                else
                                        inflight = 0;
                                mutex_unlock(&ctx->uring_lock);
                        } else {
                                /*
                                 * Normal IO, just pretend everything completed.
                                 * We don't have to poll completions for that.
                                 */
                                nr_events = inflight;
                        }

                        inflight -= nr_events;
                        if (!inflight)
                                timeout = jiffies + ctx->sq_thread_idle;
                }

                to_submit = io_sqring_entries(ctx);

                /*
                 * If submit got -EBUSY, flag us as needing the application
                 * to enter the kernel to reap and flush events.
                 */
                if (!to_submit || ret == -EBUSY) {
                        /*
                         * We're polling. If we're within the defined idle
                         * period, then let us spin without work before going
                         * to sleep. The exception is if we got EBUSY doing
                         * more IO, we should wait for the application to
                         * reap events and wake us up.
                         */
                        if (inflight ||
                            (!time_after(jiffies, timeout) && ret != -EBUSY)) {
                                cond_resched();
                                continue;
                        }

                        /*
                         * Drop cur_mm before scheduling, we can't hold it for
                         * long periods (or over schedule()). Do this before
                         * adding ourselves to the waitqueue, as the unuse/drop
                         * may sleep.
                         */
                        if (cur_mm) {
                                unuse_mm(cur_mm);
                                mmput(cur_mm);
                                cur_mm = NULL;
                        }

                        prepare_to_wait(&ctx->sqo_wait, &wait,
                                                TASK_INTERRUPTIBLE);

                        /* Tell userspace we may need a wakeup call */
                        ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
                        /* make sure to read SQ tail after writing flags */
                        smp_mb();
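                        /*
                         * Illustrative pairing (the application side is not in
                         * this file): before checking the flag, a submitter
                         * does the mirror sequence -
                         *
                         *      store SQ tail (release);
                         *      full memory barrier;
                         *      if (flags & IORING_SQ_NEED_WAKEUP)
                         *              io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
                         *
                         * With full barriers on both sides, either this thread
                         * sees the new tail on the re-check below, or the
                         * application sees IORING_SQ_NEED_WAKEUP (or both), so
                         * a submission cannot be stranded while the SQPOLL
                         * thread sleeps.
                         */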
                        to_submit = io_sqring_entries(ctx);
                        if (!to_submit || ret == -EBUSY) {
io_uring: fix infinite wait in khread_park() on io_finish_async()
This fixes couple of races which lead to infinite wait of park completion
with the following backtraces:
[20801.303319] Call Trace:
[20801.303321] ? __schedule+0x284/0x650
[20801.303323] schedule+0x33/0xc0
[20801.303324] schedule_timeout+0x1bc/0x210
[20801.303326] ? schedule+0x3d/0xc0
[20801.303327] ? schedule_timeout+0x1bc/0x210
[20801.303329] ? preempt_count_add+0x79/0xb0
[20801.303330] wait_for_completion+0xa5/0x120
[20801.303331] ? wake_up_q+0x70/0x70
[20801.303333] kthread_park+0x48/0x80
[20801.303335] io_finish_async+0x2c/0x70
[20801.303336] io_ring_ctx_wait_and_kill+0x95/0x180
[20801.303338] io_uring_release+0x1c/0x20
[20801.303339] __fput+0xad/0x210
[20801.303341] task_work_run+0x8f/0xb0
[20801.303342] exit_to_usermode_loop+0xa0/0xb0
[20801.303343] do_syscall_64+0xe0/0x100
[20801.303349] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[20801.303380] Call Trace:
[20801.303383] ? __schedule+0x284/0x650
[20801.303384] schedule+0x33/0xc0
[20801.303386] io_sq_thread+0x38a/0x410
[20801.303388] ? __switch_to_asm+0x40/0x70
[20801.303390] ? wait_woken+0x80/0x80
[20801.303392] ? _raw_spin_lock_irqsave+0x17/0x40
[20801.303394] ? io_submit_sqes+0x120/0x120
[20801.303395] kthread+0x112/0x130
[20801.303396] ? kthread_create_on_node+0x60/0x60
[20801.303398] ret_from_fork+0x35/0x40
o kthread_park() waits for park completion, so io_sq_thread() loop
should check kthread_should_park() along with khread_should_stop(),
otherwise if kthread_park() is called before prepare_to_wait()
the following schedule() never returns:
CPU#0 CPU#1
io_sq_thread_stop(): io_sq_thread():
while(!kthread_should_stop() && !ctx->sqo_stop) {
ctx->sqo_stop = 1;
kthread_park()
prepare_to_wait();
if (kthread_should_stop() {
}
schedule(); <<< nobody checks park flag,
<<< so schedule and never return
o if the flag ctx->sqo_stop is observed by the io_sq_thread() loop
it is quite possible, that kthread_should_park() check and the
following kthread_parkme() is never called, because kthread_park()
has not been yet called, but few moments later is is called and
waits there for park completion, which never happens, because
kthread has already exited:
CPU#0 CPU#1
io_sq_thread_stop(): io_sq_thread():
ctx->sqo_stop = 1;
while(!kthread_should_stop() && !ctx->sqo_stop) {
<<< observe sqo_stop and exit the loop
}
if (kthread_should_park())
kthread_parkme(); <<< never called, since was
<<< never parked
kthread_park() <<< waits forever for park completion
In the current patch we quit the loop by only kthread_should_park()
check (kthread_park() is synchronous, so kthread_should_stop() is
never observed), and we abandon ->sqo_stop flag, since it is racy.
At the end of the io_sq_thread() we unconditionally call parmke(),
since we've exited the loop by the park flag.
Signed-off-by: Roman Penyaev <rpenyaev@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-05-16 11:53:57 +03:00
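For orientation, a condensed sketch of the loop shape this change leads to; it is illustrative only, with the submission and idle handling elided, and is not the exact kernel code:
#include <linux/kthread.h>

static int io_sq_thread_sketch(void *data)
{
	while (!kthread_should_park()) {
		/* ... submit pending sqes, or sleep waiting for work ... */
	}
	kthread_parkme();	/* synchronize with the kthread_park() caller */
	return 0;
}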
if (kthread_should_park()) {
2019-01-10 21:22:30 +03:00
finish_wait(&ctx->sqo_wait, &wait);
break;
}
if (signal_pending(current))
flush_signals(current);
schedule();
finish_wait(&ctx->sqo_wait, &wait);
2019-08-26 20:23:46 +03:00
ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
2019-01-10 21:22:30 +03:00
continue;
}
finish_wait(&ctx->sqo_wait, &wait);
2019-08-26 20:23:46 +03:00
ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
2019-01-10 21:22:30 +03:00
}
2019-12-10 00:52:35 +03:00
mutex_lock(&ctx->uring_lock);
io_uring: add support for backlogged CQ ring
Currently we drop completion events if the CQ ring is full. That's fine
for requests with bounded completion times, but it may make it harder or
impossible to use io_uring with networked IO, where request completion
times are generally unbounded. Or with POLL, for example, which is also
unbounded.
After this patch, we never overflow the ring; we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, if the
backlog is non-empty, we apply back pressure on IO submissions. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will have flushed whatever
backlogged events we could into the CQ ring first, if there's room. This
means the application can safely reap events WITHOUT entering the kernel
and waiting for them; they are already available in the CQ ring.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 21:31:17 +03:00
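A hedged userspace sketch of how an application might react to this back pressure, assuming a raw-syscall setup in which reap_cq_ring() is an application-provided helper (not part of any kernel or liburing API) that drains the mmap()ed CQ ring:
#include <errno.h>
#include <sys/syscall.h>
#include <unistd.h>

extern void reap_cq_ring(void);	/* hypothetical: drains the mapped CQ ring */

static long submit_with_backpressure(int ring_fd, unsigned to_submit)
{
	for (;;) {
		long ret = syscall(__NR_io_uring_enter, ring_fd, to_submit,
				   0, 0, NULL, 0);
		if (ret != -1 || errno != EBUSY)
			return ret;
		/*
		 * -EBUSY: the kernel has a CQ backlog. Whatever could be
		 * flushed is already in the CQ ring, so reap it without
		 * entering the kernel, then retry the submission.
		 */
		reap_cq_ring();
	}
}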
ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);
2019-12-10 00:52:35 +03:00
mutex_unlock(&ctx->uring_lock);
2019-11-06 21:31:17 +03:00
if (ret > 0)
inflight += ret;
2019-01-10 21:22:30 +03:00
}
set_fs(old_fs);
if (cur_mm) {
unuse_mm(cur_mm);
mmput(cur_mm);
}
2019-11-25 18:52:30 +03:00
revert_creds(old_cred);
2019-04-13 18:26:03 +03:00
2019-05-16 11:53:57 +03:00
kthread_parkme();
2019-04-13 18:26:03 +03:00
2019-01-10 21:22:30 +03:00
return 0;
}
2019-09-24 22:47:15 +03:00
struct io_wait_queue {
struct wait_queue_entry wq;
struct io_ring_ctx *ctx;
unsigned to_wait;
unsigned nr_timeouts;
};
2019-11-06 21:31:17 +03:00
static inline bool io_should_wake(struct io_wait_queue *iowq, bool noflush)
2019-09-24 22:47:15 +03:00
{
struct io_ring_ctx *ctx = iowq->ctx;
/*
2019-12-13 14:09:50 +03:00
* Wake up if we have enough events, or if a timeout occurred since we
2019-09-24 22:47:15 +03:00
* started waiting. For timeouts, we always want to return to userspace,
* regardless of event count.
*/
2019-11-06 21:31:17 +03:00
return io_cqring_events(ctx, noflush) >= iowq->to_wait ||
2019-09-24 22:47:15 +03:00
atomic_read(&ctx->cq_timeouts) != iowq->nr_timeouts;
}
static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
int wake_flags, void *key)
{
struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue,
wq);
2019-11-06 21:31:17 +03:00
/* use noflush == true, as we can't safely rely on locking context */
if (!io_should_wake(iowq, true))
2019-09-24 22:47:15 +03:00
return -1;
return autoremove_wake_function(curr, mode, wake_flags, key);
}
2019-01-07 20:46:33 +03:00
/*
* Wait until events become available, if we don't already have some. The
* application must reap them itself, as they reside on the shared cq ring.
*/
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
const sigset_t __user *sig, size_t sigsz)
{
2019-09-24 22:47:15 +03:00
struct io_wait_queue iowq = {
.wq = {
.private = current,
.func = io_wake_function,
.entry = LIST_HEAD_INIT(iowq.wq.entry),
},
.ctx = ctx,
.to_wait = min_events,
};
2019-08-26 20:23:46 +03:00
struct io_rings *rings = ctx->rings;
2019-10-29 06:16:42 +03:00
int ret = 0;
2019-01-07 20:46:33 +03:00
2019-11-06 21:31:17 +03:00
if (io_cqring_events(ctx, false) >= min_events)
2019-01-07 20:46:33 +03:00
return 0;
if (sig) {
2019-03-25 17:34:53 +03:00
#ifdef CONFIG_COMPAT
if (in_compat_syscall())
ret = set_compat_user_sigmask((const compat_sigset_t __user *)sig,
2019-07-17 02:29:53 +03:00
sigsz);
2019-03-25 17:34:53 +03:00
else
#endif
2019-07-17 02:29:53 +03:00
ret = set_user_sigmask(sig, sigsz);
2019-03-25 17:34:53 +03:00
2019-01-07 20:46:33 +03:00
if (ret)
return ret;
}
2019-09-24 22:47:15 +03:00
iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but some parts can be hard to identify via that
approach alone. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
a set of tracing events.
All such events can be roughly divided into two categories:
* those that help to understand correctness (from both the kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQEs. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance-related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
trace_io_uring_cqring_wait(ctx, min_events);
2019-09-24 22:47:15 +03:00
do {
prepare_to_wait_exclusive(&ctx->wait, &iowq.wq,
TASK_INTERRUPTIBLE);
2019-11-06 21:31:17 +03:00
if (io_should_wake(&iowq, false))
2019-09-24 22:47:15 +03:00
break;
schedule();
if (signal_pending(current)) {
2019-10-29 06:16:42 +03:00
ret = -EINTR;
2019-09-24 22:47:15 +03:00
break;
}
} while (1);
finish_wait(&ctx->wait, &iowq.wq);
2019-10-29 06:16:42 +03:00
restore_saved_sigmask_unless(ret == -EINTR);
2019-01-07 20:46:33 +03:00
2019-08-26 20:23:46 +03:00
return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
2019-01-07 20:46:33 +03:00
}
2019-01-11 08:13:58 +03:00
static void __io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
#if defined(CONFIG_UNIX)
if (ctx->ring_sock) {
struct sock *sock = ctx->ring_sock->sk;
struct sk_buff *skb;
while ((skb = skb_dequeue(&sock->sk_receive_queue)) != NULL)
kfree_skb(skb);
}
#else
int i;
2019-10-26 16:20:21 +03:00
for (i = 0; i < ctx->nr_user_files; i++) {
struct file *file;
file = io_file_from_index(ctx, i);
if (file)
fput(file);
}
2019-01-11 08:13:58 +03:00
#endif
}
2019-12-09 21:22:50 +03:00
static void io_file_ref_kill(struct percpu_ref *ref)
{
struct fixed_file_data *data;
data = container_of(ref, struct fixed_file_data, refs);
complete(&data->done);
}
2019-01-11 08:13:58 +03:00
static int io_sqe_files_unregister(struct io_ring_ctx *ctx)
{
2019-12-09 21:22:50 +03:00
struct fixed_file_data *data = ctx->file_data;
2019-10-26 16:20:21 +03:00
unsigned nr_tables, i;
2019-12-09 21:22:50 +03:00
if (!data)
2019-01-11 08:13:58 +03:00
return -ENXIO;
2019-12-09 21:22:50 +03:00
/* protect against inflight atomic switch, which drops the ref */
percpu_ref_get(&data->refs);
2020-01-17 21:15:34 +03:00
/* wait for existing switches */
flush_work(&data->ref_work);
2019-12-09 21:22:50 +03:00
percpu_ref_kill_and_confirm(&data->refs, io_file_ref_kill);
wait_for_completion(&data->done);
percpu_ref_put(&data->refs);
2020-01-17 21:15:34 +03:00
/* flush potential new switch */
flush_work(&data->ref_work);
2019-12-09 21:22:50 +03:00
percpu_ref_exit(&data->refs);
2019-01-11 08:13:58 +03:00
__io_sqe_files_unregister(ctx);
2019-10-26 16:20:21 +03:00
nr_tables = DIV_ROUND_UP(ctx->nr_user_files, IORING_MAX_FILES_TABLE);
for (i = 0; i < nr_tables; i++)
2019-12-09 21:22:50 +03:00
kfree(data->table[i].files);
kfree(data->table);
kfree(data);
ctx->file_data = NULL;
2019-01-11 08:13:58 +03:00
ctx->nr_user_files = 0;
return 0;
}
2019-01-10 21:22:30 +03:00
static void io_sq_thread_stop(struct io_ring_ctx *ctx)
{
if (ctx->sqo_thread) {
2019-11-08 04:27:42 +03:00
wait_for_completion(&ctx->completions[1]);
2019-05-16 11:53:57 +03:00
/*
* The park is a bit of a work-around, without it we get
* warning spews on shutdown with SQPOLL set and affinity
* set to a single CPU.
*/
2019-04-13 18:26:03 +03:00
kthread_park(ctx->sqo_thread);
2019-01-10 21:22:30 +03:00
kthread_stop(ctx->sqo_thread);
ctx->sqo_thread = NULL;
}
}
2019-01-11 08:13:58 +03:00
static void io_finish_async(struct io_ring_ctx *ctx)
{
2019-01-10 21:22:30 +03:00
io_sq_thread_stop(ctx);
2019-10-24 16:25:42 +03:00
if (ctx->io_wq) {
io_wq_destroy(ctx->io_wq);
ctx->io_wq = NULL;
2019-01-11 08:13:58 +03:00
}
}
#if defined(CONFIG_UNIX)
/*
* Ensure the UNIX gc is aware of our file set, so we are certain that
* the io_uring can be safely unregistered on process exit, even if we have
* loops in the file referencing.
*/
static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
{
struct sock *sk = ctx->ring_sock->sk;
struct scm_fp_list *fpl;
struct sk_buff *skb;
2019-10-03 17:11:03 +03:00
int i, nr_files;
2019-01-11 08:13:58 +03:00
if (!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
unsigned long inflight = ctx->user->unix_inflight + nr;
if (inflight > task_rlimit(current, RLIMIT_NOFILE))
return -EMFILE;
}
fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
if (!fpl)
return -ENOMEM;
skb = alloc_skb(0, GFP_KERNEL);
if (!skb) {
kfree(fpl);
return -ENOMEM;
}
skb->sk = sk;
2019-10-03 17:11:03 +03:00
nr_files = 0;
2019-01-11 08:13:58 +03:00
fpl->user = get_uid(ctx->user);
for (i = 0; i < nr; i++) {
2019-10-26 16:20:21 +03:00
struct file *file = io_file_from_index(ctx, i + offset);
if (!file)
2019-10-03 17:11:03 +03:00
continue;
2019-10-26 16:20:21 +03:00
fpl->fp[nr_files] = get_file(file);
2019-10-03 17:11:03 +03:00
unix_inflight(fpl->user, fpl->fp[nr_files]);
nr_files++;
2019-01-11 08:13:58 +03:00
}
2019-10-03 17:11:03 +03:00
if (nr_files) {
fpl->max = SCM_MAX_FD;
fpl->count = nr_files;
UNIXCB(skb).fp = fpl;
2019-12-09 21:22:50 +03:00
skb->destructor = unix_destruct_scm;
2019-10-03 17:11:03 +03:00
refcount_add(skb->truesize, &sk->sk_wmem_alloc);
skb_queue_head(&sk->sk_receive_queue, skb);
2019-01-11 08:13:58 +03:00
2019-10-03 17:11:03 +03:00
for (i = 0; i < nr_files; i++)
fput(fpl->fp[i]);
} else {
kfree_skb(skb);
kfree(fpl);
}
2019-01-11 08:13:58 +03:00
return 0;
}
/*
* If UNIX sockets are enabled, fd passing can cause a reference cycle which
* causes regular reference counting to break down. We rely on the UNIX
* garbage collection to take care of this problem for us.
*/
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
{
unsigned left, total;
int ret = 0;
total = 0;
left = ctx->nr_user_files;
while (left) {
unsigned this_files = min_t(unsigned, left, SCM_MAX_FD);
ret = __io_sqe_files_scm(ctx, this_files, total);
if (ret)
break;
left -= this_files;
total += this_files;
}
if (!ret)
return 0;
while (total < ctx->nr_user_files) {
2019-10-26 16:20:21 +03:00
struct file *file = io_file_from_index(ctx, total);
if (file)
fput(file);
2019-01-11 08:13:58 +03:00
total++;
}
return ret;
}
#else
static int io_sqe_files_scm(struct io_ring_ctx *ctx)
{
return 0;
}
#endif
2019-10-26 16:20:21 +03:00
static int io_sqe_alloc_file_tables(struct io_ring_ctx *ctx, unsigned nr_tables,
unsigned nr_files)
{
int i;
for (i = 0; i < nr_tables; i++) {
2019-12-09 21:22:50 +03:00
struct fixed_file_table *table = &ctx->file_data->table[i];
2019-10-26 16:20:21 +03:00
unsigned this_files;
this_files = min(nr_files, IORING_MAX_FILES_TABLE);
table->files = kcalloc(this_files, sizeof(struct file *),
GFP_KERNEL);
if (!table->files)
break;
nr_files -= this_files;
}
if (i == nr_tables)
return 0;
for (i = 0; i < nr_tables; i++) {
2019-12-09 21:22:50 +03:00
struct fixed_file_table *table = &ctx->file_data->table[i];
2019-10-26 16:20:21 +03:00
kfree(table->files);
}
return 1;
}
2019-12-09 21:22:50 +03:00
static void io_ring_file_put(struct io_ring_ctx *ctx, struct file *file)
{
#if defined(CONFIG_UNIX)
struct sock *sock = ctx->ring_sock->sk;
struct sk_buff_head list, *head = &sock->sk_receive_queue;
struct sk_buff *skb;
int i;
__skb_queue_head_init(&list);
/*
* Find the skb that holds this file in its SCM_RIGHTS. When found,
* remove this entry and rearrange the file array.
*/
skb = skb_dequeue(head);
while (skb) {
struct scm_fp_list *fp;
fp = UNIXCB(skb).fp;
for (i = 0; i < fp->count; i++) {
int left;
if (fp->fp[i] != file)
continue;
unix_notinflight(fp->user, fp->fp[i]);
left = fp->count - 1 - i;
if (left) {
memmove(&fp->fp[i], &fp->fp[i + 1],
left * sizeof(struct file *));
}
fp->count--;
if (!fp->count) {
kfree_skb(skb);
skb = NULL;
} else {
__skb_queue_tail(&list, skb);
}
fput(file);
file = NULL;
break;
}
if (!file)
break;
__skb_queue_tail(&list, skb);
skb = skb_dequeue(head);
}
if (skb_peek(&list)) {
spin_lock_irq(&head->lock);
while ((skb = __skb_dequeue(&list)) != NULL)
__skb_queue_tail(head, skb);
spin_unlock_irq(&head->lock);
}
#else
fput(file);
#endif
}
struct io_file_put {
struct llist_node llist;
struct file *file;
struct completion *done;
};
static void io_ring_file_ref_switch(struct work_struct *work)
{
struct io_file_put *pfile, *tmp;
struct fixed_file_data *data;
struct llist_node *node;
data = container_of(work, struct fixed_file_data, ref_work);
while ((node = llist_del_all(&data->put_llist)) != NULL) {
llist_for_each_entry_safe(pfile, tmp, node, llist) {
io_ring_file_put(data->ctx, pfile->file);
if (pfile->done)
complete(pfile->done);
else
kfree(pfile);
}
}
percpu_ref_get(&data->refs);
percpu_ref_switch_to_percpu(&data->refs);
}
static void io_file_data_ref_zero(struct percpu_ref *ref)
{
struct fixed_file_data *data;
data = container_of(ref, struct fixed_file_data, refs);
/* we can't safely switch from inside this context, punt to wq */
queue_work(system_wq, &data->ref_work);
}
2019-01-11 08:13:58 +03:00
static int io_sqe_files_register(struct io_ring_ctx *ctx, void __user *arg,
unsigned nr_args)
{
__s32 __user *fds = (__s32 __user *) arg;
2019-10-26 16:20:21 +03:00
unsigned nr_tables;
2019-12-09 21:22:50 +03:00
struct file *file;
2019-01-11 08:13:58 +03:00
int fd, ret = 0;
unsigned i;
2019-12-09 21:22:50 +03:00
if (ctx->file_data)
2019-01-11 08:13:58 +03:00
return -EBUSY;
if (!nr_args)
return -EINVAL;
if (nr_args > IORING_MAX_FIXED_FILES)
return -EMFILE;
2019-12-09 21:22:50 +03:00
ctx->file_data = kzalloc(sizeof(*ctx->file_data), GFP_KERNEL);
if (!ctx->file_data)
return -ENOMEM;
ctx->file_data->ctx = ctx;
init_completion(&ctx->file_data->done);
2019-10-26 16:20:21 +03:00
nr_tables = DIV_ROUND_UP(nr_args, IORING_MAX_FILES_TABLE);
2019-12-09 21:22:50 +03:00
ctx->file_data->table = kcalloc(nr_tables,
sizeof(struct fixed_file_table),
2019-10-26 16:20:21 +03:00
GFP_KERNEL);
2019-12-09 21:22:50 +03:00
if (!ctx->file_data->table) {
kfree(ctx->file_data);
ctx->file_data = NULL;
2019-01-11 08:13:58 +03:00
return -ENOMEM;
2019-12-09 21:22:50 +03:00
}
if (percpu_ref_init(&ctx->file_data->refs, io_file_data_ref_zero,
PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) {
kfree(ctx->file_data->table);
kfree(ctx->file_data);
ctx->file_data = NULL;
return -ENOMEM;
}
ctx->file_data->put_llist.first = NULL;
INIT_WORK(&ctx->file_data->ref_work, io_ring_file_ref_switch);
2019-01-11 08:13:58 +03:00
2019-10-26 16:20:21 +03:00
if (io_sqe_alloc_file_tables(ctx, nr_tables, nr_args)) {
2019-12-09 21:22:50 +03:00
percpu_ref_exit(&ctx->file_data->refs);
kfree(ctx->file_data->table);
kfree(ctx->file_data);
ctx->file_data = NULL;
2019-10-26 16:20:21 +03:00
return -ENOMEM;
}
2019-10-03 17:11:03 +03:00
for (i = 0; i < nr_args; i++, ctx->nr_user_files++) {
2019-10-26 16:20:21 +03:00
struct fixed_file_table *table;
unsigned index;
2019-01-11 08:13:58 +03:00
ret = -EFAULT;
if (copy_from_user(&fd, &fds[i], sizeof(fd)))
break;
2019-10-03 17:11:03 +03:00
/* allow sparse sets */
if (fd == -1) {
ret = 0;
continue;
}
2019-01-11 08:13:58 +03:00
2019-12-09 21:22:50 +03:00
table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
2019-10-26 16:20:21 +03:00
index = i & IORING_FILE_TABLE_MASK;
2019-12-09 21:22:50 +03:00
file = fget(fd);
2019-01-11 08:13:58 +03:00
ret = -EBADF;
2019-12-09 21:22:50 +03:00
if (!file)
2019-01-11 08:13:58 +03:00
break;
2019-12-09 21:22:50 +03:00
2019-01-11 08:13:58 +03:00
/*
* Don't allow io_uring instances to be registered. If UNIX
* isn't enabled, then this causes a reference cycle and this
* instance can never get freed. If UNIX is enabled we'll
* handle it just fine, but there's still no point in allowing
* a ring fd as it doesn't support regular read/write anyway.
*/
2019-12-09 21:22:50 +03:00
if (file->f_op == &io_uring_fops) {
fput(file);
2019-01-11 08:13:58 +03:00
break;
}
ret = 0;
2019-12-09 21:22:50 +03:00
table->files[index] = file;
2019-01-11 08:13:58 +03:00
}
if (ret) {
2019-10-26 16:20:21 +03:00
for (i = 0; i < ctx->nr_user_files; i++) {
file = io_file_from_index(ctx, i);
if (file)
fput(file);
}
for (i = 0; i < nr_tables; i++)
2019-12-09 21:22:50 +03:00
kfree(ctx->file_data->table[i].files);
2019-01-11 08:13:58 +03:00
2019-12-09 21:22:50 +03:00
kfree(ctx->file_data->table);
kfree(ctx->file_data);
ctx->file_data = NULL;
2019-01-11 08:13:58 +03:00
ctx->nr_user_files = 0;
return ret;
}
ret = io_sqe_files_scm(ctx);
if (ret)
io_sqe_files_unregister(ctx);
return ret;
}
2019-10-03 22:59:56 +03:00
static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
				int index)
{
#if defined(CONFIG_UNIX)
	struct sock *sock = ctx->ring_sock->sk;
	struct sk_buff_head *head = &sock->sk_receive_queue;
	struct sk_buff *skb;

	/*
	 * See if we can merge this file into an existing skb SCM_RIGHTS
	 * file set. If there's no room, fall back to allocating a new skb
	 * and filling it in.
	 */
	spin_lock_irq(&head->lock);
	skb = skb_peek(head);
	if (skb) {
		struct scm_fp_list *fpl = UNIXCB(skb).fp;

		if (fpl->count < SCM_MAX_FD) {
			__skb_unlink(skb, head);
			spin_unlock_irq(&head->lock);
			fpl->fp[fpl->count] = get_file(file);
			unix_inflight(fpl->user, fpl->fp[fpl->count]);
			fpl->count++;
			spin_lock_irq(&head->lock);
			__skb_queue_head(head, skb);
		} else {
			skb = NULL;
		}
	}
	spin_unlock_irq(&head->lock);

	if (skb) {
		fput(file);
		return 0;
	}

	return __io_sqe_files_scm(ctx, 1, index);
#else
	return 0;
#endif
}
2019-12-09 21:22:50 +03:00
static void io_atomic_switch(struct percpu_ref *ref)
{
	struct fixed_file_data *data;

	data = container_of(ref, struct fixed_file_data, refs);
	/* the ref is now in atomic mode; allow the next switch to be queued */
	clear_bit(FFD_F_ATOMIC, &data->state);
}

static bool io_queue_file_removal(struct fixed_file_data *data,
				  struct file *file)
{
	struct io_file_put *pfile, pfile_stack;
	DECLARE_COMPLETION_ONSTACK(done);

	/*
	 * If we fail allocating the struct we need for doing async removal
	 * of this file, just punt to sync and wait for it.
	 */
	pfile = kzalloc(sizeof(*pfile), GFP_KERNEL);
	if (!pfile) {
		pfile = &pfile_stack;
		pfile->done = &done;
	}

	pfile->file = file;
	llist_add(&pfile->llist, &data->put_llist);

	if (pfile == &pfile_stack) {
		if (!test_and_set_bit(FFD_F_ATOMIC, &data->state)) {
			percpu_ref_put(&data->refs);
			percpu_ref_switch_to_atomic(&data->refs,
							io_atomic_switch);
		}
		wait_for_completion(&done);
		flush_work(&data->ref_work);
		return false;
	}

	return true;
}
static int __io_sqe_files_update(struct io_ring_ctx *ctx,
				 struct io_uring_files_update *up,
				 unsigned nr_args)
{
	struct fixed_file_data *data = ctx->file_data;
	bool ref_switch = false;
	struct file *file;
	__s32 __user *fds;
	int fd, i, err;
	__u32 done;

	if (check_add_overflow(up->offset, nr_args, &done))
		return -EOVERFLOW;
	if (done > ctx->nr_user_files)
		return -EINVAL;

	done = 0;
	fds = u64_to_user_ptr(up->fds);
	while (nr_args) {
		struct fixed_file_table *table;
		unsigned index;

		err = 0;
		if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
			err = -EFAULT;
			break;
		}
		i = array_index_nospec(up->offset, ctx->nr_user_files);
		table = &ctx->file_data->table[i >> IORING_FILE_TABLE_SHIFT];
		index = i & IORING_FILE_TABLE_MASK;
		if (table->files[index]) {
			file = io_file_from_index(ctx, index);
			table->files[index] = NULL;
			if (io_queue_file_removal(data, file))
				ref_switch = true;
		}
		if (fd != -1) {
			file = fget(fd);
			if (!file) {
				err = -EBADF;
				break;
			}
			/*
			 * Don't allow io_uring instances to be registered. If
			 * UNIX isn't enabled, then this causes a reference
			 * cycle and this instance can never get freed. If UNIX
			 * is enabled we'll handle it just fine, but there's
			 * still no point in allowing a ring fd as it doesn't
			 * support regular read/write anyway.
			 */
			if (file->f_op == &io_uring_fops) {
				fput(file);
				err = -EBADF;
				break;
			}
			table->files[index] = file;
			err = io_sqe_file_register(ctx, file, i);
			if (err)
				break;
		}
		nr_args--;
		done++;
		up->offset++;
	}

	if (ref_switch && !test_and_set_bit(FFD_F_ATOMIC, &data->state)) {
		percpu_ref_put(&data->refs);
		percpu_ref_switch_to_atomic(&data->refs, io_atomic_switch);
	}

	return done ? done : err;
}
2019-12-09 21:22:50 +03:00
static int io_sqe_files_update(struct io_ring_ctx *ctx, void __user *arg,
			       unsigned nr_args)
{
	struct io_uring_files_update up;

	if (!ctx->file_data)
		return -ENXIO;
	if (!nr_args)
		return -EINVAL;
	if (copy_from_user(&up, arg, sizeof(up)))
		return -EFAULT;
	if (up.resv)
		return -EINVAL;

	return __io_sqe_files_update(ctx, &up, nr_args);
}
2019-10-03 22:59:56 +03:00
2019-11-13 08:31:31 +03:00
static void io_put_work(struct io_wq_work *work)
{
	struct io_kiocb *req = container_of(work, struct io_kiocb, work);

	io_put_req(req);
}

static void io_get_work(struct io_wq_work *work)
{
	struct io_kiocb *req = container_of(work, struct io_kiocb, work);

	refcount_inc(&req->refs);
}
2020-01-28 03:15:48 +03:00
static int io_init_wq_offload(struct io_ring_ctx *ctx,
			      struct io_uring_params *p)
{
	struct io_wq_data data;
	struct fd f;
	struct io_ring_ctx *ctx_attach;
	unsigned int concurrency;
	int ret = 0;

	data.user = ctx->user;
	data.get_work = io_get_work;
	data.put_work = io_put_work;

	if (!(p->flags & IORING_SETUP_ATTACH_WQ)) {
		/* Do QD, or 4 * CPUS, whatever is smallest */
		concurrency = min(ctx->sq_entries, 4 * num_online_cpus());
		ctx->io_wq = io_wq_create(concurrency, &data);
		if (IS_ERR(ctx->io_wq)) {
			ret = PTR_ERR(ctx->io_wq);
			ctx->io_wq = NULL;
		}
		return ret;
	}

	f = fdget(p->wq_fd);
	if (!f.file)
		return -EBADF;

	if (f.file->f_op != &io_uring_fops) {
		ret = -EINVAL;
		goto out_fput;
	}

	ctx_attach = f.file->private_data;
	/* @io_wq is protected by holding the fd */
	if (!io_wq_get(ctx_attach->io_wq, &data)) {
		ret = -EINVAL;
		goto out_fput;
	}

	ctx->io_wq = ctx_attach->io_wq;
out_fput:
	fdput(f);
	return ret;
}
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
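As a hedged illustration of that guard, the sketch below shows how a userspace submitter might wrap io_uring_enter(2) when SQPOLL is enabled. The sq_flags pointer, the raw syscall wrapper, and the use of an acquire load in place of read_barrier() are assumptions about the application's own ring setup, not part of this patch.
/*
 * Userspace sketch (not part of this patch): only enter the kernel when the
 * SQPOLL thread has gone idle and set IORING_SQ_NEED_WAKEUP. Assumes the
 * application has mmap'ed the SQ ring and saved a pointer to the ring flags
 * word via io_sqring_offsets; requires kernel headers that define
 * __NR_io_uring_enter.
 */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static int sys_io_uring_enter(int fd, unsigned to_submit,
			      unsigned min_complete, unsigned flags)
{
	return syscall(__NR_io_uring_enter, fd, to_submit, min_complete,
		       flags, NULL, 0);
}

/* Call after new sqes have been filled in and the SQ tail has been bumped. */
static void sq_ring_notify(int ring_fd, const unsigned *sq_flags)
{
	/* acquire load stands in for the read_barrier() described above */
	if (__atomic_load_n(sq_flags, __ATOMIC_ACQUIRE) & IORING_SQ_NEED_WAKEUP)
		sys_io_uring_enter(ring_fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
}
With fixed files registered, as required above, a busy workload never takes this branch and submission stays syscall-free.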
2019-01-10 21:22:30 +03:00
static int io_sq_offload_start(struct io_ring_ctx *ctx,
			       struct io_uring_params *p)
2019-01-07 20:46:33 +03:00
{
	int ret;
2019-01-10 21:22:30 +03:00
	init_waitqueue_head(&ctx->sqo_wait);
2019-01-07 20:46:33 +03:00
	mmgrab(current->mm);
	ctx->sqo_mm = current->mm;
2019-01-10 21:22:30 +03:00
	if (ctx->flags & IORING_SETUP_SQPOLL) {
		ret = -EPERM;
		if (!capable(CAP_SYS_ADMIN))
			goto err;

		ctx->sq_thread_idle = msecs_to_jiffies(p->sq_thread_idle);
		if (!ctx->sq_thread_idle)
			ctx->sq_thread_idle = HZ;
2019-01-10 21:22:30 +03:00
		if (p->flags & IORING_SETUP_SQ_AFF) {
			int cpu = p->sq_thread_cpu;
2019-01-10 21:22:30 +03:00
2019-04-13 18:28:55 +03:00
			ret = -EINVAL;
			if (cpu >= nr_cpu_ids)
				goto err;
			if (!cpu_online(cpu))
				goto err;
2019-01-10 21:22:30 +03:00
			ctx->sqo_thread = kthread_create_on_cpu(io_sq_thread,
							ctx, cpu,
							"io_uring-sq");
		} else {
			ctx->sqo_thread = kthread_create(io_sq_thread, ctx,
							"io_uring-sq");
		}
		if (IS_ERR(ctx->sqo_thread)) {
			ret = PTR_ERR(ctx->sqo_thread);
			ctx->sqo_thread = NULL;
			goto err;
		}
		wake_up_process(ctx->sqo_thread);
	} else if (p->flags & IORING_SETUP_SQ_AFF) {
		/* Can't have SQ_AFF without SQPOLL */
		ret = -EINVAL;
		goto err;
	}
2020-01-28 03:15:48 +03:00
	ret = io_init_wq_offload(ctx, p);
	if (ret)
2019-01-07 20:46:33 +03:00
		goto err;

	return 0;
err:
	io_finish_async(ctx);
2019-01-07 20:46:33 +03:00
	mmdrop(ctx->sqo_mm);
	ctx->sqo_mm = NULL;
	return ret;
}
static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
{
	atomic_long_sub(nr_pages, &user->locked_vm);
}

static int io_account_mem(struct user_struct *user, unsigned long nr_pages)
{
	unsigned long page_limit, cur_pages, new_pages;

	/* Don't allow more pages than we can safely lock */
	page_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

	do {
		cur_pages = atomic_long_read(&user->locked_vm);
		new_pages = cur_pages + nr_pages;
		if (new_pages > page_limit)
			return -ENOMEM;
	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
					new_pages) != cur_pages);

	return 0;
}
static void io_mem_free(void *ptr)
{
io_uring: free allocated io_memory once
If io_allocate_scq_urings() fails to allocate an sq_* region, it will
call io_mem_free() for any previously allocated regions, but leave
dangling pointers to these regions in the ctx. Any regions which have
not yet been allocated are left NULL. Note that when returning
-EOVERFLOW, the previously allocated sq_ring is not freed, which appears
to be an unintentional leak.
When io_allocate_scq_urings() fails, io_uring_create() will call
io_ring_ctx_wait_and_kill(), which calls io_mem_free() on all the sq_*
regions, assuming the pointers are valid and not NULL.
This can result in pages being freed multiple times, which has been
observed to corrupt the page state, leading to subsequent fun. This can
also result in virt_to_page() on NULL, resulting in the use of bogus
page addresses, and yet more subsequent fun. The latter can be detected
with CONFIG_DEBUG_VIRTUAL on arm64.
Adding a cleanup path to io_allocate_scq_urings() complicates the logic,
so let's leave it to io_ring_ctx_free() to consistently free these
pointers, and simplify the io_allocate_scq_urings() error paths.
Full splats from before this patch below. Note that the pointer logged
by the DEBUG_VIRTUAL "non-linear address" warning has been hashed, and
is actually NULL.
[ 26.098129] page:ffff80000e949a00 count:0 mapcount:-128 mapping:0000000000000000 index:0x0
[ 26.102976] flags: 0x63fffc000000()
[ 26.104373] raw: 000063fffc000000 ffff80000e86c188 ffff80000ea3df08 0000000000000000
[ 26.108917] raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000
[ 26.137235] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
[ 26.143960] ------------[ cut here ]------------
[ 26.146020] kernel BUG at include/linux/mm.h:547!
[ 26.147586] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 26.149163] Modules linked in:
[ 26.150287] Process syz-executor.21 (pid: 20204, stack limit = 0x000000000e9cefeb)
[ 26.153307] CPU: 2 PID: 20204 Comm: syz-executor.21 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #18
[ 26.156566] Hardware name: linux,dummy-virt (DT)
[ 26.158089] pstate: 40400005 (nZcv daif +PAN -UAO)
[ 26.159869] pc : io_mem_free+0x9c/0xa8
[ 26.161436] lr : io_mem_free+0x9c/0xa8
[ 26.162720] sp : ffff000013003d60
[ 26.164048] x29: ffff000013003d60 x28: ffff800025048040
[ 26.165804] x27: 0000000000000000 x26: ffff800025048040
[ 26.167352] x25: 00000000000000c0 x24: ffff0000112c2820
[ 26.169682] x23: 0000000000000000 x22: 0000000020000080
[ 26.171899] x21: ffff80002143b418 x20: ffff80002143b400
[ 26.174236] x19: ffff80002143b280 x18: 0000000000000000
[ 26.176607] x17: 0000000000000000 x16: 0000000000000000
[ 26.178997] x15: 0000000000000000 x14: 0000000000000000
[ 26.181508] x13: 00009178a5e077b2 x12: 0000000000000001
[ 26.183863] x11: 0000000000000000 x10: 0000000000000980
[ 26.186437] x9 : ffff000013003a80 x8 : ffff800025048a20
[ 26.189006] x7 : ffff8000250481c0 x6 : ffff80002ffe9118
[ 26.191359] x5 : ffff80002ffe9118 x4 : 0000000000000000
[ 26.193863] x3 : ffff80002ffefe98 x2 : 44c06ddd107d1f00
[ 26.196642] x1 : 0000000000000000 x0 : 000000000000003e
[ 26.198892] Call trace:
[ 26.199893] io_mem_free+0x9c/0xa8
[ 26.201155] io_ring_ctx_wait_and_kill+0xec/0x180
[ 26.202688] io_uring_setup+0x6c4/0x6f0
[ 26.204091] __arm64_sys_io_uring_setup+0x18/0x20
[ 26.205576] el0_svc_common.constprop.0+0x7c/0xe8
[ 26.207186] el0_svc_handler+0x28/0x78
[ 26.208389] el0_svc+0x8/0xc
[ 26.209408] Code: aa0203e0 d0006861 9133a021 97fcdc3c (d4210000)
[ 26.211995] ---[ end trace bdb81cd43a21e50d ]---
[ 81.770626] ------------[ cut here ]------------
[ 81.825015] virt_to_phys used for non-linear address: 000000000d42f2c7 ( (null))
[ 81.827860] WARNING: CPU: 1 PID: 30171 at arch/arm64/mm/physaddr.c:15 __virt_to_phys+0x48/0x68
[ 81.831202] Modules linked in:
[ 81.832212] CPU: 1 PID: 30171 Comm: syz-executor.20 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #19
[ 81.835616] Hardware name: linux,dummy-virt (DT)
[ 81.836863] pstate: 60400005 (nZCv daif +PAN -UAO)
[ 81.838727] pc : __virt_to_phys+0x48/0x68
[ 81.840572] lr : __virt_to_phys+0x48/0x68
[ 81.842264] sp : ffff80002cf67c70
[ 81.843858] x29: ffff80002cf67c70 x28: ffff800014358e18
[ 81.846463] x27: 0000000000000000 x26: 0000000020000080
[ 81.849148] x25: 0000000000000000 x24: ffff80001bb01f40
[ 81.851986] x23: ffff200011db06c8 x22: ffff2000127e3c60
[ 81.854351] x21: ffff800014358cc0 x20: ffff800014358d98
[ 81.856711] x19: 0000000000000000 x18: 0000000000000000
[ 81.859132] x17: 0000000000000000 x16: 0000000000000000
[ 81.861586] x15: 0000000000000000 x14: 0000000000000000
[ 81.863905] x13: 0000000000000000 x12: ffff1000037603e9
[ 81.866226] x11: 1ffff000037603e8 x10: 0000000000000980
[ 81.868776] x9 : ffff80002cf67840 x8 : ffff80001bb02920
[ 81.873272] x7 : ffff1000037603e9 x6 : ffff80001bb01f47
[ 81.875266] x5 : ffff1000037603e9 x4 : dfff200000000000
[ 81.876875] x3 : ffff200010087528 x2 : ffff1000059ecf58
[ 81.878751] x1 : 44c06ddd107d1f00 x0 : 0000000000000000
[ 81.880453] Call trace:
[ 81.881164] __virt_to_phys+0x48/0x68
[ 81.882919] io_mem_free+0x18/0x110
[ 81.886585] io_ring_ctx_wait_and_kill+0x13c/0x1f0
[ 81.891212] io_uring_setup+0xa60/0xad0
[ 81.892881] __arm64_sys_io_uring_setup+0x2c/0x38
[ 81.894398] el0_svc_common.constprop.0+0xac/0x150
[ 81.896306] el0_svc_handler+0x34/0x88
[ 81.897744] el0_svc+0x8/0xc
[ 81.898715] ---[ end trace b4a703802243cbba ]---
Fixes: 2b188cc1bb857a9d ("Add io_uring IO interface")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-block@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-04-30 19:30:21 +03:00
	struct page *page;

	if (!ptr)
		return;
2019-01-07 20:46:33 +03:00
2019-04-30 19:30:21 +03:00
	page = virt_to_head_page(ptr);
2019-01-07 20:46:33 +03:00
	if (put_page_testzero(page))
		free_compound_page(page);
}

static void *io_mem_alloc(size_t size)
{
	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
				__GFP_NORETRY;

	return (void *) __get_free_pages(gfp_flags, get_order(size));
}
2019-08-26 20:23:46 +03:00
static unsigned long rings_size(unsigned sq_entries, unsigned cq_entries,
				size_t *sq_offset)
{
	struct io_rings *rings;
	size_t off, sq_array_size;

	off = struct_size(rings, cqes, cq_entries);
	if (off == SIZE_MAX)
		return SIZE_MAX;

#ifdef CONFIG_SMP
	off = ALIGN(off, SMP_CACHE_BYTES);
	if (off == 0)
		return SIZE_MAX;
#endif

	sq_array_size = array_size(sizeof(u32), sq_entries);
	if (sq_array_size == SIZE_MAX)
		return SIZE_MAX;

	if (check_add_overflow(off, sq_array_size, &off))
		return SIZE_MAX;

	if (sq_offset)
		*sq_offset = off;

	return off;
}
2019-01-07 20:46:33 +03:00
static unsigned long ring_pages(unsigned sq_entries, unsigned cq_entries)
{
	size_t pages;
2019-01-07 20:46:33 +03:00
2019-08-26 20:23:46 +03:00
	pages = (size_t)1 << get_order(
		rings_size(sq_entries, cq_entries, NULL));
	pages += (size_t)1 << get_order(
		array_size(sizeof(struct io_uring_sqe), sq_entries));
2019-01-07 20:46:33 +03:00
2019-08-26 20:23:46 +03:00
	return pages;
2019-01-07 20:46:33 +03:00
}
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
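To make the registration flow concrete, here is a minimal userspace sketch of pinning one anonymous buffer with IORING_REGISTER_BUFFERS. The buffer size, the mmap-based allocation, and the raw syscall wrapper are illustrative assumptions; a real application would typically go through liburing.
/*
 * Userspace sketch (not part of this patch): register a single fixed buffer.
 * Requires kernel headers that define __NR_io_uring_register.
 */
#include <linux/io_uring.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

#define FIXED_BUF_SIZE	(64 * 1024)	/* illustrative size, well under the 1G cap */

static int register_one_buffer(int ring_fd, struct iovec *iov)
{
	/* anonymous memory: file backed buffers are rejected with EOPNOTSUPP */
	iov->iov_base = mmap(NULL, FIXED_BUF_SIZE, PROT_READ | PROT_WRITE,
			     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (iov->iov_base == MAP_FAILED)
		return -1;
	iov->iov_len = FIXED_BUF_SIZE;

	/* pins the pages now, accounted against RLIMIT_MEMLOCK */
	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_BUFFERS, iov, 1);
}
IOs against buffer 0 then use IORING_OP_READ_FIXED or IORING_OP_WRITE_FIXED with the sqe's buffer index field set to 0 and sqe->addr/sqe->len pointing anywhere inside the registered range.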
2019-01-09 19:16:05 +03:00
static int io_sqe_buffer_unregister(struct io_ring_ctx *ctx)
{
	int i, j;

	if (!ctx->user_bufs)
		return -ENXIO;

	for (i = 0; i < ctx->nr_user_bufs; i++) {
		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];

		for (j = 0; j < imu->nr_bvecs; j++)
			put_user_page(imu->bvec[j].bv_page);
2019-01-09 19:16:05 +03:00
		if (ctx->account_mem)
			io_unaccount_mem(ctx->user, imu->nr_bvecs);
2019-05-01 18:59:16 +03:00
		kvfree(imu->bvec);
2019-01-09 19:16:05 +03:00
		imu->nr_bvecs = 0;
	}
	kfree(ctx->user_bufs);
	ctx->user_bufs = NULL;
	ctx->nr_user_bufs = 0;
	return 0;
}
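The unregister path above is what a userspace IORING_UNREGISTER_BUFFERS call ends up exercising. As a rough illustration of the register/unregister cycle the commit message describes (assuming liburing; the helper name remap_buffers is invented for this sketch):

#include <liburing.h>

/* Hypothetical helper: drop the current fixed-buffer set and install a new
 * one.  liburing's io_uring_unregister_buffers()/io_uring_register_buffers()
 * wrap the IORING_UNREGISTER_BUFFERS / IORING_REGISTER_BUFFERS opcodes. */
static int remap_buffers(struct io_uring *ring,
			 struct iovec *new_iovs, unsigned nr)
{
	int ret;

	ret = io_uring_unregister_buffers(ring);
	if (ret && ret != -ENXIO)	/* -ENXIO: nothing was registered yet */
		return ret;
	return io_uring_register_buffers(ring, new_iovs, nr);
}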
static int io_copy_iov(struct io_ring_ctx *ctx, struct iovec *dst,
		       void __user *arg, unsigned index)
{
	struct iovec __user *src;

#ifdef CONFIG_COMPAT
	if (ctx->compat) {
		struct compat_iovec __user *ciovs;
		struct compat_iovec ciov;

		ciovs = (struct compat_iovec __user *) arg;
		if (copy_from_user(&ciov, &ciovs[index], sizeof(ciov)))
			return -EFAULT;
2019-12-12 02:12:15 +03:00
		dst->iov_base = u64_to_user_ptr((u64)ciov.iov_base);
2019-01-09 19:16:05 +03:00
		dst->iov_len = ciov.iov_len;
		return 0;
	}
#endif
	src = (struct iovec __user *) arg;
	if (copy_from_user(dst, &src[index], sizeof(*dst)))
		return -EFAULT;
	return 0;
}
static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
				  unsigned nr_args)
{
	struct vm_area_struct **vmas = NULL;
	struct page **pages = NULL;
	int i, j, got_pages = 0;
	int ret = -EINVAL;

	if (ctx->user_bufs)
		return -EBUSY;
	if (!nr_args || nr_args > UIO_MAXIOV)
		return -EINVAL;

	ctx->user_bufs = kcalloc(nr_args, sizeof(struct io_mapped_ubuf),
					GFP_KERNEL);
	if (!ctx->user_bufs)
		return -ENOMEM;

	for (i = 0; i < nr_args; i++) {
		struct io_mapped_ubuf *imu = &ctx->user_bufs[i];
		unsigned long off, start, end, ubuf;
		int pret, nr_pages;
		struct iovec iov;
		size_t size;

		ret = io_copy_iov(ctx, &iov, arg, i);
		if (ret)
2019-05-26 12:35:47 +03:00
			goto err;
2019-01-09 19:16:05 +03:00
		/*
		 * Don't impose further limits on the size and buffer
		 * constraints here, we'll -EINVAL later when IO is
		 * submitted if they are wrong.
		 */
		ret = -EFAULT;
		if (!iov.iov_base || !iov.iov_len)
			goto err;

		/* arbitrary limit, but we need something */
		if (iov.iov_len > SZ_1G)
			goto err;

		ubuf = (unsigned long) iov.iov_base;
		end = (ubuf + iov.iov_len + PAGE_SIZE - 1) >> PAGE_SHIFT;
		start = ubuf >> PAGE_SHIFT;
		nr_pages = end - start;

		if (ctx->account_mem) {
			ret = io_account_mem(ctx->user, nr_pages);
			if (ret)
				goto err;
		}

		ret = 0;
		if (!pages || nr_pages > got_pages) {
			kfree(vmas);
			kfree(pages);
2019-05-01 18:59:16 +03:00
			pages = kvmalloc_array(nr_pages, sizeof(struct page *),
2019-01-09 19:16:05 +03:00
						GFP_KERNEL);
2019-05-01 18:59:16 +03:00
			vmas = kvmalloc_array(nr_pages,
2019-01-09 19:16:05 +03:00
					      sizeof(struct vm_area_struct *),
					      GFP_KERNEL);
			if (!pages || !vmas) {
				ret = -ENOMEM;
				if (ctx->account_mem)
					io_unaccount_mem(ctx->user, nr_pages);
				goto err;
			}
			got_pages = nr_pages;
		}
2019-05-01 18:59:16 +03:00
		imu->bvec = kvmalloc_array(nr_pages, sizeof(struct bio_vec),
2019-01-09 19:16:05 +03:00
					GFP_KERNEL);
		ret = -ENOMEM;
		if (!imu->bvec) {
			if (ctx->account_mem)
				io_unaccount_mem(ctx->user, nr_pages);
			goto err;
		}

		ret = 0;
		down_read(&current->mm->mmap_sem);
mm/gup: replace get_user_pages_longterm() with FOLL_LONGTERM
Patch series "Add FOLL_LONGTERM to GUP fast and use it".
HFI1, qib, and mthca, use get_user_pages_fast() due to its performance
advantages. These pages can be held for a significant time. But
get_user_pages_fast() does not protect against mapping FS DAX pages.
Introduce FOLL_LONGTERM and use this flag in get_user_pages_fast() which
retains the performance while also adding the FS DAX checks. XDP has also
shown interest in using this functionality.[1]
In addition we change get_user_pages() to use the new FOLL_LONGTERM flag
and remove the specialized get_user_pages_longterm call.
[1] https://lkml.org/lkml/2019/3/19/939
"longterm" is a relative thing and at this point is probably a misnomer.
This is really flagging a pin which is going to be given to hardware and
can't move. I've thought of a couple of alternative names but I think we
have to settle on if we are going to use FL_LAYOUT or something else to
solve the "longterm" problem. Then I think we can change the flag to a
better name.
Secondly, it depends on how often you are registering memory. I have
spoken with some RDMA users who consider MR in the performance path...
For the overall application performance. I don't have the numbers as the
tests for HFI1 were done a long time ago. But there was a significant
advantage. Some of which is probably due to the fact that you don't have
to hold mmap_sem.
Finally, architecturally I think it would be good for everyone to use
*_fast. There are patches submitted to the RDMA list which would allow
the use of *_fast (they rework the use of mmap_sem) and as soon as they
are accepted I'll submit a patch to convert the RDMA core as well. Also
to this point others are looking to use *_fast.
As an aside, Jason pointed out in my previous submission that *_fast and
*_unlocked look very much the same. I agree and I think further cleanup
will be coming. But I'm focused on getting the final solution for DAX at
the moment.
This patch (of 7):
This patch starts a series which aims to support FOLL_LONGTERM in
get_user_pages_fast(). Some callers would like to do a longterm (user
controlled) pin of pages with the fast variant of GUP for performance
purposes.
Rather than have a separate get_user_pages_longterm() call, introduce
FOLL_LONGTERM and change the longterm callers to use it.
This patch does not change any functionality. In the short term
"longterm" or user controlled pins are unsafe for Filesystems and FS DAX
in particular has been blocked. However, callers of get_user_pages_fast()
were not "protected".
FOLL_LONGTERM can _only_ be supported with get_user_pages[_fast]() as it
requires vmas to determine if DAX is in use.
NOTE: In merging with the CMA changes we opt to change the
get_user_pages() call in check_and_migrate_cma_pages() to a call of
__get_user_pages_locked() on the newly migrated pages. This makes the
code read better in that we are calling __get_user_pages_locked() on the
pages before and after a potential migration.
As a side effect some of the interfaces are cleaned up but this is not the
primary purpose of the series.
In review[1] it was asked:
<quote>
> This I don't get - if you do lock down long term mappings performance
> of the actual get_user_pages call shouldn't matter to start with.
>
> What do I miss?
A couple of points.
First "longterm" is a relative thing and at this point is probably a
misnomer. This is really flagging a pin which is going to be given to
hardware and can't move. I've thought of a couple of alternative names
but I think we have to settle on if we are going to use FL_LAYOUT or
something else to solve the "longterm" problem. Then I think we can
change the flag to a better name.
Second, it depends on how often you are registering memory. I have spoken
with some RDMA users who consider MR in the performance path... For the
overall application performance. I don't have the numbers as the tests
for HFI1 were done a long time ago. But there was a significant
advantage. Some of which is probably due to the fact that you don't have
to hold mmap_sem.
Finally, architecturally I think it would be good for everyone to use
*_fast. There are patches submitted to the RDMA list which would allow
the use of *_fast (they rework the use of mmap_sem) and as soon as they
are accepted I'll submit a patch to convert the RDMA core as well. Also
to this point others are looking to use *_fast.
As an aside, Jason pointed out in my previous submission that *_fast and
*_unlocked look very much the same. I agree and I think further cleanup
will be coming. But I'm focused on getting the final solution for DAX at
the moment.
</quote>
[1] https://lore.kernel.org/lkml/20190220180255.GA12020@iweiny-DESK2.sc.intel.com/T/#md6abad2569f3bf6c1f03686c8097ab6563e94965
[ira.weiny@intel.com: v3]
Link: http://lkml.kernel.org/r/20190328084422.29911-2-ira.weiny@intel.com
Link: http://lkml.kernel.org/r/20190328084422.29911-2-ira.weiny@intel.com
Link: http://lkml.kernel.org/r/20190317183438.2057-2-ira.weiny@intel.com
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 03:17:03 +03:00
		pret = get_user_pages(ubuf, nr_pages,
				      FOLL_WRITE | FOLL_LONGTERM,
				      pages, vmas);
2019-01-09 19:16:05 +03:00
		if (pret == nr_pages) {
			/* don't support file backed memory */
			for (j = 0; j < nr_pages; j++) {
				struct vm_area_struct *vma = vmas[j];

				if (vma->vm_file &&
				    !is_file_hugepages(vma->vm_file)) {
					ret = -EOPNOTSUPP;
					break;
				}
			}
		} else {
			ret = pret < 0 ? pret : -EFAULT;
		}
		up_read(&current->mm->mmap_sem);
		if (ret) {
			/*
			 * if we did partial map, or found file backed vmas,
			 * release any pages we did get
			 */
2019-08-05 05:32:06 +03:00
			if (pret > 0)
				put_user_pages(pages, pret);
2019-01-09 19:16:05 +03:00
			if (ctx->account_mem)
				io_unaccount_mem(ctx->user, nr_pages);
2019-05-01 18:59:16 +03:00
			kvfree(imu->bvec);
2019-01-09 19:16:05 +03:00
			goto err;
		}

		off = ubuf & ~PAGE_MASK;
		size = iov.iov_len;
		for (j = 0; j < nr_pages; j++) {
			size_t vec_len;

			vec_len = min_t(size_t, size, PAGE_SIZE - off);
			imu->bvec[j].bv_page = pages[j];
			imu->bvec[j].bv_len = vec_len;
			imu->bvec[j].bv_offset = off;
			off = 0;
			size -= vec_len;
		}
		/* store original address for later verification */
		imu->ubuf = ubuf;
		imu->len = iov.iov_len;
		imu->nr_bvecs = nr_pages;
		ctx->nr_user_bufs++;
	}
2019-05-01 18:59:16 +03:00
	kvfree(pages);
	kvfree(vmas);
2019-01-09 19:16:05 +03:00
	return 0;
err:
2019-05-01 18:59:16 +03:00
	kvfree(pages);
	kvfree(vmas);
2019-01-09 19:16:05 +03:00
	io_sqe_buffer_unregister(ctx);
	return ret;
}
2019-04-11 20:45:41 +03:00
static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
{
	__s32 __user *fds = arg;
	int fd;

	if (ctx->cq_ev_fd)
		return -EBUSY;

	if (copy_from_user(&fd, fds, sizeof(*fds)))
		return -EFAULT;

	ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
	if (IS_ERR(ctx->cq_ev_fd)) {
		int ret = PTR_ERR(ctx->cq_ev_fd);

		ctx->cq_ev_fd = NULL;
		return ret;
	}

	return 0;
}
static int io_eventfd_unregister(struct io_ring_ctx *ctx)
{
	if (ctx->cq_ev_fd) {
		eventfd_ctx_put(ctx->cq_ev_fd);
		ctx->cq_ev_fd = NULL;
		return 0;
	}

	return -ENXIO;
}
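These two helpers back the IORING_REGISTER_EVENTFD and IORING_UNREGISTER_EVENTFD opcodes: once an eventfd is registered, completions posted to the CQ ring signal it. A rough userspace sketch of consuming that signal, assuming liburing; the eventfd wiring here is illustrative and not taken from this file:

#include <liburing.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>

/* Illustrative only: tie CQ completions to an eventfd so the ring can be
 * waited on alongside other file descriptors (e.g. in an epoll loop). */
static void wait_via_eventfd(struct io_uring *ring)
{
	struct io_uring_cqe *cqe;
	uint64_t count;
	int efd = eventfd(0, 0);

	io_uring_register_eventfd(ring, efd);	/* IORING_REGISTER_EVENTFD */

	/* ... submit some SQEs elsewhere ... */

	read(efd, &count, sizeof(count));	/* blocks until a CQE is posted */
	while (io_uring_peek_cqe(ring, &cqe) == 0) {
		/* handle cqe->res here */
		io_uring_cqe_seen(ring, cqe);
	}

	io_uring_unregister_eventfd(ring);	/* IORING_UNREGISTER_EVENTFD */
	close(efd);
}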
2019-01-07 20:46:33 +03:00
static void io_ring_ctx_free(struct io_ring_ctx *ctx)
{
2019-01-11 08:13:58 +03:00
	io_finish_async(ctx);
2019-01-07 20:46:33 +03:00
	if (ctx->sqo_mm)
		mmdrop(ctx->sqo_mm);
2019-01-09 18:59:42 +03:00
	io_iopoll_reap_events(ctx);
2019-01-09 19:16:05 +03:00
	io_sqe_buffer_unregister(ctx);
2019-01-11 08:13:58 +03:00
	io_sqe_files_unregister(ctx);
2019-04-11 20:45:41 +03:00
	io_eventfd_unregister(ctx);
2019-01-09 18:59:42 +03:00
2019-01-07 20:46:33 +03:00
#if defined(CONFIG_UNIX)
2019-06-13 00:58:43 +03:00
	if (ctx->ring_sock) {
		ctx->ring_sock->file = NULL; /* so that iput() is called */
2019-01-07 20:46:33 +03:00
		sock_release(ctx->ring_sock);
2019-06-13 00:58:43 +03:00
	}
2019-01-07 20:46:33 +03:00
#endif
2019-08-26 20:23:46 +03:00
	io_mem_free(ctx->rings);
2019-01-07 20:46:33 +03:00
	io_mem_free(ctx->sq_sqes);

	percpu_ref_exit(&ctx->refs);
	if (ctx->account_mem)
		io_unaccount_mem(ctx->user,
				ring_pages(ctx->sq_entries, ctx->cq_entries));
	free_uid(ctx->user);
2019-11-25 18:52:30 +03:00
	put_cred(ctx->creds);
2019-11-08 04:27:42 +03:00
	kfree(ctx->completions);
2019-12-05 05:56:40 +03:00
	kfree(ctx->cancel_hash);
2019-11-08 18:52:53 +03:00
	kmem_cache_free(req_cachep, ctx->fallback_req);
2019-01-07 20:46:33 +03:00
	kfree(ctx);
}
static __poll_t io_uring_poll(struct file *file, poll_table *wait)
{
	struct io_ring_ctx *ctx = file->private_data;
	__poll_t mask = 0;

	poll_wait(file, &ctx->cq_wait, wait);
2019-04-25 00:54:17 +03:00
	/*
	 * synchronizes with barrier from wq_has_sleeper call in
	 * io_commit_cqring
	 */
2019-01-07 20:46:33 +03:00
	smp_rmb();
2019-08-26 20:23:46 +03:00
	if (READ_ONCE(ctx->rings->sq.tail) - ctx->cached_sq_head !=
	    ctx->rings->sq_ring_entries)
2019-01-07 20:46:33 +03:00
                mask |= EPOLLOUT | EPOLLWRNORM;
        if (READ_ONCE(ctx->rings->cq.head) != ctx->cached_cq_tail)
                mask |= EPOLLIN | EPOLLRDNORM;

        return mask;
}

static int io_uring_fasync(int fd, struct file *file, int on)
{
        struct io_ring_ctx *ctx = file->private_data;

        return fasync_helper(fd, file, on, &ctx->cq_fasync);
}

static int io_remove_personalities(int id, void *p, void *data)
{
        struct io_ring_ctx *ctx = data;
        const struct cred *cred;

        cred = idr_remove(&ctx->personality_idr, id);
        if (cred)
                put_cred(cred);
        return 0;
}

static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
{
        mutex_lock(&ctx->uring_lock);
        percpu_ref_kill(&ctx->refs);
        mutex_unlock(&ctx->uring_lock);

        io_kill_timeouts(ctx);
        io_poll_remove_all(ctx);

        if (ctx->io_wq)
                io_wq_cancel_all(ctx->io_wq);

        io_iopoll_reap_events(ctx);
        /* if we failed setting up the ctx, we might not have any rings */
        if (ctx->rings)
                io_cqring_overflow_flush(ctx, true);
        idr_for_each(&ctx->personality_idr, io_remove_personalities, ctx);
        wait_for_completion(&ctx->completions[0]);
        io_ring_ctx_free(ctx);
}

static int io_uring_release(struct inode *inode, struct file *file)
{
        struct io_ring_ctx *ctx = file->private_data;

        file->private_data = NULL;
        io_ring_ctx_wait_and_kill(ctx);
        return 0;
}

static void io_uring_cancel_files(struct io_ring_ctx *ctx,
                                  struct files_struct *files)
{
        struct io_kiocb *req;
        DEFINE_WAIT(wait);

        while (!list_empty_careful(&ctx->inflight_list)) {
                struct io_kiocb *cancel_req = NULL;

                spin_lock_irq(&ctx->inflight_lock);
                list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
                        if (req->work.files != files)
                                continue;
                        /* req is being completed, ignore */
                        if (!refcount_inc_not_zero(&req->refs))
                                continue;
                        cancel_req = req;
                        break;
                }
                if (cancel_req)
                        prepare_to_wait(&ctx->inflight_wait, &wait,
                                        TASK_UNINTERRUPTIBLE);
                spin_unlock_irq(&ctx->inflight_lock);

                /* We need to keep going until we don't find a matching req */
                if (!cancel_req)
                        break;

                io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
                io_put_req(cancel_req);
                schedule();
        }
        finish_wait(&ctx->inflight_wait, &wait);
}

static int io_uring_flush(struct file *file, void *data)
{
        struct io_ring_ctx *ctx = file->private_data;

        io_uring_cancel_files(ctx, data);
io_uring: add support for backlogged CQ ring
Currently we drop completion events if the CQ ring is full. That's fine
for requests with bounded completion times, but it can make it hard or
impossible to use io_uring with networked IO, where request completion
times are generally unbounded, or with POLL, which is also unbounded.
After this patch, we never overflow the ring; we simply store requests
in a backlog for later flushing. This flushing is done automatically by
the kernel. To prevent the backlog from growing indefinitely, we apply
back pressure on IO submissions whenever the backlog is non-empty. Any
attempt to submit new IO with a non-empty backlog will get an -EBUSY
return from the kernel. This is a signal to the application that it has
backlogged CQ events, and that it must reap those before being allowed
to submit more IO.
Note that if we do return -EBUSY, we will first have flushed whatever
backlogged events fit into the CQ ring. This means the application can
safely reap events WITHOUT entering the kernel and waiting for them;
they are already available in the CQ ring (see the userspace sketch
after io_uring_flush() below).
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-06 21:31:17 +03:00
        if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) {
                io_cqring_overflow_flush(ctx, true);
                io_wq_cancel_all(ctx->io_wq);
        }
        return 0;
}
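
As a concrete illustration of the point in the backlogged-CQ message above (already-flushed events can be reaped without entering the kernel), here is a minimal userspace sketch. It is not part of this file; the parameter names (cq_head, cq_tail, cq_mask, cqes) are hypothetical stand-ins for the ring fields an application locates via the cq_off offsets returned by io_uring_setup().

/*
 * Hypothetical userspace helper, not kernel code: drain whatever is
 * already published in the mmap'ed CQ ring without a system call.
 */
#include <linux/io_uring.h>

static unsigned reap_ready_cqes(unsigned *cq_head, const unsigned *cq_tail,
                                unsigned cq_mask,
                                const struct io_uring_cqe *cqes)
{
        unsigned head = *cq_head;
        unsigned reaped = 0;
        /* acquire pairs with the kernel barrier before it publishes the tail */
        unsigned tail = __atomic_load_n(cq_tail, __ATOMIC_ACQUIRE);

        while (head != tail) {
                const struct io_uring_cqe *cqe = &cqes[head & cq_mask];

                /* cqe->user_data identifies the request, cqe->res its result */
                (void)cqe;
                head++;
                reaped++;
        }
        /* release store tells the kernel these entries may be reused */
        __atomic_store_n(cq_head, head, __ATOMIC_RELEASE);
        return reaped;
}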

static void *io_uring_validate_mmap_request(struct file *file,
                                            loff_t pgoff, size_t sz)
{
        struct io_ring_ctx *ctx = file->private_data;
        loff_t offset = pgoff << PAGE_SHIFT;
        struct page *page;
        void *ptr;

        switch (offset) {
        case IORING_OFF_SQ_RING:
        case IORING_OFF_CQ_RING:
                ptr = ctx->rings;
                break;
        case IORING_OFF_SQES:
                ptr = ctx->sq_sqes;
                break;
        default:
                return ERR_PTR(-EINVAL);
        }

        page = virt_to_head_page(ptr);
        if (sz > page_size(page))
                return ERR_PTR(-EINVAL);

        return ptr;
}

#ifdef CONFIG_MMU

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
        size_t sz = vma->vm_end - vma->vm_start;
        unsigned long pfn;
        void *ptr;

        ptr = io_uring_validate_mmap_request(file, vma->vm_pgoff, sz);
        if (IS_ERR(ptr))
                return PTR_ERR(ptr);
        pfn = virt_to_phys(ptr) >> PAGE_SHIFT;
        return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
}

#else /* !CONFIG_MMU */

static int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
{
        return vma->vm_flags & (VM_SHARED | VM_MAYSHARE) ? 0 : -EINVAL;
}

static unsigned int io_uring_nommu_mmap_capabilities(struct file *file)
{
        return NOMMU_MAP_DIRECT | NOMMU_MAP_READ | NOMMU_MAP_WRITE;
}

static unsigned long io_uring_nommu_get_unmapped_area(struct file *file,
        unsigned long addr, unsigned long len,
        unsigned long pgoff, unsigned long flags)
{
        void *ptr;

        ptr = io_uring_validate_mmap_request(file, pgoff, len);
        if (IS_ERR(ptr))
                return PTR_ERR(ptr);

        return (unsigned long) ptr;
}

#endif /* !CONFIG_MMU */
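
For context only, the three offsets validated above are what an application passes to mmap(2) after io_uring_setup(). The sketch below is userspace code, not part of this file; it assumes a 'ring_fd' and the io_uring_params filled in by that setup call, and error handling is reduced to a single check.

#include <linux/io_uring.h>
#include <sys/mman.h>
#include <stddef.h>

struct app_rings {
        void *sq_ring;                  /* SQ ring header + sqe index array */
        void *cq_ring;                  /* CQ ring header + cqe array */
        struct io_uring_sqe *sqes;      /* the sqe array itself */
};

static int map_rings(int ring_fd, const struct io_uring_params *p,
                     struct app_rings *r)
{
        size_t sq_sz = p->sq_off.array + p->sq_entries * sizeof(__u32);
        size_t cq_sz = p->cq_off.cqes + p->cq_entries * sizeof(struct io_uring_cqe);
        size_t sqes_sz = p->sq_entries * sizeof(struct io_uring_sqe);

        r->sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                          ring_fd, IORING_OFF_SQ_RING);
        r->cq_ring = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                          ring_fd, IORING_OFF_CQ_RING);
        r->sqes = mmap(NULL, sqes_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                       ring_fd, IORING_OFF_SQES);
        if (r->sq_ring == MAP_FAILED || r->cq_ring == MAP_FAILED ||
            r->sqes == MAP_FAILED)
                return -1;
        return 0;
}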

SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
                u32, min_complete, u32, flags, const sigset_t __user *, sig,
                size_t, sigsz)
{
        struct io_ring_ctx *ctx;
        long ret = -EBADF;
        int submitted = 0;
        struct fd f;

io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP;
The application will then have to call io_uring_enter() to start things
back up again. If IO is kept busy, that will never be needed. Basically,
an application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
        io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally (a fuller userspace sketch follows
io_uring_enter() below).
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 21:22:30 +03:00
        if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
                return -EINVAL;

        f = fdget(fd);
        if (!f.file)
                return -EBADF;

        ret = -EOPNOTSUPP;
        if (f.file->f_op != &io_uring_fops)
                goto out_fput;

        ret = -ENXIO;
        ctx = f.file->private_data;
        if (!percpu_ref_tryget(&ctx->refs))
                goto out_fput;
        /*
         * For SQ polling, the thread will do all submissions and completions.
         * Just return the requested submit count, and wake the thread if
         * we were asked to.
         */
        ret = 0;
        if (ctx->flags & IORING_SETUP_SQPOLL) {
                if (!list_empty_careful(&ctx->cq_overflow_list))
                        io_cqring_overflow_flush(ctx, false);
                if (flags & IORING_ENTER_SQ_WAKEUP)
                        wake_up(&ctx->sqo_wait);
                submitted = to_submit;
        } else if (to_submit) {
                struct mm_struct *cur_mm;

                if (current->mm != ctx->sqo_mm ||
                    current_cred() != ctx->creds) {
                        ret = -EPERM;
                        goto out;
                }
                mutex_lock(&ctx->uring_lock);
                /* already have mm, so io_submit_sqes() won't try to grab it */
                cur_mm = ctx->sqo_mm;
                submitted = io_submit_sqes(ctx, to_submit, f.file, fd,
                                           &cur_mm, false);
                mutex_unlock(&ctx->uring_lock);

                if (submitted != to_submit)
                        goto out;
        }
        if (flags & IORING_ENTER_GETEVENTS) {
                unsigned nr_events = 0;
                min_complete = min(min_complete, ctx->cq_entries);

                if (ctx->flags & IORING_SETUP_IOPOLL) {
                        ret = io_iopoll_check(ctx, &nr_events, min_complete);
                } else {
                        ret = io_cqring_wait(ctx, min_complete, sig, sigsz);
                }
        }

out:
        percpu_ref_put(&ctx->refs);
out_fput:
        fdput(f);
        return submitted ? submitted : ret;
}

static const struct file_operations io_uring_fops = {
        .release = io_uring_release,
        .flush = io_uring_flush,
        .mmap = io_uring_mmap,
#ifndef CONFIG_MMU
        .get_unmapped_area = io_uring_nommu_get_unmapped_area,
        .mmap_capabilities = io_uring_nommu_mmap_capabilities,
#endif
        .poll = io_uring_poll,
        .fasync = io_uring_fasync,
};

static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
                                  struct io_uring_params *p)
{
        struct io_rings *rings;
        size_t size, sq_array_offset;
        size = rings_size(p->sq_entries, p->cq_entries, &sq_array_offset);
        if (size == SIZE_MAX)
                return -EOVERFLOW;

        rings = io_mem_alloc(size);
        if (!rings)
                return -ENOMEM;

        ctx->rings = rings;
        ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
        rings->sq_ring_mask = p->sq_entries - 1;
        rings->cq_ring_mask = p->cq_entries - 1;
        rings->sq_ring_entries = p->sq_entries;
        rings->cq_ring_entries = p->cq_entries;
        ctx->sq_mask = rings->sq_ring_mask;
        ctx->cq_mask = rings->cq_ring_mask;
        ctx->sq_entries = rings->sq_ring_entries;
        ctx->cq_entries = rings->cq_ring_entries;
        size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
        if (size == SIZE_MAX) {
                io_mem_free(ctx->rings);
                ctx->rings = NULL;
                return -EOVERFLOW;
        }
        ctx->sq_sqes = io_mem_alloc(size);
        if (!ctx->sq_sqes) {
                io_mem_free(ctx->rings);
                ctx->rings = NULL;
                return -ENOMEM;
        }
        return 0;
}

/*
 * Allocate an anonymous fd, this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details. If UNIX sockets are enabled,
 * we have to tie this fd to a socket for file garbage collection purposes.
 */
static int io_uring_get_fd(struct io_ring_ctx *ctx)
{
        struct file *file;
        int ret;

#if defined(CONFIG_UNIX)
        ret = sock_create_kern(&init_net, PF_UNIX, SOCK_RAW, IPPROTO_IP,
                               &ctx->ring_sock);
        if (ret)
                return ret;
#endif

        ret = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
        if (ret < 0)
                goto err;

        file = anon_inode_getfile("[io_uring]", &io_uring_fops, ctx,
                                  O_RDWR | O_CLOEXEC);
        if (IS_ERR(file)) {
                put_unused_fd(ret);
                ret = PTR_ERR(file);
                goto err;
        }

#if defined(CONFIG_UNIX)
        ctx->ring_sock->file = file;
#endif
        fd_install(ret, file);
        return ret;
err:
#if defined(CONFIG_UNIX)
        sock_release(ctx->ring_sock);
        ctx->ring_sock = NULL;
#endif
        return ret;
}

static int io_uring_create(unsigned entries, struct io_uring_params *p)
{
        struct user_struct *user = NULL;
        struct io_ring_ctx *ctx;
        bool account_mem;
        int ret;

        if (!entries)
                return -EINVAL;
        if (entries > IORING_MAX_ENTRIES) {
                if (!(p->flags & IORING_SETUP_CLAMP))
                        return -EINVAL;
                entries = IORING_MAX_ENTRIES;
        }
        /*
         * Use twice as many entries for the CQ ring. It's possible for the
         * application to drive a higher depth than the size of the SQ ring,
         * since the sqes are only used at submission time. This allows for
         * some flexibility in overcommitting a bit. If the application has
         * set IORING_SETUP_CQSIZE, it will have passed in the desired number
         * of CQ ring entries manually.
         */
        p->sq_entries = roundup_pow_of_two(entries);
        if (p->flags & IORING_SETUP_CQSIZE) {
                /*
                 * If IORING_SETUP_CQSIZE is set, we do the same roundup
                 * to a power-of-two, if it isn't already. We do NOT impose
                 * any cq vs sq ring sizing.
                 */
                if (p->cq_entries < p->sq_entries)
                        return -EINVAL;
                if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
                        if (!(p->flags & IORING_SETUP_CLAMP))
                                return -EINVAL;
                        p->cq_entries = IORING_MAX_CQ_ENTRIES;
                }
                p->cq_entries = roundup_pow_of_two(p->cq_entries);
        } else {
                p->cq_entries = 2 * p->sq_entries;
        }
        user = get_uid(current_user());
        account_mem = !capable(CAP_IPC_LOCK);

        if (account_mem) {
                ret = io_account_mem(user,
                                ring_pages(p->sq_entries, p->cq_entries));
                if (ret) {
                        free_uid(user);
                        return ret;
                }
        }

        ctx = io_ring_ctx_alloc(p);
        if (!ctx) {
                if (account_mem)
                        io_unaccount_mem(user, ring_pages(p->sq_entries,
                                                                p->cq_entries));
                free_uid(user);
                return -ENOMEM;
        }
        ctx->compat = in_compat_syscall();
        ctx->account_mem = account_mem;
        ctx->user = user;
        ctx->creds = get_current_cred();
        ret = io_allocate_scq_urings(ctx, p);
        if (ret)
                goto err;
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 21:22:30 +03:00
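For illustration only, here is a minimal userspace sketch of that guard. It assumes the application keeps a pointer to the SQ ring flags word (at params.sq_off.flags inside the IORING_OFF_SQ_RING mapping) and that its libc's <sys/syscall.h> provides __NR_io_uring_enter; the helper name is made up for the example.

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Illustrative sketch: wake the SQPOLL thread only when it asked for it.
 * 'sq_flags' is assumed to point at sq_off.flags within the mapped SQ ring.
 */
static void sqpoll_kick_if_needed(int ring_fd, unsigned int *sq_flags)
{
        /* the read_barrier() from the commit text; an acquire load serves the same role */
        unsigned int flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

        if (flags & IORING_SQ_NEED_WAKEUP)
                syscall(__NR_io_uring_enter, ring_fd, 0, 0,
                        IORING_ENTER_SQ_WAKEUP, NULL, 0);
        /* otherwise the kernel thread is still spinning and will pick up new sqes */
}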
        ret = io_sq_offload_start(ctx, p);
        if (ret)
                goto err;

        memset(&p->sq_off, 0, sizeof(p->sq_off));
        p->sq_off.head = offsetof(struct io_rings, sq.head);
        p->sq_off.tail = offsetof(struct io_rings, sq.tail);
        p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
        p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
        p->sq_off.flags = offsetof(struct io_rings, sq_flags);
        p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
        p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
        memset(&p->cq_off, 0, sizeof(p->cq_off));
        p->cq_off.head = offsetof(struct io_rings, cq.head);
        p->cq_off.tail = offsetof(struct io_rings, cq.tail);
        p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
        p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
        p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
        p->cq_off.cqes = offsetof(struct io_rings, cqes);

        /*
         * Install ring fd as the very last thing, so we don't risk someone
         * having closed it before we finish setup
         */
        ret = io_uring_get_fd(ctx);
        if (ret < 0)
                goto err;

        p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
                        IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
                        IORING_FEAT_CUR_PERSONALITY;
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but it looks like some parts can be hard to identify via
this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both the kernel
and the application point of view). E.g. a ring creation, file
registration, or waiting for an available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance-related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
        trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
        return ret;
err:
        io_ring_ctx_wait_and_kill(ctx);
        return ret;
}

/*
 * Sets up an aio uring context, and returns the fd. The application asks for
 * a ring size; we return the actual sq/cq ring sizes (among other things) in
 * the params structure passed in.
 */
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
{
        struct io_uring_params p;
        long ret;
        int i;

        if (copy_from_user(&p, params, sizeof(p)))
                return -EFAULT;
        for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
                if (p.resv[i])
                        return -EINVAL;
        }
        if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
                        IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
                        IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ))
                return -EINVAL;

        ret = io_uring_create(entries, &p);
        if (ret < 0)
                return ret;

        if (copy_to_user(params, &p, sizeof(p)))
                return -EFAULT;

        return ret;
}

SYSCALL_DEFINE2(io_uring_setup, u32, entries,
                struct io_uring_params __user *, params)
{
        return io_uring_setup(entries, params);
}
2020-01-17 01:36:52 +03:00
static int io_probe(struct io_ring_ctx *ctx, void __user *arg, unsigned nr_args)
{
        struct io_uring_probe *p;
        size_t size;
        int i, ret;

        size = struct_size(p, ops, nr_args);
        if (size == SIZE_MAX)
                return -EOVERFLOW;
        p = kzalloc(size, GFP_KERNEL);
        if (!p)
                return -ENOMEM;

        ret = -EFAULT;
        if (copy_from_user(p, arg, size))
                goto out;
        ret = -EINVAL;
        /* the caller must pass a zeroed probe structure */
        if (memchr_inv(p, 0, size))
                goto out;

        /* report the highest opcode this kernel knows about */
        p->last_op = IORING_OP_LAST - 1;
        if (nr_args > IORING_OP_LAST)
                nr_args = IORING_OP_LAST;

        /* flag each opcode that is actually implemented */
        for (i = 0; i < nr_args; i++) {
                p->ops[i].op = i;
                if (!io_op_defs[i].not_supported)
                        p->ops[i].flags = IO_URING_OP_SUPPORTED;
        }
        p->ops_len = i;

        ret = 0;
        if (copy_to_user(arg, p, size))
                ret = -EFAULT;
out:
        kfree(p);
        return ret;
}
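For illustration, a hedged userspace counterpart of the probe call above,
driven through the raw io_uring_register(2) syscall. It assumes uapi headers
that define struct io_uring_probe, IORING_REGISTER_PROBE and IORING_OP_LAST,
plus a ring fd obtained from io_uring_setup(2); the function name is made up
for this sketch.

#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

static void print_supported_ops(int ring_fd)
{
        unsigned nr = IORING_OP_LAST;   /* ask about every opcode we know */
        size_t len = sizeof(struct io_uring_probe) +
                     nr * sizeof(struct io_uring_probe_op);
        struct io_uring_probe *probe = calloc(1, len);  /* must be zeroed */

        if (!probe)
                return;
        if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_PROBE,
                    probe, nr) < 0) {
                perror("IORING_REGISTER_PROBE");
                free(probe);
                return;
        }
        for (unsigned i = 0; i < probe->ops_len; i++)
                printf("op %u: %s\n", probe->ops[i].op,
                       (probe->ops[i].flags & IO_URING_OP_SUPPORTED) ?
                       "supported" : "not supported");
        free(probe);
}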
2020-01-28 20:04:42 +03:00
static int io_register_personality(struct io_ring_ctx *ctx)
{
        const struct cred *creds = get_current_cred();
        int id;

        id = idr_alloc_cyclic(&ctx->personality_idr, (void *) creds, 1,
                                USHRT_MAX, GFP_KERNEL);
        if (id < 0)
                put_cred(creds);
        return id;
}

static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
{
        const struct cred *old_creds;

        old_creds = idr_remove(&ctx->personality_idr, id);
        if (old_creds) {
                put_cred(old_creds);
                return 0;
        }

        return -EINVAL;
}

static bool io_register_op_must_quiesce(int op)
{
        switch (op) {
        case IORING_UNREGISTER_FILES:
        case IORING_REGISTER_FILES_UPDATE:
        case IORING_REGISTER_PROBE:
        case IORING_REGISTER_PERSONALITY:
        case IORING_UNREGISTER_PERSONALITY:
                return false;
        default:
                return true;
        }
}
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
set up the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having set up an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to set up a larger buffer and then only use parts
of it for an IO. As long as the range is within the originally mapped
region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G limit per buffer is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 19:16:05 +03:00
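A hedged userspace sketch of the registration step described in the commit
message above, assuming ring_fd came from io_uring_setup(2), a libc that
exposes __NR_io_uring_register, and an invented helper name and buffer size
chosen purely for illustration.

#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>
#include <linux/io_uring.h>

#define BUF_SIZE        (64 * 1024)

static void *register_fixed_buffer(int ring_fd)
{
        void *buf;
        struct iovec iov;

        /* anonymous memory only: file backed buffers are rejected */
        if (posix_memalign(&buf, 4096, BUF_SIZE))
                return NULL;

        iov.iov_base = buf;
        iov.iov_len = BUF_SIZE;

        /* one iovec, so nr_args == 1; the pages count against RLIMIT_MEMLOCK */
        if (syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_BUFFERS,
                    &iov, 1) < 0) {
                free(buf);
                return NULL;
        }

        /* READ_FIXED/WRITE_FIXED sqes can now reference this as buffer
         * index 0, with addr/len anywhere inside the mapped region */
        return buf;
}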
static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
                               void __user *arg, unsigned nr_args)
2019-04-15 19:49:38 +03:00
        __releases(ctx->uring_lock)
        __acquires(ctx->uring_lock)
2019-01-09 19:16:05 +03:00
{
        int ret;
2019-04-22 19:23:23 +03:00
        /*
         * We're inside the ring mutex, if the ref is already dying, then
         * someone else killed the ctx or is already going through
         * io_uring_register().
         */
        if (percpu_ref_is_dying(&ctx->refs))
                return -ENXIO;
2020-01-28 20:04:42 +03:00
        if (io_register_op_must_quiesce(opcode)) {
2019-12-09 21:22:50 +03:00
                percpu_ref_kill(&ctx->refs);
2019-04-15 19:49:38 +03:00
2019-12-09 21:22:50 +03:00
                /*
                 * Drop uring mutex before waiting for references to exit. If
                 * another thread is currently inside io_uring_enter() it might
                 * need to grab the uring_lock to make progress. If we hold it
                 * here across the drain wait, then we can deadlock. It's safe
                 * to drop the mutex here, since no new references will come in
                 * after we've killed the percpu ref.
                 */
                mutex_unlock(&ctx->uring_lock);
2020-01-08 18:26:07 +03:00
                ret = wait_for_completion_interruptible(&ctx->completions[0]);
2019-12-09 21:22:50 +03:00
                mutex_lock(&ctx->uring_lock);
2020-01-08 18:26:07 +03:00
                if (ret) {
                        percpu_ref_resurrect(&ctx->refs);
                        ret = -EINTR;
                        goto out;
                }
2019-12-09 21:22:50 +03:00
        }
2019-01-09 19:16:05 +03:00
        switch (opcode) {
        case IORING_REGISTER_BUFFERS:
                ret = io_sqe_buffer_register(ctx, arg, nr_args);
                break;
        case IORING_UNREGISTER_BUFFERS:
                ret = -EINVAL;
                if (arg || nr_args)
                        break;
                ret = io_sqe_buffer_unregister(ctx);
                break;
2019-01-11 08:13:58 +03:00
        case IORING_REGISTER_FILES:
                ret = io_sqe_files_register(ctx, arg, nr_args);
                break;
        case IORING_UNREGISTER_FILES:
                ret = -EINVAL;
                if (arg || nr_args)
                        break;
                ret = io_sqe_files_unregister(ctx);
                break;
2019-10-03 22:59:56 +03:00
        case IORING_REGISTER_FILES_UPDATE:
                ret = io_sqe_files_update(ctx, arg, nr_args);
                break;
2019-04-11 20:45:41 +03:00
        case IORING_REGISTER_EVENTFD:
2020-01-08 21:04:00 +03:00
        case IORING_REGISTER_EVENTFD_ASYNC:
2019-04-11 20:45:41 +03:00
                ret = -EINVAL;
                if (nr_args != 1)
                        break;
                ret = io_eventfd_register(ctx, arg);
2020-01-08 21:04:00 +03:00
                if (ret)
                        break;
                if (opcode == IORING_REGISTER_EVENTFD_ASYNC)
                        ctx->eventfd_async = 1;
                else
                        ctx->eventfd_async = 0;
2019-04-11 20:45:41 +03:00
                break;
        case IORING_UNREGISTER_EVENTFD:
                ret = -EINVAL;
                if (arg || nr_args)
                        break;
                ret = io_eventfd_unregister(ctx);
                break;
2020-01-17 01:36:52 +03:00
        case IORING_REGISTER_PROBE:
                ret = -EINVAL;
                if (!arg || nr_args > 256)
                        break;
                ret = io_probe(ctx, arg, nr_args);
                break;
2020-01-28 20:04:42 +03:00
        case IORING_REGISTER_PERSONALITY:
                ret = -EINVAL;
                if (arg || nr_args)
                        break;
                ret = io_register_personality(ctx);
                break;
        case IORING_UNREGISTER_PERSONALITY:
                ret = -EINVAL;
                if (arg)
                        break;
                ret = io_unregister_personality(ctx, nr_args);
                break;
2019-01-09 19:16:05 +03:00
        default:
                ret = -EINVAL;
                break;
        }
2020-01-28 20:04:42 +03:00
        if (io_register_op_must_quiesce(opcode)) {
2019-12-09 21:22:50 +03:00
                /* bring the ctx back to life */
                percpu_ref_reinit(&ctx->refs);
2020-01-08 18:26:07 +03:00
out:
                reinit_completion(&ctx->completions[0]);
2019-12-09 21:22:50 +03:00
        }
2019-01-09 19:16:05 +03:00
        return ret;
}

SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
                void __user *, arg, unsigned int, nr_args)
{
        struct io_ring_ctx *ctx;
        long ret = -EBADF;
        struct fd f;

        f = fdget(fd);
        if (!f.file)
                return -EBADF;

        ret = -EOPNOTSUPP;
        if (f.file->f_op != &io_uring_fops)
                goto out_fput;

        ctx = f.file->private_data;

        mutex_lock(&ctx->uring_lock);
        ret = __io_uring_register(ctx, opcode, arg, nr_args);
        mutex_unlock(&ctx->uring_lock);
io_uring: add set of tracing events
To trace io_uring activity one can already gather information from the
workqueue and io trace events, but some parts of the flow are hard to
identify that way. Making what happens inside io_uring more transparent
is important for reasoning about many aspects of it, hence introduce a
set of tracing events.
All such events can be roughly divided into two categories:
* those that help to understand correctness (from both the kernel and
an application point of view). E.g. ring creation, file registration,
or waiting for available CQEs. The proposed approach is to record a
pointer to the original structure of interest (ring context, or
request), and then find the relevant events. io_uring_queue_async_work
also exposes a pointer to the work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance related information. Mostly these are
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 20:02:01 +03:00
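As a small usage sketch, one way to enable the io_uring_register tracepoint
exercised just below: assuming tracefs is mounted at /sys/kernel/tracing and
the process may write there; the helper is illustrative and error handling is
kept minimal.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0)
                return -1;
        if (write(fd, val, strlen(val)) < 0) {
                close(fd);
                return -1;
        }
        return close(fd);
}

int main(void)
{
        /* enable just this event; events/io_uring/enable would grab them all */
        if (write_str("/sys/kernel/tracing/events/io_uring/io_uring_register/enable",
                      "1"))
                perror("enable io_uring_register event");
        /* emitted events can then be read from /sys/kernel/tracing/trace_pipe */
        return 0;
}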
        trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs,
                                ctx->cq_ev_fd != NULL, ret);
2019-01-09 19:16:05 +03:00
out_fput:
        fdput(f);
        return ret;
}
2019-01-07 20:46:33 +03:00
static int __init io_uring_init(void)
{
2019-12-18 19:50:26 +03:00
        BUILD_BUG_ON(ARRAY_SIZE(io_op_defs) != IORING_OP_LAST);
2019-01-07 20:46:33 +03:00
        req_cachep = KMEM_CACHE(io_kiocb, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
        return 0;
};
__initcall(io_uring_init);