f2fs: preserve direct write semantics when buffering is forced

In some cases, e.g. for zoned block devices, direct writes are
forced into buffered writes that will populate the page cache
and be written out just like buffered IO.

Direct reads, on the other hand, are supported for the zoned
block device case. As a result, applications built for direct
IO will fill up the page cache with data that will never be
read, which is a waste of resources.
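
To make the effect concrete, consider a minimal userspace sketch
(the file path, block size and values below are made up for
illustration): the application opens with O_DIRECT and issues
aligned writes, expecting the data to bypass the page cache, yet
on a zoned f2fs mount the data is staged through the cache anyway.

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          void *buf;
          /* Hypothetical file on a zoned f2fs mount. */
          int fd = open("/mnt/f2fs/log.bin",
                        O_WRONLY | O_CREAT | O_DIRECT, 0644);

          if (fd < 0)
                  return 1;
          /* O_DIRECT needs aligned buffers and sizes; 4096 is a
           * typical block size.
           */
          if (posix_memalign(&buf, 4096, 4096))
                  return 1;
          memset(buf, 0xab, 4096);
          /* Submitted as direct IO by the application, but serviced
           * as a buffered write by f2fs on zoned block devices.
           */
          if (pwrite(fd, buf, 4096, 0) != 4096)
                  return 1;
          close(fd);
          free(buf);
          return 0;
  }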

If we agree that this is a problem, how do we fix it?

A) Supporting proper direct writes for zoned block devices would
be best, but it is currently not supported (probably for
a good but non-obvious reason). Would it be feasible to
implement proper direct IO?

B) Avoid the cost of keeping unwanted data by syncing and throwing
out the cached pages for buffered O_DIRECT writes before completion.

This patch implements B) by reusing the code that flushes out
partial block writes on the "normal" direct write path.
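
In code terms, the reuse amounts to factoring the existing
write-back-and-invalidate sequence into a helper; condensed, the
pattern is (see the hunks below for the actual
f2fs_flush_buffered_write()):

  /* Write the buffered byte range back to the device... */
  ret = filemap_write_and_wait_range(mapping, start_pos, end_pos);
  if (ret < 0)
          return;
  /* ...then drop the now-clean pages so they do not linger in
   * the page cache.
   */
  invalidate_mapping_pages(mapping, start_pos >> PAGE_SHIFT,
                           end_pos >> PAGE_SHIFT);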

Note that this changes the performance characteristics of f2fs
quite a bit.

Direct IO performance for zoned block devices is lower for
small writes after this patch, but this is to be expected with
direct IO and is in line with how f2fs behaves on top of
conventional block devices.

Another open question is whether the flushing should be done for
all cases where buffered writes are forced.

Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
Reviewed-by: Yonggil Song <yonggil.song@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>

@@ -4541,6 +4541,19 @@ static const struct iomap_dio_ops f2fs_iomap_dio_write_ops = {
 	.end_io = f2fs_dio_write_end_io,
 };
 
+static void f2fs_flush_buffered_write(struct address_space *mapping,
+				      loff_t start_pos, loff_t end_pos)
+{
+	int ret;
+
+	ret = filemap_write_and_wait_range(mapping, start_pos, end_pos);
+	if (ret < 0)
+		return;
+	invalidate_mapping_pages(mapping,
+				 start_pos >> PAGE_SHIFT,
+				 end_pos >> PAGE_SHIFT);
+}
+
 static ssize_t f2fs_dio_write_iter(struct kiocb *iocb, struct iov_iter *from,
 				   bool *may_need_sync)
 {
@@ -4640,14 +4653,9 @@ static ssize_t f2fs_dio_write_iter(struct kiocb *iocb, struct iov_iter *from,
 
 			ret += ret2;
 
-			ret2 = filemap_write_and_wait_range(file->f_mapping,
-							    bufio_start_pos,
-							    bufio_end_pos);
-			if (ret2 < 0)
-				goto out;
-			invalidate_mapping_pages(file->f_mapping,
-						 bufio_start_pos >> PAGE_SHIFT,
-						 bufio_end_pos >> PAGE_SHIFT);
+			f2fs_flush_buffered_write(file->f_mapping,
+						  bufio_start_pos,
+						  bufio_end_pos);
 		}
 	} else {
 		/* iomap_dio_rw() already handled the generic_write_sync(). */
@@ -4730,8 +4738,18 @@ out_unlock:
 	inode_unlock(inode);
 out:
 	trace_f2fs_file_write_iter(inode, orig_pos, orig_count, ret);
+
 	if (ret > 0 && may_need_sync)
 		ret = generic_write_sync(iocb, ret);
+
+	/* If buffered IO was forced, flush and drop the data from
+	 * the page cache to preserve O_DIRECT semantics
+	 */
+	if (ret > 0 && !dio && (iocb->ki_flags & IOCB_DIRECT))
+		f2fs_flush_buffered_write(iocb->ki_filp->f_mapping,
+					  orig_pos,
+					  orig_pos + ret - 1);
+
 	return ret;
 }