commit 5794351146
The most significant change is that btree reads are now done synchronously, instead of asynchronously with the post-read work done from a workqueue.

Reads were done asynchronously originally because we can't block on IO under generic_make_request(). But we already have a mechanism to punt cache lookups to a workqueue when needed, so if we just use that we don't have to deal with the complexity of doing things asynchronously.

The main benefit is that this makes the locking situation saner: we can hold our write lock on the btree node until we're finished reading it, and we don't need that btree_node_read_done() flag anymore.

Also, for writes, btree_write() was broken out into btree_node_write() and btree_leaf_dirty() - the old code with the boolean argument was dumb and confusing.

The prio_blocked mechanism was improved a bit too: now the only counter is in struct btree_write, and we don't mess with transferring a count from struct btree anymore. This required changing garbage collection to block prios at the start and unblock when it finishes, which is cleaner than what it was doing anyway (the old code had mostly the same effect, but did it in a convoluted way).

And the btree iter that btree_node_read_done() uses was converted to a real mempool.

Signed-off-by: Kent Overstreet <koverstreet@google.com>
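A minimal sketch of what the synchronous read path looks like, assuming bcache's closure API; the helper names here (bbio_alloc(), submit_bbio(), the endio callback) are illustrative rather than the exact ones in the patch:

```c
static void btree_node_read_endio(struct bio *bio, int error)
{
	/* just wake up whoever is blocked in closure_sync() */
	closure_put(bio->bi_private);
}

static void btree_node_read(struct btree *b)
{
	struct closure cl;
	struct bio *bio;

	closure_init_stack(&cl);

	bio = bbio_alloc(b->c);
	bio->bi_end_io	= btree_node_read_endio;
	bio->bi_private	= &cl;

	submit_bbio(bio, b->c, &b->key, 0);
	closure_sync(&cl);	/* block until the read completes */

	/* still holding the write lock we took at allocation time */
	btree_node_read_done(b);
}
```

Since this path can now sleep, it must not be entered from under generic_make_request(); callers that might be in that context go through the existing workqueue punt first.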
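Roughly what the interface change for writes looks like; the exact signatures here are assumptions, the point is that the boolean is gone:

```c
/* before: one entry point, behavior keyed off a flag */
void btree_write(struct btree *b, bool now, struct btree_op *op);

btree_write(b, true, op);	/* write the node out now */
btree_write(b, false, op);	/* just mark the leaf dirty */

/* after: the intent is explicit at every call site */
void btree_node_write(struct btree *b, struct closure *parent);
void btree_leaf_dirty(struct btree *b, struct btree_op *op);
```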
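A sketch of how the prio_blocked change fits together; the surrounding field names and the wait queue are assumptions:

```c
struct btree_write {
	/* ... journal pin, owner closure ... */
	int	prio_blocked;	/* the only per-write count now */
};

/* gc blocks prio writes once for its whole run, instead of
 * shuffling counts from struct btree into struct btree_write:
 */
static void btree_gc(struct cache_set *c)
{
	atomic_inc(&c->prio_blocked);	/* block at the start */

	/* ... walk the btree, coalescing and rewriting nodes ... */

	atomic_dec(&c->prio_blocked);	/* unblock when finished */
	wake_up(&c->alloc_wait);
}
```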
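And the mempool conversion uses the stock kernel mempool API; fill_iter and iter_size as named here are assumptions:

```c
/* at cache set creation - a pool of one element is enough to
 * guarantee the read path can always make forward progress,
 * which a bare kmalloc() couldn't:
 */
c->fill_iter = mempool_create_kmalloc_pool(1, iter_size);
if (!c->fill_iter)
	return -ENOMEM;

/* in btree_node_read_done(); with a waiting gfp mask this
 * never fails, it just waits for a pool element if needed:
 */
struct btree_iter *iter = mempool_alloc(c->fill_iter, GFP_NOIO);

/* ... sort the just-read bsets through iter ... */

mempool_free(iter, c->fill_iter);
```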
---
 alloc.c
 bcache.h
 bset.c
 bset.h
 btree.c
 btree.h
 closure.c
 closure.h
 debug.c
 debug.h
 io.c
 journal.c
 journal.h
 Kconfig
 Makefile
 movinggc.c
 request.c
 request.h
 stats.c
 stats.h
 super.c
 sysfs.c
 sysfs.h
 trace.c
 util.c
 util.h
 writeback.c

 27 files changed