commit c2a4f3183a
bcache: Fix a writeback performance regression
Background writeback works by scanning the btree for dirty data and
adding those keys into a fixed-size buffer, then writing each dirty key
in the keybuf to the backing device.

When read_dirty() finishes and it's time to scan for more dirty data, we
need to wait for the outstanding writeback IO to finish - those writes
still take up slots in the keybuf (so that foreground writes can check
for them to avoid races). Without that wait, we'll continually rescan
when we can add at most a key or two to the keybuf, and that rescanning
takes btree locks that starve foreground IO.  Doh.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-09-24 14:41:43 -07:00