vfs: handle __wait_on_freeing_inode() and evict() race

Lockless hash lookup can find and lock the inode after it gets the
I_FREEING flag set, at which point it blocks waiting for teardown in
evict() to finish.
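
For orientation, the lookup side looks roughly like the sketch below, a
hedged paraphrase of the find_inode()-style helpers in fs/inode.c (the
names head, test, data and locked stand in for the real parameters and
details may differ), showing where __wait_on_freeing_inode() is reached:

	rcu_read_lock();
repeat:
	hlist_for_each_entry_rcu(inode, head, i_hash) {
		if (inode->i_sb != sb || !test(inode, data))
			continue;
		spin_lock(&inode->i_lock);
		if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
			/* Inode is being torn down: wait, then rescan. */
			__wait_on_freeing_inode(inode, locked);
			goto repeat;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		rcu_read_unlock();
		return inode;
	}
	rcu_read_unlock();
	return NULL;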

However, the flag is still set even after evict() wakes up all waiters.

This results in a race: if the racing thread takes the inode lock late
enough, it can do so after both the hash removal and the wakeup,
meaning there is nobody left to wake it up.
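
An illustrative interleaving (simplified, column labels are only for
this example):

  lookup (lockless)                    evict()
  -----------------                    -------
  hash walk finds inode,
  sees I_FREEING set
                                       remove_inode_hash(inode)
                                       spin_lock(&inode->i_lock)
                                       wake_up_bit(&inode->i_state, __I_NEW)
                                       spin_unlock(&inode->i_lock)
  spin_lock(&inode->i_lock)
  __wait_on_freeing_inode() blocks,
  but the only wakeup already happened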

This worked prior to RCU-based lookup because the entire ordeal was
synchronized with the inode hash lock.

Since unhashing requires the inode lock, once that lock is acquired we
can safely check whether the inode has already been unhashed.

Link: https://lore.kernel.org/v9fs/20240717102458.649b60be@kernel.org/
Reported-by: Dominique Martinet <asmadeus@codewreck.org>
Fixes: 7180f8d91fcb ("vfs: add rcu-based find_inode variants for iget ops")
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Link: https://lore.kernel.org/r/20240718151838.611807-1-mjguzik@gmail.com
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>

--- a/fs/inode.c
+++ b/fs/inode.c
@@ -676,6 +676,16 @@ static void evict(struct inode *inode)
 
 	remove_inode_hash(inode);
 
+	/*
+	 * Wake up waiters in __wait_on_freeing_inode().
+	 *
+	 * Lockless hash lookup may end up finding the inode before we removed
+	 * it above, but only lock it *after* we are done with the wakeup below.
+	 * In this case the potential waiter cannot safely block.
+	 *
+	 * The inode being unhashed after the call to remove_inode_hash() is
+	 * used as an indicator whether blocking on it is safe.
+	 */
 	spin_lock(&inode->i_lock);
 	wake_up_bit(&inode->i_state, __I_NEW);
 	BUG_ON(inode->i_state != (I_FREEING | I_CLEAR));
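
The comment above relies on unhashing itself being done under i_lock.
For reference, __remove_inode_hash() in fs/inode.c looks roughly like
the sketch below (quoted from memory, so treat it as a sketch rather
than the exact tree state):

void __remove_inode_hash(struct inode *inode)
{
	spin_lock(&inode_hash_lock);
	spin_lock(&inode->i_lock);
	/*
	 * Unhash while holding i_lock, so a waiter that holds i_lock can
	 * use inode_unhashed() as the "wakeup already issued" indicator.
	 */
	hlist_del_init_rcu(&inode->i_hash);
	spin_unlock(&inode->i_lock);
	spin_unlock(&inode_hash_lock);
}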
@@ -2291,6 +2301,16 @@ static void __wait_on_freeing_inode(struct inode *inode, bool locked)
 {
 	wait_queue_head_t *wq;
 	DEFINE_WAIT_BIT(wait, &inode->i_state, __I_NEW);
+
+	/*
+	 * Handle racing against evict(), see that routine for more details.
+	 */
+	if (unlikely(inode_unhashed(inode))) {
+		WARN_ON(locked);
+		spin_unlock(&inode->i_lock);
+		return;
+	}
+
 	wq = bit_waitqueue(&inode->i_state, __I_NEW);
 	prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
 	spin_unlock(&inode->i_lock);
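
For completeness, the remainder of __wait_on_freeing_inode() (a hedged
paraphrase of fs/inode.c around this change; the handling of
inode_hash_lock in particular may differ in detail) shows why blocking
after the wakeup would hang: once the task has gone through
prepare_to_wait() and schedule(), only the wake_up_bit() issued by
evict() gets it running again.

	rcu_read_unlock();
	if (locked)
		spin_unlock(&inode_hash_lock);
	/* Sleep until evict() issues wake_up_bit() on __I_NEW. */
	schedule();
	finish_wait(wq, &wait.wq_entry);
	if (locked)
		spin_lock(&inode_hash_lock);
	rcu_read_lock();

With the early return added above, a waiter that already missed the
wakeup (observable as the inode being unhashed) never reaches this
blocking sequence and instead lets its caller retry the hash lookup.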