bcachefs grabs s_umount and sets SB_RDONLY when the fs is shut down
via the ioctl() interface. This has a couple of issues related to
interactions between shutdown and freeze:
1. The flags == FSOP_GOING_FLAGS_DEFAULT case is a deadlock vector
because freeze_bdev() calls into freeze_super(), which also
acquires s_umount.
2. If an explicit shutdown occurs while the sb is frozen, SB_RDONLY
alters the thaw path as if the sb was read-only at freeze time.
This effectively leaks the frozen state and leaves the sb frozen
indefinitely.
The usage of SB_RDONLY here goes back to the initial bcachefs commit
and AFAICT is simply historical behavior. This behavior is unique to
bcachefs relative to the handful of other filesystems that support
the shutdown ioctl(). Typically, SB_RDONLY is reserved for the
proper remount path, which itself is restricted from modifying
frozen superblocks in reconfigure_super(). Drop the unnecessary sb
lock and flags update in bch2_ioc_goingdown() to address both of these
issues.
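For reference, a rough sketch of the shape the handler takes with the
s_umount/SB_RDONLY dance dropped - simplified, and the helper and field
names here follow bcachefs conventions but may not match the tree
exactly:

static long bch2_ioc_goingdown(struct bch_fs *c, u32 __user *argp)
{
	u32 flags;
	int ret = 0;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (get_user(flags, argp))
		return -EFAULT;

	switch (flags) {
	case FSOP_GOING_FLAGS_DEFAULT:
		/*
		 * No down_write(&sb->s_umount) here: freeze_bdev() goes
		 * through freeze_super(), which takes s_umount itself -
		 * holding it across this call is what made the old code
		 * a deadlock vector.
		 */
		ret = freeze_bdev(c->vfs_sb->s_bdev);
		if (ret)
			break;
		bch2_journal_flush(&c->journal);
		bch2_fs_emergency_read_only(c);
		thaw_bdev(c->vfs_sb->s_bdev);
		break;
	case FSOP_GOING_FLAGS_LOGFLUSH:
		bch2_journal_flush(&c->journal);
		fallthrough;
	case FSOP_GOING_FLAGS_NOLOGFLUSH:
		/* No SB_RDONLY flip, so a frozen sb thaws normally later */
		bch2_fs_emergency_read_only(c);
		break;
	default:
		ret = -EINVAL;
		break;
	}

	return ret;
}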
Signed-off-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
backpointers fsck now always runs in rw mode - the btree is being
modified while it runs, by e.g. copygc, rebalance, the discard worker,
the invalidate worker.
We could find a missing backpointer, flush the btree write buffer, and
then on the next iteration find a new key at the exact same position -
which will most likely need another write buffer flush.
Hence, we have to check for an exact match on last_flushed, not just the
pos.
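Purely as an illustration of the check (invented types, not the
bcachefs ones): comparing only the position of last_flushed is not
enough, the whole key has to match.

#include <linux/string.h>
#include <linux/types.h>

struct key {
	u64	pos;
	u64	seq;	/* stands in for everything beyond the position */
};

/*
 * Comparing only k->pos == last_flushed->pos would wrongly skip the
 * flush when a new key was written at the same position since the
 * last flush; compare the whole key instead.
 */
static bool needs_flush(const struct key *k, const struct key *last_flushed)
{
	return memcmp(k, last_flushed, sizeof(*k)) != 0;
}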
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Fake flexible arrays (zero-length and one-element arrays) are
deprecated, and should be replaced by flexible-array members.
So, replace zero-length arrays with flexible-array members
in multiple structures.
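The pattern is (struct and field names made up for illustration;
allocation then goes through the struct_size() helper):

/* Before: a "fake" flexible array via a zero-length member */
struct example_table {
	unsigned		nr;
	struct example_entry	entries[0];	/* deprecated */
};

/* After: a C99 flexible-array member */
struct example_table {
	unsigned		nr;
	struct example_entry	entries[];
};

/* Allocation is unchanged apart from using struct_size(): */
t = kzalloc(struct_size(t, entries, nr), GFP_KERNEL);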
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
In the CI, we're seeing tests failing due to excessive would_deadlock
transaction restarts - the tracepoint now includes the lock cycle that
occurred.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Minor refactoring - improve naming, and move the responsibility for
flush_lock to the caller instead of having it be shared.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
__bch2_btree_write_buffer_flush() now assumes a write ref is already
held (as called by the transaction commit path); and the wrappers
bch2_write_buffer_flush() and flush_sync() take an explicit write ref.
This means that internally the write buffer code can always use
BTREE_INSERT_NOCHECK_RW, instead of passing flags around and hoping the
NOCHECK_RW flag was always carried through correctly, as the previous
code did.
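A sketch of the wrapper pattern being described - the names approximate
the bcachefs ones but are not copied from the tree:

int bch2_btree_write_buffer_flush(struct btree_trans *trans)
{
	struct bch_fs *c = trans->c;
	int ret;

	/* The wrapper takes the write ref explicitly... */
	if (!bch2_write_ref_tryget(c, BCH_WRITE_REF_btree_write_buffer))
		return -BCH_ERR_erofs_no_writes;

	/* ...so the internal flush can use NOCHECK_RW unconditionally */
	ret = __bch2_btree_write_buffer_flush(trans);

	bch2_write_ref_put(c, BCH_WRITE_REF_btree_write_buffer);
	return ret;
}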
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
- add a tracepoint for write_buffer_flush_sync; this is expensive
- fix the write_buffer_flush_slowpath tracepoint
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Now we can print out filesystem flags in sysfs, useful for debugging
various "what's my filesystem doing" issues.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This code was somewhat convoluted because originally bch2_lru_set()
could modify the LRU index if there was a collision.
That's no longer the case, so the "create LRU entry" path has no reason
to update the alloc key, and we can separate the handling of the two
fsck errors.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Add a tracepoint for rebalance, printing out
- the target option
- the compression option
- the key being rebalanced
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Break it out by compression type, and include average extent size.
Also, format into a nice table.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This counter is redundant; it's simply the sum of BCH_DATA_stripe and
BCH_DATA_parity buckets.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
This introduces bch2_bucket_sectors() and bch2_bucket_sectors_dirty(),
prep work for separately accounting stripe sectors.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
BCH_IOCTL_DEV_USAGE mistakenly put the per-data-type array in struct
bch_ioctl_dev_usage; since ioctl numbers encode the size of the arg,
that means adding new data types breaks the ioctl.
This adds a new version that includes the number of data types as a
parameter: the old version is fixed at 10 so as to not break when adding
new types.
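The size encoding issue can be shown with made-up structs (this is not
the real bch_ioctl_dev_usage layout): _IOWR() bakes sizeof(arg) into
the ioctl number, so growing a fixed-size array in the arg silently
changes the number.

#include <linux/ioctl.h>
#include <linux/types.h>

struct dev_usage_v1 {
	__u64	buckets[10];		/* frozen at 10 data types forever */
};

struct dev_usage_v2 {
	__u32	nr_data_types;		/* caller says how many entries follow */
	__u32	pad;
	__u64	buckets[];		/* growing this doesn't change sizeof() */
};

/* sizeof(struct ...) is encoded into these numbers by _IOWR(): */
#define EXAMPLE_DEV_USAGE	_IOWR('x', 1, struct dev_usage_v1)
#define EXAMPLE_DEV_USAGE_V2	_IOWR('x', 2, struct dev_usage_v2)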
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
osq lock maintainers don't want it to be used outside of kernel/locking/
- but we can do better.
Since we have lock handoff signalled via waitlist entries, there's no
reason for optimistic spinning to have to look at the lock at all -
aside from checking lock-owner; we can just spin looking at our waitlist
entry.
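A sketch of the resulting spin loop (simplified; the field names may
not match six.c exactly):

static bool spin_on_waitlist_entry(struct six_lock_waiter *w,
				   struct task_struct *owner)
{
	while (!smp_load_acquire(&w->lock_acquired)) {
		/*
		 * Only the owner is consulted to decide whether spinning
		 * is still worthwhile - never the lock word itself:
		 */
		if (!owner_on_cpu(owner))
			return false;	/* owner scheduled out: give up and block */
		cpu_relax();
	}
	return true;	/* handoff: the lock was granted via our wait entry */
}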
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bcachefs's six locks need kvm_guest, via
owner_on_cpu() -> vcpu_is_preempted() -> is_kvm_guest()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
If we're looking for bcachefs superblocks iteratively, we don't want to
see this error.
This function replaces KERN_ERR with KERN_INFO when we don't find a
bcachefs superblock, but preserves other errors.
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
bch2_update_cached_sectors_list() is closer to how the new disk space
accounting works, and is called from trans_mark().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
For BTREE_ITER_WITH_JOURNAL, we memoize lookups in the journal keys, to
avoid the binary search overhead.
Previously we stashed the pos of the last key returned from the journal,
in order to force the lookup to be redone when rewinding.
Now bch2_journal_keys_peek_upto() handles rewinding itself when
necessary - so we can slim down btree_iter.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
The flush_all_pins() after journal replay was unnecessary, and trying to
completely flush the journal while RW is not a great idea - it's not
guaranteed to terminate if other threads keep adding things to the
journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>