btrfs: zoned: fix negative space_info->bytes_readonly

Consider we have a block group that is in use on zoned btrfs.

|<- ZU ->|<- used ->|<---free--->|
                     `- Alloc offset
ZU: Zone unusable
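
To make the layout concrete, here is a minimal standalone sketch (the
numbers are hypothetical; the two relations mirror the ones visible in
the hunk at the end of this commit):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* Hypothetical zoned block group mirroring the diagram above. */
          uint64_t length       = 256ULL << 20; /* total capacity         */
          uint64_t used         =  96ULL << 20; /* live data              */
          uint64_t alloc_offset = 160ULL << 20; /* the zone write pointer */

          /*
           * On a zoned device, freed space behind the write pointer
           * cannot be rewritten in place, hence "zone unusable".
           */
          uint64_t zone_unusable = alloc_offset - used;   /* ZU   = 64 MiB */
          uint64_t free_bytes    = length - alloc_offset; /* free = 96 MiB */

          printf("ZU=%llu used=%llu free=%llu\n",
                 (unsigned long long)zone_unusable,
                 (unsigned long long)used,
                 (unsigned long long)free_bytes);
          return 0;
  }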

Marking the block group read-only will migrate the zone unusable bytes
to the read-only bytes, so we will have this:

|<- RO ->|<- used ->|<--- RO --->|

RO: Read only

When marking it back to read-write, btrfs_dec_block_group_ro()
subtracts the above "RO" bytes from space_info->bytes_readonly. Since
cache->zone_unusable is still zero at that point, the subtracted
amount covers both "RO" regions. Then, it migrates the zone unusable
bytes back and subtracts those bytes from space_info->bytes_readonly
once more, leading to a negative bytes_readonly.
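
Reusing the numbers from the sketch above, a minimal user-space model
of the buggy ordering shows the double subtraction (illustrative only,
not the kernel implementation; a signed type is used so the underflow
is visible):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* Same hypothetical block group as above. */
          uint64_t length = 256ULL << 20, used = 96ULL << 20;
          uint64_t alloc_offset = 160ULL << 20;
          uint64_t reserved = 0, pinned = 0, bytes_super = 0;
          uint64_t zone_unusable = 0; /* zeroed when it went read-only */

          /* After marking read-only: free + ZU were both migrated here. */
          int64_t bytes_readonly = length - used; /* 160 MiB of "RO" */

          /*
           * Buggy order: num_bytes still includes the ZU region,
           * because zone_unusable is 0 at this point...
           */
          uint64_t num_bytes = length - reserved - pinned - bytes_super -
                               zone_unusable - used;
          bytes_readonly -= (int64_t)num_bytes; /* subtracts free + ZU */

          /* ...and then the zoned branch subtracts the ZU bytes again. */
          zone_unusable = alloc_offset - used;
          bytes_readonly -= (int64_t)zone_unusable; /* double subtraction */

          printf("bytes_readonly = %lld\n", (long long)bytes_readonly);
          /* prints -67108864, i.e. -64 MiB */
          return 0;
  }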

This can be observed in the output, e.g.:

  Data, single: total=512.00MiB, used=165.21MiB, zone_unusable=16.00EiB
  Data, single: total=536870912, used=173256704, zone_unusable=18446744073603186688
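
The enormous zone_unusable values are a negative counter printed as an
unsigned 64-bit number: 2^64 bytes is 16 EiB, so a small negative u64
pretty-prints as roughly 16.00EiB. In the raw output above,
18446744073603186688 == 2^64 - 106364928, i.e. -106364928 bytes (about
-101 MiB). A standalone one-liner to check (not kernel code):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t v = (uint64_t)-106364928LL; /* small negative, as u64 */
          printf("%llu\n", (unsigned long long)v);
          /* prints 18446744073603186688 */
          return 0;
  }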

This commit fixes the issue by reordering the operations: the zone
unusable bytes are migrated back first, so cache->zone_unusable is
already restored when num_bytes is computed, and the migrated bytes
are subtracted from space_info->bytes_readonly only once.

Link: https://github.com/naota/linux/issues/37
Reported-by: David Sterba <dsterba@suse.com>
Fixes: 169e0da91a ("btrfs: zoned: track unusable bytes for zones")
CC: stable@vger.kernel.org # 5.12+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>

@@ -2442,16 +2442,16 @@ void btrfs_dec_block_group_ro(struct btrfs_block_group *cache)
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
 	if (!--cache->ro) {
-		num_bytes = cache->length - cache->reserved -
-			    cache->pinned - cache->bytes_super -
-			    cache->zone_unusable - cache->used;
-		sinfo->bytes_readonly -= num_bytes;
 		if (btrfs_is_zoned(cache->fs_info)) {
 			/* Migrate zone_unusable bytes back */
 			cache->zone_unusable = cache->alloc_offset - cache->used;
 			sinfo->bytes_zone_unusable += cache->zone_unusable;
 			sinfo->bytes_readonly -= cache->zone_unusable;
 		}
+		num_bytes = cache->length - cache->reserved -
+			    cache->pinned - cache->bytes_super -
+			    cache->zone_unusable - cache->used;
+		sinfo->bytes_readonly -= num_bytes;
 		list_del_init(&cache->ro_list);
 	}
 	spin_unlock(&cache->lock);