mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00

16836 Commits

Author SHA1 Message Date
David Teigland
23774f997e devices: detect md ddf and imsm superblocks 2020-07-09 10:48:21 -05:00
Heinz Mauelshagen
286a793c12 lvconvert: fix conversion to 'mirrored' mirror log with larger regionsize
merge.c:_check_lv_segment() was checking regionsize vs. mirrored LV size on
any 'mirror/raid1/raid10' segment type including type 'mirrored' mirror logs.

Skip the check only for 'mirrored' mirror logs to allow conversion from log
type 'disk' with regionsize > mirror log SubLV size.

Since support for 'mirrored' mirror logs was disabled with
commit e82303fd6a, which still conditionally
allows enabling it via global/support_mirrored_mirror_logs=1,
this patch is mandatory for all distributions.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1712983
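
A minimal usage sketch (illustration only, not from this commit; vg/mlv and the
inline --config override are assumptions):

$ lvconvert --config 'global/support_mirrored_mirror_logs=1'
    --mirrorlog mirrored vg/mlv    # vg/mlv is a hypothetical mirror LV with a 'disk' log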
2020-07-09 14:39:50 +02:00
Zdenek Kabelac
d0faad0db3 debug: missing stacktrace 2020-07-08 11:40:55 +02:00
Zdenek Kabelac
9b9bf8786f raid: no wiping when zeroing raid metadata device
Currently lvm2 does not wipe signatures when creating 'metadata' volumes,
and raid _rmeta was the only exception - so make the behavior consistent
with other metadata devices and drop the wiping for now.
Also drop some extra debug messages, since the messages in the
wipe_lv() function are now more explanatory.
Also note - although lvm2 now does not wipe signatures - the error
from such wiping used to be effectively 'ignored' before wipe_lv()
started to return an error (with a recent commit), and raid creation
continued with an 'unzeroed' metadata device.

TODO: Several issues to resolve:

1. We may want to switch to wiping for all LVs (in that case we need to
support passing --yes & --force).

2. We may also want to clear the whole metadata device - however the current
function is also used for wiping e.g. a snapshot COW device, which
is likely not a good candidate for full device zeroing.
We may also need better logic for the case where the extent size
enforces very large LVs, when only a small portion of the LV is ever
being used.

3. Using TRIM instead of zeroing the metadata device might be worth
implementing.

2020-07-08 11:40:55 +02:00
Zdenek Kabelac
b7f3667ce2 lvconvert: more support for yes conversion
When converting a volume to a pool LV, also wipe other signatures.
For writecache & pool conversion, support --yes and --force
to bypass prompting for signature wiping.
For writecache, drop the unneeded zero_sectors.

Note: currently lvconvert performs the conversion and prompts
for confirmation of the conversion - and then wipe_lv() prompts again
for removing e.g. a filesystem signature - we should unify this
prompting into 1 message - although the 'filesystem' discovery
needs an active volume, while the 1st conversion prompt can
work without the converted volume being active.
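
A minimal usage sketch (illustration only, not from this commit; vg/data0 is a
hypothetical LV name):

$ lvconvert --yes --type thin-pool vg/data0    # --yes also answers the signature-wiping prompt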
2020-07-08 11:37:33 +02:00
Zdenek Kabelac
fe78cd4082 wipe_lv: always zero at least 4K
When zero_sectors was passed a value like 1, we could zero only 1 sector.
Reinstate zeroing of at least a 4K block.
2020-07-08 11:12:54 +02:00
David Teigland
40266faaab writecache: skip fs block size check in test mode
if doing so requires activating the LV
2020-07-07 13:20:18 -05:00
David Teigland
ad773511c5 integrity: add initial size to metadata size
The metadata device size needs to include space for
the dm-integrity "initial_sectors" which hold journals.
2020-06-30 16:43:05 -05:00
Zdenek Kabelac
3f32f9811e tests: check pool metadata are zeroed 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
094d6f80dd tests: failure of zeroing fails command 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
88b92d4225 make: make generate
update
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
b7885dbb73 man: update cache page
A few more sentences around the migration threshold.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
cca2a652d1 cov: avoid double call of free_hints() on error path
Since free_hints() is called on the error return path from
_read_hint_file(), avoid calling it twice in the middle of
the error path.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
eb06832b37 cov: remove unused header 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
dccaab3d79 cov: use 64bit arithmetic
Although the values of VDO block_map_cache_size, index_memory_size and
slab_size should not overflow here, use proper 64-bit math.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid pollution of metadata with 'garbage' content, or eventually
a leak of stale data in case the user wants to upload metadata somewhere,
ensure the metadata device is fully zeroed upon allocation.

This behaviour may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0

TODO: add zeroing for extension of metadata volume.
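
A minimal usage sketch (illustration only, not from this commit; vg, pool0 and
the per-command --config override are assumptions):

$ lvcreate --type thin-pool -L 1G -n pool0
    --config 'allocation/zero_metadata=0' vg    # restore the old non-zeroing behaviour for this command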
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
edbc5a62b2 wipe_lv: make error a fatal event
A failure in wiping/zeroing stops the command.
If the user wants to avoid command abortion, they should use -Zn or -Wn
to skip wiping.

Note: there is no easy way to distinguish which kind of failure has
happened - so it is safer not to proceed any further.
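
A minimal usage sketch (illustration only, not from this commit; vg/lv0 is a
hypothetical LV name):

$ lvcreate -Zn -Wn -L 1G -n lv0 vg    # -Zn/-Wn skip zeroing and signature wiping, so no wipe failure can abort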
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
6eb9eba59b bcache: support longer writes
When a larger write request was initiated, bcache could run
out of free chunks - fix the loop that is supposed to wait
until the next free chunk becomes available.
2020-06-24 15:01:03 +02:00
Heinz Mauelshagen
04bba5ea42 lv{resize,extend,reduce}: also check for 2-legged raid4
Users can also convert a 2-legged raid1 to raid4, thus causing a 'Bus error'
on resize requests.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 14:02:31 +02:00
Heinz Mauelshagen
2cf0f90780 lv{resize,extend,reduce}: reject size change on 2-legged raid5*
Reject size change requests to avoid a 'Bus error' and
display a hint to convert to more stripes.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 13:52:56 +02:00
David Teigland
3bd9d81b29 man: lvmcache info about cachedevice usage 2020-06-22 11:24:02 -05:00
David Teigland
ae5634a8be tests: cachevol-cachedevice 2020-06-22 11:23:58 -05:00
David Teigland
2aed2a41f7 lvcreate: new cache or writecache lv with single command
To create a new cache or writecache LV with a single command:

lvcreate --type cache|writecache
    -n Name -L Size --cachedevice PVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, a new cachevol LV is created internally, using PVfast
  specified by the cachedevice option.
- Then, the cachevol is attached to the main LV, converting the
  main LV to type cache|writecache.

Include --cachesize Size to specify the size of cache|writecache
to create from the specified --cachedevice PVs, otherwise the
entire cachedevice PV is used.  The --cachedevice option can be
repeated to create the cache from multiple devices, or the
cachedevice option can contain a tag name specifying a set of PVs
to allocate the cache from.

To create a new cache or writecache LV with a single command
using an existing cachevol LV:

lvcreate --type cache|writecache
    -n Name -L Size --cachevol LVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, the cachevol LVfast is attached to the main LV, converting
  the main LV to type cache|writecache.

In cases where more advanced types (for the main LV or cachevol LV)
are needed, they should be created independently and then combined
with lvconvert.

Example
-------

user creates a new VG with one slow device and one fast device:

$ vgcreate vg /dev/slow1 /dev/fast1

user creates a new 8G main LV on /dev/slow1 that uses all of
/dev/fast1 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1
    -n main -L 8G vg /dev/slow1

Example
-------

user creates a new VG with two slow devs and two fast devs:

$ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

user creates a new 8G main LV on /dev/slow1 and /dev/slow2
that uses all of /dev/fast1 and /dev/fast2 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2
    -n main -L 8G vg /dev/slow1 /dev/slow2

Example
-------

A user has several slow devices and several fast devices in their VG,
the slow devs have tag @slow, the fast devs have tag @fast.

user creates a new 8G main LV on the slow devs with a
2G writecache on the fast devs:

$ lvcreate --type writecache -n main -L 8G
    --cachedevice @fast --cachesize 2G vg @slow
2020-06-16 13:46:51 -05:00
David Teigland
21b37964eb lvconvert: single step cachevol creation and attachment
To add a cache or writecache to a main LV with a single command:

lvconvert --type cache|writecache --cachedevice /dev/ssd vg/main

A cachevol LV will be allocated from the specified cache device,
then attached to the main LV.  Include --cachesize to specify the
size of cachevol to create, otherwise the entire cachedevice is
used.  The cachedevice option can be repeated to create a cachevol
from multiple devices.

Example
-------

A user has an existing main LV that they want to speed up
using a new ssd.

user adds the new ssd to the VG:

$ vgextend vg /dev/ssd

user attaches the new ssd to their main LV:

$ lvconvert --type writecache --cachedevice /dev/ssd vg/main

Example
-------

A user has two existing main LVs that they want to speed up
with a new ssd.

user adds the new 16G ssd to the VG:

$ vgextend vg /dev/ssd

user attaches some of the new ssd to the first main LV,
using half of the space:

$ lvconvert --type writecache --cachedevice /dev/ssd
    --cachesize 8G vg/main1

user attaches some of the new ssd to the second main LV,
using the other half of the space:

$ lvconvert --type writecache --cachedevice /dev/ssd
    --cachesize 8G vg/main2

Example
-------

A user has an existing main LV that they want to speed up using
two new ssds.

user adds the two new ssds to the VG:

$ vgextend vg /dev/ssd1
$ vgextend vg /dev/ssd2

user attaches both ssds to their main LV:

$ lvconvert --type writecache
    --cachedevice /dev/ssd1 --cachedevice /dev/ssd2 vg/main
2020-06-16 13:46:51 -05:00
David Teigland
950d2d59c1 integrity: wait for raid sync to complete 2020-06-16 12:29:41 -05:00
David Teigland
48872b0369 integrity: avoid increasing logical block size of active LV
When adding integrity to an active LV, avoid choosing an
integrity block size that would result in increasing the
logical block size of the LV.
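
A minimal usage sketch (illustration only, not from this commit; vg/r1 is a
hypothetical raid LV name):

$ lvconvert --raidintegrity y vg/r1    # add integrity to an existing (possibly active) raid LV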
2020-06-16 12:27:22 -05:00
David Teigland
a014c4f341 tests: integrity and block size 2020-06-15 16:04:40 -05:00
David Teigland
8e2938c963 improve get_fs_block_size string to number 2020-06-11 15:05:47 -05:00
David Teigland
9f38e95a2f tests: fix typo in writecache-blocksize 2020-06-11 13:15:38 -05:00
David Teigland
f32e85ae51 tests: expand integrity-blocksize 2020-06-11 12:46:47 -05:00
David Teigland
b528a9ce90 integrity: fix block size check when inactive
Checking fs block size requires the LV to be active.
2020-06-11 12:43:52 -05:00
David Teigland
9fbad5bb0f fix libblkid BLOCK_SIZE check 2020-06-11 12:43:07 -05:00
David Teigland
6ea3654868 tests: writecache tests
backport updates from later commits
2020-06-10 16:09:36 -05:00
David Teigland
ba27b9ee2a writecache: activate to check block size
backport fixes from later commit
2020-06-10 15:58:25 -05:00
David Teigland
38eaa1035b writecache: allow snapshot of LV with writecache 2020-06-10 12:18:00 -05:00
David Teigland
712c9efbf6 fix bad result from _cache_min_metadata_size
fixes regression from switching to use _cache_min_metadata_size
(commit c08704cee7) which returns
a bogus value when the cachevol size is 8MB.
2020-06-10 12:17:34 -05:00
David Teigland
48c1a295a2 tests: writecache-blocksize 2020-06-10 12:16:31 -05:00
David Teigland
a7b2fc8f57 writecache: add settings cleaner and max_age
available in dm-writecache 1.2
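
A minimal usage sketch (illustration only, not from this commit; vg/main is a
hypothetical LV and the max_age value is an arbitrary example):

$ lvchange --cachesettings 'cleaner=1' vg/main
$ lvchange --cachesettings 'max_age=60000' vg/main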
2020-06-10 12:15:50 -05:00
David Teigland
d15c466f95 writecache: attach while active using fs block size
Use libblkid to detect sector/block size of the fs on the LV.
Use this to choose a compatible writecache block size.
Enable attaching writecache to an active LV.
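
A minimal usage sketch (illustration only, not from this commit; 'fast' and
vg/main are hypothetical LV names):

$ lvconvert --type writecache --cachevol fast vg/main    # main LV may stay active during attach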
2020-06-10 12:15:34 -05:00
David Teigland
1ee42f1391 writecache: cachesettings in lvchange and lvs
lvchange --cachesettings
lvs -o+cache_settings
2020-06-10 12:14:00 -05:00
David Teigland
ce772bfab9 writecache: show error in lv_health_status and lv_attr
lv_attr is 'E' and lv_health_status is 'error'
when dm-writecache status reports error.
2020-06-10 12:13:48 -05:00
David Teigland
240062a183 writecache: remove from an active lv 2020-06-10 12:13:31 -05:00
Peter Rajnoha
8806f2d5ed blkdeactivate: add missing VDO_AVAILABLE check in deactivate_vdo 2020-06-08 15:41:35 +02:00
David Teigland
fa9eb76a5d improve info about vgck updatemetadata
Add man page info about this option, and add log messages
pointing to this option.
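
The basic invocation (illustration only, not from this commit; vg is a
hypothetical VG name):

$ vgck --updatemetadata vg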
2020-06-03 12:38:27 -05:00
Zhao Heming
b59127a838 Change dev->bcache_fd default value from 0 to -1
This fix avoids bcache_fd being mistakenly opened/closed later.

Signed-off-by: Zhao Heming <heming.zhao@suse.com>
2020-06-01 12:22:15 -05:00
David Teigland
d14a8040d4 Revert "pvck: dump headers_only to skip metadata text"
This reverts commit 5410dd5441.

Accidental push.
2020-05-29 13:26:43 -05:00
David Teigland
ae029fcced integrity: skip calling add when removing images
When lvconvert is used to remove raid images, we can
skip calling lv_add_integrity_to_raid(), which finds
nothing to do, but the blocksize validation would
be called unnecessarily and trigger spurious errors.
2020-05-29 13:18:24 -05:00
David Teigland
7b04ed07ba tests: integrity wait for sync
The test was using a raid+integrity LV without
first waiting for the integrity sync, which could
cause the test to fail (depending on init speed)
where it relies on integrity working in uninitialized
areas.

Also use cmp instead of diff.
2020-05-29 10:57:56 -05:00
David Teigland
5410dd5441 pvck: dump headers_only to skip metadata text
pvck --dump headers reads the metadata text area
to compute the text metadata checksum to compare
with the mda_header checksum.
The new headers_only will skip reading the metadata
text and will not validate the mda_header checksum.
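
A minimal usage sketch (illustration only, not from this commit; /dev/sdb is a
hypothetical PV path):

$ pvck --dump headers /dev/sdb         # reads metadata text and checks the mda_header checksum
$ pvck --dump headers_only /dev/sdb    # skips reading the metadata text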
2020-05-28 15:51:59 -05:00
Marian Csontos
be61bd6ff5 test: Warn and exit on problematic integrity device behavior
The first leg of an integrity-enabled raid device sometimes does not get
recalculated.
2020-05-28 17:04:35 +02:00