Mirror of git://sourceware.org/git/lvm2.git
Commit Graph

16860 Commits

Author SHA1 Message Date
Zdenek Kabelac
fb7a3fe8d6 container_of: drop needless const conversion 2020-08-28 21:43:02 +02:00
Zdenek Kabelac
ca54afd701 tests: check we detect lvm.conf read failure
No coredumps with unreadable lvm.conf.
2020-08-28 21:43:02 +02:00
Zdenek Kabelac
e3e04b99f2 config: drop reading file with mmap
While 'mmap' file reading normally makes better use of resources,
it also has an odd side when it comes to handling errors - so while
we normally use mmap only for reading regular files from the root
filesystem (i.e. lvm.conf), we can't prevent errors from happening
during the read of these files - and such an error unfortunately
ends with SIGBUS. Maintaining a signal handler would be complicated -
so switch to the slightly less efficient but more error-resistant
read() functionality.
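
As an illustration only (not the lvm2 code), a plain read() loop
along these lines turns an I/O error into an ordinary return value
instead of a SIGBUS:

#include <errno.h>
#include <unistd.h>

/* Read up to 'len' bytes from a regular file, retrying on EINTR.
 * A media error surfaces as read() == -1, not as a signal. */
static ssize_t read_all(int fd, char *buf, size_t len)
{
        size_t total = 0;

        while (total < len) {
                ssize_t n = read(fd, buf + total, len - total);

                if (n < 0) {
                        if (errno == EINTR)
                                continue;
                        return -1;      /* plain error, no SIGBUS */
                }
                if (n == 0)
                        break;          /* EOF */
                total += n;
        }

        return (ssize_t) total;
}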
2020-08-28 21:43:02 +02:00
David Teigland
9a88a9c4ce Revert "lvdisplay: display correct status when underlying devs missing"
This reverts commit 1d0dc74f91.

We should avoid adding anything new to lvdisplay and report
new information via lvs reporting fields.
2020-08-28 13:28:15 -05:00
Zhao Heming
1d0dc74f91 lvdisplay: display correct status when underlying devs missing
reproducible steps:
1. vgcreate vg1 /dev/sda /dev/sdb
2. lvcreate --type raid0 -l 100%FREE -n raid0lv vg1
3. remove the /dev/sdb device
4. lvdisplay shows the wrong 'LV Status'

After removing an underlying dev of a raid0 type LV, lvdisplay still
displays 'available'. This is the wrong status for raid0.

This patch adds a new function, raid_is_available(), which handles
all raid cases.

With this patch, lvdisplay will show
from:
  LV Status              available
to:
  LV Status              NOT available (partial)
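
A hypothetical sketch of the idea only (the commit was later
reverted, and the structure here is simplified, not lvm2's):

/* An LV is reported unavailable when it is partial and its raid
 * level has no redundancy to survive a missing image. */
struct lv_state {
        int is_partial;         /* any underlying device missing? */
        int has_redundancy;     /* raid1/4/5/6/10: can lose a leg */
};

static int raid_is_available(const struct lv_state *lv)
{
        if (!lv->is_partial)
                return 1;               /* all devices present */

        return lv->has_redundancy;      /* raid0 -> 0: unavailable */
}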

Reviewed-by: Enzo Matsumiya <ematsumiya@suse.com>
Signed-off-by: Zhao Heming <heming.zhao@suse.com>
2020-08-24 09:47:04 -05:00
Zdenek Kabelac
46d15b5e4d wipe_lv: close devices on error path
The device was kept open, preventing it from being deactivated and
removed on the error path.
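
A minimal sketch of the fix pattern (do_wipe() is a placeholder,
not an lvm2 function):

#include <fcntl.h>
#include <unistd.h>

static int do_wipe(int fd);     /* placeholder for the zeroing work */

static int wipe_device(const char *path)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0)
                return 0;

        if (!do_wipe(fd)) {
                (void) close(fd);       /* previously leaked here */
                return 0;
        }

        return close(fd) == 0;
}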
2020-08-19 15:09:09 +02:00
Zdenek Kabelac
3e9664baca man: vdo improvements 2020-08-19 15:09:09 +02:00
Add some more notes about discard.
Correct enumeration.
2020-08-19 15:09:09 +02:00
Zdenek Kabelac
7b41ea61b2 config: move some config settings into the commented part
It's better to keep most options 'commented' with some documented
defaults instead of providing strict values.

This has the advantage that we can eventually change the defaults
and have them take effect in the future. Otherwise, once a setting
is stored in lvm.conf in /etc, it has a strictly defined value that
can only be changed by updating the file.
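
For illustration, with allocation/thin_pool_zero taken just as an
example option, the difference in lvm.conf looks like:

allocation {
        # Commented: documents the default, which a future release
        # is free to change.
        # thin_pool_zero = 1

        # Uncommented: pins the value until this file is edited.
        thin_pool_zero = 0
}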
2020-08-19 15:07:09 +02:00
Marian Csontos
135d16fbb8 Update README 2020-08-12 12:05:36 +02:00
Marian Csontos
231cdd0efb post-release 2020-08-09 16:17:15 +02:00
Marian Csontos
4d9f0606be pre-release 2020-08-09 16:17:15 +02:00
Marian Csontos
c1d136fea3 WHATS_NEW 2020-08-09 16:17:15 +02:00
Marian Csontos
9f8c331760 build: make generate 2020-08-09 15:20:22 +02:00
Tony Asleson
4f44841045 WHATS_NEW: Add writecache lvmdbusd 2020-08-06 15:42:49 -05:00
Vojtech Trefny
d4d060acd5 lvmdbusd: Bump LVM DBus API version
So users can check for writecache support.
2020-08-06 13:54:45 -05:00
Vojtech Trefny
8f1068c02d lvmdbusd: Add support for LVM writecache 2020-08-06 13:54:34 -05:00
Marian Csontos
e12bdd591a tests: Adapt RAID test to changes
Change 3c9177fdc0 causes a conversion of a raid1 volume to a raid1
with the same number of legs to succeed with a warning.
2020-07-28 17:36:57 +02:00
David Teigland
7a507583d9 cachevol: add LV type restrictions to command defs
LV type restrictions were missed on the command definitions.
2020-07-23 15:10:35 -05:00
David Teigland
085760992d cachevol: generate a unique name when creating
When a cachevol is automatically created, if the default name
conflicts with an existing name, generate a new unique name.
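
A hypothetical sketch of the idea (lv_name_taken() is a placeholder
for a VG name lookup, not an lvm2 function):

#include <stdio.h>

static int lv_name_taken(const char *vg_name, const char *lv_name);

static int make_unique_lv_name(const char *vg_name, const char *base,
                               char *buf, size_t len)
{
        unsigned i;

        for (i = 0; i < 1000; i++) {
                if (snprintf(buf, len, "%s_%u", base, i) >= (int) len)
                        return 0;       /* name would not fit */
                if (!lv_name_taken(vg_name, buf))
                        return 1;       /* found a free name */
        }

        return 0;
}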
2020-07-23 13:18:22 -05:00
Heinz Mauelshagen
3c9177fdc0 lvconvert: display warning if raid1 LV image count does not change
Fix "lvconvert -mN $RaidLV" to display a warning in
case the same number of images is being requested.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1774696
2020-07-20 15:42:15 +02:00
David Teigland
119d594788 integrity: allow type option to be set when changing mirrors
Allow the optional '--type raid1' to be included in the lvconvert
command when adding or removing raid images with integrity.
It does not change the meaning of the command (specifying a type
that matches the current type is redundant but generally allowed).
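
For example, on a raid1 LV with integrity these two commands are
accepted and equivalent (names are illustrative):

$ lvconvert -m+1 vg/main
$ lvconvert --type raid1 -m+1 vg/main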
2020-07-15 10:57:05 -05:00
David Teigland
4667c4b35b lvmdbusd: recognize lv attr letter g for integrity 2020-07-15 10:07:28 -05:00
Heinz Mauelshagen
8f421bdd7a lvconvert: preset raid1 in case of striped conversions
Fix "lvconvert -m+1 $StripedLV" causing errors by presetting the
raid1 conversion implied by '-m'.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1781406
2020-07-13 19:07:26 +02:00
David Teigland
00c9a788cc devices: simplify md superblock checking code 2020-07-09 10:48:34 -05:00
David Teigland
23774f997e devices: detect md ddf and imsm superblocks 2020-07-09 10:48:21 -05:00
Heinz Mauelshagen
286a793c12 lvconvert: fix conversion to 'mirrored' mirror log with larger regionsize
merge.c:_check_lv_segment() was checking regionsize vs. mirrored LV size on
any 'mirror/raid1/raid10' segment type including type 'mirrored' mirror logs.

Avoid the check only for 'mirrored' mirror logs to allow conversion from log
type 'disk' with regionsize > mirror log SubLV size.

As we disabled support for 'mirrored' mirror logs with
commit e82303fd6a, which still conditionally
allows enabling them via global/support_mirrored_mirror_logs=1,
this patch is mandatory for all distributions.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1712983
2020-07-09 14:39:50 +02:00
Zdenek Kabelac
d0faad0db3 debug: missing stacktrace 2020-07-08 11:40:55 +02:00
Zdenek Kabelac
9b9bf8786f raid: no wiping when zeroing raid metadata device
Currently lvm2 does not wipe signatures when creating 'metadata'
volumes, and raid _rmeta was the only exception - so make the
behavior consistent with the other metadata devices and drop the
wiping for now.
Also drop some extra debug messages, since wipe_lv() is now more
explanatory on its own.
Also note - although lvm2 now does not wipe signatures - the error
from such wiping used to be effectively 'ignored' before wipe_lv()
started to return an error (with a recent commit), and raid creation
continued with an 'unzeroed' metadata device.

TODO: Several issues to resolve:

1. We may want to flip to wiping for all LVs (in that case we need
to support passing --yes & --force).

2. We may also want to clear the whole metadata device - however the
current function is also used for wiping e.g. the snapshot COW
device, which is likely not a good candidate for full device
zeroing.
We may also need to think about better logic for when the extent
size enforces very large LVs, where only a small portion of the LV
is ever used.

3. Using TRIM instead of zeroing the metadata device might be worth
implementing.
2020-07-08 11:40:55 +02:00
Zdenek Kabelac
b7f3667ce2 lvconvert: more support for yes conversion
When converting a volume to a pool LV, also wipe other signatures.
For writecache & pool conversions, support --yes and --force
to bypass prompting for signature wiping.
For writecache, drop the unneeded zero_sectors.

Note: currently lvconvert prompts for confirmation of the conversion
- and then wipe_lv() prompts again for removing e.g. a filesystem
signature - we should unify this prompting into one message -
although the 'filesystem' discovery needs an active volume, while
the first conversion prompt can work without the converted volume
being active.
2020-07-08 11:37:33 +02:00
Zdenek Kabelac
fe78cd4082 wipe_lv: always zero at least 4K
When zero_sectors was passed a value like 1, we could zero only
1 sector. Reinstate always zeroing at least a 4K block.
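
The guard amounts to clamping the request, roughly (a sketch;
4K is 8 sectors of 512 bytes):

if (zero_sectors < 8)
        zero_sectors = 8;       /* zero at least 4K */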
2020-07-08 11:12:54 +02:00
David Teigland
40266faaab writecache: skip fs block size check in test mode
if doing so requires activating the LV
2020-07-07 13:20:18 -05:00
David Teigland
ad773511c5 integrity: add initial size to metadata size
The metadata device size needs to include space for
the dm-integrity "initial_sectors" which hold journals.
2020-06-30 16:43:05 -05:00
Zdenek Kabelac
3f32f9811e tests: check pool metadata are zeroed 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
094d6f80dd tests: failure of zeroing fails command 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
88b92d4225 make: make generate
update
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
b7885dbb73 man: update cache page
A few more sentences about the migration threshold.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
cca2a652d1 cov: avoid double call of free_hints() on error path
Since we call free_hints() on the error-return path from
_read_hint_file(), avoid calling it a second time in the middle of
the error-path processing.
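
The pattern, sketched (the variable names are placeholders): when
the callee already frees the hints on its failure path, the caller
must not free them again:

if (!_read_hint_file(cmd, &hints)) {
        /* free_hints(&hints);  <- removed: callee freed already */
        return 0;
}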
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
eb06832b37 cov: remove unused header 2020-06-24 15:01:03 +02:00
Zdenek Kabelac
dccaab3d79 cov: use 64bit arithmetic
Although the values of VDO block_map_cache_size, index_memory_size
and slab_size should not overflow here, use proper 64-bit math.
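
The pattern, sketched (the variable name is a placeholder): cast one
operand before multiplying so the product is computed in 64-bit and
cannot wrap in 32-bit arithmetic:

uint64_t bytes = (uint64_t) slab_size_mb * 1024 * 1024;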
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
bc39d5bec6 pool: zero metadata
To avoid polluting the metadata with 'garbage' content, or
eventually leaking stale data in case a user wants to upload the
metadata somewhere, ensure the metadata device is fully zeroed
upon allocation.

This may slow down allocation of a thin-pool or cache-pool a bit,
so the old behaviour can be restored with the lvm.conf setting:
allocation/zero_metadata=0
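
i.e. a minimal lvm.conf fragment:

allocation {
        # Restore the old (faster) behaviour: do not zero newly
        # allocated pool metadata devices.
        zero_metadata = 0
}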

TODO: add zeroing for extension of metadata volume.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
edbc5a62b2 wipe_lv: make error a fatal event
A failure in wiping/zeroing now stops the command.
If the user wants to avoid the command aborting, he should use -Zn
or -Wn to skip the wiping.

Note: there is no easy way to distinguish which kind of failure has
happened - so it's safest not to proceed any further.
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
6eb9eba59b bcache: support longer writes
When a larger write request was initiated, it could happen that
bcache ran out of free chunks - fix the loop that is supposed to
wait until the next free chunk becomes available again.
2020-06-24 15:01:03 +02:00
Heinz Mauelshagen
04bba5ea42 lv{resize,extend,reduce}: also check for 2-legged raid4
Users can also convert a 2-legged raid1 to raid4, thus causing a
'Bus error' on resize requests.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 14:02:31 +02:00
Heinz Mauelshagen
2cf0f90780 lv{resize,extend,reduce}: reject size change on 2-legged raid5*
Reject size change requests to avoid a 'Bus error' and display a
hint to convert to more stripes.
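
e.g. the hinted follow-up conversion (names are illustrative):

$ lvconvert --stripes 2 vg/raid5lv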

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1784351
2020-06-24 13:52:56 +02:00
David Teigland
3bd9d81b29 man: lvmcache info about cachedevice usage 2020-06-22 11:24:02 -05:00
David Teigland
ae5634a8be tests: cachevol-cachedevice 2020-06-22 11:23:58 -05:00
David Teigland
2aed2a41f7 lvcreate: new cache or writecache lv with single command
To create a new cache or writecache LV with a single command:

lvcreate --type cache|writecache
    -n Name -L Size --cachedevice PVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, a new cachevol LV is created internally, using PVfast
  specified by the cachedevice option.
- Then, the cachevol is attached to the main LV, converting the
  main LV to type cache|writecache.

Include --cachesize Size to specify the size of cache|writecache
to create from the specified --cachedevice PVs, otherwise the
entire cachedevice PV is used.  The --cachedevice option can be
repeated to create the cache from multiple devices, or the
cachedevice option can contain a tag name specifying a set of PVs
to allocate the cache from.

To create a new cache or writecache LV with a single command
using an existing cachevol LV:

lvcreate --type cache|writecache
    -n Name -L Size --cachevol LVfast VG [PVslow ...]

- A new main linear|striped LV is created as usual, using the
  specified -n Name and -L Size, and using the optionally
  specified PVslow devices.
- Then, the cachevol LVfast is attached to the main LV, converting
  the main LV to type cache|writecache.

In cases where more advanced types (for the main LV or cachevol LV)
are needed, they should be created independently and then combined
with lvconvert.

Example
-------

user creates a new VG with one slow device and one fast device:

$ vgcreate vg /dev/slow1 /dev/fast1

user creates a new 8G main LV on /dev/slow1 that uses all of
/dev/fast1 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1
    -n main -L 8G vg /dev/slow1

Example
-------

user creates a new VG with two slow devs and two fast devs:

$ vgcreate vg /dev/slow1 /dev/slow2 /dev/fast1 /dev/fast2

user creates a new 8G main LV on /dev/slow1 and /dev/slow2
that uses all of /dev/fast1 and /dev/fast2 as a writecache:

$ lvcreate --type writecache --cachedevice /dev/fast1 --cachedevice /dev/fast2
    -n main -L 8G vg /dev/slow1 /dev/slow2

Example
-------

A user has several slow devices and several fast devices in their
VG; the slow devs have tag @slow, the fast devs have tag @fast.

user creates a new 8G main LV on the slow devs with a
2G writecache on the fast devs:

$ lvcreate --type writecache -n main -L 8G
    --cachedevice @fast --cachesize 2G vg @slow
2020-06-16 13:46:51 -05:00
David Teigland
21b37964eb lvconvert: single step cachevol creation and attachment
To add a cache or writecache to a main LV with a single command:

lvconvert --type cache|writecache --cachedevice /dev/ssd vg/main

A cachevol LV will be allocated from the specified cache device,
then attached to the main LV.  Include --cachesize to specify the
size of cachevol to create, otherwise the entire cachedevice is
used.  The cachedevice option can be repeated to create a cachevol
from multiple devices.

Example
-------

A user has an existing main LV that they want to speed up
using a new ssd.

user adds the new ssd to the VG:

$ vgextend vg /dev/ssd

user attaches the new ssd to their main LV:

$ lvconvert --type writecache --cachedevice /dev/ssd vg/main

Example
-------

A user has two existing main LVs that they want to speed up
with a new ssd.

user adds the new 16G ssd to the VG:

$ vgextend vg /dev/ssd

user attaches some of the new ssd to the first main LV,
using half of the space:

$ lvconvert --type writecache --cachedevice /dev/ssd
    --cachesize 8G vg/main1

user attaches some of the new ssd to the second main LV,
using the other half of the space:

$ lvconvert --type writecache --cachedevice /dev/ssd
    --cachesize 8G vg/main2

Example
-------

A user has an existing main LV that they want to speed up using
two new ssds.

user adds the two new ssds to the VG:

$ vgextend vg /dev/ssd1
$ vgextend vg /dev/ssd2

user attaches both ssds to their main LV:

$ lvconvert --type writecache
    --cachedevice /dev/ssd1 --cachedevice /dev/ssd2 vg/main
2020-06-16 13:46:51 -05:00
David Teigland
950d2d59c1 integrity: wait for raid sync to complete 2020-06-16 12:29:41 -05:00
David Teigland
48872b0369 integrity: avoid increasing logical block size of active LV
When adding integrity to an active LV, avoid choosing an
integrity block size that would result in increasing the
logical block size of the LV.
2020-06-16 12:27:22 -05:00