mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

1013 Commits

Author SHA1 Message Date
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller max size of 15.81GiB
for thin-pool metadata. However, the real limit later settled at 15.88GiB
(difference is ~64MiB - 16448 4K blocks).

lvm2 could not simply increase the size as it has been using hard cropping
of the loaded metadata device to avoid kernel warnings printed
when the size was bigger (i.e. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no crop of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.
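A minimal sketch of the corresponding lvm.conf entry (placement assumed,
default value shown):

  allocation {
      thin_pool_crop_metadata = 0
  }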

Without cropping lvm2 now limits metadata allocation size to 15.88GiB.
Any space beyond is currently not used by thin-pool target.
Even if, e.g., a bigger LV is used for metadata via lvconvert,
or a bigger one is allocated because of a too large extent size.

With cropping enabled (=1) lvm2 preserves the old 15.81GiB limitation
and should allow working in environments with
older lvm2 tools (i.e. older distributions).

Thin-pool metadata bigger than 15.81GiB now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer
from various mismatches between thin_repair results and the allocated
metadata LV, since the thin_repair limit is 15.88GiB.
Users should use cropping only when really needed!

Patch also better handles resize of thin-pool metadata and prevents resize
beyond the usable size of 15.88GiB. Resize beyond 15.81GiB automatically
switches the pool to the no-crop version. Even with existing bigger thin-pool
metadata, the command 'lvextend -l+1 vg/pool_tmeta' makes the change.

Patch also gives better control over the 'converted' metadata LV and
reports a less confusing message during conversion.

Patch set also moves the code for updating min/max into pool_manip.c
for better sharing with cache_pool code.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
b218a7cfe7 man: update lvmthin
Add a few more notes about thin-pool repair.
Fix a couple of typos.
2021-02-01 12:06:13 +01:00
Marian Csontos
9757b4726c make: generate 2021-01-18 14:46:22 +01:00
Zdenek Kabelac
7691213a91 man: update lvmvdo
Fix vdo example.
Update some sentences.
2020-12-08 20:32:34 +01:00
David Teigland
9b3458d5a9 man lvmcache: add writecache cleaner info 2020-12-02 15:29:21 -06:00
Marek Suchánek
a2affffed5 man: update writing style of the lvmvdo man page
This patch improves the clarity, writing style, and language
of the lvmvdo(7) man page.

See https://bugzilla.redhat.com/show_bug.cgi?id=1855804.
2020-12-02 10:31:11 +01:00
Marian Csontos
205fb35b50 build: make generate 2020-11-26 17:37:32 +01:00
David Teigland
9c0253d930 man: vgsplit source and destination VGs
Make it clearer which VG is the source and which is the destination.
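For reference, the argument order is source VG first, then destination VG,
then the PVs to move; a hedged example with illustrative names:

  vgsplit vg_src vg_dst /dev/sdc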
2020-11-17 11:00:40 -06:00
Zdenek Kabelac
8801a86a3e man: update vdo
Enhance the VDO man page with a chapter describing memory usage
and space requirements.

Remove some unneeded blank lines in man page.

Use more precise terminology.

Correct examples, since cpool and vpool are protected names.
2020-11-03 16:34:46 +01:00
Zdenek Kabelac
edb55b767a man: regenerate 2020-10-24 01:42:16 +02:00
Marian Csontos
b50134dc14 make: generate 2020-10-15 11:16:54 +02:00
Zdenek Kabelac
0210c7076d man: correcting vdo issues
Fix reported bugs within the provided examples, so the examples
can be used via cut&paste.
2020-09-09 15:16:34 +02:00
Zdenek Kabelac
763342016c man: correctly use configured directories 2020-09-09 13:22:37 +02:00
Zdenek Kabelac
3e9664baca man: vdo improvements
Add some more notes about discard.
Correct enumeration.
2020-08-19 15:09:09 +02:00
Marian Csontos
9f8c331760 build: make generate 2020-08-09 15:20:22 +02:00
Zdenek Kabelac
88b92d4225 make: make generate
update
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
b7885dbb73 man: update cache page
Add a few more sentences about the migration threshold.
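A hedged illustration of tuning it per LV (names and value assumed):

  lvchange --cachesettings 'migration_threshold=16384' vg/cached_lv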
2020-06-24 15:01:03 +02:00
David Teigland
3bd9d81b29 man: lvmcache info about cachedevice usage 2020-06-22 11:24:02 -05:00
David Teigland
ce772bfab9 writecache: show error in lv_health_status and lv_attr
lv_attr is 'E' and lv_health_status is 'error'
when the dm-writecache status reports an error.
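A hedged way to inspect these fields (VG name assumed):

  lvs -o lv_name,lv_attr,lv_health_status vg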
2020-06-10 12:13:48 -05:00
Marian Csontos
70a45c44e8 build: make generate 2020-05-21 15:02:31 +02:00
David Teigland
5263551a2d lvmlockd: replace lock adopt info source
The lock adopt feature was disabled since it had used
lvmetad as a source of info.  This replaces the lvmetad
info with a local file and enables the adopt feature again
(enabled with lvmlockd --adopt 1).
2020-05-04 13:35:03 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
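For example, selecting the bitmap mode at creation time
(LV/VG names and size assumed):

  lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n rr -L1G foo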

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Peter Rajnoha
0dd905c959 blkdeactivate: add support for VDO in blkdeactivate script
Make it possible to tear down VDO volumes with blkdeactivate if VDO is
part of a device stack (and if the VDO binary is installed). Also, support
optional -o|--vdooptions configfile=file.
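A hedged usage sketch (config file path assumed):

  blkdeactivate -o configfile=/etc/vdoconf.yml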
2020-04-09 15:29:29 +02:00
David Teigland
fbdcc45255 man: lvm2-activation-generator fix vgchange comment
generated services use vgchange -aay (not -ay)
2020-03-10 14:41:51 -05:00
Jonathan Brassow
c392ccaa47 man: lvmcache raid1 references 2020-02-27 11:33:55 -06:00
Zdenek Kabelac
aa7642a444 generate: remake
Regen man page.
2020-02-04 17:22:06 +01:00
David Teigland
64a82a1c79 man: lvmcache writecache watermark percent 2020-02-03 11:21:24 -06:00
David Teigland
2444e830a9 man: updates to lvmcache 2020-01-30 14:09:21 -06:00
David Teigland
7078dd01e8 man: pvck dump description improvements 2020-01-22 15:01:00 -06:00
David Teigland
338f4df54b man pvck: describe settings 2019-12-06 16:24:27 -06:00
Marian Csontos
0a7495e680 build: make generate 2019-11-30 14:24:22 +01:00
David Teigland
3145a85583 pvck: repair headers and metadata
To write a new/repaired pv_header and label_header:

  pvck --repairtype pv_header --file <file> <device>

This uses the metadata input file to find the PV UUID,
device size, and data offset.

To write new/repaired metadata text and mda_header:

  pvck --repairtype metadata --file <file> <device>

This requires a good pv_header which points to one or two
metadata areas.  Any metadata areas referenced by the
pv_header are updated with the specified metadata and
a new mda_header. "--settings mda_num=1|2" can be used
to select one mda to repair.
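Combined, repairing only the first metadata area might look like
(metadata file name assumed):

  pvck --repairtype metadata --file meta.raw --settings "mda_num=1" <device>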

To combine all header and metadata repairs:

  pvck --repair --file <file> <device>

It's best to use a raw metadata file as input, one that was
extracted from another PV in the same VG (or from another
metadata area on the same PV).  pvck will also accept a
metadata backup file, but that will produce metadata that
is not identical to other metadata copies on other PVs
and other areas.  So, when using a backup file, consider
using it to update metadata on all PVs/areas.

To get a raw metadata file to use for the repair, see
pvck --dump metadata|metadata_search.

List all instances of metadata from the metadata area:
  pvck --dump metadata_search <device>

Save one instance of metadata at the given offset to
the specified file (this file can be used for repair):

  pvck --dump metadata_search --file <file>
    --settings "metadata_offset=<off>" <device>
2019-11-27 11:13:47 -06:00
Heinz Mauelshagen
e184f77109 man: adjust 'disks' to 'devices' as used throughout 2019-11-07 17:45:37 +01:00
Tony Asleson
12c47e0c98 man lvmvdo: Correct spellings 2019-10-30 10:38:40 -05:00
Tony Asleson
75628a5f4c man: Include '_vdata' as reserved name 2019-10-30 10:38:40 -05:00
David Teigland
0ba260e397 man lvmthin: change wording about mounting xfs 2019-10-24 10:10:18 -05:00
David Teigland
018cf39316 man: lvmcache naming updates 2019-10-21 11:35:28 -05:00
David Teigland
53b97b146d man: lvmcache note dm-cache block size issue 2019-10-08 09:59:38 -05:00
Marian Csontos
dc3f0e067d build: make generate 2019-09-25 08:27:49 +02:00
David Teigland
a53c157db0 man lvmthin: remove nonexistent topic 2019-08-19 14:06:32 -05:00
Zdenek Kabelac
08396b4bce make: generate
Run make generate.
2019-08-09 12:57:07 +02:00
Marian Csontos
b4ff865b44 build: make generate 2019-06-15 08:30:04 +02:00
Marian Csontos
a2c309a5c5 build: make generate 2019-06-07 17:59:43 +02:00
David Teigland
52586b1039 pvck: new dump option to extract metadata
The new command 'pvck --dump metadata PV' will extract
the current version of VG metadata from a PV for testing
and debugging.  --dump metadata_area extracts the entire
text metadata area.
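Hedged examples of both forms (device path assumed):

  pvck --dump metadata /dev/sdb
  pvck --dump metadata_area /dev/sdb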
2019-05-23 11:49:06 -05:00
David Teigland
6f408f68d2 man: updates to lvmlockd
- remove reference to locking_type which is no longer used
- remove references to adopting locks which has been disabled
- move some sanlock-specific info out of a general section
- remove info about doing automatic lockstart by the system
  since this was never used (the resource agent does it)
- replace info about lvextend and manual refresh under gfs2
  with a description about the automatic remote refresh
2019-04-04 14:36:28 -05:00
David Teigland
c33770c02d lvmlockd: do not allow mirror LV to be activated shared
This reverts 518a8e8cfb
  "lvmlockd: activate mirror LVs in shared mode with cmirrord"

because while activating a mirror LV with cmirrord worked,
changes to the active cmirror did not work.
2019-04-04 13:21:38 -05:00
Zdenek Kabelac
1117f1d46f man: dmeventd vdo plugin 2019-03-20 14:39:11 +01:00
Zdenek Kabelac
597113646d man: basic vdo stacking support
Document some basic lvconvert stacking possibilities.
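One such stacking step, sketched with assumed names and size, converts
an existing LV into a VDO pool:

  lvconvert --type vdo-pool -n vdo0 -V 10G vg/existing_lv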
2019-03-20 14:39:11 +01:00
David Teigland
a9eaab6beb Use "cachevol" to refer to cache on a single LV
and "cachepool" to refer to a cache on a cache pool object.

The problem was that the --cachepool option was being used
to refer to both a cache pool object, and to a standard LV
used for caching.  This could be somewhat confusing, and it
made it less clear when each kind would be used.  By
separating them, it's clear when a cachepool or a cachevol
should be used.

Previously:

- lvm would use the cache pool approach when the user passed
  a cache-pool LV to the --cachepool option.

- lvm would use the cache vol approach when the user passed
  a standard LV in the --cachepool option.

Now:

- lvm will always use the cache pool approach when the user
  uses the --cachepool option.

- lvm will always use the cache vol approach when the user
  uses the --cachevol option.
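Hedged examples of the two invocations (LV and VG names assumed):

  lvconvert --type cache --cachepool fast_pool vg/main
  lvconvert --type cache --cachevol fast vg/main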
2019-02-27 08:52:34 -06:00
Zdenek Kabelac
5cf8888976 man: lvmvdo component activation description
Describe component activation for VDO Data LV.
2019-01-28 22:39:10 +01:00