Mirror of git://sourceware.org/git/lvm2.git
Commit Graph

1022 Commits

Author SHA1 Message Date
David Teigland
a481fdaa35 man: lvmlockd use of lvmlockctl_kill_command 2021-03-17 13:02:51 -05:00
Marian Csontos
3bea893733 man: Fix wording in lvmthin(7) 2021-03-05 12:49:54 +01:00
David Teigland
89a3440fc0 lvmlockctl: use lvm.conf lvmlockctl_kill_command
which specifies a command to be run by lvmlockctl --kill.
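
A minimal sketch of the setting, assuming it lives in the global
section of lvm.conf; the script path shown is hypothetical:

  global {
      lvmlockctl_kill_command = "/usr/sbin/my_vg_kill_script"
  }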
2021-03-03 13:57:15 -06:00
Zdenek Kabelac
9cb913ab4b make: generate 2021-03-02 22:57:35 +01:00
Zdenek Kabelac
39eee85fff makefiles: better logging
Show only the filename instead of the full path name when building
with builddir != srcdir.
2021-03-02 22:54:40 +01:00
Zdenek Kabelac
520bd9356e makefiles: simplify and cleanup
Print all installed man pages with INSTALL.
Simplify distclean handling.
2021-03-02 22:54:40 +01:00
David Teigland
83fe6e720f device usage based on devices file
The LVM devices file lists devices that lvm can use.  The default
file is /etc/lvm/devices/system.devices, and the lvmdevices(8)
command is used to add or remove device entries.  If the file
does not exist, or if lvm.conf includes use_devicesfile=0, then
lvm will not use a devices file.  When the devices file is in use,
the regex filter is not used, and the filter settings in lvm.conf
or on the command line are ignored.
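
For illustration (not part of the commit message), disabling the
devices file and managing its entries might look like this; the
device path is hypothetical:

  # lvm.conf: disable the devices file entirely
  devices {
      use_devicesfile = 0
  }

  # add or remove entries in /etc/lvm/devices/system.devices
  lvmdevices --adddev /dev/sdb
  lvmdevices --deldev /dev/sdb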

LVM records devices in the devices file using hardware-specific
IDs, such as the WWID, and attempts to use subsystem-specific
IDs for virtual device types.  These device IDs are also written
in the VG metadata.  When no hardware or virtual ID is available,
lvm falls back to using the unstable device name as the device ID.
When devnames are used, lvm performs extra scanning to find
devices if their devname changes, e.g. after reboot.

When proper device IDs are used, an lvm command will not look
at devices outside the devices file, but when devnames are used
as a fallback, lvm will scan devices outside the devices file
to locate PVs on renamed devices.  A config setting
search_for_devnames can be used to control the scanning for
renamed devname entries.
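
As a sketch, the setting would sit in lvm.conf; the section and the
value shown here are assumptions, not stated in this commit:

  devices {
      # control scanning for renamed devname entries
      search_for_devnames = "auto"
  }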

Related to the devices file, the new command option
--devices <devnames> allows a list of devices to be specified for
the command to use, overriding the devices file.  The listed
devices act as a sort of devices file in terms of limiting which
devices lvm will see and use.  Devices that are not listed will
appear to be missing to the lvm command.
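
A hypothetical invocation, assuming the device list is
comma-separated and using made-up device names:

  pvs --devices /dev/sdb,/dev/sdc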

Multiple devices files can be kept in /etc/lvm/devices, which
allows lvm to be used with different sets of devices, e.g.
system devices do not need to be exposed to a specific application,
and the application can use lvm on its own set of devices that are
not exposed to the system.  The option --devicesfile <filename> is
used to select the devices file to use with the command.  Without
the option set, the default system devices file is used.

Setting --devicesfile "" causes lvm to not use a devices file.
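
A sketch of selecting an alternate file, or no file at all;
app.devices is a hypothetical filename under /etc/lvm/devices:

  vgs --devicesfile app.devices
  pvs --devicesfile ""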

An existing, empty devices file means lvm will see no devices.

The new command vgimportdevices adds PVs from a VG to the devices
file and updates the VG metadata to include the device IDs.
vgimportdevices -a will import all VGs into the system devices file.
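
For example (vg0 is a hypothetical VG name):

  vgimportdevices vg0
  vgimportdevices -a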

LVM commands run by dmeventd do not use a devices file by default,
and will look at all devices on the system.  A devices file can
be created for dmeventd (/etc/lvm/devices/dmeventd.devices).  If
this file exists, lvm commands run by dmeventd will use it.
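
A sketch of populating the dmeventd file, assuming lvmdevices
accepts --devicesfile together with --adddev; the device path is
made up:

  lvmdevices --devicesfile dmeventd.devices --adddev /dev/sdb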

Internal implementation:

- device_ids_read - read the devices file
  . add struct dev_use (du) to cmd->use_devices for each devices file entry
- dev_cache_scan - get /dev entries
  . add struct device (dev) to dev_cache for each device on the system
- device_ids_match - match devices file entries to /dev entries
  . match each du on cmd->use_devices to a dev in dev_cache, using device ID
  . on match, set du->dev, dev->id, dev->flags MATCHED_USE_ID
- label_scan - read lvm headers and metadata from devices
  . filters that do not need data from the device are applied
  . filter-deviceid skips devs without MATCHED_USE_ID, i.e.
    skips /dev entries that are not listed in the devices file
  . read lvm label from dev
  . filters that use data from the device are applied
  . read lvm metadata from dev
  . add info/vginfo structs for PVs/VGs (info is "lvmcache")
- device_ids_find_renamed_devs - handle devices with unstable devname ID
  where devname changed
  . this step is only needed when devs do not have proper device IDs,
    and their dev names change, e.g. after reboot sdb becomes sdc.
  . detect incorrect match because PVID in the devices file entry
    does not match the PVID found when the device was read above
  . undo incorrect match between du and dev above
  . search system devices for new location of PVID
  . update devices file with new devnames for PVIDs on renamed devices
  . label_scan the renamed devs
- continue with command processing
2021-02-23 16:43:32 -06:00
Zdenek Kabelac
2c5e034cd3 make: generate 2021-02-17 11:53:19 +01:00
Zdenek Kabelac
9c0ce4daa2 man: vdo drop resize restriction comment
lvm2 supports resize of cached vdo pool volumes.
2021-02-17 11:53:19 +01:00
Zdenek Kabelac
b4212be2e7 thin: improve 16g support for thin pool metadata
Initial support for thin-pool used a slightly smaller maximum size of
15.81GiB for thin-pool metadata. However, the real limit later settled
at 15.88GiB (a difference of ~64MiB, i.e. 16448 4K blocks).

lvm2 could not simply increase the size, as it had been using hard
cropping of the loaded metadata device to avoid kernel warnings when
the size was bigger (e.g. due to a bigger extent_size).

This patch adds the new lvm.conf configurable setting:
allocation/thin_pool_crop_metadata
which defaults to 0 -> no cropping of metadata beyond 15.81GiB.
Only users with these sizes of metadata will be affected.
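
For illustration, the setting would appear in lvm.conf roughly as
follows (the comments are mine, not from the commit):

  allocation {
      # 0 (default): no cropping, metadata limited to 15.88GiB
      # 1: preserve the old 15.81GiB cropping for compatibility
      thin_pool_crop_metadata = 0
  }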

Without cropping, lvm2 now limits the metadata allocation size to
15.88GiB. Any space beyond that is currently not used by the
thin-pool target, even if a bigger LV is used for metadata via
lvconvert, or a bigger one is allocated because of a large extent size.

With cropping enabled (=1), lvm2 preserves the old 15.81GiB
limitation and should allow working in an environment with
older lvm2 tools (i.e. an older distribution).

Thin-pool metadata bigger than 15.81GiB now uses the CROP_METADATA
flag within lvm2 metadata, so older lvm2 recognizes an
incompatible thin-pool and cannot activate such a pool!

Users should use the uncropped version, as it does not suffer from
various mismatches between thin_repair results and the allocated
metadata LV (the thin_repair limit is 15.88GiB).
Users should use cropping only when really needed!

The patch also better handles resize of thin-pool metadata and
prevents resize beyond the usable size of 15.88GiB. Resize beyond
15.81GiB automatically switches the pool to the no-crop version;
even with existing bigger thin-pool metadata, the command
'lvextend -l+1 vg/pool_tmeta' makes the change.

The patch gives better control of the 'converted' metadata LV and
reports a less confusing message during conversion.

The patch set also moves the code for updating min/max into
pool_manip.c for better sharing with the cache_pool code.
2021-02-01 12:06:13 +01:00
Zdenek Kabelac
b218a7cfe7 man: update lvmthin
Add a few more notes about thin-pool repair.
Fix a couple of typos.
2021-02-01 12:06:13 +01:00
Marian Csontos
9757b4726c make: generate 2021-01-18 14:46:22 +01:00
Zdenek Kabelac
7691213a91 man: update lvmvdo
Fix vdo example.
Update some sentences.
2020-12-08 20:32:34 +01:00
David Teigland
9b3458d5a9 man lvmcache: add writecache cleaner info 2020-12-02 15:29:21 -06:00
Marek Suchánek
a2affffed5 man: update writing style of the lvmvdo man page
This patch improves the clarity, writing style, and language
of the lvmvdo(7) man page.

See https://bugzilla.redhat.com/show_bug.cgi?id=1855804.
2020-12-02 10:31:11 +01:00
Marian Csontos
205fb35b50 build: make generate 2020-11-26 17:37:32 +01:00
David Teigland
9c0253d930 man: vgsplit source and destination VGs
Make it clearer which is the source and which is the destination.
2020-11-17 11:00:40 -06:00
Zdenek Kabelac
8801a86a3e man: update vdo
Enhance the VDO man page with a chapter describing memory usage
and space requirements.

Remove some unneeded blank lines in man page.

Use more precise terminology.

Correct examples, since cpool and vpool are protected names.
2020-11-03 16:34:46 +01:00
Zdenek Kabelac
edb55b767a man: regenerate 2020-10-24 01:42:16 +02:00
Marian Csontos
b50134dc14 make: generate 2020-10-15 11:16:54 +02:00
Zdenek Kabelac
0210c7076d man: correcting vdo issues
Fix reported bugs in the provided examples, so the examples
can be used via cut&paste.
2020-09-09 15:16:34 +02:00
Zdenek Kabelac
763342016c man: correctly use configured directories 2020-09-09 13:22:37 +02:00
Zdenek Kabelac
3e9664baca man: vdo improvements
Add some more notes about discard.
Correct enumeration.
2020-08-19 15:09:09 +02:00
Marian Csontos
9f8c331760 build: make generate 2020-08-09 15:20:22 +02:00
Zdenek Kabelac
88b92d4225 make: make generate
update
2020-06-24 15:01:03 +02:00
Zdenek Kabelac
b7885dbb73 man: update cache page
Add a few more sentences about the migration threshold.
2020-06-24 15:01:03 +02:00
David Teigland
3bd9d81b29 man: lvmcache info about cachedevice usage 2020-06-22 11:24:02 -05:00
David Teigland
ce772bfab9 writecache: show error in lv_health_status and lv_attr
lv_attr is 'E' and lv_health_status is 'error'
when dm-writecache status reports error.
2020-06-10 12:13:48 -05:00
Marian Csontos
70a45c44e8 build: make generate 2020-05-21 15:02:31 +02:00
David Teigland
5263551a2d lvmlockd: replace lock adopt info source
The lock adopt feature was disabled since it had used
lvmetad as a source of info.  This replaces the lvmetad
info with a local file and enables the adopt feature again
(enabled with lvmlockd --adopt 1).
2020-05-04 13:35:03 -05:00
David Teigland
d9e8895a96 Allow dm-integrity to be used for raid images
dm-integrity stores checksums of the data written to an
LV, and returns an error if data read from the LV does
not match the previously saved checksum.  When used on
raid images, dm-raid will correct the error by reading
the block from another image, and the device user sees
no error.  The integrity metadata (checksums) are stored
on an internal LV allocated by lvm for each linear image.
The internal LV is allocated on the same PV as the image.

Create a raid LV with an integrity layer over each
raid image (for raid levels 1,4,5,6,10):

lvcreate --type raidN --raidintegrity y [options]

Add an integrity layer to images of an existing raid LV:

lvconvert --raidintegrity y LV

Remove the integrity layer from images of a raid LV:

lvconvert --raidintegrity n LV

Settings

Use --raidintegritymode journal|bitmap (journal is default)
to configure the method used by dm-integrity to ensure
crash consistency.
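
For example, a hypothetical creation using bitmap mode instead of
the default journal (reusing the VG and LV names from the example
below):

  lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -n rr -L1G foo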

Initialization

When integrity is added to an LV, the kernel needs to
initialize the integrity metadata/checksums for all blocks
in the LV.  The data corruption checking performed by
dm-integrity will only operate on areas of the LV that
are already initialized.  The progress of integrity
initialization is reported by the "syncpercent" LV
reporting field (and under the Cpy%Sync lvs column.)

Example: create a raid1 LV with integrity:

$ lvcreate --type raid1 -m1 --raidintegrity y -n rr -L1G foo
  Creating integrity metadata LV rr_rimage_0_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_0_imeta" created.
  Creating integrity metadata LV rr_rimage_1_imeta with size 12.00 MiB.
  Logical volume "rr_rimage_1_imeta" created.
  Logical volume "rr" created.
$ lvs -a foo
  LV                  VG  Attr       LSize  Origin              Cpy%Sync
  rr                  foo rwi-a-r---  1.00g                     4.93
  [rr_rimage_0]       foo gwi-aor---  1.00g [rr_rimage_0_iorig] 41.02
  [rr_rimage_0_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_0_iorig] foo -wi-ao----  1.00g
  [rr_rimage_1]       foo gwi-aor---  1.00g [rr_rimage_1_iorig] 39.45
  [rr_rimage_1_imeta] foo ewi-ao---- 12.00m
  [rr_rimage_1_iorig] foo -wi-ao----  1.00g
  [rr_rmeta_0]        foo ewi-aor---  4.00m
  [rr_rmeta_1]        foo ewi-aor---  4.00m
2020-04-15 12:10:32 -05:00
Peter Rajnoha
0dd905c959 blkdeactivate: add support for VDO in blkdeactivate script
Make it possible to tear down VDO volumes with blkdeactivate if VDO is
part of a device stack (and if the VDO binary is installed). Also,
support the optional -o|--vdooptions configfile=file.
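
A hypothetical invocation passing VDO options; the config file path
and device name are made up:

  blkdeactivate -o configfile=/etc/vdoconf.yml /dev/mapper/vdostack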
2020-04-09 15:29:29 +02:00
David Teigland
fbdcc45255 man: lvm2-activation-generator fix vgchange comment
The generated services use vgchange -aay (not -ay).
2020-03-10 14:41:51 -05:00
Jonathan Brassow
c392ccaa47 man: lvmcache raid1 references 2020-02-27 11:33:55 -06:00
Zdenek Kabelac
aa7642a444 generate: remake
Regenerate the man page.
2020-02-04 17:22:06 +01:00
David Teigland
64a82a1c79 man: lvmcache writecache watermark percent 2020-02-03 11:21:24 -06:00
David Teigland
2444e830a9 man: updates to lvmcache 2020-01-30 14:09:21 -06:00
David Teigland
7078dd01e8 man: pvck dump description improvements 2020-01-22 15:01:00 -06:00
David Teigland
338f4df54b man pvck: describe settings 2019-12-06 16:24:27 -06:00
Marian Csontos
0a7495e680 build: make generate 2019-11-30 14:24:22 +01:00
David Teigland
3145a85583 pvck: repair headers and metadata
To write a new/repaired pv_header and label_header:

  pvck --repairtype pv_header --file <file> <device>

This uses the metadata input file to find the PV UUID,
device size, and data offset.

To write new/repaired metadata text and mda_header:

  pvck --repairtype metadata --file <file> <device>

This requires a good pv_header which points to one or two
metadata areas.  Any metadata areas referenced by the
pv_header are updated with the specified metadata and
a new mda_header. "--settings mda_num=1|2" can be used
to select one mda to repair.

To combine all header and metadata repairs:

  pvck --repair --file <file> <device>

It's best to use as input a raw metadata file that was extracted
from another PV in the same VG (or from another metadata area on
the same PV.)  pvck will also accept a metadata backup file, but
that will produce metadata that is not identical to other metadata
copies on other PVs and other areas.  So, when using a backup file,
consider using it to update the metadata on all PVs/areas.

To get a raw metadata file to use for the repair, see
pvck --dump metadata|metadata_search.

List all instances of metadata from the metadata area:
  pvck --dump metadata_search <device>

Save one instance of metadata at the given offset to
the specified file (this file can be used for repair):

  pvck --dump metadata_search --file <file>
    --settings "metadata_offset=<off>" <device>
2019-11-27 11:13:47 -06:00
Heinz Mauelshagen
e184f77109 man: adjust 'disks' to 'devices' as used throughout 2019-11-07 17:45:37 +01:00
Tony Asleson
12c47e0c98 man lvmvdo: Correct spellings 2019-10-30 10:38:40 -05:00
Tony Asleson
75628a5f4c man: Include '_vdata' as reserved name 2019-10-30 10:38:40 -05:00
David Teigland
0ba260e397 man lvmthin: change wording about mounting xfs 2019-10-24 10:10:18 -05:00
David Teigland
018cf39316 man: lvmcache naming updates 2019-10-21 11:35:28 -05:00
David Teigland
53b97b146d man: lvmcache note dm-cache block size issue 2019-10-08 09:59:38 -05:00
Marian Csontos
dc3f0e067d build: make generate 2019-09-25 08:27:49 +02:00
David Teigland
a53c157db0 man lvmthin: remove nonexistent topic 2019-08-19 14:06:32 -05:00
Zdenek Kabelac
08396b4bce make: generate
Run make generate.
2019-08-09 12:57:07 +02:00