mirror of git://sourceware.org/git/lvm2.git synced 2025-09-20 05:44:20 +03:00

Compare commits


21 Commits

Author SHA1 Message Date
David Teigland
f59cb6632b pvscan: use alternate device names from DEVLINKS to check filter
The filter may contain a symlink for the device, and that symlink
may not yet be created when our udev rule runs pvscan --cache using
the kernel dev name.  The kernel dev name does not match the symlink
name in the filter, so pvscan will ignore the device.  udev sets
the DEVLINKS env variable to a list of link names for the device
that will be created, so check the filter with that set of names.
2023-01-19 17:37:31 -06:00
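A rough illustration of the scenario (the filter entry and device names below are hypothetical, not taken from the patch):

# lvm.conf filter that only accepts a persistent symlink name:
#   filter = [ "a|^/dev/disk/by-id/wwn-0xdeadbeef$|", "r|.*|" ]
# The udev rule runs pvscan --cache with the kernel name, e.g. /dev/sdb,
# which this filter would reject.  udev exports the symlinks it will create:
udevadm info --query=property /dev/sdb | grep '^DEVLINKS='
# DEVLINKS=/dev/disk/by-id/wwn-0xdeadbeef /dev/disk/by-path/...
# pvscan --cache now also checks these alternate names against the filter.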
David Teigland
3bb5576528 lvresize: only resize crypt when fs resize is enabled
There were a couple of cases where lvresize, without --fs resize,
was resizing the crypt layer above the LV.  Resizing the crypt
layer should only be done when fs resizing is enabled (even if the
fs is already small enough due to being independently reduced).

Also, check the size of the crypt device to see if it's already
been reduced independently, and skip the cryptsetup resize if
it's not needed.
2023-01-19 11:52:14 -06:00
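A minimal sketch of the intended behaviour (names hypothetical; the full scenario is exercised in the lvresize-fs-crypt test further down):

# the fs on the crypt device was already shrunk independently, e.g. with resize2fs
lvresize -L-100M vg/lv               # fails: crypt reduce is needed but --fs resize was not given
lvresize -L-100M --fs resize vg/lv   # shrinks the crypt device (only if still too big), then the LV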
Zdenek Kabelac
92199ad0b9 makefiles: fix grep warning from make
Remove unnecessary '\'.
2023-01-16 12:37:40 +01:00
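For context, newer GNU grep warns about the escaped '#' that the Makefile rule was passing (output approximate):

grep "\#define DM_DIR" libdm/misc/dm-ioctl.h
# grep: warning: stray \ before #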
Zdenek Kabelac
3a58e08b8c makefiles: comment out hiding dir entering
While the build output looks more polished, text editors fail to
find the source file from compile errors - so until we start printing
all files with full paths - comment out this make build parameter.
2023-01-16 12:37:40 +01:00
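The parameter being commented out is --no-print-directory; with it gone, GNU make again prints the directory-entry lines that editors use to resolve relative paths in compile errors (sample output, paths hypothetical):

make
# make[1]: Entering directory '/home/user/lvm2/lib'
# cc ... -c metadata/vdo_manip.c
# make[1]: Leaving directory '/home/user/lvm2/lib'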
Zdenek Kabelac
3bedceec38 libdm: correcting ifdef position
Fix building without ioctl support.
2023-01-16 12:37:40 +01:00
Zdenek Kabelac
aa09232dc4 tests: vdo resizing 2023-01-16 12:37:40 +01:00
Zdenek Kabelac
c20f01a0cb vdo: resize requires active vdopool volume
ATM the kernel VDO target does not handle resizing of inactive VDO LVs,
so prevent users from corrupting such LVs and require that they be active.
2023-01-16 12:37:40 +01:00
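A minimal sketch of the new restriction (mirrors the vdo resize test added below; names hypothetical):

lvchange -an vg/vdolv
lvresize -L+10G vg/vdolv   # now rejected: "Cannot resize inactive VDO logical volume ..."
lvchange -ay vg/vdolv
lvresize -L+10G vg/vdolv   # resizing is allowed once the VDO volume is active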
Zdenek Kabelac
2451bc568f vdo: fix and enhance vdo constraint checking
Enhance VDO constraint checking so it also handles changes of active VDO LVs,
where only the added difference is now considered.

For this, the reported informational message about used memory
was also improved to list only the RAM-consuming blocks.
2023-01-16 12:37:40 +01:00
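As a rough worked example using the constants from the patch below (about 268 MiB of RAM per 1 TiB of physical size, ~1.6 MiB per 1 TiB of virtual size, ~1.15 MiB per 1 MiB of block map cache): extending an already active VDO pool by 1 TiB of physical size now only needs roughly 268 MiB of additional free RAM to pass the check, instead of re-validating the memory required for the whole pool.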
Zdenek Kabelac
1bed2cafe8 vdo: read live vdo size configuration
Introduce struct vdo_pool_size_config, usable to calculate the necessary
memory size for an active VDO volume.

The function lv_vdo_pool_size_config() reads this configuration
out of the runtime DM table line.
2023-01-16 12:37:40 +01:00
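Roughly, the configuration is read back from the active table line, e.g. (device name and numbers made up; field meaning follows the sscanf in the dev_manager patch further down):

dmsetup table vg-vpool0-vpool
# 0 2097152 vdo V4 /dev/dm-3 262144 4096 32768 ...
# After the "vdo" target name: table line version (V4), backing data device,
# physical size in 4K blocks, one skipped field, block map cache size in 4K blocks.
# The table length (2097152 sectors) provides the virtual size; lvm converts the
# 4K-block values to 512B sectors and MiB respectively.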
Zdenek Kabelac
773b88e028 vdo: check memory only in non critical section
When we are actually resizing a VDO device - we need to check the size only in
the non-critical section - otherwise we are checking
2023-01-16 12:37:38 +01:00
Zdenek Kabelac
f486eb60d5 lvresize: use standard extent conversion function
We need to validate whether the requested resize size can be
expressed with the given extent_size.
2023-01-16 12:35:00 +01:00
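For example, with a 4 MiB extent size a request that is not a whole number of extents cannot be honoured exactly, so the standard conversion rounds it to full extents (sizes illustrative):

vgs -o vg_name,vg_extent_size vg   # 4.00m
lvresize -L+6M vg/lv               # 6 MiB = 1.5 extents, rounded up to 2 extents (8 MiB)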
lilinjie
bb34ebd4e4 fix typo
Signed-off-by: lilinjie <lilinjie@uniontech.com>
(cherry picked from commit 81b1f5bc3bac0e2e9099b67162da7d1a4995c5f4)
2023-01-11 13:52:12 +01:00
Marian Csontos
2ab81a3513 lvmlockd: Fix syntax error in previous commit 2023-01-11 13:34:38 +01:00
David Teigland
7c9c3ba5d5 lvmlockd: fix report of lv_active_exclusively for special lv types
Cover a case missed by the recent commit e0ea0706d
"report: query lvmlockd for lv_active_exclusively"

Fix the lv_active_exclusively value reported for thin LVs.
It's the thin pool that is locked in lvmlockd, and the thin
LV state was mistakenly being queried and not found.

Certain LV types like thin can only be activated exclusively, so
always report lv_active_exclusively true for these when active.
2023-01-10 15:37:15 -06:00
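The effect shows up directly in the reporting field, e.g. in a shared (lvmlockd) VG (output not reproduced literally):

lvs -o lv_name,active,lv_active_exclusively vg
# an active thin LV in a shared VG now reports lv_active_exclusively as set,
# since thin LVs can only ever be activated exclusively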
David Teigland
789904bd57 tests: vgimportclone with incomplete pv list and nomda pv 2023-01-05 14:47:49 -06:00
David Teigland
c4b898a53e vgimportclone: fix importing PV without metadata
If one of the PVs in the VG does not hold metadata, then the
command would fail, thinking that PV was from a different VG.
Also add missing free on that error path.
2023-01-05 14:28:31 -06:00
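A sketch of the failing case (device names hypothetical; the test added below exercises the same idea with a nomda PV):

pvcreate --metadatacopies 0 /dev/sdc   # PV without a metadata area
vgcreate vg /dev/sdb /dev/sdc
# clone both disks (say to /dev/sdd and /dev/sde), then:
vgimportclone -n newvg /dev/sdd /dev/sde
# previously this failed, treating the mda-less clone as if it came from a different VG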
David Teigland
2580f007f0 tests: lvresize-fs-crypt using helper only for crypt dev 2023-01-03 14:35:26 -06:00
David Teigland
81acde7ffd lvresize: fix cryptsetup resize in helper
A typo used "cryptresize" as the command name.

This affects cases where the file system is resized
independently, and then the lvresize command is used,
which only needs to resize the crypt device and the LV.
2023-01-03 11:40:53 -06:00
Samanta Navarro
aec5e573af doc: fix typos in documentation
Typos found with codespell.
2023-01-03 16:09:58 +01:00
Marian Csontos
118145b072 post-release 2023-01-03 16:02:07 +01:00
Marian Csontos
2abb029f2a pre-release 2022-12-22 16:07:35 +01:00
59 changed files with 470 additions and 132 deletions

View File

@@ -24,7 +24,7 @@ You MUST disable (or mask) any LVM daemons:
For running cluster tests, we are using singlenode locking. Pass
`--with-clvmd=singlenode` to configure.
NOTE: This is useful only for testing, and should not be used in produciton
NOTE: This is useful only for testing, and should not be used in production
code.
To run D-Bus daemon tests, existing D-Bus session is required.

View File

@@ -1 +1 @@
2.03.18(2)-git (2022-11-10)
2.03.19(2)-git (2022-12-22)

View File

@@ -1 +1 @@
1.02.189-git (2022-11-10)
1.02.191-git (2022-12-22)

View File

@@ -1,4 +1,8 @@
Version 2.03.18 -
version 2.03.19 -
====================================
Fix and improve runtime memory size detection for VDO volumes.
version 2.03.18 - 22nd december 2022
====================================
Fix issues reported by coverity scan.
Fix warning for thin pool overprovisioning on lvextend (2.03.17).
@@ -193,7 +197,7 @@ Version 2.03.10 - 09th August 2020
Version 2.03.09 - 26th March 2020
=================================
Fix formating of vdopool (vdo_slab_size_mb was smaller by 2 bits).
Fix formatting of vdopool (vdo_slab_size_mb was smaller by 2 bits).
Fix showing of a dm kernel error when uncaching a volume with cachevol.
Version 2.03.08 - 11th February 2020

View File

@@ -1,4 +1,7 @@
Version 1.02.189 -
Version 1.02.191 -
=====================================
Version 1.02.189 - 22nd December 2022
=====================================
Improve 'dmsetup create' without given table line with new kernels.

View File

@@ -757,7 +757,7 @@ allocation {
# vdo_max_discard = 1
# Configuration option allocation/vdo_pool_header_size.
# Specified the emptry header size in KiB at the front and end of vdo pool device.
# Specified the empty header size in KiB at the front and end of vdo pool device.
# This configuration option has an automatic default value.
# vdo_pool_header_size = 512
}
@@ -936,7 +936,7 @@ backup {
# archive = 1
# Configuration option backup/archive_dir.
# Location of the metdata archive files.
# Location of the metadata archive files.
# Remember to back up this directory regularly!
# This configuration option has an automatic default value.
# archive_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_ARCHIVE_SUBDIR@"
@@ -1463,13 +1463,13 @@ activation {
# Configuration option activation/reserved_stack.
# Stack size in KiB to reserve for use while devices are suspended.
# Insufficent reserve risks I/O deadlock during device suspension.
# Insufficient reserve risks I/O deadlock during device suspension.
# This configuration option has an automatic default value.
# reserved_stack = 64
# Configuration option activation/reserved_memory.
# Memory size in KiB to reserve for use while devices are suspended.
# Insufficent reserve risks I/O deadlock during device suspension.
# Insufficient reserve risks I/O deadlock during device suspension.
# This configuration option has an automatic default value.
# reserved_memory = 8192
@@ -1604,7 +1604,7 @@ activation {
# This includes LVs that have the following segment types:
# raid1, raid4, raid5*, and raid6*.
# If a device in the LV fails, the policy determines the steps
# performed by dmeventd automatically, and the steps perfomed by the
# performed by dmeventd automatically, and the steps performed by the
# manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#
@@ -1628,7 +1628,7 @@ activation {
# (copies) and a mirror log. A disk log ensures that a mirror LV does
# not need to be re-synced (all copies made the same) every time a
# machine reboots or crashes. If a device in the LV fails, this policy
# determines the steps perfomed by dmeventd automatically, and the steps
# determines the steps performed by dmeventd automatically, and the steps
# performed by the manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#

View File

@@ -139,7 +139,6 @@ static char *_align(char *ptr, unsigned int a)
return (char *) (((unsigned long) ptr + agn) & ~agn);
}
#ifdef DM_IOCTLS
static unsigned _kernel_major = 0;
static unsigned _kernel_minor = 0;
static unsigned _kernel_release = 0;
@@ -182,6 +181,9 @@ int get_uname_version(unsigned *major, unsigned *minor, unsigned *release)
return 1;
}
#ifdef DM_IOCTLS
/*
* Set number to NULL to populate _dm_bitset - otherwise first
* match is returned.

View File

@@ -67,7 +67,7 @@ the entries (each hotspot block covers a larger area than a single
cache block).
All this means smq uses ~25bytes per cache block. Still a lot of
memory, but a substantial improvement nontheless.
memory, but a substantial improvement nonetheless.
Level balancing:
mq placed entries in different levels of the multiqueue structures

View File

@@ -35,7 +35,7 @@ Parameters: <cipher> <key> <iv_offset> <device path> \
capi:authenc(hmac(sha256),xts(aes))-random
capi:rfc7539(chacha20,poly1305)-random
The /proc/crypto contains a list of curently loaded crypto modes.
The /proc/crypto contains a list of currently loaded crypto modes.
<key>
Key used for encryption. It is encoded either as a hexadecimal number
@@ -81,7 +81,7 @@ Parameters: <cipher> <key> <iv_offset> <device path> \
<#opt_params>
Number of optional parameters. If there are no optional parameters,
the optional paramaters section can be skipped or #opt_params can be zero.
the optional parameters section can be skipped or #opt_params can be zero.
Otherwise #opt_params is the number of following arguments.
Example of optional parameters section:

View File

@@ -120,7 +120,7 @@ journal_crypt:algorithm(:key) (the key is optional)
"salsa20", "ctr(aes)" or "ecb(arc4)").
The journal contains history of last writes to the block device,
an attacker reading the journal could see the last sector nubmers
an attacker reading the journal could see the last sector numbers
that were written. From the sector numbers, the attacker can infer
the size of files that were written. To protect against this
situation, you can encrypt the journal.

View File

@@ -65,7 +65,7 @@ Construction Parameters
<#opt_params>
Number of optional parameters. If there are no optional parameters,
the optional paramaters section can be skipped or #opt_params can be zero.
the optional parameters section can be skipped or #opt_params can be zero.
Otherwise #opt_params is the number of following arguments.
Example of optional parameters section:

View File

@@ -37,7 +37,7 @@ segment type. The available RAID types are:
"raid6_nr" - RAID6 Rotating parity N with data restart
"raid6_nc" - RAID6 Rotating parity N with data continuation
The exception to 'no shorthand options' will be where the RAID implementations
can displace traditional tagets. This is the case with 'mirror' and 'raid1'.
can displace traditional targets. This is the case with 'mirror' and 'raid1'.
In this case, "mirror_segtype_default" - found under the "global" section in
lvm.conf - can be set to "mirror" or "raid1". The segment type inferred when
the '-m' option is used will be taken from this setting. The default segment
@@ -104,7 +104,7 @@ and 4 devices for RAID 6/10.
lvconvert should work exactly as it does now when dealing with mirrors -
even if(when) we switch to MD RAID1. Of course, there are no plans to
allow the presense of the metadata area to be configurable (e.g. --corelog).
allow the presence of the metadata area to be configurable (e.g. --corelog).
It will be simple enough to detect if the LV being up/down-converted is
new or old-style mirroring.
@@ -120,7 +120,7 @@ RAID4 to RAID5 or RAID5 to RAID6.
Line 02/03/04:
These are familiar options - all of which would now be available as options
for change. (However, it'd be nice if we didn't have regionsize in there.
It's simple on the kernel side, but is just an extra - often unecessary -
It's simple on the kernel side, but is just an extra - often unnecessary -
parameter to many functions in the LVM codebase.)
Line 05:
@@ -375,8 +375,8 @@ the slot. Even the names of the images will be renamed to properly reflect
their index in the array. Unlike the "mirror" segment type, you will never have
an image named "*_rimage_1" occupying the index position 0.
As with adding images, removing images holds off on commiting LVM metadata
until all possible changes have been made. This reduces the likelyhood of bad
As with adding images, removing images holds off on committing LVM metadata
until all possible changes have been made. This reduces the likelihood of bad
intermediate stages being left due to a failure of operation or machine crash.
RAID1 '--splitmirrors', '--trackchanges', and '--merge' operations

View File

@@ -87,7 +87,7 @@ are as follows:
/etc/lvm/lvm.conf. Once this operation is complete, the logical volumes
will be consistent. However, the volume group will still be inconsistent -
due to the refernced-but-missing device/PV - and operations will still be
restricted to the aformentioned actions until either the device is
restricted to the aforementioned actions until either the device is
restored or 'vgreduce --removemissing' is run.
Device Revival (transient failures):
@@ -135,9 +135,9 @@ If a mirror is not 'in-sync', a read failure will produce an I/O error.
This error will propagate all the way up to the applications above the
logical volume (e.g. the file system). No automatic intervention will
take place in this case either. It is up to the user to decide what
can be done/salvaged in this senario. If the user is confident that the
can be done/salvaged in this scenario. If the user is confident that the
images of the mirror are the same (or they are willing to simply attempt
to retreive whatever data they can), 'lvconvert' can be used to eliminate
to retrieve whatever data they can), 'lvconvert' can be used to eliminate
the failed image and proceed.
Mirror resynchronization errors:
@@ -191,11 +191,11 @@ command are set in the LVM configuration file. They are:
3-way mirror fails, the mirror will be converted to a 2-way mirror.
The "allocate" policy takes the further action of trying to replace
the failed image using space that is available in the volume group.
Replacing a failed mirror image will incure the cost of
Replacing a failed mirror image will incur the cost of
resynchronizing - degrading the performance of the mirror. The
default policy for handling an image failure is "remove". This
allows the mirror to still function, but gives the administrator the
choice of when to incure the extra performance costs of replacing
choice of when to incur the extra performance costs of replacing
the failed image.
RAID logical volume device failures are handled differently from the "mirror"

View File

@@ -63,7 +63,7 @@ classical snapshot merge, thin snapshot merge.
The second store is suited only for pvmove --abort operations in-progress. Both
stores are independent and identical LVs (pvmove /dev/sda3 and pvmove --abort /dev/sda3)
can be run concurently from lvmpolld point of view (on lvm2 side the consistency is
can be run concurrently from lvmpolld point of view (on lvm2 side the consistency is
guaranteed by lvm2 locking mechanism).
Locking order

View File

@@ -126,7 +126,7 @@ Usage Examples
followed by 'vgchange -ay vg2'
Option (ii) - localised admin & configuation
Option (ii) - localised admin & configuration
(i.e. each host holds *locally* which classes of volumes to activate)
# Add @database tag to vg1's metadata
vgchange --addtag @database vg1

View File

@@ -35,7 +35,7 @@ VGs from PVs as they appear, and at the same time collect information on what is
already available. A command, pvscan --cache is expected to be used to
implement udev rules. It is relatively easy to make this command print out a
list of VGs (and possibly LVs) that have been made available by adding any
particular device to the set of visible devices. In othe words, udev says "hey,
particular device to the set of visible devices. In other words, udev says "hey,
/dev/sdb just appeared", calls pvscan --cache, which talks to lvmetad, which
says "cool, that makes vg0 complete". Pvscan takes this info and prints it out,
and the udev rule can then somehow decide whether anything needs to be done

View File

@@ -322,6 +322,11 @@ int lv_vdo_pool_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return 0;
}
int lv_vdo_pool_size_config(const struct logical_volume *lv,
struct vdo_pool_size_config *cfg)
{
return 0;
}
int lvs_in_vg_activated(const struct volume_group *vg)
{
return 0;
@@ -1363,6 +1368,32 @@ int lv_vdo_pool_percent(const struct logical_volume *lv, dm_percent_t *percent)
return 1;
}
/*
* lv_vdo_pool_size_config obtains size configuration from active VDO table line
*
* If the 'params' string has been already retrieved, use it.
* If the mempool already exists, use it.
*
*/
int lv_vdo_pool_size_config(const struct logical_volume *lv,
struct vdo_pool_size_config *cfg)
{
struct dev_manager *dm;
int r;
if (!lv_info(lv->vg->cmd, lv, 1, NULL, 0, 0))
return 1; /* Inactive VDO pool -> no runtime config */
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, !lv_is_pvmove(lv))))
return_0;
r = dev_manager_vdo_pool_size_config(dm, lv, cfg);
dev_manager_destroy(dm);
return r;
}
static int _lv_active(struct cmd_context *cmd, const struct logical_volume *lv)
{
struct lvinfo info;

View File

@@ -204,6 +204,8 @@ int lv_thin_pool_status(const struct logical_volume *lv, int flush,
int lv_vdo_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_vdo **status);
int lv_vdo_pool_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_vdo_pool_size_config(const struct logical_volume *lv,
struct vdo_pool_size_config *cfg);
/*
* Return number of LVs in the VG that are active.

View File

@@ -1957,6 +1957,71 @@ out:
return r;
}
int dev_manager_vdo_pool_size_config(struct dev_manager *dm,
const struct logical_volume *lv,
struct vdo_pool_size_config *cfg)
{
const char *dlid;
struct dm_info info;
uint64_t start, length;
struct dm_task *dmt = NULL;
char *type = NULL;
char *params = NULL;
int r = 0;
unsigned version = 0;
memset(cfg, 0, sizeof(*cfg));
if (!(dlid = build_dm_uuid(dm->mem, lv, lv_layer(lv))))
return_0;
if (!(dmt = _setup_task_run(DM_DEVICE_TABLE, &info, NULL, dlid, 0, 0, 0, 0, 0, 0)))
return_0;
if (!info.exists)
goto inactive; /* VDO device is not active, should not happen here... */
log_debug_activation("Checking VDO pool table line for LV %s.",
display_lvname(lv));
if (dm_get_next_target(dmt, NULL, &start, &length, &type, &params)) {
log_error("More then one table line found for %s.",
display_lvname(lv));
goto out;
}
if (!type || strcmp(type, TARGET_NAME_VDO)) {
log_error("Expected %s segment type but got %s instead.",
TARGET_NAME_VDO, type ? type : "NULL");
goto out;
}
if (sscanf(params, "V%u %*s " FMTu64 " %*u " FMTu32,
&version, &cfg->physical_size, &cfg->block_map_cache_size_mb) != 3) {
log_error("Failed to parse VDO parameters %s for LV %s.",
params, display_lvname(lv));
goto out;
}
switch (version) {
case 2: break;
case 4: break;
default: log_warn("WARNING: Unknown VDO table line version %u.", version);
}
cfg->virtual_size = length;
cfg->physical_size *= 8; // From 4K unit to 512B
cfg->block_map_cache_size_mb /= 256; // From 4K unit to MiB
cfg->index_memory_size_mb = first_seg(lv)->vdo_params.index_memory_size_mb; // Preserved
inactive:
r = 1;
out:
dm_task_destroy(dmt);
return r;
}
/*************************/
/* NEW CODE STARTS HERE */

View File

@@ -83,6 +83,9 @@ int dev_manager_thin_pool_status(struct dev_manager *dm,
int dev_manager_vdo_pool_status(struct dev_manager *dm,
const struct logical_volume *lv, int flush,
struct lv_status_vdo **status, int *exists);
int dev_manager_vdo_pool_size_config(struct dev_manager *dm,
const struct logical_volume *lv,
struct vdo_pool_size_config *cfg);
int dev_manager_suspend(struct dev_manager *dm, const struct logical_volume *lv,
struct lv_activate_opts *laopts, int lockfs, int flush_required);
int dev_manager_activate(struct dev_manager *dm, const struct logical_volume *lv,

View File

@@ -52,6 +52,7 @@ static struct {
struct dm_regex *preferred_names_matcher;
const char *dev_dir;
int preferred_names_disabled;
int has_scanned;
long st_dev;
struct dm_list dirs;
@@ -166,11 +167,19 @@ void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev)
if (_cache.preferred_names_matcher)
return;
if (_cache.preferred_names_disabled)
return;
log_debug_devs("%s: New preferred name", sl->str);
dm_list_del(&sl->list);
dm_list_add_h(&dev->aliases, &sl->list);
}
void dev_cache_disable_preferred_names(void)
{
_cache.preferred_names_disabled = 1;
}
/*
* Check whether path0 or path1 contains the subpath. The path that
* *does not* contain the subpath wins (return 0 or 1). If both paths

View File

@@ -61,6 +61,8 @@ struct device *dev_hash_get(const char *name);
void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev);
void dev_cache_disable_preferred_names(void);
/*
* Object for iterating through the cache.
*/

View File

@@ -22,6 +22,7 @@
#include <dirent.h>
#include <mntent.h>
#include <sys/ioctl.h>
static const char *_lvresize_fs_helper_path;
@@ -105,6 +106,7 @@ int fs_get_info(struct cmd_context *cmd, struct logical_volume *lv,
struct fs_info info;
FILE *fme = NULL;
struct mntent *me;
int fd;
int ret;
if (dm_snprintf(lv_path, PATH_MAX, "%s%s/%s", lv->vg->cmd->dev_dir,
@@ -150,6 +152,17 @@ int fs_get_info(struct cmd_context *cmd, struct logical_volume *lv,
log_print("File system found on crypt device %s on LV %s.",
crypt_path, display_lvname(lv));
if ((fd = open(crypt_path, O_RDONLY)) < 0) {
log_error("Failed to open crypt path %s", crypt_path);
return 0;
}
if (ioctl(fd, BLKGETSIZE64, &info.crypt_dev_size_bytes) < 0) {
log_error("Failed to get crypt device size %s", crypt_path);
close(fd);
return 0;
}
close(fd);
if (!fs_get_blkid(crypt_path, &info)) {
log_error("No file system info from blkid for dm-crypt device %s on LV %s.",
crypt_path, display_lvname(lv));

View File

@@ -25,6 +25,7 @@ struct fs_info {
uint64_t fs_last_byte; /* last byte on the device used by the fs */
uint32_t crypt_offset_bytes; /* offset in bytes of crypt data on LV */
dev_t crypt_devt; /* dm-crypt device between the LV and FS */
uint64_t crypt_dev_size_bytes;
unsigned nofs:1;
unsigned unmounted:1;

View File

@@ -2298,27 +2298,20 @@ int lockd_vg_update(struct volume_group *vg)
return ret;
}
int lockd_query_lv(struct volume_group *vg, const char *lv_name, char *lv_uuid,
const char *lock_args, int *ex, int *sh)
static int _query_lv(struct cmd_context *cmd, struct volume_group *vg,
const char *lv_name, char *lv_uuid, const char *lock_args,
int *ex, int *sh)
{
daemon_reply reply;
const char *opts = NULL;
const char *reply_str;
int result;
int ret;
if (!vg_is_shared(vg))
return 1;
if (!_use_lvmlockd)
return 0;
if (!_lvmlockd_connected)
return 0;
log_debug("lockd query LV %s/%s", vg->name, lv_name);
reply = _lockd_send("query_lock_lv",
"pid = " FMTd64, (int64_t) getpid(),
"opts = %s", opts ?: "none",
"opts = %s", "none",
"vg_name = %s", vg->name,
"lv_name = %s", lv_name,
"lv_uuid = %s", lv_uuid,
@@ -2354,6 +2347,37 @@ int lockd_query_lv(struct volume_group *vg, const char *lv_name, char *lv_uuid,
return ret;
}
int lockd_query_lv(struct cmd_context *cmd, struct logical_volume *lv, int *ex, int *sh)
{
struct volume_group *vg = lv->vg;
char lv_uuid[64] __attribute__((aligned(8)));
if (cmd->lockd_lv_disable)
return 1;
if (!vg_is_shared(vg))
return 1;
if (!_use_lvmlockd)
return 0;
if (!_lvmlockd_connected)
return 0;
/* types that cannot be active concurrently will always be ex. */
if (lv_is_external_origin(lv) ||
lv_is_thin_type(lv) ||
lv_is_mirror_type(lv) ||
lv_is_raid_type(lv) ||
lv_is_vdo_type(lv) ||
lv_is_cache_type(lv)) {
*ex = 1;
return 1;
}
if (!id_write_format(&lv->lvid.id[1], lv_uuid, sizeof(lv_uuid)))
return_0;
return _query_lv(cmd, vg, lv->name, lv_uuid, lv->lock_args, ex, sh);
}
/*
* When this is called directly (as opposed to being called from
* lockd_lv), the caller knows that the LV has a lock.
@@ -2392,7 +2416,7 @@ int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
!strcmp(cmd->name, "lvchange") || !strcmp(cmd->name, "lvconvert")) {
int ex = 0, sh = 0;
if (!lockd_query_lv(vg, lv_name, lv_uuid, lock_args, &ex, &sh))
if (!_query_lv(cmd, vg, lv_name, lv_uuid, lock_args, &ex, &sh))
return 1;
if (sh) {
log_warn("WARNING: shared LV may require refresh on other hosts where it is active.");

View File

@@ -103,8 +103,7 @@ int lockd_lv_uses_lock(struct logical_volume *lv);
int lockd_lv_refresh(struct cmd_context *cmd, struct lvresize_params *lp);
int lockd_query_lv(struct volume_group *vg, const char *lv_name, char *lv_uuid,
const char *lock_args, int *ex, int *sh);
int lockd_query_lv(struct cmd_context *cmd, struct logical_volume *lv, int *ex, int *sh);
#else /* LVMLOCKD_SUPPORT */
@@ -261,8 +260,7 @@ static inline int lockd_lv_refresh(struct cmd_context *cmd, struct lvresize_para
return 0;
}
static inline int lockd_query_lv(struct volume_group *vg, const char *lv_name,
char *lv_uuid, const char *lock_args, int *ex, int *sh)
static inline int lockd_query_lv(struct cmd_context *cmd, struct logical_volume *lv, int *ex, int *sh)
{
return 0;
}

View File

@@ -5197,7 +5197,8 @@ static int _lvresize_adjust_size(struct volume_group *vg,
display_size(vg->cmd, size));
}
*extents = size / extent_size;
if (!(*extents = extents_from_size(vg->cmd, size, extent_size)))
return_0;
return 1;
}
@@ -5953,6 +5954,16 @@ static int _lv_resize_check_used(struct logical_volume *lv)
return 0;
}
if (lv_is_vdo(lv) && !lv_is_active(lv)) {
log_error("Cannot resize inactive VDO logical volume %s.", display_lvname(lv));
return 0;
}
if (lv_is_vdo_pool(lv) && !lv_is_active(lv_lock_holder(lv))) {
log_error("Cannot resize inactive VDO POOL volume %s.", display_lvname(lv));
return 0;
}
if (lv_is_external_origin(lv)) {
/*
* Since external-origin can be activated read-only,
@@ -6397,7 +6408,23 @@ static int _fs_reduce(struct cmd_context *cmd, struct logical_volume *lv,
* but the crypt dev over the LV should be shrunk to correspond with
* the LV size, so that the FS does not see an incorrect device size.
*/
if (!fsinfo.needs_reduce && fsinfo.needs_crypt && !test_mode()) {
if (!fsinfo.needs_reduce && fsinfo.needs_crypt) {
/* Check if the crypt device is already sufficiently reduced. */
if (fsinfo.crypt_dev_size_bytes <= newsize_bytes_fs) {
log_print("crypt device is already reduced to %llu bytes.",
(unsigned long long)fsinfo.crypt_dev_size_bytes);
ret = 1;
goto out;
}
if (!strcmp(lp->fsopt, "checksize")) {
log_error("crypt reduce is required (see --resizefs or cryptsetup resize.)");
ret = 0;
goto out;
}
if (test_mode()) {
ret = 1;
goto_out;
}
ret = crypt_resize_script(cmd, lv, &fsinfo, newsize_bytes_fs);
goto out;
}

View File

@@ -1376,8 +1376,14 @@ int fill_vdo_target_params(struct cmd_context *cmd,
struct dm_vdo_target_params *vtp,
uint64_t *vdo_pool_header_size,
struct profile *profile);
int check_vdo_constrains(struct cmd_context *cmd, uint64_t physical_size,
uint64_t virtual_size, struct dm_vdo_target_params *vtp);
struct vdo_pool_size_config {
uint64_t physical_size;
uint64_t virtual_size;
uint32_t block_map_cache_size_mb;
uint32_t index_memory_size_mb;
};
int check_vdo_constrains(struct cmd_context *cmd, const struct vdo_pool_size_config *cfg);
/* -- metadata/vdo_manip.c */
struct logical_volume *find_pvmove_lv(struct volume_group *vg,

View File

@@ -24,6 +24,7 @@
#include "lib/misc/lvm-exec.h"
#include <sys/sysinfo.h> // sysinfo
#include <stdarg.h>
const char *get_vdo_compression_state_name(enum dm_vdo_compression_state state)
{
@@ -258,7 +259,7 @@ static int _format_vdo_pool_data_lv(struct logical_volume *data_lv,
};
if (!(dpath = lv_path_dup(data_lv->vg->cmd->mem, data_lv))) {
log_error("Failed to build device path for VDO formating of data volume %s.",
log_error("Failed to build device path for VDO formatting of data volume %s.",
display_lvname(data_lv));
return 0;
}
@@ -425,13 +426,13 @@ struct logical_volume *convert_vdo_pool_lv(struct logical_volume *data_lv,
/* Format data LV as VDO volume */
if (format) {
if (test_mode()) {
log_verbose("Test mode: Skipping formating of VDO pool volume.");
log_verbose("Test mode: Skipping formatting of VDO pool volume.");
} else if (!_format_vdo_pool_data_lv(data_lv, vtp, &vdo_logical_size)) {
log_error("Cannot format VDO pool volume %s.", display_lvname(data_lv));
return NULL;
}
} else {
log_verbose("Skiping VDO formating %s.", display_lvname(data_lv));
log_verbose("Skiping VDO formatting %s.", display_lvname(data_lv));
/* TODO: parse existing VDO data and retrieve vdo_logical_size */
if (!*virtual_extents)
vdo_logical_size = data_lv->size;
@@ -647,39 +648,76 @@ static uint64_t _round_sectors_to_tib(uint64_t s)
return (s + ((UINT64_C(1) << (40 - SECTOR_SHIFT)) - 1)) >> (40 - SECTOR_SHIFT);
}
int check_vdo_constrains(struct cmd_context *cmd, uint64_t physical_size,
uint64_t virtual_size, struct dm_vdo_target_params *vtp)
__attribute__ ((format(printf, 3, 4)))
static int _vdo_snprintf(char **buf, size_t *bufsize, const char *format, ...)
{
uint64_t req_mb, total_mb, available_mb;
uint64_t phy_mb = _round_sectors_to_tib(UINT64_C(268) * physical_size); // 268 MiB per 1 TiB of physical size
uint64_t virt_mb = _round_1024(UINT64_C(1638) * _round_sectors_to_tib(virtual_size)); // 1.6 MiB per 1 TiB
uint64_t cache_mb = _round_1024(UINT64_C(1177) * vtp->block_map_cache_size_mb); // 1.15 MiB per 1 MiB cache size
char msg[512];
int n;
va_list ap;
if (cache_mb < 150)
va_start(ap, format);
n = vsnprintf(*buf, *bufsize, format, ap);
va_end(ap);
if (n < 0 || ((unsigned) n >= *bufsize))
return -1;
*buf += n;
*bufsize -= n;
return n;
}
int check_vdo_constrains(struct cmd_context *cmd, const struct vdo_pool_size_config *cfg)
{
static const char *_split[] = { "", " and", ",", "," };
uint64_t req_mb, total_mb, available_mb;
uint64_t phy_mb = _round_sectors_to_tib(UINT64_C(268) * cfg->physical_size); // 268 MiB per 1 TiB of physical size
uint64_t virt_mb = _round_1024(UINT64_C(1638) * _round_sectors_to_tib(cfg->virtual_size)); // 1.6 MiB per 1 TiB
uint64_t cache_mb = _round_1024(UINT64_C(1177) * cfg->block_map_cache_size_mb); // 1.15 MiB per 1 MiB cache size
char msg[512];
size_t mlen = sizeof(msg);
char *pmsg = msg;
int cnt, has_cnt;
if (cfg->block_map_cache_size_mb && (cache_mb < 150))
cache_mb = 150; // always at least 150 MiB for block map
// total required memory for VDO target
req_mb = 38 + vtp->index_memory_size_mb + virt_mb + phy_mb + cache_mb;
req_mb = 38 + cfg->index_memory_size_mb + virt_mb + phy_mb + cache_mb;
_get_memory_info(&total_mb, &available_mb);
(void)snprintf(msg, sizeof(msg), "VDO configuration needs %s RAM for physical volume size %s, "
"%s RAM for virtual volume size %s, %s RAM for block map cache size %s and "
"%s RAM for index memory.",
display_size(cmd, phy_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, physical_size),
display_size(cmd, virt_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, virtual_size),
display_size(cmd, cache_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, ((uint64_t)vtp->block_map_cache_size_mb) << (20 - SECTOR_SHIFT)),
display_size(cmd, ((uint64_t)vtp->index_memory_size_mb) << (20 - SECTOR_SHIFT)));
has_cnt = cnt = (phy_mb ? 1 : 0) +
(virt_mb ? 1 : 0) +
(cfg->block_map_cache_size_mb ? 1 : 0) +
(cfg->index_memory_size_mb ? 1 : 0);
if (phy_mb)
(void)_vdo_snprintf(&pmsg, &mlen, " %s RAM for physical volume size %s%s",
display_size(cmd, phy_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, cfg->physical_size), _split[--cnt]);
if (virt_mb)
(void)_vdo_snprintf(&pmsg, &mlen, " %s RAM for virtual volume size %s%s",
display_size(cmd, virt_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, cfg->virtual_size), _split[--cnt]);
if (cfg->block_map_cache_size_mb)
(void)_vdo_snprintf(&pmsg, &mlen, " %s RAM for block map cache size %s%s",
display_size(cmd, cache_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, ((uint64_t)cfg->block_map_cache_size_mb) << (20 - SECTOR_SHIFT)),
_split[--cnt]);
if (cfg->index_memory_size_mb)
(void)_vdo_snprintf(&pmsg, &mlen, " %s RAM for index memory",
display_size(cmd, ((uint64_t)cfg->index_memory_size_mb) << (20 - SECTOR_SHIFT)));
if (req_mb > available_mb) {
log_error("Not enough free memory for VDO target. %s RAM is required, but only %s RAM is available.",
display_size(cmd, req_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, available_mb << (20 - SECTOR_SHIFT)));
log_print_unless_silent("%s", msg);
if (has_cnt)
log_print_unless_silent("VDO configuration needs%s.", msg);
return 0;
}
@@ -687,7 +725,8 @@ int check_vdo_constrains(struct cmd_context *cmd, uint64_t physical_size,
display_size(cmd, req_mb << (20 - SECTOR_SHIFT)),
display_size(cmd, available_mb << (20 - SECTOR_SHIFT)));
log_verbose("%s", msg);
if (has_cnt)
log_verbose("VDO configuration needs%s.", msg);
return 1;
}

View File

@@ -3855,21 +3855,20 @@ static int _lvactiveexclusively_disp(struct dm_report *rh, struct dm_pool *mem,
const void *data, void *private)
{
const struct logical_volume *lv = (const struct logical_volume *) data;
int active_exclusively, _sh = 0;
int ex = 0, sh = 0;
if (!activation())
return _binary_undef_disp(rh, mem, field, private);
active_exclusively = lv_is_active(lv);
ex = lv_is_active(lv);
if (active_exclusively && vg_is_shared(lv->vg)) {
active_exclusively = 0;
if (!lockd_query_lv(lv->vg, lv->name, lv_uuid_dup(NULL, lv),
lv->lock_args, &active_exclusively, &_sh))
if (ex && vg_is_shared(lv->vg)) {
ex = 0;
if (!lockd_query_lv(lv->vg->cmd, (struct logical_volume *)lv, &ex, &sh))
return _binary_undef_disp(rh, mem, field, private);
}
return _binary_disp(rh, mem, field, active_exclusively, GET_FIRST_RESERVED_NAME(lv_active_exclusively_y), private);
return _binary_disp(rh, mem, field, ex, GET_FIRST_RESERVED_NAME(lv_active_exclusively_y), private);
}
static int _lvmergefailed_disp(struct dm_report *rh, struct dm_pool *mem,

View File

@@ -23,6 +23,7 @@
#include "lib/metadata/metadata.h"
#include "lib/metadata/lv_alloc.h"
#include "lib/metadata/segtype.h"
#include "lib/mm/memlock.h"
#include "base/memory/zalloc.h"
static const char _vdo_module[] = MODULE_NAME_VDO;
@@ -354,6 +355,27 @@ static int _vdo_pool_target_status_compatible(const char *type)
return (strcmp(type, TARGET_NAME_VDO) == 0);
}
static int _vdo_check(struct cmd_context *cmd, const struct lv_segment *seg)
{
struct vdo_pool_size_config cfg = { 0 };
if (!lv_vdo_pool_size_config(seg->lv, &cfg))
return_0;
/* Check if we are just adding more size to the already running vdo pool */
if (seg->lv->size >= cfg.physical_size)
cfg.physical_size = seg->lv->size - cfg.physical_size;
if (get_vdo_pool_virtual_size(seg) >= cfg.virtual_size)
cfg.virtual_size = get_vdo_pool_virtual_size(seg) - cfg.virtual_size;
if (seg->vdo_params.block_map_cache_size_mb >= cfg.block_map_cache_size_mb)
cfg.block_map_cache_size_mb = seg->vdo_params.block_map_cache_size_mb - cfg.block_map_cache_size_mb;
if (seg->vdo_params.index_memory_size_mb >= cfg.index_memory_size_mb)
cfg.index_memory_size_mb = seg->vdo_params.index_memory_size_mb - cfg.index_memory_size_mb;
return check_vdo_constrains(cmd, &cfg);
}
static int _vdo_pool_add_target_line(struct dev_manager *dm,
struct dm_pool *mem,
struct cmd_context *cmd,
@@ -374,7 +396,7 @@ static int _vdo_pool_add_target_line(struct dev_manager *dm,
return 0;
}
if (!check_vdo_constrains(cmd, seg->lv->size, seg_lv(seg, 0)->size, &seg->vdo_params))
if (!critical_section() && !_vdo_check(cmd, seg))
return_0;
if (!(vdo_pool_name = dm_build_dm_name(mem, seg->lv->vg->name, seg->lv->name, lv_layer(seg->lv))))

View File

@@ -138,7 +138,6 @@ static char *_align(char *ptr, unsigned int a)
return (char *) (((unsigned long) ptr + agn) & ~agn);
}
#ifdef DM_IOCTLS
static unsigned _kernel_major = 0;
static unsigned _kernel_minor = 0;
static unsigned _kernel_release = 0;
@@ -181,6 +180,9 @@ int get_uname_version(unsigned *major, unsigned *minor, unsigned *release)
return 1;
}
#ifdef DM_IOCTLS
/*
* Set number to NULL to populate _dm_bitset - otherwise first
* match is returned.

View File

@@ -262,7 +262,7 @@ endif
# end of fPIC protection
endif
# Combination of DEBUG_POOL and DEBUG_ENFORCE_POOL_LOCKING is not suppored.
# Combination of DEBUG_POOL and DEBUG_ENFORCE_POOL_LOCKING is not supported.
#DEFS += -DDEBUG_POOL
# Default pool locking is using the crc checksum. With mprotect memory
# enforcing compilation faulty memory write could be easily found.

View File

@@ -182,9 +182,10 @@ ifndef MAKEFLAGS
MAKEFLAGS = @JOBS@
endif
ifneq (1, $(firstword $(V)))
MAKEFLAGS += --no-print-directory
endif
# Hiding dir entering makes hard for editors to look for files
#ifneq (1, $(firstword $(V)))
#MAKEFLAGS += --no-print-directory
#endif
# Handle installation of files
ifeq ("@WRITE_INSTALL@", "yes")
@@ -301,7 +302,7 @@ ifeq ("@BUILD_DMEVENTD@", "yes")
DMEVENT_LIBS = -L$(top_builddir)/daemons/dmeventd -ldevmapper-event -L$(interfacebuilddir) -ldevmapper
endif
# Combination of DEBUG_POOL and DEBUG_ENFORCE_POOL_LOCKING is not suppored.
# Combination of DEBUG_POOL and DEBUG_ENFORCE_POOL_LOCKING is not supported.
#DEFS += -DDEBUG_POOL
# Default pool locking is using the crc checksum. With mprotect memory
# enforcing compilation faulty memory write could be easily found.

View File

@@ -103,7 +103,7 @@ when it's been filled above configured threshold
\fBactivation/thin_pool_autoextend_threshold\fP.
If the command fails, dmeventd thin plugin will keep
retrying execution with increasing time delay between
retries upto 42 minutes.
retries up to 42 minutes.
User may also configure external command to support more advanced
maintenance operations of a thin pool.
Such external command can e.g. remove some unneeded snapshots,
@@ -133,7 +133,7 @@ when it's been filled above the configured threshold
\fBactivation/vdo_pool_autoextend_threshold\fP.
If the command fails, dmeventd vdo plugin will keep
retrying execution with increasing time delay between
retries upto 42 minutes.
retries up to 42 minutes.
User may also configure external command to support more advanced
maintenance operations of a VDO pool.
Such external command can e.g. remove some unneeded space
@@ -166,7 +166,7 @@ actual usage of VDO pool data volume. Variable is not set when error event
is processed.
.TP
.B LVM_RUN_BY_DMEVENTD
Variable is set by thin and vdo plugin to prohibit recursive interation
Variable is set by thin and vdo plugin to prohibit recursive interaction
with dmeventd by any executed lvm2 command from
a thin_command, vdo_command environment.
.

View File

@@ -572,7 +572,7 @@ See below for more information on the table format.
.B --udevcookie \fIcookie
Use cookie for udev synchronisation.
Note: Same cookie should be used for same type of operations i.e. creation of
multiple different devices. It's not adviced to combine different
multiple different devices. It's not advised to combine different
operations on the single device.
.
.TP

View File

@@ -292,7 +292,7 @@ region identifier.
.
.TP
.B --area
When peforming a list or report, include objects of type area in the
When performing a list or report, include objects of type area in the
results.
.
.TP
@@ -317,7 +317,7 @@ argument is zero reports will continue to repeat until interrupted.
.
.TP
.B --group
When peforming a list or report, include objects of type group in the
When performing a list or report, include objects of type group in the
results.
.
.TP
@@ -389,7 +389,7 @@ region as a comma separated list of latency values. Latency values are
given in nanoseconds. An optional unit suffix of
.BR ns , us , ms ,
or \fBs\fP may be given after each value to specify units of
nanoseconds, microseconds, miliseconds or seconds respectively.
nanoseconds, microseconds, milliseconds or seconds respectively.
.
.TP
.B --histogram
@@ -456,7 +456,7 @@ default program ID for dmstats-managed regions is "dmstats".
.
.TP
.B --region
When peforming a list or report, include objects of type region in the
When performing a list or report, include objects of type region in the
results.
.
.TP
@@ -530,7 +530,7 @@ Produce additional output.
.HP
.CMD_CLEAR
.br
Instructs the kernel to clear statistics counters for the speficied
Instructs the kernel to clear statistics counters for the specified
regions (with the exception of in-flight IO counters).
.
.HP
@@ -556,10 +556,10 @@ configured interval duration) on the final bin.
.sp
Latencies are given in nanoseconds. An optional unit suffix of ns, us,
ms, or s may be given after each value to specify units of nanoseconds,
microseconds, miliseconds or seconds respectively, so for example, 10ms
microseconds, milliseconds or seconds respectively, so for example, 10ms
is equivalent to 10000000. Latency values with a precision of less than
one milisecond can only be used when precise timestamps are enabled: if
\fB--precise\fP is not given and values less than one milisecond are
one millisecond can only be used when precise timestamps are enabled: if
\fB--precise\fP is not given and values less than one millisecond are
used it will be enabled automatically.
.sp
An optional \fBprogram_id\fP or \fBuser_data\fP string may be associated
@@ -627,7 +627,7 @@ group.
The list of regions to be grouped is specified with \fB--regions\fP
and an optional alias may be assigned with \fB--alias\fP. The set of
regions is given as a comma-separated list of region identifiers. A
continuous range of identifers spanning from \fBR1\fP to \fBR2\fP may
continuous range of identifiers spanning from \fBR1\fP to \fBR2\fP may
be expressed as '\fBR1\fP-\fBR2\fP'.
.sp
Regions that have a histogram configured can be grouped: in this case
@@ -711,7 +711,7 @@ that were previously created with \fB--filemap\fP, either directly,
or by starting the monitoring daemon, \fBdmfilemapd\fP.
.sp
This will add and remove regions to reflect changes in the allocated
extents of the file on-disk, since the time that it was crated or last
extents of the file on-disk, since the time that it was created or last
updated.
.sp
Use of this command is not normally needed since the \fBdmfilemapd\fP
@@ -1090,13 +1090,13 @@ bounds.
.B hist_bounds
A list of the histogram boundary values for the current statistics area
in order of ascending latency value. The values are expressed in whole
units of seconds, miliseconds, microseconds or nanoseconds with a suffix
units of seconds, milliseconds, microseconds or nanoseconds with a suffix
indicating the unit.
.TP
.B hist_ranges
A list of the histogram bin ranges for the current statistics area in
order of ascending latency value. The values are expressed as
"LOWER-UPPER" in whole units of seconds, miliseconds, microseconds or
"LOWER-UPPER" in whole units of seconds, milliseconds, microseconds or
nanoseconds with a suffix indicating the unit.
.TP
.B hist_bins

View File

@@ -745,7 +745,7 @@ See \fBlvmraid\fP(7) for more information.
.br
Start (yes) or stop (no) monitoring an LV with dmeventd.
dmeventd monitors kernel events for an LV, and performs
automated maintenance for the LV in reponse to specific events.
automated maintenance for the LV in response to specific events.
See \fBdmeventd\fP(8) for more information.
.
.HP

View File

@@ -23,7 +23,7 @@ The
type is equivalent to the
.B striped
type when one stripe exists.
In that case, the types can sometimes be used interchangably.
In that case, the types can sometimes be used interchangeably.
.P
In most cases, the
.B mirror

View File

@@ -196,7 +196,7 @@ The
type is equivalent to the
.B striped
type when one stripe exists.
In that case, the types can sometimes be used interchangably.
In that case, the types can sometimes be used interchangeably.
.P
In most cases, the
.B mirror

View File

@@ -1176,7 +1176,7 @@ See \fBlvmraid\fP(7) for more information.
.br
Start (yes) or stop (no) monitoring an LV with dmeventd.
dmeventd monitors kernel events for an LV, and performs
automated maintenance for the LV in reponse to specific events.
automated maintenance for the LV in response to specific events.
See \fBdmeventd\fP(8) for more information.
.
.HP

View File

@@ -73,7 +73,7 @@ using dm-writecache (with cachevol):
.P
# lvconvert --type writecache --cachevol fast vg/main
.P
For more alteratives see:
For more alternatives see:
.br
dm-cache command shortcut
.br
@@ -252,7 +252,7 @@ when selecting the writecache cachevol size and the writecache block size.
.P
.IP \[bu] 2
writecache block size 4096: each 100 GiB of writecache cachevol uses
slighly over 2 GiB of system memory.
slightly over 2 GiB of system memory.
.IP \[bu] 2
writecache block size 512: each 100 GiB of writecache cachevol uses
a little over 16 GiB of system memory.
@@ -311,11 +311,11 @@ read requests.
.TP
autocommit_blocks = <count>
When the application writes this amount of blocks without issuing the
FLUSH request, the blocks are automatically commited.
FLUSH request, the blocks are automatically committed.
.
.TP
autocommit_time = <milliseconds>
The data is automatically commited if this time passes and no FLUSH
The data is automatically committed if this time passes and no FLUSH
request is received.
.
.TP
@@ -478,7 +478,7 @@ cache, in which small reads and writes cause large sections of an LV to be
stored in the cache. It can also require increasing migration threshold
which defaults to 2048 sectors (1 MiB). Lvm2 ensures migration threshold is
at least 8 chunks in size. This may in some cases result in very
high bandwidth load of transfering data between the cache LV and its
high bandwidth load of transferring data between the cache LV and its
cache origin LV. However, choosing a chunk size that is too small
can result in more overhead trying to manage the numerous chunks that
become mapped into the cache. Overhead can include both excessive CPU

View File

@@ -96,7 +96,7 @@ is used for loop devices, the backing file name repored by sysfs.
the device name is used if no other type applies.
.P
The default choice for device ID type can be overriden using lvmdevices
The default choice for device ID type can be overridden using lvmdevices
--addev --deviceidtype <type>. If the specified type is available for the
device it will be used, otherwise the device will be added using the type
that would otherwise be chosen.

View File

@@ -163,7 +163,7 @@ is used for loop devices, the backing file name repored by sysfs.
the device name is used if no other type applies.
.P
The default choice for device ID type can be overriden using lvmdevices
The default choice for device ID type can be overridden using lvmdevices
--addev --deviceidtype <type>. If the specified type is available for the
device it will be used, otherwise the device will be added using the type
that would otherwise be chosen.

View File

@@ -1197,7 +1197,7 @@ But let's still use the original "," character for list_item_separator
for subsequent examples.
.P
Format for any of time values displayed in reports can be configured with
\fBreport/time_format\fP configuretion setting. By default complete date
\fBreport/time_format\fP configuration setting. By default complete date
and time is displayed, including timezone.
.P
.nf
@@ -1302,11 +1302,11 @@ binary_values_as_numeric=1
.SS Changing output format
.
LVM can output reports in different formats - use \fBreport/output_format\fP
configuration setting (or \fB--reportformat\fP command line option) to swith
configuration setting (or \fB--reportformat\fP command line option) to switch
the report output format.
.P
Currently, LVM supports these outpout formats:
Currently, LVM supports these output formats:
.RS
- \fB"basic"\fP (all the examples we used above used this format),
.br

View File

@@ -1,6 +1,6 @@
vgcfgrestore restores the metadata of a VG from a text back up file
produced by \fBvgcfgbackup\fP. This writes VG metadata onto the devices
specifed in back up file.
specified in back up file.
.P
A back up file can be specified with \fB--file\fP. If no backup file is
specified, the most recent one is used. Use \fB--list\fP for a list of

View File

@@ -63,7 +63,7 @@ vgcfgrestore \(em Restore volume group configuration
.
vgcfgrestore restores the metadata of a VG from a text back up file
produced by \fBvgcfgbackup\fP. This writes VG metadata onto the devices
specifed in back up file.
specified in back up file.
.P
A back up file can be specified with \fB--file\fP. If no backup file is
specified, the most recent one is used. Use \fB--list\fP for a list of

View File

@@ -684,7 +684,7 @@ See \fBlvm.conf\fP(5) for more information about profiles.
.br
Start (yes) or stop (no) monitoring an LV with dmeventd.
dmeventd monitors kernel events for an LV, and performs
automated maintenance for the LV in reponse to specific events.
automated maintenance for the LV in response to specific events.
See \fBdmeventd\fP(8) for more information.
.
.HP

View File

@@ -224,7 +224,7 @@ fsreduce() {
cryptresize() {
NEWSIZESECTORS=$(($NEWSIZEBYTES/512))
logmsg "cryptsetup resize ${NEWSIZESECTORS} sectors ${DEVPATH}"
cryptresize resize --size "$NEWSIZESECTORS" "$DEVPATH"
cryptsetup resize --size "$NEWSIZESECTORS" "$DEVPATH"
if [ $? -eq 0 ]; then
logmsg "cryptsetup done"
else

View File

@@ -6,7 +6,7 @@
set -euE -o pipefail
# tool for formating 'old' VDO metadata format
# tool for formatting 'old' VDO metadata format
LVM_VDO_FORMAT=${LVM_VDO_FORMAT-"oldvdoformat"}
# tool for shifting VDO metadata header by 2MiB
LVM_VDO_PREPARE=${LVM_VDO_PREPARE-"oldvdoprepareforlvm"}

View File

@@ -135,6 +135,36 @@ cryptsetup close $cr
lvchange -an $vg/$lv
lvremove $vg/$lv
# lvresize uses helper only for crypt dev resize
# because the fs was resized separately beforehand
lvcreate -n $lv -L 456M $vg
echo 93R4P4pIqAH8 | cryptsetup luksFormat -i1 --type luks1 "$DM_DEV_DIR/$vg/$lv"
echo 93R4P4pIqAH8 | cryptsetup luksOpen "$DM_DEV_DIR/$vg/$lv" $cr
mkfs.ext4 /dev/mapper/$cr
mount /dev/mapper/$cr "$mount_dir"
dd if=/dev/zero of="$mount_dir/zeros1" bs=1M count=100 conv=fdatasync
df --output=size "$mount_dir" |tee df1
# resize only the fs (to 256M), not the crypt dev or LV
umount "$mount_dir"
resize2fs /dev/mapper/$cr 262144k
mount /dev/mapper/$cr "$mount_dir"
# this lvresize will not resize the fs (which is already reduced
# to smaller than the requested LV size), but lvresize will use
# the helper to resize the crypt dev before resizing the LV.
# Using --fs resize is required to allow lvresize to look above
# the lv at crypt&fs layers for potential resizing. Without
# --fs resize, lvresize fails because it sees that crypt resize
# is needed and --fs resize is needed to enable that.
not lvresize -L-100 $vg/$lv
lvresize -L-100M --fs resize $vg/$lv
check lv_field $vg/$lv lv_size "356.00m"
df --output=size "$mount_dir" |tee df2
not diff df1 df2
umount "$mount_dir"
cryptsetup close $cr
lvchange -an $vg/$lv
lvremove $vg/$lv
# test with LUKS2?
vgremove -ff $vg

View File

@@ -32,4 +32,19 @@ check lv_field $vg/${lv2}_vdata size "6.00g"
lvresize -L6G $vg/$lv1
check lv_field $vg/$lv1 size "6.00g"
# Check too large size
not lvresize -L4P $vg/$lv1 2>err
grep "Volume too large" err
# Can't resize inactive VDO
lvchange -an $vg
not lvresize -L10G $vg/$lv1 2>err
grep "Cannot resize inactive" err
not lvresize -L10G $vg/$lv2 2>err
grep "Cannot resize inactive" err
not lvresize -L10G $vg/${lv2}_vdata 2>err
grep "Cannot resize inactive" err
vgremove -ff $vg

View File

@@ -14,7 +14,7 @@ SKIP_WITH_LVMPOLLD=1
. lib/inittest
aux prepare_devs 2
aux prepare_devs 3
vgcreate $SHARED --metadatasize 128k $vg1 "$dev1"
lvcreate -l100%FREE -n $lv1 $vg1
@@ -85,6 +85,25 @@ vgchange -an $vg1 $vg2
vgremove -ff $vg1 $vg2
pvremove "$dev1"
pvremove "$dev2"
# Test vgimportclone with incomplete list of devs, and with nomda PV.
vgcreate $SHARED --vgmetadatacopies 2 $vg1 "$dev1" "$dev2" "$dev3"
lvcreate -l1 -an $vg1
not vgimportclone -n newvgname "$dev1"
not vgimportclone -n newvgname "$dev2"
not vgimportclone -n newvgname "$dev3"
not vgimportclone -n newvgname "$dev1" "$dev2"
not vgimportclone -n newvgname "$dev1" "$dev3"
not vgimportclone -n newvgname "$dev2" "$dev3"
vgimportclone -n ${vg1}new "$dev1" "$dev2" "$dev3"
lvs ${vg1}new
vgremove -y ${vg1}new
pvremove "$dev1"
pvremove "$dev2"
pvremove "$dev3"
# Verify that if we provide the -n|--basevgname,
# the number suffix is not added unnecessarily.
vgcreate $SHARED --metadatasize 128k A${vg1}B "$dev1"

View File

@@ -5464,7 +5464,7 @@ static int _lvconvert_to_vdopool_single(struct cmd_context *cmd,
vdo_pool_zero = arg_int_value(cmd, zero_ARG, 1);
log_warn("WARNING: Converting logical volume %s to VDO pool volume %s formating.",
log_warn("WARNING: Converting logical volume %s to VDO pool volume %s formatting.",
display_lvname(lv), vdo_pool_zero ? "with" : "WITHOUT");
if (vdo_pool_zero)

View File

@@ -1762,8 +1762,12 @@ static int _lvcreate_single(struct cmd_context *cmd, const char *vg_name,
if (!_update_extents_params(vg, lp, lcp))
goto_out;
if (seg_is_vdo(lp) && !check_vdo_constrains(cmd, (uint64_t)lp->extents * vg->extent_size,
lcp->virtual_size, &lp->vdo_params))
if (seg_is_vdo(lp) &&
!check_vdo_constrains(cmd, &(struct vdo_pool_size_config) {
.physical_size = (uint64_t)lp->extents * vg->extent_size,
.virtual_size = lcp->virtual_size,
.block_map_cache_size_mb = lp->vdo_params.block_map_cache_size_mb,
.index_memory_size_mb = lp->vdo_params.index_memory_size_mb }))
goto_out;
if (seg_is_thin(lp) && !_validate_internal_thin_processing(lp))

View File

@@ -44,6 +44,8 @@ struct pvscan_aa_params {
*/
static struct volume_group *saved_vg;
static int _found_filter_symlinks;
static int _pvscan_display_pv(struct cmd_context *cmd,
struct physical_volume *pv,
struct pvscan_params *params)
@@ -930,6 +932,7 @@ static int _get_args_devs(struct cmd_context *cmd, struct dm_list *pvscan_args,
if (!cmd->enable_devices_file && !cmd->enable_devices_list &&
(_filter_uses_symlinks(cmd, devices_filter_CFG) ||
_filter_uses_symlinks(cmd, devices_global_filter_CFG))) {
_found_filter_symlinks = 1;
log_print_pvscan(cmd, "finding all devices for filter symlinks.");
dev_cache_scan(cmd);
}
@@ -1550,6 +1553,18 @@ static int _pvscan_cache_args(struct cmd_context *cmd, int argc, char **argv,
cmd->filter_nodata_only = 1;
if ((dm_list_size(&pvscan_devs) == 1) && _found_filter_symlinks) {
char *env_str;
struct dm_list *env_aliases;
devl = dm_list_item(dm_list_first(&pvscan_devs), struct device_list);
if ((env_str = getenv("DEVLINKS"))) {
env_aliases = str_to_str_list(cmd->mem, env_str, " ", 0);
dm_list_splice(&devl->dev->aliases, env_aliases);
}
/* A symlink from env may not actually exist so don't try to use it. */
dev_cache_disable_preferred_names();
}
dm_list_iterate_items_safe(devl, devl2, &pvscan_devs) {
if (!cmd->filter->passes_filter(cmd, cmd->filter, devl->dev, NULL)) {
log_print_pvscan(cmd, "%s excluded: %s.",

View File

@@ -203,7 +203,7 @@ int vgimportclone(struct cmd_context *cmd, int argc, char **argv)
struct device *dev;
struct device_list *devl;
struct dm_list other_devs;
struct volume_group *vg, *error_vg;
struct volume_group *vg, *error_vg = NULL;
const char *vgname;
char base_vgname[NAME_LEN] = { 0 };
char tmp_vgname[NAME_LEN] = { 0 };
@@ -322,7 +322,7 @@ int vgimportclone(struct cmd_context *cmd, int argc, char **argv)
goto out;
}
if (!(vgname = lvmcache_vgname_from_info(info))) {
if (!(vgname = lvmcache_vgname_from_info(info)) || is_orphan_vg(vgname)) {
/* The PV may not have metadata, this will be resolved in
the process_each_vg/vg_read at the end. */
continue;
@@ -503,6 +503,8 @@ retry_name:
}
ret = ECMD_PROCESSED;
out:
if (error_vg)
release_vg(error_vg);
unlock_devices_file(cmd);
return ret;
}

View File

@@ -87,7 +87,7 @@ LABEL="systemd_background"
#
# In this case, we simply set up the dependency between the device and the pvscan
# job using SYSTEMD_ALIAS (which sets up a simplified device identifier that
# allows using "BindsTo" in the sytemd unit file) and SYSTEMD_WANTS (which tells
# allows using "BindsTo" in the systemd unit file) and SYSTEMD_WANTS (which tells
# systemd to start the pvscan job once the device is ready).
# We need to set these variables for both "add" and "change" events, otherwise
# systemd may loose information about the device/unit dependencies.

View File

@@ -20,7 +20,7 @@ include $(top_builddir)/make.tmpl
DM_RULES=10-dm.rules 13-dm-disk.rules 95-dm-notify.rules
LVM_RULES=11-dm-lvm.rules 69-dm-lvm.rules
DM_DIR=$(shell $(GREP) "\#define DM_DIR" $(top_srcdir)/libdm/misc/dm-ioctl.h | $(AWK) '{print $$3}')
DM_DIR=$(shell $(GREP) "#define DM_DIR" $(top_srcdir)/libdm/misc/dm-ioctl.h | $(AWK) '{print $$3}')
BINDIR=@bindir@
ifeq ("@UDEV_RULE_EXEC_DETECTION@", "yes")