mirror of git://sourceware.org/git/lvm2.git synced 2025-10-09 23:33:17 +03:00

Compare commits


1 Commit

Author SHA1 Message Date
Marian Csontos
6887846559 [WIP] Disable dlm for RHEL10 2024-09-09 16:06:59 +02:00
70 changed files with 518 additions and 2067 deletions

@@ -1 +1 @@
2.03.28(2)-git (2024-10-02)
2.03.27(2)-git (2024-08-23)

@@ -1 +1 @@
1.02.202-git (2024-10-02)
1.02.201-git (2024-08-23)

@@ -1,14 +1,5 @@
Version 2.03.28 -
Version 2.03.27 -
==================
Version 2.03.27 - 02nd October 2024
===================================
Fix swap device size detection using blkid for lvresize/lvreduce/lvextend.
Detect GPT partition table and pass partition filter if no partitions defined.
Add global/sanlock_align_size option to configure sanlock lease size.
Disable mem locking when activation/reserved_stack or reserved_memory is 0.
Fix locking issues in lvmlockd leaving thin pool locked.
Deprecate vdo settings vdo_write_policy and vdo_use_metadata_hints.
Lots of typo fixes across lvm2 code base (codespell).
Corrected integrity parameter interleave_sectors for DM table line.
Ignore -i|--stripes, -I|--stripesize for lvextend on raid0 LV, like raid10.
@@ -157,7 +148,6 @@ Version 2.03.17 - 10th November 2022
Switch to use mallinfo2 and use it only with glibc.
Error out in lvm shell if using a cmd argument not supported in the shell.
Fix lvm shell's lastlog command to report previous pre-command failures.
Keep libaio locked in memory in critical section.
Extend VDO and VDOPOOL without flushing and locking fs.
Add --valuesonly option to lvmconfig to print only values without keys.
Updates configure with recent autoconf tooling.
@@ -439,7 +429,7 @@ Version 2.03.03 - 07th June 2019
Improve -lXXX%VG modifier which improves cache segment estimation.
Ensure migration_threshold for cache is at least 8 chunks.
Restore missing man info lvcreate --zero for thin-pools.
Drop misleading comment for metadata minimum_io_size for VDO segment.
Drop misleadning comment for metadata minimum_io_size for VDO segment.
Add device hints to reduce scanning.
Introduce LVM_SUPPRESS_SYSLOG to suppress syslog usage by generator.
Fix generator querying lvmconfig unpresent config option.
@@ -565,7 +555,7 @@ Version 2.02.177 - 18th December 2017
Fix lvmlockd to use pool lock when accessing _tmeta volume.
Report expected sanlock_convert errors only when retries fail.
Avoid blocking in sanlock_convert on SH to EX lock conversion.
Deactivate missing raid LV legs (_rimage_X-missing_Y_Z) on deactivation.
Deactivate missing raid LV legs (_rimage_X-missing_Y_Z) on decativation.
Skip read-modify-write when entire block is replaced.
Categorise I/O with reason annotations in debug messages.
Allow extending of raid LVs created with --nosync after a failed repair.
@@ -587,7 +577,7 @@ Version 2.02.177 - 18th December 2017
Check raid reshape flags in vg_validate().
Add support for pvmove of cache and snapshot origins.
Avoid using precommitted metadata for suspending pvmove tree.
Enhance pvmove locking.
Ehnance pvmove locking.
Deactivate activated LVs on error path when pvmove activation fails.
Add "io" to log/debug_classes for logging low-level I/O.
Eliminate redundant nested VG metadata in VG struct.
@@ -1164,7 +1154,7 @@ Version 2.02.143 - 21st February 2016
Fix error path when sending thin-pool message fails in update_pool_lv().
Support reporting CheckNeeded and Fail state for thin-pool and thin LV.
For failing thin-pool and thin volume correctly report percentage as INVALID.
Report -1, not 'unknown' for lv_{snapshot_invalid,merge_failed} with --binary.
Report -1, not 'unkown' for lv_{snapshot_invalid,merge_failed} with --binary.
Add configure --enable-dbus-service for an LVM D-Bus service.
Replace configure --enable-python_bindings with python2 and python3 versions.
If PV belongs to some VG and metadata missing, skip it if system ID is used.
@@ -1193,7 +1183,7 @@ Version 2.02.141 - 25th January 2016
Restore support for command breaking in process_each_lv_in_vg() (2.02.118).
Use correct mempool when process_each_lv_in_vg() (2.02.118).
Fix lvm.8 man to show again prohibited suffixes.
Fix configure to set proper use_blkid_wiping if autodetection as disabled.
Fix configure to set proper use_blkid_wiping if autodetected as disabled.
Initialise udev in clvmd for use in device scanning. (2.02.116)
Add seg_le_ranges report field for common format when displaying seg devices.
Honour report/list_item_separator for seg_metadata_le_ranges report field.
@@ -4712,7 +4702,7 @@ Version 2.02.11 - 12th October 2006
Capture error messages in clvmd and pass them back to the user.
Remove unused #defines from filter-md.c.
Make clvmd restart init script wait until clvmd has died before starting it.
Add -R to clvmd which tells running clvmd to reload their device cache.
Add -R to clvmd which tells running clvmds to reload their device cache.
Add LV column to reports listing kernel modules needed for activation.
Show available fields if report given invalid field. (e.g. lvs -o list)
Add timestamp functions with --disable-realtime configure option.

@@ -1,11 +1,6 @@
Version 1.02.202 -
Version 1.02.201 -
===================
Version 1.02.201 - 02nd October 2024
====================================
Cleanup udev sync semaphore if dm_{udev_create,task_set}_cookie fails.
Improve error messages on failed udev cookie create/inc/dec operation.
Version 1.02.200 - 23rd August 2024
===================================

@@ -646,6 +646,13 @@ allocation {
# This configuration option has an automatic default value.
# vdo_use_deduplication = 1
# Configuration option allocation/vdo_use_metadata_hints.
# Enables or disables whether VDO volume should tag its latency-critical
# writes with the REQ_SYNC flag. Some device mapper targets such as dm-raid5
# process writes with this flag at a higher priority.
# This configuration option has an automatic default value.
# vdo_use_metadata_hints = 1
# Configuration option allocation/vdo_minimum_io_size.
# The minimum IO size for VDO volume to accept, in bytes.
# Valid values are 512 or 4096. The recommended value is 4096.
@@ -744,6 +751,19 @@ allocation {
# This configuration option has an automatic default value.
# vdo_physical_threads = 1
# Configuration option allocation/vdo_write_policy.
# Specifies the write policy:
# auto - VDO will check the storage device and determine whether it supports flushes.
# If it does, VDO will run in async mode, otherwise it will run in sync mode.
# sync - Writes are acknowledged only after data is stably written.
# This policy is not supported if the underlying storage is not also synchronous.
# async - Writes are acknowledged after data has been cached for writing to stable storage.
# Data which has not been flushed is not guaranteed to persist in this mode.
# async-unsafe - Writes are handled like 'async' but there is no guarantee of the atomicity async provides.
# This mode should only be used for better performance when atomicity is not required.
# This configuration option has an automatic default value.
# vdo_write_policy = "auto"
# Configuration option allocation/vdo_max_discard.
# Specified the maximum size of discard bio accepted, in 4096 byte blocks.
# I/O requests to a VDO volume are normally split into 4096-byte blocks,
@@ -1192,16 +1212,6 @@ global {
# This configuration option has an automatic default value.
# sanlock_lv_extend = 256
# Configuration option global/sanlock_align_size.
# The sanlock lease size in MiB to use on disks with a 4K sector size.
# Possible values are 1,2,4,8. The default is 8, which supports up to
# 2000 hosts (and max host_id 2000.) Smaller values support smaller
# numbers of max hosts (and max host_ids): 250, 500, 1000, 2000 for
# lease sizes 1,2,4,8. Disks with 512 byte sectors always use 1MiB
# leases and support 2000 hosts, and are not affected by this setting.
# This configuration option has an automatic default value.
# sanlock_align_size = 8
# Configuration option global/lvmlockctl_kill_command.
# The command that lvmlockctl --kill should use to force LVs offline.
# The lvmlockctl --kill command is run when a shared VG has lost
@@ -1505,14 +1515,12 @@ activation {
# Configuration option activation/reserved_stack.
# Stack size in KiB to reserve for use while devices are suspended.
# Insufficient reserve risks I/O deadlock during device suspension.
# Value 0 disables memory locking.
# This configuration option has an automatic default value.
# reserved_stack = 64
# Configuration option activation/reserved_memory.
# Memory size in KiB to reserve for use while devices are suspended.
# Insufficient reserve risks I/O deadlock during device suspension.
# Value 0 disables memory locking.
# This configuration option has an automatic default value.
# reserved_memory = 8192
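The sanlock_align_size documentation removed in the hunk above maps lease sizes to maximum host counts. A minimal sketch of that mapping, restating only the option text (the constants below are taken from that documentation, not queried from sanlock):

```python
# Documented max host_id per lease size on 4K-sector disks, restated
# from the sanlock_align_size option text above (assumption: the text
# is authoritative; nothing here talks to sanlock itself).
MAX_HOSTS_BY_ALIGN_MB = {1: 250, 2: 500, 4: 1000, 8: 2000}

def max_hosts(align_mb: int, sector_size: int = 4096) -> int:
    """Return the documented max host_id for a given lease size.

    Disks with 512-byte sectors always use 1 MiB leases and support
    2000 hosts regardless of the align setting.
    """
    if sector_size == 512:
        return 2000
    return MAX_HOSTS_BY_ALIGN_MB[align_mb]
```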

@@ -49,10 +49,9 @@ local {
# This configuration option does not have a default value defined.
# Configuration option local/host_id.
# The sanlock host_id used by lvmlockd. This must be unique among all the hosts
# using shared VGs with sanlock. Accepted values are 1-2000, except when sanlock_align_size
# is configured to 1, 2 or 4, which correspond to max host_id values of 250, 500, or 1000.
# Applicable only if LVM is compiled with support for lvmlockd+sanlock.
# The lvmlockd sanlock host_id.
# This must be unique among all hosts, and must be between 1 and 2000.
# Applicable only if LVM is compiled with lockd support
# This configuration option has an automatic default value.
# host_id = 0
}

@@ -4,6 +4,7 @@
allocation {
vdo_use_compression=1
vdo_use_deduplication=1
vdo_use_metadata_hints=1
vdo_minimum_io_size=4096
vdo_block_map_cache_size_mb=128
vdo_block_map_period=16380
@@ -17,5 +18,6 @@ allocation {
vdo_hash_zone_threads=1
vdo_logical_threads=1
vdo_physical_threads=1
vdo_write_policy="auto"
vdo_max_discard=1
}
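The vdo_write_policy documentation restored above permits exactly four values: auto, sync, async, and async-unsafe. A hedged sketch of validating a configured value against that documented set (the helper name is hypothetical, not part of lvm2):

```python
# Accepted values restated from the vdo_write_policy lvm.conf text above.
VDO_WRITE_POLICIES = {"auto", "sync", "async", "async-unsafe"}

def validate_vdo_write_policy(value: str) -> str:
    # Hypothetical helper: mirrors the documented value set only.
    if value not in VDO_WRITE_POLICIES:
        raise ValueError(f"invalid vdo_write_policy: {value!r}")
    return value
```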

@@ -133,7 +133,7 @@ def process_args():
def running_under_systemd():
""""
Checks to see if we are running under systemd, by checking daemon fd 0, 1
Checks to see if we are running under systemd, by checking damon fd 0, 1
systemd sets stdin to /dev/null and 1 & 2 are a socket
"""
base = "/proc/self/fd"
@@ -214,7 +214,7 @@ def main():
cfg.loop = GLib.MainLoop()
for thread in thread_list:
thread.daemon = True
thread.damon = True
thread.start()
# In all cases we are going to monitor for udev until we get an
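The lvmdbusd hunk above describes detecting systemd socket activation by inspecting the daemon's file descriptors: stdin is /dev/null and fds 1 and 2 are sockets. A simplified restatement of that check using fstat rather than the daemon's /proc/self/fd walk (a sketch, not the actual lvmdbusd code):

```python
import os
import stat

def running_under_systemd() -> bool:
    """Sketch of the check described above: under systemd socket
    activation, stdin is /dev/null (a character device) and fds 1
    and 2 are sockets. Simplified restatement, not lvmdbusd's code."""
    try:
        if not stat.S_ISCHR(os.fstat(0).st_mode):
            return False
        return all(stat.S_ISSOCK(os.fstat(fd).st_mode) for fd in (1, 2))
    except OSError:
        # A closed or unreadable fd means we were not socket-activated.
        return False
```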

@@ -1177,12 +1177,12 @@ static void lm_rem_resource(struct lockspace *ls, struct resource *r)
lm_rem_resource_idm(ls, r);
}
static int lm_find_free_lock(struct lockspace *ls, uint64_t lv_size_bytes)
static int lm_find_free_lock(struct lockspace *ls, uint64_t *free_offset, int *sector_size, int *align_size)
{
if (ls->lm_type == LD_LM_DLM)
return 0;
else if (ls->lm_type == LD_LM_SANLOCK)
return lm_find_free_lock_sanlock(ls, lv_size_bytes);
return lm_find_free_lock_sanlock(ls, free_offset, sector_size, align_size);
else if (ls->lm_type == LD_LM_IDM)
return 0;
return -1;
@@ -2712,10 +2712,17 @@ static void *lockspace_thread_main(void *arg_in)
}
if (act->op == LD_OP_FIND_FREE_LOCK && act->rt == LD_RT_VG) {
uint64_t free_offset = 0;
int sector_size = 0;
int align_size = 0;
log_debug("S %s find free lock", ls->name);
rv = lm_find_free_lock(ls, act->lv_size_bytes);
log_debug("S %s find free lock %d offset %llu",
ls->name, rv, (unsigned long long)ls->free_lock_offset);
rv = lm_find_free_lock(ls, &free_offset, &sector_size, &align_size);
log_debug("S %s find free lock %d offset %llu sector_size %d align_size %d",
ls->name, rv, (unsigned long long)free_offset, sector_size, align_size);
ls->free_lock_offset = free_offset;
ls->free_lock_sector_size = sector_size;
ls->free_lock_align_size = align_size;
list_del(&act->list);
act->result = rv;
add_client_result(act);
@@ -3549,7 +3556,7 @@ static int work_init_vg(struct action *act)
}
if (act->lm_type == LD_LM_SANLOCK)
rv = lm_init_vg_sanlock(ls_name, act->vg_name, act->flags, act->vg_args, act->align_mb);
rv = lm_init_vg_sanlock(ls_name, act->vg_name, act->flags, act->vg_args);
else if (act->lm_type == LD_LM_DLM)
rv = lm_init_vg_dlm(ls_name, act->vg_name, act->flags, act->vg_args);
else if (act->lm_type == LD_LM_IDM)
@@ -3615,6 +3622,9 @@ static int work_init_lv(struct action *act)
char ls_name[MAX_NAME+1];
char vg_args[MAX_ARGS+1];
char lv_args[MAX_ARGS+1];
uint64_t free_offset = 0;
int sector_size = 0;
int align_size = 0;
int lm_type = 0;
int rv = 0;
@@ -3629,6 +3639,9 @@ static int work_init_lv(struct action *act)
if (ls) {
lm_type = ls->lm_type;
memcpy(vg_args, ls->vg_args, MAX_ARGS);
free_offset = ls->free_lock_offset;
sector_size = ls->free_lock_sector_size;
align_size = ls->free_lock_align_size;
}
pthread_mutex_unlock(&lockspaces_mutex);
@@ -3644,13 +3657,8 @@ static int work_init_lv(struct action *act)
}
if (lm_type == LD_LM_SANLOCK) {
/* FIXME: can init_lv ever be called without the lockspace already started? */
if (!ls) {
log_error("init_lv no lockspace found");
return -EINVAL;
}
rv = lm_init_lv_sanlock(ls, act->lv_uuid, vg_args, lv_args);
rv = lm_init_lv_sanlock(ls_name, act->vg_name, act->lv_uuid,
vg_args, lv_args, sector_size, align_size, free_offset);
memcpy(act->lv_args, lv_args, MAX_ARGS);
return rv;
@@ -5024,12 +5032,6 @@ static void client_recv_action(struct client *cl)
if (val)
act->host_id = val;
val = daemon_request_int(req, "align_mb", 0);
if (val)
act->align_mb = val;
act->lv_size_bytes = (uint64_t)dm_config_find_int64(req.cft->root, "lv_size_bytes", 0);
/* Create PV list for idm */
if (lm == LD_LM_IDM) {
memset(&pvs, 0x0, sizeof(pvs));

@@ -132,7 +132,6 @@ struct action {
uint32_t flags; /* LD_AF_ */
uint32_t version;
uint64_t host_id;
uint64_t lv_size_bytes;
int8_t op; /* operation type LD_OP_ */
int8_t rt; /* resource type LD_RT_ */
int8_t mode; /* lock mode LD_LK_ */
@@ -141,7 +140,6 @@ struct action {
int max_retries;
int result;
int lm_rv; /* return value from lm_ function */
int align_mb;
char *path;
char vg_uuid[64];
char vg_name[MAX_NAME+1];
@@ -193,6 +191,8 @@ struct lockspace {
void *lm_data;
uint64_t host_id;
uint64_t free_lock_offset; /* for sanlock, start search for free lock here */
int free_lock_sector_size; /* for sanlock */
int free_lock_align_size; /* for sanlock */
struct pvs pvs; /* for idm: PV list */
uint32_t start_client_id; /* client_id that started the lockspace */
@@ -505,8 +505,8 @@ static inline int lm_refresh_lv_check_dlm(struct action *act)
#ifdef LOCKDSANLOCK_SUPPORT
int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args, int opt_align_mb);
int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char *lv_args);
int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args);
int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name, char *vg_args, char *lv_args, int sector_size, int align_size, uint64_t free_offset);
int lm_free_lv_sanlock(struct lockspace *ls, struct resource *r);
int lm_rename_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args);
int lm_prepare_lockspace_sanlock(struct lockspace *ls);
@@ -527,7 +527,7 @@ int lm_gl_is_enabled(struct lockspace *ls);
int lm_get_lockspaces_sanlock(struct list_head *ls_rejoin);
int lm_data_size_sanlock(void);
int lm_is_running_sanlock(void);
int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes);
int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t *free_offset, int *sector_size, int *align_size);
static inline int lm_support_sanlock(void)
{
@@ -536,12 +536,12 @@ static inline int lm_support_sanlock(void)
#else
static inline int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args, int opt_align_mb)
static inline int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args)
{
return -1;
}
static inline int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char *lv_args)
static inline int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name, char *vg_args, char *lv_args, int sector_size, int align_size, uint64_t free_offset)
{
return -1;
}
@@ -630,7 +630,7 @@ static inline int lm_is_running_sanlock(void)
return 0;
}
static inline int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes);
static inline int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t *free_offset, int *sector_size, int *align_size)
{
return -1;
}

@@ -145,8 +145,6 @@ struct lm_sanlock {
int sector_size;
int align_size;
int sock; /* sanlock daemon connection */
uint32_t ss_flags; /* sector and align flags for lockspace */
uint32_t rs_flags; /* sector and align flags for resource */
};
struct rd_sanlock {
@@ -341,16 +339,14 @@ fail:
return rv;
}
static void _read_sysfs_size(dev_t devno, const char *name, uint64_t *val)
static void _read_sysfs_size(dev_t devno, const char *name, unsigned int *val)
{
char path[PATH_MAX];
char buf[32];
FILE *fp;
size_t len;
*val = 0;
snprintf(path, sizeof(path), "/sys/dev/block/%d:%d/%s",
snprintf(path, sizeof(path), "/sys/dev/block/%d:%d/queue/%s",
(int)major(devno), (int)minor(devno), name);
if (!(fp = fopen(path, "r")))
@@ -363,19 +359,20 @@ static void _read_sysfs_size(dev_t devno, const char *name, uint64_t *val)
buf[--len] = '\0';
if (strlen(buf))
*val = strtoull(buf, NULL, 0);
*val = atoi(buf);
out:
(void)fclose(fp);
if (fclose(fp))
log_debug("Failed to fclose host id file %s (%s).", path, strerror(errno));
}
/* Select sector/align size for a new VG based on what the device reports for
sector size of the lvmlock LV. */
static int get_sizes_device(char *path, uint64_t *dev_size, int *sector_size, int *align_size, int *align_mb)
static int get_sizes_device(char *path, int *sector_size, int *align_size)
{
unsigned int physical_block_size = 0;
unsigned int logical_block_size = 0;
uint64_t val;
struct stat st;
int rv;
@@ -385,26 +382,18 @@ static int get_sizes_device(char *path, uint64_t *dev_size, int *sector_size, in
return -1;
}
_read_sysfs_size(st.st_rdev, "size", &val);
*dev_size = val * 512;
_read_sysfs_size(st.st_rdev, "queue/physical_block_size", &val);
physical_block_size = (unsigned int)val;
_read_sysfs_size(st.st_rdev, "queue/logical_block_size", &val);
logical_block_size = (unsigned int)val;
_read_sysfs_size(st.st_rdev, "physical_block_size", &physical_block_size);
_read_sysfs_size(st.st_rdev, "logical_block_size", &logical_block_size);
if ((physical_block_size == 512) && (logical_block_size == 512)) {
*sector_size = 512;
*align_size = ONE_MB;
*align_mb = 1;
return 0;
}
if ((physical_block_size == 4096) && (logical_block_size == 4096)) {
*sector_size = 4096;
*align_size = 8 * ONE_MB;
*align_mb = 8;
return 0;
}
@@ -439,7 +428,6 @@ static int get_sizes_device(char *path, uint64_t *dev_size, int *sector_size, in
physical_block_size, logical_block_size, path);
*sector_size = 4096;
*align_size = 8 * ONE_MB;
*align_mb = 8;
return 0;
}
@@ -448,21 +436,18 @@ static int get_sizes_device(char *path, uint64_t *dev_size, int *sector_size, in
physical_block_size, logical_block_size, path);
*sector_size = 4096;
*align_size = 8 * ONE_MB;
*align_mb = 8;
return 0;
}
if (physical_block_size == 512) {
*sector_size = 512;
*align_size = ONE_MB;
*align_mb = 1;
return 0;
}
if (physical_block_size == 4096) {
*sector_size = 4096;
*align_size = 8 * ONE_MB;
*align_mb = 8;
return 0;
}
@@ -474,8 +459,7 @@ static int get_sizes_device(char *path, uint64_t *dev_size, int *sector_size, in
/* Get the sector/align sizes that were used to create an existing VG.
sanlock encoded this in the lockspace/resource structs on disk. */
static int get_sizes_lockspace(char *path, int *sector_size, int *align_size, int *align_mb,
uint32_t *ss_flags, uint32_t *rs_flags)
static int get_sizes_lockspace(char *path, int *sector_size, int *align_size)
{
struct sanlk_lockspace ss;
uint32_t io_timeout = 0;
@@ -493,38 +477,10 @@ static int get_sizes_lockspace(char *path, int *sector_size, int *align_size, in
if ((ss.flags & SANLK_LSF_SECTOR4K) && (ss.flags & SANLK_LSF_ALIGN8M)) {
*sector_size = 4096;
*align_mb = 8;
*align_size = 8 * ONE_MB;
*ss_flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN8M;
*rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M;
} else if ((ss.flags & SANLK_LSF_SECTOR4K) && (ss.flags & SANLK_LSF_ALIGN4M)) {
*sector_size = 4096;
*align_mb = 4;
*align_size = 4 * ONE_MB;
*ss_flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN4M;
*rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN4M;
} else if ((ss.flags & SANLK_LSF_SECTOR4K) && (ss.flags & SANLK_LSF_ALIGN2M)) {
*sector_size = 4096;
*align_mb = 2;
*align_size = 2 * ONE_MB;
*ss_flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN2M;
*rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN2M;
} else if ((ss.flags & SANLK_LSF_SECTOR4K) && (ss.flags & SANLK_LSF_ALIGN1M)) {
*sector_size = 4096;
*align_mb = 1;
*align_size = ONE_MB;
*ss_flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN1M;
*rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN1M;
} else if ((ss.flags & SANLK_LSF_SECTOR512) && (ss.flags & SANLK_LSF_ALIGN1M)) {
*sector_size = 512;
*align_mb = 1;
*align_size = ONE_MB;
*ss_flags = SANLK_LSF_SECTOR512 | SANLK_LSF_ALIGN1M;
*rs_flags = SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M;
}
log_debug("get_sizes_lockspace found %d %d", *sector_size, *align_size);
@@ -541,7 +497,7 @@ static int get_sizes_lockspace(char *path, int *sector_size, int *align_size, in
#define MAX_VERSION 16
int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args, int opt_align_mb)
int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args)
{
struct sanlk_lockspace ss;
struct sanlk_resourced rd;
@@ -549,14 +505,11 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
char lock_lv_name[MAX_ARGS+1];
char lock_args_version[MAX_VERSION+1];
const char *gl_name = NULL;
uint32_t rs_flags;
uint32_t daemon_version;
uint32_t daemon_proto;
uint64_t offset;
uint64_t dev_size;
int sector_size = 0;
int align_size = 0;
int align_mb = 0;
int i, rv;
memset(&ss, 0, sizeof(ss));
@@ -581,7 +534,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
log_debug("S %s init_vg_san path %s align %d", ls_name, disk.path, opt_align_mb);
log_debug("S %s init_vg_san path %s", ls_name, disk.path);
if (daemon_test) {
if (!gl_lsname_sanlock[0])
@@ -602,7 +555,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
daemon_version, daemon_proto);
/* Nothing formatted on disk yet, use what the device reports. */
rv = get_sizes_device(disk.path, &dev_size, &sector_size, &align_size, &align_mb);
rv = get_sizes_device(disk.path, &sector_size, &align_size);
if (rv < 0) {
if (rv == -EACCES) {
log_error("S %s init_vg_san sanlock error -EACCES: no permission to access %s",
@@ -615,48 +568,11 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
}
}
/* Non-default lease size is requested. */
if ((sector_size == 4096) && opt_align_mb && (opt_align_mb != 8)) {
if (opt_align_mb != 1 && opt_align_mb != 2 && opt_align_mb != 4) {
log_error("S %s init_vg_sanlock invalid align input %u", ls_name, opt_align_mb);
return -EARGS;
}
align_mb = opt_align_mb;
align_size = align_mb * ONE_MB;
}
log_debug("S %s init_vg_san %s dev_size %llu sector_size %u align_size %u",
ls_name, disk.path, (unsigned long long)dev_size, sector_size, align_size);
strcpy_name_len(ss.name, ls_name, SANLK_NAME_LEN);
memcpy(ss.host_id_disk.path, disk.path, SANLK_PATH_LEN);
ss.host_id_disk.offset = 0;
if (sector_size == 512) {
ss.flags = SANLK_LSF_SECTOR512 | SANLK_LSF_ALIGN1M;
rs_flags = SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M;
} else if (sector_size == 4096) {
if (align_mb == 8) {
ss.flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN8M;
rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M;
} else if (align_mb == 4) {
ss.flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN4M;
rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN4M;
} else if (align_mb == 2) {
ss.flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN2M;
rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN2M;
} else if (align_mb == 1) {
ss.flags = SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN1M;
rs_flags = SANLK_RES_SECTOR4K | SANLK_RES_ALIGN1M;
}
else {
log_error("Invalid sanlock align_size %d %d", align_size, align_mb);
return -EARGS;
}
} else {
log_error("Invalid sanlock sector_size %d", sector_size);
return -EARGS;
}
ss.flags = (sector_size == 4096) ? (SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN8M) :
(SANLK_LSF_SECTOR512 | SANLK_LSF_ALIGN1M);
rv = sanlock_write_lockspace(&ss, 0, 0, sanlock_io_timeout);
if (rv < 0) {
@@ -689,7 +605,8 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
memcpy(rd.rs.disks[0].path, disk.path, SANLK_PATH_LEN);
rd.rs.disks[0].offset = align_size * GL_LOCK_BEGIN;
rd.rs.num_disks = 1;
rd.rs.flags = rs_flags;
rd.rs.flags = (sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rv = sanlock_write_resource(&rd.rs, 0, 0, 0);
if (rv < 0) {
@@ -703,7 +620,8 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
memcpy(rd.rs.disks[0].path, disk.path, SANLK_PATH_LEN);
rd.rs.disks[0].offset = align_size * VG_LOCK_BEGIN;
rd.rs.num_disks = 1;
rd.rs.flags = rs_flags;
rd.rs.flags = (sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rv = sanlock_write_resource(&rd.rs, 0, 0, 0);
if (rv < 0) {
@@ -729,7 +647,8 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
memset(&rd, 0, sizeof(rd));
rd.rs.num_disks = 1;
rd.rs.flags = rs_flags;
rd.rs.flags = (sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
memcpy(rd.rs.disks[0].path, disk.path, SANLK_PATH_LEN);
strcpy_name_len(rd.rs.lockspace_name, ls_name, SANLK_NAME_LEN);
strcpy_name_len(rd.rs.name, "#unused", SANLK_NAME_LEN);
@@ -739,9 +658,6 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
log_debug("S %s init_vg_san clearing lv lease areas", ls_name);
for (i = 0; ; i++) {
if (dev_size && (offset + align_size > dev_size))
break;
rd.rs.disks[0].offset = offset;
rv = sanlock_write_resource(&rd.rs, 0, 0, 0);
@@ -770,14 +686,14 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
* can be saved in the lv's lock_args in the vg metadata.
*/
int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char *lv_args)
int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
char *vg_args, char *lv_args,
int sector_size, int align_size, uint64_t free_offset)
{
struct lm_sanlock *lms = (struct lm_sanlock *)ls->lm_data;
struct sanlk_resourced rd;
char lock_lv_name[MAX_ARGS+1];
char lock_args_version[MAX_VERSION+1];
uint64_t offset;
int align_size = lms->align_size;
int rv;
memset(&rd, 0, sizeof(rd));
@@ -787,7 +703,7 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
rv = lock_lv_name_from_args(vg_args, lock_lv_name);
if (rv < 0) {
log_error("S %s init_lv_san lock_lv_name_from_args error %d %s",
ls->name, rv, vg_args);
ls_name, rv, vg_args);
return rv;
}
@@ -795,6 +711,7 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
LV_LOCK_ARGS_MAJOR, LV_LOCK_ARGS_MINOR, LV_LOCK_ARGS_PATCH);
if (daemon_test) {
align_size = ONE_MB;
snprintf(lv_args, MAX_ARGS, "%s:%llu",
lock_args_version,
(unsigned long long)((align_size * LV_LOCK_BEGIN) + (align_size * daemon_test_lv_count)));
@@ -802,15 +719,42 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
return 0;
}
strcpy_name_len(rd.rs.lockspace_name, ls->name, SANLK_NAME_LEN);
strcpy_name_len(rd.rs.lockspace_name, ls_name, SANLK_NAME_LEN);
rd.rs.num_disks = 1;
if ((rv = build_dm_path(rd.rs.disks[0].path, SANLK_PATH_LEN, ls->vg_name, lock_lv_name)))
if ((rv = build_dm_path(rd.rs.disks[0].path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
rd.rs.flags = lms->rs_flags;
/*
* These should not usually be zero, maybe only the first time this function is called?
* We need to use the same sector/align sizes that are already being used.
*/
if (!sector_size || !align_size) {
rv = get_sizes_lockspace(rd.rs.disks[0].path, &sector_size, &align_size);
if (rv < 0) {
log_error("S %s init_lv_san read_lockspace error %d %s",
ls_name, rv, rd.rs.disks[0].path);
return rv;
}
if (ls->free_lock_offset)
offset = ls->free_lock_offset;
if (sector_size)
log_debug("S %s init_lv_san found ls sector_size %d align_size %d", ls_name, sector_size, align_size);
else {
/* use the old method */
align_size = sanlock_align(&rd.rs.disks[0]);
if (align_size <= 0) {
log_error("S %s init_lv_san align error %d", ls_name, align_size);
return -EINVAL;
}
sector_size = (align_size == ONE_MB) ? 512 : 4096;
log_debug("S %s init_lv_san found old sector_size %d align_size %d", ls_name, sector_size, align_size);
}
}
rd.rs.flags = (sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
if (free_offset)
offset = free_offset;
else
offset = align_size * LV_LOCK_BEGIN;
rd.rs.disks[0].offset = offset;
@@ -824,20 +768,20 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
if (rv == -EMSGSIZE || rv == -ENOSPC) {
/* This indicates the end of the device is reached. */
log_debug("S %s init_lv_san read limit offset %llu",
ls->name, (unsigned long long)offset);
ls_name, (unsigned long long)offset);
rv = -EMSGSIZE;
return rv;
}
if (rv && rv != SANLK_LEADER_MAGIC) {
log_error("S %s init_lv_san read error %d offset %llu",
ls->name, rv, (unsigned long long)offset);
ls_name, rv, (unsigned long long)offset);
break;
}
if (!strncmp(rd.rs.name, lv_name, SANLK_NAME_LEN)) {
log_error("S %s init_lv_san resource name %s already exists at %llu",
ls->name, lv_name, (unsigned long long)offset);
ls_name, lv_name, (unsigned long long)offset);
return -EEXIST;
}
@@ -848,10 +792,11 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
*/
if ((rv == SANLK_LEADER_MAGIC) || !strcmp(rd.rs.name, "#unused")) {
log_debug("S %s init_lv_san %s found unused area at %llu",
ls->name, lv_name, (unsigned long long)offset);
ls_name, lv_name, (unsigned long long)offset);
strcpy_name_len(rd.rs.name, lv_name, SANLK_NAME_LEN);
rd.rs.flags = lms->rs_flags;
rd.rs.flags = (sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rv = sanlock_write_resource(&rd.rs, 0, 0, 0);
if (!rv) {
@@ -859,7 +804,7 @@ int lm_init_lv_sanlock(struct lockspace *ls, char *lv_name, char *vg_args, char
lock_args_version, (unsigned long long)offset);
} else {
log_error("S %s init_lv_san write error %d offset %llu",
ls->name, rv, (unsigned long long)rv);
ls_name, rv, (unsigned long long)rv);
}
break;
}
@@ -928,19 +873,12 @@ int lm_rename_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_
return rv;
}
if (ss.flags & SANLK_LSF_SECTOR512) {
if ((ss.flags & SANLK_LSF_SECTOR4K) && (ss.flags & SANLK_LSF_ALIGN8M)) {
sector_size = 4096;
align_size = 8 * ONE_MB;
} else if ((ss.flags & SANLK_LSF_SECTOR512) && (ss.flags & SANLK_LSF_ALIGN1M)) {
sector_size = 512;
align_size = ONE_MB;
} else if (ss.flags & SANLK_LSF_SECTOR4K) {
sector_size = 4096;
if (ss.flags & SANLK_LSF_ALIGN8M)
align_size = 8 * ONE_MB;
else if (ss.flags & SANLK_LSF_ALIGN4M)
align_size = 4 * ONE_MB;
else if (ss.flags & SANLK_LSF_ALIGN2M)
align_size = 2 * ONE_MB;
else if (ss.flags & SANLK_LSF_ALIGN1M)
align_size = ONE_MB;
} else {
/* use the old method */
align_size = sanlock_align(&ss.host_id_disk);
@@ -1109,8 +1047,10 @@ int lm_ex_disable_gl_sanlock(struct lockspace *ls)
memcpy(rd1.rs.disks[0].path, lms->ss.host_id_disk.path, SANLK_PATH_LEN-1);
rd1.rs.disks[0].offset = lms->align_size * GL_LOCK_BEGIN;
rd1.rs.flags = lms->rs_flags;
rd2.rs.flags = lms->rs_flags;
rd1.rs.flags = (lms->sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rd2.rs.flags = (lms->sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rv = sanlock_acquire(lms->sock, -1, 0, 1, &rs1, NULL);
if (rv < 0) {
@@ -1172,7 +1112,8 @@ int lm_able_gl_sanlock(struct lockspace *ls, int enable)
rd.rs.num_disks = 1;
memcpy(rd.rs.disks[0].path, lms->ss.host_id_disk.path, SANLK_PATH_LEN-1);
rd.rs.disks[0].offset = lms->align_size * GL_LOCK_BEGIN;
rd.rs.flags = lms->rs_flags;
rd.rs.flags = (lms->sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
rv = sanlock_write_resource(&rd.rs, 0, 0, 0);
if (rv < 0) {
@@ -1256,7 +1197,7 @@ int lm_gl_is_enabled(struct lockspace *ls)
* been disabled.)
*/
int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes)
int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t *free_offset, int *sector_size, int *align_size)
{
struct lm_sanlock *lms = (struct lm_sanlock *)ls->lm_data;
struct sanlk_resourced rd;
@@ -1266,16 +1207,22 @@ int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes)
int round = 0;
if (daemon_test) {
ls->free_lock_offset = (ONE_MB * LV_LOCK_BEGIN) + (ONE_MB * (daemon_test_lv_count + 1));
*free_offset = (ONE_MB * LV_LOCK_BEGIN) + (ONE_MB * (daemon_test_lv_count + 1));
*sector_size = 512;
*align_size = ONE_MB;
return 0;
}
*sector_size = lms->sector_size;
*align_size = lms->align_size;
memset(&rd, 0, sizeof(rd));
strcpy_name_len(rd.rs.lockspace_name, ls->name, SANLK_NAME_LEN);
rd.rs.num_disks = 1;
memcpy(rd.rs.disks[0].path, lms->ss.host_id_disk.path, SANLK_PATH_LEN-1);
rd.rs.flags = lms->rs_flags;
rd.rs.flags = (lms->sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) :
(SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
if (ls->free_lock_offset)
offset = ls->free_lock_offset;
@@ -1297,37 +1244,15 @@ int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes)
memset(rd.rs.name, 0, SANLK_NAME_LEN);
/*
* End of the device. Older lvm versions didn't pass lv_size_bytes
* and just relied on sanlock_read_resource returning an error when
* reading beyond the device.
*/
if (lv_size_bytes && (offset + lms->align_size > lv_size_bytes)) {
/* end of the device */
log_debug("S %s find_free_lock_san read limit offset %llu lv_size_bytes %llu",
ls->name, (unsigned long long)offset, (unsigned long long)lv_size_bytes);
/* remember the NO SPACE offset, if no free area left,
* search from this offset after extend */
ls->free_lock_offset = offset;
offset = lms->align_size * LV_LOCK_BEGIN;
round = 1;
continue;
}
rv = sanlock_read_resource(&rd.rs, 0);
if (rv == -EMSGSIZE || rv == -ENOSPC) {
/*
* These errors indicate the end of the device is reached.
* Still check this in case lv_size_bytes is not provided.
*/
/* This indicates the end of the device is reached. */
log_debug("S %s find_free_lock_san read limit offset %llu",
ls->name, (unsigned long long)offset);
/* remember the NO SPACE offset, if no free area left,
* search from this offset after extend */
ls->free_lock_offset = offset;
*free_offset = offset;
offset = lms->align_size * LV_LOCK_BEGIN;
round = 1;
@@ -1342,7 +1267,7 @@ int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes)
if (rv == SANLK_LEADER_MAGIC) {
log_debug("S %s find_free_lock_san found empty area at %llu",
ls->name, (unsigned long long)offset);
ls->free_lock_offset = offset;
*free_offset = offset;
return 0;
}
@@ -1355,7 +1280,7 @@ int lm_find_free_lock_sanlock(struct lockspace *ls, uint64_t lv_size_bytes)
if (!strcmp(rd.rs.name, "#unused")) {
log_debug("S %s find_free_lock_san found unused area at %llu",
ls->name, (unsigned long long)offset);
ls->free_lock_offset = offset;
*free_offset = offset;
return 0;
}
@@ -1397,11 +1322,8 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
char disk_path[SANLK_PATH_LEN];
char killpath[SANLK_PATH_LEN];
char killargs[SANLK_PATH_LEN];
uint32_t ss_flags = 0;
uint32_t rs_flags = 0;
int sector_size = 0;
int align_size = 0;
int align_mb = 0;
int gl_found;
int ret, rv;
@@ -1489,8 +1411,6 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
strncpy(gl_lsname_sanlock, lsname, MAX_NAME);
log_debug("S %s prepare_lockspace_san use global lock", lsname);
}
lms->align_size = ONE_MB;
lms->sector_size = 512;
goto out;
}
@@ -1518,7 +1438,7 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
goto fail;
}
rv = get_sizes_lockspace(disk_path, &sector_size, &align_size, &align_mb, &ss_flags, &rs_flags);
rv = get_sizes_lockspace(disk_path, &sector_size, &align_size);
if (rv < 0) {
log_error("S %s prepare_lockspace_san cannot get sector/align sizes %d", lsname, rv);
ret = -EMANAGER;
@@ -1538,27 +1458,13 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
log_debug("S %s prepare_lockspace_san found old sector_size %d align_size %d", lsname, sector_size, align_size);
}
log_debug("S %s prepare_lockspace_san sector_size %d align_mb %d align_size %d",
lsname, sector_size, align_mb, align_size);
if (sector_size == 4096) {
if (((align_mb == 1) && (ls->host_id > 250)) ||
((align_mb == 2) && (ls->host_id > 500)) ||
((align_mb == 4) && (ls->host_id > 1000)) ||
((align_mb == 8) && (ls->host_id > 2000))) {
log_error("S %s prepare_lockspace_san invalid host_id %llu for align %d MiB",
lsname, (unsigned long long)ls->host_id, align_mb);
ret = -EHOSTID;
goto fail;
}
}
log_debug("S %s prepare_lockspace_san sizes %d %d", lsname, sector_size, align_size);
lms->align_size = align_size;
lms->sector_size = sector_size;
lms->ss_flags = ss_flags;
lms->rs_flags = rs_flags;
lms->ss.flags = ss_flags;
lms->ss.flags = (sector_size == 4096) ? (SANLK_LSF_SECTOR4K | SANLK_LSF_ALIGN8M) :
(SANLK_LSF_SECTOR512 | SANLK_LSF_ALIGN1M);
gl_found = gl_is_enabled(ls, lms);
if (gl_found < 0) {
@@ -1700,7 +1606,7 @@ static int lm_add_resource_sanlock(struct lockspace *ls, struct resource *r)
strcpy_name_len(rds->rs.name, r->name, SANLK_NAME_LEN);
rds->rs.num_disks = 1;
memcpy(rds->rs.disks[0].path, lms->ss.host_id_disk.path, SANLK_PATH_LEN);
rds->rs.flags = lms->rs_flags;
rds->rs.flags = (lms->sector_size == 4096) ? (SANLK_RES_SECTOR4K | SANLK_RES_ALIGN8M) : (SANLK_RES_SECTOR512 | SANLK_RES_ALIGN1M);
if (r->type == LD_RT_GL)
rds->rs.disks[0].offset = GL_LOCK_BEGIN * lms->align_size;

View File

@@ -2167,7 +2167,7 @@ struct dm_pool *dm_config_memory(struct dm_config_tree *cft);
*/
#define DM_UDEV_DISABLE_DM_RULES_FLAG 0x0001
/*
* DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG is set in case we need to disable
* DM_UDEV_DISABLE_SUBSYTEM_RULES_FLAG is set in case we need to disable
* subsystem udev rules, but still we need the general DM udev rules to
* be applied (to create the nodes and symlinks under /dev and /dev/disk).
*/

View File

@@ -2437,20 +2437,20 @@ static int _udev_notify_sem_inc(uint32_t cookie, int semid)
int val;
if (semop(semid, &sb, 1) < 0) {
log_error("cookie inc: semid %d: semop failed for cookie 0x%" PRIx32 ": %s",
log_error("semid %d: semop failed for cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
return 0;
}
if ((val = semctl(semid, 0, GETVAL)) < 0) {
log_warn("cookie inc: semid %d: sem_ctl GETVAL failed for "
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented.",
cookie, semid);
} else
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented to %d",
cookie, semid, val);
return 0;
}
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented to %d",
cookie, semid, val);
return 1;
}
@@ -2460,21 +2460,23 @@ static int _udev_notify_sem_dec(uint32_t cookie, int semid)
struct sembuf sb = {0, -1, IPC_NOWAIT};
int val;
if ((val = semctl(semid, 0, GETVAL)) < 0)
log_warn("cookie dec: semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
if ((val = semctl(semid, 0, GETVAL)) < 0) {
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
return 0;
}
if (semop(semid, &sb, 1) < 0) {
switch (errno) {
case EAGAIN:
log_error("cookie dec: semid %d: semop failed for cookie "
log_error("semid %d: semop failed for cookie "
"0x%" PRIx32 ": "
"incorrect semaphore state",
semid, cookie);
break;
default:
log_error("cookie dec: semid %d: semop failed for cookie "
log_error("semid %d: semop failed for cookie "
"0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
break;
@@ -2482,12 +2484,9 @@ static int _udev_notify_sem_dec(uint32_t cookie, int semid)
return 0;
}
if (val < 0)
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented.",
cookie, semid);
else
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented to %d",
cookie, semid, val - 1);
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented to %d",
cookie, semid, val - 1);
return 1;
}
@@ -2564,7 +2563,7 @@ static int _udev_notify_sem_create(uint32_t *cookie, int *semid)
sem_arg.val = 1;
if (semctl(gen_semid, 0, SETVAL, sem_arg) < 0) {
log_error("cookie create: semid %d: semctl failed: %s", gen_semid, strerror(errno));
log_error("semid %d: semctl failed: %s", gen_semid, strerror(errno));
/* We have to destroy just created semaphore
* so it won't stay in the system. */
(void) _udev_notify_sem_destroy(gen_cookie, gen_semid);
@@ -2572,10 +2571,9 @@ static int _udev_notify_sem_create(uint32_t *cookie, int *semid)
}
if ((val = semctl(gen_semid, 0, GETVAL)) < 0) {
log_error("cookie create: semid %d: sem_ctl GETVAL failed for "
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
gen_semid, gen_cookie, strerror(errno));
(void) _udev_notify_sem_destroy(gen_cookie, gen_semid);
goto bad;
}

View File

@@ -70,8 +70,7 @@ static struct dm_config_value *_value(struct parser *p);
static struct dm_config_value *_type(struct parser *p);
static int _match_aux(struct parser *p, int t);
static struct dm_config_value *_create_value(struct dm_pool *mem);
static struct dm_config_value *_create_str_value(struct dm_pool *mem, const char *str, size_t str_len);
static struct dm_config_node *_create_node(struct dm_pool *mem, const char *key, size_t key_len);
static struct dm_config_node *_create_node(struct dm_pool *mem);
static char *_dup_tok(struct parser *p);
static char *_dup_token(struct dm_pool *mem, const char *b, const char *e);
@@ -85,19 +84,14 @@ static char *_dup_token(struct dm_pool *mem, const char *b, const char *e);
} \
} while(0)
/* match token */
static int _tok_match(const char *str, const char *b, const char *e)
{
while (b < e) {
if (*str != *b)
while (*str && (b != e)) {
if (*str++ != *b++)
return 0;
if (!*str)
return 0;
++str;
++b;
}
return !*str; /* token is matching for \0 end */
return !(*str || (b != e));
}
struct dm_config_tree *dm_config_create(void)
@@ -473,33 +467,23 @@ int dm_config_write_node_out(const struct dm_config_node *cn,
/*
* parser
*/
static const char *_string_tok(struct parser *p, size_t *len)
static char *_dup_string_tok(struct parser *p)
{
ptrdiff_t d = p->te - p->tb;
char *str;
if (d < 2) {
p->tb++, p->te--; /* strip "'s */
if (p->te < p->tb) {
log_error("Parse error at byte %" PRIptrdiff_t " (line %d): "
"expected a string token.",
p->tb - p->fb + 1, p->line);
return NULL;
}
*len = (size_t)(d - 2); /* strip "'s */
return p->tb + 1;
}
static char *_dup_string_tok(struct parser *p)
{
const char *tok;
size_t len;
char *str;
if (!(tok = _string_tok(p, &len)))
if (!(str = _dup_tok(p)))
return_NULL;
if (!(str = _dup_token(p->mem, tok, tok + len)))
return_NULL;
p->te++;
return str;
}
@@ -521,9 +505,10 @@ static struct dm_config_node *_make_node(struct dm_pool *mem,
{
struct dm_config_node *n;
if (!(n = _create_node(mem, key_b, key_e - key_b)))
if (!(n = _create_node(mem)))
return_NULL;
n->key = _dup_token(mem, key_b, key_e);
if (parent) {
n->parent = parent;
n->sib = parent->child;
@@ -684,14 +669,16 @@ static struct dm_config_value *_value(struct parser *p)
static struct dm_config_value *_type(struct parser *p)
{
/* [+-]{0,1}[0-9]+ | [0-9]*\.[0-9]* | ".*" */
struct dm_config_value *v;
const char *str;
size_t len;
struct dm_config_value *v = _create_value(p->mem);
char *str;
if (!v) {
log_error("Failed to allocate type value");
return NULL;
}
switch (p->t) {
case TOK_INT:
if (!(v = _create_value(p->mem)))
break;
v->type = DM_CFG_INT;
errno = 0;
v->v.i = strtoll(p->tb, NULL, 0); /* FIXME: check error */
@@ -712,8 +699,6 @@ static struct dm_config_value *_type(struct parser *p)
break;
case TOK_FLOAT:
if (!(v = _create_value(p->mem)))
break;
v->type = DM_CFG_FLOAT;
errno = 0;
v->v.f = strtod(p->tb, NULL); /* FIXME: check error */
@@ -725,31 +710,31 @@ static struct dm_config_value *_type(struct parser *p)
break;
case TOK_STRING:
if (!(str = _string_tok(p, &len)))
v->type = DM_CFG_STRING;
if (!(v->v.str = _dup_string_tok(p)))
return_NULL;
if ((v = _create_str_value(p->mem, str, len))) {
v->type = DM_CFG_STRING;
match(TOK_STRING);
}
match(TOK_STRING);
break;
case TOK_STRING_BARE:
if ((v = _create_str_value(p->mem, p->tb, p->te - p->tb))) {
v->type = DM_CFG_STRING;
match(TOK_STRING_BARE);
}
v->type = DM_CFG_STRING;
if (!(v->v.str = _dup_tok(p)))
return_NULL;
match(TOK_STRING_BARE);
break;
case TOK_STRING_ESCAPED:
if (!(str = _string_tok(p, &len)))
return_NULL;
v->type = DM_CFG_STRING;
if ((v = _create_str_value(p->mem, str, len))) {
v->type = DM_CFG_STRING;
dm_unescape_double_quotes((char*)v->v.str);
match(TOK_STRING_ESCAPED);
}
if (!(str = _dup_string_tok(p)))
return_NULL;
dm_unescape_double_quotes(str);
v->v.str = str;
match(TOK_STRING_ESCAPED);
break;
default:
@@ -757,12 +742,6 @@ static struct dm_config_value *_type(struct parser *p)
p->tb - p->fb + 1, p->line);
return NULL;
}
if (!v) {
log_error("Failed to allocate type value.");
return NULL;
}
return v;
}
@@ -904,19 +883,16 @@ static void _get_token(struct parser *p, int tok_prev)
static void _eat_space(struct parser *p)
{
while (p->tb != p->fe) {
if (!isspace(*p->te)) {
if (*p->te != '#')
break;
if (*p->te == '#')
while ((p->te != p->fe) && (*p->te != '\n') && (*p->te))
++p->te;
}
while (p->te != p->fe) {
else if (!isspace(*p->te))
break;
while ((p->te != p->fe) && isspace(*p->te)) {
if (*p->te == '\n')
++p->line;
else if (!isspace(*p->te))
break;
++p->te;
}
@@ -932,46 +908,9 @@ static struct dm_config_value *_create_value(struct dm_pool *mem)
return dm_pool_zalloc(mem, sizeof(struct dm_config_value));
}
static struct dm_config_value *_create_str_value(struct dm_pool *mem, const char *str, size_t str_len)
static struct dm_config_node *_create_node(struct dm_pool *mem)
{
struct dm_config_value *cv;
char *str_buf;
if (!(cv = dm_pool_alloc(mem, sizeof(struct dm_config_value) + str_len + 1)))
return_NULL;
memset(cv, 0, sizeof(*cv));
if (str) {
str_buf = (char *)(cv + 1);
if (str_len)
memcpy(str_buf, str, str_len);
str_buf[str_len] = '\0';
cv->v.str = str_buf;
}
return cv;
}
static struct dm_config_node *_create_node(struct dm_pool *mem, const char *key, size_t key_len)
{
struct dm_config_node *cn;
char *key_buf;
if (!(cn = dm_pool_alloc(mem, sizeof(struct dm_config_node) + key_len + 1)))
return_NULL;
memset(cn, 0, sizeof(*cn));
if (key) {
key_buf = (char *)(cn + 1);
if (key_len)
memcpy(key_buf, key, key_len);
key_buf[key_len] = '\0';
cn->key = key_buf;
}
return cn;
return dm_pool_zalloc(mem, sizeof(struct dm_config_node));
}
static char *_dup_token(struct dm_pool *mem, const char *b, const char *e)
@@ -1388,19 +1327,19 @@ static struct dm_config_value *_clone_config_value(struct dm_pool *mem,
{
struct dm_config_value *new_cv;
if (v->type == DM_CFG_STRING) {
if (!(new_cv = _create_str_value(mem, v->v.str, strlen(v->v.str)))) {
}
} else {
if (!(new_cv = _create_value(mem))) {
log_error("Failed to clone config value.");
return NULL;
}
new_cv->v = v->v;
if (!(new_cv = _create_value(mem))) {
log_error("Failed to clone config value.");
return NULL;
}
new_cv->type = v->type;
if (v->type == DM_CFG_STRING) {
if (!(new_cv->v.str = dm_pool_strdup(mem, v->v.str))) {
log_error("Failed to clone config string value.");
return NULL;
}
} else
new_cv->v = v->v;
if (v->next && !(new_cv->next = _clone_config_value(mem, v->next)))
return_NULL;
@@ -1417,11 +1356,16 @@ struct dm_config_node *dm_config_clone_node_with_mem(struct dm_pool *mem, const
return NULL;
}
if (!(new_cn = _create_node(mem, cn->key, cn->key ? strlen(cn->key) : 0))) {
if (!(new_cn = _create_node(mem))) {
log_error("Failed to clone config node.");
return NULL;
}
if ((cn->key && !(new_cn->key = dm_pool_strdup(mem, cn->key)))) {
log_error("Failed to clone config node key.");
return NULL;
}
new_cn->id = cn->id;
if ((cn->v && !(new_cn->v = _clone_config_value(mem, cn->v))) ||
@@ -1441,11 +1385,14 @@ struct dm_config_node *dm_config_create_node(struct dm_config_tree *cft, const c
{
struct dm_config_node *cn;
if (!(cn = _create_node(cft->mem, key, strlen(key)))) {
if (!(cn = _create_node(cft->mem))) {
log_error("Failed to create config node.");
return NULL;
}
if (!(cn->key = dm_pool_strdup(cft->mem, key))) {
log_error("Failed to create config node's key.");
return NULL;
}
cn->parent = NULL;
cn->v = NULL;

View File

@@ -1,26 +0,0 @@
<!-- Page title -->
[[!meta title="Version 2.03.27 - Bug Fix Release"]]
Version 2.03.27
===============
* Deprecate vdo settings `vdo_use_metadata_hints` and `vdo_write_policy`.
* Do not accept duplicate device names for pvcreate.
* Fix swap device size detection using blkid for lvresize/lvreduce/lvextend.
* Detect GPT partition table and pass partition filter if no partitions defined.
* Add `global/sanlock_align_size` option to configure sanlock lease size.
* Disable mem locking when `activation/reserved_stack` or `reserved_memory` is `0`.
* Fix locking issues in lvmlockd leaving thin pool inactive but locked.
* Corrected integrity parameter `interleave_sectors` for DM table line.
* Ignore `-i|--stripes`, `-I|--stripesize` for lvextend on raid0 LV, like on raid10.
* Fix lots of typos across lvm2 code base (codespell).
* Cleanup udev sync semaphore if `dm_{udev_create,task_set}_cookie` fails.
* Improve error messages on failed udev cookie create/inc/dec operation.
<!-- remove the pending tag on release, remove draft tag once editing is complete -->
[[!tag]]
<!--
For old releases add Release Timestamp like this, date from git show $COMMIT is fine.
\[[!meta date="Tue Nov 21 14:26:07 2023 +0100"]]
-->

View File

@@ -203,8 +203,7 @@ static int _settings_text_export(const struct lv_segment *seg,
static int _cache_pool_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct logical_volume *data_lv, *meta_lv;
const char *str = NULL;
@@ -213,7 +212,7 @@ static int _cache_pool_text_import(struct lv_segment *seg,
return SEG_LOG_ERROR("Cache data not specified in");
if (!(str = dm_config_find_str(sn, "data", NULL)))
return SEG_LOG_ERROR("Cache data must be a string in");
if (!(data_lv = dm_hash_lookup(lv_hash, str)))
if (!(data_lv = find_lv(seg->lv->vg, str)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"cache data in", str);
@@ -221,7 +220,7 @@ static int _cache_pool_text_import(struct lv_segment *seg,
return SEG_LOG_ERROR("Cache metadata not specified in");
if (!(str = dm_config_find_str(sn, "metadata", NULL)))
return SEG_LOG_ERROR("Cache metadata must be a string in");
if (!(meta_lv = dm_hash_lookup(lv_hash, str)))
if (!(meta_lv = find_lv(seg->lv->vg, str)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"cache metadata in", str);
@@ -440,8 +439,7 @@ static const struct segtype_handler _cache_pool_ops = {
static int _cache_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct logical_volume *pool_lv, *origin_lv;
const char *name;
@@ -451,7 +449,7 @@ static int _cache_text_import(struct lv_segment *seg,
return SEG_LOG_ERROR("cache_pool not specified in");
if (!(name = dm_config_find_str(sn, "cache_pool", NULL)))
return SEG_LOG_ERROR("cache_pool must be a string in");
if (!(pool_lv = dm_hash_lookup(lv_hash, name)))
if (!(pool_lv = find_lv(seg->lv->vg, name)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"cache_pool in", name);
@@ -459,7 +457,7 @@ static int _cache_text_import(struct lv_segment *seg,
return SEG_LOG_ERROR("Cache origin not specified in");
if (!(name = dm_config_find_str(sn, "origin", NULL)))
return SEG_LOG_ERROR("Cache origin must be a string in");
if (!(origin_lv = dm_hash_lookup(lv_hash, name)))
if (!(origin_lv = find_lv(seg->lv->vg, name)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"cache origin in", name);
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))

View File

@@ -659,7 +659,6 @@ static int _process_config(struct cmd_context *cmd)
mode_t old_umask;
const char *dev_ext_info_src = NULL;
const char *read_ahead;
const char *str;
struct stat st;
const struct dm_config_node *cn;
const struct dm_config_value *cv;
@@ -817,12 +816,6 @@ static int _process_config(struct cmd_context *cmd)
cmd->check_pv_dev_sizes = find_config_tree_bool(cmd, metadata_check_pv_device_sizes_CFG, NULL);
cmd->event_activation = find_config_tree_bool(cmd, global_event_activation_CFG, NULL);
if ((str = find_config_tree_str(cmd, global_vg_copy_internal_CFG, NULL))) {
if (!strcmp(str, "binary"))
cmd->vg_copy_binary = 1;
}
if (!process_profilable_config(cmd))
return_0;

View File

@@ -218,7 +218,6 @@ struct cmd_context {
unsigned device_ids_invalid:1;
unsigned device_ids_auto_import:1;
unsigned get_vgname_from_options:1; /* used by lvconvert */
unsigned vg_copy_binary:1;
/*
* Devices and filtering.
@@ -272,7 +271,7 @@ struct cmd_context {
/*
* Buffers.
*/
char display_buffer[NAME_LEN * 10]; /* ring buffer for up to 10 longest vg/lv names */
char display_buffer[NAME_LEN * 10]; /* ring buffer for upto 10 longest vg/lv names */
unsigned display_lvname_idx; /* index to ring buffer */
char *linebuffer;

View File

@@ -2558,18 +2558,3 @@ uint64_t get_default_allocation_cache_pool_max_chunks_CFG(struct cmd_context *cm
return max_chunks;
}
int get_default_allocation_vdo_use_metadata_hints_CFG(struct cmd_context *cmd, struct profile *profile)
{
unsigned maj, min;
if ((sscanf(cmd->kernel_vsn, "%u.%u", &maj, &min) == 2) &&
((maj > 6) || ((maj == 6) && (min > 8)))) {
/* With kernels > 6.8 this feature is considered deprecated.
* Return false as default setting. */
return false;
}
/* With older kernels use the configured default setting. */
return DEFAULT_VDO_USE_METADATA_HINTS;
}

View File

@@ -312,8 +312,6 @@ int get_default_allocation_cache_pool_chunk_size_CFG(struct cmd_context *cmd, st
const char *get_default_allocation_cache_policy_CFG(struct cmd_context *cmd, struct profile *profile);
#define get_default_unconfigured_allocation_cache_policy_CFG NULL
uint64_t get_default_allocation_cache_pool_max_chunks_CFG(struct cmd_context *cmd, struct profile *profile);
int get_default_allocation_vdo_use_metadata_hints_CFG(struct cmd_context *cmd, struct profile *profile);
#define get_default_unconfigured_allocation_vdo_use_metadata_hints_CFG NULL
int get_default_metadata_pvmetadatasize_CFG(struct cmd_context *cmd, struct profile *profile);
#define get_default_unconfigured_metadata_pvmetadatasize_CFG NULL

View File

@@ -728,8 +728,8 @@ cfg(allocation_vdo_use_deduplication_CFG, "vdo_use_deduplication", allocation_CF
"Deduplication may be disabled in instances where data is not expected\n"
"to have good deduplication rates but compression is still desired.\n")
cfg_runtime(allocation_vdo_use_metadata_hints_CFG, "vdo_use_metadata_hints", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, VDO_1ST_VSN, vsn(2, 3, 27), NULL,
"Deprecated enablement whether VDO volume should tag its latency-critical\n"
cfg(allocation_vdo_use_metadata_hints_CFG, "vdo_use_metadata_hints", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_VDO_USE_METADATA_HINTS, VDO_1ST_VSN, NULL, 0, NULL,
"Enables or disables whether VDO volume should tag its latency-critical\n"
"writes with the REQ_SYNC flag. Some device mapper targets such as dm-raid5\n"
"process writes with this flag at a higher priority.\n")
@@ -821,8 +821,8 @@ cfg(allocation_vdo_physical_threads_CFG, "vdo_physical_threads", allocation_CFG_
"vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be\n"
"either all zero or all non-zero.\n")
cfg(allocation_vdo_write_policy_CFG, "vdo_write_policy", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_VDO_WRITE_POLICY, VDO_1ST_VSN, NULL, vsn(2, 3, 27), NULL,
"Deprecated option to specify the write policy with these accepted values:\n"
cfg(allocation_vdo_write_policy_CFG, "vdo_write_policy", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_VDO_WRITE_POLICY, VDO_1ST_VSN, NULL, 0, NULL,
"Specifies the write policy:\n"
"auto - VDO will check the storage device and determine whether it supports flushes.\n"
" If it does, VDO will run in async mode, otherwise it will run in sync mode.\n"
"sync - Writes are acknowledged only after data is stably written.\n"
@@ -1189,14 +1189,6 @@ cfg(global_sanlock_lv_extend_CFG, "sanlock_lv_extend", global_CFG_SECTION, CFG_D
"and can cause lvcreate to fail. Applicable only if LVM is compiled\n"
"with lockd support\n")
cfg(global_sanlock_align_size_CFG, "sanlock_align_size", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_SANLOCK_ALIGN_SIZE, vsn(2, 3, 27), NULL, 0, NULL,
"The sanlock lease size in MiB to use on disks with a 4K sector size.\n"
"Possible values are 1,2,4,8. The default is 8, which supports up to\n"
"2000 hosts (and max host_id 2000.) Smaller values support smaller\n"
"numbers of max hosts (and max host_ids): 250, 500, 1000, 2000 for\n"
"lease sizes 1,2,4,8. Disks with 512 byte sectors always use 1MiB\n"
"leases and support 2000 hosts, and are not affected by this setting.\n")
cfg(global_lvmlockctl_kill_command_CFG, "lvmlockctl_kill_command", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, "", vsn(2, 3, 12), NULL, 0, NULL,
"The command that lvmlockctl --kill should use to force LVs offline.\n"
"The lvmlockctl --kill command is run when a shared VG has lost\n"
@@ -1387,13 +1379,6 @@ cfg(global_io_memory_size_CFG, "io_memory_size", global_CFG_SECTION, CFG_DEFAULT
"This value should usually not be decreased from the default; setting\n"
"it too low can result in lvm failing to read VGs.\n")
cfg(global_vg_copy_internal_CFG, "vg_copy_internal", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_VG_COPY_INTERNAL, vsn(2, 3, 27), NULL, 0, NULL,
"The method that lvm uses for internal VG structure copying.\n"
"\"binary\" copies between binary structures to improve performance\n"
"with large metadata (experimental.) \"text\" exports a binary\n"
"struct to text format, and reimports text to a new binary\n"
"structure (traditional.)\n")
cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_UDEV_SYNC, vsn(2, 2, 51), NULL, 0, NULL,
"Use udev notifications to synchronize udev and LVM.\n"
"The --noudevsync option overrides this setting.\n"
@@ -1439,14 +1424,11 @@ cfg(activation_use_linear_target_CFG, "use_linear_target", activation_CFG_SECTIO
cfg(activation_reserved_stack_CFG, "reserved_stack", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RESERVED_STACK, vsn(1, 0, 0), NULL, 0, NULL,
"Stack size in KiB to reserve for use while devices are suspended.\n"
"Insufficient reserve risks I/O deadlock during device suspension.\n"
"Value 0 disables memory locking.\n")
"Insufficient reserve risks I/O deadlock during device suspension.\n")
cfg(activation_reserved_memory_CFG, "reserved_memory", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RESERVED_MEMORY, vsn(1, 0, 0), NULL, 0, NULL,
"Memory size in KiB to reserve for use while devices are suspended.\n"
"Insufficient reserve risks I/O deadlock during device suspension.\n"
"Value 0 disables memory locking.\n")
"Insufficient reserve risks I/O deadlock during device suspension.\n")
cfg(activation_process_priority_CFG, "process_priority", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_PROCESS_PRIORITY, vsn(1, 0, 0), NULL, 0, NULL,
"Nice value used while devices are suspended.\n"
@@ -2282,9 +2264,8 @@ cfg_array(local_extra_system_ids_CFG, "extra_system_ids", local_CFG_SECTION, CFG
"correct usage and possible dangers.\n")
cfg(local_host_id_CFG, "host_id", local_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(2, 2, 124), NULL, 0, NULL,
"The sanlock host_id used by lvmlockd. This must be unique among all the hosts\n"
"using shared VGs with sanlock. Accepted values are 1-2000, except when sanlock_align_size\n"
"is configured to 1, 2 or 4, which correspond to max host_id values of 250, 500, or 1000.\n"
"Applicable only if LVM is compiled with support for lvmlockd+sanlock.\n")
"The lvmlockd sanlock host_id.\n"
"This must be unique among all hosts, and must be between 1 and 2000.\n"
"Applicable only if LVM is compiled with lockd support\n")
cfg(CFG_COUNT, NULL, root_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(0, 0, 0), NULL, 0, NULL, NULL)

View File

@@ -72,7 +72,6 @@
#define DEFAULT_USE_AIO 1
#define DEFAULT_SANLOCK_LV_EXTEND_MB 256
#define DEFAULT_SANLOCK_ALIGN_SIZE 8 /* in MiB, applies to 4K disks only */
#define DEFAULT_MIRRORLOG MIRROR_LOG_DISK
#define DEFAULT_MIRROR_LOG_FAULT_POLICY "allocate"
@@ -344,6 +343,4 @@
#define DEFAULT_DEVICESFILE_BACKUP_LIMIT 50
#define DEFAULT_VG_COPY_INTERNAL "binary"
#endif /* _LVM_DEFAULTS_H */

View File

@@ -502,14 +502,9 @@ int dev_get_partition_number(struct device *dev, int *num)
}
/* See linux/genhd.h and fs/partitions/msdos */
#define PART_MSDOS_MAGIC 0xAA55
#define PART_MSDOS_MAGIC_OFFSET UINT64_C(0x1FE)
#define PART_MSDOS_OFFSET UINT64_C(0x1BE)
#define PART_MSDOS_TYPE_GPT_PMBR UINT8_C(0xEE)
#define PART_GPT_HEADER_OFFSET_LBA 0x01
#define PART_GPT_MAGIC 0x5452415020494645UL /* "EFI PART" string */
#define PART_GPT_ENTRIES_FIELDS_OFFSET UINT64_C(0x48)
#define PART_MAGIC 0xAA55
#define PART_MAGIC_OFFSET UINT64_C(0x1FE)
#define PART_OFFSET UINT64_C(0x1BE)
struct partition {
uint8_t boot_ind;
@@ -575,71 +570,12 @@ static int _is_partitionable(struct dev_types *dt, struct device *dev)
return 1;
}
static int _has_gpt_partition_table(struct device *dev)
{
unsigned int pbs, lbs;
uint64_t entries_start;
uint32_t nr_entries, sz_entry, i;
struct {
uint64_t magic;
/* skip fields we're not interested in */
uint8_t skip[PART_GPT_ENTRIES_FIELDS_OFFSET - sizeof(uint64_t)];
uint64_t part_entries_lba;
uint32_t nr_part_entries;
uint32_t sz_part_entry;
} __attribute__((packed)) gpt_header;
struct {
uint64_t part_type_guid;
/* not interested in any other fields */
} __attribute__((packed)) gpt_part_entry;
if (!dev_get_direct_block_sizes(dev, &pbs, &lbs))
return_0;
if (!dev_read_bytes(dev, PART_GPT_HEADER_OFFSET_LBA * lbs, sizeof(gpt_header), &gpt_header))
return_0;
/* the gpt table is always written using LE on disk */
if (le64_to_cpu(gpt_header.magic) != PART_GPT_MAGIC)
return_0;
entries_start = le64_to_cpu(gpt_header.part_entries_lba) * lbs;
nr_entries = le32_to_cpu(gpt_header.nr_part_entries);
sz_entry = le32_to_cpu(gpt_header.sz_part_entry);
for (i = 0; i < nr_entries; i++) {
if (!dev_read_bytes(dev, entries_start + i * sz_entry,
sizeof(gpt_part_entry), &gpt_part_entry))
return_0;
/* just check if the guid is nonzero, no need to call le64_to_cpu here */
if (gpt_part_entry.part_type_guid)
return 1;
}
return 0;
}
/*
* Check if there's a partition table present on the device dev, either msdos or gpt.
* Returns:
*
* 1 - if it has a partition table with at least one real partition defined
* (note: the gpt's PMBR partition alone does not count as a real partition)
*
* 0 - if it has no partition table,
* - or if it does have a partition table, but without any partition defined,
* - or on error
*/
static int _has_partition_table(struct device *dev)
{
int ret = 0;
unsigned p;
struct {
uint8_t skip[PART_MSDOS_OFFSET];
uint8_t skip[PART_OFFSET];
struct partition part[4];
uint16_t magic;
} __attribute__((packed)) buf; /* sizeof() == SECTOR_SIZE */
@@ -650,7 +586,7 @@ static int _has_partition_table(struct device *dev)
/* FIXME Check for other types of partition table too */
/* Check for msdos partition table */
if (buf.magic == xlate16(PART_MSDOS_MAGIC)) {
if (buf.magic == xlate16(PART_MAGIC)) {
for (p = 0; p < 4; ++p) {
/* Table is invalid if boot indicator not 0 or 0x80 */
if (buf.part[p].boot_ind & 0x7f) {
@@ -658,20 +594,10 @@ static int _has_partition_table(struct device *dev)
break;
}
/* Must have at least one non-empty partition */
if (buf.part[p].nr_sects) {
/*
* If this is GPT's PMBR, then also
* check for gpt partition table.
*/
if (buf.part[p].sys_ind == PART_MSDOS_TYPE_GPT_PMBR && !ret)
ret = _has_gpt_partition_table(dev);
else
ret = 1;
}
if (buf.part[p].nr_sects)
ret = 1;
}
} else
/* Check for gpt partition table. */
ret = _has_gpt_partition_table(dev);
}
return ret;
}
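The msdos branch above can be condensed into a standalone sketch. classify_mbr is an illustrative name, and the logic is deliberately simplified (it bails out entirely on a bad boot indicator rather than keeping earlier results): it scans the four primary entries at 0x1BE, counts nonzero-length partitions, and flags the case where the only entry is GPT's protective MBR (type 0xEE), which by itself is not a real partition.

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 for a real msdos partition, 2 when only a protective
 * MBR entry is present (caller would then check the GPT header),
 * 0 for no table / empty table / invalid table. */
static int classify_mbr(const uint8_t *sector)
{
	int ret = 0;

	if (sector[0x1FE] != 0x55 || sector[0x1FF] != 0xAA)
		return 0; /* no msdos magic at offset 0x1FE */

	for (int p = 0; p < 4; p++) {
		const uint8_t *e = sector + 0x1BE + 16 * p;
		/* 32-bit little-endian sector count at entry offset 12 */
		uint32_t nr_sects = e[12] | ((uint32_t)e[13] << 8) |
				    ((uint32_t)e[14] << 16) |
				    ((uint32_t)e[15] << 24);

		if (e[0] & 0x7f)
			return 0; /* boot indicator must be 0 or 0x80 */
		if (!nr_sects)
			continue; /* empty slot */
		if (e[4] == 0xEE && ret == 0)
			ret = 2; /* PMBR alone: defer to GPT check */
		else
			ret = 1; /* genuine msdos partition */
	}
	return ret;
}
```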
@@ -943,7 +869,6 @@ int fs_get_blkid(const char *pathname, struct fs_info *fsi)
const char *str = "";
size_t len = 0;
uint64_t fslastblock = 0;
uint64_t fssize = 0;
unsigned int fsblocksize = 0;
int rc;
@@ -994,25 +919,10 @@ int fs_get_blkid(const char *pathname, struct fs_info *fsi)
if (!blkid_probe_lookup_value(probe, "FSBLOCKSIZE", &str, &len) && len)
fsblocksize = (unsigned int)atoi(str);
if (!blkid_probe_lookup_value(probe, "FSSIZE", &str, &len) && len)
fssize = strtoull(str, NULL, 0);
blkid_free_probe(probe);
if (fslastblock && fsblocksize)
fsi->fs_last_byte = fslastblock * fsblocksize;
else if (fssize) {
fsi->fs_last_byte = fssize;
/*
* For swap, there's no FSLASTBLOCK reported by blkid. We do have FSSIZE reported though.
* The last block is then calculated as:
* FSSIZE (== size of the usable swap area) + FSBLOCKSIZE (== size of the swap header)
*/
if (!strcmp(fsi->fstype, "swap"))
fsi->fs_last_byte += fsblocksize;
}
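The fallback above can be sketched as a pure function, assuming the same inputs blkid reports (fs_last_byte is an illustrative name): prefer FSLASTBLOCK * FSBLOCKSIZE, and for swap, where blkid gives no FSLASTBLOCK, use FSSIZE plus one block for the swap header.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Compute the byte offset just past the filesystem, mirroring the
 * branch order above: last block wins, then FSSIZE, with the swap
 * header block added back for swap. Returns 0 when nothing usable
 * was reported. */
static uint64_t fs_last_byte(const char *fstype, uint64_t fslastblock,
			     unsigned fsblocksize, uint64_t fssize)
{
	if (fslastblock && fsblocksize)
		return fslastblock * fsblocksize;
	if (fssize) {
		if (!strcmp(fstype, "swap"))
			return fssize + fsblocksize; /* usable area + header */
		return fssize;
	}
	return 0;
}
```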
log_debug("libblkid TYPE %s BLOCK_SIZE %d FSLASTBLOCK %llu FSBLOCKSIZE %u fs_last_byte %llu",
fsi->fstype, fsi->fs_block_size_bytes, (unsigned long long)fslastblock, fsblocksize,


@@ -21,7 +21,7 @@
#include <fcntl.h>
#define DEV_REGULAR 0x00000002 /* Regular file? */
#define DEV_ALLOCATED 0x00000004 /* malloc used */
#define DEV_ALLOCED 0x00000004 /* malloc used */
#define DEV_OPENED_RW 0x00000008 /* Opened RW */
#define DEV_OPENED_EXCL 0x00000010 /* Opened EXCL */
#define DEV_O_DIRECT 0x00000020 /* Use O_DIRECT */


@@ -201,13 +201,6 @@ int fs_get_info(struct cmd_context *cmd, struct logical_volume *lv,
if (!include_mount)
return 1;
/*
* Note: used swap devices are not considered as mount points,
* hence they're not listed in /etc/mtab, we'd need to read the
* /proc/swaps instead. We don't need it at this moment though,
* but if we do once, read the /proc/swaps here if fsi->fstype == "swap".
*/
if (!(fme = setmntent("/etc/mtab", "r")))
return_0;


@@ -647,36 +647,15 @@ static int _vg_write_raw(struct format_instance *fid, struct volume_group *vg,
*
* 'Lazy' creation of such VG might improve performance, but we
* lose important validation that written metadata can be parsed. */
if (!(cft = config_tree_from_string_without_dup_node_check(write_buf))) {
log_error("Error parsing metadata for VG %s.", vg->name);
goto out;
}
release_vg(vg->vg_precommitted);
vg->vg_precommitted = NULL;
if (!vg->cmd->vg_copy_binary) {
if (!(cft = config_tree_from_string_without_dup_node_check(write_buf))) {
log_error("Error parsing metadata for VG %s.", vg->name);
goto out;
}
vg->vg_precommitted = import_vg_from_config_tree(vg->cmd, vg->fid, cft);
dm_config_destroy(cft);
} else {
vg->vg_precommitted = vg_copy_struct(vg);
if (!vg->vg_precommitted) {
log_debug("vg_copy_struct failed, trying text import.");
if (!(cft = config_tree_from_string_without_dup_node_check(write_buf))) {
log_error("Error parsing metadata for VG %s.", vg->name);
goto out;
}
vg->vg_precommitted = import_vg_from_config_tree(vg->cmd, vg->fid, cft);
dm_config_destroy(cft);
}
}
if (!vg->vg_precommitted) {
log_error("Failed to copy vg struct.");
vg->vg_precommitted = import_vg_from_config_tree(vg->cmd, vg->fid, cft);
dm_config_destroy(cft);
if (!vg->vg_precommitted)
goto_out;
}
log_debug("Saved vg struct %p as precommitted", vg->vg_precommitted);
fidtc->checksum = checksum = calc_crc(INITIAL_CRC, (uint8_t *)write_buf, new_size);
}


@@ -376,8 +376,7 @@ static int _read_segment(struct cmd_context *cmd,
struct format_instance *fid,
struct dm_pool *mem,
struct logical_volume *lv, const struct dm_config_node *sn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
uint32_t area_count = 0u;
struct lv_segment *seg;
@@ -450,7 +449,7 @@ static int _read_segment(struct cmd_context *cmd,
}
if (seg->segtype->ops->text_import &&
!seg->segtype->ops->text_import(seg, sn_child, pv_hash, lv_hash))
!seg->segtype->ops->text_import(seg, sn_child, pv_hash))
return_0;
/* Optional tags */
@@ -552,8 +551,7 @@ static int _read_segments(struct cmd_context *cmd,
struct format_instance *fid,
struct dm_pool *mem,
struct logical_volume *lv, const struct dm_config_node *lvn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
const struct dm_config_node *sn;
int count = 0, seg_count;
@@ -564,7 +562,7 @@ static int _read_segments(struct cmd_context *cmd,
* All sub-sections are assumed to be segments.
*/
if (!sn->v) {
if (!_read_segment(cmd, fmt, fid, mem, lv, sn, pv_hash, lv_hash))
if (!_read_segment(cmd, fmt, fid, mem, lv, sn, pv_hash))
return_0;
count++;
@@ -981,7 +979,7 @@ static int _read_lvsegs(struct cmd_context *cmd,
memcpy(&lv->lvid.id[0], &lv->vg->id, sizeof(lv->lvid.id[0]));
if (!_read_segments(cmd, fmt, fid, mem, lv, lvn, pv_hash, lv_hash))
if (!_read_segments(cmd, fmt, fid, mem, lv, lvn, pv_hash))
return_0;
lv->size = (uint64_t) lv->le_count * (uint64_t) vg->extent_size;
@@ -1085,7 +1083,7 @@ static struct volume_group *_read_vg(struct cmd_context *cmd,
* The lv hash memorizes the lv section names -> lv
* structures.
*/
if (!(lv_hash = dm_hash_create(8181))) {
if (!(lv_hash = dm_hash_create(1023))) {
log_error("Couldn't create lv hash table.");
goto bad;
}


@@ -37,8 +37,7 @@ static void _integrity_display(const struct lv_segment *seg)
static int _integrity_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct integrity_settings *set;
struct logical_volume *origin_lv = NULL;
@@ -59,7 +58,7 @@ static int _integrity_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "origin", &origin_name))
return SEG_LOG_ERROR("origin must be a string in");
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_name)))
if (!(origin_lv = find_lv(seg->lv->vg, origin_name)))
return SEG_LOG_ERROR("Unknown LV specified for integrity origin %s in", origin_name);
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))
@@ -104,7 +103,7 @@ static int _integrity_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "meta_dev", &meta_dev))
return SEG_LOG_ERROR("meta_dev must be a string in");
if (!(meta_lv = dm_hash_lookup(lv_hash, meta_dev)))
if (!(meta_lv = find_lv(seg->lv->vg, meta_dev)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for integrity in", meta_dev);
}


@@ -18,7 +18,6 @@
#include "daemons/lvmlockd/lvmlockd-client.h"
#include <mntent.h>
#include <sys/ioctl.h>
static daemon_handle _lvmlockd;
static const char *_lvmlockd_socket = NULL;
@@ -494,7 +493,7 @@ static int _lockd_request(struct cmd_context *cmd,
static int _create_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg,
const char *lock_lv_name, int num_mb)
{
uint64_t lv_size_bytes;
uint32_t lv_size_bytes;
uint32_t extent_bytes;
uint32_t total_extents;
struct logical_volume *lv;
@@ -512,24 +511,14 @@ static int _create_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg,
.zero = 1,
};
/*
* Make the lvmlock lv a multiple of 8 MB, i.e. a multiple of any
* sanlock align_size, to avoid having unused space at the end of the
* lvmlock LV.
*/
if (num_mb % 8)
num_mb += (8 - (num_mb % 8));
lv_size_bytes = (uint64_t)num_mb * ONE_MB_IN_BYTES; /* size of sanlock LV in bytes */
lv_size_bytes = num_mb * ONE_MB_IN_BYTES; /* size of sanlock LV in bytes */
extent_bytes = vg->extent_size * SECTOR_SIZE; /* size of one extent in bytes */
total_extents = dm_div_up(lv_size_bytes, extent_bytes); /* number of extents in sanlock LV */
lp.extents = total_extents;
lv_size_bytes = (uint64_t)total_extents * extent_bytes;
lv_size_bytes = total_extents * extent_bytes;
num_mb = lv_size_bytes / ONE_MB_IN_BYTES;
log_debug("Creating lvmlock LV for sanlock with size %um %llub %u extents",
num_mb, (unsigned long long)lv_size_bytes, lp.extents);
log_debug("Creating lvmlock LV for sanlock with size %um %ub %u extents", num_mb, lv_size_bytes, lp.extents);
dm_list_init(&lp.tags);
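The sizing math above is small enough to isolate. This is a sketch with illustrative names (div_up mirrors dm_div_up): round the requested size up to a multiple of 8 MiB, which is a multiple of every sanlock align_size, then round up to whole VG extents so nothing is left unusable at the end of the lvmlock LV.

```c
#include <assert.h>
#include <stdint.h>

#define ONE_MB (1024u * 1024u)

/* Round-up integer division, as dm_div_up does. */
static uint64_t div_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/* Final lvmlock LV size in bytes for a requested size in MiB and a
 * VG extent size in bytes. */
static uint64_t lvmlock_size_bytes(unsigned num_mb, uint32_t extent_bytes)
{
	if (num_mb % 8)
		num_mb += 8 - (num_mb % 8); /* next multiple of 8 MiB */

	uint64_t want = (uint64_t)num_mb * ONE_MB;
	return div_up(want, extent_bytes) * extent_bytes;
}
```

Note the `(uint64_t)` cast before multiplying, which is exactly the overflow fix visible in the hunk above: a 32-bit `num_mb * ONE_MB_IN_BYTES` wraps for large sizes.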
@@ -558,9 +547,11 @@ static int _remove_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
return 1;
}
static int _extend_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg, unsigned extend_mb, char *lvmlock_path)
static int _extend_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg, unsigned extend_mb)
{
struct device *dev;
char path[PATH_MAX];
char *name;
uint64_t old_size_bytes;
uint64_t new_size_bytes;
uint32_t extend_bytes;
@@ -603,14 +594,23 @@ static int _extend_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg,
new_size_bytes = lv->size * SECTOR_SIZE;
if (!(name = dm_build_dm_name(lv->vg->cmd->mem, lv->vg->name, lv->name, NULL)))
return_0;
if (dm_snprintf(path, sizeof(path), "%s/%s", dm_dir(), name) < 0) {
log_error("Extend sanlock LV %s name too long - extended size not zeroed.",
display_lvname(lv));
return 0;
}
log_debug("Extend sanlock LV zeroing %u bytes from offset %llu to %llu",
(uint32_t)(new_size_bytes - old_size_bytes),
(unsigned long long)old_size_bytes,
(unsigned long long)new_size_bytes);
log_debug("Zeroing %u MiB on extended internal lvmlock LV...", extend_mb);
log_print_unless_silent("Zeroing %u MiB on extended internal lvmlock LV...", extend_mb);
if (!(dev = dev_cache_get(cmd, lvmlock_path, NULL))) {
if (!(dev = dev_cache_get(cmd, path, NULL))) {
log_error("Extend sanlock LV %s cannot find device.", display_lvname(lv));
return 0;
}
@@ -653,27 +653,16 @@ static int _refresh_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
{
struct logical_volume *lv = vg->sanlock_lv;
daemon_reply reply;
char *lvmlock_name;
char lvmlock_path[PATH_MAX];
unsigned extend_mb;
uint64_t lv_size_bytes;
uint64_t dm_size_bytes;
int result;
int ret;
int fd;
if (!_use_lvmlockd)
return 1;
if (!_lvmlockd_connected)
return 0;
if (!lv) {
log_error("No internal lvmlock LV found.");
return 0;
}
extend_mb = (unsigned) find_config_tree_int(cmd, global_sanlock_lv_extend_CFG, NULL);
/*
@@ -683,46 +672,13 @@ int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
if (!extend_mb)
return 1;
lv_size_bytes = lv->size * SECTOR_SIZE;
if (!(lvmlock_name = dm_build_dm_name(cmd->mem, vg->name, lv->name, NULL)))
return_0;
if (dm_snprintf(lvmlock_path, sizeof(lvmlock_path), "%s/%s", dm_dir(), lvmlock_name) < 0) {
log_error("Handle sanlock LV %s path too long.", lvmlock_name);
return 0;
}
fd = open(lvmlock_path, O_RDONLY);
if (fd < 0) {
log_error("Cannot open sanlock LV %s.", lvmlock_path);
return 0;
}
if (ioctl(fd, BLKGETSIZE64, &dm_size_bytes) < 0) {
log_error("Cannot get size of sanlock LV %s.", lvmlock_path);
if (close(fd))
stack;
return 0;
}
if (close(fd))
stack;
/*
* Another host may have extended the lvmlock LV.
* If so the lvmlock LV size in metadata will be
* larger than our active lvmlock LV, and we need
* to refresh our lvmlock LV to use the new space.
* Another host may have extended the lvmlock LV already.
* Refresh so that we'll find the new space they added
* when we search for new space.
*/
if (lv_size_bytes > dm_size_bytes) {
log_debug("Refresh sanlock lv %llu dm %llu",
(unsigned long long)lv_size_bytes,
(unsigned long long)dm_size_bytes);
if (!_refresh_sanlock_lv(cmd, vg))
return 0;
}
if (!_refresh_sanlock_lv(cmd, vg))
return 0;
/*
* Ask lvmlockd/sanlock to look for an unused lock.
@@ -730,7 +686,6 @@ int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
reply = _lockd_send("find_free_lock",
"pid = " FMTd64, (int64_t) getpid(),
"vg_name = %s", vg->name,
"lv_size_bytes = " FMTd64, (int64_t) lv_size_bytes,
NULL);
if (!_lockd_result(reply, &result, NULL)) {
@@ -741,7 +696,7 @@ int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
/* No space on the lvmlock lv for a new lease. */
if (result == -EMSGSIZE)
ret = _extend_sanlock_lv(cmd, vg, extend_mb, lvmlock_path);
ret = _extend_sanlock_lv(cmd, vg, extend_mb);
daemon_reply_destroy(reply);
@@ -867,9 +822,7 @@ static int _init_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg, in
const char *opts = NULL;
struct pv_list *pvl;
uint32_t sector_size = 0;
uint32_t align_size = 0;
unsigned int physical_block_size, logical_block_size;
int host_id;
int num_mb = 0;
int result;
int ret;
@@ -896,54 +849,11 @@ static int _init_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg, in
log_debug("Using sector size %u for sanlock LV", sector_size);
host_id = find_config_tree_int(cmd, local_host_id_CFG, NULL);
/*
* Starting size of lvmlock LV is 256MB/512MB/1GB depending
* on sector_size/align_size, and max valid host_id depends
* on sector_size/align_size.
*/
if (sector_size == 4096) {
align_size = find_config_tree_int(cmd, global_sanlock_align_size_CFG, NULL);
if (align_size == 1) {
num_mb = 256;
if (host_id < 1 || host_id > 250) {
log_error("Invalid host_id %d, use 1-250 (sanlock_align_size is 1MiB).", host_id);
return 0;
}
} else if (align_size == 2) {
num_mb = 512;
if (host_id < 1 || host_id > 500) {
log_error("Invalid host_id %d, use 1-500 (sanlock_align_size is 2MiB).", host_id);
return 0;
}
} else if (align_size == 4) {
num_mb = 1024;
if (host_id < 1 || host_id > 1000) {
log_error("Invalid host_id %d, use 1-1000 (sanlock_align_size is 4MiB).", host_id);
return 0;
}
} else if (align_size == 8) {
num_mb = 1024;
if (host_id < 1 || host_id > 2000) {
log_error("Invalid host_id %d, use 1-2000 (sanlock_align_size is 8MiB).", host_id);
return 0;
}
} else {
log_error("Invalid sanlock_align_size %u, use 1,2,4,8.", align_size);
return 0;
}
} else if (sector_size == 512) {
num_mb = 256;
if (host_id < 1 || host_id > 2000) {
log_error("Invalid host_id %d, use 1-2000.", host_id);
return 0;
}
} else {
log_error("Unsupported sector size %u.", sector_size);
return 0;
/* Base starting size of sanlock LV is 256MB/1GB for 512/4K sectors */
switch (sector_size) {
case 512: num_mb = 256; break;
case 4096: num_mb = 1024; break;
default: log_error("Unknown sector size %u.", sector_size); return 0;
}
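The validation chain above encodes a small table, sketched here under illustrative names (sanlock_limits is not an LVM function): on 4K-sector devices the configured sanlock align_size picks both the initial lvmlock LV size and the highest usable host_id, while 512-byte sectors always get 256 MiB and host_ids up to 2000.

```c
#include <assert.h>

/* Map (sector_size, align_size in MiB) to the starting lvmlock LV
 * size and the maximum valid host_id, per the checks above.
 * Returns 0 for unsupported combinations. */
static int sanlock_limits(unsigned sector_size, unsigned align_size,
			  unsigned *num_mb, unsigned *max_host_id)
{
	if (sector_size == 512) {
		*num_mb = 256;
		*max_host_id = 2000;
		return 1;
	}
	if (sector_size != 4096)
		return 0;
	switch (align_size) {
	case 1: *num_mb = 256;  *max_host_id = 250;  return 1;
	case 2: *num_mb = 512;  *max_host_id = 500;  return 1;
	case 4: *num_mb = 1024; *max_host_id = 1000; return 1;
	case 8: *num_mb = 1024; *max_host_id = 2000; return 1;
	default: return 0; /* sanlock_align_size must be 1,2,4,8 */
	}
}
```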
/*
@@ -981,7 +891,6 @@ static int _init_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg, in
"vg_name = %s", vg->name,
"vg_lock_type = %s", "sanlock",
"vg_lock_args = %s", vg->sanlock_lv->name,
"align_mb = " FMTd64, (int64_t) align_size,
"opts = %s", opts ?: "none",
NULL);
@@ -2666,7 +2575,7 @@ int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
}
retry:
log_debug("lockd LV %s/%s mode %s uuid %s %s", vg->name, lv_name, mode, lv_uuid, opts ?: "");
log_debug("lockd LV %s/%s mode %s uuid %s", vg->name, lv_name, mode, lv_uuid);
/* Pass PV list for IDM lock type */
if (!strcmp(vg->lock_type, "idm")) {
@@ -3269,7 +3178,7 @@ int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg, struct logic
} else if (seg_is_thin(lp)) {
if ((seg_is_thin_volume(lp) && !lp->create_pool) ||
(!seg_is_thin_volume(lp) && lp->origin_name)) {
(!seg_is_thin_volume(lp) && lp->snapshot)) {
struct lv_list *lvl;
/*
@@ -3277,13 +3186,12 @@ int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg, struct logic
* their own lock but use the pool lock. If an lv does not
* use its own lock, its lock_args is set to NULL.
*/
log_debug("lockd_init_lv thin %s locking thin pool", display_lvname(lv));
if (!(lvl = find_lv_in_vg(vg, lp->pool_name))) {
log_error("Failed to find thin pool %s/%s", vg->name, lp->pool_name);
return 0;
}
if (!lockd_lv(cmd, lvl->lv, "ex", 0)) {
if (!lockd_lv(cmd, lvl->lv, "ex", LDLV_PERSISTENT)) {
log_error("Failed to lock thin pool %s/%s", vg->name, lp->pool_name);
return 0;
}


@@ -258,7 +258,7 @@ static int _get_pid_starttime(int *pid, unsigned long long *starttime)
/*
* Support envvar LVM_LOG_FILE_EPOCH and allow to attach
* extra keyword (consist of up to 32 alpha chars) to
* extra keyword (consist of upto 32 alpha chars) to
* opened log file. After this 'epoch' word pid and starttime
* (in kernel units, read from /proc/self/stat)
* is automatically attached.
@@ -591,7 +591,6 @@ static void _vprint_log(int level, const char *file, int line, int dm_errno_or_c
(_log_report.report && !log_bypass_report && (use_stderr || (level <=_LOG_WARN))) ||
log_once) {
va_copy(ap, orig_ap);
/* coverity[format_string_injection] our code expectes this behavior. */
n = vsnprintf(message, sizeof(message), trformat, ap);
va_end(ap);


@@ -1558,6 +1558,8 @@ bad:
int lv_set_creation(struct logical_volume *lv,
const char *hostname, uint64_t timestamp)
{
const char *hn;
if (!hostname) {
if (!_utsinit) {
if (uname(&_utsname)) {
@@ -1571,7 +1573,17 @@ int lv_set_creation(struct logical_volume *lv,
hostname = _utsname.nodename;
}
lv->hostname = dm_pool_strdup(lv->vg->vgmem, hostname);
if (!(hn = dm_hash_lookup(lv->vg->hostnames, hostname))) {
if (!(hn = dm_pool_strdup(lv->vg->vgmem, hostname))) {
log_error("Failed to duplicate hostname");
return 0;
}
if (!dm_hash_insert(lv->vg->hostnames, hostname, (void*)hn))
return_0;
}
lv->hostname = hn;
lv->timestamp = timestamp ? : (uint64_t) time(NULL);
return 1;
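The hash lookup/insert pair above is a string-interning pattern: creation hostnames repeat across many LVs, so the VG keeps one shared copy per name instead of a per-LV strdup. A minimal sketch of the idea, using a fixed linear pool rather than LVM's dm_hash table (intern and the pool are illustrative names):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_NAMES 64
static const char *names[MAX_NAMES];
static int nr_names;

/* Return the one shared copy of s, allocating it on first use.
 * Returns NULL on allocation failure or a full pool. */
static const char *intern(const char *s)
{
	for (int i = 0; i < nr_names; i++)
		if (!strcmp(names[i], s))
			return names[i]; /* reuse existing copy */

	if (nr_names == MAX_NAMES)
		return NULL;

	size_t len = strlen(s) + 1;
	char *copy = malloc(len);
	if (!copy)
		return NULL;
	memcpy(copy, s, len);
	return names[nr_names++] = copy;
}
```

Callers compare interned names by pointer, and repeated inserts cost one string compare instead of one allocation, which is the point of the dm_hash cache above.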


@@ -49,10 +49,10 @@ typedef enum {
#define RAID_METADATA_AREA_LEN 1
/* FIXME These ended up getting used differently from first intended. Refactor. */
/* Only one of A_CONTIGUOUS_TO_LVSEG, A_CLING_TO_LVSEG, A_CLING_TO_ALLOCATED may be set */
/* Only one of A_CONTIGUOUS_TO_LVSEG, A_CLING_TO_LVSEG, A_CLING_TO_ALLOCED may be set */
#define A_CONTIGUOUS_TO_LVSEG 0x01 /* Must be contiguous to an existing segment */
#define A_CLING_TO_LVSEG 0x02 /* Must use same disks as existing LV segment */
#define A_CLING_TO_ALLOCATED 0x04 /* Must use same disks as already-allocated segment */
#define A_CLING_TO_ALLOCED 0x04 /* Must use same disks as already-allocated segment */
#define A_CLING_BY_TAGS 0x08 /* Must match tags against existing segment */
#define A_CAN_SPLIT 0x10
@@ -1044,7 +1044,7 @@ struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
if (segtype_is_raid_with_meta(segtype) &&
!(seg->meta_areas = dm_pool_zalloc(mem, areas_sz))) {
dm_pool_free(mem, seg); /* frees everything allocated since seg */
dm_pool_free(mem, seg); /* frees everything alloced since seg */
return_NULL;
}
@@ -1846,7 +1846,7 @@ int lv_remove(struct logical_volume *lv)
/*
* A set of contiguous physical extents allocated
*/
struct allocated_area {
struct alloced_area {
struct dm_list list;
struct physical_volume *pv;
@@ -1897,7 +1897,7 @@ struct alloc_handle {
* Contains area_count lists of areas allocated to data stripes
* followed by log_area_count lists of areas allocated to log stripes.
*/
struct dm_list allocated_areas[];
struct dm_list alloced_areas[];
};
/*
@@ -2001,7 +2001,7 @@ static void _init_alloc_parms(struct alloc_handle *ah,
*/
if ((alloc_parms->alloc == ALLOC_CLING) ||
(alloc_parms->alloc == ALLOC_CLING_BY_TAGS)) {
alloc_parms->flags |= A_CLING_TO_ALLOCATED;
alloc_parms->flags |= A_CLING_TO_ALLOCED;
alloc_parms->flags |= A_POSITIONAL_FILL;
}
@@ -2021,17 +2021,17 @@ static void _init_alloc_parms(struct alloc_handle *ah,
if (ah->maximise_cling &&
(alloc_parms->alloc == ALLOC_NORMAL) &&
(allocated != alloc_parms->extents_still_needed))
alloc_parms->flags |= A_CLING_TO_ALLOCATED;
alloc_parms->flags |= A_CLING_TO_ALLOCED;
if (can_split)
alloc_parms->flags |= A_CAN_SPLIT;
}
static int _setup_allocated_segment(struct logical_volume *lv, uint64_t status,
static int _setup_alloced_segment(struct logical_volume *lv, uint64_t status,
uint32_t area_count,
uint32_t stripe_size,
const struct segment_type *segtype,
struct allocated_area *aa,
struct alloced_area *aa,
uint32_t region_size)
{
uint32_t s, extents, area_multiple;
@@ -2062,18 +2062,18 @@ static int _setup_allocated_segment(struct logical_volume *lv, uint64_t status,
return 1;
}
static int _setup_allocated_segments(struct logical_volume *lv,
struct dm_list *allocated_areas,
static int _setup_alloced_segments(struct logical_volume *lv,
struct dm_list *alloced_areas,
uint32_t area_count,
uint64_t status,
uint32_t stripe_size,
const struct segment_type *segtype,
uint32_t region_size)
{
struct allocated_area *aa;
struct alloced_area *aa;
dm_list_iterate_items(aa, &allocated_areas[0]) {
if (!_setup_allocated_segment(lv, status, area_count,
dm_list_iterate_items(aa, &alloced_areas[0]) {
if (!_setup_alloced_segment(lv, status, area_count,
stripe_size, segtype, aa,
region_size))
return_0;
@@ -2094,7 +2094,7 @@ static int _alloc_parallel_area(struct alloc_handle *ah, uint32_t max_to_allocat
uint32_t s, smeta;
uint32_t ix_log_skip = 0; /* How many areas to skip in middle of array to reach log areas */
uint32_t total_area_count;
struct allocated_area *aa;
struct alloced_area *aa;
struct pv_area *pva;
total_area_count = ah->area_count + ah->parity_count + alloc_state->log_area_count_still_needed;
@@ -2113,7 +2113,7 @@ static int _alloc_parallel_area(struct alloc_handle *ah, uint32_t max_to_allocat
len = (ah->alloc_and_split_meta && !ah->split_metadata_is_allocated) ? total_area_count * 2 : total_area_count;
len *= sizeof(*aa);
if (!(aa = dm_pool_alloc(ah->mem, len))) {
log_error("allocated_area allocation failed");
log_error("alloced_area allocation failed");
return 0;
}
@@ -2156,7 +2156,7 @@ static int _alloc_parallel_area(struct alloc_handle *ah, uint32_t max_to_allocat
aa[smeta].len);
consume_pv_area(pva, aa[smeta].len);
dm_list_add(&ah->allocated_areas[smeta], &aa[smeta].list);
dm_list_add(&ah->alloced_areas[smeta], &aa[smeta].list);
}
aa[s].len = (ah->alloc_and_split_meta && !ah->split_metadata_is_allocated) ? len - ah->log_len : len;
/* Skip empty allocations */
@@ -2172,7 +2172,7 @@ static int _alloc_parallel_area(struct alloc_handle *ah, uint32_t max_to_allocat
consume_pv_area(pva, aa[s].len);
dm_list_add(&ah->allocated_areas[s], &aa[s].list);
dm_list_add(&ah->alloced_areas[s], &aa[s].list);
}
/* Only need to alloc metadata from the first batch */
@@ -2728,11 +2728,11 @@ static int _check_contiguous(struct alloc_handle *ah,
/*
* Is pva on same PV as any areas already used in this allocation attempt?
*/
static int _check_cling_to_allocated(struct alloc_handle *ah, const struct dm_config_node *cling_tag_list_cn,
static int _check_cling_to_alloced(struct alloc_handle *ah, const struct dm_config_node *cling_tag_list_cn,
struct pv_area *pva, struct alloc_state *alloc_state)
{
unsigned s;
struct allocated_area *aa;
struct alloced_area *aa;
int positional = alloc_state->alloc_parms->flags & A_POSITIONAL_FILL;
/*
@@ -2745,7 +2745,7 @@ static int _check_cling_to_allocated(struct alloc_handle *ah, const struct dm_co
for (s = 0; s < ah->area_count; s++) {
if (positional && alloc_state->areas[s].pva)
continue; /* Area already assigned */
dm_list_iterate_items(aa, &ah->allocated_areas[s]) {
dm_list_iterate_items(aa, &ah->alloced_areas[s]) {
if ((!cling_tag_list_cn && (pva->map->pv == aa[0].pv)) ||
(cling_tag_list_cn && _pvs_have_matching_tag(cling_tag_list_cn, pva->map->pv, aa[0].pv, 0))) {
if (positional &&
@@ -2802,7 +2802,7 @@ static area_use_t _check_pva(struct alloc_handle *ah, struct pv_area *pva, uint3
return NEXT_AREA;
/* If maximise_cling is set, perform several checks, otherwise perform exactly one. */
if (!iteration_count && !log_iteration_count && alloc_parms->flags & (A_CONTIGUOUS_TO_LVSEG | A_CLING_TO_LVSEG | A_CLING_TO_ALLOCATED)) {
if (!iteration_count && !log_iteration_count && alloc_parms->flags & (A_CONTIGUOUS_TO_LVSEG | A_CLING_TO_LVSEG | A_CLING_TO_ALLOCED)) {
/* Contiguous? */
if (((alloc_parms->flags & A_CONTIGUOUS_TO_LVSEG) ||
(ah->maximise_cling && (alloc_parms->flags & A_AREA_COUNT_MATCHES))) &&
@@ -2820,9 +2820,9 @@ static area_use_t _check_pva(struct alloc_handle *ah, struct pv_area *pva, uint3
/* If this PV is suitable, use this first area */
goto found;
/* Cling_to_allocated? */
if ((alloc_parms->flags & A_CLING_TO_ALLOCATED) &&
_check_cling_to_allocated(ah, NULL, pva, alloc_state))
/* Cling_to_alloced? */
if ((alloc_parms->flags & A_CLING_TO_ALLOCED) &&
_check_cling_to_alloced(ah, NULL, pva, alloc_state))
goto found;
/* Cling_by_tags? */
@@ -2832,7 +2832,7 @@ static area_use_t _check_pva(struct alloc_handle *ah, struct pv_area *pva, uint3
if ((alloc_parms->flags & A_AREA_COUNT_MATCHES)) {
if (_check_cling(ah, ah->cling_tag_list_cn, alloc_parms->prev_lvseg, pva, alloc_state))
goto found;
} else if (_check_cling_to_allocated(ah, ah->cling_tag_list_cn, pva, alloc_state))
} else if (_check_cling_to_alloced(ah, ah->cling_tag_list_cn, pva, alloc_state))
goto found;
/* All areas on this PV give same result so pointless checking more */
@@ -2993,9 +2993,9 @@ static int _find_some_parallel_space(struct alloc_handle *ah,
unsigned already_found_one;
unsigned ix_log_offset; /* Offset to start of areas to use for log */
unsigned too_small_for_log_count; /* How many too small for log? */
unsigned iteration_count = 0; /* cling_to_allocated may need 2 iterations */
unsigned iteration_count = 0; /* cling_to_alloced may need 2 iterations */
unsigned log_iteration_count = 0; /* extra iteration for logs on data devices */
struct allocated_area *aa;
struct alloced_area *aa;
uint32_t s;
uint32_t devices_needed = ah->area_count + ah->parity_count;
uint32_t required;
@@ -3005,17 +3005,17 @@ static int _find_some_parallel_space(struct alloc_handle *ah,
/* num_positional_areas holds the number of parallel allocations that must be contiguous/cling */
/* These appear first in the array, so it is also the offset to the non-preferred allocations */
/* At most one of A_CONTIGUOUS_TO_LVSEG, A_CLING_TO_LVSEG or A_CLING_TO_ALLOCATED may be set */
/* At most one of A_CONTIGUOUS_TO_LVSEG, A_CLING_TO_LVSEG or A_CLING_TO_ALLOCED may be set */
if (!(alloc_parms->flags & A_POSITIONAL_FILL))
alloc_state->num_positional_areas = 0;
else if (alloc_parms->flags & (A_CONTIGUOUS_TO_LVSEG | A_CLING_TO_LVSEG))
alloc_state->num_positional_areas = _stripes_per_mimage(alloc_parms->prev_lvseg) * alloc_parms->prev_lvseg->area_count;
else if (alloc_parms->flags & A_CLING_TO_ALLOCATED)
else if (alloc_parms->flags & A_CLING_TO_ALLOCED)
alloc_state->num_positional_areas = ah->area_count;
if (alloc_parms->alloc == ALLOC_NORMAL || (alloc_parms->flags & A_CLING_TO_ALLOCATED))
if (alloc_parms->alloc == ALLOC_NORMAL || (alloc_parms->flags & A_CLING_TO_ALLOCED))
log_debug_alloc("Cling_to_allocated is %sset",
alloc_parms->flags & A_CLING_TO_ALLOCATED ? "" : "not ");
alloc_parms->flags & A_CLING_TO_ALLOCED ? "" : "not ");
if (alloc_parms->flags & A_POSITIONAL_FILL)
log_debug_alloc("%u preferred area(s) to be filled positionally.", alloc_state->num_positional_areas);
@@ -3053,7 +3053,7 @@ static int _find_some_parallel_space(struct alloc_handle *ah,
if (alloc_parms->alloc != ALLOC_ANYWHERE) {
/* Don't allocate onto the log PVs */
if (ah->log_area_count)
dm_list_iterate_items(aa, &ah->allocated_areas[ah->area_count])
dm_list_iterate_items(aa, &ah->alloced_areas[ah->area_count])
for (s = 0; s < ah->log_area_count; s++)
if (!aa[s].pv)
goto next_pv;
@@ -3136,17 +3136,17 @@ static int _find_some_parallel_space(struct alloc_handle *ah,
break;
}
} while ((alloc_parms->alloc == ALLOC_ANYWHERE && last_ix != ix && ix < devices_needed + alloc_state->log_area_count_still_needed) ||
/* With cling_to_allocated and normal, if there were gaps in the preferred areas, have a second iteration */
/* With cling_to_alloced and normal, if there were gaps in the preferred areas, have a second iteration */
(alloc_parms->alloc == ALLOC_NORMAL && preferred_count &&
(preferred_count < alloc_state->num_positional_areas || alloc_state->log_area_count_still_needed) &&
(alloc_parms->flags & A_CLING_TO_ALLOCATED) && !iteration_count++) ||
(alloc_parms->flags & A_CLING_TO_ALLOCED) && !iteration_count++) ||
/* Extra iteration needed to fill log areas on PVs already used? */
(alloc_parms->alloc == ALLOC_NORMAL && preferred_count == alloc_state->num_positional_areas && !ah->mirror_logs_separate &&
(ix + preferred_count >= devices_needed) &&
(ix + preferred_count < devices_needed + alloc_state->log_area_count_still_needed) && !log_iteration_count++));
/* Non-zero ix means at least one USE_AREA was returned */
if (preferred_count < alloc_state->num_positional_areas && !(alloc_parms->flags & A_CLING_TO_ALLOCATED) && !ix)
if (preferred_count < alloc_state->num_positional_areas && !(alloc_parms->flags & A_CLING_TO_ALLOCED) && !ix)
return 1;
if (ix + preferred_count < devices_needed + alloc_state->log_area_count_still_needed)
@@ -3313,7 +3313,7 @@ static int _find_max_parallel_space_for_one_policy(struct alloc_handle *ah, stru
* set we allow two passes, first with A_POSITIONAL_FILL then without.
*
* If we didn't allocate anything this time with ALLOC_NORMAL and had
* A_CLING_TO_ALLOCATED set, try again without it.
* A_CLING_TO_ALLOCED set, try again without it.
*
* For ALLOC_NORMAL, if we did allocate something without the
* flag set, set it and continue so that further allocations
@@ -3323,13 +3323,13 @@ static int _find_max_parallel_space_for_one_policy(struct alloc_handle *ah, stru
if (ah->maximise_cling && ((alloc_parms->alloc == ALLOC_CLING) || (alloc_parms->alloc == ALLOC_CLING_BY_TAGS)) &&
(alloc_parms->flags & A_CLING_TO_LVSEG) && (alloc_parms->flags & A_POSITIONAL_FILL))
alloc_parms->flags &= ~A_POSITIONAL_FILL;
else if ((alloc_parms->alloc == ALLOC_NORMAL) && (alloc_parms->flags & A_CLING_TO_ALLOCATED))
alloc_parms->flags &= ~A_CLING_TO_ALLOCATED;
else if ((alloc_parms->alloc == ALLOC_NORMAL) && (alloc_parms->flags & A_CLING_TO_ALLOCED))
alloc_parms->flags &= ~A_CLING_TO_ALLOCED;
else
break; /* Give up */
} else if (ah->maximise_cling && alloc_parms->alloc == ALLOC_NORMAL &&
!(alloc_parms->flags & A_CLING_TO_ALLOCATED))
alloc_parms->flags |= A_CLING_TO_ALLOCATED;
!(alloc_parms->flags & A_CLING_TO_ALLOCED))
alloc_parms->flags |= A_CLING_TO_ALLOCED;
} while ((alloc_parms->alloc != ALLOC_CONTIGUOUS) && alloc_state->allocated != alloc_parms->extents_still_needed && (alloc_parms->flags & A_CAN_SPLIT) && (!ah->approx_alloc || pv_maps_size(pvms)));
return 1;
@@ -3622,7 +3622,7 @@ static struct alloc_handle *_alloc_init(struct cmd_context *cmd,
/* mirrors specify their exact log count */
alloc_count += metadata_area_count;
size += sizeof(ah->allocated_areas[0]) * alloc_count;
size += sizeof(ah->alloced_areas[0]) * alloc_count;
if (!(mem = dm_pool_create("allocation", 1024))) {
log_error("allocation pool creation failed");
@@ -3727,7 +3727,7 @@ static struct alloc_handle *_alloc_init(struct cmd_context *cmd,
ah->new_extents = total_extents;
for (s = 0; s < alloc_count; s++)
dm_list_init(&ah->allocated_areas[s]);
dm_list_init(&ah->alloced_areas[s]);
ah->parallel_areas = parallel_areas;
@@ -3825,7 +3825,7 @@ int lv_add_segment(struct alloc_handle *ah,
return 0;
}
if (!_setup_allocated_segments(lv, &ah->allocated_areas[first_area],
if (!_setup_alloced_segments(lv, &ah->alloced_areas[first_area],
num_areas, status,
stripe_size, segtype,
region_size))
@@ -3898,7 +3898,7 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
uint32_t region_size)
{
char *image_name;
struct allocated_area *aa;
struct alloced_area *aa;
struct lv_segment *seg, *new_seg;
uint32_t current_le = le;
uint32_t s;
@@ -3923,7 +3923,7 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
* single segment of the original LV, that LV segment must be
* split up to match.
*/
dm_list_iterate_items(aa, &ah->allocated_areas[0]) {
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
@@ -3965,7 +3965,7 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
if (!(segtype = get_segtype_from_string(lv->vg->cmd, SEG_TYPE_NAME_STRIPED)))
return_0;
dm_list_iterate_items(aa, &ah->allocated_areas[0]) {
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(orig_lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
@@ -4007,12 +4007,12 @@ int lv_add_mirror_areas(struct alloc_handle *ah,
struct logical_volume *lv, uint32_t le,
uint32_t region_size)
{
struct allocated_area *aa;
struct alloced_area *aa;
struct lv_segment *seg;
uint32_t current_le = le;
uint32_t s, old_area_count, new_area_count;
dm_list_iterate_items(aa, &ah->allocated_areas[0]) {
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
@@ -5889,8 +5889,8 @@ static int _lv_resize_volume(struct logical_volume *lv,
alloc_policy_t alloc = lp->alloc ? : lv->alloc;
old_extents = lv->le_count;
log_verbose("%s logical volume %s to %s%s",
(lp->resize == LV_REDUCE) ? "Reducing" : "Extending",
log_verbose("%sing logical volume %s to %s%s",
(lp->resize == LV_REDUCE) ? "Reduc" : "Extend",
display_lvname(lv), lp->approx_alloc ? "up to " : "",
display_size(cmd, (uint64_t) lp->extents * vg->extent_size));
@@ -6144,8 +6144,6 @@ static int _fs_reduce_allow(struct cmd_context *cmd, struct logical_volume *lv,
if (fsi->mounted)
fsi->needs_unmount = 1;
fsi->needs_reduce = 1;
} else if (!strcmp(fsi->fstype, "swap")) {
fsi->needs_reduce = 1;
} else {
/*
@@ -6316,8 +6314,6 @@ static int _fs_extend_allow(struct cmd_context *cmd, struct logical_volume *lv,
if (lp->nofsck)
fsi->needs_fsck = 0;
} else if (!strcmp(fsi->fstype, "swap")) {
fsi->needs_extend = 1;
} else if (!strcmp(fsi->fstype, "xfs")) {
fs_extend_cmd = " xfs_growfs";
@@ -6891,10 +6887,6 @@ int lv_resize(struct cmd_context *cmd, struct logical_volume *lv,
/*
* If the LV is locked due to being active, this lock call is a no-op.
* Otherwise, this acquires a transient lock on the lv (not PERSISTENT)
* FIXME: should probably use a persistent lock in case the command
* crashes while the lv is active, in which case we'd want the active
* lv to remain locked. This means then adding lockd_lv("un") at the
* end.
*/
if (!lockd_lv_resize(cmd, lv_top, "ex", 0, lp))
return_0;
@@ -7557,7 +7549,6 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
int visible, historical;
struct logical_volume *pool_lv = NULL;
struct logical_volume *lock_lv = lv;
struct logical_volume *lockd_pool = NULL;
struct lv_segment *cache_seg = NULL;
struct seg_list *sl;
struct lv_segment *seg = first_seg(lv);
@@ -7622,21 +7613,8 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
return 0;
}
if (vg_is_shared(vg)) {
if (lv_is_thin_type(lv)) {
/* FIXME: is this also needed for other types? */
/* Thin is special because it needs to be active and locked to remove. */
if (lv_is_thin_volume(lv))
lockd_pool = first_seg(lv)->pool_lv;
else if (lv_is_thin_pool(lv))
lockd_pool = lv;
if (!lockd_lv(cmd, lock_lv, "ex", LDLV_PERSISTENT))
return_0;
} else {
if (!lockd_lv(cmd, lock_lv, "ex", LDLV_PERSISTENT))
return_0;
}
}
if (!lockd_lv(cmd, lock_lv, "ex", LDLV_PERSISTENT))
return_0;
if (!lv_is_cache_vol(lv)) {
if (!_lv_remove_check_in_use(lv, force))
@@ -7785,13 +7763,8 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
display_lvname(pool_lv));
}
if (lockd_pool && !thin_pool_is_active(lockd_pool)) {
if (!lockd_lv_name(cmd, vg, lockd_pool->name, &lockd_pool->lvid.id[1], lockd_pool->lock_args, "un", LDLV_PERSISTENT))
log_warn("WARNING: Failed to unlock %s.", display_lvname(lockd_pool));
} else {
if (!lockd_lv(cmd, lv, "un", LDLV_PERSISTENT))
log_warn("WARNING: Failed to unlock %s.", display_lvname(lv));
}
if (!lockd_lv(cmd, lv, "un", LDLV_PERSISTENT))
log_warn("WARNING: Failed to unlock %s.", display_lvname(lv));
lockd_free_lv(cmd, vg, lv->name, &lv->lvid.id[1], lv->lock_args);
if (!suppress_remove_message && (visible || historical)) {
@@ -9270,11 +9243,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
return_NULL;
/* New pool is now inactive */
} else {
if (!lockd_lv(cmd, pool_lv, "ex", LDLV_PERSISTENT)) {
log_error("Failed to lock thin pool.");
return NULL;
}
if (!activate_lv(cmd, pool_lv)) {
log_error("Aborting. Failed to locally activate thin pool %s.",
display_lvname(pool_lv));
@@ -9569,10 +9537,8 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
if (seg_is_raid(lp) && lp->raidintegrity) {
log_debug("Adding integrity to new LV");
if (!lv_add_integrity_to_raid(lv, &lp->integrity_settings, lp->pvh, NULL)) {
stack;
if (!lv_add_integrity_to_raid(lv, &lp->integrity_settings, lp->pvh, NULL))
goto revert_new_lv;
}
}
/* Do not scan this LV until properly zeroed/wiped. */
@@ -9645,7 +9611,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
first_seg(pool_lv)->transaction_id = seg->transaction_id;
first_seg(lv)->device_id = 0; /* no delete of never existing thin device */
}
stack;
goto revert_new_lv;
}
/* At this point remove pool messages, snapshot is active */
@@ -9659,10 +9624,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
/* Avoid multiple thin-pool activations in this case */
if (thin_pool_was_active < 0)
thin_pool_was_active = 0;
if (!lockd_lv(cmd, pool_lv, "ex", LDLV_PERSISTENT)) {
log_error("Failed to lock thin pool.");
return NULL;
}
if (!activate_lv(cmd, pool_lv)) {
log_error("Failed to activate thin pool %s.",
display_lvname(pool_lv));
@@ -9687,15 +9648,11 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
}
/* Restore inactive state if needed */
if (!thin_pool_was_active) {
if (!deactivate_lv(cmd, pool_lv)) {
log_error("Failed to deactivate thin pool %s.", display_lvname(pool_lv));
return NULL;
}
if (!lockd_lv(cmd, pool_lv, "un", LDLV_PERSISTENT)) {
log_error("Failed to unlock thin pool.");
return NULL;
}
if (!thin_pool_was_active &&
!deactivate_lv(cmd, pool_lv)) {
log_error("Failed to deactivate thin pool %s.",
display_lvname(pool_lv));
return NULL;
}
} else if (lp->snapshot) {
lv->status |= LV_TEMPORARY;
@@ -9801,7 +9758,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
return_NULL;
}
stack;
goto deactivate_and_revert_new_lv;
}
} else if (lp->snapshot) {
@@ -9823,10 +9779,8 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
if (lp->virtual_extents &&
!(origin_lv = _create_virtual_origin(cmd, vg, lv->name,
(lp->permission & ~LVM_WRITE),
lp->virtual_extents))) {
stack;
lp->virtual_extents)))
goto revert_new_lv;
}
/* Reset permission after zeroing */
if (!(lp->permission & LVM_WRITE))
@@ -9868,12 +9822,9 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
}
}
out:
if (!lv)
log_debug("No LV created.");
return lv;
deactivate_and_revert_new_lv:
log_debug("Deactivate and revert new lv");
if (!sync_local_dev_names(lv->vg->cmd))
log_error("Failed to sync local devices before reverting %s.",
display_lvname(lv));
@@ -9884,7 +9835,6 @@ deactivate_and_revert_new_lv:
}
revert_new_lv:
log_debug("Revert new lv");
if (!lockd_lv(cmd, lv, "un", LDLV_PERSISTENT))
log_warn("WARNING: Failed to unlock %s.", display_lvname(lv));
lockd_free_lv(vg->cmd, vg, lv->name, &lv->lvid.id[1], lv->lock_args);


@@ -1439,21 +1439,6 @@ char *top_level_lv_name(struct volume_group *vg, const char *lv_name);
struct generic_logical_volume *get_or_create_glv(struct dm_pool *mem, struct logical_volume *lv, int *glv_created);
struct glv_list *get_or_create_glvl(struct dm_pool *mem, struct logical_volume *lv, int *glv_created);
struct logical_volume *get_data_from_pool(struct logical_volume *pool_lv);
struct logical_volume *get_meta_from_pool(struct logical_volume *pool_lv);
struct logical_volume *get_pool_from_thin(struct logical_volume *thin_lv);
struct logical_volume *get_pool_from_cache(struct logical_volume *cache_lv);
struct logical_volume *get_pool_from_vdo(struct logical_volume *vdo_lv);
struct logical_volume *get_origin_from_cache(struct logical_volume *cache_lv);
struct logical_volume *get_origin_from_writecache(struct logical_volume *writecache_lv);
struct logical_volume *get_origin_from_integrity(struct logical_volume *integrity_lv);
struct logical_volume *get_origin_from_thin(struct logical_volume *thin_lv);
struct logical_volume *get_merge_lv_from_thin(struct logical_volume *thin_lv);
struct logical_volume *get_external_lv_from_thin(struct logical_volume *thin_lv);
struct logical_volume *get_origin_from_snap(struct logical_volume *snap_lv);
struct logical_volume *get_cow_from_snap(struct logical_volume *snap_lv);
struct logical_volume *get_fast_from_writecache(struct logical_volume *writecache_lv);
/*
* Begin skeleton for external LVM library
*/
@@ -1529,8 +1514,4 @@ int lv_raid_integrity_total_mismatches(struct cmd_context *cmd, const struct log
int setting_str_list_add(const char *field, uint64_t val, char *val_str, struct dm_list *result, struct dm_pool *mem);
struct volume_group *vg_copy_struct(struct volume_group *vgo);
void insert_segment(struct logical_volume *lv, struct lv_segment *seg);
#endif


@@ -4346,8 +4346,6 @@ const struct logical_volume *lv_committed(const struct logical_volume *lv)
found_lv = lv; /* Use uncommitted LV as best effort */
}
log_debug("lv_committed %s from vg_committed %p", display_lvname(found_lv), vg);
return found_lv;
}


@@ -255,8 +255,7 @@ struct segtype_handler {
uint32_t *area_count);
int (*text_import) (struct lv_segment * seg,
const struct dm_config_node * sn,
struct dm_hash_table * pv_hash,
struct dm_hash_table * lv_hash);
struct dm_hash_table * pv_hash);
int (*merge_segments) (struct lv_segment * seg1,
struct lv_segment * seg2);
int (*add_target_line) (struct dev_manager *dm, struct dm_pool *mem,


@@ -572,9 +572,6 @@ int set_vdo_write_policy(enum dm_vdo_write_policy *vwp, const char *policy)
return 0;
}
if (*vwp != DM_VDO_WRITE_POLICY_AUTO)
log_info("Deprecated VDO setting write_policy specified.");
return 1;
}
@@ -629,9 +626,6 @@ int fill_vdo_target_params(struct cmd_context *cmd,
*vdo_pool_header_size = 2 * find_config_tree_int64(cmd, allocation_vdo_pool_header_size_CFG, profile);
if (vtp->use_metadata_hints)
log_info("Deprecated VDO setting use_metadata_hints specified.");
return 1;
}


@@ -15,14 +15,10 @@
#include "lib/misc/lib.h"
#include "lib/metadata/metadata.h"
#include "lib/metadata/lv_alloc.h"
#include "lib/metadata/segtype.h"
#include "lib/metadata/pv_alloc.h"
#include "lib/display/display.h"
#include "lib/activate/activate.h"
#include "lib/commands/toolcontext.h"
#include "lib/format_text/archiver.h"
#include "lib/datastruct/str_list.h"
struct volume_group *alloc_vg(const char *pool_name, struct cmd_context *cmd,
const char *vg_name)
@@ -50,6 +46,12 @@ struct volume_group *alloc_vg(const char *pool_name, struct cmd_context *cmd,
vg->vgmem = vgmem;
vg->alloc = ALLOC_NORMAL;
if (!(vg->hostnames = dm_hash_create(14))) {
log_error("Failed to allocate VG hostname hashtable.");
dm_pool_destroy(vgmem);
return NULL;
}
dm_list_init(&vg->pvs);
dm_list_init(&vg->pv_write_list);
dm_list_init(&vg->lvs);
@@ -79,6 +81,7 @@ static void _free_vg(struct volume_group *vg)
if (vg->committed_cft)
config_destroy(vg->committed_cft);
dm_hash_destroy(vg->hostnames);
dm_pool_destroy(vg->vgmem);
}
@@ -761,858 +764,3 @@ void vg_backup_if_needed(struct volume_group *vg)
vg->needs_backup = 0;
backup(vg->vg_committed);
}
void insert_segment(struct logical_volume *lv, struct lv_segment *seg)
{
struct lv_segment *comp;
dm_list_iterate_items(comp, &lv->segments) {
if (comp->le > seg->le) {
dm_list_add(&comp->list, &seg->list);
return;
}
}
lv->le_count += seg->len;
dm_list_add(&lv->segments, &seg->list);
}
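The insert_segment helper above keeps lv->segments ordered by starting extent, relying on dm_list_add inserting before the node it is handed. A minimal self-contained sketch of the same sorted-insert pattern, using a toy intrusive list in place of libdevmapper's dm_list (names and layout here are illustrative, not lvm2's):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal intrusive circular list, standing in for dm_list. */
struct list { struct list *next, *prev; };

static void list_init(struct list *head) { head->next = head->prev = head; }

/* Insert 'n' before 'pos' (the dm_list_add contract: appending to the
 * list headed by 'pos'). */
static void list_add(struct list *pos, struct list *n)
{
	n->next = pos;
	n->prev = pos->prev;
	pos->prev->next = n;
	pos->prev = n;
}

struct segment { unsigned le, len; struct list item; };

/* Same pattern as insert_segment: walk until the first segment that
 * starts past the new one, insert before it, otherwise append. */
static void insert_sorted(struct list *segs, struct segment *seg)
{
	for (struct list *p = segs->next; p != segs; p = p->next) {
		struct segment *comp =
			(struct segment *)((char *)p - offsetof(struct segment, item));
		if (comp->le > seg->le) {
			list_add(p, &seg->item);
			return;
		}
	}
	list_add(segs, &seg->item);
}

/* Starting extent of the n-th segment, for checking order. */
static unsigned nth_le(const struct list *segs, int n)
{
	const struct list *p = segs->next;
	while (n--)
		p = p->next;
	return ((const struct segment *)
		((const char *)p - offsetof(struct segment, item)))->le;
}
```

Inserting extents out of order (100, 0, 50) still yields a list sorted by `le`, which is what lets the copy path below re-run insert_segment per copied segment without re-sorting.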
struct logical_volume *get_data_from_pool(struct logical_volume *pool_lv)
{
/* works for cache pool, thin pool, vdo pool */
/* first_seg() = dm_list_first_entry(&lv->segments) */
/* seg_lv(seg, n) = seg->areas[n].u.lv.lv */
return seg_lv(first_seg(pool_lv), 0);
}
struct logical_volume *get_meta_from_pool(struct logical_volume *pool_lv)
{
/* works for cache pool, thin pool, vdo pool */
/* first_seg() = dm_list_first_entry(&lv->segments) */
/* seg_lv(seg, n) = seg->areas[n].u.lv.lv */
return first_seg(pool_lv)->metadata_lv;
}
struct logical_volume *get_pool_from_thin(struct logical_volume *thin_lv)
{
return first_seg(thin_lv)->pool_lv;
}
struct logical_volume *get_pool_from_cache(struct logical_volume *cache_lv)
{
return first_seg(cache_lv)->pool_lv;
}
struct logical_volume *get_pool_from_vdo(struct logical_volume *vdo_lv)
{
return seg_lv(first_seg(vdo_lv), 0);
}
struct logical_volume *get_origin_from_cache(struct logical_volume *cache_lv)
{
return seg_lv(first_seg(cache_lv), 0);
}
struct logical_volume *get_origin_from_writecache(struct logical_volume *writecache_lv)
{
return seg_lv(first_seg(writecache_lv), 0);
}
struct logical_volume *get_origin_from_integrity(struct logical_volume *integrity_lv)
{
return seg_lv(first_seg(integrity_lv), 0);
}
struct logical_volume *get_origin_from_thin(struct logical_volume *thin_lv)
{
return first_seg(thin_lv)->origin;
}
struct logical_volume *get_merge_lv_from_thin(struct logical_volume *thin_lv)
{
return first_seg(thin_lv)->merge_lv;
}
struct logical_volume *get_external_lv_from_thin(struct logical_volume *thin_lv)
{
return first_seg(thin_lv)->external_lv;
}
struct logical_volume *get_origin_from_snap(struct logical_volume *snap_lv)
{
return first_seg(snap_lv)->origin;
}
struct logical_volume *get_cow_from_snap(struct logical_volume *snap_lv)
{
return first_seg(snap_lv)->cow;
}
struct logical_volume *get_fast_from_writecache(struct logical_volume *writecache_lv)
{
return first_seg(writecache_lv)->writecache;
}
/*
* When reading from text:
* - pv comes from looking up the "pv0" key in pv_hash
* - pe comes from text field
* - pv and pe are passed to set_lv_segment_area_pv() to
* create the pv_segment structs, and connect them to
* the lv_segment.
*
* When copying the struct:
* - pv comes from looking up the pv id in vg->pvs
* - pe comes from the original pvseg struct
* - pv and pe are passed to set_lv_segment_area_pv() to
* create the pv_segment structs, and connect them to
* the lv_segment (same as when reading from text.)
*
* set_lv_segment_area_pv(struct lv_segment *seg, uint32_t s,
* struct physical_volume *pv, uint32_t pe);
* does:
*
* seg_pvseg(seg, s) =
* assign_peg_to_lvseg(pv, pe, seg->area_len, seg, s);
*
* does:
*
* seg->areas[s].u.pv.pvseg =
* assign_peg_to_lvseg(pv, pe, area_len, seg, s);
*
* struct pv_segment *assign_peg_to_lvseg(struct physical_volume *pv,
* uint32_t pe, uint32_t area_len,
* struct lv_segment *seg, uint32_t s);
*
* This does multiple things:
* 1. creates pv_segment and connects it to lv_segment
* 2. creates pv->segments list of all pv_segments on the pv
* 3. updates pv->pe_alloc_count, vg->free_count
*/
static int _areas_copy_struct(struct volume_group *vg,
struct logical_volume *lv,
struct lv_segment *seg,
struct volume_group *vgo,
struct logical_volume *lvo,
struct lv_segment *sego,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
{
uint32_t s;
/* text_import_areas */
for (s = 0; s < sego->area_count; s++) {
seg->areas[s].type = sego->areas[s].type;
if (sego->areas[s].type == AREA_PV) {
struct physical_volume *area_pvo;
struct physical_volume *area_pv;
if (!(area_pvo = sego->areas[s].u.pv.pvseg->pv))
goto_bad;
if (!(area_pv = dm_hash_lookup_binary(pv_hash, &area_pvo->id, ID_LEN)))
goto_bad;
if (!set_lv_segment_area_pv(seg, s, area_pv, sego->areas[s].u.pv.pvseg->pe))
goto_bad;
} else if (sego->areas[s].type == AREA_LV) {
struct logical_volume *area_lvo;
struct logical_volume *area_lv;
if (!(area_lvo = sego->areas[s].u.lv.lv))
goto_bad;
if (!(area_lv = dm_hash_lookup(lv_hash, area_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, s, area_lv, sego->areas[s].u.lv.le, 0))
goto_bad;
}
}
return 1;
bad:
return 0;
}
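_areas_copy_struct never dereferences the original VG's pointers in the copy; every cross-reference is re-resolved through the side tables (pv_hash keyed by binary PV id, lv_hash keyed by LV name). A self-contained sketch of that two-phase deep-copy pattern, with a toy name map standing in for dm_hash (the struct and helper names are illustrative, not lvm2's):

```c
#include <assert.h>
#include <string.h>

/* Toy stand-in: an "LV" that may reference another LV (e.g. a pool). */
struct lv {
	char name[16];
	struct lv *pool;     /* linkage to fix up in the copy */
};

#define MAX_LVS 8

/* Minimal name->pointer map standing in for lv_hash. */
struct map { const char *key[MAX_LVS]; struct lv *val[MAX_LVS]; int n; };

static void map_insert(struct map *m, const char *k, struct lv *v)
{ m->key[m->n] = k; m->val[m->n] = v; m->n++; }

static struct lv *map_lookup(const struct map *m, const char *k)
{
	for (int i = 0; i < m->n; i++)
		if (!strcmp(m->key[i], k))
			return m->val[i];
	return NULL;
}

/* Phase 1: copy plain values and register each copy under its name.
 * Phase 2: resolve cross-references through the map only, never
 * through the original pointers (the same scheme as _lv_copy_struct
 * followed by _lvsegs_copy_struct above). */
static int copy_lvs(const struct lv *src, int n, struct lv *dst, struct map *m)
{
	for (int i = 0; i < n; i++) {          /* phase 1: values */
		memcpy(dst[i].name, src[i].name, sizeof(dst[i].name));
		dst[i].pool = NULL;
		map_insert(m, dst[i].name, &dst[i]);
	}
	for (int i = 0; i < n; i++) {          /* phase 2: linkages */
		if (!src[i].pool)
			continue;
		if (!(dst[i].pool = map_lookup(m, src[i].pool->name)))
			return 0;              /* dangling reference */
	}
	return 1;
}
```

After the copy, the new "thin" LV points at the new pool struct, not the original one, which is the property the activation code needs from vg_copy_struct.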
static int _thin_messages_copy_struct(struct volume_group *vgo, struct volume_group *vg,
struct logical_volume *lvo, struct logical_volume *lv,
struct lv_segment *sego, struct lv_segment *seg,
struct dm_hash_table *lv_hash)
{
struct lv_thin_message *mso;
struct lv_thin_message *ms;
struct logical_volume *ms_lvo;
struct logical_volume *ms_lv;
if (dm_list_empty(&sego->thin_messages))
return 1;
dm_list_iterate_items(mso, &sego->thin_messages) {
if (!(ms = dm_pool_alloc(vg->vgmem, sizeof(*ms))))
goto_bad;
ms->type = mso->type;
switch (ms->type) {
case DM_THIN_MESSAGE_CREATE_SNAP:
case DM_THIN_MESSAGE_CREATE_THIN:
if (!(ms_lvo = mso->u.lv))
goto_bad;
if (!(ms_lv = dm_hash_lookup(lv_hash, ms_lvo->name)))
goto_bad;
ms->u.lv = ms_lv;
break;
case DM_THIN_MESSAGE_DELETE:
ms->u.delete_id = mso->u.delete_id;
break;
default:
break;
}
dm_list_add(&seg->thin_messages, &ms->list);
}
return 1;
bad:
return 0;
}
static struct lv_segment *_seg_copy_struct(struct volume_group *vg,
struct logical_volume *lv,
struct volume_group *vgo,
struct logical_volume *lvo,
struct lv_segment *sego,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
{
struct dm_pool *mem = vg->vgmem;
struct lv_segment *seg;
uint32_t s;
if (!(seg = dm_pool_zalloc(mem, sizeof(*seg))))
return_NULL;
if (sego->area_count && sego->areas &&
!(seg->areas = dm_pool_zalloc(mem, sego->area_count * sizeof(*seg->areas))))
return_NULL;
/*
* This is a more accurate copy of the original segment:
* if (sego->area_count && sego->meta_areas &&
* !(seg->meta_areas = dm_pool_zalloc(mem, sego->area_count * sizeof(*seg->meta_areas))))
* return_NULL;
*
* But it causes a segfault in for_each_sub_lv, which seems to want meta_areas allocated
* in the copy even when it's null in the original. So, this copies alloc_lv_segment
* which always allocates meta_areas.
*/
if (segtype_is_raid_with_meta(sego->segtype)) {
if (!(seg->meta_areas = dm_pool_zalloc(mem, sego->area_count * sizeof(*seg->meta_areas))))
return_NULL;
}
/* see _read_segment, alloc_lv_segment */
dm_list_init(&seg->tags);
dm_list_init(&seg->origin_list);
dm_list_init(&seg->thin_messages);
seg->lv = lv;
seg->segtype = sego->segtype;
seg->le = sego->le;
seg->len = sego->len;
seg->status = sego->status;
seg->area_count = sego->area_count;
seg->area_len = sego->area_len;
if (!dm_list_empty(&sego->tags) && !str_list_dup(mem, &seg->tags, &sego->tags))
goto_bad;
/*
* _read_segment, ->text_import(), i.e. _foo_text_import()
*/
if (seg_is_striped_target(sego)) {
/* see _striped_text_import, N.B. not "seg_is_striped" */
seg->stripe_size = sego->stripe_size;
if (!_areas_copy_struct(vg, lv, seg, vgo, lvo, sego, pv_hash, lv_hash))
goto_bad;
} else if (seg_is_cache_pool(sego)) {
struct logical_volume *data_lvo;
struct logical_volume *meta_lvo;
struct logical_volume *data_lv;
struct logical_volume *meta_lv;
/* see _cache_pool_text_import */
seg->cache_metadata_format = sego->cache_metadata_format;
seg->chunk_size = sego->chunk_size;
seg->cache_mode = sego->cache_mode;
if (sego->policy_name)
seg->policy_name = dm_pool_strdup(mem, sego->policy_name);
if (sego->policy_settings)
seg->policy_settings = dm_config_clone_node_with_mem(mem, sego->policy_settings, 0);
if (!(data_lvo = get_data_from_pool(lvo)))
goto_bad;
if (!(meta_lvo = get_meta_from_pool(lvo)))
goto_bad;
if (!(data_lv = dm_hash_lookup(lv_hash, data_lvo->name)))
goto_bad;
if (!(meta_lv = dm_hash_lookup(lv_hash, meta_lvo->name)))
goto_bad;
if (!attach_pool_data_lv(seg, data_lv))
goto_bad;
if (!attach_pool_metadata_lv(seg, meta_lv))
goto_bad;
} else if (seg_is_cache(sego)) {
struct logical_volume *pool_lvo;
struct logical_volume *origin_lvo;
struct logical_volume *pool_lv;
struct logical_volume *origin_lv;
/* see _cache_text_import */
seg->cache_metadata_format = sego->cache_metadata_format;
seg->chunk_size = sego->chunk_size;
seg->cache_mode = sego->cache_mode;
if (sego->policy_name)
seg->policy_name = dm_pool_strdup(mem, sego->policy_name);
if (sego->policy_settings)
seg->policy_settings = dm_config_clone_node_with_mem(mem, sego->policy_settings, 0);
seg->cleaner_policy = sego->cleaner_policy;
seg->metadata_start = sego->metadata_start;
seg->metadata_len = sego->metadata_len;
seg->data_start = sego->data_start;
seg->data_len = sego->data_len;
if (sego->metadata_id) {
if (!(seg->metadata_id = dm_pool_zalloc(mem, sizeof(struct id))))
goto_bad;
memcpy(seg->metadata_id, sego->metadata_id, sizeof(struct id));
}
if (sego->data_id) {
if (!(seg->data_id = dm_pool_zalloc(mem, sizeof(struct id))))
goto_bad;
memcpy(seg->data_id, sego->data_id, sizeof(struct id));
}
if (!(pool_lvo = get_pool_from_cache(lvo)))
goto_bad;
if (!(origin_lvo = get_origin_from_cache(lvo)))
goto_bad;
if (!(pool_lv = dm_hash_lookup(lv_hash, pool_lvo->name)))
goto_bad;
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))
goto_bad;
if (!attach_pool_lv(seg, pool_lv, NULL, NULL, NULL))
goto_bad;
} else if (seg_is_integrity(sego)) {
struct logical_volume *origin_lvo;
struct logical_volume *origin_lv;
struct logical_volume *meta_lvo;
struct logical_volume *meta_lv;
const char *hash;
/* see _integrity_text_import */
if (!(origin_lvo = get_origin_from_integrity(lvo)))
goto_bad;
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))
goto_bad;
seg->origin = origin_lv;
if ((meta_lvo = sego->integrity_meta_dev)) {
if (!(meta_lv = dm_hash_lookup(lv_hash, meta_lvo->name)))
goto_bad;
seg->integrity_meta_dev = meta_lv;
if (!add_seg_to_segs_using_this_lv(meta_lv, seg))
goto_bad;
}
seg->integrity_data_sectors = sego->integrity_data_sectors;
seg->integrity_recalculate = sego->integrity_recalculate;
memcpy(&seg->integrity_settings, &sego->integrity_settings, sizeof(seg->integrity_settings));
if ((hash = sego->integrity_settings.internal_hash)) {
if (!(seg->integrity_settings.internal_hash = dm_pool_strdup(mem, hash)))
goto_bad;
}
} else if (seg_is_mirror(sego)) {
struct logical_volume *log_lv;
/* see _mirrored_text_import */
seg->extents_copied = sego->extents_copied;
seg->region_size = sego->region_size;
if (sego->log_lv) {
if (!(log_lv = dm_hash_lookup(lv_hash, sego->log_lv->name)))
goto_bad;
seg->log_lv = log_lv;
}
if (!_areas_copy_struct(vg, lv, seg, vgo, lvo, sego, pv_hash, lv_hash))
goto_bad;
} else if (seg_is_thin_pool(sego)) {
struct logical_volume *data_lvo;
struct logical_volume *meta_lvo;
struct logical_volume *data_lv;
struct logical_volume *meta_lv;
/* see _thin_pool_text_import */
if (!(data_lvo = get_data_from_pool(lvo)))
goto_bad;
if (!(meta_lvo = get_meta_from_pool(lvo)))
goto_bad;
if (!(data_lv = dm_hash_lookup(lv_hash, data_lvo->name)))
goto_bad;
if (!(meta_lv = dm_hash_lookup(lv_hash, meta_lvo->name)))
goto_bad;
if (!attach_pool_data_lv(seg, data_lv))
goto_bad;
if (!attach_pool_metadata_lv(seg, meta_lv))
goto_bad;
seg->transaction_id = sego->transaction_id;
seg->chunk_size = sego->chunk_size;
seg->discards = sego->discards;
seg->zero_new_blocks = sego->zero_new_blocks;
seg->crop_metadata = sego->crop_metadata;
if (!_thin_messages_copy_struct(vgo, vg, lvo, lv, sego, seg, lv_hash))
goto_bad;
} else if (seg_is_thin_volume(sego)) {
struct logical_volume *pool_lvo;
struct logical_volume *origin_lvo;
struct logical_volume *merge_lvo;
struct logical_volume *external_lvo;
struct logical_volume *pool_lv = NULL;
struct logical_volume *origin_lv = NULL;
struct logical_volume *merge_lv = NULL;
struct logical_volume *external_lv = NULL;
/* see _thin_text_import */
if (!(pool_lvo = get_pool_from_thin(lvo)))
goto_bad;
if (!(pool_lv = dm_hash_lookup(lv_hash, pool_lvo->name)))
goto_bad;
if ((origin_lvo = get_origin_from_thin(lvo))) {
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_lvo->name)))
goto_bad;
}
if ((merge_lvo = get_merge_lv_from_thin(lvo))) {
if (!(merge_lv = dm_hash_lookup(lv_hash, merge_lvo->name)))
goto_bad;
}
if ((external_lvo = get_external_lv_from_thin(lvo))) {
if (!(external_lv = dm_hash_lookup(lv_hash, external_lvo->name)))
goto_bad;
}
if (!attach_pool_lv(seg, pool_lv, origin_lv, NULL, merge_lv))
goto_bad;
if (!attach_thin_external_origin(seg, external_lv))
goto_bad;
seg->transaction_id = sego->transaction_id;
seg->device_id = sego->device_id;
} else if (seg_is_snapshot(sego)) {
struct logical_volume *origin_lvo;
struct logical_volume *cow_lvo;
struct logical_volume *origin_lv;
struct logical_volume *cow_lv;
/* see _snap_text_import */
if (!(origin_lvo = get_origin_from_snap(lvo)))
goto_bad;
if (!(cow_lvo = get_cow_from_snap(lvo)))
goto_bad;
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_lvo->name)))
goto_bad;
if (!(cow_lv = dm_hash_lookup(lv_hash, cow_lvo->name)))
goto_bad;
init_snapshot_seg(seg, origin_lv, cow_lv, sego->chunk_size,
(sego->status & MERGING) ? 1 : 0);
} else if (seg_is_writecache(sego)) {
struct logical_volume *origin_lvo;
struct logical_volume *fast_lvo;
struct logical_volume *origin_lv;
struct logical_volume *fast_lv;
/* see _writecache_text_import */
if (!(origin_lvo = get_origin_from_writecache(lvo)))
goto_bad;
if (!(fast_lvo = get_fast_from_writecache(lvo)))
goto_bad;
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_lvo->name)))
goto_bad;
if (!(fast_lv = dm_hash_lookup(lv_hash, fast_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))
return_0;
seg->writecache_block_size = sego->writecache_block_size;
seg->origin = origin_lv;
seg->writecache = fast_lv;
if (!add_seg_to_segs_using_this_lv(fast_lv, seg))
return_0;
memcpy(&seg->writecache_settings, &sego->writecache_settings, sizeof(seg->writecache_settings));
if (sego->writecache_settings.new_key &&
!(seg->writecache_settings.new_key = dm_pool_strdup(vg->vgmem, sego->writecache_settings.new_key)))
goto_bad;
if (sego->writecache_settings.new_val &&
!(seg->writecache_settings.new_val = dm_pool_strdup(vg->vgmem, sego->writecache_settings.new_val)))
goto_bad;
} else if (seg_is_raid(sego)) {
struct logical_volume *area_lvo;
struct logical_volume *area_lv;
/* see _raid_text_import_areas */
seg->region_size = sego->region_size;
seg->stripe_size = sego->stripe_size;
seg->data_copies = sego->data_copies;
seg->writebehind = sego->writebehind;
seg->min_recovery_rate = sego->min_recovery_rate;
seg->max_recovery_rate = sego->max_recovery_rate;
seg->data_offset = sego->data_offset;
seg->reshape_len = sego->reshape_len;
for (s = 0; s < sego->area_count; s++) {
if (!(area_lvo = sego->areas[s].u.lv.lv))
goto_bad;
if (!(area_lv = dm_hash_lookup(lv_hash, area_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, s, area_lv, 0, RAID_IMAGE))
goto_bad;
if (!sego->meta_areas)
continue;
if (!(area_lvo = sego->meta_areas[s].u.lv.lv))
continue;
if (!(area_lv = dm_hash_lookup(lv_hash, area_lvo->name)))
goto_bad;
if (!set_lv_segment_area_lv(seg, s, area_lv, 0, RAID_META))
goto_bad;
}
} else if (seg_is_vdo_pool(sego)) {
struct logical_volume *data_lvo;
struct logical_volume *data_lv;
if (!(data_lvo = get_data_from_pool(lvo)))
goto_bad;
if (!(data_lv = dm_hash_lookup(lv_hash, data_lvo->name)))
goto_bad;
seg->vdo_pool_header_size = sego->vdo_pool_header_size;
seg->vdo_pool_virtual_extents = sego->vdo_pool_virtual_extents;
memcpy(&seg->vdo_params, &sego->vdo_params, sizeof(seg->vdo_params));
if (!set_lv_segment_area_lv(seg, 0, data_lv, 0, LV_VDO_POOL_DATA))
goto_bad;
} else if (seg_is_vdo(sego)) {
struct logical_volume *pool_lvo;
struct logical_volume *pool_lv;
uint32_t vdo_offset;
if (!(pool_lvo = get_pool_from_vdo(lvo)))
goto_bad;
if (!(pool_lv = dm_hash_lookup(lv_hash, pool_lvo->name)))
goto_bad;
vdo_offset = sego->areas[0].u.lv.le; /* or seg_le(seg, 0)) */
if (!set_lv_segment_area_lv(seg, 0, pool_lv, vdo_offset, LV_VDO_POOL))
goto_bad;
} else if (seg_is_zero(sego) || seg_is_error(sego)) {
/* nothing to copy */
} else {
log_error("Missing copy for lv %s segtype %s.",
display_lvname(lvo), sego->segtype->name);
goto bad;
}
return seg;
bad:
return NULL;
}
/* _read_lvsegs, _read_segments, _read_segment, alloc_lv_segment, ->text_import */
static int _lvsegs_copy_struct(struct volume_group *vg,
struct logical_volume *lv,
struct volume_group *vgo,
struct logical_volume *lvo,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
{
struct lv_segment *sego;
struct lv_segment *seg;
/* see _read_segment */
dm_list_iterate_items(sego, &lvo->segments) {
/* see _read_segment */
if (!(seg = _seg_copy_struct(vg, lv, vgo, lvo, sego, pv_hash, lv_hash)))
goto_bad;
/* last step in _read_segment */
/* adds seg to lv->segments and sets lv->le_count */
insert_segment(lv, seg);
}
return 1;
bad:
return 0;
}
static struct logical_volume *_lv_copy_struct(struct volume_group *vg,
struct volume_group *vgo,
struct logical_volume *lvo,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
{
struct dm_pool *mem = vg->vgmem;
struct logical_volume *lv;
if (!(lv = alloc_lv(mem)))
return NULL;
if (!(lv->name = dm_pool_strdup(mem, lvo->name)))
goto_bad;
if (lvo->profile && !(lv->profile = add_profile(lvo->vg->cmd, lvo->profile->name, CONFIG_PROFILE_METADATA)))
goto_bad;
if (lvo->hostname && !(lv->hostname = dm_pool_strdup(mem, lvo->hostname)))
goto_bad;
if (lvo->lock_args && !(lv->lock_args = dm_pool_strdup(mem, lvo->lock_args)))
goto_bad;
if (!dm_list_empty(&lvo->tags) && !str_list_dup(mem, &lv->tags, &lvo->tags))
goto_bad;
memcpy(&lv->lvid, &lvo->lvid, sizeof(lvo->lvid));
lv->vg = vg;
lv->status = lvo->status;
lv->alloc = lvo->alloc;
lv->read_ahead = lvo->read_ahead;
lv->major = lvo->major;
lv->minor = lvo->minor;
lv->size = lvo->size;
/* lv->le_count = lvo->le_count; */ /* set by calls to insert_segment() */
lv->origin_count = lvo->origin_count;
lv->external_count = lvo->external_count;
lv->timestamp = lvo->timestamp;
if (!dm_hash_insert(lv_hash, lv->name, lv))
goto_bad;
return lv;
bad:
return NULL;
}
/* _read_pv */
static struct physical_volume *_pv_copy_struct(struct volume_group *vg, struct volume_group *vgo,
struct physical_volume *pvo, struct dm_hash_table *pv_hash)
{
struct dm_pool *mem = vg->vgmem;
struct physical_volume *pv;
if (!(pv = dm_pool_zalloc(mem, sizeof(*pv))))
return_NULL;
if (!(pv->vg_name = dm_pool_strdup(mem, vg->name)))
goto_bad;
pv->is_labelled = pvo->is_labelled;
memcpy(&pv->id, &pvo->id, sizeof(struct id));
memcpy(&pv->vg_id, &vgo->id, sizeof(struct id));
pv->status = pvo->status;
pv->size = pvo->size;
if (pvo->device_hint && !(pv->device_hint = dm_pool_strdup(mem, pvo->device_hint)))
goto_bad;
if (pvo->device_id && !(pv->device_id = dm_pool_strdup(mem, pvo->device_id)))
goto_bad;
if (pvo->device_id_type && !(pv->device_id_type = dm_pool_strdup(mem, pvo->device_id_type)))
goto_bad;
pv->pe_start = pvo->pe_start;
pv->pe_count = pvo->pe_count;
pv->ba_start = pvo->ba_start;
pv->ba_size = pvo->ba_size;
dm_list_init(&pv->tags);
dm_list_init(&pv->segments);
if (!dm_list_empty(&pvo->tags) && !str_list_dup(mem, &pv->tags, &pvo->tags))
goto_bad;
pv->pe_size = vg->extent_size;
pv->pe_alloc_count = 0;
pv->pe_align = 0;
/* Note: text import uses "pv0" style keys rather than pv id. */
if (!dm_hash_insert_binary(pv_hash, &pv->id, ID_LEN, pv))
goto_bad;
return pv;
bad:
return NULL;
}
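As the note in _pv_copy_struct says, pv_hash is keyed by the raw PV id bytes (dm_hash_insert_binary / dm_hash_lookup_binary), not by a "pv0"-style string as in text import. A toy fixed-size table keyed by raw bytes shows the same idea; ID_LEN's value and the linear scan are illustrative assumptions, not lvm2's implementation:

```c
#include <assert.h>
#include <string.h>

#define ID_LEN 32   /* length of the binary key, as with a PV id */
#define SLOTS  16

struct entry { unsigned char id[ID_LEN]; void *val; int used; };
struct idmap { struct entry e[SLOTS]; };

/* Store 'val' under a raw byte key; compare with memcmp, not strcmp. */
static int idmap_insert(struct idmap *m, const unsigned char *id, void *val)
{
	for (int i = 0; i < SLOTS; i++)
		if (!m->e[i].used) {
			memcpy(m->e[i].id, id, ID_LEN);
			m->e[i].val = val;
			m->e[i].used = 1;
			return 1;
		}
	return 0; /* table full */
}

static void *idmap_lookup(const struct idmap *m, const unsigned char *id)
{
	for (int i = 0; i < SLOTS; i++)
		if (m->e[i].used && !memcmp(m->e[i].id, id, ID_LEN))
			return m->e[i].val;
	return NULL;
}
```

Keying by the binary id sidesteps any string formatting of the id and matches how _areas_copy_struct above looks the PV back up with the 32-byte id of the original area's PV.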
/*
* We only need to copy things that are exported to metadata text.
* This struct copy is an alternative to text export+import, so the
 * reference for what to copy is the text export and import
* functions.
*
* There are two parts to copying the struct:
* 1. Setting the values, e.g. new->field = old->field.
* 2. Creating the linkages (pointers/lists) among all of
* the new structs.
*
* Creating the linkages is the complex part, and for that we use
* most of the same functions that text import uses.
*
* In some cases, the functions creating linkage also set values.
* This is not common, but in those cases we need to be careful.
*
* Many parts of the vg struct are not used by the activation code,
* but it's difficult to know exactly what is or isn't used, so we
* try to copy everything, except in cases where we know it's not
* used and implementing it would be complicated.
*/
struct volume_group *vg_copy_struct(struct volume_group *vgo)
{
struct volume_group *vg;
struct logical_volume *lv;
struct pv_list *pvlo;
struct pv_list *pvl;
struct lv_list *lvlo;
struct lv_list *lvl;
struct dm_hash_table *pv_hash = NULL;
struct dm_hash_table *lv_hash = NULL;
if (!(vg = alloc_vg("read_vg", vgo->cmd, vgo->name)))
return NULL;
log_debug("Copying vg struct %p to %p", vgo, vg);
/*
* TODO: put hash tables in vg struct, and also use for text import.
*/
if (!(pv_hash = dm_hash_create(58)))
goto_bad;
if (!(lv_hash = dm_hash_create(8180)))
goto_bad;
vg->seqno = vgo->seqno;
vg->alloc = vgo->alloc;
vg->status = vgo->status;
vg->id = vgo->id;
vg->extent_size = vgo->extent_size;
vg->max_lv = vgo->max_lv;
vg->max_pv = vgo->max_pv;
vg->pv_count = vgo->pv_count;
vg->open_mode = vgo->open_mode;
vg->mda_copies = vgo->mda_copies;
if (vgo->profile && !(vg->profile = add_profile(vgo->cmd, vgo->profile->name, CONFIG_PROFILE_METADATA)))
goto_bad;
if (vgo->system_id && !(vg->system_id = dm_pool_strdup(vg->vgmem, vgo->system_id)))
goto_bad;
if (vgo->lock_type && !(vg->lock_type = dm_pool_strdup(vg->vgmem, vgo->lock_type)))
goto_bad;
if (vgo->lock_args && !(vg->lock_args = dm_pool_strdup(vg->vgmem, vgo->lock_args)))
goto_bad;
if (!dm_list_empty(&vgo->tags) && !str_list_dup(vg->vgmem, &vg->tags, &vgo->tags))
goto_bad;
dm_list_iterate_items(pvlo, &vgo->pvs) {
if (!(pvl = dm_pool_zalloc(vg->vgmem, sizeof(struct pv_list))))
goto_bad;
if (!(pvl->pv = _pv_copy_struct(vg, vgo, pvlo->pv, pv_hash)))
goto_bad;
if (!alloc_pv_segment_whole_pv(vg->vgmem, pvl->pv))
goto_bad;
vg->extent_count += pvl->pv->pe_count;
vg->free_count += pvl->pv->pe_count;
add_pvl_to_vgs(vg, pvl);
}
dm_list_iterate_items(lvlo, &vgo->lvs) {
if (!(lvl = dm_pool_zalloc(vg->vgmem, sizeof(struct lv_list))))
goto_bad;
if (!(lvl->lv = _lv_copy_struct(vg, vgo, lvlo->lv, pv_hash, lv_hash)))
goto_bad;
dm_list_add(&vg->lvs, &lvl->list);
}
if (vgo->pool_metadata_spare_lv &&
!(vg->pool_metadata_spare_lv = dm_hash_lookup(lv_hash, vgo->pool_metadata_spare_lv->name)))
goto_bad;
if (vgo->sanlock_lv &&
!(vg->sanlock_lv = dm_hash_lookup(lv_hash, vgo->sanlock_lv->name)))
goto_bad;
dm_list_iterate_items(lvlo, &vgo->lvs) {
if (!(lv = dm_hash_lookup(lv_hash, lvlo->lv->name)))
goto_bad;
if (!_lvsegs_copy_struct(vg, lv, vgo, lvlo->lv, pv_hash, lv_hash))
goto_bad;
}
/* sanity check */
if ((vg->free_count != vgo->free_count) || (vg->extent_count != vgo->extent_count)) {
log_error("vg copy wrong free_count %u %u extent_count %u %u",
vgo->free_count, vg->free_count, vgo->extent_count, vg->extent_count);
goto_bad;
}
set_pv_devices(vgo->fid, vg);
dm_hash_destroy(pv_hash);
dm_hash_destroy(lv_hash);
return vg;
bad:
dm_hash_destroy(pv_hash);
dm_hash_destroy(lv_hash);
release_vg(vg);
return NULL;
}

View File

@@ -129,6 +129,7 @@ struct volume_group {
uint32_t mda_copies; /* target number of mdas for this VG */
struct dm_hash_table *hostnames; /* map of creation hostnames */
struct logical_volume *pool_metadata_spare_lv; /* one per VG */
struct logical_volume *sanlock_lv; /* one per VG */
struct dm_list msg_list;

View File

@@ -74,8 +74,7 @@ static int _mirrored_text_import_area_count(const struct dm_config_node *sn, uin
}
static int _mirrored_text_import(struct lv_segment *seg, const struct dm_config_node *sn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
const struct dm_config_value *cv;
const char *logname = NULL;
@@ -103,7 +102,7 @@ static int _mirrored_text_import(struct lv_segment *seg, const struct dm_config_
}
if (dm_config_get_str(sn, "mirror_log", &logname)) {
if (!(seg->log_lv = dm_hash_lookup(lv_hash, logname))) {
if (!(seg->log_lv = find_lv(seg->lv->vg, logname))) {
log_error("Unrecognised mirror log in "
"segment %s of logical volume %s.",
dm_config_parent_name(sn), seg->lv->name);

View File

@@ -18,130 +18,6 @@
#include "lib/misc/crc.h"
#include "lib/mm/xlate.h"
/*
* CRC-32 byte lookup table generated by crc_gen.c
*
* Precomputed lookup table for CRC computed with 0xedb88320 polynomial.
*/
static const uint32_t _crctab[] = {
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d,
};
#ifdef __x86_64__
/*
* Note that the CRC-32 checksum is merely used for error detection in
* transmission and storage. It is not intended to guard against the malicious
* modification of files (i.e., it is not a cryptographic hash). !!!
*
* This code is based on zlib code from:
*
* https://github.com/vlastavesely/crc32sum
* https://github.com/chromium/chromium/blob/master/third_party/zlib/
*
* SPDX-License-Identifier: GPL-2.0
*/
/*
 * ATM use this code only on the x86_64 arch, where it was tested.
 * TODO: check whether it also speeds up non-x86_64 archs.
*/
static unsigned int _crc32_lookup[16][256] = { 0 };
static void _initialise_crc32(void)
{
unsigned int i, j;
if (_crc32_lookup[0][1])
return;
for (i = 0; i < 256; i++)
_crc32_lookup[0][i] = _crctab[i];
for (i = 0; i < 256; i++)
for (j = 1; j < 16; j++)
_crc32_lookup[j][i] = (_crc32_lookup[j - 1][i] >> 8) ^
_crc32_lookup[0][_crc32_lookup[j - 1][i] & 0xff];
}
#ifndef DEBUG_CRC32
uint32_t calc_crc(uint32_t initial, const uint8_t *buf, uint32_t size)
#else
static uint32_t _calc_crc_new(uint32_t initial, const uint8_t *buf, uint32_t size)
#endif
{
const uint32_t *ptr = (const uint32_t *) buf;
uint32_t a, b, c, d;
uint32_t crc = initial;
_initialise_crc32();
for (;size >= 16; size -= 16) {
a = xlate32(*ptr++) ^ crc;
b = xlate32(*ptr++);
c = xlate32(*ptr++);
d = xlate32(*ptr++);
crc = _crc32_lookup[ 0][(d >> 24) & 0xff] ^
_crc32_lookup[ 1][(d >> 16) & 0xff] ^
_crc32_lookup[ 2][(d >> 8) & 0xff] ^
_crc32_lookup[ 3][ d & 0xff] ^
_crc32_lookup[ 4][(c >> 24) & 0xff] ^
_crc32_lookup[ 5][(c >> 16) & 0xff] ^
_crc32_lookup[ 6][(c >> 8) & 0xff] ^
_crc32_lookup[ 7][ c & 0xff] ^
_crc32_lookup[ 8][(b >> 24) & 0xff] ^
_crc32_lookup[ 9][(b >> 16) & 0xff] ^
_crc32_lookup[10][(b >> 8) & 0xff] ^
_crc32_lookup[11][ b & 0xff] ^
_crc32_lookup[12][(a >> 24) & 0xff] ^
_crc32_lookup[13][(a >> 16) & 0xff] ^
_crc32_lookup[14][(a >> 8) & 0xff] ^
_crc32_lookup[15][ a & 0xff];
}
buf = (const uint8_t *) ptr;
while (size--)
crc = _crc32_lookup[0][((unsigned char) crc ^ *(buf++))] ^ (crc >> 8);
return crc;
}
#else // __x86_64__
/* Calculate an endian-independent CRC of supplied buffer */
#ifndef DEBUG_CRC32
uint32_t calc_crc(uint32_t initial, const uint8_t *buf, uint32_t size)
@@ -149,6 +25,41 @@ uint32_t calc_crc(uint32_t initial, const uint8_t *buf, uint32_t size)
static uint32_t _calc_crc_new(uint32_t initial, const uint8_t *buf, uint32_t size)
#endif
{
/* CRC-32 byte lookup table generated by crc_gen.c */
static const uint32_t _crctab[] = {
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d,
};
const uint32_t *start = (const uint32_t *) buf;
const uint32_t *end = (const uint32_t *) (buf + (size & 0xfffffffc));
uint32_t crc = initial;
@@ -173,8 +84,6 @@ static uint32_t _calc_crc_new(uint32_t initial, const uint8_t *buf, uint32_t siz
return crc;
}
#endif // __x86_64__
#ifdef DEBUG_CRC32
static uint32_t _calc_crc_old(uint32_t initial, const uint8_t *buf, uint32_t size)
{

View File

@@ -517,13 +517,6 @@ static void _restore_priority_if_possible(struct cmd_context *cmd)
/* Stop memory getting swapped out */
static void _lock_mem(struct cmd_context *cmd)
{
if (!_size_stack || _size_malloc_tmp) {
log_debug_mem("Skipping memory locking (reserved memory: "
FMTsize_t " stack: " FMTsize_t ").",
_size_malloc_tmp, _size_stack);
return;
}
if (!cmd->running_on_valgrind)
_allocate_memory();
(void)strerror(0); /* Force libc.mo load */
@@ -564,13 +557,6 @@ static void _unlock_mem(struct cmd_context *cmd)
{
size_t unlock_mstats = 0;
if (!_size_stack || _size_malloc_tmp) {
log_debug_mem("Skipping memory unlocking (reserved memory: "
FMTsize_t " stack: " FMTsize_t ").",
_size_malloc_tmp, _size_stack);
return;
}
log_very_verbose("Unlocking memory");
if (!_memlock_maps(cmd, LVM_MUNLOCK, &unlock_mstats))

View File

@@ -109,7 +109,7 @@ void lvmnotify_send(struct cmd_context *cmd)
/* If lvmdbusd isn't running, don't notify, since that would start it and it would then auto-activate */
if (!lvmdbusd_running()) {
log_debug_dbus("dbus daemon not running, not notifying");
log_debug_dbus("dbus damon not running, not notifying");
return;
}

View File

@@ -70,8 +70,7 @@ static int _raid_text_import_area_count(const struct dm_config_node *sn,
static int _raid_text_import_areas(struct lv_segment *seg,
const struct dm_config_node *sn,
const struct dm_config_value *cv,
struct dm_hash_table *lv_hash)
const struct dm_config_value *cv)
{
unsigned int s;
struct logical_volume *lv;
@@ -89,7 +88,7 @@ static int _raid_text_import_areas(struct lv_segment *seg,
}
/* Metadata device comes first. */
if (!(lv = dm_hash_lookup(lv_hash, cv->v.str))) {
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
@@ -107,7 +106,7 @@ static int _raid_text_import_areas(struct lv_segment *seg,
}
/* Data device comes second */
if (!(lv = dm_hash_lookup(lv_hash, cv->v.str))) {
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
@@ -130,8 +129,7 @@ static int _raid_text_import_areas(struct lv_segment *seg,
static int _raid_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
const struct dm_config_value *cv;
const struct {
@@ -173,7 +171,7 @@ static int _raid_text_import(struct lv_segment *seg,
return 0;
}
if (!_raid_text_import_areas(seg, sn, cv, lv_hash)) {
if (!_raid_text_import_areas(seg, sn, cv)) {
log_error("Failed to import RAID component pairs.");
return 0;
}

View File

@@ -295,6 +295,7 @@ FIELD(SEGS, seg, STR_LIST, "IntegSettings", list, 0, integrity_settings, integri
FIELD(SEGS, seg, BIN, "VDOCompression", list, 0, vdo_compression, vdo_compression, "Set for compressed LV (vdopool).", 0)
FIELD(SEGS, seg, BIN, "VDODeduplication", list, 0, vdo_deduplication, vdo_deduplication, "Set for deduplicated LV (vdopool).", 0)
FIELD(SEGS, seg, BIN, "VDOMetadataHints", list, 0, vdo_use_metadata_hints, vdo_use_metadata_hints, "Use REQ_SYNC for writes (vdopool).", 0)
FIELD(SEGS, seg, NUM, "VDOMinimumIOSize", list, 0, vdo_minimum_io_size, vdo_minimum_io_size, "Minimum acceptable IO size (vdopool).", 0)
FIELD(SEGS, seg, SIZ, "VDOBlockMapCacheSize", list, 0, vdo_block_map_cache_size, vdo_block_map_cache_size, "Allocated caching size (vdopool).", 0)
FIELD(SEGS, seg, NUM, "VDOBlockMapEraLength", list, 0, vdo_block_map_era_length, vdo_block_map_era_length, "Speed of cache writes (vdopool).", 0)
@@ -309,9 +310,8 @@ FIELD(SEGS, seg, NUM, "VDOHashZoneThreads", list, 0, vdo_hash_zone_threads, vdo_
FIELD(SEGS, seg, NUM, "VDOLogicalThreads", list, 0, vdo_logical_threads, vdo_logical_threads, "Logical threads for subdivide parts (vdopool).", 0)
FIELD(SEGS, seg, NUM, "VDOPhysicalThreads", list, 0, vdo_physical_threads, vdo_physical_threads, "Physical threads for subdivide parts (vdopool).", 0)
FIELD(SEGS, seg, NUM, "VDOMaxDiscard", list, 0, vdo_max_discard, vdo_max_discard, "Maximum discard size volume can receive (vdopool).", 0)
FIELD(SEGS, seg, STR, "VDOWritePolicy", list, 0, vdo_write_policy, vdo_write_policy, "Specified write policy (vdopool).", 0)
FIELD(SEGS, seg, SIZ, "VDOHeaderSize", list, 0, vdo_header_size, vdo_header_size, "Header size at front of vdopool.", 0)
FIELD(SEGS, seg, BIN, "VDOMetadataHints", list, 0, vdo_use_metadata_hints, vdo_use_metadata_hints, "Deprecated use of REQ_SYNC for writes (vdopool).", 0)
FIELD(SEGS, seg, STR, "VDOWritePolicy", list, 0, vdo_write_policy, vdo_write_policy, "Deprecated write policy (vdopool).", 0)
/*
* End of SEGS type fields

View File

@@ -36,8 +36,7 @@ static const char *_snap_target_name(const struct lv_segment *seg,
}
static int _snap_text_import(struct lv_segment *seg, const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
uint32_t chunk_size;
struct logical_volume *org, *cow;
@@ -72,11 +71,11 @@ static int _snap_text_import(struct lv_segment *seg, const struct dm_config_node
if (!(org_name = dm_config_find_str(sn, "origin", NULL)))
return SEG_LOG_ERROR("Snapshot origin must be a string in");
if (!(cow = dm_hash_lookup(lv_hash, cow_name)))
if (!(cow = find_lv(seg->lv->vg, cow_name)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"snapshot cow store in", cow_name);
if (!(org = dm_hash_lookup(lv_hash, org_name)))
if (!(org = find_lv(seg->lv->vg, org_name)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for "
"snapshot origin in", org_name);

View File

@@ -70,8 +70,7 @@ static int _striped_text_import_area_count(const struct dm_config_node *sn, uint
}
static int _striped_text_import(struct lv_segment *seg, const struct dm_config_node *sn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
const struct dm_config_value *cv;

View File

@@ -53,8 +53,7 @@ static void _thin_pool_display(const struct lv_segment *seg)
static int _thin_pool_add_message(struct lv_segment *seg,
const char *key,
const struct dm_config_node *sn,
struct dm_hash_table *lv_hash)
const struct dm_config_node *sn)
{
const char *lv_name = NULL;
struct logical_volume *lv = NULL;
@@ -63,7 +62,7 @@ static int _thin_pool_add_message(struct lv_segment *seg,
/* Message must have only one from: create, delete */
if (dm_config_get_str(sn, "create", &lv_name)) {
if (!(lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown LV %s for create message in",
lv_name);
/* FIXME: switch to _SNAP later, if the created LV has an origin */
@@ -81,8 +80,7 @@ static int _thin_pool_add_message(struct lv_segment *seg,
static int _thin_pool_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
const char *lv_name;
struct logical_volume *pool_data_lv, *pool_metadata_lv;
@@ -93,13 +91,13 @@ static int _thin_pool_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "metadata", &lv_name))
return SEG_LOG_ERROR("Metadata must be a string in");
if (!(pool_metadata_lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(pool_metadata_lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown metadata %s in", lv_name);
if (!dm_config_get_str(sn, "pool", &lv_name))
return SEG_LOG_ERROR("Pool must be a string in");
if (!(pool_data_lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(pool_data_lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown pool %s in", lv_name);
if (!attach_pool_data_lv(seg, pool_data_lv))
@@ -143,7 +141,7 @@ static int _thin_pool_text_import(struct lv_segment *seg,
/* Read messages */
for (; sn; sn = sn->sib)
if (!(sn->v) && !_thin_pool_add_message(seg, sn->key, sn->child, lv_hash))
if (!(sn->v) && !_thin_pool_add_message(seg, sn->key, sn->child))
return_0;
return 1;
@@ -470,8 +468,7 @@ static void _thin_display(const struct lv_segment *seg)
static int _thin_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
const char *lv_name;
struct logical_volume *pool_lv, *origin = NULL, *external_lv = NULL, *merge_lv = NULL;
@@ -480,7 +477,7 @@ static int _thin_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "thin_pool", &lv_name))
return SEG_LOG_ERROR("Thin pool must be a string in");
if (!(pool_lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(pool_lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown thin pool %s in", lv_name);
if (!dm_config_get_uint64(sn, "transaction_id", &seg->transaction_id))
@@ -490,14 +487,14 @@ static int _thin_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "origin", &lv_name))
return SEG_LOG_ERROR("Origin must be a string in");
if (!(origin = dm_hash_lookup(lv_hash, lv_name)))
if (!(origin = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown origin %s in", lv_name);
}
if (dm_config_has_node(sn, "merge")) {
if (!dm_config_get_str(sn, "merge", &lv_name))
return SEG_LOG_ERROR("Merge lv must be a string in");
if (!(merge_lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(merge_lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown merge lv %s in", lv_name);
}
@@ -512,7 +509,7 @@ static int _thin_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "external_origin", &lv_name))
return SEG_LOG_ERROR("External origin must be a string in");
if (!(external_lv = dm_hash_lookup(lv_hash, lv_name)))
if (!(external_lv = find_lv(seg->lv->vg, lv_name)))
return SEG_LOG_ERROR("Unknown external origin %s in", lv_name);
}

View File

@@ -20,8 +20,7 @@
#include "lib/config/config.h"
static int _unknown_text_import(struct lv_segment *seg, const struct dm_config_node *sn,
struct dm_hash_table *pv_hash,
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash)
{
struct dm_config_node *new, *last = NULL, *head = NULL;
const struct dm_config_node *current;

View File

@@ -75,8 +75,7 @@ static void _vdo_display(const struct lv_segment *seg)
static int _vdo_text_import(struct lv_segment *seg,
const struct dm_config_node *n,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct logical_volume *vdo_pool_lv;
const char *str;
@@ -85,7 +84,7 @@ static int _vdo_text_import(struct lv_segment *seg,
if (!dm_config_has_node(n, "vdo_pool") ||
!(str = dm_config_find_str(n, "vdo_pool", NULL)))
return _bad_field("vdo_pool");
if (!(vdo_pool_lv = dm_hash_lookup(lv_hash, str))) {
if (!(vdo_pool_lv = find_lv(seg->lv->vg, str))) {
log_error("Unknown VDO pool logical volume %s.", str);
return 0;
}
@@ -168,8 +167,7 @@ static void _vdo_pool_display(const struct lv_segment *seg)
_print_yes_no("Compression\t", vtp->use_compression);
_print_yes_no("Deduplication", vtp->use_deduplication);
if (vtp->use_metadata_hints)
_print_yes_no("Metadata hints", vtp->use_metadata_hints);
_print_yes_no("Metadata hints", vtp->use_metadata_hints);
log_print(" Minimum IO size\t%s",
display_size(cmd, vtp->minimum_io_size));
@@ -193,8 +191,7 @@ static void _vdo_pool_display(const struct lv_segment *seg)
log_print(" # Logical threads\t%u", (unsigned) vtp->logical_threads);
log_print(" # Physical threads\t%u", (unsigned) vtp->physical_threads);
log_print(" Max discard\t\t%u", (unsigned) vtp->max_discard);
if (vtp->write_policy != DM_VDO_WRITE_POLICY_AUTO)
log_print(" Write policy\t%s", get_vdo_write_policy_name(vtp->write_policy));
log_print(" Write policy\t%s", get_vdo_write_policy_name(vtp->write_policy));
}
/* reused as _vdo_text_import_area_count */
@@ -208,8 +205,7 @@ static int _vdo_pool_text_import_area_count(const struct dm_config_node *sn __at
static int _vdo_pool_text_import(struct lv_segment *seg,
const struct dm_config_node *n,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct dm_vdo_target_params *vtp = &seg->vdo_params;
struct logical_volume *data_lv;
@@ -218,7 +214,7 @@ static int _vdo_pool_text_import(struct lv_segment *seg,
if (!dm_config_has_node(n, "data") ||
!(str = dm_config_find_str(n, "data", NULL)))
return _bad_field("data");
if (!(data_lv = dm_hash_lookup(lv_hash, str))) {
if (!(data_lv = find_lv(seg->lv->vg, str))) {
log_error("Unknown logical volume %s.", str);
return 0;
}

View File

@@ -40,8 +40,7 @@ static void _writecache_display(const struct lv_segment *seg)
static int _writecache_text_import(struct lv_segment *seg,
const struct dm_config_node *sn,
struct dm_hash_table *pv_hash __attribute__((unused)),
struct dm_hash_table *lv_hash)
struct dm_hash_table *pv_hash __attribute__((unused)))
{
struct logical_volume *origin_lv = NULL;
struct logical_volume *fast_lv;
@@ -54,7 +53,7 @@ static int _writecache_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "origin", &origin_name))
return SEG_LOG_ERROR("origin must be a string in");
if (!(origin_lv = dm_hash_lookup(lv_hash, origin_name)))
if (!(origin_lv = find_lv(seg->lv->vg, origin_name)))
return SEG_LOG_ERROR("Unknown LV specified for writecache origin %s in", origin_name);
if (!set_lv_segment_area_lv(seg, 0, origin_lv, 0, 0))
@@ -66,7 +65,7 @@ static int _writecache_text_import(struct lv_segment *seg,
if (!dm_config_get_str(sn, "writecache", &fast_name))
return SEG_LOG_ERROR("writecache must be a string in");
if (!(fast_lv = dm_hash_lookup(lv_hash, fast_name)))
if (!(fast_lv = find_lv(seg->lv->vg, fast_name)))
return SEG_LOG_ERROR("Unknown logical volume %s specified for writecache in",
fast_name);

View File

@@ -3670,7 +3670,7 @@ struct dm_pool *dm_config_memory(struct dm_config_tree *cft);
*/
#define DM_UDEV_DISABLE_DM_RULES_FLAG 0x0001
/*
* DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG is set in case we need to disable
* DM_UDEV_DISABLE_SUBSYTEM_RULES_FLAG is set in case we need to disable
* subsystem udev rules, but still we need the general DM udev rules to
* be applied (to create the nodes and symlinks under /dev and /dev/disk).
*/

View File

@@ -2436,20 +2436,20 @@ static int _udev_notify_sem_inc(uint32_t cookie, int semid)
int val;
if (semop(semid, &sb, 1) < 0) {
log_error("cookie inc: semid %d: semop failed for cookie 0x%" PRIx32 ": %s",
log_error("semid %d: semop failed for cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
return 0;
}
if ((val = semctl(semid, 0, GETVAL)) < 0) {
log_warn("cookie inc: semid %d: sem_ctl GETVAL failed for "
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented.",
cookie, semid);
} else
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented to %d",
cookie, semid, val);
return 0;
}
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) incremented to %d",
cookie, semid, val);
return 1;
}
@@ -2459,21 +2459,23 @@ static int _udev_notify_sem_dec(uint32_t cookie, int semid)
struct sembuf sb = {0, -1, IPC_NOWAIT};
int val;
if ((val = semctl(semid, 0, GETVAL)) < 0)
log_warn("cookie dec: semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
if ((val = semctl(semid, 0, GETVAL)) < 0) {
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
return 0;
}
if (semop(semid, &sb, 1) < 0) {
switch (errno) {
case EAGAIN:
log_error("cookie dec: semid %d: semop failed for cookie "
log_error("semid %d: semop failed for cookie "
"0x%" PRIx32 ": "
"incorrect semaphore state",
semid, cookie);
break;
default:
log_error("cookie dec: semid %d: semop failed for cookie "
log_error("semid %d: semop failed for cookie "
"0x%" PRIx32 ": %s",
semid, cookie, strerror(errno));
break;
@@ -2481,12 +2483,9 @@ static int _udev_notify_sem_dec(uint32_t cookie, int semid)
return 0;
}
if (val < 0)
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented.",
cookie, semid);
else
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented to %d",
cookie, semid, val - 1);
log_debug_activation("Udev cookie 0x%" PRIx32 " (semid %d) decremented to %d",
cookie, semid, val - 1);
return 1;
}
@@ -2563,7 +2562,7 @@ static int _udev_notify_sem_create(uint32_t *cookie, int *semid)
sem_arg.val = 1;
if (semctl(gen_semid, 0, SETVAL, sem_arg) < 0) {
log_error("cookie create: semid %d: semctl failed: %s", gen_semid, strerror(errno));
log_error("semid %d: semctl failed: %s", gen_semid, strerror(errno));
/* We have to destroy just created semaphore
* so it won't stay in the system. */
(void) _udev_notify_sem_destroy(gen_cookie, gen_semid);
@@ -2571,10 +2570,9 @@ static int _udev_notify_sem_create(uint32_t *cookie, int *semid)
}
if ((val = semctl(gen_semid, 0, GETVAL)) < 0) {
log_error("cookie create: semid %d: sem_ctl GETVAL failed for "
log_error("semid %d: sem_ctl GETVAL failed for "
"cookie 0x%" PRIx32 ": %s",
gen_semid, gen_cookie, strerror(errno));
(void) _udev_notify_sem_destroy(gen_cookie, gen_semid);
goto bad;
}

View File

@@ -832,7 +832,7 @@ a suitable value automatically.
\fB--readonly\fP
.br
Prevent the command from making changes, including activation and
metadata updates. (See --permission r for read only LVs.)
metadata updates.
.
.HP
\fB--rebuild\fP \fIPV\fP

View File

@@ -325,7 +325,7 @@ Repeat once to also suppress any prompts with answer 'no'.
\fB--readonly\fP
.br
Prevent the command from making changes, including activation and
metadata updates. (See --permission r for read only LVs.)
metadata updates.
.
.HP
\fB--segments\fP

View File

@@ -311,7 +311,7 @@ Repeat once to also suppress any prompts with answer 'no'.
\fB--readonly\fP
.br
Prevent the command from making changes, including activation and
metadata updates. (See --permission r for read only LVs.)
metadata updates.
.
.HP
\fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -182,7 +182,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB-t\fP|\fB--test\fP

View File

@@ -41,11 +41,12 @@ block addresses that are mapped to the shared physical block are not
 modified.
 .P
 To use VDO with \fBlvm\fP(8), you must install the standard VDO user-space tools
-\fBvdoformat\fP(8) and kernel module "\fIdm_vdo\fP" (For older kernels <6.9
-the out of tree kernel VDO module "\fIkvdo\fP" is necessary).
+\fBvdoformat\fP(8) and the currently non-standard kernel VDO module
+"\fIkvdo\fP".
 .P
-The kernel module implements fine-grained storage virtualization,
-thin provisioning, block sharing, compression and memory-efficient duplicate
+The "\fIkvdo\fP" module implements fine-grained storage virtualization,
+thin provisioning, block sharing, and compression.
+The "\fIuds\fP" module provides memory-efficient duplicate
 identification. The user-space tools include \fBvdostats\fP(8)
 for extracting statistics from VDO volumes.
 .
@@ -160,6 +161,7 @@ allocation {
 .RS
 vdo_use_compression=1
 vdo_use_deduplication=1
+vdo_use_metadata_hints=1
 vdo_minimum_io_size=4096
 vdo_block_map_cache_size_mb=128
 vdo_block_map_period=16380
@@ -173,6 +175,7 @@ vdo_cpu_threads=2
 vdo_hash_zone_threads=1
 vdo_logical_threads=1
 vdo_physical_threads=1
+vdo_write_policy="auto"
 vdo_max_discard=1
 .RE
 }
@@ -189,7 +192,7 @@ or repeat --vdosettings for each option being set.
 Options are listed in the Example section above, for the full description see
 .BR lvm.conf (5).
 Options can omit 'vdo_' and 'vdo_use_' prefixes and all its underscores.
-So i.e. vdo_use_deduplication=1 and deduplication=1 are equivalent.
+So i.e. vdo_use_metadata_hints=1 and metadatahints=1 are equivalent.
 To change the option for an already existing VDOPoolLV use
 .BR lvchange (8)
 command. However not all option can be changed.
@@ -304,7 +307,6 @@ volume types: linear, stripe, raid and cache with cachepool.
 You can convert existing VDO LV into a thin volume. After this conversion
 you can create a thin snapshot or you can add more thin volumes
 with thin-pool named after original LV name LV_tpool0.
-See \fBlvmthin\fP(7) for more details.
 .P
 .I Example
 .nf
@@ -439,7 +441,6 @@ a dense UDS index uses 17 GiB of storage and a sparse UDS index will use
 .BR lvremove (8),
 .BR lvs (8),
 .P
-.BR lvmthin (7),
 .BR vdoformat (8),
 .BR vdostats (8),
 .P

View File

@@ -321,7 +321,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -200,7 +200,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -315,7 +315,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -319,7 +319,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -217,7 +217,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -777,7 +777,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--refresh\fP

View File

@@ -305,7 +305,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB-S\fP|\fB--select\fP \fIString\fP

View File

@@ -308,7 +308,7 @@ Repeat once to also suppress any prompts with answer 'no'.
 \fB--readonly\fP
 .br
 Prevent the command from making changes, including activation and
-metadata updates. (See --permission r for read only LVs.)
+metadata updates.
 .
 .HP
 \fB--reportformat\fP \fBbasic\fP|\fBjson\fP|\fBjson_std\fP

View File

@@ -24,7 +24,7 @@
 int main(int argc, char *argv[])
 {
-int percent = atoi(argv[1]);
+int pecent = atoi(argv[1]);
 int ret, s;
 ret = ilm_connect(&s);
@@ -35,7 +35,7 @@ int main(int argc, char *argv[])
 exit(-1);
 }
-ret = ilm_inject_fault(s, percent);
+ret = ilm_inject_fault(s, pecent);
 if (ret == 0) {
 printf("ilm_inject_fault (100): SUCCESS\n");
 } else {

View File

@@ -17,7 +17,6 @@ SKIP_WITH_LVMPOLLD=1
 which mkfs.ext4 || skip
 which resize2fs || skip
-which mkswap || skip
 aux prepare_vg 2 100
@@ -300,7 +299,7 @@ dd if=/dev/zero of="$mount_dir/zeros1" bs=1M count=8 oflag=direct
 lvextend --fs resize --fsmode offline -L+10M $vg/$lv
 check lv_field $vg/$lv lv_size "30.00m"
 # fsmode offline leaves fs unmounted
-df | tee dfa
+df -a | tee dfa
 not grep "$mount_dir" dfa
 mount "$DM_DEV_DIR/$vg/$lv" "$mount_dir"
 df --output=size "$mount_dir" |tee df2
@@ -650,32 +649,5 @@ df --output=size "$mount_dir" |tee df7
 not diff df6 df7
 umount "$mount_dir"
 lvremove -f $vg
-######################################
-#
-# lvreduce, lvextend with swap device
-#
-######################################
-lvcreate -n $lv -L 16M $vg
-mkswap /dev/$vg/$lv
-# lvreduce not allowed if LV size < swap size
-not lvreduce --fs checksize -L8m $vg/$lv
-check lv_field $vg/$lv lv_size "16.00m"
-# lvreduce not allowed if LV size < swap size,
-# even with --fs resize, this is not supported
-not lvreduce --fs resize $vg/$lv
-check lv_field $vg/$lv lv_size "16.00m"
-# lvextend allowed if LV size > swap size
-lvextend -L32m $vg/$lv
-check lv_field $vg/$lv lv_size "32.00m"
-# lvreduce allowed if LV size == swap size
-lvreduce -L16m $vg/$lv
-check lv_field $vg/$lv lv_size "16.00m"
 vgremove -ff $vg

View File

@@ -1,42 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (C) 2024 Red Hat, Inc. All rights reserved.
-#
-# This copyrighted material is made available to anyone wishing to use,
-# modify, copy, or redistribute it subject to the terms and conditions
-# of the GNU General Public License v.2.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software Foundation,
-# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-SKIP_WITH_LVMPOLLD=1
-SKIP_WITH_LVMLOCKD=1
-. lib/inittest
-which sfdisk || skip
-aux prepare_devs 1 4
-pvcreate_on_dev_with_part_table() {
-local dev=$1
-local type=$2
-# pvcreate passes on empty partition table
-echo "label:$type" | sfdisk "$dev"
-pvcreate -y "$dev"
-pvremove "$dev"
-# pvcreate fails if there's at least 1 partition
-echo "label:$type" | sfdisk "$dev"
-echo "1MiB 1" | sfdisk "$dev"
-not pvcreate "$dev" 2>err
-grep "device is partitioned" err
-aux wipefs_a "$dev"
-}
-pvcreate_on_dev_with_part_table "$dev1" "dos"
-pvcreate_on_dev_with_part_table "$dev1" "gpt"

View File

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-# Check very large device size (up to 15Exa bytes)
+# Check very large device size (upto 15Exa bytes)
 # this needs 64bit arch
 SKIP_WITH_LVMLOCKD=1

View File

@@ -673,7 +673,7 @@ arg(raidintegritymode_ARG, '\0', "raidintegritymode", string_VAL, 0, 0,
 arg(readonly_ARG, '\0', "readonly", 0, 0, 0,
 "Prevent the command from making changes, including activation and\n"
-"metadata updates. (See --permission r for read only LVs.)\n")
+"metadata updates.\n")
 arg(refresh_ARG, '\0', "refresh", 0, 0, 0,
 "#lvmdevices\n"