mirror of git://sourceware.org/git/lvm2.git synced 2025-11-18 12:23:51 +03:00

Compare commits


2 Commits

Author SHA1 Message Date
Bryn M. Reeves
d4acf8b533 dmstats: separate report and sample clocks
Maintain separate timestamps for the sampling interval and report
waits, and correct the sleep interval for the time spent collecting
and processing stats.
2015-08-08 23:59:06 +01:00
Bryn M. Reeves
dfb4560e24 dmstats: add libdm-stats library and 'dmsetup stats' command
Add the libdm-stats module to libdm. This implements a simple interface
for creating, managing and interrogating I/O statistics regions and
areas on device-mapper devices.

The library interface is documented in libdevmapper.h and provides a
'dm_stats' handle that is used to perform statistics operations and
obtain data. Methods are provided to return basic count values and to
derive time-based metrics when a suitable interval estimate is provided.

The dm_stats handle contains a pointer to a table of one or more
dm_stats_region objects representing the regions registered with the
@stats_create message. These in turn point to a table of one or more
dm_stats_counters objects containing the counter sets for each defined
area within the region:

  dm_stats->dm_stats_region[nr_regions]->dm_stats_counters[nr_areas]

This structure is private to the library and may change in future
versions: all users should make use of the public interface and treat
the dm_stats type as an opaque handle.

Regions and counter sets are stored in order of increasing region_id.

Public methods are provided to create and destroy handles and to
list, create, and destroy statistics regions, as well as to obtain and
parse the actual counter data.

Linux iostat-style derived metrics are provided to return
higher-level performance data.

This commit also adds arguments, report types, and a 'stats' command to
dmsetup and implements 'clear', 'create', 'delete', 'list', 'print', and
'report' sub-commands.

The dmsetup _display_info_cols() function is adapted to allow reporting
of statistics with the DR_STATS report type: since a single object
(device) may have many rows of statistics to report, the call to
dm_report_object() is placed inside a loop over each statistics area
present (for non-stats reports, or for devices with a single region
spanning the entire device, the body of the loop executes exactly
once).
2015-08-08 23:39:22 +01:00
46 changed files with 605 additions and 1441 deletions

View File

@@ -1 +1 @@
2.02.128(2)-git (2015-08-10)
2.02.127(2)-git (2015-07-24)

View File

@@ -1 +1 @@
1.02.105-git (2015-08-10)
1.02.104-git (2015-07-24)

View File

@@ -1,22 +1,8 @@
Version 2.02.128 -
===================================
Check for valid cache mode in validation of cache segment.
Enhance internal API cache_set_mode() and cache_set_policy().
Enhance toollib's get_cache_params().
Runtime detect presence of cache smq policy.
Add demo cache-mq and cache-smq profiles.
Add cmd profilable allocation/cache_policy,cache_settings,cache_mode.
Require cache_check 0.5.4 for use of --clear-needs-check-flag.
Fix lvmetad udev rules to not override SYSTEMD_WANTS, add the service instead.
Version 2.02.127 - 10th August 2015
===================================
Version 2.02.127 -
=================================
Do not init filters, locking, lvmetad, lvmpolld if command doesn't use it.
Order fields in struct cmd_context more logically.
Add lock_type to lvmcache VG summary and info structs.
Recognise vg/lv name format in dmsetup.
Fix regression in cache causing some PVs to bypass filters (2.02.105).
Make configure --enable-realtime the default now.
Update .gitignore and configure.in files to reflect usage of current tree.
Version 2.02.126 - 24th July 2015
=================================

View File

@@ -1,32 +1,22 @@
Version 1.02.105 -
===================================
Add more arg validation for dm_tree_node_add_cache_target().
Add --alldevices switch to replace use of --force for stats create / delete.
Version 1.02.104 - 10th August 2015
===================================
Version 1.02.104 -
=================================
Add dmstats.8 man page
Add dmstats --segments switch to create one region per device segment.
Add dmstats --regionid, --allregions to specify a single / all stats regions.
Add dmstats --allprograms for stats commands that filter by program ID.
Add dmstats --auxdata and --programid args to specify aux data and program ID.
Add report stats sub-command to provide repeating stats reports.
Add clear, delete, list, and print stats sub-commands.
Add create stats sub-command and --start, --length, --areas and --areasize.
Recognize 'dmstats' as an alias for 'dmsetup stats' when run with this name.
Add a 'stats' command to dmsetup to configure, manage and report stats data.
Add statistics fields to dmsetup -o.
Add --regionid, --allregions to specify a single stats region or all regions.
Add --allprograms for stats commands that filter by program ID.
Add --auxdata and --programid arguments to set stats aux data and program ID.
Add statistics fields to -o <field>
Add libdm-stats library to allow management of device-mapper statistics.
Add --nosuffix to suppress dmsetup unit suffixes in report output.
Add --units to control dmsetup report field output units.
Add --units to control report field output units.
Add support to redisplay column headings for repeating column reports.
Fix report header and row resource leaks.
Report timestamps of ioctls with dmsetup -vvv.
Recognize report field name variants without any underscores too.
Add dmsetup --interval and --count to repeat reports at specified intervals.
Add dm_timestamp functions to libdevmapper.
Recognise vg/lv name format in dmsetup.
Move size display code to libdevmapper as dm_size_to_string.
Version 1.02.103 - 24th July 2015
=================================

View File

@@ -1,5 +1,5 @@
#
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -20,11 +20,7 @@ CONFDEST=lvm.conf
CONFLOCAL=lvmlocal.conf
PROFILE_TEMPLATES=command_profile_template.profile metadata_profile_template.profile
PROFILES=$(PROFILE_TEMPLATES) \
$(srcdir)/cache-mq.profile \
$(srcdir)/cache-smq.profile \
$(srcdir)/thin-generic.profile \
$(srcdir)/thin-performance.profile
PROFILES=$(PROFILE_TEMPLATES) $(srcdir)/thin-generic.profile $(srcdir)/thin-performance.profile
include $(top_builddir)/make.tmpl

View File

@@ -1,20 +0,0 @@
# Demo configuration 'mq' cache policy
#
# Note: This policy has been deprecated in favor of the smq policy
# keyword "default" means, setting is left with kernel defaults.
#
allocation {
cache_pool_chunk_size = 64
cache_mode = "writethrough"
cache_policy = "mq"
cache_settings {
mq {
sequential_threshold = "default" # #nr_sequential_ios
random_threshold = "default" # #nr_random_ios
read_promote_adjustment = "default"
write_promote_adjustment = "default"
discard_promote_adjustment = "default"
}
}
}

View File

@@ -1,14 +0,0 @@
# Demo configuration 'smq' cache policy
#
# The stochastic multi-queue (smq) policy addresses some of the problems
# with the multiqueue (mq) policy and uses less memory.
#
allocation {
cache_pool_chunk_size = 64
cache_mode = "writethrough"
cache_policy = "smq"
cache_settings {
# currently no settins for "smq" policy
}
}

configure (vendored)
View File

@@ -5734,7 +5734,7 @@ fi
done
for ac_header in termios.h sys/statvfs.h sys/timerfd.h
for ac_header in termios.h sys/statvfs.h
do :
as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh`
ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default"
@@ -8813,27 +8813,20 @@ $as_echo "$as_me: WARNING: cache_check not found in path $PATH" >&2;}
fi
fi
if test "$CACHE_CHECK_NEEDS_CHECK" = yes; then
$CACHE_CHECK_CMD -V 2>/dev/null >conftest.tmp
read -r CACHE_CHECK_VSN < conftest.tmp
IFS=. read -r CACHE_CHECK_VSN_MAJOR CACHE_CHECK_VSN_MINOR CACHE_CHECK_VSN_PATCH < conftest.tmp
rm -f conftest.tmp
CACHE_CHECK_VSN=`"$CACHE_CHECK_CMD" -V 2>/dev/null`
CACHE_CHECK_VSN_MAJOR=`echo "$CACHE_CHECK_VSN" | $AWK -F '.' '{print $1}'`
CACHE_CHECK_VSN_MINOR=`echo "$CACHE_CHECK_VSN" | $AWK -F '.' '{print $2}'`
# Require version >= 0.5.4 for --clear-needs-check-flag
if test -z "$CACHE_CHECK_VSN_MAJOR" \
|| test -z "$CACHE_CHECK_VSN_MINOR" \
|| test -z "$CACHE_CHECK_VSN_PATCH"; then
if test -z "$CACHE_CHECK_VSN_MAJOR" -o -z "$CACHE_CHECK_VSN_MINOR"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $CACHE_CHECK_CMD: Bad version \"$CACHE_CHECK_VSN\" found" >&5
$as_echo "$as_me: WARNING: $CACHE_CHECK_CMD: Bad version \"$CACHE_CHECK_VSN\" found" >&2;}
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
elif test "$CACHE_CHECK_VSN_MAJOR" -eq 0 ; then
if test "$CACHE_CHECK_VSN_MINOR" -lt 5 \
|| test "$CACHE_CHECK_VSN_MINOR" -eq 5 -a "$CACHE_CHECK_VSN_PATCH" -lt 4; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $CACHE_CHECK_CMD: Old version \"$CACHE_CHECK_VSN\" found" >&5
elif test "$CACHE_CHECK_VSN_MAJOR" -eq 0 -a "$CACHE_CHECK_VSN_MINOR" -lt 5; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $CACHE_CHECK_CMD: Old version \"$CACHE_CHECK_VSN\" found" >&5
$as_echo "$as_me: WARNING: $CACHE_CHECK_CMD: Old version \"$CACHE_CHECK_VSN\" found" >&2;}
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
fi
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
fi
fi
# Empty means a config way to ignore cache dumping

View File

@@ -103,7 +103,7 @@ AC_CHECK_HEADERS([assert.h ctype.h dirent.h errno.h fcntl.h float.h \
sys/time.h sys/types.h sys/utsname.h sys/wait.h time.h \
unistd.h], , [AC_MSG_ERROR(bailing out)])
AC_CHECK_HEADERS(termios.h sys/statvfs.h sys/timerfd.h)
AC_CHECK_HEADERS(termios.h sys/statvfs.h)
case "$host_os" in
linux*)
@@ -584,25 +584,18 @@ case "$CACHE" in
fi
fi
if test "$CACHE_CHECK_NEEDS_CHECK" = yes; then
$CACHE_CHECK_CMD -V 2>/dev/null >conftest.tmp
read -r CACHE_CHECK_VSN < conftest.tmp
IFS=. read -r CACHE_CHECK_VSN_MAJOR CACHE_CHECK_VSN_MINOR CACHE_CHECK_VSN_PATCH < conftest.tmp
rm -f conftest.tmp
CACHE_CHECK_VSN=`"$CACHE_CHECK_CMD" -V 2>/dev/null`
CACHE_CHECK_VSN_MAJOR=`echo "$CACHE_CHECK_VSN" | $AWK -F '.' '{print $1}'`
CACHE_CHECK_VSN_MINOR=`echo "$CACHE_CHECK_VSN" | $AWK -F '.' '{print $2}'`
# Require version >= 0.5.4 for --clear-needs-check-flag
if test -z "$CACHE_CHECK_VSN_MAJOR" \
|| test -z "$CACHE_CHECK_VSN_MINOR" \
|| test -z "$CACHE_CHECK_VSN_PATCH"; then
if test -z "$CACHE_CHECK_VSN_MAJOR" -o -z "$CACHE_CHECK_VSN_MINOR"; then
AC_MSG_WARN([$CACHE_CHECK_CMD: Bad version "$CACHE_CHECK_VSN" found])
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
elif test "$CACHE_CHECK_VSN_MAJOR" -eq 0 ; then
if test "$CACHE_CHECK_VSN_MINOR" -lt 5 \
|| test "$CACHE_CHECK_VSN_MINOR" -eq 5 -a "$CACHE_CHECK_VSN_PATCH" -lt 4; then
AC_MSG_WARN([$CACHE_CHECK_CMD: Old version "$CACHE_CHECK_VSN" found])
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
fi
elif test "$CACHE_CHECK_VSN_MAJOR" -eq 0 -a "$CACHE_CHECK_VSN_MINOR" -lt 5; then
AC_MSG_WARN([$CACHE_CHECK_CMD: Old version "$CACHE_CHECK_VSN" found])
CACHE_CHECK_VERSION_WARN=y
CACHE_CHECK_NEEDS_CHECK=no
fi
fi
# Empty means a config way to ignore cache dumping

View File

@@ -1022,10 +1022,7 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
uint32_t r_version = 0;
int rv;
if (r->type == LD_RT_LV)
log_debug("S %s R %s res_lock mode %s (%s)", ls->name, r->name, mode_str(act->mode), act->lv_name);
else
log_debug("S %s R %s res_lock mode %s", ls->name, r->name, mode_str(act->mode));
log_debug("S %s R %s res_lock mode %s", ls->name, r->name, mode_str(act->mode));
if (r->mode == LD_LK_SH && act->mode == LD_LK_SH)
goto add_lk;
@@ -1287,12 +1284,8 @@ static int res_unlock(struct lockspace *ls, struct resource *r,
return -ENOENT;
do_unlock:
if (act->op == LD_OP_CLOSE)
log_debug("S %s R %s res_unlock from close", ls->name, r->name);
else if (r->type == LD_RT_LV)
log_debug("S %s R %s res_unlock (%s)", ls->name, r->name, act->lv_name);
else
log_debug("S %s R %s res_unlock", ls->name, r->name);
log_debug("S %s R %s res_unlock %s", ls->name, r->name,
(act->op == LD_OP_CLOSE) ? "from close" : "");
/* send unlock to lm when last sh lock is unlocked */
if (lk->mode == LD_LK_SH) {
@@ -1988,15 +1981,11 @@ static int other_sanlock_vgs_exist(struct lockspace *ls_rem)
struct lockspace *ls;
list_for_each_entry(ls, &lockspaces_inactive, list) {
if (ls->lm_type != LD_LM_SANLOCK)
continue;
log_debug("other sanlock vg exists inactive %s", ls->name);
return 1;
}
list_for_each_entry(ls, &lockspaces, list) {
if (ls->lm_type != LD_LM_SANLOCK)
continue;
if (!strcmp(ls->name, ls_rem->name))
continue;
log_debug("other sanlock vg exists %s", ls->name);

View File

@@ -25,11 +25,6 @@
#include "lv_alloc.h"
#include "defaults.h"
static const char _cache_module[] = "cache";
/* TODO: using static field here, maybe should be a part of segment_type */
static unsigned _feature_mask;
#define SEG_LOG_ERROR(t, p...) \
log_error(t " segment %s of logical volume %s.", ## p, \
dm_config_parent_name(sn), seg->lv->name), 0;
@@ -71,15 +66,23 @@ static int _cache_pool_text_import(struct lv_segment *seg,
if (dm_config_has_node(sn, "cache_mode")) {
if (!(str = dm_config_find_str(sn, "cache_mode", NULL)))
return SEG_LOG_ERROR("cache_mode must be a string in");
if (!cache_set_mode(seg, str))
if (!set_cache_pool_feature(&seg->feature_flags, str))
return SEG_LOG_ERROR("Unknown cache_mode in");
}
} else
/* When missed in metadata, it's an old stuff - use writethrough */
seg->feature_flags |= DM_CACHE_FEATURE_WRITETHROUGH;
if (dm_config_has_node(sn, "policy")) {
if (!(str = dm_config_find_str(sn, "policy", NULL)))
return SEG_LOG_ERROR("policy must be a string in");
if (!(seg->policy_name = dm_pool_strdup(mem, str)))
return SEG_LOG_ERROR("Failed to duplicate policy in");
} else {
/* Cannot use 'just' default, so pick one */
seg->policy_name = DEFAULT_CACHE_POOL_POLICY; /* FIXME make configurable */
/* FIXME maybe here should be always 'mq' */
log_warn("WARNING: cache_policy undefined, using default \"%s\" policy.",
seg->policy_name);
}
/*
@@ -100,9 +103,6 @@ static int _cache_pool_text_import(struct lv_segment *seg,
* If the policy is not present, default policy is used.
*/
if ((sn = dm_config_find_node(sn, "policy_settings"))) {
if (!seg->policy_name)
return SEG_LOG_ERROR("policy_settings must have a policy_name in");
if (sn->v)
return SEG_LOG_ERROR("policy_settings must be a section in");
@@ -131,33 +131,28 @@ static int _cache_pool_text_export(const struct lv_segment *seg,
{
const char *cache_mode;
if (!(cache_mode = get_cache_pool_cachemode_name(seg)))
return_0;
if (!seg->policy_name) {
log_error(INTERNAL_ERROR "Policy name for %s is not defined.",
display_lvname(seg->lv));
return 0;
}
outf(f, "data = \"%s\"", seg_lv(seg, 0)->name);
outf(f, "metadata = \"%s\"", seg->metadata_lv->name);
outf(f, "chunk_size = %" PRIu32, seg->chunk_size);
outf(f, "cache_mode = \"%s\"", cache_mode);
outf(f, "policy = \"%s\"", seg->policy_name);
/*
* Cache pool used by a cache LV holds data. Not ideal,
* but not worth to break backward compatibility, by shifting
* content to cache segment
*/
if (cache_mode_is_set(seg)) {
if (!(cache_mode = get_cache_mode_name(seg)))
return_0;
outf(f, "cache_mode = \"%s\"", cache_mode);
}
if (seg->policy_name) {
outf(f, "policy = \"%s\"", seg->policy_name);
if (seg->policy_settings) {
if (strcmp(seg->policy_settings->key, "policy_settings")) {
log_error(INTERNAL_ERROR "Incorrect policy_settings tree, %s.",
seg->policy_settings->key);
return 0;
}
if (seg->policy_settings->child)
out_config_node(f, seg->policy_settings);
if (seg->policy_settings) {
if (strcmp(seg->policy_settings->key, "policy_settings")) {
log_error(INTERNAL_ERROR "Incorrect policy_settings tree, %s.",
seg->policy_settings->key);
return 0;
}
out_config_node(f, seg->policy_settings);
}
return 1;
@@ -170,29 +165,12 @@ static void _destroy(struct segment_type *segtype)
#ifdef DEVMAPPER_SUPPORT
static int _target_present(struct cmd_context *cmd,
const struct lv_segment *seg __attribute__((unused)),
unsigned *attributes __attribute__((unused)))
const struct lv_segment *seg __attribute__((unused)),
unsigned *attributes __attribute__((unused)))
{
/* List of features with their kernel target version */
static const struct feature {
uint32_t maj;
uint32_t min;
unsigned cache_feature;
const char feature[12];
const char module[12]; /* check dm-%s */
} _features[] = {
{ 1, 3, CACHE_FEATURE_POLICY_MQ, "policy_mq", "cache-mq" },
{ 1, 8, CACHE_FEATURE_POLICY_SMQ, "policy_smq", "cache-smq" },
};
static const char _lvmconf[] = "global/cache_disabled_features";
static unsigned _attrs = 0;
uint32_t maj, min, patchlevel;
static int _cache_checked = 0;
static int _cache_present = 0;
uint32_t maj, min, patchlevel;
unsigned i;
const struct dm_config_node *cn;
const struct dm_config_value *cv;
const char *str;
if (!_cache_checked) {
_cache_present = target_present(cmd, "cache", 1);
@@ -206,53 +184,11 @@ static int _target_present(struct cmd_context *cmd,
if ((maj < 1) ||
((maj == 1) && (min < 3))) {
_cache_present = 0;
log_error("The cache kernel module is version %u.%u.%u. "
"Version 1.3.0+ is required.",
log_error("The cache kernel module is version %u.%u.%u."
" Version 1.3.0+ is required.",
maj, min, patchlevel);
return 0;
}
for (i = 0; i < DM_ARRAY_SIZE(_features); ++i) {
if (((maj > _features[i].maj) ||
(maj == _features[i].maj && min >= _features[i].min)) &&
(!_features[i].module[0] || module_present(cmd, _features[i].module)))
_attrs |= _features[i].cache_feature;
else
log_very_verbose("Target %s does not support %s.",
_cache_module, _features[i].feature);
}
}
if (attributes) {
if (!_feature_mask) {
/* Support runtime lvm.conf changes, N.B. avoid 32 feature */
if ((cn = find_config_tree_array(cmd, global_cache_disabled_features_CFG, NULL))) {
for (cv = cn->v; cv; cv = cv->next) {
if (cv->type != DM_CFG_STRING) {
log_error("Ignoring invalid string in config file %s.",
_lvmconf);
continue;
}
str = cv->v.str;
if (!*str)
continue;
for (i = 0; i < DM_ARRAY_SIZE(_features); ++i)
if (strcasecmp(str, _features[i].feature) == 0)
_feature_mask |= _features[i].cache_feature;
}
}
_feature_mask = ~_feature_mask;
for (i = 0; i < DM_ARRAY_SIZE(_features); ++i)
if ((_attrs & _features[i].cache_feature) &&
!(_feature_mask & _features[i].cache_feature))
log_very_verbose("Target %s %s support disabled by %s",
_cache_module, _features[i].feature, _lvmconf);
}
*attributes = _attrs & _feature_mask;
}
return _cache_present;
@@ -378,9 +314,7 @@ static int _cache_add_target_line(struct dev_manager *dm,
metadata_uuid,
data_uuid,
origin_uuid,
seg->cleaner_policy ? "cleaner" :
/* undefined policy name -> likely an old "mq" */
cache_pool_seg->policy_name ? : "mq",
seg->cleaner_policy ? "cleaner" : cache_pool_seg->policy_name,
seg->cleaner_policy ? NULL : cache_pool_seg->policy_settings,
cache_pool_seg->chunk_size))
return_0;
@@ -442,8 +376,5 @@ int init_cache_segtypes(struct cmd_context *cmd,
return_0;
log_very_verbose("Initialised segtype: %s", segtype->name);
/* Reset mask for recalc */
_feature_mask = 0;
return 1;
}

View File

@@ -133,7 +133,6 @@ struct cmd_context {
unsigned lockd_gl_disable:1;
unsigned lockd_vg_disable:1;
unsigned lockd_lv_disable:1;
unsigned lockd_gl_removed:1;
unsigned lockd_vg_default_sh:1;
unsigned lockd_vg_enforce_sh:1;

View File

@@ -23,7 +23,6 @@
#include "toolcontext.h"
#include "lvm-file.h"
#include "memlock.h"
#include "segtype.h"
#include <sys/stat.h>
#include <sys/mman.h>
@@ -2416,27 +2415,3 @@ int get_default_allocation_cache_pool_chunk_size_CFG(struct cmd_context *cmd, st
{
return DEFAULT_CACHE_POOL_CHUNK_SIZE * 2;
}
const char *get_default_allocation_cache_policy_CFG(struct cmd_context *cmd, struct profile *profile)
{
const struct segment_type *segtype = get_segtype_from_string(cmd, "cache");
unsigned attr = ~0;
if (!segtype ||
!segtype->ops->target_present ||
!segtype->ops->target_present(cmd, NULL, &attr)) {
log_warn("WARNING: Cannot detect default cache policy, using \""
DEFAULT_CACHE_POLICY "\".");
return DEFAULT_CACHE_POLICY;
}
if (attr & CACHE_FEATURE_POLICY_SMQ)
return "smq";
if (attr & CACHE_FEATURE_POLICY_MQ)
return "mq";
log_warn("WARNING: Default cache policy not available.");
return NULL;
}

View File

@@ -50,7 +50,7 @@ struct profile_params {
struct dm_list profiles; /* list of profiles which are loaded already and which are ready for use */
};
#define CFG_PATH_MAX_LEN 128
#define CFG_PATH_MAX_LEN 64
/*
* Structures used for definition of a configuration tree.
@@ -296,7 +296,5 @@ int get_default_allocation_thin_pool_chunk_size_CFG(struct cmd_context *cmd, str
#define get_default_unconfigured_allocation_thin_pool_chunk_size_CFG NULL
int get_default_allocation_cache_pool_chunk_size_CFG(struct cmd_context *cmd, struct profile *profile);
#define get_default_unconfigured_allocation_cache_pool_chunk_size_CFG NULL
const char *get_default_allocation_cache_policy_CFG(struct cmd_context *cmd, struct profile *profile);
#define get_default_unconfigured_allocation_cache_policy_CFG NULL
#endif

View File

@@ -122,7 +122,7 @@ cfg_section(devices_CFG_SECTION, "devices", root_CFG_SECTION, 0, vsn(1, 0, 0), 0
"How LVM uses block devices.\n")
cfg_section(allocation_CFG_SECTION, "allocation", root_CFG_SECTION, CFG_PROFILABLE, vsn(2, 2, 77), 0, NULL,
"How LVM selects space and applies properties to LVs.\n")
"How LVM selects free space for Logical Volumes.\n")
cfg_section(log_CFG_SECTION, "log", root_CFG_SECTION, 0, vsn(1, 0, 0), 0, NULL,
"How LVM log information is reported.\n")
@@ -313,7 +313,7 @@ cfg(devices_md_chunk_alignment_CFG, "md_chunk_alignment", devices_CFG_SECTION, 0
cfg(devices_default_data_alignment_CFG, "default_data_alignment", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_DATA_ALIGNMENT, vsn(2, 2, 75), NULL, 0, NULL,
"Default alignment of the start of a PV data area in MB.\n"
"If set to 0, a value of 64KiB will be used.\n"
"If set to 0, a value of 64KB will be used.\n"
"Set to 1 for 1MiB, 2 for 2MiB, etc.\n")
cfg(devices_data_alignment_detection_CFG, "data_alignment_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_DATA_ALIGNMENT_DETECTION, vsn(2, 2, 51), NULL, 0, NULL,
@@ -329,7 +329,7 @@ cfg(devices_data_alignment_detection_CFG, "data_alignment_detection", devices_CF
"This setting takes precedence over md_chunk_alignment.\n")
cfg(devices_data_alignment_CFG, "data_alignment", devices_CFG_SECTION, 0, CFG_TYPE_INT, 0, vsn(2, 2, 45), NULL, 0, NULL,
"Alignment of the start of a PV data area in KiB.\n"
"Alignment of the start of a PV data area in KB.\n"
"If a PV is placed directly on an md device and\n"
"md_chunk_alignment or data_alignment_detection are enabled,\n"
"then this setting is ignored. Otherwise, md_chunk_alignment\n"
@@ -340,10 +340,10 @@ cfg(devices_data_alignment_offset_detection_CFG, "data_alignment_offset_detectio
"Detect PV data alignment offset based on sysfs device information.\n"
"The start of a PV aligned data area will be shifted by the\n"
"alignment_offset exposed in sysfs. This offset is often 0, but\n"
"may be non-zero. Certain 4KiB sector drives that compensate for\n"
"may be non-zero. Certain 4KB sector drives that compensate for\n"
"windows partitioning will have an alignment_offset of 3584 bytes\n"
"(sector 7 is the lowest aligned logical block, the 4KiB sectors start\n"
"at LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).\n"
"(sector 7 is the lowest aligned logical block, the 4KB sectors start\n"
"at LBA -1, and consequently sector 63 is aligned on a 4KB boundary).\n"
"pvcreate --dataalignmentoffset will skip this detection.\n")
cfg(devices_ignore_suspended_devices_CFG, "ignore_suspended_devices", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_IGNORE_SUSPENDED_DEVICES, vsn(1, 2, 19), NULL, 0, NULL,
@@ -383,9 +383,9 @@ cfg(devices_require_restorefile_with_uuid_CFG, "require_restorefile_with_uuid",
"Allow use of pvcreate --uuid without requiring --restorefile.\n")
cfg(devices_pv_min_size_CFG, "pv_min_size", devices_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_PV_MIN_SIZE_KB, vsn(2, 2, 85), NULL, 0, NULL,
"Minimum size in KiB of block devices which can be used as PVs.\n"
"Minimum size (in KB) of block devices which can be used as PVs.\n"
"In a clustered environment all nodes must use the same value.\n"
"Any value smaller than 512KiB is ignored. The previous built-in\n"
"Any value smaller than 512KB is ignored. The previous built-in\n"
"value was 512.\n")
cfg(devices_issue_discards_CFG, "issue_discards", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ISSUE_DISCARDS, vsn(2, 2, 85), NULL, 0, NULL,
@@ -439,7 +439,7 @@ cfg(allocation_wipe_signatures_when_zeroing_new_lvs_CFG, "wipe_signatures_when_z
"Look for and erase any signatures while zeroing a new LV.\n"
"Zeroing is controlled by the -Z/--zero option, and if not\n"
"specified, zeroing is used by default if possible.\n"
"Zeroing simply overwrites the first 4KiB of a new LV\n"
"Zeroing simply overwrites the first 4 KiB of a new LV\n"
"with zeroes and does no signature detection or wiping.\n"
"Signature wiping goes beyond zeroing and detects exact\n"
"types and positions of signatures within the whole LV.\n"
@@ -462,34 +462,16 @@ cfg(allocation_mirror_logs_require_separate_pvs_CFG, "mirror_logs_require_separa
cfg(allocation_cache_pool_metadata_require_separate_pvs_CFG, "cache_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 106), NULL, 0, NULL,
"Cache pool metadata and data will always use different PVs.\n")
cfg(allocation_cache_pool_cachemode_CFG, "cache_pool_cachemode", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_CACHE_MODE, vsn(2, 2, 113), NULL, vsn(2, 2, 128),
"This has been replaced by the allocation/cache_mode setting.\n",
"Cache mode.\n")
cfg(allocation_cache_mode_CFG, "cache_mode", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_CACHE_MODE, vsn(2, 2, 128), NULL, 0, NULL,
"The default cache mode used for new cache.\n"
cfg(allocation_cache_pool_cachemode_CFG, "cache_pool_cachemode", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_CACHE_POOL_CACHEMODE, vsn(2, 2, 113), NULL, 0, NULL,
"The default cache mode used for new cache pools.\n"
"Possible options are: writethrough, writeback.\n"
"writethrough - Data blocks are immediately written from\n"
"the cache to disk.\n"
"writeback - Data blocks are written from the cache back\n"
"to disk after some delay to improve performance.\n")
cfg_runtime(allocation_cache_policy_CFG, "cache_policy", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, vsn(2, 2, 127), 0, NULL,
"The default cache policy used for new cache volume.\n"
"Generally available policies are: mq, smq.\n"
"mq - Multiqueue policy with 88 bytes per block\n"
"smq - Stochastic multique with 25 bytes per block (kernel >= 4.2).\n")
cfg_section(allocation_cache_settings_CFG_SECTION, "cache_settings", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_DEFAULT_COMMENTED, vsn(2, 2, 127), 0, NULL,
"Individual settings for policies.\n"
"See the help for individual policies for more info.\n")
cfg_section(policy_settings_CFG_SUBSECTION, "policy_settings", allocation_cache_settings_CFG_SECTION, CFG_NAME_VARIABLE | CFG_SECTION_NO_CHECK | CFG_PROFILABLE | CFG_DEFAULT_COMMENTED, vsn(2, 2, 127), 0, NULL,
"Replace this subsection name with a policy name.\n"
"Multiple subsections for different policies can be created.\n")
cfg_runtime(allocation_cache_pool_chunk_size_CFG, "cache_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 106), 0, NULL,
"The minimal chunk size in KiB for cache pool volumes.\n"
cfg_runtime(allocation_cache_pool_chunk_size_CFG, "cache_pool_chunk_size", allocation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 106), 0, NULL,
"The minimal chunk size (in kiB) for cache pool volumes.\n"
"Using a chunk_size that is too large can result in wasteful\n"
"use of the cache, where small reads and writes can cause\n"
"large sections of an LV to be mapped into the cache. However,\n"
@@ -497,8 +479,8 @@ cfg_runtime(allocation_cache_pool_chunk_size_CFG, "cache_pool_chunk_size", alloc
"overhead trying to manage the numerous chunks that become mapped\n"
"into the cache. The former is more of a problem than the latter\n"
"in most cases, so we default to a value that is on the smaller\n"
"end of the spectrum. Supported values range from 32KiB to\n"
"1GiB in multiples of 32.\n")
"end of the spectrum. Supported values range from 32(kiB) to\n"
"1048576 in multiples of 32.\n")
cfg(allocation_thin_pool_metadata_require_separate_pvs_CFG, "thin_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 89), NULL, 0, NULL,
"Thin pool metdata and data will always use different PVs.\n")
@@ -524,16 +506,16 @@ cfg(allocation_thin_pool_chunk_size_policy_CFG, "thin_pool_chunk_size_policy", a
"The chunk size is always at least 512KiB.\n")
cfg_runtime(allocation_thin_pool_chunk_size_CFG, "thin_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 99), 0, NULL,
"The minimal chunk size in KiB for thin pool volumes.\n"
"The minimal chunk size (in KB) for thin pool volumes.\n"
"Larger chunk sizes may improve performance for plain\n"
"thin volumes, however using them for snapshot volumes\n"
"is less efficient, as it consumes more space and takes\n"
"extra time for copying. When unset, lvm tries to estimate\n"
"chunk size starting from 64KiB. Supported values are in\n"
"the range 64KiB to 1GiB.\n")
"chunk size starting from 64KB. Supported values are in\n"
"the range 64 to 1048576.\n")
cfg(allocation_physical_extent_size_CFG, "physical_extent_size", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_EXTENT_SIZE, vsn(2, 2, 112), NULL, 0, NULL,
"Default physical extent size in KiB to use for new VGs.\n")
"Default physical extent size to use for new VGs (in KB).\n")
cfg(log_verbose_CFG, "verbose", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_VERBOSE, vsn(1, 0, 0), NULL, 0, NULL,
"Controls the messages sent to stdout or stderr.\n")
@@ -810,11 +792,11 @@ cfg(global_sparse_segtype_default_CFG, "sparse_segtype_default", global_CFG_SECT
"The '--type snapshot|thin' option overrides this setting.\n")
cfg(global_lvdisplay_shows_full_device_path_CFG, "lvdisplay_shows_full_device_path", global_CFG_SECTION, CFG_PROFILABLE | CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_LVDISPLAY_SHOWS_FULL_DEVICE_PATH, vsn(2, 2, 89), NULL, 0, NULL,
"Enable this to reinstate the previous lvdisplay name format.\n"
"The default format for displaying LV names in lvdisplay was changed\n"
"in version 2.02.89 to show the LV name and path separately.\n"
"Previously this was always shown as /dev/vgname/lvname even when that\n"
"was never a valid path in the /dev filesystem.\n")
"was never a valid path in the /dev filesystem.\n"
"Enable this option to reinstate the previous format.\n")
cfg(global_use_lvmetad_CFG, "use_lvmetad", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_LVMETAD, vsn(2, 2, 93), "@DEFAULT_USE_LVMETAD@", 0, NULL,
"Use lvmetad to cache metadata and reduce disk scanning.\n"
@@ -909,14 +891,6 @@ cfg_array(global_thin_disabled_features_CFG, "thin_disabled_features", global_CF
"Example:\n"
"thin_disabled_features = [ \"discards\", \"block_size\" ]\n")
cfg_array(global_cache_disabled_features_CFG, "cache_disabled_features", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 2, 126), NULL, 0, NULL,
"Features to not use in the cache driver.\n"
"This can be helpful for testing, or to avoid\n"
"using a feature that is causing problems.\n"
"Features: policy_mq, policy_smq.\n"
"Example:\n"
"cache_disabled_features = [ \"policy_smq\" ]\n")
cfg(global_cache_check_executable_CFG, "cache_check_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, CACHE_CHECK_CMD, vsn(2, 2, 108), "@CACHE_CHECK_CMD@", 0, NULL,
"The full path to the cache_check command.\n"
"LVM uses this command to check that a cache metadata\n"
@@ -1039,11 +1013,11 @@ cfg(activation_use_linear_target_CFG, "use_linear_target", activation_CFG_SECTIO
"that only handles a single stripe.\n")
cfg(activation_reserved_stack_CFG, "reserved_stack", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RESERVED_STACK, vsn(1, 0, 0), NULL, 0, NULL,
"Stack size in KiB to reserve for use while devices are suspended.\n"
"Stack size in KB to reserve for use while devices are suspended.\n"
"Insufficent reserve risks I/O deadlock during device suspension.\n")
cfg(activation_reserved_memory_CFG, "reserved_memory", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RESERVED_MEMORY, vsn(1, 0, 0), NULL, 0, NULL,
"Memory size in KiB to reserve for use while devices are suspended.\n"
"Memory size in KB to reserve for use while devices are suspended.\n"
"Insufficent reserve risks I/O deadlock during device suspension.\n")
cfg(activation_process_priority_CFG, "process_priority", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_PROCESS_PRIORITY, vsn(1, 0, 0), NULL, 0, NULL,
@@ -1110,7 +1084,7 @@ cfg_array(activation_read_only_volume_list_CFG, "read_only_volume_list", activat
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
"This has been replaced by the activation/raid_region_size setting.\n",
"Size in KiB of each copy operation when mirroring.\n")
"Size (in KB) of each copy operation when mirroring.\n")
cfg(activation_raid_region_size_CFG, "raid_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(2, 2, 99), NULL, 0, NULL,
"Size in KiB of each raid or mirror synchronization region.\n"
@@ -1263,7 +1237,7 @@ cfg(activation_monitoring_CFG, "monitoring", activation_CFG_SECTION, 0, CFG_TYPE
"The '--ignoremonitoring' option overrides this setting.\n")
cfg(activation_polling_interval_CFG, "polling_interval", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_INTERVAL, vsn(2, 2, 63), NULL, 0, NULL,
"Check pvmove or lvconvert progress at this interval (seconds).\n"
"Check pvmove or lvconvert progress at this interval (seconds)\n"
"When pvmove or lvconvert must wait for the kernel to finish\n"
"synchronising or merging data, they check and report progress\n"
"at intervals of this number of seconds.\n"


@@ -117,8 +117,8 @@
#define DEFAULT_CACHE_POOL_CHUNK_SIZE 64 /* KB */
#define DEFAULT_CACHE_POOL_MIN_METADATA_SIZE 2048 /* KB */
#define DEFAULT_CACHE_POOL_MAX_METADATA_SIZE (16 * 1024 * 1024) /* KB */
#define DEFAULT_CACHE_POLICY "mq"
#define DEFAULT_CACHE_MODE "writethrough"
#define DEFAULT_CACHE_POOL_CACHEMODE "writethrough"
#define DEFAULT_CACHE_POOL_POLICY "mq"
#define DEFAULT_UMASK 0077


@@ -739,19 +739,6 @@ static int _free_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg)
if (!_lvmlockd_connected)
return 0;
/*
* vgremove originally held the global lock, but lost it because the
* vgremove command is removing multiple VGs, and removed the VG
* holding the global lock before attempting to remove this VG.
* To avoid this situation, the user should remove the VG holding
* the global lock in a command by itself, or as the last arg in a
* vgremove command that removes multiple VGs.
*/
if (cmd->lockd_gl_removed) {
log_error("Global lock failed: global lock was lost by removing a previous VG.");
return 0;
}
if (!vg->lock_args || !strlen(vg->lock_args)) {
/* Shouldn't happen in general, but maybe in some error cases? */
log_debug("_free_vg_sanlock %s no lock_args", vg->name);
@@ -786,21 +773,8 @@ static int _free_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg)
goto out;
}
/*
* If the global lock has been removed by removing this VG, then:
*
* Print a warning indicating that the global lock should be enabled
* in another remaining sanlock VG.
*
* Do not allow any more VGs to be removed by this command, e.g.
* if a command removes two sanlock VGs, like vgremove foo bar,
* and the global lock existed in foo, do not continue to remove
* VG bar without the global lock. See the corresponding check above.
*/
if (lockd_flags & LD_RF_WARN_GL_REMOVED) {
if (lockd_flags & LD_RF_WARN_GL_REMOVED)
log_warn("VG %s held the sanlock global lock, enable global lock in another VG.", vg->name);
cmd->lockd_gl_removed = 1;
}
/*
* The usleep delay gives sanlock time to close the lock lv,


@@ -29,17 +29,7 @@
#define DM_HINT_OVERHEAD_PER_BLOCK 8 /* bytes */
#define DM_MAX_HINT_WIDTH (4+16) /* bytes. FIXME Configurable? */
int cache_mode_is_set(const struct lv_segment *seg)
{
if (seg_is_cache(seg))
seg = first_seg(seg->pool_lv);
return (seg->feature_flags & (DM_CACHE_FEATURE_WRITEBACK |
DM_CACHE_FEATURE_WRITETHROUGH |
DM_CACHE_FEATURE_PASSTHROUGH)) ? 1 : 0;
}
const char *get_cache_mode_name(const struct lv_segment *seg)
const char *get_cache_pool_cachemode_name(const struct lv_segment *seg)
{
if (seg->feature_flags & DM_CACHE_FEATURE_WRITEBACK)
return "writeback";
@@ -56,48 +46,19 @@ const char *get_cache_mode_name(const struct lv_segment *seg)
return NULL;
}
int cache_set_mode(struct lv_segment *seg, const char *str)
int set_cache_pool_feature(uint64_t *feature_flags, const char *str)
{
struct cmd_context *cmd = seg->lv->vg->cmd;
int id;
uint64_t mode;
if (!str && !seg_is_cache(seg))
return 1; /* Defaults only for cache */
if (seg_is_cache(seg))
seg = first_seg(seg->pool_lv);
if (!str) {
if (cache_mode_is_set(seg))
return 1; /* Default already set in cache pool */
id = allocation_cache_mode_CFG;
/* If present, check backward compatible settings */
if (!find_config_node(cmd, cmd->cft, id) &&
find_config_node(cmd, cmd->cft, allocation_cache_pool_cachemode_CFG))
id = allocation_cache_pool_cachemode_CFG;
str = find_config_tree_str(cmd, id, NULL);
}
if (!strcmp(str, "writeback"))
mode = DM_CACHE_FEATURE_WRITEBACK;
*feature_flags |= DM_CACHE_FEATURE_WRITEBACK;
else if (!strcmp(str, "writethrough"))
mode = DM_CACHE_FEATURE_WRITETHROUGH;
else if (!strcmp(str, "passthrough"))
mode = DM_CACHE_FEATURE_PASSTHROUGH;
*feature_flags |= DM_CACHE_FEATURE_WRITETHROUGH;
else if (!strcmp(str, "passhrough"))
*feature_flags |= DM_CACHE_FEATURE_PASSTHROUGH;
else {
log_error("Cannot set unknown cache mode \"%s\".", str);
log_error("Cache pool feature \"%s\" is unknown.", str);
return 0;
}
seg->feature_flags &= ~(DM_CACHE_FEATURE_WRITEBACK |
DM_CACHE_FEATURE_WRITETHROUGH |
DM_CACHE_FEATURE_PASSTHROUGH);
seg->feature_flags |= mode;
return 1;
}
@@ -434,72 +395,36 @@ int lv_is_cache_origin(const struct logical_volume *lv)
return seg && lv_is_cache(seg->lv) && !lv_is_pending_delete(seg->lv) && (seg_lv(seg, 0) == lv);
}
int cache_set_policy(struct lv_segment *seg, const char *name,
const struct dm_config_tree *settings)
int lv_cache_set_policy(struct logical_volume *lv, const char *name,
const struct dm_config_tree *settings)
{
struct dm_config_node *cn;
const struct dm_config_node *cns;
struct dm_config_tree *old = NULL, *new = NULL, *tmp = NULL;
int r = 0;
const int passed_seg_is_cache = seg_is_cache(seg);
struct lv_segment *seg = first_seg(lv);
if (passed_seg_is_cache)
if (lv_is_cache(lv))
seg = first_seg(seg->pool_lv);
if (name) {
if (!(seg->policy_name = dm_pool_strdup(seg->lv->vg->vgmem, name))) {
log_error("Failed to duplicate policy name.");
return 0;
}
} else if (!seg->policy_name && passed_seg_is_cache)
seg->policy_name = find_config_tree_str(seg->lv->vg->cmd, allocation_cache_policy_CFG, NULL);
if (settings) {
if (!seg->policy_name) {
log_error(INTERNAL_ERROR "Can't set policy settings without policy name.");
return 0;
}
if (seg->policy_settings) {
if (!(old = dm_config_create()))
goto_out;
if (!(new = dm_config_create()))
goto_out;
new->root = settings->root;
old->root = seg->policy_settings;
new->cascade = old;
if (!(tmp = dm_config_flatten(new)))
goto_out;
}
if ((cn = dm_config_find_node((tmp) ? tmp->root : settings->root, "policy_settings")) &&
!(seg->policy_settings = dm_config_clone_node_with_mem(seg->lv->vg->vgmem, cn, 0)))
if (seg->policy_settings) {
if (!(old = dm_config_create()))
goto_out;
} else if (passed_seg_is_cache && /* Look for command's profile cache_policies */
(cns = find_config_tree_node(seg->lv->vg->cmd, allocation_cache_settings_CFG_SECTION, NULL))) {
/* Try to find our section for given policy */
for (cn = cns->child; cn; cn = cn->sib) {
/* Only matching section names */
if (cn->v || strcmp(cn->key, seg->policy_name) != 0)
continue;
if (!(new = dm_config_create()))
goto_out;
new->root = settings->root;
old->root = seg->policy_settings;
new->cascade = old;
if (!(tmp = dm_config_flatten(new)))
goto_out;
}
if (!cn->child)
break;
if ((cn = dm_config_find_node((tmp) ? tmp->root : settings->root, "policy_settings")) &&
!(seg->policy_settings = dm_config_clone_node_with_mem(lv->vg->vgmem, cn, 0)))
goto_out;
if (!(new = dm_config_create()))
goto_out;
if (!(new->root = dm_config_clone_node_with_mem(new->mem,
cn->child, 1)))
goto_out;
if (!(seg->policy_settings = dm_config_create_node(new, "policy_settings")))
goto_out;
seg->policy_settings->child = new->root;
break; /* Only first match counts */
}
if (name && !(seg->policy_name = dm_pool_strdup(lv->vg->vgmem, name))) {
log_error("Failed to duplicate policy name.");
goto out;
}
restart: /* remove any 'default" nodes */


@@ -131,7 +131,7 @@ char *lvseg_discards_dup(struct dm_pool *mem, const struct lv_segment *seg)
char *lvseg_cachemode_dup(struct dm_pool *mem, const struct lv_segment *seg)
{
const char *name = get_cache_mode_name(seg);
const char *name = get_cache_pool_cachemode_name(seg);
if (!name)
return_NULL;


@@ -4205,18 +4205,6 @@ int lv_rename_update(struct cmd_context *cmd, struct logical_volume *lv,
return 0;
}
/*
* The lvmlockd LV lock is only acquired here to ensure the LV is not
* active on another host. This requests a transient LV lock.
* If the LV is active, a persistent LV lock already exists in
* lvmlockd, and the transient lock request does nothing.
* If the LV is not active, then no LV lock exists and the transient
* lock request acquires the LV lock (or fails). The transient lock
* is automatically released when the command exits.
*/
if (!lockd_lv(cmd, lv, "ex", 0))
return_0;
if (update_mda && !archive(vg))
return_0;
@@ -6998,7 +6986,7 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
if (seg_is_pool(lp))
status |= LVM_WRITE; /* Pool is always writable */
else if (seg_is_cache(lp) || seg_is_thin_volume(lp)) {
else if (seg_is_cache(lp) || seg_is_thin_volume(lp)) {
/* Resolve pool volume */
if (!lp->pool_name) {
/* Should be already checked */
@@ -7074,11 +7062,7 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
display_lvname(origin_lv));
return NULL;
}
} else if (seg_is_cache(lp)) {
if (!pool_lv) {
log_error(INTERNAL_ERROR "Pool LV for cache is missing.");
return NULL;
}
} else if (pool_lv && seg_is_cache(lp)) {
if (!lv_is_cache_pool(pool_lv)) {
log_error("Logical volume %s is not a cache pool.",
display_lvname(pool_lv));
@@ -7222,7 +7206,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
if (!archive(vg))
return_NULL;
if (pool_lv && seg_is_thin_volume(lp)) {
/* Ensure all stacked messages are submitted */
if ((pool_is_active(pool_lv) || is_change_activating(lp->activate)) &&
@@ -7269,25 +7252,16 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
memlock_unlock(vg->cmd);
if (seg_is_cache_pool(lp) || seg_is_cache(lp)) {
if (!cache_set_mode(first_seg(lv), lp->cache_mode)) {
stack;
goto revert_new_lv;
}
if (!cache_set_policy(first_seg(lv), lp->policy_name, lp->policy_settings)) {
stack;
goto revert_new_lv;
}
pool_lv = pool_lv ? : lv;
if (lp->chunk_size) {
first_seg(pool_lv)->chunk_size = lp->chunk_size;
/* TODO: some calc_policy solution for cache ? */
if (!recalculate_pool_chunk_size_with_dev_hints(pool_lv, lp->passed_args,
THIN_CHUNK_SIZE_CALC_METHOD_GENERIC)) {
stack;
goto revert_new_lv;
}
if (!lv_cache_set_policy(pool_lv, lp->policy_name, lp->policy_settings))
return_NULL; /* revert? */
first_seg(pool_lv)->chunk_size = lp->chunk_size;
first_seg(pool_lv)->feature_flags = lp->feature_flags;
/* TODO: some calc_policy solution for cache ? */
if (!recalculate_pool_chunk_size_with_dev_hints(pool_lv, lp->passed_args,
THIN_CHUNK_SIZE_CALC_METHOD_GENERIC)) {
stack;
goto revert_new_lv;
}
} else if (seg_is_raid(lp)) {
first_seg(lv)->min_recovery_rate = lp->min_recovery_rate;
@@ -7492,12 +7466,6 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
}
lv = tmp_lv;
if (!cache_set_mode(first_seg(lv), lp->cache_mode))
return_NULL; /* revert? */
if (!cache_set_policy(first_seg(lv), lp->policy_name, lp->policy_settings))
return_NULL; /* revert? */
if (!lv_update_and_reload(lv)) {
/* FIXME Do a better revert */
log_error("Aborting. Manual intervention required.");


@@ -208,21 +208,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
}
}
if (seg_is_cache_pool(seg) &&
!dm_list_empty(&seg->lv->segs_using_this_lv)) {
switch (seg->feature_flags &
(DM_CACHE_FEATURE_PASSTHROUGH |
DM_CACHE_FEATURE_WRITETHROUGH |
DM_CACHE_FEATURE_WRITEBACK)) {
case DM_CACHE_FEATURE_PASSTHROUGH:
case DM_CACHE_FEATURE_WRITETHROUGH:
case DM_CACHE_FEATURE_WRITEBACK:
break;
default:
log_error("LV %s has invalid cache's feature flag.",
lv->name);
inc_error_count;
}
if (seg_is_cache_pool(seg)) {
if (!seg->policy_name) {
log_error("LV %s is missing cache policy name.", lv->name);
inc_error_count;


@@ -901,7 +901,7 @@ struct lvcreate_params {
uint32_t min_recovery_rate; /* RAID */
uint32_t max_recovery_rate; /* RAID */
const char *cache_mode; /* cache */
uint64_t feature_flags; /* cache */
const char *policy_name; /* cache */
struct dm_config_tree *policy_settings; /* cache */
@@ -1153,11 +1153,8 @@ struct lv_status_cache {
dm_percent_t dirty_usage;
};
const char *get_cache_mode_name(const struct lv_segment *cache_seg);
int cache_mode_is_set(const struct lv_segment *seg);
int cache_set_mode(struct lv_segment *cache_seg, const char *str);
int cache_set_policy(struct lv_segment *cache_seg, const char *name,
const struct dm_config_tree *settings);
const char *get_cache_pool_cachemode_name(const struct lv_segment *seg);
int set_cache_pool_feature(uint64_t *feature_flags, const char *str);
int update_cache_pool_params(const struct segment_type *segtype,
struct volume_group *vg, unsigned attr,
int passed_args, uint32_t pool_data_extents,
@@ -1168,6 +1165,8 @@ int validate_lv_cache_create_origin(const struct logical_volume *origin_lv);
struct logical_volume *lv_cache_create(struct logical_volume *pool,
struct logical_volume *origin);
int lv_cache_remove(struct logical_volume *cache_lv);
int lv_cache_set_policy(struct logical_volume *cache_lv, const char *name,
const struct dm_config_tree *settings);
int wipe_cache_pool(struct logical_volume *cache_pool_lv);
/* -- metadata/cache_manip.c */


@@ -191,9 +191,6 @@ int init_thin_segtypes(struct cmd_context *cmd, struct segtype_library *seglib);
int init_cache_segtypes(struct cmd_context *cmd, struct segtype_library *seglib);
#endif
#define CACHE_FEATURE_POLICY_MQ (1U << 0)
#define CACHE_FEATURE_POLICY_SMQ (1U << 1)
#define SNAPSHOT_FEATURE_FIXED_LEAK (1U << 0) /* version 1.12 */
#ifdef SNAPSHOT_INTERNAL


@@ -516,9 +516,6 @@
/* Define to 1 if you have the <sys/time.h> header file. */
#undef HAVE_SYS_TIME_H
/* Define to 1 if you have the <sys/timerfd.h> header file. */
#undef HAVE_SYS_TIMERFD_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H


@@ -2079,7 +2079,7 @@ static int _cachemode_disp(struct dm_report *rh, struct dm_pool *mem,
seg = first_seg(seg->pool_lv);
if (seg_is_cache_pool(seg)) {
if (!(cachemode_str = get_cache_mode_name(seg)))
if (!(cachemode_str = get_cache_pool_cachemode_name(seg)))
return_0;
return dm_report_field_string(rh, field, &cachemode_str);


@@ -72,7 +72,6 @@ dm_task_get_ioctl_timestamp
dm_task_set_record_timestamp
dm_timestamp_alloc
dm_timestamp_compare
dm_timestamp_copy
dm_timestamp_delta
dm_timestamp_destroy
dm_timestamp_get


@@ -404,32 +404,7 @@ int dm_get_status_thin(struct dm_pool *mem, const char *params,
*
* Operations on dm_stats objects include managing statistics regions
* and obtaining and manipulating current counter values from the
* kernel. Methods are provided to return basic count values and to
* derive time-based metrics when a suitable interval estimate is
* provided.
*
* Internally the dm_stats handle contains a pointer to a table of one
* or more dm_stats_region objects representing the regions registered
* with the dm_stats_create_region() method. These in turn point to a
* table of one or more dm_stats_counters objects containing the
* counter sets for each defined area within the region:
*
* dm_stats->dm_stats_region[nr_regions]->dm_stats_counters[nr_areas]
*
* This structure is private to the library and may change in future
* versions: all users should make use of the public interface and treat
* the dm_stats type as an opaque handle.
*
* Regions and counter sets are stored in order of increasing region_id.
* Depending on region specifications and the sequence of create and
* delete operations this may not correspond to increasing sector
* number: users of the library should not assume that this is the case
* unless region creation is deliberately managed to ensure this (by
* always creating regions in strict order of ascending sector address).
*
* Regions may also overlap so the same sector range may be included in
* more than one region or area: applications should be prepared to deal
* with this or manage regions such that it does not occur.
* kernel.
*/
struct dm_stats;
@@ -603,25 +578,24 @@ void dm_stats_buffer_destroy(struct dm_stats *dms, char *buffer);
*
* Always returns zero on an empty handle.
*/
uint64_t dm_stats_get_nr_regions(const struct dm_stats *dms);
uint64_t dm_stats_get_nr_regions(struct dm_stats *dms);
/*
* Test whether region_id is present in this dm_stats handle.
*/
int dm_stats_region_present(const struct dm_stats *dms, uint64_t region_id);
int dm_stats_region_present(struct dm_stats *dms, uint64_t region_id);
/*
* Returns the number of areas (counter sets) contained in the specified
* region_id of the supplied dm_stats handle.
*/
uint64_t dm_stats_get_region_nr_areas(const struct dm_stats *dms,
uint64_t region_id);
uint64_t dm_stats_get_region_nr_areas(struct dm_stats *dms, uint64_t region_id);
/*
* Returns the total number of areas (counter sets) in all regions of the
* given dm_stats object.
*/
uint64_t dm_stats_get_nr_areas(const struct dm_stats *dms);
uint64_t dm_stats_get_nr_areas(struct dm_stats *dms);
/*
* Destroy a dm_stats object and all associated regions and counter
@@ -647,18 +621,15 @@ void dm_stats_destroy(struct dm_stats *dms);
* All values are stored internally with nanosecond precision and are
* converted to or from ms when the millisecond interfaces are used.
*/
void dm_stats_set_sampling_interval_ns(struct dm_stats *dms,
uint64_t interval_ns);
void dm_stats_set_sampling_interval_ms(struct dm_stats *dms,
uint64_t interval_ms);
void dm_stats_set_sampling_interval_ns(struct dm_stats *dms, uint64_t interval_ns);
void dm_stats_set_sampling_interval_ms(struct dm_stats *dms, uint64_t interval_ms);
/*
* Retrieve the configured sampling interval in either nanoseconds or
* milliseconds.
*/
uint64_t dm_stats_get_sampling_interval_ns(const struct dm_stats *dms);
uint64_t dm_stats_get_sampling_interval_ms(const struct dm_stats *dms);
uint64_t dm_stats_get_sampling_interval_ns(struct dm_stats *dms);
uint64_t dm_stats_get_sampling_interval_ms(struct dm_stats *dms);
/*
* Override program_id. This may be used to change the default
@@ -697,12 +668,12 @@ int dm_stats_set_program_id(struct dm_stats *dms, int allow_empty,
*
* All values are returned in units of 512b sectors.
*/
uint64_t dm_stats_get_region_start(const struct dm_stats *dms, uint64_t *start,
uint64_t dm_stats_get_region_start(struct dm_stats *dms, uint64_t *start,
uint64_t region_id);
uint64_t dm_stats_get_region_len(const struct dm_stats *dms, uint64_t *len,
uint64_t dm_stats_get_region_len(struct dm_stats *dms, uint64_t *len,
uint64_t region_id);
uint64_t dm_stats_get_region_area_len(const struct dm_stats *dms,
uint64_t *area_len, uint64_t region_id);
uint64_t dm_stats_get_region_area_len(struct dm_stats *dms, uint64_t *area_len,
uint64_t region_id);
/*
* Area properties: start and length.
@@ -715,7 +686,7 @@ uint64_t dm_stats_get_region_area_len(const struct dm_stats *dms,
*
* All values are returned in units of 512b sectors.
*/
uint64_t dm_stats_get_area_start(const struct dm_stats *dms, uint64_t *start,
uint64_t dm_stats_get_area_start(struct dm_stats *dms, uint64_t *start,
uint64_t region_id, uint64_t area_id);
/*
@@ -726,10 +697,10 @@ uint64_t dm_stats_get_area_start(const struct dm_stats *dms, uint64_t *start,
* dm_stats_populate(), or dm_stats_bind*() of the handle from which it
* was obtained.
*/
const char *dm_stats_get_region_program_id(const struct dm_stats *dms,
const char *dm_stats_get_region_program_id(struct dm_stats *dms,
uint64_t region_id);
const char *dm_stats_get_region_aux_data(const struct dm_stats *dms,
const char *dm_stats_get_region_aux_data(struct dm_stats *dms,
uint64_t region_id);
/*
@@ -821,8 +792,8 @@ dm_stats_walk_start((dms)); \
do
/*
* Start a 'while' style loop or end a 'do..while' loop iterating over the
* regions contained in dm_stats handle 'dms'.
* End a loop iterating over the regions contained in dm_stats handle
* 'dms'.
*/
#define dm_stats_walk_while(dms) \
while(!dm_stats_walk_end((dms)))
@@ -840,13 +811,13 @@ while(!dm_stats_walk_end((dms)))
* Returns the number of areas (counter sets) contained in the current
* region of the supplied dm_stats handle.
*/
uint64_t dm_stats_get_current_nr_areas(const struct dm_stats *dms);
uint64_t dm_stats_get_current_nr_areas(struct dm_stats *dms);
/*
* Retrieve the current values of the stats cursor.
*/
uint64_t dm_stats_get_current_region(const struct dm_stats *dms);
uint64_t dm_stats_get_current_area(const struct dm_stats *dms);
uint64_t dm_stats_get_current_region(struct dm_stats *dms);
uint64_t dm_stats_get_current_area(struct dm_stats *dms);
/*
* Current region properties: size, length & area_len.
@@ -856,14 +827,9 @@ uint64_t dm_stats_get_current_area(const struct dm_stats *dms);
*
* All values are returned in units of 512b sectors.
*/
uint64_t dm_stats_get_current_region_start(const struct dm_stats *dms,
uint64_t *start);
uint64_t dm_stats_get_current_region_len(const struct dm_stats *dms,
uint64_t *len);
uint64_t dm_stats_get_current_region_area_len(const struct dm_stats *dms,
uint64_t *area_len);
uint64_t dm_stats_get_current_region_start(struct dm_stats *dms, uint64_t *start);
uint64_t dm_stats_get_current_region_len(struct dm_stats *dms, uint64_t *len);
uint64_t dm_stats_get_current_region_area_len(struct dm_stats *dms, uint64_t *area_len);
/*
* Current area properties: start and length.
@@ -873,23 +839,20 @@ uint64_t dm_stats_get_current_region_area_len(const struct dm_stats *dms,
*
* All values are returned in units of 512b sectors.
*/
uint64_t dm_stats_get_current_area_start(const struct dm_stats *dms,
uint64_t *start);
uint64_t dm_stats_get_current_area_len(const struct dm_stats *dms,
uint64_t *start);
uint64_t dm_stats_get_current_area_start(struct dm_stats *dms, uint64_t *start);
uint64_t dm_stats_get_current_area_len(struct dm_stats *dms, uint64_t *start);
/*
* Return a pointer to the program_id string for region at the current
* cursor location.
*/
const char *dm_stats_get_current_region_program_id(const struct dm_stats *dms);
const char *dm_stats_get_current_region_program_id(struct dm_stats *dms);
/*
* Return a pointer to the aux_data string for the region at the current
* cursor location.
*/
const char *dm_stats_get_current_region_aux_data(const struct dm_stats *dms);
const char *dm_stats_get_current_region_aux_data(struct dm_stats *dms);
/*
* Call this to actually run the ioctl.
@@ -1308,7 +1271,7 @@ int dm_tree_node_add_cache_target(struct dm_tree_node *node,
const char *origin_uuid,
const char *policy_name,
const struct dm_config_node *policy_settings,
uint32_t data_block_size);
uint32_t chunk_size);
/*
* FIXME Add individual cache policy pairs <key> = value, like:
@@ -2182,11 +2145,6 @@ struct dm_timestamp *dm_timestamp_alloc(void);
*/
int dm_timestamp_get(struct dm_timestamp *ts);
/*
* Copy a timestamp from ts_old to ts_new.
*/
void dm_timestamp_copy(struct dm_timestamp *ts_new, struct dm_timestamp *ts_old);
/*
* Compare two timestamps.
*
@@ -2476,43 +2434,43 @@ void dm_report_field_set_value(struct dm_report_field *field, const void *value,
#define DM_STATS_REGION_CURRENT UINT64_MAX
#define DM_STATS_AREA_CURRENT UINT64_MAX
uint64_t dm_stats_get_reads(const struct dm_stats *dms,
uint64_t dm_stats_get_reads(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_reads_merged(const struct dm_stats *dms,
uint64_t dm_stats_get_reads_merged(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_read_sectors(const struct dm_stats *dms,
uint64_t dm_stats_get_read_sectors(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_read_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_read_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_writes(const struct dm_stats *dms,
uint64_t dm_stats_get_writes(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_writes_merged(const struct dm_stats *dms,
uint64_t dm_stats_get_writes_merged(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_write_sectors(const struct dm_stats *dms,
uint64_t dm_stats_get_write_sectors(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_write_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_write_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_io_in_progress(const struct dm_stats *dms,
uint64_t dm_stats_get_io_in_progress(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_io_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_io_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_weighted_io_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_weighted_io_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_total_read_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_total_read_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
uint64_t dm_stats_get_total_write_nsecs(const struct dm_stats *dms,
uint64_t dm_stats_get_total_write_nsecs(struct dm_stats *dms,
uint64_t region_id, uint64_t area_id);
/*
@@ -2542,51 +2500,46 @@ uint64_t dm_stats_get_total_write_nsecs(const struct dm_stats *dms,
* average_wr_wait_time: the average write wait time
*/
int dm_stats_get_rd_merges_per_sec(const struct dm_stats *dms, double *rrqm,
int dm_stats_get_rd_merges_per_sec(struct dm_stats *dms, double *rrqm,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_wr_merges_per_sec(const struct dm_stats *dms, double *rrqm,
int dm_stats_get_wr_merges_per_sec(struct dm_stats *dms, double *rrqm,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_reads_per_sec(const struct dm_stats *dms, double *rd_s,
int dm_stats_get_reads_per_sec(struct dm_stats *dms, double *rd_s,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_writes_per_sec(const struct dm_stats *dms, double *wr_s,
int dm_stats_get_writes_per_sec(struct dm_stats *dms, double *wr_s,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_read_sectors_per_sec(const struct dm_stats *dms,
double *rsec_s, uint64_t region_id,
uint64_t area_id);
int dm_stats_get_read_sectors_per_sec(struct dm_stats *dms, double *rsec_s,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_write_sectors_per_sec(const struct dm_stats *dms,
double *wr_s, uint64_t region_id,
uint64_t area_id);
int dm_stats_get_write_sectors_per_sec(struct dm_stats *dms, double *wr_s,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_average_request_size(const struct dm_stats *dms,
double *arqsz, uint64_t region_id,
uint64_t area_id);
int dm_stats_get_average_request_size(struct dm_stats *dms, double *arqsz,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_service_time(const struct dm_stats *dms, double *svctm,
int dm_stats_get_service_time(struct dm_stats *dms, double *svctm,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_average_queue_size(const struct dm_stats *dms, double *qusz,
int dm_stats_get_average_queue_size(struct dm_stats *dms, double *qusz,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_average_wait_time(const struct dm_stats *dms, double *await,
int dm_stats_get_average_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_average_rd_wait_time(const struct dm_stats *dms,
double *await, uint64_t region_id,
uint64_t area_id);
int dm_stats_get_average_rd_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_average_wr_wait_time(const struct dm_stats *dms,
double *await, uint64_t region_id,
uint64_t area_id);
int dm_stats_get_average_wr_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_throughput(const struct dm_stats *dms, double *tput,
int dm_stats_get_throughput(struct dm_stats *dms, double *tput,
uint64_t region_id, uint64_t area_id);
int dm_stats_get_utilization(const struct dm_stats *dms, dm_percent_t *util,
int dm_stats_get_utilization(struct dm_stats *dms, dm_percent_t *util,
uint64_t region_id, uint64_t area_id);
/*************************


@@ -159,7 +159,7 @@ struct load_segment {
uint32_t stripe_size; /* Striped + raid */
int persistent; /* Snapshot */
uint32_t chunk_size; /* Snapshot */
uint32_t chunk_size; /* Snapshot + cache */
struct dm_tree_node *cow; /* Snapshot */
struct dm_tree_node *origin; /* Snapshot + Snapshot origin + Cache */
struct dm_tree_node *merge; /* Snapshot */
@@ -200,7 +200,7 @@ struct load_segment {
struct dm_list thin_messages; /* Thin_pool */
uint64_t transaction_id; /* Thin_pool */
uint64_t low_water_mark; /* Thin_pool */
uint32_t data_block_size; /* Thin_pool + cache */
uint32_t data_block_size; /* Thin_pool */
unsigned skip_block_zeroing; /* Thin_pool */
unsigned ignore_discard; /* Thin_pool target vsn 1.1 */
unsigned no_discard_passdown; /* Thin_pool target vsn 1.1 */
@@ -2429,8 +2429,8 @@ static int _cache_emit_segment_line(struct dm_task *dmt,
EMIT_PARAMS(pos, " %s %s %s", metadata, data, origin);
/* Data block size */
EMIT_PARAMS(pos, " %u", seg->data_block_size);
/* Chunk size */
EMIT_PARAMS(pos, " %u", seg->chunk_size);
/* Features */
/* feature_count = hweight32(seg->flags); */
@@ -3353,37 +3353,11 @@ int dm_tree_node_add_cache_target(struct dm_tree_node *node,
const char *origin_uuid,
const char *policy_name,
const struct dm_config_node *policy_settings,
uint32_t data_block_size)
uint32_t chunk_size)
{
struct dm_config_node *cn;
struct load_segment *seg;
switch (feature_flags &
(DM_CACHE_FEATURE_PASSTHROUGH |
DM_CACHE_FEATURE_WRITETHROUGH |
DM_CACHE_FEATURE_WRITEBACK)) {
case DM_CACHE_FEATURE_PASSTHROUGH:
case DM_CACHE_FEATURE_WRITETHROUGH:
case DM_CACHE_FEATURE_WRITEBACK:
break;
default:
log_error("Invalid cache's feature flag " FMTu64 ".",
feature_flags);
return 0;
}
if (data_block_size < DM_CACHE_MIN_DATA_BLOCK_SIZE) {
log_error("Data block size %u is lower then %u sectors.",
data_block_size, DM_CACHE_MIN_DATA_BLOCK_SIZE);
return 0;
}
if (data_block_size > DM_CACHE_MAX_DATA_BLOCK_SIZE) {
log_error("Data block size %u is higher then %u sectors.",
data_block_size, DM_CACHE_MAX_DATA_BLOCK_SIZE);
return 0;
}
if (!(seg = _add_segment(node, SEG_CACHE, size)))
return_0;
@@ -3405,6 +3379,7 @@ int dm_tree_node_add_cache_target(struct dm_tree_node *node,
if (!_link_tree_nodes(node, seg->metadata))
return_0;
if (!(seg->origin = dm_tree_find_node_by_uuid(node->dtree,
origin_uuid))) {
log_error("Missing cache origin uuid %s.",
@@ -3414,7 +3389,7 @@ int dm_tree_node_add_cache_target(struct dm_tree_node *node,
if (!_link_tree_nodes(node, seg->origin))
return_0;
seg->data_block_size = data_block_size;
seg->chunk_size = chunk_size;
seg->flags = feature_flags;
seg->policy_name = policy_name;
@@ -3433,6 +3408,7 @@ int dm_tree_node_add_cache_target(struct dm_tree_node *node,
}
}
return 1;
}


@@ -79,15 +79,11 @@ static char *_program_id_from_proc(void)
if (!(comm = fopen(PROC_SELF_COMM, "r")))
return_NULL;
if (!fgets(buf, sizeof(buf), comm)) {
log_error("Could not read from %s", PROC_SELF_COMM);
if(fclose(comm))
stack;
return NULL;
}
if (!fgets(buf, sizeof(buf), comm))
return_NULL;
if (fclose(comm))
stack;
return_NULL;
return dm_strdup(buf);
}
@@ -96,10 +92,10 @@ struct dm_stats *dm_stats_create(const char *program_id)
{
struct dm_stats *dms = NULL;
if (!(dms = dm_zalloc(sizeof(*dms))))
if (!(dms = dm_malloc(sizeof(*dms))))
return_NULL;
if (!(dms->mem = dm_pool_create("stats_pool", 4096)))
goto_out;
return_NULL;
if (!program_id || !strlen(program_id))
dms->program_id = _program_id_from_proc();
@@ -119,15 +115,12 @@ struct dm_stats *dm_stats_create(const char *program_id)
dms->regions = NULL;
return dms;
out:
dm_free(dms);
return NULL;
}
/**
* Test whether the stats region pointed to by region is present.
*/
static int _stats_region_present(const struct dm_stats_region *region)
static int _stats_region_present(struct dm_stats_region *region)
{
return !(region->region_id == DM_STATS_REGION_NOT_PRESENT);
}
@@ -342,13 +335,10 @@ static int _stats_parse_list(struct dm_stats *dms, const char *resp)
dms->max_region = max_region - 1;
dms->regions = dm_pool_end_object(mem);
if (fclose(list_rows))
stack;
fclose(list_rows);
return 1;
out:
if(fclose(list_rows))
stack;
fclose(list_rows);
dm_pool_abandon_object(mem);
return 0;
}
@@ -488,16 +478,13 @@ static int _stats_parse_region(struct dm_pool *mem, const char *resp,
region->timescale = timescale;
region->counters = dm_pool_end_object(mem);
if (fclose(stats_rows))
stack;
fclose(stats_rows);
return 1;
out:
if (stats_rows)
if(fclose(stats_rows))
stack;
fclose(stats_rows);
dm_pool_abandon_object(mem);
return 0;
}
@@ -511,7 +498,8 @@ static uint64_t _nr_areas(uint64_t len, uint64_t step)
* treat the entire region as a single area. Any partial area at the
* end of the region is treated as an additional complete area.
*/
return (len / (step ? : len)) + !!(len % step);
return (len && step)
? (len / (step ? step : len)) + !!(len % step) : 0;
}
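The area-count rule spelled out in the comment above can be checked with a small standalone sketch. This is an illustrative reimplementation of the documented behaviour (a zero step treats the whole region as a single area, and a trailing partial area counts as a complete area), not the library code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the documented area-count rule (names are illustrative):
 * a region of `len` sectors is split into areas of `step` sectors. */
static uint64_t nr_areas_sketch(uint64_t len, uint64_t step)
{
	if (!len)
		return 0;	/* empty region: no areas */
	if (!step)
		return 1;	/* no step given: whole region is one area */
	/* A partial area at the end counts as an additional complete area. */
	return len / step + !!(len % step);
}
```

For example, a 1000-sector region with a 256-sector step yields four areas: three complete ones plus a 232-sector tail counted as a fourth.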
static uint64_t _nr_areas_region(struct dm_stats_region *region)
@@ -519,7 +507,7 @@ static uint64_t _nr_areas_region(struct dm_stats_region *region)
return _nr_areas(region->len, region->step);
}
static void _stats_walk_next(const struct dm_stats *dms, int region,
static void _stats_walk_next(struct dm_stats *dms, int region,
uint64_t *cur_r, uint64_t *cur_a)
{
struct dm_stats_region *cur = NULL;
@@ -544,8 +532,7 @@ static void _stats_walk_next(const struct dm_stats *dms, int region,
}
static void _stats_walk_start(const struct dm_stats *dms,
uint64_t *cur_r, uint64_t *cur_a)
static void _stats_walk_start(struct dm_stats *dms, uint64_t *cur_r, uint64_t *cur_a)
{
if (!dms || !dms->regions)
return;
@@ -573,8 +560,7 @@ void dm_stats_walk_next_region(struct dm_stats *dms)
_stats_walk_next(dms, 1, &dms->cur_region, &dms->cur_area);
}
static int _stats_walk_end(const struct dm_stats *dms,
uint64_t *cur_r, uint64_t *cur_a)
static int _stats_walk_end(struct dm_stats *dms, uint64_t *cur_r, uint64_t *cur_a)
{
struct dm_stats_region *region = NULL;
int end = 0;
@@ -595,19 +581,18 @@ int dm_stats_walk_end(struct dm_stats *dms)
return _stats_walk_end(dms, &dms->cur_region, &dms->cur_area);
}
uint64_t dm_stats_get_region_nr_areas(const struct dm_stats *dms,
uint64_t region_id)
uint64_t dm_stats_get_region_nr_areas(struct dm_stats *dms, uint64_t region_id)
{
struct dm_stats_region *region = &dms->regions[region_id];
return _nr_areas_region(region);
}
uint64_t dm_stats_get_current_nr_areas(const struct dm_stats *dms)
uint64_t dm_stats_get_current_nr_areas(struct dm_stats *dms)
{
return dm_stats_get_region_nr_areas(dms, dms->cur_region);
}
uint64_t dm_stats_get_nr_areas(const struct dm_stats *dms)
uint64_t dm_stats_get_nr_areas(struct dm_stats *dms)
{
uint64_t nr_areas = 0;
/* use a separate cursor */
@@ -786,7 +771,7 @@ void dm_stats_buffer_destroy(struct dm_stats *dms, char *buffer)
dm_pool_free(dms->mem, buffer);
}
uint64_t dm_stats_get_nr_regions(const struct dm_stats *dms)
uint64_t dm_stats_get_nr_regions(struct dm_stats *dms)
{
if (!dms || !dms->regions)
return 0;
@@ -796,7 +781,7 @@ uint64_t dm_stats_get_nr_regions(const struct dm_stats *dms)
/**
* Test whether region_id is present in this set of stats data
*/
int dm_stats_region_present(const struct dm_stats *dms, uint64_t region_id)
int dm_stats_region_present(struct dm_stats *dms, uint64_t region_id)
{
if (!dms->regions)
return 0;
@@ -897,7 +882,7 @@ void dm_stats_destroy(struct dm_stats *dms)
* respectively.
*/
#define MK_STATS_GET_COUNTER_FN(counter) \
uint64_t dm_stats_get_ ## counter(const struct dm_stats *dms, \
uint64_t dm_stats_get_ ## counter(struct dm_stats *dms, \
uint64_t region_id, uint64_t area_id) \
{ \
region_id = (region_id == DM_STATS_REGION_CURRENT) \
@@ -922,7 +907,7 @@ MK_STATS_GET_COUNTER_FN(total_read_nsecs)
MK_STATS_GET_COUNTER_FN(total_write_nsecs)
#undef MK_STATS_GET_COUNTER_FN
int dm_stats_get_rd_merges_per_sec(const struct dm_stats *dms, double *rrqm,
int dm_stats_get_rd_merges_per_sec(struct dm_stats *dms, double *rrqm,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -940,7 +925,7 @@ int dm_stats_get_rd_merges_per_sec(const struct dm_stats *dms, double *rrqm,
return 1;
}
int dm_stats_get_wr_merges_per_sec(const struct dm_stats *dms, double *wrqm,
int dm_stats_get_wr_merges_per_sec(struct dm_stats *dms, double *wrqm,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -958,7 +943,7 @@ int dm_stats_get_wr_merges_per_sec(const struct dm_stats *dms, double *wrqm,
return 1;
}
int dm_stats_get_reads_per_sec(const struct dm_stats *dms, double *rd_s,
int dm_stats_get_reads_per_sec(struct dm_stats *dms, double *rd_s,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -976,7 +961,7 @@ int dm_stats_get_reads_per_sec(const struct dm_stats *dms, double *rd_s,
return 1;
}
int dm_stats_get_writes_per_sec(const struct dm_stats *dms, double *wr_s,
int dm_stats_get_writes_per_sec(struct dm_stats *dms, double *wr_s,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -996,7 +981,7 @@ int dm_stats_get_writes_per_sec(const struct dm_stats *dms, double *wr_s,
return 1;
}
int dm_stats_get_read_sectors_per_sec(const struct dm_stats *dms, double *rsec_s,
int dm_stats_get_read_sectors_per_sec(struct dm_stats *dms, double *rsec_s,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1016,7 +1001,7 @@ int dm_stats_get_read_sectors_per_sec(const struct dm_stats *dms, double *rsec_s
return 1;
}
int dm_stats_get_write_sectors_per_sec(const struct dm_stats *dms, double *wsec_s,
int dm_stats_get_write_sectors_per_sec(struct dm_stats *dms, double *wsec_s,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1035,7 +1020,7 @@ int dm_stats_get_write_sectors_per_sec(const struct dm_stats *dms, double *wsec_
return 1;
}
int dm_stats_get_average_request_size(const struct dm_stats *dms, double *arqsz,
int dm_stats_get_average_request_size(struct dm_stats *dms, double *arqsz,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1059,7 +1044,7 @@ int dm_stats_get_average_request_size(const struct dm_stats *dms, double *arqsz,
return 1;
}
int dm_stats_get_average_queue_size(const struct dm_stats *dms, double *qusz,
int dm_stats_get_average_queue_size(struct dm_stats *dms, double *qusz,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1082,7 +1067,7 @@ int dm_stats_get_average_queue_size(const struct dm_stats *dms, double *qusz,
return 1;
}
int dm_stats_get_average_wait_time(const struct dm_stats *dms, double *await,
int dm_stats_get_average_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1106,9 +1091,8 @@ int dm_stats_get_average_wait_time(const struct dm_stats *dms, double *await,
return 1;
}
int dm_stats_get_average_rd_wait_time(const struct dm_stats *dms,
double *await, uint64_t region_id,
uint64_t area_id)
int dm_stats_get_average_rd_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
uint64_t rd_io_ticks, nr_rd_ios;
@@ -1131,9 +1115,8 @@ int dm_stats_get_average_rd_wait_time(const struct dm_stats *dms,
return 1;
}
int dm_stats_get_average_wr_wait_time(const struct dm_stats *dms,
double *await, uint64_t region_id,
uint64_t area_id)
int dm_stats_get_average_wr_wait_time(struct dm_stats *dms, double *await,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
uint64_t wr_io_ticks, nr_wr_ios;
@@ -1156,7 +1139,7 @@ int dm_stats_get_average_wr_wait_time(const struct dm_stats *dms,
return 1;
}
int dm_stats_get_service_time(const struct dm_stats *dms, double *svctm,
int dm_stats_get_service_time(struct dm_stats *dms, double *svctm,
uint64_t region_id, uint64_t area_id)
{
dm_percent_t util;
@@ -1178,7 +1161,7 @@ int dm_stats_get_service_time(const struct dm_stats *dms, double *svctm,
return 1;
}
int dm_stats_get_throughput(const struct dm_stats *dms, double *tput,
int dm_stats_get_throughput(struct dm_stats *dms, double *tput,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1198,7 +1181,7 @@ int dm_stats_get_throughput(const struct dm_stats *dms, double *tput,
return 1;
}
int dm_stats_get_utilization(const struct dm_stats *dms, dm_percent_t *util,
int dm_stats_get_utilization(struct dm_stats *dms, dm_percent_t *util,
uint64_t region_id, uint64_t area_id)
{
struct dm_stats_counters *c;
@@ -1237,13 +1220,13 @@ void dm_stats_set_sampling_interval_ns(struct dm_stats *dms, uint64_t interval_n
dms->interval_ns = interval_ns;
}
uint64_t dm_stats_get_sampling_interval_ms(const struct dm_stats *dms)
uint64_t dm_stats_get_sampling_interval_ms(struct dm_stats *dms)
{
/* All times use nsecs internally. */
return (dms->interval_ns / NSEC_PER_MSEC);
}
uint64_t dm_stats_get_sampling_interval_ns(const struct dm_stats *dms)
uint64_t dm_stats_get_sampling_interval_ns(struct dm_stats *dms)
{
/* All times use nsecs internally. */
return (dms->interval_ns);
@@ -1270,17 +1253,17 @@ int dm_stats_set_program_id(struct dm_stats *dms, int allow_empty,
return 1;
}
uint64_t dm_stats_get_current_region(const struct dm_stats *dms)
uint64_t dm_stats_get_current_region(struct dm_stats *dms)
{
return dms->cur_region;
}
uint64_t dm_stats_get_current_area(const struct dm_stats *dms)
uint64_t dm_stats_get_current_area(struct dm_stats *dms)
{
return dms->cur_area;
}
uint64_t dm_stats_get_region_start(const struct dm_stats *dms, uint64_t *start,
uint64_t dm_stats_get_region_start(struct dm_stats *dms, uint64_t *start,
uint64_t region_id)
{
if (!dms || !dms->regions)
@@ -1289,7 +1272,7 @@ uint64_t dm_stats_get_region_start(const struct dm_stats *dms, uint64_t *start,
return 1;
}
uint64_t dm_stats_get_region_len(const struct dm_stats *dms, uint64_t *len,
uint64_t dm_stats_get_region_len(struct dm_stats *dms, uint64_t *len,
uint64_t region_id)
{
if (!dms || !dms->regions)
@@ -1298,7 +1281,7 @@ uint64_t dm_stats_get_region_len(const struct dm_stats *dms, uint64_t *len,
return 1;
}
uint64_t dm_stats_get_region_area_len(const struct dm_stats *dms, uint64_t *step,
uint64_t dm_stats_get_region_area_len(struct dm_stats *dms, uint64_t *step,
uint64_t region_id)
{
if (!dms || !dms->regions)
@@ -1307,26 +1290,23 @@ uint64_t dm_stats_get_region_area_len(const struct dm_stats *dms, uint64_t *step
return 1;
}
uint64_t dm_stats_get_current_region_start(const struct dm_stats *dms,
uint64_t *start)
uint64_t dm_stats_get_current_region_start(struct dm_stats *dms, uint64_t *start)
{
return dm_stats_get_region_start(dms, start, dms->cur_region);
}
uint64_t dm_stats_get_current_region_len(const struct dm_stats *dms,
uint64_t *len)
uint64_t dm_stats_get_current_region_len(struct dm_stats *dms, uint64_t *len)
{
return dm_stats_get_region_len(dms, len, dms->cur_region);
}
uint64_t dm_stats_get_current_region_area_len(const struct dm_stats *dms,
uint64_t *step)
uint64_t dm_stats_get_current_region_area_len(struct dm_stats *dms, uint64_t *step)
{
return dm_stats_get_region_area_len(dms, step, dms->cur_region);
}
uint64_t dm_stats_get_area_start(const struct dm_stats *dms, uint64_t *start,
uint64_t region_id, uint64_t area_id)
uint64_t dm_stats_get_area_start(struct dm_stats *dms, uint64_t *start,
uint64_t region_id, uint64_t area_id)
{
if (!dms || !dms->regions)
return_0;
@@ -1334,39 +1314,37 @@ uint64_t dm_stats_get_area_start(const struct dm_stats *dms, uint64_t *start,
return 1;
}
uint64_t dm_stats_get_current_area_start(const struct dm_stats *dms,
uint64_t *start)
uint64_t dm_stats_get_current_area_start(struct dm_stats *dms, uint64_t *start)
{
return dm_stats_get_area_start(dms, start,
dms->cur_region, dms->cur_area);
}
uint64_t dm_stats_get_current_area_len(const struct dm_stats *dms,
uint64_t *len)
uint64_t dm_stats_get_current_area_len(struct dm_stats *dms, uint64_t *len)
{
return dm_stats_get_region_area_len(dms, len, dms->cur_region);
}
const char *dm_stats_get_region_program_id(const struct dm_stats *dms,
const char *dm_stats_get_region_program_id(struct dm_stats *dms,
uint64_t region_id)
{
const char *program_id = dms->regions[region_id].program_id;
return (program_id) ? program_id : "";
}
const char *dm_stats_get_region_aux_data(const struct dm_stats *dms,
const char *dm_stats_get_region_aux_data(struct dm_stats *dms,
uint64_t region_id)
{
const char *aux_data = dms->regions[region_id].aux_data;
return (aux_data) ? aux_data : "" ;
}
const char *dm_stats_get_current_region_program_id(const struct dm_stats *dms)
const char *dm_stats_get_current_region_program_id(struct dm_stats *dms)
{
return dm_stats_get_region_program_id(dms, dms->cur_region);
}
const char *dm_stats_get_current_region_aux_data(const struct dm_stats *dms)
const char *dm_stats_get_current_region_aux_data(struct dm_stats *dms)
{
return dm_stats_get_region_aux_data(dms, dms->cur_region);
}
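All of the per-second metric getters in this file follow the same derivation: a raw counter accumulated over the sampling interval, divided by the interval length, which the library tracks internally in nanoseconds. A minimal standalone sketch of that pattern (the function and constant names are illustrative, not part of the library API):

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC_ILLUST 1000000000ULL

/* Turn a raw counter accumulated over `interval_ns` nanoseconds into an
 * iostat-style rate, e.g. reads issued -> reads per second. */
static double per_sec_sketch(uint64_t count, uint64_t interval_ns)
{
	if (!interval_ns)
		return 0.0;	/* no interval estimate: rate is undefined */
	return (double) count / ((double) interval_ns / NSEC_PER_SEC_ILLUST);
}
```

So 500 reads counted over a 2-second interval come out as 250 reads per second.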


@@ -168,11 +168,6 @@ uint64_t dm_timestamp_delta(struct dm_timestamp *ts1, struct dm_timestamp *ts2)
return t2 - t1;
}
void dm_timestamp_copy(struct dm_timestamp *ts_new, struct dm_timestamp *ts_old)
{
*ts_new = *ts_old;
}
void dm_timestamp_destroy(struct dm_timestamp *ts)
{
dm_free(ts);
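The first commit in this comparison corrects the report sleep for the time spent collecting and processing stats; in essence, the nominal interval is reduced by the elapsed work time before sleeping. A hedged sketch of that correction (illustrative names, not the dmsetup implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Given the nominal reporting interval and the time already spent
 * collecting/processing stats (both in nanoseconds), return the
 * remaining time to sleep so wakeups stay aligned to the interval. */
static uint64_t corrected_sleep_ns(uint64_t interval_ns, uint64_t elapsed_ns)
{
	/* If processing overran the interval, do not sleep at all. */
	return (elapsed_ns >= interval_ns) ? 0 : interval_ns - elapsed_ns;
}
```

With a 1-second interval and 250 ms of processing, the sketch sleeps 750 ms instead of a full second, keeping successive reports on the interval grid.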


@@ -27,7 +27,6 @@ dmstats \(em device-mapper statistics management
.br
.B dmstats create
.I device_name
.RB [ \-\-alldevices ]
.RB [[ \-\-areas
.IR nr_areas ]
.RB |[ \-\-areasize
@@ -44,7 +43,7 @@ dmstats \(em device-mapper statistics management
.br
.B dmstats delete
.I device_name
.RB [ \-\-alldevices ]
.RB [ \-\-force ]
.RB [ \-\-allregions
.RB | \-\-regionid
.IR id ]
@@ -116,14 +115,10 @@ when run as 'dmsetup stats'.
When no device argument is given dmstats will by default operate on all
device-mapper devices present. The \fBcreate\fP and \fBdelete\fP
commands require the use of \fB--alldevices\fP when used in this way.
commands require the use of \fB--force\fP when used in this way.
.SH OPTIONS
.TP
.B \-\-alldevices
If no device arguments are given allow operation on all devices when
creating or deleting regions.
.TP
.B \-\-allprograms
Include regions from all program IDs for list and report operations.
.TP
@@ -272,7 +267,7 @@ stdout.
.TP
.B delete
.I [ device_name ]
.RB [ \-\-alldevices ]
.RB [ \-\-force ]
.RB [ \-\-allregions
.RB | \-\-regionid
.IR id ]
@@ -287,8 +282,7 @@ of subsequent list, print, or report operations.
All regions registered on a device may be removed using
\fB\-\-allregions\fP.
To remove all regions on all devices both \fB--allregions\fP and
\fB\-\-alldevices\fP must be used.
To remove all regions on all devices \fB\-\-force\fP must be used.
.br
.TP
.B help
@@ -501,20 +495,6 @@ The program ID value associated with this region.
.br
The auxiliary data value associated with this region.
.br
.HP
.B interval_ns
.br
The estimated interval over which the current counter values have
accumulated. The value is reported as an integer expressed in units
of nanoseconds.
.br
.HP
.B interval
.br
The estimated interval over which the current counter values have
accumulated. The value is reported as a real number in units of
seconds.
.br
.SS Basic counters
Basic counters provide access to the raw counter data from the kernel,
allowing further processing to be carried out by another program.
@@ -621,7 +601,7 @@ Created region: 0
Delete all regions on all devices
.br
.br
# dmstats delete --alldevices --allregions
# dmstats delete --allregions --force
.br
.br
@@ -668,11 +648,11 @@ vg00-lvol1 0 0 0.00 0.00 8.00 0.00 48.00k 0 6.00k 0.
.br
vg00-lvol1 0 1 0.00 0.00 22.00 0.00 624.00k 0 28.00k 0.00 5.23 11.50 5.36
.br
vg00-lvol1 0 2 0.00 0.00 353.00 0.00 1.84m 0 5.00k 0.00 1.34 47.40 1.33
vg00/lvol1 0 2 0.00 0.00 353.00 0.00 1.84m 0 5.00k 0.00 1.34 47.40 1.33
.br
vg00-lvol1 0 3 0.00 0.00 73.00 0.00 592.00k 0 8.00k 0.00 2.10 15.30 2.10
vg00/lvol1 0 3 0.00 0.00 73.00 0.00 592.00k 0 8.00k 0.00 2.10 15.30 2.10
.br
vg00-lvol1 0 4 0.00 0.00 5.00 0.00 52.00k 0 10.00k 0.00 4.00 2.00 4.00
vg00/lvol1 0 4 0.00 0.00 5.00 0.00 52.00k 0 10.00k 0.00 4.00 2.00 4.00
.br
[...]
.br


@@ -162,10 +162,6 @@ lvconvert \(em convert a logical volume from linear to mirror or snapshot
.IR ChunkSize [ bBsSkKmMgG ]]
.RB [ \-\-cachemode
.RI { writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR policy ]
.RB [ \-\-cachesettings
.IR key=value ]
.RB [ \-\-poolmetadata
.IR CachePoolMetadataLogicalVolume { Name | Path }
|
@@ -226,21 +222,10 @@ Converts logical volume to a cached LV with the use of cache pool
specified with \fB\-\-cachepool\fP.
For more information on cache pool LVs and cache LVs, see \fBlvmcache\fP(7).
.TP
.B \-\-cachepolicy \fIpolicy
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy. \fImq\fP is the basic policy name. \fIsmq\fP is a more advanced
version available in newer kernels.
.TP
.BR \-\-cachepool " " \fICachePoolLV
This argument is necessary when converting a logical volume to a cache LV.
For more information on cache pool LVs and cache LVs, see \fBlvmcache\fP(7).
.TP
.BR \-\-cachesettings " " \fIkey=value
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache tunable settings. In most use-cases, default values should be adequate.
Special string value \fIdefault\fP switches setting back to its default kernel value
and removes it from the list of settings stored in lvm2 metadata.
.TP
.BR \-m ", " \-\-mirrors " " \fIMirrors
Specifies the degree of the mirror you wish to create.
For example, "\fB\-m 1\fP" would convert the original logical
@@ -511,7 +496,7 @@ See \fBlvmthin\fP(7) for more info about thin provisioning support.
Uncaches \fICacheLogicalVolume\fP.
Before the volume becomes uncached, cache is flushed.
Unlike with \fB\-\-splitcache\fP the cache pool volume is removed.
This option could be seen as an inverse of \fB\-\-cache\fP.
This option could seen as an inverse of \fB\-\-cache\fP.
.SH Examples
Converts the linear logical volume "vg00/lvol1" to a two-way mirror


@@ -13,13 +13,13 @@ lvcreate \- create a logical volume in an existing volume group
.RI { y | n }]
.RB [ \-H | \-\-cache ]
.RB [ \-\-cachemode
.RI { passthrough | writeback | writethrough }]
.RI { writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR policy ]
.RB [ \-\-cachepool
.IR CachePoolLogicalVolume { Name | Path }
.RB [ \-\-cachesettings
.IR key=value ]
.RB [ \-\-cachepool
.IR CachePoolLogicalVolume { Name | Path }
.RB [ \-c | \-\-chunksize
.IR ChunkSize [ bBsSkKmMgG ]]
.RB [ \-\-commandprofile
@@ -188,7 +188,7 @@ See \fBlvmcache\fP(7) for more info about caching support.
Note that the cache segment type requires a dm-cache kernel module version
1.3.0 or greater.
.TP
.IR \fB\-\-cachemode " {" passthrough | writeback | writethrough }
.IR \fB\-\-cachemode " {" writeback | writethrough }
Specifying a cache mode determines when the writes to a cache LV
are considered complete. When \fIwriteback\fP is specified, a write is
considered complete as soon as it is stored in the cache pool LV.
@@ -198,21 +198,10 @@ While \fIwritethrough\fP may be slower for writes, it is more
resilient if something should happen to a device associated with the
cache pool LV.
.TP
.B \-\-cachepolicy \fIpolicy
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy. \fImq\fP is the basic policy name. \fIsmq\fP is a more advanced
version available in newer kernels.
.TP
.IR \fB\-\-cachepool " " CachePoolLogicalVolume { Name | Path }
Specifies the name of the cache pool volume. Alternatively, the pool name can be
appended to the volume group name argument.
.TP
.BR \-\-cachesettings " " \fIkey=value
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache tunable settings. In most use-cases, default values should be adequate.
Special string value \fIdefault\fP switches setting back to its default kernel value
and removes it from the list of settings stored in lvm2 metadata.
.TP
.BR \-c ", " \-\-chunksize " " \fIChunkSize [ \fIbBsSkKmMgG ]
Gives the size of chunk for snapshot, cache pool and thin pool logical volumes.
Default unit is in kilobytes.
@@ -262,6 +251,11 @@ Ignore the flag to skip Logical Volumes during activation.
Use \fB\-\-setactivationskip\fP option to set or reset
activation skipping flag persistently for logical volume.
.TP
.BR \-\-cachepolicy " " policy ", " \-\-cachesettings " " key=value
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy and its associated tunable settings. In most use-cases,
default values should be adequate.
.TP
.B \-\-ignoremonitoring
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.


@@ -398,9 +398,7 @@ for the kernel device-mapper.
%defattr(-,root,root,-)
%doc COPYING COPYING.LIB WHATS_NEW_DM VERSION_DM README INSTALL
%attr(755,root,root) %{_sbindir}/dmsetup
%{_sbindir}/dmstats
%{_mandir}/man8/dmsetup.8.gz
%{_mandir}/man8/dmstats.8.gz
%if %{enable_udev}
%doc udev/12-dm-permissions.rules
%dir %{_udevbasedir}


@@ -199,8 +199,7 @@ install: .tests-stamp lib/paths-installed
$(INSTALL_PROGRAM) api/*.{t,py} $(DATADIR)/api/
$(INSTALL_DATA) lib/paths-installed $(DATADIR)/lib/paths
$(INSTALL_DATA) $(LIB_FLAVOURS) $(DATADIR)/lib/
for i in cache-mq cache-smq thin-performance ; do \
$(INSTALL_DATA) $(abs_top_srcdir)/conf/$$i.profile $(DATADIR)/lib/$$i.profile; done
$(INSTALL_DATA) $(abs_top_srcdir)/conf/thin-performance.profile $(DATADIR)/lib/thin-performance.profile
$(INSTALL_SCRIPT) $(LIB_SHARED) $(DATADIR)/lib/
for i in $(CMDS); do (cd $(DATADIR)/lib && $(LN_S) -f lvm-wrapper $$i); done


@@ -16,8 +16,6 @@ test -e LOCAL_LVMPOLLD && skip
aux have_cache 1 3 0 || skip
aux prepare_vg 3
aux lvmconf 'global/cache_disabled_features = [ "policy_smq" ]'
lvcreate --type cache-pool -an -v -L 2 -n cpool $vg
lvcreate -H -L 4 -n corigin --cachepool $vg/cpool
lvcreate -n noncache -l 1 $vg


@@ -18,8 +18,6 @@ test -e LOCAL_LVMPOLLD && skip
aux have_cache 1 3 0 || skip
aux have_raid 1 0 0 || skip
aux lvmconf 'global/cache_disabled_features = [ "policy_smq" ]'
aux prepare_vg 5 80
# Bug 1095843


@@ -23,7 +23,6 @@ aux have_cache 1 3 0 || skip
# FIXME: parallel cache metadata allocator is crashing when used value 8000!
aux prepare_vg 5 80000
aux lvmconf 'global/cache_disabled_features = [ "policy_smq" ]'
#######################
# Cache_Pool creation #


@@ -204,11 +204,9 @@ install_tools_static: lvm.static
install_dmsetup_dynamic: dmsetup
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
$(LN_S) -f $(<F) $(sbindir)/dmstats
install_dmsetup_static: dmsetup.static
$(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F)
$(LN_S) -f $(<F) $(sbindir)/dmstats
install_device-mapper: $(INSTALL_DMSETUP_TARGETS)

File diff suppressed because it is too large


@@ -690,9 +690,9 @@ static int _lvchange_cachepolicy(struct cmd_context *cmd, struct logical_volume
goto out;
}
if (!get_cache_params(cmd, NULL, &name, &settings))
if (!get_cache_policy_params(cmd, &name, &settings))
goto_out;
if (!cache_set_policy(first_seg(lv), name, settings))
if (!lv_cache_set_policy(lv, name, settings))
goto_out;
if (!lv_update_and_reload(lv))
goto_out;


@@ -50,7 +50,7 @@ struct lvconvert_params {
uint32_t stripes;
uint32_t stripe_size;
uint32_t read_ahead;
const char *cache_mode; /* cache */
uint64_t feature_flags; /* cache_pool */
const char *policy_name; /* cache */
struct dm_config_tree *policy_settings; /* cache */
@@ -299,14 +299,26 @@ static int _read_pool_params(struct cmd_context *cmd, int *pargc, char ***pargv,
} else if (!strcmp(type_str, "thin-pool"))
thinpool = 1;
if (lp->cache && !cachepool) {
log_error("--cache requires --cachepool.");
return 0;
}
if ((lp->cache || cachepool) &&
!get_cache_params(cmd, &lp->cache_mode, &lp->policy_name, &lp->policy_settings)) {
log_error("Failed to parse cache policy and/or settings.");
return 0;
if (cachepool) {
const char *cachemode = arg_str_value(cmd, cachemode_ARG, NULL);
if (!cachemode)
cachemode = find_config_tree_str(cmd, allocation_cache_pool_cachemode_CFG, NULL);
if (!set_cache_pool_feature(&lp->feature_flags, cachemode))
return_0;
if (!get_cache_policy_params(cmd, &lp->policy_name, &lp->policy_settings)) {
log_error("Failed to parse cache policy and/or settings.");
return 0;
}
} else {
if (arg_from_list_is_set(cmd, "is valid only with cache pools",
cachepool_ARG, cachemode_ARG, -1))
return_0;
if (lp->cache) {
log_error("--cache requires --cachepool.");
return 0;
}
}
if (thinpool) {
@@ -3068,9 +3080,10 @@ mda_write:
seg->chunk_size = lp->chunk_size;
seg->discards = lp->discards;
seg->zero_new_blocks = lp->zero ? 1 : 0;
seg->feature_flags = lp->feature_flags; /* cache-pool */
if ((lp->policy_name || lp->policy_settings) &&
!cache_set_policy(seg, lp->policy_name, lp->policy_settings))
!lv_cache_set_policy(seg->lv, lp->policy_name, lp->policy_settings))
return_0;
/* Rename deactivated metadata LV to have _tmeta suffix */
@@ -3178,12 +3191,6 @@ static int _lvconvert_cache(struct cmd_context *cmd,
if (!(cache_lv = lv_cache_create(pool_lv, origin_lv)))
return_0;
if (!cache_set_mode(first_seg(cache_lv), lp->cache_mode))
return_0;
if (!cache_set_policy(first_seg(cache_lv), lp->policy_name, lp->policy_settings))
return_0;
if (!lv_update_and_reload(cache_lv))
return_0;
@@ -3420,12 +3427,14 @@ static int lvconvert_single(struct cmd_context *cmd, struct lvconvert_params *lp
}
/*
* Request a transient lock. If the LV is active, it has a persistent
* lock already, and this request does nothing. If the LV is not
* active, this acquires a transient lock that will be released when
* the command exits.
* If the lv is inactive before and after the command, the
* use of PERSISTENT here means the lv will remain locked as
* an effect of running the lvconvert.
* To unlock it, it would need to be activated+deactivated.
* Or, we could identify the commands for which the lv remains
* inactive, and not use PERSISTENT here for those cases.
*/
if (!lockd_lv(cmd, lv, "ex", 0))
if (!lockd_lv(cmd, lv, "ex", LDLV_PERSISTENT))
goto_bad;
/*


@@ -567,15 +567,22 @@ static int _read_mirror_and_raid_params(struct cmd_context *cmd,
static int _read_cache_params(struct cmd_context *cmd,
struct lvcreate_params *lp)
{
const char *cachemode;
if (!seg_is_cache(lp) && !seg_is_cache_pool(lp))
return 1;
if (!get_cache_params(cmd,
&lp->cache_mode,
&lp->policy_name,
&lp->policy_settings))
if (!(cachemode = arg_str_value(cmd, cachemode_ARG, NULL)))
cachemode = find_config_tree_str(cmd, allocation_cache_pool_cachemode_CFG, NULL);
if (!set_cache_pool_feature(&lp->feature_flags, cachemode))
return_0;
if (!get_cache_policy_params(cmd, &lp->policy_name, &lp->policy_settings)) {
log_error("Failed to parse cache policy and/or settings.");
return 0;
}
return 1;
}
@@ -1082,6 +1089,8 @@ static int _determine_cache_argument(struct volume_group *vg,
/* If cache args not given, use those from cache pool */
if (!arg_is_set(cmd, chunksize_ARG))
lp->chunk_size = first_seg(lv)->chunk_size;
if (!arg_is_set(cmd, cachemode_ARG))
lp->feature_flags = first_seg(lv)->feature_flags;
} else if (lv) {
/* Origin exists, create cache pool volume */
if (!validate_lv_cache_create_origin(lv))


@@ -1399,10 +1399,8 @@ static int _validate_cachepool_params(const char *name,
return 1;
}
int get_cache_params(struct cmd_context *cmd,
const char **mode,
const char **name,
struct dm_config_tree **settings)
int get_cache_policy_params(struct cmd_context *cmd, const char **name,
struct dm_config_tree **settings)
{
const char *str;
struct arg_value_group_list *group;
@@ -1410,14 +1408,7 @@ int get_cache_params(struct cmd_context *cmd,
struct dm_config_node *cn;
int ok = 0;
if (mode)
*mode = arg_str_value(cmd, cachemode_ARG, NULL);
if (name)
*name = arg_str_value(cmd, cachepolicy_ARG, NULL);
if (!settings)
return 1;
*name = arg_str_value(cmd, cachepolicy_ARG, DEFAULT_CACHE_POOL_POLICY);
dm_list_iterate_items(group, &cmd->arg_value_groups) {
if (!grouped_arg_is_set(group->arg_values, cachesettings_ARG))
@@ -1438,9 +1429,6 @@ int get_cache_params(struct cmd_context *cmd,
goto_out;
}
if (!current)
return 1;
if (!(result = dm_config_flatten(current)))
goto_out;


@@ -190,10 +190,9 @@ int get_pool_params(struct cmd_context *cmd,
int get_stripe_params(struct cmd_context *cmd, uint32_t *stripes,
uint32_t *stripe_size);
int get_cache_params(struct cmd_context *cmd,
const char **mode,
const char **name,
struct dm_config_tree **settings);
int get_cache_policy_params(struct cmd_context *cmd,
const char **name,
struct dm_config_tree **settings);
int change_tag(struct cmd_context *cmd, struct volume_group *vg,
struct logical_volume *lv, struct physical_volume *pv, int arg);


@@ -47,7 +47,7 @@ BLKID_RULE=IMPORT{program}=\"${SBIN}\/blkid -o udev -p \$$tempnode\"
endif
ifeq ("@UDEV_SYSTEMD_BACKGROUND_JOBS@", "yes")
PVSCAN_RULE=ACTION\!=\"remove\", ENV{LVM_PV_GONE}==\"1\", RUN\+=\"@bindir@/systemd-run $(LVM_EXEC)\/lvm pvscan --cache \$$major\:\$$minor\", GOTO=\"lvm_end\"\nENV{SYSTEMD_ALIAS}=\"\/dev\/block\/\$$major:\$$minor\"\nENV{ID_MODEL}=\"LVM PV \$$env{ID_FS_UUID_ENC} on \/dev\/\$$name\"\nENV{SYSTEMD_WANTS}\+=\"lvm2-pvscan@\$$major:\$$minor.service\"
PVSCAN_RULE=ACTION\!=\"remove\", ENV{LVM_PV_GONE}==\"1\", RUN\+=\"@bindir@/systemd-run $(LVM_EXEC)\/lvm pvscan --cache \$$major\:\$$minor\", GOTO=\"lvm_end\"\nENV{SYSTEMD_ALIAS}=\"\/dev\/block\/\$$major:\$$minor\"\nENV{ID_MODEL}=\"LVM PV \$$env{ID_FS_UUID_ENC} on \/dev\/\$$name\"\nENV{SYSTEMD_WANTS}=\"lvm2-pvscan@\$$major:\$$minor.service\"
else
PVSCAN_RULE=RUN\+\=\"$(LVM_EXEC)/lvm pvscan --background --cache --activate ay --major \$$major --minor \$$minor\", ENV{LVM_SCANNED}=\"1\"
endif