Mirror of git://sourceware.org/git/lvm2.git

Compare commits

7 Commits

Author SHA1 Message Date
David Teigland
72c9735e01 lvconvert: remove unused calls for snapshots
snapshot commands are no longer called from the
monolithic lvconvert code, so remove the unused code.
2016-11-30 16:37:34 -06:00
David Teigland
24e69e5feb lvconvert: snapshot: use command definitions
Lift all the snapshot utilities (merge, split, combine)
out of the monolithic lvconvert implementation, using
the command definitions.  The old code associated with
these commands is now unused and will be removed separately.
2016-11-30 16:37:30 -06:00
David Teigland
4cce085468 lvconvert: remove unused calls for repair and replace
repair and replace are no longer called from the
monolithic lvconvert code, so remove the unused code.
2016-11-30 16:37:29 -06:00
David Teigland
ca7357b254 lvconvert: repair and replace: use command definitions
This lifts the lvconvert --repair and --replace commands
out of the monolithic lvconvert implementation.  The
previous calls into repair/replace can no longer be
reached and will be removed in a separate commit.
2016-11-30 16:37:29 -06:00
David Teigland
4f39d020d3 lvchange: make use of command definitions
Reorganize the lvchange code to take advantage of
the command definition, and remove the validation
that is done by the command definition rules.
2016-11-30 16:37:29 -06:00
David Teigland
268374c235 process_each_lv: add check_single_lv function
The new check_single_lv() function is called prior to the
existing process_single_lv().  If the check function returns 0,
the LV will not be processed.

The check_single_lv function is meant to be a standard method
to validate the combination of specific command + specific LV,
and decide if the combination is allowed.  The check_single
function can be used by anything that calls process_each_lv.

As commands are migrated to take advantage of command
definitions, each command definition gets its own entry
point which calls process_each for itself, passing a
pair of check_single/process_single functions which can
be specific to the narrowly defined command def.
2016-11-30 16:37:29 -06:00
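
A minimal sketch of the check/process pairing described in this
commit (illustrative C only, not the lvm2 source; every type and
function name below is an assumption):

#include <stdio.h>

struct cmd_context { int unused; };
struct logical_volume { const char *name; int is_thin_pool; };

typedef int (*check_single_fn)(struct cmd_context *cmd,
                               struct logical_volume *lv);
typedef int (*process_single_fn)(struct cmd_context *cmd,
                                 struct logical_volume *lv);

static int process_each_lv(struct cmd_context *cmd,
                           struct logical_volume **lvs, int count,
                           check_single_fn check_single,
                           process_single_fn process_single)
{
        int i;

        for (i = 0; i < count; i++) {
                /* Skip any LV the check function rejects. */
                if (check_single && !check_single(cmd, lvs[i]))
                        continue;
                if (!process_single(cmd, lvs[i]))
                        return 0;
        }

        return 1;
}

/* A check/process pair that a narrowly defined command def
 * (say, one operating only on thin pools) might pass in. */
static int _check_thin_pool(struct cmd_context *cmd,
                            struct logical_volume *lv)
{
        return lv->is_thin_pool;
}

static int _process_thin_pool(struct cmd_context *cmd,
                              struct logical_volume *lv)
{
        printf("processing %s\n", lv->name);
        return 1;
}

int main(void)
{
        struct cmd_context cmd = { 0 };
        struct logical_volume pool = { "vg/pool", 1 }, lv1 = { "vg/lv1", 0 };
        struct logical_volume *lvs[] = { &pool, &lv1 };

        return !process_each_lv(&cmd, lvs, 2,
                                _check_thin_pool, _process_thin_pool);
}

Keeping the eligibility test separate means each command definition
states its allowed command + LV combinations once, rather than
re-checking them inside the processing function.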
David Teigland
45e23131b8 commands: new method for defining commands
. Define a prototype for every lvm command.
. Match every user command with one definition.
. Generate help text and man pages from them.

The new file command-lines.in defines a prototype for every
unique lvm command.  A unique lvm command is a unique
combination of: command name + required option args +
required positional args.  Each of these prototypes also
includes the optional option args and optional positional
args that the command will accept, a description, and a
unique string ID for the definition.  Any valid command
will match one of the prototypes.

Here's an example of the lvresize command definitions from
command-lines.in; there are three unique lvresize commands:

lvresize --size SizeMB LV
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync, --reportformat String, --resizefs,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SizeMB
OP: PV ...
ID: lvresize_by_size
DESC: Resize an LV by a specified size.

lvresize LV PV ...
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --resizefs, --stripes Number, --stripesize SizeKB
ID: lvresize_by_pv
DESC: Resize an LV by specified PV extents.
FLAGS: SECONDARY_SYNTAX

lvresize --poolmetadatasize SizeMB LV_thinpool
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --stripes Number, --stripesize SizeKB
OP: PV ...
ID: lvresize_pool_metadata_by_size
DESC: Resize a pool metadata SubLV by a specified size.

The three commands have separate definitions because they have
different required parameters.  Required parameters are specified
on the first line of the definition.  Optional option args are
listed after OO, and optional positional args are listed after OP.

This data is used to generate corresponding command definition
structures for lvm in command-lines.h.  Usage/help output is also
auto-generated, so it is always in sync with the definitions.

Example of the corresponding generated structure in
command-lines.h for the first lvresize prototype
(these structures are never edited directly):

commands[83].name = "lvresize";
commands[83].command_line_id = "lvresize_by_size";
commands[83].command_line_enum = lvresize_by_size_CMD;
commands[83].fn = lvresize;
commands[83].ro_count = 1;
commands[83].rp_count = 1;
commands[83].oo_count = 22;
commands[83].op_count = 1;
commands[83].cmd_flags = 0;
commands[83].desc = "DESC: Resize an LV by a specified size.";
commands[83].usage = "lvresize --size Number[m|unit] LV"
" [ --resizefs, --poolmetadatasize Number[m|unit], COMMON_OPTIONS ]"
" [ PV ... ]";
commands[83].usage_common =
" [ --alloc contiguous|cling|cling_by_tags|normal|anywhere|inherit, --nosync, --reportformat String, --autobackup y|n, --stripes Number, --stripesize Number[k|unit], --nofsck, --commandprofile String, --config String, --debug, --driverloaded y|n, --help, --profile String, --quiet, --verbose, --version, --yes, --test, --force, --noudevsync ]";
commands[83].required_opt_args[0].opt = size_ARG;
commands[83].required_opt_args[0].def.val_bits = val_enum_to_bit(sizemb_VAL);
commands[83].required_pos_args[0].pos = 1;
commands[83].required_pos_args[0].def.val_bits = val_enum_to_bit(lv_VAL);
commands[83].optional_opt_args[0].opt = commandprofile_ARG;
commands[83].optional_opt_args[0].def.val_bits = val_enum_to_bit(string_VAL);
commands[83].optional_opt_args[1].opt = config_ARG;
commands[83].optional_opt_args[1].def.val_bits = val_enum_to_bit(string_VAL);
commands[83].optional_opt_args[2].opt = debug_ARG;
commands[83].optional_opt_args[3].opt = driverloaded_ARG;
commands[83].optional_opt_args[3].def.val_bits = val_enum_to_bit(bool_VAL);
commands[83].optional_opt_args[4].opt = help_ARG;
commands[83].optional_opt_args[5].opt = profile_ARG;
commands[83].optional_opt_args[5].def.val_bits = val_enum_to_bit(string_VAL);
commands[83].optional_opt_args[6].opt = quiet_ARG;
commands[83].optional_opt_args[7].opt = verbose_ARG;
commands[83].optional_opt_args[8].opt = version_ARG;
commands[83].optional_opt_args[9].opt = yes_ARG;
commands[83].optional_opt_args[10].opt = test_ARG;
commands[83].optional_opt_args[11].opt = alloc_ARG;
commands[83].optional_opt_args[11].def.val_bits = val_enum_to_bit(alloc_VAL);
commands[83].optional_opt_args[12].opt = autobackup_ARG;
commands[83].optional_opt_args[12].def.val_bits = val_enum_to_bit(bool_VAL);
commands[83].optional_opt_args[13].opt = force_ARG;
commands[83].optional_opt_args[14].opt = nofsck_ARG;
commands[83].optional_opt_args[15].opt = nosync_ARG;
commands[83].optional_opt_args[16].opt = noudevsync_ARG;
commands[83].optional_opt_args[17].opt = reportformat_ARG;
commands[83].optional_opt_args[17].def.val_bits = val_enum_to_bit(string_VAL);
commands[83].optional_opt_args[18].opt = resizefs_ARG;
commands[83].optional_opt_args[19].opt = stripes_ARG;
commands[83].optional_opt_args[19].def.val_bits = val_enum_to_bit(number_VAL);
commands[83].optional_opt_args[20].opt = stripesize_ARG;
commands[83].optional_opt_args[20].def.val_bits = val_enum_to_bit(sizekb_VAL);
commands[83].optional_opt_args[21].opt = poolmetadatasize_ARG;
commands[83].optional_opt_args[21].def.val_bits = val_enum_to_bit(sizemb_VAL);
commands[83].optional_pos_args[0].pos = 2;
commands[83].optional_pos_args[0].def.val_bits = val_enum_to_bit(pv_VAL);
commands[83].optional_pos_args[0].def.flags = ARG_DEF_FLAG_MAY_REPEAT;

Every user-entered command is compared against the set of
command structures, and matched with one.  An error is
reported if an entered command does not have the required
parameters for any definition.  The closest match is printed
as a suggestion, and running lvresize --help will display
the usage for each possible lvresize command.
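
A rough sketch of that matching step, reusing field names from the
generated structure shown above (the selection logic here is an
assumption for illustration, not the actual lvm2 code):

#include <string.h>

struct command {
        const char *name;
        const char *command_line_id;
        int ro_count;              /* number of required --option args */
        int rp_count;              /* number of required positional args */
        int required_opt_args[8];  /* option enums, e.g. size_ARG */
};

/* How many of cmd's required --option args did the user enter? */
static int _required_opts_found(const struct command *cmd,
                                const int *opts, int opt_count)
{
        int i, j, found = 0;

        for (i = 0; i < cmd->ro_count; i++)
                for (j = 0; j < opt_count; j++)
                        if (opts[j] == cmd->required_opt_args[i]) {
                                found++;
                                break;
                        }

        return found;
}

/* Return the definition whose required args are all satisfied, or
 * NULL with *closest set so its usage can be suggested instead. */
static const struct command *_find_command(const struct command *cmds,
                                           int ncmds, const char *name,
                                           const int *opts, int opt_count,
                                           int pos_count,
                                           const struct command **closest)
{
        int i, found, best = -1;

        *closest = NULL;

        for (i = 0; i < ncmds; i++) {
                if (strcmp(cmds[i].name, name))
                        continue;
                found = _required_opts_found(&cmds[i], opts, opt_count);
                if (found == cmds[i].ro_count &&
                    pos_count >= cmds[i].rp_count)
                        return &cmds[i];
                if (found > best) {
                        best = found;
                        *closest = &cmds[i];
                }
        }

        return NULL;
}

A full match wins immediately; otherwise the highest-scoring
definition becomes the "Closest command usage" printed in the
error path shown below.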

The prototype syntax used for help/man output includes
required --option and positional args on the first line,
and optional --option and positional args enclosed in [ ]
on subsequent lines.

  command_name <required_opt_args> <required_pos_args>
          [ <optional_opt_args> ]
          [ <optional_pos_args> ]

$ lvresize --help
  lvresize - Resize a logical volume

  Resize an LV by a specified size.
  lvresize --size Number[m|unit] LV
        [ --resizefs,
          --poolmetadatasize Number[m|unit],
          COMMON_OPTIONS ]
        [ PV ... ]

  Resize a pool metadata SubLV by a specified size.
  lvresize --poolmetadatasize Number[m|unit] LV_thinpool
        [ COMMON_OPTIONS ]
        [ PV ... ]

  Common options:
        [ --alloc contiguous|cling|cling_by_tags|normal|anywhere|inherit,
          --nosync,
          --reportformat String,
          --autobackup y|n,
          --stripes Number,
          --stripesize Number[k|unit],
          --nofsck,
          --commandprofile String,
          --config String,
          --debug,
          --driverloaded y|n,
          --help,
          --profile String,
          --quiet,
          --verbose,
          --version,
          --yes,
          --test,
          --force,
          --noudevsync ]

  (Use --help --help for usage notes.)

$ lvresize --poolmetadatasize 4
  Failed to find a matching command definition.
  Closest command usage is:
  lvresize --poolmetadatasize Number[m|unit] LV_thinpool

Command definitions that are not to be advertised/suggested
have the flag SECONDARY_SYNTAX.  These commands will not be
printed in the normal help output.

Man page prototypes are also generated from the same original
command definitions, and are always in sync with the code
and help text.

Very early in command execution, a matching command definition
is found.  lvm then knows the operation being done, and that
the provided args conform to the definition.  This will allow
lots of ad hoc checking/validation to be removed throughout
the code.

Each command definition can also be routed to a specific
function to implement it.  The function is associated with
an enum value for the command definition (generated from
the ID string).  These per-command-definition implementation
functions have not yet been created, so all commands
currently fall back to the existing per-command-name
implementation functions.

Using per-command-definition functions will allow lots of
code to be removed which tries to figure out what the
command is meant to do.  This is currently based on ad hoc
and complicated option analysis.  When using the new
functions, what the command is doing is already known
from the associated command definition.
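
For illustration, once per-command-definition functions exist, the
dispatch can reduce to a table lookup on the generated enum (a
sketch under that assumption; the function bodies and table are
invented, the *_CMD names mirror the IDs above):

struct cmd_context;

enum {
        lvresize_by_size_CMD,
        lvresize_by_pv_CMD,
        lvresize_pool_metadata_by_size_CMD,
        NUM_CMDS
};

typedef int (*command_fn)(struct cmd_context *cmd);

static int _lvresize_by_size(struct cmd_context *cmd) { return 1; }
static int _lvresize_by_pv(struct cmd_context *cmd) { return 1; }
static int _lvresize_pool_metadata_by_size(struct cmd_context *cmd) { return 1; }

static const command_fn _command_fns[NUM_CMDS] = {
        [lvresize_by_size_CMD]               = _lvresize_by_size,
        [lvresize_by_pv_CMD]                 = _lvresize_by_pv,
        [lvresize_pool_metadata_by_size_CMD] = _lvresize_pool_metadata_by_size,
};

/* Once matched, the operation is known; no option analysis needed. */
static int _run_command(struct cmd_context *cmd, int command_line_enum)
{
        return _command_fns[command_line_enum](cmd);
}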

So, this first phase validates every user-entered command
against the set of command prototypes, then calls the existing
implementation.  The second phase can associate an implementation
function with each definition, and take further advantage of the
known operation to avoid the complicated option analysis.
2016-11-30 16:37:20 -06:00
321 changed files with 16575 additions and 15596 deletions

View File

@@ -59,8 +59,6 @@ liblvm: lib
daemons: lib libdaemon tools
tools: lib libdaemon device-mapper
po: tools daemons
man: tools
all_man: tools
scripts: liblvm libdm
lib.device-mapper: include.device-mapper

View File

@@ -1 +1 @@
2.02.169(2)-git (2016-11-30)
2.02.168(2)-git (2016-11-05)

View File

@@ -1 +1 @@
1.02.138-git (2016-11-30)
1.02.137-git (2016-11-05)

View File

@@ -1,83 +1,6 @@
Version 2.02.169 -
=====================================
Reject writemostly/writebehind in lvchange during resynchronization.
Deactivate active origin first before removal for improved workflow.
Fix regression of accepting options --type and -m with lvresize (2.02.158).
Add lvconvert --swapmetadata, new specific way to swap pool metadata LVs.
Add lvconvert --startpoll, new specific way to start polling conversions.
Add lvconvert --mergethin, new specific way to merge thin snapshots.
Add lvconvert --mergemirrors, new specific way to merge split mirrors.
Add lvconvert --mergesnapshot, new specific way to combine cow LVs.
Split up lvconvert code based on command definitions.
Split up lvchange code based on command definitions.
Generate help output and man pages from command definitions.
Verify all command line items against command definition.
Match every command run to one command definition.
Specify every allowed command definition/syntax in command-lines.in.
Add extra memory page when limiting pthread stack size in clvmd.
Support striped/raid0* <-> raid10_near conversions
Support shrinking of RaidLvs
Support region size changes on existing RaidLVs
Avoid parallel usage of cpg_mcast_joined() in clvmd with corosync.
Support raid6_{ls,rs,la,ra}_6 segment types and conversions from/to it.
Support raid6_n_6 segment type and conversions from/to it.
Support raid5_n segment type and conversions from/to it.
Support new internal command _dmeventd_thin_command.
Introduce new dmeventd/thin_command configurable setting.
Use new default units 'r' for displaying sizes.
Also unmount mount point on top of MD device if using blkdeactivate -u.
Restore check preventing resize of cache type volumes (2.02.158).
Add missing udev sync when flushing dirty cache content.
vgchange -p accepts only uint32 numbers.
Report thin LV date for merged LV when the merge is in progress.
Detect if snapshot merge really started before polling for progress.
Checking LV for merging origin requires also it has merged snapshot.
Extend validation of metadata processing.
Enable usage of cached volumes as snapshot origin LV.
Fix displayed lv name when splitting snapshot (2.02.146).
Warn about command not making metadata backup just once per command.
Enable usage of cached volume as thin volume's external origin.
Support cache volume activation with -real layer.
Improve search of lock-holder for external origin and thin-pool.
Support status checking of cache volume used in layer.
Avoid shifting by one number of blocks when clearing dirty cache volume.
Extend metadata validation of external origin LV use count.
Fix dm table when the last user of active external origin is removed.
Improve reported lvs status for active external origin volume.
Fix table load for splitted RAID LV and require explicit activation.
Always active splitted RAID LV exclusively locally.
Do not use LV RAID status bit for segment status.
Check segtype directly instead of checking RAID in segment status.
Reusing exiting code for raid image removal.
Fix pvmove leaving -pvmove0 error device in clustered VG.
Avoid adding extra '_' at end of raid extracted images or metadata.
Optimize another _rmeta clearing code.
Fix deactivation of raid orphan devices for clustered VG.
Fix lvconvert raid1 to mirror table reload order.
Add internal function for separate mirror log preparation.
Fix segfault in lvmetad from missing NULL in daemon_reply_simple.
Simplify internal _info_run() and use _setup_task_run() for mknod.
Better API for internal function _setup_task_run.
Avoid using lv_has_target_type() call within lv_info_with_seg_status.
Simplify internal lv_info_with_seg_status API.
Decide which status is needed in one place for lv_info_with_seg_status.
Fix matching of LV segment when checking for it info status.
Report log_warn when status cannot be parsed.
Test segment type before accessing segment members when checking status.
Implement compatible target function for stripe segment.
Use status info to report merge failed and snapshot invalid lvs fields.
Version 2.02.168 - 30th November 2016
=====================================
Display correct sync_percent on large RaidLVs
lvmdbusd --blackboxsize <n> added, used to override default size of 16
Allow a transiently failed RaidLV to be refreshed
Use lv_update_and_reload() inside mirror code where it applies.
Preserve mirrored status for temporary layered mirrors.
Use transient raid check before repairing raid volume.
Implement transient status check for raid volumes.
Version 2.02.168 -
====================================
Only log msg as debug if lvm2-lvmdbusd unit missing for D-Bus notification.
Avoid duplicated underscore in name of extracted LV image.
Missing stripe filler now could be also 'zero'.
lvconvert --repair accepts --interval and --background option.
More efficiently prepare _rmeta devices when creating a new raid LV.

View File

@@ -1,24 +1,5 @@
Version 1.02.138 -
=====================================
Add extra memory page when limiting pthread stack size in dmeventd.
Avoids immediate resume when preloaded device is smaller.
Do not suppress kernel key description in dmsetup table output.
Support configurable command executed from dmeventd thin plugin.
Support new R|r human readable units output format.
Thin dmeventd plugin reacts faster on lvextend failure path with umount.
Add dm_stats_bind_from_fd() to bind a stats handle from a file descriptor.
Do not try call callback when reverting activation on error path.
Fix file mapping for extents with physically adjacent extents.
Validation vsnprintf result in runtime translate of dm_log (1.02.136).
Separate filemap extent allocation from region table.
Fix segmentation fault when filemap region creation fails.
Fix performance of region cleanup for failed filemap creation.
Fix very slow region deletion with many regions.
Version 1.02.137 - 30th November 2016
=====================================
Document raid status values.
Always exit dmsetup with success when asked to display help/version.
Version 1.02.137 -
====================================
Version 1.02.136 - 5th November 2016
====================================

View File

@@ -61,174 +61,3 @@ AC_DEFUN([AC_TRY_LDFLAGS],
ifelse([$4], [], [:], [$4])
fi
])
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_gcc_builtin.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_GCC_BUILTIN(BUILTIN)
#
# DESCRIPTION
#
# This macro checks if the compiler supports one of GCC's built-in
# functions; many other compilers also provide those same built-ins.
#
# The BUILTIN parameter is the name of the built-in function.
#
# If BUILTIN is supported define HAVE_<BUILTIN>. Keep in mind that since
# builtins usually start with two underscores they will be copied over
# into the HAVE_<BUILTIN> definition (e.g. HAVE___BUILTIN_EXPECT for
# __builtin_expect()).
#
# The macro caches its result in the ax_cv_have_<BUILTIN> variable (e.g.
# ax_cv_have___builtin_expect).
#
# The macro currently supports the following built-in functions:
#
# __builtin_assume_aligned
# __builtin_bswap16
# __builtin_bswap32
# __builtin_bswap64
# __builtin_choose_expr
# __builtin___clear_cache
# __builtin_clrsb
# __builtin_clrsbl
# __builtin_clrsbll
# __builtin_clz
# __builtin_clzl
# __builtin_clzll
# __builtin_complex
# __builtin_constant_p
# __builtin_ctz
# __builtin_ctzl
# __builtin_ctzll
# __builtin_expect
# __builtin_ffs
# __builtin_ffsl
# __builtin_ffsll
# __builtin_fpclassify
# __builtin_huge_val
# __builtin_huge_valf
# __builtin_huge_vall
# __builtin_inf
# __builtin_infd128
# __builtin_infd32
# __builtin_infd64
# __builtin_inff
# __builtin_infl
# __builtin_isinf_sign
# __builtin_nan
# __builtin_nand128
# __builtin_nand32
# __builtin_nand64
# __builtin_nanf
# __builtin_nanl
# __builtin_nans
# __builtin_nansf
# __builtin_nansl
# __builtin_object_size
# __builtin_parity
# __builtin_parityl
# __builtin_parityll
# __builtin_popcount
# __builtin_popcountl
# __builtin_popcountll
# __builtin_powi
# __builtin_powif
# __builtin_powil
# __builtin_prefetch
# __builtin_trap
# __builtin_types_compatible_p
# __builtin_unreachable
#
# Unsuppored built-ins will be tested with an empty parameter set and the
# result of the check might be wrong or meaningless so use with care.
#
# LICENSE
#
# Copyright (c) 2013 Gabriele Svelto <gabriele.svelto@gmail.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 3
AC_DEFUN([AX_GCC_BUILTIN], [
AS_VAR_PUSHDEF([ac_var], [ax_cv_have_$1])
AC_CACHE_CHECK([for $1], [ac_var], [
AC_LINK_IFELSE([AC_LANG_PROGRAM([], [
m4_case([$1],
[__builtin_assume_aligned], [$1("", 0)],
[__builtin_bswap16], [$1(0)],
[__builtin_bswap32], [$1(0)],
[__builtin_bswap64], [$1(0)],
[__builtin_choose_expr], [$1(0, 0, 0)],
[__builtin___clear_cache], [$1("", "")],
[__builtin_clrsb], [$1(0)],
[__builtin_clrsbl], [$1(0)],
[__builtin_clrsbll], [$1(0)],
[__builtin_clz], [$1(0)],
[__builtin_clzl], [$1(0)],
[__builtin_clzll], [$1(0)],
[__builtin_complex], [$1(0.0, 0.0)],
[__builtin_constant_p], [$1(0)],
[__builtin_ctz], [$1(0)],
[__builtin_ctzl], [$1(0)],
[__builtin_ctzll], [$1(0)],
[__builtin_expect], [$1(0, 0)],
[__builtin_ffs], [$1(0)],
[__builtin_ffsl], [$1(0)],
[__builtin_ffsll], [$1(0)],
[__builtin_fpclassify], [$1(0, 1, 2, 3, 4, 0.0)],
[__builtin_huge_val], [$1()],
[__builtin_huge_valf], [$1()],
[__builtin_huge_vall], [$1()],
[__builtin_inf], [$1()],
[__builtin_infd128], [$1()],
[__builtin_infd32], [$1()],
[__builtin_infd64], [$1()],
[__builtin_inff], [$1()],
[__builtin_infl], [$1()],
[__builtin_isinf_sign], [$1(0.0)],
[__builtin_nan], [$1("")],
[__builtin_nand128], [$1("")],
[__builtin_nand32], [$1("")],
[__builtin_nand64], [$1("")],
[__builtin_nanf], [$1("")],
[__builtin_nanl], [$1("")],
[__builtin_nans], [$1("")],
[__builtin_nansf], [$1("")],
[__builtin_nansl], [$1("")],
[__builtin_object_size], [$1("", 0)],
[__builtin_parity], [$1(0)],
[__builtin_parityl], [$1(0)],
[__builtin_parityll], [$1(0)],
[__builtin_popcount], [$1(0)],
[__builtin_popcountl], [$1(0)],
[__builtin_popcountll], [$1(0)],
[__builtin_powi], [$1(0, 0)],
[__builtin_powif], [$1(0, 0)],
[__builtin_powil], [$1(0, 0)],
[__builtin_prefetch], [$1("")],
[__builtin_trap], [$1()],
[__builtin_types_compatible_p], [$1(int, int)],
[__builtin_unreachable], [$1()],
[m4_warn([syntax], [Unsupported built-in $1, the test may fail])
$1()]
)
])],
[AS_VAR_SET([ac_var], [yes])],
[AS_VAR_SET([ac_var], [no])])
])
AS_IF([test yes = AS_VAR_GET([ac_var])],
[AC_DEFINE_UNQUOTED(AS_TR_CPP(HAVE_$1), 1,
[Define to 1 if the system has the `$1' built-in function])], [])
AS_VAR_POPDEF([ac_var])
])

aclocal.m4 vendored
View File

@@ -536,5 +536,4 @@ AC_DEFUN([AM_RUN_LOG],
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
m4_include([acinclude.m4])

View File

@@ -665,7 +665,7 @@ global {
# Configuration option global/units.
# Default value for --units argument.
units = "r"
units = "h"
# Configuration option global/si_unit_consistency.
# Distinguish between powers of 1024 and 1000 bytes.
@@ -1156,8 +1156,7 @@ activation {
# Configuration option activation/missing_stripe_filler.
# Method to fill missing stripes when activating an incomplete LV.
# Using 'error' will make inaccessible parts of the device return I/O
# errors on access. Using 'zero' will return success (and zero) on I/O
# You can instead use a device path, in which case,
# errors on access. You can instead use a device path, in which case,
# that device will be used in place of missing stripes. Using anything
# other than 'error' with mirrored or snapshotted volumes is likely to
# result in data corruption.
@@ -2049,15 +2048,6 @@ dmeventd {
# warning is repeated when 85%, 90% and 95% of the pool is filled.
thin_library = "libdevmapper-event-lvm2thin.so"
# Configuration option dmeventd/thin_command.
# The plugin runs command with each 5% increment when thin-pool data volume
# or metadata volume gets above 50%.
# Command which starts with 'lvm ' prefix is internal lvm command.
# You can write your own handler to customise behaviour in more details.
# User handler is specified with the full path starting with '/'.
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.

configure vendored
View File

@@ -821,8 +821,6 @@ HAVE_PIE
POW_LIB
LIBOBJS
ALLOCA
SORT
WC
CHMOD
CSCOPE_CMD
CFLOW_CMD
@@ -5236,202 +5234,6 @@ else
CHMOD="$ac_cv_path_CHMOD"
fi
if test -n "$ac_tool_prefix"; then
# Extract the first word of "${ac_tool_prefix}wc", so it can be a program name with args.
set dummy ${ac_tool_prefix}wc; ac_word=$2
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
$as_echo_n "checking for $ac_word... " >&6; }
if ${ac_cv_path_WC+:} false; then :
$as_echo_n "(cached) " >&6
else
case $WC in
[\\/]* | ?:[\\/]*)
ac_cv_path_WC="$WC" # Let the user override the test with a path.
;;
*)
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for ac_exec_ext in '' $ac_executable_extensions; do
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
ac_cv_path_WC="$as_dir/$ac_word$ac_exec_ext"
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
break 2
fi
done
done
IFS=$as_save_IFS
;;
esac
fi
WC=$ac_cv_path_WC
if test -n "$WC"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $WC" >&5
$as_echo "$WC" >&6; }
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
fi
if test -z "$ac_cv_path_WC"; then
ac_pt_WC=$WC
# Extract the first word of "wc", so it can be a program name with args.
set dummy wc; ac_word=$2
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
$as_echo_n "checking for $ac_word... " >&6; }
if ${ac_cv_path_ac_pt_WC+:} false; then :
$as_echo_n "(cached) " >&6
else
case $ac_pt_WC in
[\\/]* | ?:[\\/]*)
ac_cv_path_ac_pt_WC="$ac_pt_WC" # Let the user override the test with a path.
;;
*)
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for ac_exec_ext in '' $ac_executable_extensions; do
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
ac_cv_path_ac_pt_WC="$as_dir/$ac_word$ac_exec_ext"
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
break 2
fi
done
done
IFS=$as_save_IFS
;;
esac
fi
ac_pt_WC=$ac_cv_path_ac_pt_WC
if test -n "$ac_pt_WC"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_WC" >&5
$as_echo "$ac_pt_WC" >&6; }
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
if test "x$ac_pt_WC" = x; then
WC=""
else
case $cross_compiling:$ac_tool_warned in
yes:)
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
ac_tool_warned=yes ;;
esac
WC=$ac_pt_WC
fi
else
WC="$ac_cv_path_WC"
fi
if test -n "$ac_tool_prefix"; then
# Extract the first word of "${ac_tool_prefix}sort", so it can be a program name with args.
set dummy ${ac_tool_prefix}sort; ac_word=$2
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
$as_echo_n "checking for $ac_word... " >&6; }
if ${ac_cv_path_SORT+:} false; then :
$as_echo_n "(cached) " >&6
else
case $SORT in
[\\/]* | ?:[\\/]*)
ac_cv_path_SORT="$SORT" # Let the user override the test with a path.
;;
*)
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for ac_exec_ext in '' $ac_executable_extensions; do
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
ac_cv_path_SORT="$as_dir/$ac_word$ac_exec_ext"
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
break 2
fi
done
done
IFS=$as_save_IFS
;;
esac
fi
SORT=$ac_cv_path_SORT
if test -n "$SORT"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $SORT" >&5
$as_echo "$SORT" >&6; }
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
fi
if test -z "$ac_cv_path_SORT"; then
ac_pt_SORT=$SORT
# Extract the first word of "sort", so it can be a program name with args.
set dummy sort; ac_word=$2
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
$as_echo_n "checking for $ac_word... " >&6; }
if ${ac_cv_path_ac_pt_SORT+:} false; then :
$as_echo_n "(cached) " >&6
else
case $ac_pt_SORT in
[\\/]* | ?:[\\/]*)
ac_cv_path_ac_pt_SORT="$ac_pt_SORT" # Let the user override the test with a path.
;;
*)
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for ac_exec_ext in '' $ac_executable_extensions; do
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
ac_cv_path_ac_pt_SORT="$as_dir/$ac_word$ac_exec_ext"
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
break 2
fi
done
done
IFS=$as_save_IFS
;;
esac
fi
ac_pt_SORT=$ac_cv_path_ac_pt_SORT
if test -n "$ac_pt_SORT"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_SORT" >&5
$as_echo "$ac_pt_SORT" >&6; }
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
fi
if test "x$ac_pt_SORT" = x; then
SORT=""
else
case $cross_compiling:$ac_tool_warned in
yes:)
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
ac_tool_warned=yes ;;
esac
SORT=$ac_pt_SORT
fi
else
SORT="$ac_cv_path_SORT"
fi
################################################################################
ac_header_dirent=no
@@ -6519,50 +6321,6 @@ _ACEOF
esac
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_clz" >&5
$as_echo_n "checking for __builtin_clz... " >&6; }
if ${ax_cv_have___builtin_clz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
__builtin_clz(0)
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
ax_cv_have___builtin_clz=yes
else
ax_cv_have___builtin_clz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_have___builtin_clz" >&5
$as_echo "$ax_cv_have___builtin_clz" >&6; }
if test yes = $ax_cv_have___builtin_clz; then :
cat >>confdefs.h <<_ACEOF
#define HAVE___BUILTIN_CLZ 1
_ACEOF
fi
################################################################################
for ac_func in ftruncate gethostname getpagesize gettimeofday localtime_r \
memchr memset mkdir mkfifo munmap nl_langinfo realpath rmdir setenv \

View File

@@ -86,8 +86,6 @@ AC_PROG_RANLIB
AC_PATH_TOOL(CFLOW_CMD, cflow)
AC_PATH_TOOL(CSCOPE_CMD, cscope)
AC_PATH_TOOL(CHMOD, chmod)
AC_PATH_TOOL(WC, wc)
AC_PATH_TOOL(SORT, sort)
################################################################################
dnl -- Check for header files.
@@ -136,7 +134,6 @@ AC_TYPE_UINT8_T
AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_UINT64_T
AX_GCC_BUILTIN([__builtin_clz])
################################################################################
dnl -- Check for functions

View File

@@ -41,11 +41,11 @@ ifeq ("@BUILD_LVMPOLLD@", "yes")
endif
ifeq ("@BUILD_LVMLOCKD@", "yes")
SUBDIRS += lvmlockd
SUBDIRS += lvmlockd
endif
ifeq ("@BUILD_LVMDBUSD@", "yes")
SUBDIRS += lvmdbusd
SUBDIRS += lvmdbusd
endif
ifeq ($(MAKECMDGOALS),distclean)

View File

@@ -532,7 +532,6 @@ static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
static pthread_mutex_t _mutex = PTHREAD_MUTEX_INITIALIZER;
struct iovec iov[2];
cs_error_t err;
int target_node;
@@ -547,10 +546,7 @@ static int _cluster_send_message(const void *buf, int msglen, const char *csid,
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
pthread_mutex_lock(&_mutex);
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
pthread_mutex_unlock(&_mutex);
return cs_to_errno(err);
}

View File

@@ -517,7 +517,7 @@ int main(int argc, char *argv[])
/* Initialise the LVM thread variables */
dm_list_init(&lvm_cmd_head);
if (pthread_attr_init(&stack_attr) ||
pthread_attr_setstacksize(&stack_attr, STACK_SIZE + getpagesize())) {
pthread_attr_setstacksize(&stack_attr, STACK_SIZE)) {
log_sys_error("pthread_attr_init", "");
exit(1);
}

View File

@@ -468,7 +468,7 @@ static int _pthread_create_smallstack(pthread_t *t, void *(*fun)(void *), void *
/*
* We use a smaller stack since it gets preallocated in its entirety
*/
pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE + getpagesize());
pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE);
/*
* If no-one will be waiting, we need to detach.

View File

@@ -121,7 +121,6 @@ int dmeventd_lvm2_run(const char *cmdline)
int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
const char *cmd, const char *device)
{
static char _internal_prefix[] = "_dmeventd_";
char *vg = NULL, *lv = NULL, *layer;
int r;
@@ -136,21 +135,6 @@ int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
(layer = strstr(lv, "_mlog")))
*layer = '\0';
if (!strncmp(cmd, _internal_prefix, sizeof(_internal_prefix) - 1)) {
dmeventd_lvm2_lock();
/* output of internal command passed via env var */
if (!dmeventd_lvm2_run(cmd))
cmd = NULL;
else if ((cmd = getenv(cmd)))
cmd = dm_pool_strdup(mem, cmd); /* copy with lock */
dmeventd_lvm2_unlock();
if (!cmd) {
log_error("Unable to find configured command.");
return 0;
}
}
r = dm_snprintf(buffer, size, "%s %s/%s", cmd, vg, lv);
dm_pool_free(mem, vg);

View File

@@ -184,12 +184,16 @@ int register_device(const char *device,
goto_bad;
if (!dmeventd_lvm2_command(state->mem, state->cmd_lvscan, sizeof(state->cmd_lvscan),
"lvscan --cache", device))
"lvscan --cache", device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
if (!dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
"lvconvert --repair --use-policies", device))
"lvconvert --repair --use-policies", device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
*user = state;
@@ -199,9 +203,6 @@ int register_device(const char *device,
bad:
log_error("Failed to monitor mirror %s.", device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}

View File

@@ -38,7 +38,6 @@ static int _process_raid_event(struct dso_state *state, char *params, const char
struct dm_status_raid *status;
const char *d;
int dead = 0, r = 1;
uint32_t dev;
if (!dm_get_status_raid(state->mem, params, &status)) {
log_error("Failed to process status line for %s.", device);
@@ -47,26 +46,24 @@ static int _process_raid_event(struct dso_state *state, char *params, const char
d = status->dev_health;
while ((d = strchr(d, 'D'))) {
dev = (uint32_t)(d - status->dev_health);
uint32_t dev = (uint32_t)(d - status->dev_health);
if (!(state->raid_devs[dev / 64] & (UINT64_C(1) << (dev % 64)))) {
state->raid_devs[dev / 64] |= (UINT64_C(1) << (dev % 64));
log_warn("WARNING: Device #%u of %s array, %s, has failed.",
dev, status->raid_type, device);
}
if (!(state->raid_devs[dev / 64] & (UINT64_C(1) << (dev % 64))))
log_error("Device #%u of %s array, %s, has failed.",
dev, status->raid_type, device);
state->raid_devs[dev / 64] |= (UINT64_C(1) << (dev % 64));
d++;
dead = 1;
}
if (dead) {
if (status->insync_regions < status->total_regions) {
if (!state->warned) {
state->warned = 1;
if (!state->warned)
log_warn("WARNING: waiting for resynchronization to finish "
"before initiating repair on RAID device %s.", device);
}
"before initiating repair on RAID device %s", device);
state->warned = 1;
goto out; /* Not yet done syncing with accessible devices */
}
@@ -140,8 +137,10 @@ int register_device(const char *device,
"lvscan --cache", device) ||
!dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
"lvconvert --config devices{ignore_suspended_devices=1} "
"--repair --use-policies", device))
"--repair --use-policies", device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
*user = state;
@@ -151,9 +150,6 @@ int register_device(const char *device,
bad:
log_error("Failed to monitor RAID %s.", device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}

View File

@@ -254,8 +254,10 @@ int register_device(const char *device,
if (!dmeventd_lvm2_command(state->mem, state->cmd_lvextend,
sizeof(state->cmd_lvextend),
"lvextend --use-policies", device))
"lvextend --use-policies", device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
state->percent_check = CHECK_MINIMUM;
*user = state;
@@ -266,9 +268,6 @@ int register_device(const char *device,
bad:
log_error("Failed to monitor snapshot %s.", device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2011-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -18,6 +18,7 @@
#include <sys/wait.h>
#include <stdarg.h>
#include <pthread.h>
/* TODO - move this mountinfo code into library to be reusable */
#ifdef __linux__
@@ -39,122 +40,277 @@
#define UMOUNT_COMMAND "/bin/umount"
#define MAX_FAILS (256) /* ~42 mins between cmd call retry with 10s delay */
#define MAX_FAILS (10)
#define THIN_DEBUG 0
struct dso_state {
struct dm_pool *mem;
int metadata_percent_check;
int metadata_percent;
int metadata_warn_once;
int data_percent_check;
int data_percent;
int data_warn_once;
uint64_t known_metadata_size;
uint64_t known_data_size;
unsigned fails;
unsigned max_fails;
int restore_sigset;
sigset_t old_sigset;
pid_t pid;
char *argv[3];
char *cmd_str;
char cmd_str[1024];
};
DM_EVENT_LOG_FN("thin")
#define UUID_PREFIX "LVM-"
static int _run_command(struct dso_state *state)
/* Figure out device UUID has LVM- prefix and is OPEN */
static int _has_unmountable_prefix(int major, int minor)
{
char val[3][36];
char *env[] = { val[0], val[1], val[2], NULL };
int i;
struct dm_task *dmt;
struct dm_info info;
const char *uuid;
int r = 0;
/* Mark for possible lvm2 command we are running from dmeventd
* lvm2 will not try to talk back to dmeventd while processing it */
(void) dm_snprintf(val[0], sizeof(val[0]), "LVM_RUN_BY_DMEVENTD=1");
if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
return_0;
if (state->data_percent) {
/* Prepare some known data to env vars for easy use */
(void) dm_snprintf(val[1], sizeof(val[1]), "DMEVENTD_THIN_POOL_DATA=%d",
state->data_percent / DM_PERCENT_1);
(void) dm_snprintf(val[2], sizeof(val[2]), "DMEVENTD_THIN_POOL_METADATA=%d",
state->metadata_percent / DM_PERCENT_1);
} else {
/* For an error event it's for a user to check status and decide */
env[1] = NULL;
log_debug("Error event processing.");
if (!dm_task_set_major_minor(dmt, major, minor, 1))
goto_out;
if (!dm_task_no_flush(dmt))
stack;
if (!dm_task_run(dmt))
goto out;
if (!dm_task_get_info(dmt, &info))
goto out;
if (!info.exists || !info.open_count)
goto out; /* Not open -> not mounted */
if (!(uuid = dm_task_get_uuid(dmt)))
goto out;
/* Check it's public mountable LV
* has prefix LVM- and UUID size is 68 chars */
if (memcmp(uuid, UUID_PREFIX, sizeof(UUID_PREFIX) - 1) ||
strlen(uuid) != 68)
goto out;
#if THIN_DEBUG
log_debug("Found logical volume %s (%u:%u).", uuid, major, minor);
#endif
r = 1;
out:
dm_task_destroy(dmt);
return r;
}
/* Get dependencies for device, and try to find matching device */
static int _has_deps(const char *name, int tp_major, int tp_minor, int *dev_minor)
{
struct dm_task *dmt;
const struct dm_deps *deps;
struct dm_info info;
int major, minor;
int r = 0;
if (!(dmt = dm_task_create(DM_DEVICE_DEPS)))
return 0;
if (!dm_task_set_name(dmt, name))
goto out;
if (!dm_task_no_open_count(dmt))
goto out;
if (!dm_task_run(dmt))
goto out;
if (!dm_task_get_info(dmt, &info))
goto out;
if (!(deps = dm_task_get_deps(dmt)))
goto out;
if (!info.exists || deps->count != 1)
goto out;
major = (int) MAJOR(deps->device[0]);
minor = (int) MINOR(deps->device[0]);
if ((major != tp_major) || (minor != tp_minor))
goto out;
*dev_minor = info.minor;
if (!_has_unmountable_prefix(major, info.minor))
goto out;
#if THIN_DEBUG
{
char dev_name[PATH_MAX];
if (dm_device_get_name(major, minor, 0, dev_name, sizeof(dev_name)))
log_debug("Found %s (%u:%u) depends on %s.",
name, major, *dev_minor, dev_name);
}
#endif
r = 1;
out:
dm_task_destroy(dmt);
return r;
}
/* Get all active devices */
static int _find_all_devs(dm_bitset_t bs, int tp_major, int tp_minor)
{
struct dm_task *dmt;
struct dm_names *names;
unsigned next = 0;
int minor, r = 1;
if (!(dmt = dm_task_create(DM_DEVICE_LIST)))
return 0;
if (!dm_task_run(dmt)) {
r = 0;
goto out;
}
log_verbose("Executing command: %s", state->cmd_str);
if (!(names = dm_task_get_names(dmt))) {
r = 0;
goto out;
}
/* TODO:
* Support parallel run of 'task' and it's waitpid maintainence
* ATM we can't handle signaling of SIGALRM
* as signalling is not allowed while 'process_event()' is running
*/
if (!(state->pid = fork())) {
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execve(state->argv[0], state->argv, env);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
state->fails = 1;
return 0;
if (!names->dev)
goto out;
do {
names = (struct dm_names *)((char *) names + next);
if (_has_deps(names->name, tp_major, tp_minor, &minor))
dm_bit_set(bs, minor);
next = names->next;
} while (next);
out:
dm_task_destroy(dmt);
return r;
}
static int _run(const char *cmd, ...)
{
va_list ap;
int argc = 1; /* for argv[0], i.e. cmd */
int i = 0;
const char **argv;
pid_t pid = fork();
int status;
if (pid == 0) { /* child */
va_start(ap, cmd);
while (va_arg(ap, const char *))
++argc;
va_end(ap);
/* + 1 for the terminating NULL */
argv = alloca(sizeof(const char *) * (argc + 1));
argv[0] = cmd;
va_start(ap, cmd);
while ((argv[++i] = va_arg(ap, const char *)));
va_end(ap);
execvp(cmd, (char **)argv);
log_sys_error("exec", cmd);
exit(127);
}
if (pid > 0) { /* parent */
if (waitpid(pid, &status, 0) != pid)
return 0; /* waitpid failed */
if (!WIFEXITED(status) || WEXITSTATUS(status))
return 0; /* the child failed */
}
if (pid < 0)
return 0; /* fork failed */
return 1; /* all good */
}
struct mountinfo_s {
const char *device;
struct dm_info info;
dm_bitset_t minors; /* Bitset for active thin pool minors */
};
static int _umount_device(char *buffer, unsigned major, unsigned minor,
char *target, void *cb_data)
{
struct mountinfo_s *data = cb_data;
char *words[10];
if ((major == data->info.major) && dm_bit(data->minors, minor)) {
if (dm_split_words(buffer, DM_ARRAY_SIZE(words), 0, words) < DM_ARRAY_SIZE(words))
words[9] = NULL; /* just don't show device name */
log_info("Unmounting thin %s (%d:%d) of thin pool %s (%u:%u) from mount point \"%s\".",
words[9] ? : "", major, minor, data->device,
data->info.major, data->info.minor,
target);
if (!_run(UMOUNT_COMMAND, "-fl", target, NULL))
log_error("Failed to lazy umount thin %s (%d:%d) from %s: %s.",
words[9], major, minor, target, strerror(errno));
}
return 1;
}
/*
* Find all thin pool LV users and try to umount them.
* TODO: work with read-only thin pool support
*/
static void _umount(struct dm_task *dmt)
{
/* TODO: Convert to use hash to reduce memory usage */
static const size_t MINORS = (1U << 20); /* 20 bit */
struct mountinfo_s data = { NULL };
if (!dm_task_get_info(dmt, &data.info))
return;
data.device = dm_task_get_name(dmt);
if (!(data.minors = dm_bitset_create(NULL, MINORS))) {
log_error("Failed to allocate bitset. Not unmounting %s.", data.device);
goto out;
}
if (!_find_all_devs(data.minors, data.info.major, data.info.minor)) {
log_error("Failed to detect mounted volumes for %s.", data.device);
goto out;
}
if (!dm_mountinfo_read(_umount_device, &data)) {
log_error("Could not parse mountinfo file.");
goto out;
}
out:
if (data.minors)
dm_bitset_destroy(data.minors);
}
static int _use_policy(struct dm_task *dmt, struct dso_state *state)
{
#if THIN_DEBUG
log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
if (state->argv[0])
return _run_command(state);
if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
log_error("Failed command for %s.", dm_task_get_name(dmt));
state->fails = 1;
log_error("Failed to extend thin pool %s.",
dm_task_get_name(dmt));
state->fails++;
return 0;
}
state->fails = 0;
return 1;
}
/* Check if executed command has finished
* Only 1 command may run */
static int _wait_for_pid(struct dso_state *state)
{
int status = 0;
if (state->pid == -1)
return 1;
if (!waitpid(state->pid, &status, WNOHANG))
return 0;
/* Wait for finish */
if (WIFEXITED(status)) {
log_verbose("Child %d exited with status %d.",
state->pid, WEXITSTATUS(status));
state->fails = WEXITSTATUS(status) ? 1 : 0;
} else {
if (WIFSIGNALED(status))
log_verbose("Child %d was terminated with status %d.",
state->pid, WTERMSIG(status));
state->fails = 1;
}
state->pid = -1;
return 1;
}
@@ -163,6 +319,7 @@ void process_event(struct dm_task *dmt,
void **user)
{
const char *device = dm_task_get_name(dmt);
int percent;
struct dso_state *state = *user;
struct dm_status_thin_pool *tps = NULL;
void *next = NULL;
@@ -170,48 +327,25 @@ void process_event(struct dm_task *dmt,
char *target_type = NULL;
char *params;
int needs_policy = 0;
struct dm_task *new_dmt = NULL;
int needs_umount = 0;
#if THIN_DEBUG
log_debug("Watch for tp-data:%.2f%% tp-metadata:%.2f%%.",
dm_percent_to_float(state->data_percent_check),
dm_percent_to_float(state->metadata_percent_check));
#endif
if (!_wait_for_pid(state)) {
log_warn("WARNING: Skipping event, child %d is still running (%s).",
state->pid, state->cmd_str);
return;
}
#if 0
/* No longer monitoring, waiting for remove */
if (!state->meta_percent_check && !state->data_percent_check)
return;
#endif
if (event & DM_EVENT_DEVICE_ERROR) {
/* Error -> no need to check and do instant resize */
state->data_percent = state->metadata_percent = 0;
if (_use_policy(dmt, state))
goto out;
stack;
/*
* Rather update oldish status
* since after 'command' processing
* percentage info could have changed a lot.
* If we would get above UMOUNT_THRESH
* we would wait for next sigalarm.
*/
if (!(new_dmt = dm_task_create(DM_DEVICE_STATUS)))
goto_out;
if (!dm_task_set_uuid(new_dmt, dm_task_get_uuid(dmt)))
goto_out;
/* Non-blocking status read */
if (!dm_task_no_flush(new_dmt))
log_warn("WARNING: Can't set no_flush for dm status.");
if (!dm_task_run(new_dmt))
goto_out;
dmt = new_dmt;
}
dm_get_next_target(dmt, next, &start, &length, &target_type, &params);
@@ -223,6 +357,7 @@ void process_event(struct dm_task *dmt,
if (!dm_get_status_thin_pool(state->mem, params, &tps)) {
log_error("Failed to parse status.");
needs_umount = 1;
goto out;
}
@@ -237,112 +372,67 @@ void process_event(struct dm_task *dmt,
if (state->known_metadata_size != tps->total_metadata_blocks) {
state->metadata_percent_check = CHECK_MINIMUM;
state->known_metadata_size = tps->total_metadata_blocks;
state->fails = 0;
}
if (state->known_data_size != tps->total_data_blocks) {
state->data_percent_check = CHECK_MINIMUM;
state->known_data_size = tps->total_data_blocks;
state->fails = 0;
}
/*
* Trigger action when threshold boundary is exceeded.
* Report 80% threshold warning when it's used above 80%.
* Only 100% is exception as it cannot be surpased so policy
* action is called for: >50%, >55% ... >95%, 100%
*/
state->metadata_percent = dm_make_percent(tps->used_metadata_blocks, tps->total_metadata_blocks);
if (state->metadata_percent <= WARNING_THRESH)
state->metadata_warn_once = 0; /* Dropped bellow threshold, reset warn once */
else if (!state->metadata_warn_once++) /* Warn once when raised above threshold */
log_warn("WARNING: Thin pool %s metadata is now %.2f%% full.",
device, dm_percent_to_float(state->metadata_percent));
if (state->metadata_percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->metadata_percent > state->metadata_percent_check)
needs_policy = 1;
state->metadata_percent_check = (state->metadata_percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->metadata_percent_check == DM_PERCENT_100)
state->metadata_percent_check--; /* Can't get bigger then 100% */
} else
state->metadata_percent_check = CHECK_MINIMUM;
percent = dm_make_percent(tps->used_metadata_blocks, tps->total_metadata_blocks);
if (percent >= state->metadata_percent_check) {
/*
* Usage has raised more than CHECK_STEP since the last
* time. Run actions.
*/
state->metadata_percent_check = (percent / CHECK_STEP) * CHECK_STEP + CHECK_STEP;
state->data_percent = dm_make_percent(tps->used_data_blocks, tps->total_data_blocks);
if (state->data_percent <= WARNING_THRESH)
state->data_warn_once = 0;
else if (!state->data_warn_once++)
log_warn("WARNING: Thin pool %s data is now %.2f%% full.",
device, dm_percent_to_float(state->data_percent));
if (state->data_percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->data_percent > state->data_percent_check)
needs_policy = 1;
state->data_percent_check = (state->data_percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->data_percent_check == DM_PERCENT_100)
state->data_percent_check--; /* Can't get bigger then 100% */
} else
state->data_percent_check = CHECK_MINIMUM;
/* FIXME: extension of metadata needs to be written! */
if (percent >= WARNING_THRESH) /* Print a warning to syslog. */
log_warn("WARNING: Thin pool %s metadata is now %.2f%% full.",
device, dm_percent_to_float(percent));
needs_policy = 1;
/* Reduce number of _use_policy() calls by power-of-2 factor till frequency of MAX_FAILS is reached.
* Avoids too high number of error retries, yet shows some status messages in log regularly.
* i.e. PV could have been pvmoved and VG/LV was locked for a while...
*/
if (state->fails) {
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
return;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;
state->fails = needs_policy = 1; /* Retry failing command */
} else
state->max_fails = 1; /* Reset on success */
if (percent >= UMOUNT_THRESH)
needs_umount = 1;
}
if (needs_policy)
_use_policy(dmt, state);
percent = dm_make_percent(tps->used_data_blocks, tps->total_data_blocks);
if (percent >= state->data_percent_check) {
/*
* Usage has raised more than CHECK_STEP since
* the last time. Run actions.
*/
state->data_percent_check = (percent / CHECK_STEP) * CHECK_STEP + CHECK_STEP;
if (percent >= WARNING_THRESH) /* Print a warning to syslog. */
log_warn("WARNING: Thin pool %s data is now %.2f%% full.",
device, dm_percent_to_float(percent));
needs_policy = 1;
if (percent >= UMOUNT_THRESH)
needs_umount = 1;
}
if (needs_policy &&
_use_policy(dmt, state))
needs_umount = 0; /* No umount when command was successful */
out:
if (needs_umount) {
_umount(dmt);
/* Until something changes, do not retry any more actions */
state->data_percent_check = state->metadata_percent_check = (DM_PERCENT_1 * 101);
}
if (tps)
dm_pool_free(state->mem, tps);
if (new_dmt)
dm_task_destroy(new_dmt);
}
/* Handle SIGCHLD for a thread */
static void _sig_child(int signum __attribute__((unused)))
{
/* empty SIG_IGN */;
}
/* Setup handler for SIGCHLD when executing external command
* to get quick 'waitpid()' reaction
* It will interrupt syscall just like SIGALRM and
* invoke process_event().
*/
static void _init_thread_signals(struct dso_state *state)
{
struct sigaction act = { .sa_handler = _sig_child };
sigset_t my_sigset;
sigemptyset(&my_sigset);
if (sigaction(SIGCHLD, &act, NULL))
log_warn("WARNING: Failed to set SIGCHLD action.");
else if (sigaddset(&my_sigset, SIGCHLD))
log_warn("WARNING: Failed to add SIGCHLD to set.");
else if (pthread_sigmask(SIG_UNBLOCK, &my_sigset, &state->old_sigset))
log_warn("WARNING: Failed to unblock SIGCHLD.");
else
state->restore_sigset = 1;
}
static void _restore_thread_signals(struct dso_state *state)
{
if (state->restore_sigset &&
pthread_sigmask(SIG_SETMASK, &state->old_sigset, NULL))
log_warn("WARNING: Failed to block SIGCHLD.");
if (state->fails >= MAX_FAILS) {
log_warn("WARNING: Dropping monitoring of %s. "
"lvm2 command fails too often (%u times in row).",
device, state->fails);
pthread_kill(pthread_self(), SIGALRM);
}
}
int register_device(const char *device,
@@ -352,56 +442,28 @@ int register_device(const char *device,
void **user)
{
struct dso_state *state;
char *str;
char cmd_str[PATH_MAX + 128 + 2]; /* cmd ' ' vg/lv \0 */
if (!dmeventd_lvm2_init_with_pool("thin_pool_state", state))
goto_bad;
if (!dmeventd_lvm2_command(state->mem, cmd_str, sizeof(cmd_str),
"_dmeventd_thin_command", device))
if (!dmeventd_lvm2_command(state->mem, state->cmd_str,
sizeof(state->cmd_str),
"lvextend --use-policies",
device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
if (strncmp(cmd_str, "lvm ", 4) == 0) {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str + 4))) {
log_error("Failed to copy lvm command.");
goto bad;
}
} else if (cmd_str[0] == '/') {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str))) {
log_error("Failed to copy thin command.");
goto bad;
}
/* Find last space before 'vg/lv' */
if (!(str = strrchr(state->cmd_str, ' ')))
goto inval;
if (!(state->argv[0] = dm_pool_strndup(state->mem, state->cmd_str,
str - state->cmd_str))) {
log_error("Failed to copy command.");
goto bad;
}
state->argv[1] = str + 1; /* 1 argument - vg/lv */
_init_thread_signals(state);
} else /* Unuspported command format */
goto inval;
state->pid = -1;
state->metadata_percent_check = CHECK_MINIMUM;
state->data_percent_check = CHECK_MINIMUM;
*user = state;
log_info("Monitoring thin pool %s.", device);
return 1;
inval:
log_error("Invalid command for monitoring: %s.", cmd_str);
bad:
log_error("Failed to monitor thin pool %s.", device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}
@@ -412,28 +474,6 @@ int unregister_device(const char *device,
void **user)
{
struct dso_state *state = *user;
int i;
for (i = 0; !_wait_for_pid(state) && (i < 6); ++i) {
if (i == 0)
/* Give it 2 seconds, then try to terminate & kill it */
log_verbose("Child %d still not finished (%s) waiting.",
state->pid, state->cmd_str);
else if (i == 3) {
log_warn("WARNING: Terminating child %d.", state->pid);
kill(state->pid, SIGINT);
kill(state->pid, SIGTERM);
} else if (i == 5) {
log_warn("WARNING: Killing child %d.", state->pid);
kill(state->pid, SIGKILL);
}
sleep(1);
}
if (state->pid != -1)
log_warn("WARNING: Cannot kill child %d!", state->pid);
_restore_thread_signals(state);
dmeventd_lvm2_exit_with_pool(state);
log_info("No longer monitoring thin pool %s.", device);

View File

@@ -1 +0,0 @@
path.py

View File

@@ -33,6 +33,7 @@ LVMDBUS_SRCDIR_FILES = \
manager.py \
objectmanager.py \
pv.py \
refresh.py \
request.py \
state.py \
udevwatch.py \

View File

@@ -159,7 +159,10 @@ class AutomatedProperties(dbus.service.Object):
cfg.om.lookup_update(self, new_id[0], new_id[1])
# Grab the properties values, then replace the state of the object
# and retrieve the new values.
# and retrieve the new values
# TODO: We need to add locking to prevent concurrent access to the
# properties so that a client is not accessing while we are
# replacing.
o_prop = get_properties(self)
self.state = new_state
n_prop = get_properties(self)

View File

@@ -13,7 +13,6 @@ from .cmdhandler import options_to_cli_args
import dbus
from .utils import pv_range_append, pv_dest_ranges, log_error, log_debug
import os
import threading
def pv_move_lv_cmd(move_options, lv_full_name,
@@ -127,16 +126,3 @@ def merge(interface_name, lv_uuid, lv_name, merge_options, job_state):
raise dbus.exceptions.DBusException(
interface_name,
'LV with uuid %s and name %s not present!' % (lv_uuid, lv_name))
def _run_cmd(req):
log_debug(
"_run_cmd: Running method: %s with args %s" %
(str(req.method), str(req.arguments)))
req.run_cmd()
log_debug("_run_cmd: complete!")
def cmd_runner(request):
t = threading.Thread(target=_run_cmd, args=(request,))
t.start()

View File

@@ -38,12 +38,12 @@ SHELL_IN_USE = None
# Lock used by pprint
stdout_lock = multiprocessing.Lock()
kick_q = multiprocessing.Queue()
worker_q = queue.Queue()
# Main event loop
loop = None
BUS_NAME = os.getenv('LVM_DBUS_NAME', 'com.redhat.lvmdbus1')
BASE_INTERFACE = 'com.redhat.lvmdbus1'
PV_INTERFACE = BASE_INTERFACE + '.Pv'
VG_INTERFACE = BASE_INTERFACE + '.Vg'
@@ -77,7 +77,6 @@ hidden_lv = itertools.count()
# Used to prevent circular imports...
load = None
event = None
# Global cached state
db = None

View File

@@ -15,9 +15,14 @@ import collections
import traceback
import os
from lvmdbusd import cfg
from lvmdbusd.utils import pv_dest_ranges, log_debug, log_error
from lvmdbusd.lvm_shell_proxy import LVMShellProxy
try:
from . import cfg
from .utils import pv_dest_ranges, log_debug, log_error
from .lvm_shell_proxy import LVMShellProxy
except SystemError:
import cfg
from utils import pv_dest_ranges, log_debug, log_error
from lvm_shell_proxy import LVMShellProxy
try:
import simplejson as json
@@ -55,19 +60,18 @@ class LvmExecutionMeta(object):
class LvmFlightRecorder(object):
def __init__(self, size=16):
self.queue = collections.deque(maxlen=size)
def __init__(self):
self.queue = collections.deque(maxlen=16)
def add(self, lvm_exec_meta):
self.queue.append(lvm_exec_meta)
def dump(self):
with cmd_lock:
if len(self.queue):
log_error("LVM dbus flight recorder START")
for c in self.queue:
log_error(str(c))
log_error("LVM dbus flight recorder END")
log_error("LVM dbus flight recorder START")
for c in self.queue:
log_error(str(c))
log_error("LVM dbus flight recorder END")
cfg.blackbox = LvmFlightRecorder()
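
Both variants of LvmFlightRecorder above get a fixed-size ring buffer for free from collections.deque(maxlen=N): appending to a full deque silently evicts the oldest entry. A standalone sketch (names illustrative, not the daemon's API):

import collections

class FlightRecorder(object):
    def __init__(self, size=16):
        # maxlen makes this a ring buffer: old entries fall off the left.
        self.queue = collections.deque(maxlen=size)

    def add(self, record):
        self.queue.append(record)

    def dump(self, emit=print):
        if len(self.queue):
            emit("flight recorder START")
            for rec in self.queue:
                emit(str(rec))
            emit("flight recorder END")

rec = FlightRecorder(size=4)
for n in range(10):
    rec.add("cmd %d" % n)
rec.dump()  # only "cmd 6".."cmd 9" remain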
@@ -113,7 +117,6 @@ _t_call = call_lvm
def _shell_cfg():
global _t_call
# noinspection PyBroadException
try:
lvm_shell = LVMShellProxy()
_t_call = lvm_shell.call_lvm
@@ -295,7 +298,7 @@ def vg_lv_snapshot(vg_name, snapshot_options, name, size_bytes):
return call(cmd)
def _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool):
def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
@@ -303,18 +306,20 @@ def _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool):
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
return cmd
def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
cmd = _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool)
cmd.extend(['--name', name, vg_name])
return call(cmd)
def vg_lv_create_striped(vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool):
cmd = _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool)
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
if not thin_pool:
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
cmd.extend(['--stripes', str(num_stripes)])
if stripe_size_kb != 0:
@@ -352,8 +357,7 @@ def vg_lv_create_raid(vg_name, create_options, name, raid_type, size_bytes,
size_bytes, num_stripes, stripe_size_kb)
def vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies):
def vg_lv_create_mirror(vg_name, create_options, name, size_bytes, num_copies):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
@@ -728,8 +732,8 @@ def lv_retrieve_with_segments():
'lv_attr', 'lv_tags', 'vg_uuid', 'lv_active', 'data_lv',
'metadata_lv', 'seg_pe_ranges', 'segtype', 'lv_parent',
'lv_role', 'lv_layout',
'snap_percent', 'metadata_percent', 'copy_percent',
'sync_percent', 'lv_metadata_size', 'move_pv', 'move_pv_uuid']
'snap_percent', 'metadata_percent', 'copy_percent',
'sync_percent', 'lv_metadata_size', 'move_pv', 'move_pv_uuid']
cmd = _dc('lvs', ['-a', '-o', ','.join(columns)])
rc, out, err = call(cmd)
@@ -746,4 +750,4 @@ if __name__ == '__main__':
pv_data = pv_retrieve_with_segs()
for p in pv_data:
print(str(p))
log_debug(str(p))


@@ -11,10 +11,7 @@ from .pv import load_pvs
from .vg import load_vgs
from .lv import load_lvs
from . import cfg
from .utils import MThreadRunner, log_debug, log_error
import threading
import queue
import traceback
from .utils import MThreadRunner, log_debug
def _main_thread_load(refresh=True, emit_signal=True):
@@ -48,114 +45,3 @@ def load(refresh=True, emit_signal=True, cache_refresh=True, log=True,
rc = _main_thread_load(refresh, emit_signal)
return rc
# Even though lvm can handle multiple changes concurrently it really doesn't
# make sense to make a 1-1 fetch of data for each change of lvm because when
# we fetch the data once all previous changes are reflected.
class StateUpdate(object):
class UpdateRequest(object):
def __init__(self, refresh, emit_signal, cache_refresh, log,
need_main_thread):
self.is_done = False
self.refresh = refresh
self.emit_signal = emit_signal
self.cache_refresh = cache_refresh
self.log = log
self.need_main_thread = need_main_thread
self.result = None
self.cond = threading.Condition(threading.Lock())
def done(self):
with self.cond:
if not self.is_done:
self.cond.wait()
return self.result
def set_result(self, result):
with self.cond:
self.result = result
self.is_done = True
self.cond.notify_all()
@staticmethod
def update_thread(obj):
while cfg.run.value != 0:
# noinspection PyBroadException
try:
queued_requests = []
refresh = True
emit_signal = True
cache_refresh = True
log = True
need_main_thread = True
with obj.lock:
wait = not obj.deferred
obj.deferred = False
if wait:
queued_requests.append(obj.queue.get(True, 2))
# Ok we have one or the deferred queue has some,
# check if any others
try:
while True:
queued_requests.append(obj.queue.get(False))
except queue.Empty:
pass
if len(queued_requests) > 1:
log_debug("Processing %d updates!" % len(queued_requests),
'bg_black', 'fg_light_green')
# We have what we can, run the update with the needed options
for i in queued_requests:
if not i.refresh:
refresh = False
if not i.emit_signal:
emit_signal = False
if not i.cache_refresh:
cache_refresh = False
if not i.log:
log = False
if not i.need_main_thread:
need_main_thread = False
num_changes = load(refresh, emit_signal, cache_refresh, log,
need_main_thread)
# Update is done, let everyone know!
for i in queued_requests:
i.set_result(num_changes)
except queue.Empty:
pass
except Exception:
st = traceback.format_exc()
log_error("update_thread exception: \n%s" % st)
def __init__(self):
self.lock = threading.RLock()
self.queue = queue.Queue()
self.deferred = False
# Do initial load
load(refresh=False, emit_signal=False, need_main_thread=False)
self.thread = threading.Thread(target=StateUpdate.update_thread,
args=(self,))
def load(self, refresh=True, emit_signal=True, cache_refresh=True,
log=True, need_main_thread=True):
# Place this request on the queue and wait for it to be completed
req = StateUpdate.UpdateRequest(refresh, emit_signal, cache_refresh,
log, need_main_thread)
self.queue.put(req)
return req.done()
def event(self):
with self.lock:
self.deferred = True
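
The update thread above coalesces bursts of refresh requests: it blocks for one request, drains whatever else has queued, runs a single load() whose options are the AND of every request's options, and hands the one result to all waiters. A condensed sketch of that drain-and-batch loop, assuming requests expose set_result() and a blocking wait():

import queue
import threading

class UpdateRequest(object):
    def __init__(self):
        self.result = None
        self._done = threading.Event()

    def set_result(self, result):
        self.result = result
        self._done.set()

    def wait(self):
        self._done.wait()
        return self.result

def update_loop(q, do_load, running):
    # q: queue.Queue of UpdateRequest; do_load: performs one full refresh.
    while running.is_set():
        try:
            batch = [q.get(True, 2)]     # block for the first request
        except queue.Empty:
            continue
        try:
            while True:                  # drain the burst without blocking
                batch.append(q.get(False))
        except queue.Empty:
            pass
        result = do_load()               # one refresh covers the whole batch
        for req in batch:
            req.set_result(result)       # wake every waiter with the result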


@@ -8,12 +8,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from .automatedproperties import AutomatedProperties
from .utils import job_obj_path_generate, mt_async_result, mt_run_no_wait
from .utils import job_obj_path_generate, mt_async_result, log_debug
from . import cfg
from .cfg import JOB_INTERFACE
import dbus
import threading
# noinspection PyUnresolvedReferences
from gi.repository import GLib
@@ -180,15 +179,9 @@ class Job(AutomatedProperties):
def Complete(self):
return dbus.Boolean(self.state.Complete)
@staticmethod
def _signal_complete(obj):
obj.PropertiesChanged(
JOB_INTERFACE, dict(Complete=dbus.Boolean(obj.state.Complete)), [])
@Complete.setter
def Complete(self, value):
self.state.Complete = value
mt_run_no_wait(Job._signal_complete, self)
@property
def GetError(self):


@@ -272,26 +272,6 @@ class LvCommon(AutomatedProperties):
self.state = object_state
self._move_pv = self._get_move_pv()
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(lv_uuid, lv_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if not dbo:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return dbo
@property
def VolumeType(self):
type_map = {'C': 'Cache', 'm': 'mirrored',
@@ -428,10 +408,24 @@ class Lv(LvCommon):
@staticmethod
def _remove(lv_uuid, lv_name, remove_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Remove the LV, if successful then remove from the model
rc, out, err = cmdhandler.lv_remove(lv_name, remove_options)
LvCommon.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if dbo:
# Remove the LV, if successful then remove from the model
rc, out, err = cmdhandler.lv_remove(lv_name, remove_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return '/'
@dbus.service.method(
@@ -449,11 +443,24 @@ class Lv(LvCommon):
@staticmethod
def _rename(lv_uuid, lv_name, new_name, rename_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Rename the logical volume
rc, out, err = cmdhandler.lv_rename(lv_name, new_name,
rename_options)
LvCommon.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if dbo:
# Rename the logical volume
rc, out, err = cmdhandler.lv_rename(lv_name, new_name,
rename_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return '/'
@dbus.service.method(
@@ -487,27 +494,38 @@ class Lv(LvCommon):
pv_dests_and_ranges, move_options, job_state), cb, cbe, False,
job_state)
background.cmd_runner(r)
cfg.worker_q.put(r)
@staticmethod
def _snap_shot(lv_uuid, lv_name, name, optional_size,
snapshot_options):
# Make sure we have a dbus object representing it
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
# If you specify a size you get a 'thick' snapshot even if
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
rc, out, err = cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options, name, optional_size)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
if dbo:
# If you specify a size you get a 'thick' snapshot even if
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder
rc, out, err = cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options, name, optional_size)
if rc == 0:
cfg.load()
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
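
When no size is given for a non-thin origin, the code above defaults the COW size to 1/80th (1.25%) of the origin and rounds up to a 512-byte boundary (note it adds a whole sector even when already aligned). The arithmetic, worked through with integer division for clarity:

def default_snapshot_size(origin_bytes):
    space = origin_bytes // 80        # 1.25% of the origin's size
    remainder = space % 512
    return space + 512 - remainder    # round up to a sector boundary

# 10 GiB origin: 10737418240 // 80 = 134217728, already aligned,
# so the +512 still applies and the result is 134218240 bytes.
print(default_snapshot_size(10 * 1024 ** 3))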
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -530,24 +548,38 @@ class Lv(LvCommon):
resize_options):
# Make sure we have a dbus object representing it
pv_dests = []
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
# If we have PVs, verify them
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'PV Destination (%s) not found' % pr[0])
if dbo:
# If we have PVs, verify them
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'PV Destination (%s) not found' % pr[0])
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
size_change = new_size_bytes - dbo.SizeBytes
rc, out, err = cmdhandler.lv_resize(dbo.lvm_id, size_change,
pv_dests, resize_options)
LvCommon.handle_execute(rc, out, err)
return "/"
size_change = new_size_bytes - dbo.SizeBytes
rc, out, err = cmdhandler.lv_resize(dbo.lvm_id, size_change,
pv_dests, resize_options)
if rc == 0:
# Refresh what's changed
cfg.load()
return "/"
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -580,11 +612,23 @@ class Lv(LvCommon):
def _lv_activate_deactivate(uuid, lv_name, activate, control_flags,
options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(uuid, lv_name)
rc, out, err = cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options)
LvCommon.handle_execute(rc, out, err)
return '/'
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, lv_name)
if dbo:
rc, out, err = cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(uuid, lv_name))
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -616,11 +660,25 @@ class Lv(LvCommon):
@staticmethod
def _add_rm_tags(uuid, lv_name, tags_add, tags_del, tag_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(uuid, lv_name)
rc, out, err = cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options)
LvCommon.handle_execute(rc, out, err)
return '/'
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, lv_name)
if dbo:
rc, out, err = cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(uuid, lv_name))
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -678,13 +736,27 @@ class LvThinPool(Lv):
@staticmethod
def _lv_create(lv_uuid, lv_name, name, size_bytes, create_options):
# Make sure we have a dbus object representing it
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
rc, out, err = cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
lv_created = '/'
if dbo:
rc, out, err = cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes)
if rc == 0:
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
cfg.load()
lv_created = cfg.om.get_object_path_by_lvm_id(full_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return lv_created
@dbus.service.method(
dbus_interface=THIN_POOL_INTERFACE,
@@ -721,13 +793,14 @@ class LvCachePool(Lv):
@staticmethod
def _cache_lv(lv_uuid, lv_name, lv_object_path, cache_options):
# Make sure we have a dbus object representing cache pool
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
# Make sure we have dbus object representing lv to cache
lv_to_cache = cfg.om.get_object_by_path(lv_object_path)
if lv_to_cache:
if dbo and lv_to_cache:
fcn = lv_to_cache.lv_full_name()
rc, out, err = cmdhandler.lv_cache_lv(
dbo.lv_full_name(), fcn, cache_options)
@@ -739,14 +812,22 @@ class LvCachePool(Lv):
cfg.load()
lv_converted = cfg.om.get_object_path_by_lvm_id(fcn)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE, 'LV to cache with object path %s not present!' %
lv_object_path)
msg = ""
if not dbo:
msg += 'CachePool LV with uuid %s and name %s not present!' % \
(lv_uuid, lv_name)
if not lv_to_cache:
msg += 'LV to cache with object path %s not present!' % \
(lv_object_path)
raise dbus.exceptions.DBusException(LV_INTERFACE, msg)
return lv_converted
@dbus.service.method(
@@ -777,25 +858,31 @@ class LvCacheLv(Lv):
@staticmethod
def _detach_lv(lv_uuid, lv_name, detach_options, destroy_cache):
# Make sure we have a dbus object representing cache pool
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
# Get current cache name
cache_pool = cfg.om.get_object_by_path(dbo.CachePool)
if dbo:
rc, out, err = cmdhandler.lv_detach_cache(
dbo.lv_full_name(), detach_options, destroy_cache)
if rc == 0:
# The cache pool gets removed as hidden and put back to
# visible, so lets delete
mt_remove_dbus_objects((cache_pool, dbo))
cfg.load()
# Get current cache name
cache_pool = cfg.om.get_object_by_path(dbo.CachePool)
uncached_lv_path = cfg.om.get_object_path_by_lvm_id(lv_name)
rc, out, err = cmdhandler.lv_detach_cache(
dbo.lv_full_name(), detach_options, destroy_cache)
if rc == 0:
# The cache pool gets removed as hidden and put back to
# visible, so lets delete
mt_remove_dbus_objects((cache_pool, dbo))
cfg.load()
uncached_lv_path = cfg.om.get_object_path_by_lvm_id(lv_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return uncached_lv_path
@dbus.service.method(
@@ -829,4 +916,4 @@ class LvSnapShot(Lv):
(SNAPSHOT_INTERFACE, self.Uuid, self.lvm_id,
merge_options, job_state), cb, cbe, False,
job_state)
background.cmd_runner(r)
cfg.worker_q.put(r)


@@ -42,81 +42,59 @@ def _quote_arg(arg):
class LVMShellProxy(object):
@staticmethod
def _read(stream):
tmp = stream.read()
if tmp:
return tmp.decode("utf-8")
return ''
# Read until we get prompt back and a result
# @param: no_output Caller expects no output to report FD
# Returns stdout, report, stderr (report is JSON!)
def _read_until_prompt(self, no_output=False):
def _read_until_prompt(self):
stdout = ""
report = ""
stderr = ""
keep_reading = True
extra_passes = 3
report_json = {}
prev_report_len = 0
# Try reading from all FDs to prevent one from filling up and causing
# a hang. Keep reading until we get the prompt back and the report
# FD does not contain valid JSON
while keep_reading:
# a hang. We are also assuming that we won't get the lvm prompt back
# until we have already received all the output from stderr and the
# report descriptor too.
while not stdout.endswith(SHELL_PROMPT):
try:
rd_fd = [
self.lvm_shell.stdout.fileno(),
self.report_stream.fileno(),
self.report_r,
self.lvm_shell.stderr.fileno()]
ready = select.select(rd_fd, [], [], 2)
for r in ready[0]:
if r == self.lvm_shell.stdout.fileno():
stdout += LVMShellProxy._read(self.lvm_shell.stdout)
elif r == self.report_stream.fileno():
report += LVMShellProxy._read(self.report_stream)
while True:
tmp = self.lvm_shell.stdout.read()
if tmp:
stdout += tmp.decode("utf-8")
else:
break
elif r == self.report_r:
while True:
tmp = os.read(self.report_r, 16384)
if tmp:
report += tmp.decode("utf-8")
if len(tmp) != 16384:
break
else:
break
elif r == self.lvm_shell.stderr.fileno():
stderr += LVMShellProxy._read(self.lvm_shell.stderr)
while True:
tmp = self.lvm_shell.stderr.read()
if tmp:
stderr += tmp.decode("utf-8")
else:
break
# Check to see if the lvm process died on us
if self.lvm_shell.poll():
raise Exception(self.lvm_shell.returncode, "%s" % stderr)
if stdout.endswith(SHELL_PROMPT):
if no_output:
keep_reading = False
else:
cur_report_len = len(report)
if cur_report_len != 0:
# Only bother to parse if we have more data
if prev_report_len != cur_report_len:
prev_report_len = cur_report_len
# Parse the JSON if it's good we are done,
# if not we will try to read some more.
try:
report_json = json.loads(report)
keep_reading = False
except ValueError:
pass
if keep_reading:
extra_passes -= 1
if extra_passes <= 0:
if len(report):
raise ValueError("Invalid json: %s" %
report)
else:
raise ValueError(
"lvm returned no JSON output!")
except IOError as ioe:
log_debug(str(ioe))
pass
return stdout, report_json, stderr
return stdout, report, stderr
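
_read_until_prompt above multiplexes stdout, stderr, and the report descriptor with select() so that no single pipe can fill up and deadlock the shell; each ready descriptor is drained with non-blocking reads, and the loop ends once stdout ends with the prompt. A trimmed sketch of the pattern, assuming the descriptors are already O_NONBLOCK (PROMPT stands in for the real SHELL_PROMPT):

import os
import select

PROMPT = "lvm> "  # stand-in for the daemon's SHELL_PROMPT constant

def read_until_prompt(stdout_fd, report_fd, stderr_fd, timeout=2.0):
    bufs = {stdout_fd: "", report_fd: "", stderr_fd: ""}
    while not bufs[stdout_fd].endswith(PROMPT):
        ready, _, _ = select.select(list(bufs), [], [], timeout)
        for fd in ready:
            try:
                while True:
                    chunk = os.read(fd, 16384)   # drain this descriptor
                    if not chunk:                # EOF
                        break
                    bufs[fd] += chunk.decode("utf-8")
            except BlockingIOError:
                pass                             # nothing more right now
    return bufs[stdout_fd], bufs[report_fd], bufs[stderr_fd]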
def _write_cmd(self, cmd):
cmd_bytes = bytes(cmd, "utf-8")
@@ -124,11 +102,6 @@ class LVMShellProxy(object):
assert (num_written == len(cmd_bytes))
self.lvm_shell.stdin.flush()
@staticmethod
def _make_non_block(stream):
flags = fcntl(stream, F_GETFL)
fcntl(stream, F_SETFL, flags | os.O_NONBLOCK)
def __init__(self):
# Create a temp directory
@@ -141,10 +114,7 @@ class LVMShellProxy(object):
except FileExistsError:
pass
# We have to open non-blocking as the other side isn't open until
# we actually fork the process.
self.report_fd = os.open(tmp_file, os.O_NONBLOCK)
self.report_stream = os.fdopen(self.report_fd, 'rb', 0)
self.report_r = os.open(tmp_file, os.O_NONBLOCK)
# Setup the environment for using our own socket for reporting
local_env = copy.deepcopy(os.environ)
@@ -155,6 +125,9 @@ class LVMShellProxy(object):
# when utilizing the lvm shell.
local_env["LVM_LOG_FILE_MAX_LINES"] = "0"
flags = fcntl(self.report_r, F_GETFL)
fcntl(self.report_r, F_SETFL, flags | os.O_NONBLOCK)
# run the lvm shell
self.lvm_shell = subprocess.Popen(
[LVM_CMD + " 32>%s" % tmp_file],
@@ -162,18 +135,20 @@ class LVMShellProxy(object):
stderr=subprocess.PIPE, close_fds=True, shell=True)
try:
LVMShellProxy._make_non_block(self.lvm_shell.stdout)
LVMShellProxy._make_non_block(self.lvm_shell.stderr)
flags = fcntl(self.lvm_shell.stdout, F_GETFL)
fcntl(self.lvm_shell.stdout, F_SETFL, flags | os.O_NONBLOCK)
flags = fcntl(self.lvm_shell.stderr, F_GETFL)
fcntl(self.lvm_shell.stderr, F_SETFL, flags | os.O_NONBLOCK)
# wait for the first prompt
errors = self._read_until_prompt(no_output=True)[2]
errors = self._read_until_prompt()[2]
if errors and len(errors):
raise RuntimeError(errors)
except:
raise
finally:
# These will get deleted when the FD count goes to zero so we
# can be sure to clean up correctly no matter how we finish
# These will get deleted when the FD count goes to zero so we can be
# sure to clean up correctly no matter how we finish
os.unlink(tmp_file)
os.rmdir(tmp_dir)
@@ -182,24 +157,33 @@ class LVMShellProxy(object):
self._write_cmd('lastlog\n')
# read everything from the STDOUT to the next prompt
stdout, report_json, stderr = self._read_until_prompt()
if 'log' in report_json:
error_msg = ""
# Walk the entire log array and build an error string
for log_entry in report_json['log']:
if log_entry['log_type'] == "error":
if error_msg:
error_msg += ', ' + log_entry['log_message']
else:
error_msg = log_entry['log_message']
stdout, report, stderr = self._read_until_prompt()
return error_msg
try:
log = json.loads(report)
return 'No error reason provided! (missing "log" section)'
if 'log' in log:
error_msg = ""
# Walk the entire log array and build an error string
for log_entry in log['log']:
if log_entry['log_type'] == "error":
if error_msg:
error_msg += ', ' + log_entry['log_message']
else:
error_msg = log_entry['log_message']
return error_msg
return 'No error reason provided! (missing "log" section)'
except ValueError:
log_error("Invalid JSON returned from LVM")
log_error("BEGIN>>\n%s\n<<END" % report)
return "Invalid JSON returned from LVM when retrieving exit code"
def call_lvm(self, argv, debug=False):
rc = 1
error_msg = ""
json_result = ""
if self.lvm_shell.poll():
raise Exception(
@@ -214,21 +198,23 @@ class LVMShellProxy(object):
self._write_cmd(cmd)
# read everything from the STDOUT to the next prompt
stdout, report_json, stderr = self._read_until_prompt()
stdout, report, stderr = self._read_until_prompt()
# Parse the report to see what happened
if 'log' in report_json:
if report_json['log'][-1:][0]['log_ret_code'] == '1':
rc = 0
else:
error_msg = self.get_error_msg()
if report and len(report):
json_result = json.loads(report)
if 'log' in json_result:
if json_result['log'][-1:][0]['log_ret_code'] == '1':
rc = 0
else:
error_msg = self.get_error_msg()
if debug or rc != 0:
log_error(('CMD: %s' % cmd))
log_error(("EC = %d" % rc))
log_error(("ERROR_MSG=\n %s\n" % error_msg))
return rc, report_json, error_msg
return rc, json_result, error_msg
def exit_shell(self):
try:
@@ -265,3 +251,5 @@ if __name__ == "__main__":
pass
except Exception:
traceback.print_exc(file=sys.stdout)
finally:
print()


@@ -68,20 +68,6 @@ class DataStore(object):
else:
table[key] = record
@staticmethod
def _pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup):
for p in c_pvs.values():
# Capture which PVs are associated with which VG
if p['vg_uuid'] not in c_pvs_in_vgs:
c_pvs_in_vgs[p['vg_uuid']] = []
if p['vg_name']:
c_pvs_in_vgs[p['vg_uuid']].append(
(p['pv_name'], p['pv_uuid']))
# Lookup for translating between /dev/<name> and pv uuid
c_lookup[p['pv_name']] = p['pv_uuid']
@staticmethod
def _parse_pvs(_pvs):
pvs = sorted(_pvs, key=lambda pk: pk['pv_name'])
@@ -95,7 +81,18 @@ class DataStore(object):
c_pvs, p['pv_uuid'], p,
['pvseg_start', 'pvseg_size', 'segtype'])
DataStore._pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup)
for p in c_pvs.values():
# Capture which PVs are associated with which VG
if p['vg_uuid'] not in c_pvs_in_vgs:
c_pvs_in_vgs[p['vg_uuid']] = []
if p['vg_name']:
c_pvs_in_vgs[p['vg_uuid']].append(
(p['pv_name'], p['pv_uuid']))
# Lookup for translating between /dev/<name> and pv uuid
c_lookup[p['pv_name']] = p['pv_uuid']
return c_pvs, c_lookup, c_pvs_in_vgs
@staticmethod
@@ -135,7 +132,17 @@ class DataStore(object):
i['pvseg_size'] = i['pv_pe_count']
i['segtype'] = 'free'
DataStore._pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup)
for p in c_pvs.values():
# Capture which PVs are associated with which VG
if p['vg_uuid'] not in c_pvs_in_vgs:
c_pvs_in_vgs[p['vg_uuid']] = []
if p['vg_name']:
c_pvs_in_vgs[p['vg_uuid']].append(
(p['pv_name'], p['pv_uuid']))
# Lookup for translating between /dev/<name> and pv uuid
c_lookup[p['pv_name']] = p['pv_uuid']
return c_pvs, c_lookup, c_pvs_in_vgs
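
The two copies of this loop (and the removed _pvs_parse_common that had unified them) build a pair of indexes in one pass over the PVs: vg_uuid -> [(pv_name, pv_uuid)] and pv_name -> pv_uuid. The shared helper amounts to the following sketch (record fields assumed from the keys used above):

def index_pvs(c_pvs):
    pvs_in_vgs = {}   # vg_uuid -> [(pv_name, pv_uuid), ...]
    lookup = {}       # /dev/<name> -> pv_uuid
    for p in c_pvs.values():
        members = pvs_in_vgs.setdefault(p['vg_uuid'], [])
        if p['vg_name']:                  # orphan PVs keep an empty list
            members.append((p['pv_name'], p['pv_uuid']))
        lookup[p['pv_name']] = p['pv_uuid']
    return pvs_in_vgs, lookup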


@@ -10,7 +10,7 @@
from . import cfg
from . import objectmanager
from . import utils
from .cfg import BUS_NAME, BASE_INTERFACE, BASE_OBJ_PATH, MANAGER_OBJ_PATH
from .cfg import BASE_INTERFACE, BASE_OBJ_PATH, MANAGER_OBJ_PATH
import threading
from . import cmdhandler
import time
@@ -20,7 +20,7 @@ import dbus.mainloop.glib
from . import lvmdb
# noinspection PyUnresolvedReferences
from gi.repository import GLib
from .fetch import StateUpdate
from .fetch import load
from .manager import Manager
import traceback
import queue
@@ -29,7 +29,7 @@ from .utils import log_debug, log_error
import argparse
import os
import sys
from .cmdhandler import LvmFlightRecorder
from .refresh import handle_external_event, event_complete
class Lvm(objectmanager.ObjectManager):
@@ -37,15 +37,54 @@ class Lvm(objectmanager.ObjectManager):
super(Lvm, self).__init__(object_path, BASE_INTERFACE)
def _discard_pending_refreshes():
# We just handled a refresh, if we have any in the queue they can be
# removed because by definition they are older than the refresh we just did.
# As we limit the number of refreshes getting into the queue
# we should only ever have one to remove.
requests = []
while not cfg.worker_q.empty():
try:
r = cfg.worker_q.get(block=False)
if r.method != handle_external_event:
requests.append(r)
else:
# Make sure we make this event complete even though it didn't
# run, otherwise no other events will get processed
event_complete()
break
except queue.Empty:
break
# Any requests we removed, but did not discard need to be re-queued
for r in requests:
cfg.worker_q.put(r)
def process_request():
while cfg.run.value != 0:
# noinspection PyBroadException
try:
req = cfg.worker_q.get(True, 5)
start = cfg.db.num_refreshes
log_debug(
"Running method: %s with args %s" %
(str(req.method), str(req.arguments)))
req.run_cmd()
end = cfg.db.num_refreshes
num_refreshes = end - start
if num_refreshes > 0:
_discard_pending_refreshes()
if num_refreshes > 1:
log_debug(
"Inspect method %s for too many refreshes" %
(str(req.method)))
log_debug("Method complete ")
except queue.Empty:
pass
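
process_request above samples cfg.db.num_refreshes around each request; if the request refreshed the model, any refresh still waiting in the queue is stale and can be dropped (the daemon also calls event_complete() for the discarded one so the event gate reopens), while unrelated requests are re-queued. A compact sketch of that queue scrub, with is_refresh as a stand-in predicate:

import queue

def discard_pending_refreshes(q, is_refresh):
    keep = []
    try:
        while not q.empty():
            req = q.get(False)
            if is_refresh(req):
                break            # drop one stale refresh, then stop
            keep.append(req)     # unrelated work is preserved
    except queue.Empty:
        pass
    for req in keep:             # put back everything we did not discard
        q.put(req)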
@@ -54,14 +93,6 @@ def process_request():
utils.log_error("process_request exception: \n%s" % st)
def check_bb_size(value):
v = int(value)
if v < 0:
raise argparse.ArgumentTypeError(
"positive integers only ('%s' invalid)" % value)
return v
def main():
start = time.time()
# Add simple command line handling
@@ -84,12 +115,6 @@ def main():
help="Use the lvm shell, not fork & exec lvm",
default=False,
dest='use_lvm_shell')
parser.add_argument(
"--blackboxsize",
help="Size of the black box flight recorder, 0 to disable",
default=10,
type=check_bb_size,
dest='bb_size')
use_session = os.getenv('LVMDBUSD_USE_SESSION', False)
@@ -98,15 +123,12 @@ def main():
cfg.args = parser.parse_args()
# We create a flight recorder in cmdhandler too, but we replace it here
# as the user may be specifying a different size. The default one in
# cmdhandler is for when we are running other code with a different main.
cfg.blackbox = LvmFlightRecorder(cfg.args.bb_size)
if cfg.args.use_lvm_shell and not cfg.args.use_json:
log_error("You cannot specify --lvmshell and --nojson")
sys.exit(1)
cmdhandler.set_execution(cfg.args.use_lvm_shell)
# List of threads that we start up
thread_list = []
@@ -120,37 +142,30 @@ def main():
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
dbus.mainloop.glib.threads_init()
cmdhandler.set_execution(cfg.args.use_lvm_shell)
if use_session:
cfg.bus = dbus.SessionBus()
else:
cfg.bus = dbus.SystemBus()
# The base name variable needs to exist for things to work.
# noinspection PyUnusedLocal
base_name = dbus.service.BusName(BUS_NAME, cfg.bus)
base_name = dbus.service.BusName(BASE_INTERFACE, cfg.bus)
cfg.om = Lvm(BASE_OBJ_PATH)
cfg.om.register_object(Manager(MANAGER_OBJ_PATH))
cfg.load = load
cfg.db = lvmdb.DataStore(cfg.args.use_json)
# Using a thread to process requests, we cannot hang the dbus library
# thread that is handling the dbus interface
thread_list.append(threading.Thread(target=process_request))
# Have a single thread handling updating lvm and the dbus model so we
# don't have multiple threads doing this as the same time
updater = StateUpdate()
thread_list.append(updater.thread)
cfg.load = updater.load
cfg.event = updater.event
cfg.load(refresh=False, emit_signal=False, need_main_thread=False)
cfg.loop = GLib.MainLoop()
for thread in thread_list:
thread.daemon = True
thread.start()
for process in thread_list:
process.daemon = True
process.start()
# Add udev watching
if cfg.args.use_udev:
@@ -172,8 +187,8 @@ def main():
cfg.loop.run()
udevwatch.remove()
for thread in thread_list:
thread.join()
for process in thread_list:
process.join()
except KeyboardInterrupt:
utils.handler(signal.SIGINT, None)
return 0


@@ -14,7 +14,9 @@ from .cfg import MANAGER_INTERFACE
import dbus
from . import cfg
from . import cmdhandler
from .fetch import load_pvs, load_vgs
from .request import RequestEntry
from .refresh import event_add
from . import udevwatch
@@ -30,16 +32,6 @@ class Manager(AutomatedProperties):
def Version(self):
return dbus.String('1.0.0')
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def _pv_create(device, create_options):
@@ -51,8 +43,15 @@ class Manager(AutomatedProperties):
MANAGER_INTERFACE, "PV Already exists!")
rc, out, err = cmdhandler.pv_create(create_options, [device])
Manager.handle_execute(rc, out, err)
return cfg.om.get_object_path_by_lvm_id(device)
if rc == 0:
cfg.load()
created_pv = cfg.om.get_object_path_by_lvm_id(device)
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
return created_pv
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
@@ -79,8 +78,14 @@ class Manager(AutomatedProperties):
MANAGER_INTERFACE, 'object path = %s not found' % p)
rc, out, err = cmdhandler.vg_create(create_options, pv_devices, name)
Manager.handle_execute(rc, out, err)
return cfg.om.get_object_path_by_lvm_id(name)
if rc == 0:
cfg.load()
return cfg.om.get_object_path_by_lvm_id(name)
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
@@ -155,24 +160,16 @@ class Manager(AutomatedProperties):
return p
return '/'
@staticmethod
def _use_lvm_shell(yes_no):
return dbus.Boolean(cmdhandler.set_execution(yes_no))
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
in_signature='b', out_signature='b',
async_callbacks=('cb', 'cbe'))
def UseLvmShell(self, yes_no, cb, cbe):
in_signature='b', out_signature='b')
def UseLvmShell(self, yes_no):
"""
Allow the client to enable/disable lvm shell, used for testing
:param yes_no:
:param cb: dbus python call back parameter, not client visible
:param cbe: dbus python error call back parameter, not client visible
:return: Nothing
"""
r = RequestEntry(-1, Manager._use_lvm_shell, (yes_no,), cb, cbe, False)
cfg.worker_q.put(r)
return dbus.Boolean(cmdhandler.set_execution(yes_no))
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
@@ -186,8 +183,7 @@ class Manager(AutomatedProperties):
"udev monitoring")
# We are dependent on external events now to stay current!
cfg.ee = True
utils.log_debug("ExternalEvent %s" % command)
cfg.event()
event_add((command,))
return dbus.Int32(0)
@staticmethod
@@ -197,8 +193,15 @@ class Manager(AutomatedProperties):
activate, cache, device_path,
major_minor, scan_options)
Manager.handle_execute(rc, out, err)
return '/'
if rc == 0:
# This could potentially change the state quite a bit, so lets
# update everything to be safe
cfg.load()
return '/'
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,


@@ -18,7 +18,7 @@ from .utils import vg_obj_path_generate, n, pv_obj_path_generate, \
from .loader import common
from .request import RequestEntry
from .state import State
from .utils import round_size
from .utils import round_size, mt_remove_dbus_objects
# noinspection PyUnusedLocal
@@ -135,30 +135,23 @@ class Pv(AutomatedProperties):
def _remove(pv_uuid, pv_name, remove_options):
# Remove the PV, if successful then remove from the model
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
rc, out, err = cmdhandler.pv_remove(pv_name, remove_options)
Pv.handle_execute(rc, out, err)
return '/'
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(pv_uuid, pv_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
if not dbo:
if dbo:
rc, out, err = cmdhandler.pv_remove(pv_name, remove_options)
if rc == 0:
mt_remove_dbus_objects((dbo,))
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
return dbo
return '/'
@dbus.service.method(
dbus_interface=PV_INTERFACE,
@@ -175,11 +168,22 @@ class Pv(AutomatedProperties):
@staticmethod
def _resize(pv_uuid, pv_name, new_size_bytes, resize_options):
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
rc, out, err = cmdhandler.pv_resize(pv_name, new_size_bytes,
if dbo:
rc, out, err = cmdhandler.pv_resize(pv_name, new_size_bytes,
resize_options)
Pv.handle_execute(rc, out, err)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
return '/'
@dbus.service.method(
@@ -197,10 +201,21 @@ class Pv(AutomatedProperties):
@staticmethod
def _allocation_enabled(pv_uuid, pv_name, yes_no, allocation_options):
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
rc, out, err = cmdhandler.pv_allocatable(
pv_name, yes_no, allocation_options)
Pv.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
if dbo:
rc, out, err = cmdhandler.pv_allocatable(
pv_name, yes_no, allocation_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE, 'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
return '/'
@dbus.service.method(


@@ -0,0 +1,45 @@
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Try and minimize the refreshes we do.
import threading
from .request import RequestEntry
from . import cfg
from . import utils
_rlock = threading.RLock()
_count = 0
def handle_external_event(command):
utils.log_debug("External event: '%s'" % command)
event_complete()
cfg.load()
def event_add(params):
global _rlock
global _count
with _rlock:
if _count == 0:
_count += 1
r = RequestEntry(
-1, handle_external_event,
params, None, None, False)
cfg.worker_q.put(r)
def event_complete():
global _rlock
global _count
with _rlock:
if _count > 0:
_count -= 1
return _count
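
event_add() and event_complete() above form a counting gate: at most one external-event refresh sits in the worker queue at any time, because a single reload reflects every change that preceded it. A self-contained sketch of the gate (enqueue stands in for cfg.worker_q.put):

import threading

_rlock = threading.RLock()
_count = 0

def event_add(enqueue, params):
    # Queue a refresh only if none is already pending.
    global _count
    with _rlock:
        if _count == 0:
            _count += 1
            enqueue(params)

def event_complete():
    # Called once a queued refresh has run (or been discarded).
    global _count
    with _rlock:
        if _count > 0:
            _count -= 1
        return _count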


@@ -19,6 +19,7 @@ from .utils import log_error, mt_async_result
class RequestEntry(object):
def __init__(self, tmo, method, arguments, cb, cb_error,
return_tuple=True, job_state=None):
self.tmo = tmo
self.method = method
self.arguments = arguments
self.cb = cb
@@ -34,38 +35,31 @@ class RequestEntry(object):
self._return_tuple = return_tuple
self._job_state = job_state
if tmo < 0:
if self.tmo < 0:
# Client is willing to block forever
pass
elif tmo == 0:
self._return_job()
else:
# Note: using 990 instead of 1000 for second to ms conversion to
# account for overhead. Goal is to return just before the
# timeout amount has expired. Better to be a little early than
# late.
self.timer_id = GLib.timeout_add(
tmo * 990, RequestEntry._request_timeout, self)
self.timer_id = GLib.timeout_add_seconds(
tmo, RequestEntry._request_timeout, self)
@staticmethod
def _request_timeout(r):
"""
Method which gets called when the timer runs out!
:param r: RequestEntry which timed out
:return: Result of timer_expired
:return: Nothing
"""
return r.timer_expired()
r.timer_expired()
def _return_job(self):
# Return job is only called when we create a request object or when
# we pop a timer. In both cases we are running in the correct context
# and do not need to schedule the call back in main context.
self._job = Job(self, self._job_state)
cfg.om.register_object(self._job, True)
if self._return_tuple:
self.cb(('/', self._job.dbus_object_path()))
mt_async_result(self.cb, ('/', self._job.dbus_object_path()))
else:
self.cb(self._job.dbus_object_path())
mt_async_result(self.cb, self._job.dbus_object_path())
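
One side of the hunk above arms the timer with GLib.timeout_add_seconds(tmo, ...); the other converts seconds to milliseconds with a factor of 990 instead of 1000 so the Job handle is returned just before the client's own deadline expires. A sketch of the early-firing variant, assuming a running GLib main loop as in the daemon:

from gi.repository import GLib

def arm_request_timer(tmo_seconds, on_timeout, request):
    # 990 ms per second: fire slightly early so we beat the client's
    # timeout. on_timeout(request) must return False so the GLib source
    # fires only once.
    return GLib.timeout_add(tmo_seconds * 990, on_timeout, request)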
def run_cmd(self):
try:
@@ -132,6 +126,7 @@ class RequestEntry(object):
mt_async_result(self.cb_error, error_exception)
else:
# We have a job and it's complete, indicate that it's done.
# TODO: We need to signal the job is done too.
self._job.Complete = True
self._job = None


@@ -9,6 +9,7 @@
import pyudev
import threading
from .refresh import event_add
from . import cfg
observer = None
@@ -37,7 +38,7 @@ def filter_event(action, device):
refresh = True
if refresh:
cfg.event()
event_add(('udev',))
def add():


@@ -17,7 +17,6 @@ import datetime
import dbus
from lvmdbusd import cfg
# noinspection PyUnresolvedReferences
from gi.repository import GLib
import threading
@@ -509,17 +508,11 @@ def _async_result(call_back, results):
log_debug('Results = %s' % str(results))
call_back(results)
# Return result in main thread
def mt_async_result(call_back, results):
GLib.idle_add(_async_result, call_back, results)
# Take the supplied function and run it on the main thread and not wait for
# a result!
def mt_run_no_wait(function, param):
GLib.idle_add(function, param)
# Run the supplied function and arguments on the main thread and wait for them
# to complete while allowing the ability to get the return value too.
#
@@ -530,7 +523,6 @@ class MThreadRunner(object):
@staticmethod
def runner(obj):
# noinspection PyProtectedMember
obj._run()
with obj.cond:
obj.function_complete = True


@@ -145,35 +145,29 @@ class Vg(AutomatedProperties):
@staticmethod
def fetch_new_lv(vg_name, lv_name):
cfg.load()
return cfg.om.get_object_path_by_lvm_id("%s/%s" % (vg_name, lv_name))
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(vg_uuid, vg_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(vg_uuid, vg_name)
if not dbo:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(vg_uuid, vg_name))
return dbo
@staticmethod
def _rename(uuid, vg_name, new_name, rename_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_rename(
vg_name, new_name, rename_options)
Vg.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_rename(vg_name, new_name,
rename_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return '/'
@dbus.service.method(
@@ -190,10 +184,24 @@ class Vg(AutomatedProperties):
@staticmethod
def _remove(uuid, vg_name, remove_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
# Remove the VG, if successful then remove from the model
rc, out, err = cmdhandler.vg_remove(vg_name, remove_options)
Vg.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
# Remove the VG, if successful then remove from the model
rc, out, err = cmdhandler.vg_remove(vg_name, remove_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return '/'
@dbus.service.method(
@@ -208,9 +216,26 @@ class Vg(AutomatedProperties):
@staticmethod
def _change(uuid, vg_name, change_options):
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_change(change_options, vg_name)
Vg.handle_execute(rc, out, err)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_change(change_options, vg_name)
# To use an example with d-feet (Method input)
# {"activate": __import__('gi.repository.GLib', globals(),
# locals(), ['Variant']).Variant("s", "n")}
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return '/'
# TODO: This should be broken into a number of different methods
@@ -231,24 +256,34 @@ class Vg(AutomatedProperties):
@staticmethod
def _reduce(uuid, vg_name, missing, pv_object_paths, reduce_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
pv_devices = []
if dbo:
pv_devices = []
# If pv_object_paths is not empty, then get the device paths
if pv_object_paths and len(pv_object_paths) > 0:
for pv_op in pv_object_paths:
pv = cfg.om.get_object_by_path(pv_op)
if pv:
pv_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Object path not found = %s!' % pv_op)
# If pv_object_paths is not empty, then get the device paths
if pv_object_paths and len(pv_object_paths) > 0:
for pv_op in pv_object_paths:
pv = cfg.om.get_object_by_path(pv_op)
if pv:
pv_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Object path not found = %s!' % pv_op)
rc, out, err = cmdhandler.vg_reduce(vg_name, missing, pv_devices,
reduce_options)
Vg.handle_execute(rc, out, err)
rc, out, err = cmdhandler.vg_reduce(vg_name, missing, pv_devices,
reduce_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return '/'
@dbus.service.method(
@@ -265,26 +300,36 @@ class Vg(AutomatedProperties):
@staticmethod
def _extend(uuid, vg_name, pv_object_paths, extend_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
extend_devices = []
if dbo:
extend_devices = []
for i in pv_object_paths:
pv = cfg.om.get_object_by_path(i)
if pv:
extend_devices.append(pv.lvm_id)
for i in pv_object_paths:
pv = cfg.om.get_object_by_path(i)
if pv:
extend_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV Object path not found = %s!' % i)
if len(extend_devices):
rc, out, err = cmdhandler.vg_extend(vg_name, extend_devices,
extend_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV Object path not found = %s!' % i)
if len(extend_devices):
rc, out, err = cmdhandler.vg_extend(vg_name, extend_devices,
extend_options)
Vg.handle_execute(rc, out, err)
VG_INTERFACE, 'No pv_object_paths provided!')
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'No pv_object_paths provided!')
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return '/'
@dbus.service.method(
@@ -321,24 +366,33 @@ class Vg(AutomatedProperties):
create_options):
# Make sure we have a dbus object representing it
pv_dests = []
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Destination (%s) not found' % pr[0])
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Destination (%s) not found' % pr[0])
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
rc, out, err = cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests)
rc, out, err = cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
if rc == 0:
return Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -374,13 +428,25 @@ class Vg(AutomatedProperties):
def _lv_create_linear(uuid, vg_name, name, size_bytes,
thin_pool, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -400,12 +466,24 @@ class Vg(AutomatedProperties):
def _lv_create_striped(uuid, vg_name, name, size_bytes, num_stripes,
stripe_size_kb, thin_pool, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_striped(
vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_striped(
vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -428,11 +506,25 @@ class Vg(AutomatedProperties):
def _lv_create_mirror(uuid, vg_name, name, size_bytes,
num_copies, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -453,12 +545,26 @@ class Vg(AutomatedProperties):
def _lv_create_raid(uuid, vg_name, name, raid_type, size_bytes,
num_stripes, stripe_size_kb, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_raid(
vg_name, create_options, name, raid_type, size_bytes,
num_stripes, stripe_size_kb)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_raid(
vg_name, create_options, name, raid_type, size_bytes,
num_stripes, stripe_size_kb)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -479,27 +585,33 @@ class Vg(AutomatedProperties):
def _create_pool(uuid, vg_name, meta_data_lv, data_lv,
create_options, create_method):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
# Retrieve the full names for the metadata and data lv
md = cfg.om.get_object_by_path(meta_data_lv)
data = cfg.om.get_object_by_path(data_lv)
if md and data:
if dbo and md and data:
new_name = data.Name
rc, out, err = create_method(
md.lv_full_name(), data.lv_full_name(), create_options)
if rc == 0:
mt_remove_dbus_objects((md, data))
Vg.handle_execute(rc, out, err)
cache_pool_lv = Vg.fetch_new_lv(vg_name, new_name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
msg = ""
if not dbo:
msg += 'VG with uuid %s and name %s not present!' % \
(uuid, vg_name)
if not md:
msg += 'Meta data LV with object path %s not present!' % \
(meta_data_lv)
@@ -510,7 +622,7 @@ class Vg(AutomatedProperties):
raise dbus.exceptions.DBusException(VG_INTERFACE, msg)
return Vg.fetch_new_lv(vg_name, new_name)
return cache_pool_lv
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -544,21 +656,33 @@ class Vg(AutomatedProperties):
pv_devices = []
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
# Check for existence of pv object paths
for p in pv_object_paths:
pv = cfg.om.get_object_by_path(p)
if pv:
pv_devices.append(pv.Name)
if dbo:
# Check for existence of pv object paths
for p in pv_object_paths:
pv = cfg.om.get_object_by_path(p)
if pv:
pv_devices.append(pv.Name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV object path = %s not found' % p)
rc, out, err = cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options)
if rc == 0:
cfg.load()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV object path = %s not found' % p)
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
rc, out, err = cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -596,12 +720,25 @@ class Vg(AutomatedProperties):
@staticmethod
def _vg_add_rm_tags(uuid, vg_name, tags_add, tags_del, tag_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
rc, out, err = cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
if dbo:
rc, out, err = cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -638,10 +775,23 @@ class Vg(AutomatedProperties):
@staticmethod
def _vg_change_set(uuid, vg_name, method, value, options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = method(vg_name, value, options)
Vg.handle_execute(rc, out, err)
return '/'
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = method(vg_name, value, options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -699,11 +849,23 @@ class Vg(AutomatedProperties):
def _vg_activate_deactivate(uuid, vg_name, activate, control_flags,
options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options)
Vg.handle_execute(rc, out, err)
return '/'
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options)
if rc == 0:
cfg.load()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
@dbus.service.method(
dbus_interface=VG_INTERFACE,


@@ -2745,8 +2745,7 @@ static response handler(daemon_state s, client_handle h, request r)
"expected = %s", state->token,
"received = %s", token,
"update_pid = " FMTd64, (int64_t)state->update_pid,
"reason = %s", "another command has populated the cache",
NULL);
"reason = %s", "another command has populated the cache");
}
DEBUGLOG(state, "token_update end len %d pid %d new token %s",
@@ -2779,8 +2778,7 @@ static response handler(daemon_state s, client_handle h, request r)
"expected = %s", state->token,
"received = %s", token,
"update_pid = " FMTd64, (int64_t)state->update_pid,
"reason = %s", "another command has populated the cache",
NULL);
"reason = %s", "another command has populated the cache");
}
/* If a pid doing update was cancelled, ignore its update messages. */
@@ -2795,8 +2793,7 @@ static response handler(daemon_state s, client_handle h, request r)
"expected = %s", state->token,
"received = %s", token,
"update_pid = " FMTd64, (int64_t)state->update_pid,
"reason = %s", "another command has populated the lvmetad cache",
NULL);
"reason = %s", "another command has populated the lvmetad cache");
}
pthread_mutex_unlock(&state->token_lock);

View File
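
The three hunks above differ only in whether a trailing NULL is passed to the reply-building call. As a generic illustration of why such a sentinel matters in a variadic key/value API (a sketch, not the actual lvmetad reply API — reply_simple here is hypothetical):

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: consumes "key", "value" pairs until a NULL key. */
static void reply_simple(const char *first, ...)
{
    va_list ap;
    const char *key = first;

    va_start(ap, first);
    while (key) {
        const char *val = va_arg(ap, const char *);
        printf("%s %s\n", key, val);
        key = va_arg(ap, const char *);    /* NULL terminates the list */
    }
    va_end(ap);
}

int main(void)
{
    reply_simple("expected =", "token0",
                 "reason =", "another command has populated the cache",
                 NULL);
    return 0;
}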

@@ -19,12 +19,10 @@
#define MIN_ARGV_SIZE 8
static const char *const polling_ops[] = {
[PVMOVE] = LVMPD_REQ_PVMOVE,
[CONVERT] = LVMPD_REQ_CONVERT,
[MERGE] = LVMPD_REQ_MERGE,
[MERGE_THIN] = LVMPD_REQ_MERGE_THIN
};
static const char *const const polling_ops[] = { [PVMOVE] = LVMPD_REQ_PVMOVE,
[CONVERT] = LVMPD_REQ_CONVERT,
[MERGE] = LVMPD_REQ_MERGE,
[MERGE_THIN] = LVMPD_REQ_MERGE_THIN };
const char *polling_op(enum poll_type type)
{

View File
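
Both versions of polling_ops above rely on C99 designated array initializers to map an enum onto request strings. A self-contained sketch of the idiom (the string values are placeholders, not the real LVMPD_REQ_* macros, and the fallback string is assumed):

#include <stdio.h>

enum poll_type { PVMOVE, CONVERT, MERGE, MERGE_THIN, POLL_TYPE_MAX };

static const char *const polling_ops[POLL_TYPE_MAX] = {
    [PVMOVE]     = "pvmove",
    [CONVERT]    = "convert",
    [MERGE]      = "merge",
    [MERGE_THIN] = "merge_thin",
};

/* Return the request string for a poll type, as polling_op() above presumably does. */
static const char *polling_op(enum poll_type type)
{
    return (type < POLL_TYPE_MAX) ? polling_ops[type] : "<undefined>";
}

int main(void)
{
    printf("%s\n", polling_op(MERGE));
    return 0;
}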

@@ -19,8 +19,6 @@
#include <fcntl.h>
#include <signal.h>
static const char LVM_SYSTEM_DIR[] = "LVM_SYSTEM_DIR=";
static char *_construct_full_lvname(const char *vgname, const char *lvname)
{
char *name;
@@ -54,7 +52,7 @@ static char *_construct_lvm_system_dir_env(const char *sysdir)
*env = '\0';
if (sysdir && dm_snprintf(env, l, "%s%s", LVM_SYSTEM_DIR, sysdir) < 0) {
if (sysdir && dm_snprintf(env, l, "LVM_SYSTEM_DIR=%s", sysdir) < 0) {
dm_free(env);
env = NULL;
}
@@ -261,8 +259,8 @@ static void _pdlv_locked_dump(struct buffer *buff, const struct lvmpolld_lv *pdl
buffer_append(buff, tmp);
if (dm_snprintf(tmp, sizeof(tmp), "\t\tlvm_command_interval=\"%s\"\n", pdlv->sinterval ?: "<undefined>") > 0)
buffer_append(buff, tmp);
if (dm_snprintf(tmp, sizeof(tmp), "\t\t%s\"%s\"\n", LVM_SYSTEM_DIR,
(*pdlv->lvm_system_dir_env ? (pdlv->lvm_system_dir_env + (sizeof(LVM_SYSTEM_DIR) - 1)) : "<undefined>")) > 0)
if (dm_snprintf(tmp, sizeof(tmp), "\t\tLVM_SYSTEM_DIR=\"%s\"\n",
(*pdlv->lvm_system_dir_env ? (pdlv->lvm_system_dir_env + strlen("LVM_SYSTEM_DIR=")) : "<undefined>")) > 0)
buffer_append(buff, tmp);
if (dm_snprintf(tmp, sizeof(tmp), "\t\tlvm_command_pid=%d\n", pdlv->cmd_pid) > 0)
buffer_append(buff, tmp);

View File
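
The hunk above swaps the sizeof-on-a-literal idiom for an explicit strlen() on the same prefix. Both compute the length of "LVM_SYSTEM_DIR=" without its terminating NUL; a minimal demonstration:

#include <stdio.h>
#include <string.h>

static const char LVM_SYSTEM_DIR[] = "LVM_SYSTEM_DIR=";

int main(void)
{
    /* sizeof counts the trailing NUL of the array, hence the -1;
     * strlen computes the same value (usually folded at compile time). */
    printf("%zu %zu\n", sizeof(LVM_SYSTEM_DIR) - 1, strlen("LVM_SYSTEM_DIR="));
    return 0;    /* prints: 15 15 */
}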

@@ -1,14 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/

View File

@@ -341,9 +341,6 @@
/* Define to 1 if the system has the type `ptrdiff_t'. */
#undef HAVE_PTRDIFF_T
/* Define to 1 if the compiler has the `__builtin_clz` builtin. */
#undef HAVE___BUILTIN_CLZ
/* Define to 1 if you have the <readline/history.h> header file. */
#undef HAVE_READLINE_HISTORY_H

View File

@@ -358,10 +358,6 @@ int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv)
{
return 1;
}
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv)
{
return 1;
}
int pv_uses_vg(struct physical_volume *pv,
struct volume_group *vg)
{
@@ -659,10 +655,6 @@ int target_present(struct cmd_context *cmd, const char *target_name,
&maj, &min, &patchlevel);
}
/*
* When '*info' is NULL, returns 1 only when LV is active.
* When '*info' != NULL, returns 1 when info structure is populated.
*/
static int _lv_info(struct cmd_context *cmd, const struct logical_volume *lv,
int use_layer, struct lvinfo *info,
const struct lv_segment *seg,
@@ -696,6 +688,32 @@ static int _lv_info(struct cmd_context *cmd, const struct logical_volume *lv,
if (seg_status) {
/* TODO: for now it's mess with seg_status */
seg_status->seg = seg;
if (lv_is_merging_cow(lv)) {
if (lv_has_target_type(cmd->mem, origin_from_cow(lv), NULL, TARGET_NAME_SNAPSHOT_MERGE)) {
/*
* When the snapshot-merge has not yet started, query COW LVs as is.
* When merge is in progress, query merging origin LV instead.
* COW volume is already mapped as error target in this case.
*/
lv = origin_from_cow(lv);
seg_status->seg = first_seg(lv);
log_debug_activation("Snapshot merge is in progress, querying status of %s instead.",
display_lvname(lv));
}
} else if (!use_layer && lv_is_origin(lv) && !lv_is_external_origin(lv)) {
/*
* Query status for 'layered' (-real) device most of the time,
* only when snapshot merge started, query its progress.
* TODO: a single LV may need a couple of statuses to be exposed at once....
* but this needs more logical background
*/
if (!lv_is_merging_origin(lv) ||
!lv_has_target_type(cmd->mem, origin_from_cow(lv), NULL, TARGET_NAME_SNAPSHOT_MERGE))
use_layer = 1;
} else if (lv_is_cow(lv)) {
/* Handle fictional lvm2 snapshot and query snapshotX volume */
seg_status->seg = find_snapshot(lv);
}
}
if (!dev_manager_info(cmd, lv,
@@ -749,96 +767,44 @@ int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
}
/*
* Returns 1 if lv_with_info_and_seg_status info structure populated,
* Returns 1 if lv_seg_status structure populated,
* else 0 on failure or if device not active locally.
*
* When seg_status parsing had troubles it will set type to SEG_STATUS_UNKNOWN.
*
* Using usually one ioctl to obtain info and status.
* More complex segment do collect info from one device,
* but status from another device.
*
* TODO: further improve with more statuses (i.e. snapshot's origin/merge)
*/
int lv_info_with_seg_status(struct cmd_context *cmd,
const struct lv_segment *lv_seg,
struct lv_with_info_and_seg_status *status,
int with_open_count, int with_read_ahead)
int lv_status(struct cmd_context *cmd, const struct lv_segment *lv_seg,
int use_layer, struct lv_seg_status *lv_seg_status)
{
const struct logical_volume *olv, *lv = status->lv = lv_seg->lv;
if (!activation())
return 0;
if (lv_is_used_cache_pool(lv)) {
/* INFO is not set as cache-pool cannot be active.
* STATUS is collected from cache LV */
lv_seg = get_only_segment_using_this_lv(lv);
(void) _lv_info(cmd, lv_seg->lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
}
return _lv_info(cmd, lv_seg->lv, use_layer, NULL, lv_seg, lv_seg_status, 0, 0);
}
if (lv_is_thin_pool(lv)) {
/* Always collect status for '-tpool' */
if (_lv_info(cmd, lv, 1, &status->info, lv_seg, &status->seg_status, 0, 0) &&
(status->seg_status.type == SEG_STATUS_THIN_POOL)) {
/* There is -tpool device, but query 'active' state of 'fake' thin-pool */
if (!_lv_info(cmd, lv, 0, NULL, NULL, NULL, 0, 0) &&
!status->seg_status.thin_pool->needs_check)
status->info.exists = 0; /* So pool LV is not active */
}
return 1;
} else if (lv_is_external_origin(lv)) {
if (!_lv_info(cmd, lv, 0, &status->info, NULL, NULL,
with_open_count, with_read_ahead))
return_0;
/*
* Returns 1 if lv_with_info_and_seg_status structure populated,
* else 0 on failure or if device not active locally.
*
* This is the same as calling lv_info and lv_status,
* but it's done in one go with one ioctl if possible!
*/
int lv_info_with_seg_status(struct cmd_context *cmd, const struct logical_volume *lv,
const struct lv_segment *lv_seg, int use_layer,
struct lv_with_info_and_seg_status *status,
int with_open_count, int with_read_ahead)
{
if (!activation())
return 0;
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
} else if (lv_is_origin(lv)) {
/* Query segment status for 'layered' (-real) device most of the time,
* only for merging snapshot, query its progress.
* TODO: a single LV may need a couple of statuses to be exposed at once....
* but this needs more logical background
*/
/* Show INFO for actual origin and grab status for merging origin */
if (!_lv_info(cmd, lv, 0, &status->info, lv_seg,
lv_is_merging_origin(lv) ? &status->seg_status : NULL,
with_open_count, with_read_ahead))
return_0;
if (lv == lv_seg->lv)
return _lv_info(cmd, lv, use_layer, &status->info, lv_seg, &status->seg_status,
with_open_count, with_read_ahead);
if (status->info.exists &&
(status->seg_status.type != SEG_STATUS_SNAPSHOT)) /* Not merging */
/* Grab STATUS from layered -real */
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
} else if (lv_is_cow(lv)) {
if (lv_is_merging_cow(lv)) {
olv = origin_from_cow(lv);
if (!_lv_info(cmd, olv, 0, &status->info, first_seg(olv), &status->seg_status,
with_open_count, with_read_ahead))
return_0;
if (status->seg_status.type == SEG_STATUS_SNAPSHOT) {
log_debug_activation("Snapshot merge is in progress, querying status of %s instead.",
display_lvname(lv));
/*
* When merge is in progress, query merging origin LV instead.
* COW volume is already mapped as error target in this case.
*/
status->lv = olv;
return 1;
}
/* Merge not yet started, still a snapshot... */
}
/* Handle fictional lvm2 snapshot and query snapshotX volume */
lv_seg = find_snapshot(lv);
}
return _lv_info(cmd, lv, 0, &status->info, lv_seg, &status->seg_status,
with_open_count, with_read_ahead);
/*
* If the info is requested for an LV and the segment
* status for a segment that belongs to another LV,
* we need to acquire info and status separately!
*/
return _lv_info(cmd, lv, use_layer, &status->info, NULL, NULL, with_open_count, with_read_ahead) &&
_lv_info(cmd, lv_seg->lv, use_layer, NULL, lv_seg, &status->seg_status, 0, 0);
}
#define OPEN_COUNT_CHECK_RETRIES 25
@@ -1175,7 +1141,7 @@ int lv_cache_status(const struct logical_volume *cache_lv,
return 0;
}
if (!lv_info(cache_lv->vg->cmd, cache_lv, 1, NULL, 0, 0)) {
if (!lv_info(cache_lv->vg->cmd, cache_lv, 0, NULL, 0, 0)) {
log_error("Cannot check status for locally inactive cache volume %s.",
display_lvname(cache_lv));
return 0;
@@ -1940,7 +1906,7 @@ int monitor_dev_for_events(struct cmd_context *cmd, const struct logical_volume
/* FIXME specify events */
if (!monitor_fn(seg, 0)) {
log_error("%s: %s segment monitoring function failed.",
display_lvname(lv), lvseg_name(seg));
display_lvname(lv), seg->segtype->name);
return 0;
}
} else
@@ -1948,13 +1914,16 @@ int monitor_dev_for_events(struct cmd_context *cmd, const struct logical_volume
/* Check [un]monitor results */
/* Try a couple times if pending, but not forever... */
for (i = 0;; i++) {
for (i = 0; i < 40; i++) {
pending = 0;
monitored = seg->segtype->ops->target_monitored(seg, &pending);
if (!pending || i >= 40)
if (pending ||
(!monitored && monitor) ||
(monitored && !monitor))
log_very_verbose("%s %smonitoring still pending: waiting...",
display_lvname(lv), monitor ? "" : "un");
else
break;
log_very_verbose("%s %smonitoring still pending: waiting...",
display_lvname(lv), monitor ? "" : "un");
usleep(10000 * i);
}
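
The rewritten loop above bounds the wait at 40 iterations and sleeps progressively longer between polls. A stand-alone sketch of that retry pattern, with the monitoring query stubbed out:

#include <stdio.h>
#include <unistd.h>

static int still_pending(int i) { return i < 3; }    /* stub: settles on the 4th poll */

int main(void)
{
    int i;

    for (i = 0; i < 40; i++) {
        if (!still_pending(i))
            break;
        printf("monitoring still pending: waiting...\n");
        usleep(10000 * i);    /* 0 ms, 10 ms, 20 ms, ... growing back-off */
    }
    return 0;
}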
@@ -2574,77 +2543,6 @@ int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv)
return r;
}
/* Remove any existing, closed mapped device by @name */
static int _remove_dm_dev_by_name(const char *name)
{
int r = 0;
struct dm_task *dmt;
struct dm_info info;
if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
return_0;
/* Check, if the device exists. */
if (dm_task_set_name(dmt, name) && dm_task_run(dmt) && dm_task_get_info(dmt, &info)) {
dm_task_destroy(dmt);
/* Ignore non-existing or open dm devices */
if (!info.exists || info.open_count)
return 1;
if (!(dmt = dm_task_create(DM_DEVICE_REMOVE)))
return_0;
if (dm_task_set_name(dmt, name))
r = dm_task_run(dmt);
}
dm_task_destroy(dmt);
return r;
}
/* Work all segments of @lv removing any existing, closed "*-missing_N_0" sub devices. */
static int _lv_remove_any_missing_subdevs(struct logical_volume *lv)
{
if (lv) {
uint32_t seg_no = 0;
char name[257];
struct lv_segment *seg;
dm_list_iterate_items(seg, &lv->segments) {
if (seg->area_count != 1)
return_0;
if (dm_snprintf(name, sizeof(name), "%s-%s-missing_%u_0", seg->lv->vg->name, seg->lv->name, seg_no) < 0)
return 0;
if (!_remove_dm_dev_by_name(name))
return 0;
seg_no++;
}
}
return 1;
}
/* Remove any "*-missing_*" sub devices added by the activation layer for an rmeta/rimage missing PV mapping */
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv)
{
uint32_t s;
struct lv_segment *seg = first_seg(lv);
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_LV &&
!_lv_remove_any_missing_subdevs(seg_lv(seg, s)))
return 0;
if (seg->meta_areas && seg_metatype(seg, s) == AREA_LV &&
!_lv_remove_any_missing_subdevs(seg_metalv(seg, s)))
return 0;
}
return 1;
}
/*
* Does PV use VG somewhere in its construction?
* Returns 1 on failure.

View File
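
The net effect of the lv_info_with_seg_status() rework above: when the requested info and the requested segment status concern the same LV, one query (one ioctl) serves both; when the status belongs to a segment of another LV (snapshot merge, used cache pool), info and status are acquired separately. A compile-able paraphrase with the lvm2 internals stubbed out:

#include <stdio.h>

struct lv { const char *name; };
struct lv_segment { struct lv *lv; };

/* Stub standing in for the single combined info+status query. */
static int query(const struct lv *info_lv, const struct lv_segment *status_seg)
{
    printf("query: info=%s status=%s\n",
           info_lv ? info_lv->name : "-",
           status_seg ? status_seg->lv->name : "-");
    return 1;
}

static int info_with_seg_status(const struct lv *lv, const struct lv_segment *lv_seg)
{
    if (lv == lv_seg->lv)
        return query(lv, lv_seg);    /* one ioctl covers both */

    /* Info for one LV but status for a segment of another LV:
     * acquire them separately, as the comment in the hunk notes. */
    return query(lv, NULL) && query(lv_seg->lv, lv_seg);
}

int main(void)
{
    struct lv pool = { "vg/pool_tdata" }, lv = { "vg/lvol0" };
    struct lv_segment seg = { &pool };
    return !info_with_seg_status(&lv, &seg);
}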

@@ -54,12 +54,11 @@ struct lv_seg_status {
};
struct lv_with_info_and_seg_status {
const struct logical_volume *lv; /* input */
int info_ok;
const struct logical_volume *lv; /* output */
struct lvinfo info; /* output */
int seg_part_of_lv; /* output */
struct lv_seg_status seg_status; /* output, see lv_seg_status */
/* TODO: add extra status for snapshot origin */
struct lv_seg_status seg_status; /* input/output, see lv_seg_status */
};
struct lv_activate_opts {
@@ -124,8 +123,6 @@ int lv_deactivate(struct cmd_context *cmd, const char *lvid_s, const struct logi
int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv);
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv);
/*
* Returns 1 if info structure has been populated, else 0 on failure.
* When lvinfo* is NULL, it returns 1 if the device is locally active, 0 otherwise.
@@ -135,6 +132,13 @@ int lv_info(struct cmd_context *cmd, const struct logical_volume *lv, int use_la
int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
struct lvinfo *info, int with_open_count, int with_read_ahead);
/*
* Returns 1 if lv_seg_status structure has been populated,
* else 0 on failure or if device not active locally.
*/
int lv_status(struct cmd_context *cmd, const struct lv_segment *lv_seg,
int use_layer, struct lv_seg_status *lv_seg_status);
/*
* Returns 1 if lv_info_and_seg_status structure has been populated,
* else 0 on failure or if device not active locally.
@@ -142,8 +146,8 @@ int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
* lv_info_with_seg_status is the same as calling lv_info and then lv_status,
* but this fn tries to do that with one ioctl if possible.
*/
int lv_info_with_seg_status(struct cmd_context *cmd,
const struct lv_segment *lv_seg,
int lv_info_with_seg_status(struct cmd_context *cmd, const struct logical_volume *lv,
const struct lv_segment *lv_seg, int use_layer,
struct lv_with_info_and_seg_status *status,
int with_open_count, int with_read_ahead);

File diff suppressed because it is too large.

View File

@@ -192,9 +192,6 @@ static int _get_env_vars(struct cmd_context *cmd)
}
}
if (strcmp((getenv("LVM_RUN_BY_DMEVENTD") ? : "0"), "1") == 0)
init_run_by_dmeventd(cmd);
return 1;
}
@@ -1000,7 +997,7 @@ static int _init_dev_cache(struct cmd_context *cmd)
if (!(cn = find_config_tree_array(cmd, devices_scan_CFG, NULL))) {
log_error(INTERNAL_ERROR "Unable to find configuration for devices/scan.");
return 0;
return_0;
}
for (cv = cn->v; cv; cv = cv->next) {
@@ -1758,15 +1755,6 @@ bad:
return 0;
}
int init_run_by_dmeventd(struct cmd_context *cmd)
{
init_dmeventd_monitor(DMEVENTD_MONITOR_IGNORE);
init_ignore_suspended_devices(1);
init_disable_dmeventd_monitoring(1); /* Lock settings */
return 0;
}
void destroy_config_context(struct cmd_context *cmd)
{
_destroy_config(cmd);

View File

@@ -89,7 +89,6 @@ struct cmd_context {
*/
const char *cmd_line;
const char *name; /* needed before cmd->command is set */
struct command_name *cname;
struct command *command;
char **argv;
struct arg_values *opt_arg_values;
@@ -242,7 +241,6 @@ int config_files_changed(struct cmd_context *cmd);
int init_lvmcache_orphans(struct cmd_context *cmd);
int init_filters(struct cmd_context *cmd, unsigned load_persistent_cache);
int init_connections(struct cmd_context *cmd);
int init_run_by_dmeventd(struct cmd_context *cmd);
/*
* A config context is a very light weight cmd struct that

View File

@@ -389,7 +389,7 @@ int override_config_tree_from_string(struct cmd_context *cmd,
!config_force_check(cmd, CONFIG_STRING, cft_new)) {
log_error("Ignoring invalid configuration string.");
dm_config_destroy(cft_new);
return 0;
return_0;
}
if (!(cs = dm_pool_zalloc(cft_new->mem, sizeof(struct config_source)))) {

View File

@@ -1221,15 +1221,14 @@ cfg_array(activation_read_only_volume_list_CFG, "read_only_volume_list", activat
"read_only_volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ]\n"
"#\n")
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
"This has been replaced by the activation/raid_region_size setting.\n",
"Size in KiB of each raid or mirror synchronization region.\n")
"Size in KiB of each copy operation when mirroring.\n")
cfg(activation_raid_region_size_CFG, "raid_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(2, 2, 99), NULL, 0, NULL,
"Size in KiB of each raid or mirror synchronization region.\n"
"The clean/dirty state of data is tracked for each region.\n"
"The value is rounded down to a power of two if necessary, and\n"
"is ignored if it is not a multiple of the machine memory page size.\n")
"For raid or mirror segment types, this is the amount of data that is\n"
"copied at once when initializing, or moved at once by pvmove.\n")
cfg(activation_error_when_full_CFG, "error_when_full", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ERROR_WHEN_FULL, vsn(2, 2, 115), NULL, 0, NULL,
"Return errors if a thin pool runs out of space.\n"
@@ -1860,14 +1859,6 @@ cfg(dmeventd_thin_library_CFG, "thin_library", dmeventd_CFG_SECTION, 0, CFG_TYPE
"and emits a warning through syslog when the usage exceeds 80%. The\n"
"warning is repeated when 85%, 90% and 95% of the pool is filled.\n")
cfg(dmeventd_thin_command_CFG, "thin_command", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_THIN_COMMAND, vsn(2, 2, 169), NULL, 0, NULL,
"The plugin runs command with each 5% increment when thin-pool data volume\n"
"or metadata volume gets above 50%.\n"
"Command which starts with 'lvm ' prefix is internal lvm command.\n"
"You can write your own handler to customise behaviour in more details.\n"
"User handler is specified with the full path starting with '/'.\n")
/* TODO: systemd service handler */
cfg(dmeventd_executable_CFG, "executable", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_PATH, vsn(2, 2, 73), "@DMEVENTD_PATH@", 0, NULL,
"The full path to the dmeventd binary.\n")

View File

@@ -81,7 +81,6 @@
#define DEFAULT_DMEVENTD_MIRROR_LIB "libdevmapper-event-lvm2mirror.so"
#define DEFAULT_DMEVENTD_SNAPSHOT_LIB "libdevmapper-event-lvm2snapshot.so"
#define DEFAULT_DMEVENTD_THIN_LIB "libdevmapper-event-lvm2thin.so"
#define DEFAULT_DMEVENTD_THIN_COMMAND "lvm lvextend --use-policies"
#define DEFAULT_DMEVENTD_MONITOR 1
#define DEFAULT_BACKGROUND_POLLING 1
@@ -177,7 +176,7 @@
#define DEFAULT_INDENT 1
#define DEFAULT_ABORT_ON_INTERNAL_ERRORS 0
#define DEFAULT_DETECT_INTERNAL_VG_CACHE_CORRUPTION 0
#define DEFAULT_UNITS "r"
#define DEFAULT_UNITS "h"
#define DEFAULT_SUFFIX 1
#define DEFAULT_HOSTTAGS 0

View File

@@ -125,10 +125,6 @@ struct dev_types *create_dev_types(const char *proc_dir,
if (!strncmp("emcpower", line + i, 8) && isspace(*(line + i + 8)))
dt->emcpower_major = line_maj;
/* Look for Veritas Dynamic Multipathing */
if (!strncmp("VxDMP", line + i, 5) && isspace(*(line + i + 5)))
dt->vxdmp_major = line_maj;
if (!strncmp("loop", line + i, 4) && isspace(*(line + i + 4)))
dt->loop_major = line_maj;
@@ -222,9 +218,6 @@ int dev_subsystem_part_major(struct dev_types *dt, struct device *dev)
if (MAJOR(dev->dev) == dt->power2_major)
return 1;
if (MAJOR(dev->dev) == dt->vxdmp_major)
return 1;
if ((MAJOR(dev->dev) == dt->blkext_major) &&
dev_get_primary_dev(dt, dev, &primary_dev) &&
(MAJOR(primary_dev) == dt->md_major))
@@ -253,9 +246,6 @@ const char *dev_subsystem_name(struct dev_types *dt, struct device *dev)
if (MAJOR(dev->dev) == dt->power2_major)
return "POWER2";
if (MAJOR(dev->dev) == dt->vxdmp_major)
return "VXDMP";
if (MAJOR(dev->dev) == dt->blkext_major)
return "BLKEXT";
@@ -1019,7 +1009,7 @@ int udev_dev_is_mpath_component(struct device *dev)
if (!udev_context) {
log_warn("WARNING: No udev context available to check if device %s is multipath component.", dev_name(dev));
return 0;
return_0;
}
while (1) {

View File

@@ -41,7 +41,6 @@ struct dev_types {
int drbd_major;
int device_mapper_major;
int emcpower_major;
int vxdmp_major;
int power2_major;
int dasd_major;
int loop_major;

View File

@@ -63,6 +63,5 @@ static const dev_known_type_t _dev_known_types[] = {
{"bcache", 1, "bcache block device cache"},
{"nvme", 64, "NVM Express"},
{"zvol", 16, "ZFS Zvols"},
{"VxDMP", 16, "Veritas Dynamic Multipathing"},
{"", 0, ""}
};

View File

@@ -398,7 +398,7 @@ int export_extents(struct disk_list *dl, uint32_t lv_num,
if (!(seg->segtype->flags & SEG_FORMAT1_SUPPORT)) {
log_error("Segment type %s in LV %s: "
"unsupported by format1",
lvseg_name(seg), lv->name);
seg->segtype->name, lv->name);
return 0;
}
if (seg_type(seg, s) != AREA_PV) {
@@ -510,7 +510,7 @@ int export_lvs(struct disk_list *dl, struct volume_group *vg,
goto_out;
dm_list_iterate_items(ll, &vg->lvs) {
if (lv_is_snapshot(ll->lv))
if (ll->lv->status & SNAPSHOT)
continue;
if (!(lvdl = dm_pool_alloc(dl->mem, sizeof(*lvdl))))

View File

@@ -56,7 +56,7 @@ static struct dm_hash_table *_create_lv_maps(struct dm_pool *mem,
}
dm_list_iterate_items(ll, &vg->lvs) {
if (lv_is_snapshot(ll->lv))
if (ll->lv->status & SNAPSHOT)
continue;
if (!(lvm = dm_pool_alloc(mem, sizeof(*lvm))))

View File

@@ -267,7 +267,7 @@ int import_pool_segments(struct dm_list *lvs, struct dm_pool *mem,
dm_list_iterate_items(lvl, lvs) {
lv = lvl->lv;
if (lv_is_snapshot(lv))
if (lv->status & SNAPSHOT)
continue;
for (i = 0; i < subpools; i++) {

View File

@@ -35,7 +35,6 @@ struct archive_params {
struct backup_params {
int enabled;
char *dir;
int suppress;
};
int archive_init(struct cmd_context *cmd, const char *dir,
@@ -236,8 +235,7 @@ static int _backup(struct volume_group *vg)
int backup_locally(struct volume_group *vg)
{
if (!vg->cmd->backup_params->enabled || !vg->cmd->backup_params->dir) {
log_warn_suppress(vg->cmd->backup_params->suppress++,
"WARNING: This metadata update is NOT backed up.");
log_warn("WARNING: This metadata update is NOT backed up");
return 1;
}

View File
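
The log_warn_suppress(...->suppress++, ...) form being dropped above warns only while the counter is still zero, so the message is not repeated for every metadata update. A simplified sketch of the idiom (the real macro presumably downgrades repeats to debug rather than dropping them):

#include <stdio.h>

static void log_warn_suppress(int suppress, const char *msg)
{
    if (!suppress)
        fprintf(stderr, "%s\n", msg);
}

int main(void)
{
    int suppress = 0;
    int i;

    for (i = 0; i < 3; i++)    /* warning is printed exactly once */
        log_warn_suppress(suppress++, "WARNING: This metadata update is NOT backed up.");
    return 0;
}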

@@ -623,7 +623,7 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
break;
case AREA_LV:
/* FIXME This helper code should be target-independent! Check for metadata LV property. */
if (!seg_is_raid(seg)) {
if (!(seg->status & RAID)) {
outf(f, "\"%s\", %u%s",
seg_lv(seg, s)->name,
seg_le(seg, s),

View File

@@ -59,7 +59,7 @@ int add_da(struct dm_pool *mem, struct dm_list *das,
void del_das(struct dm_list *das);
int add_ba(struct dm_pool *mem, struct dm_list *eas,
uint64_t start, uint64_t size);
void del_bas(struct dm_list *bas);
void del_bas(struct dm_list *eas);
int add_mda(const struct format_type *fmt, struct dm_pool *mem, struct dm_list *mdas,
struct device *dev, uint64_t start, uint64_t size, unsigned ignored);
void del_mdas(struct dm_list *mdas);

View File

@@ -512,7 +512,7 @@ static int _read_segments(struct logical_volume *lv, const struct dm_config_node
count++;
}
/* FIXME Remove this restriction */
if (lv_is_snapshot(lv) && count > 1) {
if ((lv->status & SNAPSHOT) && count > 1) {
log_error("Only one segment permitted for snapshot");
return 0;
}
@@ -732,7 +732,7 @@ static int _read_historical_lvnames(struct format_instance *fid __attribute__((u
if (!(hlvn = hlvn->child)) {
log_error("Empty removed logical volume section.");
goto bad;
goto_bad;
}
if (!_read_id(&glv->historical->lvid.id[1], hlvn, "id")) {

View File

@@ -2651,7 +2651,7 @@ int lockd_lv_uses_lock(struct logical_volume *lv)
if (lv_is_cow(lv))
return 0;
if (lv_is_snapshot(lv))
if (lv->status & SNAPSHOT)
return 0;
/* FIXME: lv_is_virtual_origin ? */

View File

@@ -322,11 +322,12 @@ int validate_lv_cache_create_origin(const struct logical_volume *origin_lv)
if (lv_is_cache_type(origin_lv) ||
lv_is_mirror_type(origin_lv) ||
lv_is_thin_volume(origin_lv) || lv_is_thin_pool_metadata(origin_lv) ||
lv_is_merging_origin(origin_lv) ||
lv_is_origin(origin_lv) || lv_is_merging_origin(origin_lv) ||
lv_is_cow(origin_lv) || lv_is_merging_cow(origin_lv) ||
lv_is_external_origin(origin_lv) ||
lv_is_virtual(origin_lv)) {
log_error("Cache is not supported with %s segment type of the original logical volume %s.",
lvseg_name(first_seg(origin_lv)), display_lvname(origin_lv));
first_seg(origin_lv)->segtype->name, display_lvname(origin_lv));
return 0;
}
@@ -382,7 +383,7 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
const struct logical_volume *lock_lv = lv_lock_holder(cache_lv);
struct lv_segment *cache_seg = first_seg(cache_lv);
struct lv_status_cache *status;
int cleaner_policy, writeback;
int cleaner_policy;
uint64_t dirty_blocks;
*is_clean = 0;
@@ -401,11 +402,14 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
cleaner_policy = !strcmp(status->cache->policy_name, "cleaner");
dirty_blocks = status->cache->dirty_blocks;
writeback = (status->cache->feature_flags & DM_CACHE_FEATURE_WRITEBACK);
/* No clear policy and writeback mode means dirty */
if (!cleaner_policy &&
(status->cache->feature_flags & DM_CACHE_FEATURE_WRITEBACK))
dirty_blocks++;
dm_pool_destroy(status->mem);
/* Only clear when policy is Clear or mode != writeback */
if (!dirty_blocks && (cleaner_policy || !writeback))
if (!dirty_blocks)
break;
log_print_unless_silent("Flushing " FMTu64 " blocks for cache %s.",
@@ -416,23 +420,11 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
continue;
}
if (!(cache_lv->status & LVM_WRITE)) {
log_warn("WARNING: Dirty blocks found on read-only cache volume %s.",
display_lvname(cache_lv));
/* TODO: can we actually clean something? */
}
/* Switch to cleaner policy to flush the cache */
cache_seg->cleaner_policy = 1;
/* Reload cache volume with "cleaner" policy */
/* Reaload kernel with "cleaner" policy */
if (!lv_update_and_reload_origin(cache_lv))
return_0;
if (!sync_local_dev_names(cache_lv->vg->cmd)) {
log_error("Failed to sync local devices when clearing cache volume %s.",
display_lvname(cache_lv));
return 0;
}
}
/*
@@ -442,12 +434,6 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
if (1) {
if (!lv_refresh_suspend_resume(lock_lv))
return_0;
if (!sync_local_dev_names(cache_lv->vg->cmd)) {
log_error("Failed to sync local devices after final clearing of cache %s.",
display_lvname(cache_lv));
return 0;
}
}
cache_seg->cleaner_policy = 0;
@@ -486,7 +472,7 @@ int lv_cache_remove(struct logical_volume *cache_lv)
}
/* Locally active volume is needed for writeback */
if (!lv_info(cache_lv->vg->cmd, cache_lv, 1, NULL, 0, 0)) {
if (!lv_is_active_locally(cache_lv)) {
/* Give up any remote locks */
if (!deactivate_lv(cache_lv->vg->cmd, cache_lv)) {
log_error("Cannot deactivate remotely active cache volume %s.",

View File

@@ -243,7 +243,10 @@ char *lvseg_kernel_discards_dup(struct dm_pool *mem, const struct lv_segment *se
{
char *ret = NULL;
struct lv_with_info_and_seg_status status = {
.seg_status.type = SEG_STATUS_NONE
.seg_status = {
.type = SEG_STATUS_NONE,
.seg = seg
},
};
if (!lv_is_thin_pool(seg->lv))
@@ -252,14 +255,12 @@ char *lvseg_kernel_discards_dup(struct dm_pool *mem, const struct lv_segment *se
if (!(status.seg_status.mem = dm_pool_create("reporter_pool", 1024)))
return_NULL;
if (!(status.info_ok = lv_info_with_seg_status(seg->lv->vg->cmd, seg, &status, 0, 0)))
if (!(status.info_ok = lv_info_with_seg_status(seg->lv->vg->cmd, seg->lv, seg, 1, &status, 0, 0)))
goto_bad;
if (!(ret = lvseg_kernel_discards_dup_with_info_and_seg_status(mem, &status)))
stack;
ret = lvseg_kernel_discards_dup_with_info_and_seg_status(mem, &status);
bad:
dm_pool_destroy(status.seg_status.mem);
return ret;
}
@@ -406,7 +407,7 @@ dm_percent_t lvseg_percent_with_info_and_seg_status(const struct lv_with_info_an
/* TODO: expose highest mapped sector */
p = DM_PERCENT_INVALID;
else {
seg = lvdm->seg_status.seg;
seg = first_seg(lvdm->lv);
/* Pool allocates whole chunk so round-up to nearest one */
csize = first_seg(seg->pool_lv)->chunk_size;
csize = ((seg->lv->size + csize - 1) / csize) * csize;
@@ -415,8 +416,8 @@ dm_percent_t lvseg_percent_with_info_and_seg_status(const struct lv_with_info_an
else {
log_warn("WARNING: Thin volume %s maps %s while the size is only %s.",
display_lvname(seg->lv),
display_size(seg->lv->vg->cmd, s->thin->mapped_sectors),
display_size(seg->lv->vg->cmd, csize));
display_size(lvdm->lv->vg->cmd, s->thin->mapped_sectors),
display_size(lvdm->lv->vg->cmd, csize));
/* Don't show nonsense numbers like i.e. 1000% full */
p = DM_PERCENT_100;
}
@@ -971,7 +972,7 @@ int lv_mirror_image_in_sync(const struct logical_volume *lv)
struct lv_segment *seg = first_seg(lv);
struct lv_segment *mirror_seg;
if (!lv_is_mirror_image(lv) || !seg ||
if (!(lv->status & MIRROR_IMAGE) || !seg ||
!(mirror_seg = find_mirror_seg(seg))) {
log_error(INTERNAL_ERROR "Cannot find mirror segment.");
return 0;
@@ -1183,8 +1184,7 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
if (lv_is_historical(lv)) {
repstr[4] = 'h';
repstr[5] = '-';
} else if (!activation() || !lvdm->info_ok ||
(lvdm->seg_status.type == SEG_STATUS_UNKNOWN)) {
} else if (!activation() || !lvdm->info_ok) {
repstr[4] = 'X'; /* Unknown */
repstr[5] = 'X'; /* Unknown */
} else if (lvdm->info.exists) {
@@ -1216,10 +1216,8 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
/* 'c' when cache/thin-pool is active with needs_check flag
* 'C' for suspend */
if ((lv_is_thin_pool(lv) &&
(lvdm->seg_status.type == SEG_STATUS_THIN_POOL) &&
lvdm->seg_status.thin_pool->needs_check) ||
(lv_is_cache(lv) &&
(lvdm->seg_status.type == SEG_STATUS_CACHE) &&
lvdm->seg_status.cache->needs_check))
repstr[4] = lvdm->info.suspended ? 'C' : 'c';
@@ -1261,10 +1259,6 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
repstr[7] = '-';
repstr[8] = '-';
/* TODO: also convert raid health
* lv_is_raid_type() is too wide
* NOTE: snapshot origin is 'mostly' showing its layered status
*/
if (lv_is_partial(lv))
repstr[8] = 'p';
else if (lv_is_raid_type(lv)) {
@@ -1278,23 +1272,31 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
repstr[8] = 'm'; /* RAID has 'm'ismatches */
} else if (lv->status & LV_WRITEMOSTLY)
repstr[8] = 'w'; /* sub-LV has 'w'ritemostly */
} else if (lvdm->seg_status.type == SEG_STATUS_CACHE) {
if (lvdm->seg_status.cache->fail)
} else if (lv_is_cache(lv) &&
(lvdm->seg_status.type != SEG_STATUS_NONE)) {
if (lvdm->seg_status.type == SEG_STATUS_UNKNOWN)
repstr[8] = 'X'; /* Unknown */
else if (lvdm->seg_status.cache->fail)
repstr[8] = 'F';
else if (lvdm->seg_status.cache->read_only)
repstr[8] = 'M';
} else if (lvdm->seg_status.type == SEG_STATUS_THIN_POOL) {
if (lvdm->seg_status.thin_pool->fail)
} else if (lv_is_thin_pool(lv) &&
(lvdm->seg_status.type != SEG_STATUS_NONE)) {
if (lvdm->seg_status.type == SEG_STATUS_UNKNOWN)
repstr[8] = 'X'; /* Unknown */
else if (lvdm->seg_status.thin_pool->fail)
repstr[8] = 'F';
else if (lvdm->seg_status.thin_pool->out_of_data_space)
repstr[8] = 'D';
else if (lvdm->seg_status.thin_pool->read_only)
repstr[8] = 'M';
} else if (lvdm->seg_status.type == SEG_STATUS_THIN) {
if (lvdm->seg_status.thin->fail)
} else if (lv_is_thin_volume(lv) &&
(lvdm->seg_status.type != SEG_STATUS_NONE)) {
if (lvdm->seg_status.type == SEG_STATUS_UNKNOWN)
repstr[8] = 'X'; /* Unknown */
else if (lvdm->seg_status.thin->fail)
repstr[8] = 'F';
} else if (lvdm->seg_status.type == SEG_STATUS_UNKNOWN)
repstr[8] = 'X'; /* Unknown */
}
if (lv->status & LV_ACTIVATION_SKIP)
repstr[9] = 'k';
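
Both variants above collapse a segment's kernel status into one health character of the lv_attr string: 'X' unknown, 'F' failed, 'D' out of data space, 'M' read-only metadata mode. A compact sketch of the mapping with a stand-in status struct:

#include <stdio.h>

struct pool_status { int known, fail, out_of_data_space, read_only; };

static char health_char(const struct pool_status *s)
{
    if (!s->known)
        return 'X';    /* Unknown */
    if (s->fail)
        return 'F';
    if (s->out_of_data_space)
        return 'D';
    if (s->read_only)
        return 'M';
    return '-';
}

int main(void)
{
    struct pool_status s = { 1, 0, 1, 0 };
    printf("%c\n", health_char(&s));    /* prints: D */
    return 0;
}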
@@ -1311,12 +1313,13 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
char *ret = NULL;
struct lv_with_info_and_seg_status status = {
.seg_status.type = SEG_STATUS_NONE,
.lv = lv
};
if (!(status.seg_status.mem = dm_pool_create("reporter_pool", 1024)))
return_0;
if (!(status.info_ok = lv_info_with_seg_status(lv->vg->cmd, first_seg(lv), &status, 1, 1)))
if (!(status.info_ok = lv_info_with_seg_status(lv->vg->cmd, lv, first_seg(lv), 1, &status, 1, 1)))
goto_bad;
ret = lv_attr_dup_with_info_and_seg_status(mem, &status);
@@ -1551,19 +1554,14 @@ const struct logical_volume *lv_lock_holder(const struct logical_volume *lv)
if (lv_is_cow(lv))
return lv_lock_holder(origin_from_cow(lv));
if (lv_is_thin_pool(lv) ||
lv_is_external_origin(lv)) {
/* FIXME: Ensure cluster keeps thin-pool active exlusively.
* External origin can be activated on more nodes (depends on type).
*/
if (!lv_is_active(lv))
/* Find any active LV from the pool or external origin */
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (lv_is_active(sl->seg->lv)) {
log_debug_activation("Thin volume %s is active.",
display_lvname(lv));
return sl->seg->lv;
}
if (lv_is_thin_pool(lv)) {
/* Find any active LV from the pool */
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (lv_is_active(sl->seg->lv)) {
log_debug_activation("Thin volume %s is active.",
display_lvname(lv));
return sl->seg->lv;
}
return lv;
}
@@ -1578,6 +1576,9 @@ const struct logical_volume *lv_lock_holder(const struct logical_volume *lv)
lv_is_thin_volume(sl->seg->lv) &&
first_seg(lv)->pool_lv == sl->seg->pool_lv)
continue; /* Skip thin snapshot */
if (lv_is_external_origin(lv) &&
lv_is_thin_volume(sl->seg->lv))
continue; /* Skip external origin */
if (lv_is_pending_delete(sl->seg->lv))
continue; /* Skip deleted LVs */
return lv_lock_holder(sl->seg->lv);

View File

@@ -712,7 +712,6 @@ static int _round_down_pow2(int r)
int get_default_region_size(struct cmd_context *cmd)
{
int pagesize = lvm_getpagesize();
int region_size = _get_default_region_size(cmd);
if (!is_power_of_2(region_size)) {
@@ -721,12 +720,6 @@ int get_default_region_size(struct cmd_context *cmd)
region_size / 2);
}
if (region_size % (pagesize >> SECTOR_SHIFT)) {
region_size = DEFAULT_RAID_REGION_SIZE * 2;
log_verbose("Using default region size %u kiB (multiple of page size).",
region_size / 2);
}
return region_size;
}
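
The _round_down_pow2() helper named above is not shown in this hunk. One plausible implementation of the idea (assumed, not the lvm2 source): keep clearing the lowest set bit until a single power of two remains.

#include <stdio.h>

static int round_down_pow2(int r)
{
    /* Clearing the lowest set bit repeatedly leaves the highest one:
     * the largest power of two <= r. */
    while (r & (r - 1))
        r &= r - 1;
    return r;
}

int main(void)
{
    printf("%d %d %d\n", round_down_pow2(1024),
           round_down_pow2(1000), round_down_pow2(513));
    return 0;    /* prints: 1024 512 512 */
}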
@@ -742,11 +735,11 @@ int add_seg_to_segs_using_this_lv(struct logical_volume *lv,
}
}
log_very_verbose("Adding %s:" FMTu32 " as an user of %s.",
display_lvname(seg->lv), seg->le, display_lvname(lv));
log_very_verbose("Adding %s:%" PRIu32 " as an user of %s",
seg->lv->name, seg->le, lv->name);
if (!(sl = dm_pool_zalloc(lv->vg->vgmem, sizeof(*sl)))) {
log_error("Failed to allocate segment list.");
log_error("Failed to allocate segment list");
return 0;
}
@@ -768,16 +761,16 @@ int remove_seg_from_segs_using_this_lv(struct logical_volume *lv,
if (sl->count > 1)
sl->count--;
else {
log_very_verbose("%s:" FMTu32 " is no longer a user of %s.",
display_lvname(seg->lv), seg->le,
display_lvname(lv));
log_very_verbose("%s:%" PRIu32 " is no longer a user "
"of %s", seg->lv->name, seg->le,
lv->name);
dm_list_del(&sl->list);
}
return 1;
}
log_error(INTERNAL_ERROR "Segment %s:" FMTu32 " is not a user of %s.",
display_lvname(seg->lv), seg->le, display_lvname(lv));
log_error(INTERNAL_ERROR "Segment %s:%u is not a user of %s.",
seg->lv->name, seg->le, lv->name);
return 0;
}
@@ -804,9 +797,8 @@ struct lv_segment *get_only_segment_using_this_lv(const struct logical_volume *l
if (sl->count != 1) {
log_error("%s is expected to have only one segment using it, "
"while %s:" FMTu32 " uses it %d times.",
display_lvname(lv), display_lvname(sl->seg->lv),
sl->seg->le, sl->count);
"while %s:%" PRIu32 " uses it %d times.",
display_lvname(lv), sl->seg->lv->name, sl->seg->le, sl->count);
return NULL;
}
@@ -897,9 +889,8 @@ static uint32_t _round_to_stripe_boundary(struct volume_group *vg, uint32_t exte
/* Round up extents to stripe divisible amount */
if ((size_rest = extents % stripes)) {
new_extents += extend ? stripes - size_rest : -size_rest;
log_print_unless_silent("Rounding size %s (%u extents) %s to stripe boundary size %s(%u extents).",
log_print_unless_silent("Rounding size %s (%d extents) up to stripe boundary size %s (%d extents).",
display_size(vg->cmd, (uint64_t) extents * vg->extent_size), extents,
new_extents < extents ? "down" : "up",
display_size(vg->cmd, (uint64_t) new_extents * vg->extent_size), new_extents);
}
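
The message change above sits in _round_to_stripe_boundary(), which nudges an extent count to a multiple of the stripe count: up when extending, down when reducing. A stand-alone sketch of that rounding:

#include <stdio.h>
#include <stdint.h>

static uint32_t round_to_stripe_boundary(uint32_t extents, uint32_t stripes, int extend)
{
    uint32_t rest = extents % stripes;

    if (rest)    /* round up to the next multiple, or down to the previous one */
        extents += extend ? stripes - rest : -rest;
    return extents;
}

int main(void)
{
    printf("%u %u\n", round_to_stripe_boundary(100, 3, 1),
           round_to_stripe_boundary(100, 3, 0));
    return 0;    /* prints: 102 99 */
}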
@@ -974,37 +965,6 @@ struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
return seg;
}
/*
* Temporary helper to return number of data copies for
* RAID segment @seg until seg->data_copies got added
*/
static uint32_t _raid_data_copies(struct lv_segment *seg)
{
/*
* FIXME: needs to change once more than 2 are supported.
* I.e. use seg->data_copies then
*/
if (seg_is_raid10(seg))
return 2;
else if (seg_is_raid1(seg))
return seg->area_count;
return seg->segtype->parity_devs + 1;
}
/* Data image count for RAID segment @seg */
static uint32_t _raid_stripes_count(struct lv_segment *seg)
{
/*
* FIXME: raid10 needs to change once more than
* 2 data_copies and odd # of legs supported.
*/
if (seg_is_raid10(seg))
return seg->area_count / _raid_data_copies(seg);
return seg->area_count - seg->segtype->parity_devs;
}
static int _release_and_discard_lv_segment_area(struct lv_segment *seg, uint32_t s,
uint32_t area_reduction, int with_discard)
{
@@ -1045,53 +1005,43 @@ static int _release_and_discard_lv_segment_area(struct lv_segment *seg, uint32_t
}
if (lv_is_raid_image(lv)) {
/* Calculate the amount of extents to reduce per rmeta/rimage LV */
uint32_t rimage_extents;
/* FIXME: avoid extra seg_is_*() conditionals */
area_reduction =_round_to_stripe_boundary(lv->vg, area_reduction,
(seg_is_raid1(seg) || seg_is_any_raid0(seg)) ? 0 : _raid_stripes_count(seg), 0);
rimage_extents = raid_rimage_extents(seg->segtype, area_reduction, seg_is_any_raid0(seg) ? 0 : _raid_stripes_count(seg),
seg_is_raid10(seg) ? 1 :_raid_data_copies(seg));
if (!rimage_extents)
return 0;
if (seg->meta_areas) {
uint32_t meta_area_reduction;
struct logical_volume *mlv;
struct volume_group *vg = lv->vg;
if (seg_metatype(seg, s) != AREA_LV ||
!(mlv = seg_metalv(seg, s)))
/*
* FIXME: Use lv_reduce not lv_remove
* We use lv_remove for now, because I haven't figured out
* why lv_reduce won't remove the LV.
lv_reduce(lv, area_reduction);
*/
if (area_reduction != seg->area_len) {
log_error("Unable to reduce RAID LV - operation not implemented.");
return_0;
} else {
if (!lv_remove(lv)) {
log_error("Failed to remove RAID image %s",
lv->name);
return 0;
meta_area_reduction = raid_rmeta_extents_delta(vg->cmd, lv->le_count, lv->le_count - rimage_extents,
seg->region_size, vg->extent_size);
/* Limit for raid0_meta not having region size set */
if (meta_area_reduction > mlv->le_count ||
!(lv->le_count - rimage_extents))
meta_area_reduction = mlv->le_count;
if (meta_area_reduction &&
!lv_reduce(mlv, meta_area_reduction))
return_0; /* FIXME: any upper level reporting */
}
}
if (!lv_reduce(lv, rimage_extents))
return_0; /* FIXME: any upper level reporting */
/* Remove metadata area if image has been removed */
if (seg->meta_areas && seg_metalv(seg, s) && (area_reduction == seg->area_len)) {
if (!lv_reduce(seg_metalv(seg, s),
seg_metalv(seg, s)->le_count)) {
log_error("Failed to remove RAID meta-device %s",
seg_metalv(seg, s)->name);
return 0;
}
}
return 1;
}
if (area_reduction == seg->area_len) {
log_very_verbose("Remove %s:" FMTu32 "[" FMTu32 "] from "
"the top of LV %s:" FMTu32 ".",
display_lvname(seg->lv), seg->le, s,
display_lvname(lv), seg_le(seg, s));
log_very_verbose("Remove %s:%" PRIu32 "[%" PRIu32 "] from "
"the top of LV %s:%" PRIu32,
seg->lv->name, seg->le, s,
lv->name, seg_le(seg, s));
if (!remove_seg_from_segs_using_this_lv(lv, seg))
return_0;
seg_lv(seg, s) = NULL;
seg_le(seg, s) = 0;
seg_type(seg, s) = AREA_UNASSIGNED;
@@ -1181,16 +1131,14 @@ int set_lv_segment_area_lv(struct lv_segment *seg, uint32_t area_num,
struct logical_volume *lv, uint32_t le,
uint64_t status)
{
log_very_verbose("Stack %s:" FMTu32 "[" FMTu32 "] on LV %s:" FMTu32 ".",
display_lvname(seg->lv), seg->le, area_num,
display_lvname(lv), le);
log_very_verbose("Stack %s:%" PRIu32 "[%" PRIu32 "] on LV %s:%" PRIu32,
seg->lv->name, seg->le, area_num, lv->name, le);
lv->status |= status;
if (lv_is_raid_metadata(lv)) {
if (status & RAID_META) {
seg->meta_areas[area_num].type = AREA_LV;
seg_metalv(seg, area_num) = lv;
if (le) {
log_error(INTERNAL_ERROR "Meta le != 0.");
log_error(INTERNAL_ERROR "Meta le != 0");
return 0;
}
seg_metale(seg, area_num) = 0;
@@ -1199,6 +1147,7 @@ int set_lv_segment_area_lv(struct lv_segment *seg, uint32_t area_num,
seg_lv(seg, area_num) = lv;
seg_le(seg, area_num) = le;
}
lv->status |= status;
if (!add_seg_to_segs_using_this_lv(lv, seg))
return_0;
@@ -1310,7 +1259,6 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
uint32_t count = extents;
uint32_t reduction;
struct logical_volume *pool_lv;
struct logical_volume *external_lv = NULL;
if (lv_is_merging_origin(lv)) {
log_debug_metadata("Dropping snapshot merge of %s to removed origin %s.",
@@ -1322,9 +1270,6 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
if (!count)
break;
if (seg->external_lv)
external_lv = seg->external_lv;
if (seg->len <= count) {
if (seg->merge_lv) {
log_debug_metadata("Dropping snapshot merge of removed %s to origin %s.",
@@ -1391,12 +1336,6 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
!lv->vg->fid->fmt->ops->lv_setup(lv->vg->fid, lv))
return_0;
/* Removal of last user enforces refresh */
if (external_lv && !lv_is_external_origin(external_lv) &&
lv_is_active(external_lv) &&
!lv_update_and_reload(external_lv))
return_0;
return 1;
}
@@ -1438,7 +1377,7 @@ int replace_lv_with_error_segment(struct logical_volume *lv)
return 1;
}
static int _lv_refresh_suspend_resume(const struct logical_volume *lv)
int lv_refresh_suspend_resume(const struct logical_volume *lv)
{
struct cmd_context *cmd = lv->vg->cmd;
int r = 1;
@@ -1463,35 +1402,11 @@ static int _lv_refresh_suspend_resume(const struct logical_volume *lv)
return r;
}
int lv_refresh_suspend_resume(const struct logical_volume *lv)
{
if (!_lv_refresh_suspend_resume(lv))
return 0;
/*
* Remove any transiently activated error
* devices which aren't used any more.
*/
if (lv_is_raid(lv) && !lv_deactivate_any_missing_subdevs(lv)) {
log_error("Failed to remove temporary SubLVs from %s", display_lvname(lv));
return 0;
}
return 1;
}
/*
* Remove given number of extents from LV.
*/
int lv_reduce(struct logical_volume *lv, uint32_t extents)
{
struct lv_segment *seg = first_seg(lv);
/* Ensure stipe boundary extents on RAID LVs */
if (lv_is_raid(lv) && extents != lv->le_count)
extents =_round_to_stripe_boundary(lv->vg, extents,
seg_is_raid1(seg) ? 0 : _raid_stripes_count(seg), 0);
return _lv_reduce(lv, extents, 1);
}
@@ -3355,24 +3270,19 @@ static struct alloc_handle *_alloc_init(struct cmd_context *cmd,
if (segtype_is_raid(segtype)) {
if (metadata_area_count) {
uint32_t cur_rimage_extents, new_rimage_extents;
if (metadata_area_count != area_count)
log_error(INTERNAL_ERROR
"Bad metadata_area_count");
ah->metadata_area_count = area_count;
ah->alloc_and_split_meta = 1;
ah->log_len = RAID_METADATA_AREA_LEN;
/* Calculate log_len (i.e. length of each rmeta device) for RAID */
cur_rimage_extents = raid_rimage_extents(segtype, existing_extents, stripes, mirrors);
new_rimage_extents = raid_rimage_extents(segtype, existing_extents + new_extents, stripes, mirrors),
ah->log_len = raid_rmeta_extents_delta(cmd, cur_rimage_extents, new_rimage_extents,
region_size, extent_size);
ah->metadata_area_count = metadata_area_count;
ah->alloc_and_split_meta = !!ah->log_len;
/*
* We need 'log_len' extents for each
* RAID device's metadata_area
*/
total_extents += ah->log_len * (segtype_is_raid1(segtype) ? 1 : ah->area_multiple);
total_extents += (ah->log_len * ah->area_multiple);
} else {
ah->log_area_count = 0;
ah->log_len = 0;
@@ -3525,7 +3435,7 @@ int lv_add_segment(struct alloc_handle *ah,
region_size))
return_0;
if (segtype_can_split(segtype) && !lv_merge_segments(lv)) {
if ((segtype->flags & SEG_CAN_SPLIT) && !lv_merge_segments(lv)) {
log_error("Couldn't merge segments after extending "
"logical volume.");
return 0;
@@ -3568,7 +3478,7 @@ static struct lv_segment *_convert_seg_to_mirror(struct lv_segment *seg,
seg->area_count, seg->area_len,
seg->chunk_size, region_size,
seg->extents_copied, NULL))) {
log_error("Couldn't allocate converted LV segment.");
log_error("Couldn't allocate converted LV segment");
return NULL;
}
@@ -3601,14 +3511,13 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
if (!lv_is_pvmove(lv)) {
log_error(INTERNAL_ERROR
"Non-pvmove LV, %s, passed as argument.",
display_lvname(lv));
"Non-pvmove LV, %s, passed as argument", lv->name);
return 0;
}
if (seg_type(first_seg(lv), 0) != AREA_PV) {
log_error(INTERNAL_ERROR
"Bad segment type for first segment area.");
"Bad segment type for first segment area");
return 0;
}
@@ -3619,8 +3528,8 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
*/
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
log_error("Failed to find segment for %s extent %"
PRIu32, lv->name, current_le);
return 0;
}
@@ -3628,8 +3537,7 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
if (aa[0].len < seg->area_len) {
if (!lv_split_segment(lv, seg->le + aa[0].len)) {
log_error("Failed to split segment at %s "
"extent " FMTu32 ".",
display_lvname(lv), le);
"extent %" PRIu32, lv->name, le);
return 0;
}
}
@@ -3639,8 +3547,8 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
current_le = le;
if (!insert_layer_for_lv(lv->vg->cmd, lv, PVMOVE, "_mimage_0")) {
log_error("Failed to build pvmove LV-type mirror %s.",
display_lvname(lv));
log_error("Failed to build pvmove LV-type mirror, %s",
lv->name);
return 0;
}
orig_lv = seg_lv(first_seg(lv), 0);
@@ -3661,8 +3569,8 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(orig_lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
log_error("Failed to find segment for %s extent %"
PRIu32, lv->name, current_le);
return 0;
}
@@ -3708,16 +3616,16 @@ int lv_add_mirror_areas(struct alloc_handle *ah,
dm_list_iterate_items(aa, &ah->alloced_areas[0]) {
if (!(seg = find_seg_by_le(lv, current_le))) {
log_error("Failed to find segment for %s extent " FMTu32 ".",
display_lvname(lv), current_le);
log_error("Failed to find segment for %s extent %"
PRIu32, lv->name, current_le);
return 0;
}
/* Allocator assures aa[0].len <= seg->area_len */
if (aa[0].len < seg->area_len) {
if (!lv_split_segment(lv, seg->le + aa[0].len)) {
log_error("Failed to split segment at %s extent " FMTu32 ".",
display_lvname(lv), le);
log_error("Failed to split segment at %s "
"extent %" PRIu32, lv->name, le);
return 0;
}
}
@@ -3758,13 +3666,15 @@ int lv_add_mirror_lvs(struct logical_volume *lv,
uint32_t num_extra_areas,
uint64_t status, uint32_t region_size)
{
uint32_t m;
struct lv_segment *seg;
uint32_t old_area_count, new_area_count;
uint32_t m;
struct segment_type *mirror_segtype;
struct lv_segment *seg = first_seg(lv);
seg = first_seg(lv);
if (dm_list_size(&lv->segments) != 1 || seg_type(seg, 0) != AREA_LV) {
log_error(INTERNAL_ERROR "Mirror layer must be inserted before adding mirrors.");
log_error("Mirror layer must be inserted before adding mirrors");
return 0;
}
@@ -3774,7 +3684,7 @@ int lv_add_mirror_lvs(struct logical_volume *lv,
return_0;
if (region_size && region_size != seg->region_size) {
log_error("Conflicting region_size.");
log_error("Conflicting region_size");
return 0;
}
@@ -3783,7 +3693,7 @@ int lv_add_mirror_lvs(struct logical_volume *lv,
if (!_lv_segment_add_areas(lv, seg, new_area_count)) {
log_error("Failed to allocate widened LV segment for %s.",
display_lvname(lv));
lv->name);
return 0;
}
@@ -4069,6 +3979,19 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
if (!_setup_lv_size(lv, lv->le_count + extents))
return_0;
/*
* The MD bitmap is limited to being able to track 2^21 regions.
* The region_size must be adjusted to meet that criteria
* unless raid0/raid0_meta, which doesn't have a bitmap.
*/
if (seg_is_raid(seg) && !seg_is_any_raid0(seg))
while (seg->region_size < (lv->size / (1 << 21))) {
seg->region_size *= 2;
log_very_verbose("Adjusting RAID region_size from %uS to %uS"
" to support large LV size",
seg->region_size/2, seg->region_size);
}
return 1;
}
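
The loop added above enforces the MD bitmap constraint spelled out in its comment: with at most 2^21 trackable regions, region_size must grow until size / region_size fits. A runnable sketch of the same arithmetic (the 16 TiB example size is assumed, expressed in 512-byte sectors):

#include <stdint.h>
#include <stdio.h>

static uint32_t adjust_region_size(uint64_t lv_size, uint32_t region_size)
{
    /* Double until the region count drops to 2^21 or fewer. */
    while (region_size < (lv_size / (1 << 21)))
        region_size *= 2;
    return region_size;
}

int main(void)
{
    uint64_t lv_size = 16ULL << 31;    /* 16 TiB in 512-byte sectors */
    printf("%u sectors\n", adjust_region_size(lv_size, 2048));
    return 0;    /* prints: 16384 sectors (8 MiB regions) */
}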
@@ -4095,7 +4018,6 @@ int lv_extend(struct logical_volume *lv,
uint32_t sub_lv_count;
uint32_t old_extents;
uint32_t new_extents; /* Total logical size after extension. */
uint64_t raid_size;
log_very_verbose("Adding segment of type %s to LV %s.", segtype->name, lv->name);
@@ -4117,22 +4039,6 @@ int lv_extend(struct logical_volume *lv,
}
/* FIXME log_count should be 1 for mirrors */
if (segtype_is_raid(segtype) && !segtype_is_any_raid0(segtype)) {
raid_size = ((uint64_t) lv->le_count + extents) * lv->vg->extent_size;
/*
* The MD bitmap is limited to being able to track 2^21 regions.
* The region_size must be adjusted to meet that criteria
* unless raid0/raid0_meta, which doesn't have a bitmap.
*/
region_size = raid_ensure_min_region_size(lv, raid_size, region_size);
if (first_seg(lv))
first_seg(lv)->region_size = region_size;
}
if (!(ah = allocate_extents(lv->vg, lv, segtype, stripes, mirrors,
log_count, region_size, extents,
allocatable_pvs, alloc, approx_alloc, NULL)))
@@ -4658,7 +4564,7 @@ static int _lvresize_adjust_policy(const struct logical_volume *lv,
if (!policy_amount) {
log_error("Can't extend %s with %s autoextend percent set to 0%%.",
display_lvname(lv), lvseg_name(first_seg(lv)));
display_lvname(lv), first_seg(lv)->segtype->name);
return 0;
}
@@ -4713,11 +4619,6 @@ static uint32_t lvseg_get_stripes(struct lv_segment *seg, uint32_t *stripesize)
return seg->area_count;
}
if (seg_is_raid(seg)) {
*stripesize = seg->stripe_size;
return _raid_stripes_count(seg);
}
*stripesize = 0;
return 0;
}
@@ -4739,7 +4640,8 @@ static int _lvresize_check(struct logical_volume *lv,
if (lv_is_raid_image(lv) || lv_is_raid_metadata(lv)) {
log_error("Cannot resize a RAID %s directly",
lv_is_raid_image(lv) ? "image" : "metadata area");
(lv->status & RAID_IMAGE) ? "image" :
"metadata area");
return 0;
}
@@ -4754,11 +4656,6 @@ static int _lvresize_check(struct logical_volume *lv,
return 0;
}
if (lv_is_cache_type(lv)) {
log_error("Unable to resize logical volumes of cache type.");
return 0;
}
if (!lv_is_visible(lv) &&
!lv_is_thin_pool_metadata(lv) &&
!lv_is_lockd_sanlock_lv(lv)) {
@@ -5383,7 +5280,6 @@ int lv_resize(struct logical_volume *lv,
struct logical_volume *lock_lv = (struct logical_volume*) lv_lock_holder(lv);
struct logical_volume *aux_lv = NULL; /* Note: aux_lv never resizes fs */
struct lvresize_params aux_lp;
struct lv_segment *seg = first_seg(lv);
int activated = 0;
int ret = 0;
int status;
@@ -5425,11 +5321,6 @@ int lv_resize(struct logical_volume *lv,
}
}
/* Ensure stripe boundary extents! */
if (!lp->percent && lv_is_raid(lv))
lp->extents =_round_to_stripe_boundary(lv->vg, lp->extents,
seg_is_raid1(seg) ? 0 : _raid_stripes_count(seg),
lp->resize == LV_REDUCE ? 0 : 1);
if (aux_lv && !_lvresize_prepare(&aux_lv, &aux_lp, pvh))
return_0;
@@ -6229,21 +6120,12 @@ int lv_remove_with_dependencies(struct cmd_context *cmd, struct logical_volume *
/* Remove snapshot LVs first */
if ((force == PROMPT) &&
/* Active snapshot already needs to confirm each active LV */
(yes_no_prompt("Do you really want to remove%s "
"%sorigin logical volume %s with %u snapshot(s)? [y/n]: ",
lv_is_active(lv) ? " active" : "",
vg_is_clustered(lv->vg) ? "clustered " : "",
display_lvname(lv),
lv->origin_count) == 'n'))
!lv_is_active(lv) &&
yes_no_prompt("Removing origin %s will also remove %u "
"snapshots(s). Proceed? [y/n]: ",
lv->name, lv->origin_count) == 'n')
goto no_remove;
if (!deactivate_lv(cmd, lv)) {
stack;
goto no_remove;
}
log_verbose("Removing origin logical volume %s with %u snapshots(s).",
display_lvname(lv), lv->origin_count);
dm_list_iterate_safe(snh, snht, &lv->snapshot_segs)
if (!lv_remove_with_dependencies(cmd, dm_list_struct_base(snh, struct lv_segment,
origin_list)->cow,
@@ -6339,12 +6221,12 @@ static int _lv_update_and_reload(struct logical_volume *lv, int origin_only)
int lv_update_and_reload(struct logical_volume *lv)
{
return _lv_update_and_reload(lv, 0);
}
int lv_update_and_reload_origin(struct logical_volume *lv)
{
return _lv_update_and_reload(lv, 1);
}
/*
@@ -6615,43 +6497,12 @@ int remove_layer_from_lv(struct logical_volume *lv,
* Before removal, the layer should be cleaned up,
* i.e. additional segments and areas should have been removed.
*/
/* FIXME:
* These are all INTERNAL_ERROR, but ATM there is
* some internal API problem and this code is wrongly
* executed with certain mirror manipulations.
* So we need to fix mirror code first, then switch...
*/
if (dm_list_size(&parent_lv->segments) != 1) {
log_error("Invalid %d segments in %s, expected only 1.",
dm_list_size(&parent_lv->segments),
display_lvname(parent_lv));
return 0;
}
if (parent_seg->area_count != 1) {
log_error("Invalid %d area count(s) in %s, expected only 1.",
parent_seg->area_count, display_lvname(parent_lv));
return 0;
}
if (seg_type(parent_seg, 0) != AREA_LV) {
log_error("Invalid seg_type %d in %s, expected LV.",
seg_type(parent_seg, 0), display_lvname(parent_lv));
return 0;
}
if (layer_lv != seg_lv(parent_seg, 0)) {
log_error("Layer doesn't match segment in %s.",
display_lvname(parent_lv));
return 0;
}
if (parent_lv->le_count != layer_lv->le_count) {
log_error("Inconsistent extent count (%u != %u) of layer %s.",
parent_lv->le_count, layer_lv->le_count,
display_lvname(parent_lv));
return 0;
}
if (dm_list_size(&parent_lv->segments) != 1 ||
parent_seg->area_count != 1 ||
seg_type(parent_seg, 0) != AREA_LV ||
layer_lv != seg_lv(parent_seg, 0) ||
parent_lv->le_count != layer_lv->le_count)
return_0;
if (!lv_empty(parent_lv))
return_0;
@@ -6721,7 +6572,7 @@ struct logical_volume *insert_layer_for_lv(struct cmd_context *cmd,
if (lv_is_active_exclusive_locally(lv_where))
exclusive = 1;
if (lv_is_active(lv_where) && strstr(name, MIRROR_SYNC_LAYER)) {
if (lv_is_active(lv_where) && strstr(name, "_mimagetmp")) {
log_very_verbose("Creating transient LV %s for mirror conversion in VG %s.", name, lv_where->vg->name);
segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_ERROR);
@@ -7166,7 +7017,7 @@ static int _should_wipe_lv(struct lvcreate_params *lp,
struct logical_volume *lv, int warn)
{
/* Unzeroable segment */
if (seg_cannot_be_zeroed(first_seg(lv)))
if (first_seg(lv)->segtype->flags & SEG_CANNOT_BE_ZEROED)
return 0;
/* Thin snapshot need not to be zeroed */
@@ -7530,7 +7381,7 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
return NULL;
}
if (lv_is_cache_type(origin_lv) && !lv_is_cache(origin_lv)) {
if (lv_is_cache_type(origin_lv)) {
log_error("Snapshots of cache type volume %s "
"is not supported.", display_lvname(origin_lv));
return NULL;

View File

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -71,13 +71,6 @@ int lv_merge_segments(struct logical_volume *lv)
if (error_count++ > ERROR_MAX) \
goto out
#define seg_error(msg) { \
log_error("LV %s, segment %u invalid: %s for %s segment.", \
seg->lv->name, seg_count, (msg), lvseg_name(seg)); \
if ((*error_count)++ > ERROR_MAX) \
return; \
}
/*
* RAID segment property checks.
*
@@ -148,15 +141,7 @@ static void _check_raid1_seg(struct lv_segment *seg, int *error_count)
static void _check_raid45610_seg(struct lv_segment *seg, int *error_count)
{
/* Checks applying to any raid4/5/6/10 */
/*
* Allow raid4 + raid5_n to get activated w/o metadata.
*
* This is mandatory during conversion between them,
* because switching the dedicated parity SubLVs
* beginning <-> end changes the roles of all SubLVs,
* which the kernel would reject.
*/
if (!(seg_is_raid4(seg) || seg_is_raid5_n(seg)) && !seg->meta_areas)
if (!seg->meta_areas)
raid_seg_error("no meta areas");
if (!seg->stripe_size)
raid_seg_error("zero stripe size");
@@ -203,10 +188,44 @@ static void _check_non_raid_seg_members(struct lv_segment *seg, int *error_count
{
if (seg->origin) /* snap and thin */
raid_seg_error("non-zero origin LV");
if (seg->indirect_origin) /* thin */
raid_seg_error("non-zero indirect_origin LV");
if (seg->merge_lv) /* thin */
raid_seg_error("non-zero merge LV");
if (seg->cow) /* snap */
raid_seg_error("non-zero cow LV");
if (!dm_list_empty(&seg->origin_list)) /* snap */
raid_seg_error("non-zero origin_list");
if (seg->log_lv)
raid_seg_error("non-zero log LV");
if (seg->segtype_private)
raid_seg_error("non-zero segtype_private");
/* thin members */
if (seg->metadata_lv)
raid_seg_error("non-zero metadata LV");
if (seg->transaction_id)
raid_seg_error("non-zero transaction_id");
if (seg->zero_new_blocks)
raid_seg_error("non-zero zero_new_blocks");
if (seg->discards)
raid_seg_error("non-zero discards");
if (!dm_list_empty(&seg->thin_messages))
raid_seg_error("non-zero thin_messages list");
if (seg->external_lv)
raid_seg_error("non-zero external LV");
if (seg->pool_lv)
raid_seg_error("non-zero pool LV");
if (seg->device_id)
raid_seg_error("non-zero device_id");
/* cache members */
if (seg->cache_mode)
raid_seg_error("non-zero cache_mode");
if (seg->policy_name)
raid_seg_error("non-zero policy_name");
if (seg->policy_settings)
raid_seg_error("non-zero policy_settings");
if (seg->cleaner_policy)
raid_seg_error("non-zero cleaner_policy");
/* replicator members (deprecated) */
if (seg->replicator)
raid_seg_error("non-zero replicator");
@@ -230,6 +249,9 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
uint32_t area_len, s;
/* General checks applying to all RAIDs */
if (!seg_is_raid(seg))
raid_seg_error("erroneous RAID check");
if (!seg->area_count)
raid_seg_error("zero area count");
@@ -254,6 +276,9 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
return;
}
if (seg->chunk_size)
raid_seg_error_val("non-zero chunk_size", seg->chunk_size);
/* FIXME: should we check any non-RAID segment struct members at all? */
_check_non_raid_seg_members(seg, error_count);
@@ -304,169 +329,6 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
}
/* END: RAID segment property checks. */
static void _check_lv_segment(struct logical_volume *lv, struct lv_segment *seg,
unsigned seg_count, int *error_count)
{
struct lv_segment *seg2;
if (lv_is_mirror_image(lv) &&
(!(seg2 = find_mirror_seg(seg)) || !seg_is_mirrored(seg2)))
seg_error("mirror image is not mirrored");
if (seg_is_cache(seg)) {
if (!lv_is_cache(lv))
seg_error("is not flagged as cache LV");
if (!seg->pool_lv) {
seg_error("is missing cache pool LV");
} else if (!lv_is_cache_pool(seg->pool_lv))
seg_error("is not referencing cache pool LV");
} else { /* !cache */
if (seg->cleaner_policy)
seg_error("sets cleaner_policy");
}
if (seg_is_cache_pool(seg)) {
if (!dm_list_empty(&seg->lv->segs_using_this_lv)) {
switch (seg->cache_mode) {
case CACHE_MODE_WRITETHROUGH:
case CACHE_MODE_WRITEBACK:
case CACHE_MODE_PASSTHROUGH:
break;
default:
seg_error("has invalid cache's feature flag")
}
if (!seg->policy_name)
seg_error("is missing cache policy name");
}
} else { /* !cache_pool */
if (seg->cache_mode)
seg_error("sets cache mode");
if (seg->policy_name)
seg_error("sets policy name");
if (seg->policy_settings)
seg_error("sets policy settings");
}
if (!seg_can_error_when_full(seg) && lv_is_error_when_full(lv))
seg_error("does not support flag ERROR_WHEN_FULL.");
if (seg_is_mirrored(seg)) {
/* Check mirror log - which is attached to the mirrored seg */
if (seg->log_lv) {
if (!lv_is_mirror_log(seg->log_lv))
seg_error("log LV is not a mirror log");
if (!(seg2 = first_seg(seg->log_lv)) || (find_mirror_seg(seg2) != seg))
seg_error("log LV does not point back to mirror segment");
}
} else { /* !mirrored */
if (seg->log_lv) {
if (lv_is_raid_image(lv))
seg_error("log LV is not a mirror log or a RAID image");
}
}
if (seg_is_raid(seg))
_check_raid_seg(seg, error_count);
if (seg_is_pool(seg)) {
if ((seg->area_count != 1) || (seg_type(seg, 0) != AREA_LV)) {
seg_error("is missing a pool data LV");
} else if (!(seg2 = first_seg(seg_lv(seg, 0))) || (find_pool_seg(seg2) != seg))
seg_error("data LV does not refer back to pool LV");
if (!seg->metadata_lv) {
seg_error("is missing a pool metadata LV");
} else if (!(seg2 = first_seg(seg->metadata_lv)) || (find_pool_seg(seg2) != seg))
seg_error("metadata LV does not refer back to pool LV");
if (!validate_pool_chunk_size(lv->vg->cmd, seg->segtype, seg->chunk_size))
seg_error("has invalid chunk size.");
} else { /* !thin_pool && !cache_pool */
if (seg->metadata_lv)
seg_error("must not have pool metadata LV set");
}
if (seg_is_thin_pool(seg)) {
if (!lv_is_thin_pool(lv))
seg_error("is not flagged as thin pool LV");
if (lv_is_thin_volume(lv))
seg_error("is a thin volume that must not contain thin pool segment");
} else { /* !thin_pool */
if (seg->zero_new_blocks)
seg_error("sets zero_new_blocks");
if (seg->discards)
seg_error("sets discards");
if (!dm_list_empty(&seg->thin_messages))
seg_error("sets thin_messages list");
}
if (seg_is_thin_volume(seg)) {
if (!lv_is_thin_volume(lv))
seg_error("is not flagged as thin volume LV");
if (lv_is_thin_pool(lv))
seg_error("is a thin pool that must not contain thin volume segment");
if (!seg->pool_lv) {
seg_error("is missing thin pool LV");
} else if (!lv_is_thin_pool(seg->pool_lv))
seg_error("is not referencing thin pool LV");
if (seg->device_id > DM_THIN_MAX_DEVICE_ID)
seg_error("has too large device id");
if (seg->external_lv &&
!lv_is_external_origin(seg->external_lv))
seg_error("external LV is not flagged as a external origin LV");
if (seg->merge_lv) {
if (!lv_is_thin_volume(seg->merge_lv))
seg_error("merge LV is not flagged as a thin LV");
if (!lv_is_merging_origin(seg->merge_lv))
seg_error("merge LV is not flagged as merging");
}
} else { /* !thin */
if (seg->device_id)
seg_error("sets device_id");
if (seg->external_lv)
seg_error("sets external LV");
if (seg->merge_lv)
seg_error("sets merge LV");
if (seg->indirect_origin)
seg_error("sets indirect_origin LV");
}
/* Some multi-seg vars excluded here */
if (!seg_is_cache(seg) &&
!seg_is_thin_volume(seg)) {
if (seg->pool_lv)
seg_error("sets pool LV");
}
if (!seg_is_pool(seg) &&
/* FIXME: format_pool/import_export.c _add_linear_seg() sets chunk_size */
!seg_is_linear(seg) &&
!seg_is_snapshot(seg)) {
if (seg->chunk_size)
seg_error("sets chunk_size");
}
if (!seg_is_thin_pool(seg) &&
!seg_is_thin_volume(seg)) {
if (seg->transaction_id)
seg_error("sets transaction_id");
}
if (!seg_unknown(seg)) {
if (seg->segtype_private)
seg_error("set segtype_private");
}
}
/*
* Verify that an LV's segments are consecutive, complete and don't overlap.
*/
@@ -474,7 +336,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
{
struct lv_segment *seg, *seg2;
uint32_t le = 0;
unsigned seg_count = 0, seg_found, external_lv_found = 0;
unsigned seg_count = 0, seg_found;
uint32_t area_multiplier, s;
struct seg_list *sl;
struct glv_list *glvl;
@@ -482,15 +344,68 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
struct replicator_site *rsite;
struct replicator_device *rdev;
dm_list_iterate_items(seg, &lv->segments) {
seg_count++;
/* Check LV flags match first segment type */
if (complete_vg) {
if (lv_is_thin_volume(lv)) {
if (dm_list_size(&lv->segments) != 1) {
log_error("LV %s is thin volume without exactly one segment.",
lv->name);
inc_error_count;
} else if (!seg_is_thin_volume(first_seg(lv))) {
log_error("LV %s is thin volume without first thin volume segment.",
lv->name);
inc_error_count;
}
}
if (seg->lv != lv) {
log_error("LV %s invalid: segment %u is referencing different LV.",
lv->name, seg_count);
if (lv_is_thin_pool(lv)) {
if (dm_list_size(&lv->segments) != 1) {
log_error("LV %s is thin pool volume without exactly one segment.",
lv->name);
inc_error_count;
} else if (!seg_is_thin_pool(first_seg(lv))) {
log_error("LV %s is thin pool without first thin pool segment.",
lv->name);
inc_error_count;
}
}
if (lv_is_pool_data(lv) &&
(!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->area_count != 1 || seg_type(seg2, 0) != AREA_LV ||
seg_lv(seg2, 0) != lv)) {
log_error("LV %s: segment 1 pool data LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_pool_metadata(lv)) {
if (!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->metadata_lv != lv) {
log_error("LV %s: segment 1 pool metadata LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_thin_pool_metadata(lv) &&
!strstr(lv->name, "_tmeta")) {
log_error("LV %s: thin pool metadata LV does not use _tmeta",
lv->name);
inc_error_count;
} else if (lv_is_cache_pool_metadata(lv) &&
!strstr(lv->name, "_cmeta")) {
log_error("LV %s: cache pool metadata LV does not use _cmeta",
lv->name);
inc_error_count;
}
}
}
dm_list_iterate_items(seg, &lv->segments) {
seg_count++;
if (complete_vg && seg_is_raid(seg))
_check_raid_seg(seg, &error_count);
if (seg->le != le) {
log_error("LV %s invalid: segment %u should begin at "
"LE %" PRIu32 " (found %" PRIu32 ").",
@@ -508,6 +423,186 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
inc_error_count;
}
if (lv_is_error_when_full(lv) &&
!seg_can_error_when_full(seg)) {
log_error("LV %s: segment %u (%s) does not support flag "
"ERROR_WHEN_FULL.", lv->name, seg_count, seg->segtype->name);
inc_error_count;
}
if (complete_vg && seg->log_lv &&
!seg_is_mirrored(seg) && !(seg->status & RAID_IMAGE)) {
log_error("LV %s: segment %u log LV %s is not a "
"mirror log or a RAID image",
lv->name, seg_count, seg->log_lv->name);
inc_error_count;
}
/*
* Check mirror log - which is attached to the mirrored seg
*/
if (complete_vg && seg->log_lv && seg_is_mirrored(seg)) {
if (!lv_is_mirror_log(seg->log_lv)) {
log_error("LV %s: segment %u log LV %s is not "
"a mirror log",
lv->name, seg_count, seg->log_lv->name);
inc_error_count;
}
if (!(seg2 = first_seg(seg->log_lv)) ||
find_mirror_seg(seg2) != seg) {
log_error("LV %s: segment %u log LV does not "
"point back to mirror segment",
lv->name, seg_count);
inc_error_count;
}
}
if (complete_vg && seg->status & MIRROR_IMAGE) {
if (!find_mirror_seg(seg) ||
!seg_is_mirrored(find_mirror_seg(seg))) {
log_error("LV %s: segment %u mirror image "
"is not mirrored",
lv->name, seg_count);
inc_error_count;
}
}
/* Check the various thin segment types */
if (complete_vg) {
if (seg_is_thin_pool(seg)) {
if (!lv_is_thin_pool(lv)) {
log_error("LV %s is missing thin pool flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (lv_is_thin_volume(lv)) {
log_error("LV %s is a thin volume that must not contain thin pool segment %u",
lv->name, seg_count);
inc_error_count;
}
}
if (seg_is_cache_pool(seg) &&
!dm_list_empty(&seg->lv->segs_using_this_lv)) {
switch (seg->cache_mode) {
case CACHE_MODE_WRITETHROUGH:
case CACHE_MODE_WRITEBACK:
case CACHE_MODE_PASSTHROUGH:
break;
default:
log_error("LV %s has invalid cache's feature flag.",
lv->name);
inc_error_count;
}
if (!seg->policy_name) {
log_error("LV %s is missing cache policy name.", lv->name);
inc_error_count;
}
}
if (seg_is_pool(seg)) {
if (seg->area_count != 1 ||
seg_type(seg, 0) != AREA_LV) {
log_error("LV %s: %s segment %u is missing a pool data LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg_lv(seg, 0))) || find_pool_seg(seg2) != seg) {
log_error("LV %s: %s segment %u data LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
if (!seg->metadata_lv) {
log_error("LV %s: %s segment %u is missing a pool metadata LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg->metadata_lv)) ||
find_pool_seg(seg2) != seg) {
log_error("LV %s: %s segment %u metadata LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
if (!validate_pool_chunk_size(lv->vg->cmd, seg->segtype, seg->chunk_size)) {
log_error("LV %s: %s segment %u has invalid chunk size %u.",
lv->name, seg->segtype->name, seg_count, seg->chunk_size);
inc_error_count;
}
} else {
if (seg->metadata_lv) {
log_error("LV %s: segment %u must not have pool metadata LV set",
lv->name, seg_count);
inc_error_count;
}
}
if (seg_is_thin_volume(seg)) {
if (!lv_is_thin_volume(lv)) {
log_error("LV %s is missing thin volume flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (lv_is_thin_pool(lv)) {
log_error("LV %s is a thin pool that must not contain thin volume segment %u",
lv->name, seg_count);
inc_error_count;
}
if (!seg->pool_lv) {
log_error("LV %s: segment %u is missing thin pool LV",
lv->name, seg_count);
inc_error_count;
} else if (!lv_is_thin_pool(seg->pool_lv)) {
log_error("LV %s: thin volume segment %u pool LV is not flagged as a pool LV",
lv->name, seg_count);
inc_error_count;
}
if (seg->device_id > DM_THIN_MAX_DEVICE_ID) {
log_error("LV %s: thin volume segment %u has too large device id %u",
lv->name, seg_count, seg->device_id);
inc_error_count;
}
if (seg->external_lv && (seg->external_lv->status & LVM_WRITE)) {
log_error("LV %s: external origin %s is writable.",
lv->name, seg->external_lv->name);
inc_error_count;
}
if (seg->merge_lv) {
if (!lv_is_thin_volume(seg->merge_lv)) {
log_error("LV %s: thin volume segment %u merging LV %s is not flagged as a thin LV",
lv->name, seg_count, seg->merge_lv->name);
inc_error_count;
}
if (!lv_is_merging_origin(seg->merge_lv)) {
log_error("LV %s: merging LV %s is not flagged as merging.",
lv->name, seg->merge_lv->name);
inc_error_count;
}
}
} else if (seg_is_cache(seg)) {
if (!lv_is_cache(lv)) {
log_error("LV %s is missing cache flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (!seg->pool_lv) {
log_error("LV %s: segment %u is missing cache_pool LV",
lv->name, seg_count);
inc_error_count;
}
} else {
if (seg->pool_lv) {
log_error("LV %s: segment %u must not have pool LV set",
lv->name, seg_count);
inc_error_count;
}
}
}
if (seg_is_snapshot(seg)) {
if (seg->cow && seg->cow == seg->origin) {
log_error("LV %s: segment %u has same LV %s for "
@@ -520,9 +615,6 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
if (seg_is_replicator(seg) && !check_replicator_segment(seg))
inc_error_count;
if (complete_vg)
_check_lv_segment(lv, seg, seg_count, &error_count);
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_UNASSIGNED) {
log_error("LV %s: segment %u has unassigned "
@@ -604,12 +696,6 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
le += seg->len;
}
if (le != lv->le_count) {
log_error("LV %s: inconsistent LE count %u != %u",
lv->name, le, lv->le_count);
inc_error_count;
}
dm_list_iterate_items(sl, &lv->segs_using_this_lv) {
seg = sl->seg;
seg_found = 0;
@@ -618,7 +704,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
continue;
if (lv == seg_lv(seg, s))
seg_found++;
if (seg->meta_areas && seg_is_raid_with_meta(seg) && (lv == seg_metalv(seg, s)))
if (seg_is_raid_with_meta(seg) && (lv == seg_metalv(seg, s)))
seg_found++;
}
if (seg_is_replicator_dev(seg)) {
@@ -670,10 +756,6 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
lv->name);
inc_error_count;
}
/* Validation of external origin counter */
if (seg->external_lv == lv)
external_lv_found++;
}
dm_list_iterate_items(glvl, &lv->indirect_glvs) {
@@ -695,51 +777,10 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
}
}
/* Check LV flags match first segment type */
if (complete_vg) {
if ((seg_count != 1) &&
(lv_is_cache(lv) ||
lv_is_cache_pool(lv) ||
lv_is_raid(lv) ||
lv_is_snapshot(lv) ||
lv_is_thin_pool(lv) ||
lv_is_thin_volume(lv))) {
log_error("LV %s must have exactly one segment.",
lv->name);
inc_error_count;
}
if (lv_is_pool_data(lv) &&
(!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->area_count != 1 || seg_type(seg2, 0) != AREA_LV ||
seg_lv(seg2, 0) != lv)) {
log_error("LV %s: segment 1 pool data LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_thin_pool_metadata(lv) && !strstr(lv->name, "_tmeta")) {
log_error("LV %s: thin pool metadata LV does not use _tmeta.",
lv->name);
inc_error_count;
} else if (lv_is_cache_pool_metadata(lv) && !strstr(lv->name, "_cmeta")) {
log_error("LV %s: cache pool metadata LV does not use _cmeta.",
lv->name);
inc_error_count;
}
if (lv_is_external_origin(lv)) {
if (lv->external_count != external_lv_found) {
log_error("LV %s: external origin count does not match.",
lv->name);
inc_error_count;
}
if (lv->status & LVM_WRITE) {
log_error("LV %s: external origin cant't be writable.",
lv->name);
inc_error_count;
}
}
if (le != lv->le_count) {
log_error("LV %s: inconsistent LE count %u != %u",
lv->name, le, lv->le_count);
inc_error_count;
}
out:

View File

@@ -198,7 +198,7 @@
#define lv_is_partial(lv) (((lv)->status & PARTIAL_LV) ? 1 : 0)
#define lv_is_virtual(lv) (((lv)->status & VIRTUAL) ? 1 : 0)
#define lv_is_merging(lv) (((lv)->status & MERGING) ? 1 : 0)
#define lv_is_merging_origin(lv) (lv_is_merging(lv) && (lv)->snapshot)
#define lv_is_merging_origin(lv) (lv_is_merging(lv))
#define lv_is_snapshot(lv) (((lv)->status & SNAPSHOT) ? 1 : 0)
#define lv_is_converting(lv) (((lv)->status & CONVERTING) ? 1 : 0)
#define lv_is_external_origin(lv) (((lv)->external_count > 0) ? 1 : 0)
@@ -226,7 +226,6 @@
#define lv_is_raid(lv) (((lv)->status & RAID) ? 1 : 0)
#define lv_is_raid_image(lv) (((lv)->status & RAID_IMAGE) ? 1 : 0)
#define lv_is_raid_image_with_tracking(lv) ((lv_is_raid_image(lv) && !((lv)->status & LVM_WRITE)) ? 1 : 0)
#define lv_is_raid_metadata(lv) (((lv)->status & RAID_META) ? 1 : 0)
#define lv_is_raid_type(lv) (((lv)->status & (RAID | RAID_IMAGE | RAID_META)) ? 1 : 0)
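/* These predicates are plain status-bit tests, so they compose cheaply;
 * e.g. (sketch) "lv_is_raid(lv) && !lv_is_raid_image(lv)" selects
 * top-level RAID LVs while skipping their sub-LV images. */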
@@ -1165,10 +1164,6 @@ struct logical_volume *detach_mirror_log(struct lv_segment *seg);
int attach_mirror_log(struct lv_segment *seg, struct logical_volume *lv);
int remove_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
struct dm_list *removable_pvs, int force);
struct logical_volume *prepare_mirror_log(struct logical_volume *lv,
int in_sync, uint32_t region_size,
struct dm_list *allocatable_pvs,
alloc_policy_t alloc);
int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
uint32_t log_count, uint32_t region_size,
struct dm_list *allocatable_pvs, alloc_policy_t alloc);
@@ -1224,15 +1219,6 @@ int lv_raid_replace(struct logical_volume *lv, int force,
struct dm_list *remove_pvs, struct dm_list *allocate_pvs);
int lv_raid_remove_missing(struct logical_volume *lv);
int partial_raid_lv_supports_degraded_activation(const struct logical_volume *lv);
uint32_t raid_rmeta_extents_delta(struct cmd_context *cmd,
uint32_t rimage_extents_cur, uint32_t rimage_extents_new,
uint32_t region_size, uint32_t extent_size);
uint32_t raid_rimage_extents(const struct segment_type *segtype,
uint32_t extents, uint32_t stripes, uint32_t data_copies);
uint32_t raid_ensure_min_region_size(const struct logical_volume *lv, uint64_t raid_size, uint32_t region_size);
int lv_raid_change_region_size(struct logical_volume *lv,
int yes, int force, uint32_t new_region_size);
int lv_raid_in_sync(const struct logical_volume *lv);
/* -- metadata/raid_manip.c */
/* ++ metadata/cache_manip.c */

View File

@@ -828,25 +828,25 @@ int vg_extend_each_pv(struct volume_group *vg, struct pvcreate_params *pp)
struct pv_list *pvl;
unsigned int max_phys_block_size = 0;
log_debug_metadata("Adding PVs to VG %s.", vg->name);
log_debug_metadata("Adding PVs to VG %s", vg->name);
if (_vg_bad_status_bits(vg, RESIZEABLE_VG))
return_0;
dm_list_iterate_items(pvl, &pp->pvs) {
log_debug_metadata("Adding PV %s to VG %s.", pv_dev_name(pvl->pv), vg->name);
log_debug_metadata("Adding PV %s to VG %s", pv_dev_name(pvl->pv), vg->name);
if (!(check_dev_block_size_for_vg(pvl->pv->dev,
(const struct volume_group *) vg,
&max_phys_block_size))) {
log_error("PV %s has wrong block size.", pv_dev_name(pvl->pv));
return 0;
log_error("PV %s has wrong block size", pv_dev_name(pvl->pv));
return_0;
}
if (!add_pv_to_vg(vg, pv_dev_name(pvl->pv), pvl->pv, 0)) {
log_error("PV %s cannot be added to VG %s.",
pv_dev_name(pvl->pv), vg->name);
return 0;
return_0;
}
}
@@ -1256,7 +1256,7 @@ uint32_t extents_from_percent_size(struct volume_group *vg, const struct dm_list
}
break;
}
/* fall through to use all PVs in VG like %FREE */
/* Fall back to use all PVs in VG like %FREE */
case PERCENT_FREE:
if (!(extents = vg->free_count)) {
log_error("No free extents in Volume group %s.", vg->name);
@@ -2544,7 +2544,7 @@ static int _lv_mark_if_partial_collect(struct logical_volume *lv, void *data)
static int _lv_mark_if_partial_single(struct logical_volume *lv, void *data)
{
unsigned s;
struct _lv_mark_if_partial_baton baton = { .partial = 0 };
struct _lv_mark_if_partial_baton baton;
struct lv_segment *lvseg;
dm_list_iterate_items(lvseg, &lv->segments) {
@@ -2556,6 +2556,7 @@ static int _lv_mark_if_partial_single(struct logical_volume *lv, void *data)
}
}
baton.partial = 0;
if (!_lv_each_dependency(lv, _lv_mark_if_partial_collect, &baton))
return_0;
@@ -5447,19 +5448,6 @@ int vg_flag_write_locked(struct volume_group *vg)
return 0;
}
static int _access_vg_clustered(struct cmd_context *cmd, const struct volume_group *vg)
{
if (vg_is_clustered(vg) && !locking_is_clustered()) {
if (!cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
return 0;
}
return 1;
}
/*
* Performs a set of checks against a VG according to bits set in status
* and returns FAILED_* bits for those that aren't acceptable.
@@ -5471,9 +5459,15 @@ static uint32_t _vg_bad_status_bits(const struct volume_group *vg,
{
uint32_t failure = 0;
if ((status & CLUSTERED) && !_access_vg_clustered(vg->cmd, vg))
if ((status & CLUSTERED) &&
(vg_is_clustered(vg)) && !locking_is_clustered()) {
if (!vg->cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
/* Return because other flags are considered undefined. */
return FAILED_CLUSTERED;
}
if ((status & EXPORTED_VG) &&
vg_is_exported(vg)) {
@@ -5562,6 +5556,19 @@ static int _allow_extra_system_id(struct cmd_context *cmd, const char *system_id
return 0;
}
static int _access_vg_clustered(struct cmd_context *cmd, struct volume_group *vg)
{
if (vg_is_clustered(vg) && !locking_is_clustered()) {
if (!cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
return 0;
}
return 1;
}
static int _access_vg_lock_type(struct cmd_context *cmd, struct volume_group *vg,
uint32_t lockd_state, uint32_t *failure)
{
@@ -5629,11 +5636,6 @@ static int _access_vg_lock_type(struct cmd_context *cmd, struct volume_group *vg
}
}
if (test_mode()) {
log_error("Test mode is not yet supported with lock type %s.", vg->lock_type);
return 0;
}
return 1;
}
@@ -6388,7 +6390,7 @@ int vg_strip_outdated_historical_lvs(struct volume_group *vg) {
* Removal time in the future? Not likely,
* but skip this item in any case.
*/
if (current_time < (time_t) glvl->glv->historical->timestamp_removed)
if ((current_time) < glvl->glv->historical->timestamp_removed)
continue;
if ((current_time - glvl->glv->historical->timestamp_removed) > threshold) {

View File

@@ -136,15 +136,16 @@ struct lv_segment *find_mirror_seg(struct lv_segment *seg)
{
struct lv_segment *mirror_seg;
if (!(mirror_seg = get_only_segment_using_this_lv(seg->lv))) {
log_error("Failed to find mirror_seg for %s", display_lvname(seg->lv));
mirror_seg = get_only_segment_using_this_lv(seg->lv);
if (!mirror_seg) {
log_error("Failed to find mirror_seg for %s", seg->lv->name);
return NULL;
}
if (!seg_is_mirrored(mirror_seg)) {
log_error("LV %s on %s is not a mirror segments.",
display_lvname(mirror_seg->lv),
display_lvname(seg->lv));
log_error("%s on %s is not a mirror segments",
mirror_seg->lv->name, seg->lv->name);
return NULL;
}
@@ -245,7 +246,7 @@ int shift_mirror_images(struct lv_segment *mirrored_seg, unsigned mimage)
if (mimage >= mirrored_seg->area_count) {
log_error("Invalid index (%u) of mirror image supplied "
"to shift_mirror_images().", mimage);
"to shift_mirror_images()", mimage);
return 0;
}
@@ -281,22 +282,21 @@ static int _write_log_header(struct cmd_context *cmd, struct logical_volume *lv)
log_header.nr_regions = xlate64((uint64_t)-1);
if (!(name = dm_pool_alloc(cmd->mem, PATH_MAX))) {
log_error("Name allocation failed - log header not written (%s).",
display_lvname(lv));
log_error("Name allocation failed - log header not written (%s)",
lv->name);
return 0;
}
if (dm_snprintf(name, PATH_MAX, "%s%s/%s", cmd->dev_dir,
lv->vg->name, lv->name) < 0) {
log_error("Name too long - log header not written (%s).",
display_lvname(lv));
log_error("Name too long - log header not written (%s)", lv->name);
return 0;
}
log_verbose("Writing log header to device %s.", display_lvname(lv));
log_verbose("Writing log header to device, %s", lv->name);
if (!(dev = dev_cache_get(name, NULL))) {
log_error("%s: not found: log header not written.", name);
log_error("%s: not found: log header not written", name);
return 0;
}
@@ -304,7 +304,7 @@ static int _write_log_header(struct cmd_context *cmd, struct logical_volume *lv)
return 0;
if (!dev_write(dev, UINT64_C(0), sizeof(log_header), &log_header)) {
log_error("Failed to write log header to %s.", name);
log_error("Failed to write log header to %s", name);
dev_close_immediate(dev);
return 0;
}
@@ -661,11 +661,10 @@ static int _split_mirror_images(struct logical_volume *lv,
struct dm_list split_images;
struct lv_list *lvl;
struct cmd_context *cmd = lv->vg->cmd;
char layer_name[NAME_LEN], format[NAME_LEN];
if (!lv_is_mirrored(lv)) {
log_error("Unable to split non-mirrored LV %s.",
display_lvname(lv));
log_error("Unable to split non-mirrored LV, %s",
lv->name);
return 0;
}
@@ -674,8 +673,8 @@ static int _split_mirror_images(struct logical_volume *lv,
return 0;
}
log_verbose("Detaching %d images from mirror %s.",
split_count, display_lvname(lv));
log_verbose("Detaching %d images from mirror, %s",
split_count, lv->name);
if (!_move_removable_mimages_to_end(lv, split_count, removable_pvs)) {
/*
@@ -685,7 +684,8 @@ static int _split_mirror_images(struct logical_volume *lv,
* removable PVs or all of them. Should we allow
* them to just specify some - making us pick the rest?
*/
log_error("Insufficient removable PVs given to satisfy request.");
log_error("Insufficient removable PVs given"
" to satisfy request");
return 0;
}
@@ -704,7 +704,7 @@ static int _split_mirror_images(struct logical_volume *lv,
if (!release_lv_segment_area(mirrored_seg, mirrored_seg->area_count, mirrored_seg->area_len))
return_0;
log_very_verbose("LV %s assigned to be split.", display_lvname(sub_lv));
log_very_verbose("%s assigned to be split", sub_lv->name);
if (!new_lv) {
lv_set_visible(sub_lv);
@@ -715,7 +715,7 @@ static int _split_mirror_images(struct logical_volume *lv,
/* If there is more than one image being split, add to list */
lvl = dm_pool_alloc(lv->vg->vgmem, sizeof(*lvl));
if (!lvl) {
log_error("lv_list alloc failed.");
log_error("lv_list alloc failed");
return 0;
}
lvl->lv = sub_lv;
@@ -729,14 +729,17 @@ static int _split_mirror_images(struct logical_volume *lv,
}
if (!dm_list_empty(&split_images)) {
size_t len = strlen(new_lv->name) + 32;
char *layer_name, format[len];
/*
* A number of images have been split and
* a new mirror layer must be formed
*/
if (!insert_layer_for_lv(cmd, new_lv, 0, "_mimage_%d")) {
log_error("Failed to build new mirror, %s.",
display_lvname(new_lv));
log_error("Failed to build new mirror, %s",
new_lv->name);
return 0;
}
@@ -745,25 +748,27 @@ static int _split_mirror_images(struct logical_volume *lv,
dm_list_iterate_items(lvl, &split_images) {
sub_lv = lvl->lv;
if (dm_snprintf(format, sizeof(format), "%s_mimage_%%d",
if (dm_snprintf(format, len, "%s_mimage_%%d",
new_lv->name) < 0) {
log_error("Failed to build new image name for %s.",
display_lvname(new_lv));
log_error("Failed to build new image name.");
return 0;
}
if (!generate_lv_name(lv->vg, format, layer_name, sizeof(layer_name))) {
log_error("Failed to generate new image names for %s.",
display_lvname(new_lv));
layer_name = dm_pool_alloc(lv->vg->vgmem, len);
if (!layer_name) {
log_error("Unable to allocate memory");
return 0;
}
if (!(sub_lv->name = dm_pool_strdup(lv->vg->vgmem, layer_name))) {
log_error("Unable to allocate memory.");
if (!generate_lv_name(lv->vg, format, layer_name, len) ||
sscanf(layer_name, format, &i) != 1) {
log_error("Failed to generate new image names");
return 0;
}
sub_lv->name = layer_name;
}
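/* Aside (illustrative): the doubled '%' in "%s_mimage_%%d" survives the
 * first dm_snprintf() pass, so an LV named "lvol0" yields the template
 * "lvol0_mimage_%d"; generate_lv_name() then fills in the first unused
 * index, e.g. "lvol0_mimage_3". */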
if (!_merge_mirror_images(new_lv, &split_images)) {
log_error("Failed to group split images into new mirror.");
log_error("Failed to group split "
"images into new mirror");
return 0;
}
@@ -789,15 +794,41 @@ static int _split_mirror_images(struct logical_volume *lv,
detached_log_lv = detach_mirror_log(mirrored_seg);
if (!remove_layer_from_lv(lv, sub_lv))
return_0;
lv->status &= ~(MIRROR | MIRRORED | LV_NOTSYNCED);
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
}
if (!vg_write(mirrored_seg->lv->vg)) {
log_error("Intermediate VG metadata write failed.");
return 0;
}
/*
* Suspend and resume the mirror - this includes all
* the sub-LVs and soon-to-be-split sub-LVs
* Suspend the mirror - this includes all the sub-LVs and
* soon-to-be-split sub-LVs
*/
if (!lv_update_and_reload(mirrored_seg->lv))
return_0;
if (!suspend_lv(cmd, mirrored_seg->lv)) {
log_error("Failed to lock %s", mirrored_seg->lv->name);
vg_revert(mirrored_seg->lv->vg);
return 0;
}
if (!vg_commit(mirrored_seg->lv->vg)) {
resume_lv(cmd, mirrored_seg->lv);
return 0;
}
log_very_verbose("Updating \"%s\" in kernel", mirrored_seg->lv->name);
/*
* Resume the mirror - this also activates the visible, independent
* soon-to-be-split sub-LVs
*/
if (!resume_lv(cmd, mirrored_seg->lv)) {
log_error("Problem resuming %s", mirrored_seg->lv->name);
return 0;
}
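/* Sketch of the write/suspend/commit/resume sequence that
 * lv_update_and_reload() condenses (drawn from the open-coded path
 * above; error handling abbreviated, illustrative only):
 *
 *	if (!vg_write(lv->vg))		// stage new metadata
 *		return 0;
 *	if (!suspend_lv(cmd, lv)) {	// quiesce, preload new tables
 *		vg_revert(lv->vg);
 *		return 0;
 *	}
 *	if (!vg_commit(lv->vg)) {	// make staged metadata live
 *		resume_lv(cmd, lv);
 *		return 0;
 *	}
 *	return resume_lv(cmd, lv);	// activate the new mapping
 */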
/*
* Recycle newly split LV so it is properly renamed.
@@ -862,7 +893,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
struct logical_volume *sub_lv;
struct logical_volume *detached_log_lv = NULL;
struct logical_volume *temp_layer_lv = NULL;
struct lv_segment *seg, *pvmove_seg, *mirrored_seg = first_seg(lv);
struct lv_segment *pvmove_seg, *mirrored_seg = first_seg(lv);
uint32_t old_area_count = mirrored_seg->area_count;
uint32_t new_area_count = mirrored_seg->area_count;
struct lv_list *lvl;
@@ -873,14 +904,15 @@ static int _remove_mirror_images(struct logical_volume *lv,
if (removed)
*removed = 0;
log_very_verbose("Reducing mirror set %s from " FMTu32 " to " FMTu32
" image(s)%s.", display_lvname(lv),
log_very_verbose("Reducing mirror set %s from %" PRIu32 " to %"
PRIu32 " image(s)%s.", lv->name,
old_area_count, old_area_count - num_removed,
remove_log ? " and no log volume" : "");
if (collapse && (old_area_count - num_removed != 1)) {
log_error("Incompatible parameters to _remove_mirror_images.");
log_error("Incompatible parameters to _remove_mirror_images");
return 0;
}
num_removed = 0;
@@ -904,8 +936,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
*/
if ((s == 0) && !_mirrored_lv_in_sync(lv) &&
!(lv_is_partial(lv))) {
log_error("Unable to remove primary mirror image while mirror volume "
"%s is not in-sync.", display_lvname(lv));
log_error("Unable to remove primary mirror image while mirror is not in-sync");
return 0;
}
if (!shift_mirror_images(mirrored_seg, s))
@@ -934,7 +965,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
seg_lv(mirrored_seg, m)->status &= ~MIRROR_IMAGE;
lv_set_visible(seg_lv(mirrored_seg, m));
if (!(lvl = dm_pool_alloc(lv->vg->cmd->mem, sizeof(*lvl)))) {
log_error("lv_list alloc failed.");
log_error("lv_list alloc failed");
return 0;
}
lvl->lv = seg_lv(mirrored_seg, m);
@@ -956,17 +987,19 @@ static int _remove_mirror_images(struct logical_volume *lv,
if (!remove_layer_from_lv(lv, temp_layer_lv))
return_0;
if (collapse && !_merge_mirror_images(lv, &tmp_orphan_lvs)) {
log_error("Failed to add mirror images.");
log_error("Failed to add mirror images");
return 0;
}
/*
* No longer a mirror? Even though new_area_count was 1,
* _merge_mirror_images may have resulted in lv still being a
* mirror. Fix up the flags if we only have one image left.
*/
if (lv_mirror_count(lv) == 1)
lv->status &= ~(MIRROR | MIRRORED | LV_NOTSYNCED);
/*
* No longer a mirror? Even though new_area_count was 1,
* _merge_mirror_images may have resulted in lv still being a
* mirror. Fix up the flags if we only have one image left.
*/
if (lv_mirror_count(lv) == 1) {
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
}
mirrored_seg = first_seg(lv);
if (remove_log && !detached_log_lv)
detached_log_lv = detach_mirror_log(mirrored_seg);
@@ -975,12 +1008,14 @@ static int _remove_mirror_images(struct logical_volume *lv,
dm_list_iterate_items(pvmove_seg, &lv->segments)
pvmove_seg->status |= PVMOVE;
} else if (new_area_count == 0) {
log_very_verbose("All mimages of %s are gone.", display_lvname(lv));
log_very_verbose("All mimages of %s are gone", lv->name);
/* All mirror images are gone.
* It can happen for vgreduce --removemissing. */
detached_log_lv = detach_mirror_log(mirrored_seg);
lv->status &= ~(MIRROR | MIRRORED | LV_NOTSYNCED);
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
if (!replace_lv_with_error_segment(lv))
return_0;
} else if (remove_log)
@@ -1000,10 +1035,10 @@ static int _remove_mirror_images(struct logical_volume *lv,
*/
if (detached_log_lv && lv_is_mirrored(detached_log_lv) &&
lv_is_partial(detached_log_lv)) {
seg = first_seg(detached_log_lv);
struct lv_segment *seg = first_seg(detached_log_lv);
log_very_verbose("%s being removed due to failures.",
display_lvname(detached_log_lv));
log_very_verbose("%s being removed due to failures",
detached_log_lv->name);
/*
* We are going to replace the mirror with an
@@ -1012,24 +1047,47 @@ static int _remove_mirror_images(struct logical_volume *lv,
* the sub-lv's)
*/
for (m = 0; m < seg->area_count; m++) {
if (!(lvl = dm_pool_alloc(lv->vg->cmd->mem,
sizeof(*lvl))))
return_0;
seg_lv(seg, m)->status &= ~MIRROR_IMAGE;
lv_set_visible(seg_lv(seg, m));
if (!(lvl = dm_pool_alloc(lv->vg->cmd->mem,
sizeof(*lvl)))) {
log_error("dm_pool_alloc failed");
return 0;
}
lvl->lv = seg_lv(seg, m);
dm_list_add(&tmp_orphan_lvs, &lvl->list);
}
if (!replace_lv_with_error_segment(detached_log_lv)) {
log_error("Failed error target substitution for %s.",
display_lvname(detached_log_lv));
log_error("Failed error target substitution for %s",
detached_log_lv->name);
return 0;
}
if (!lv_update_and_reload(detached_log_lv))
if (!vg_write(detached_log_lv->vg)) {
log_error("intermediate VG write failed.");
return 0;
}
if (!suspend_lv(detached_log_lv->vg->cmd,
detached_log_lv)) {
log_error("Failed to suspend %s",
detached_log_lv->name);
return 0;
}
if (!vg_commit(detached_log_lv->vg)) {
if (!resume_lv(detached_log_lv->vg->cmd,
detached_log_lv))
stack;
return_0;
}
if (!resume_lv(detached_log_lv->vg->cmd, detached_log_lv)) {
log_error("Failed to resume %s",
detached_log_lv->name);
return 0;
}
}
/*
@@ -1037,13 +1095,14 @@ static int _remove_mirror_images(struct logical_volume *lv,
* remove the LVs from the mirror set, commit that metadata
* then deactivate and remove them fully.
*/
if (!vg_write(mirrored_seg->lv->vg)) {
log_error("intermediate VG write failed.");
return 0;
}
if (!suspend_lv_origin(mirrored_seg->lv->vg->cmd, mirrored_seg->lv)) {
log_error("Failed to lock %s.", display_lvname(mirrored_seg->lv));
log_error("Failed to lock %s", mirrored_seg->lv->name);
vg_revert(mirrored_seg->lv->vg);
return 0;
}
@@ -1056,7 +1115,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
* FIXME: check propagation of suspend with visible flag
*/
if (temp_layer_lv && !suspend_lv(temp_layer_lv->vg->cmd, temp_layer_lv))
log_error("Problem suspending temporary LV %s.", display_lvname(temp_layer_lv));
log_error("Problem suspending temporary LV %s", temp_layer_lv->name);
if (!vg_commit(mirrored_seg->lv->vg)) {
if (!resume_lv(mirrored_seg->lv->vg->cmd, mirrored_seg->lv))
@@ -1064,7 +1123,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
return_0;
}
log_very_verbose("Updating %s in kernel.", display_lvname(mirrored_seg->lv));
log_very_verbose("Updating \"%s\" in kernel", mirrored_seg->lv->name);
/*
* Avoid having same mirror target loaded twice simultaneously by first
@@ -1073,12 +1132,12 @@ static int _remove_mirror_images(struct logical_volume *lv,
* explicitly.
*/
if (temp_layer_lv && !resume_lv(temp_layer_lv->vg->cmd, temp_layer_lv)) {
log_error("Problem resuming temporary LV %s.", display_lvname(temp_layer_lv));
log_error("Problem resuming temporary LV, %s", temp_layer_lv->name);
return 0;
}
if (!resume_lv_origin(mirrored_seg->lv->vg->cmd, mirrored_seg->lv)) {
log_error("Problem reactivating %s.", display_lvname(mirrored_seg->lv));
log_error("Problem reactivating %s", mirrored_seg->lv->name);
return 0;
}
@@ -1103,8 +1162,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
1, &lv->tags, 0)) {
/* As a result, unnecessary sync may run after
* collapsing. But safe.*/
log_error("Failed to initialize log device %s.",
display_lvname(first_seg(lv)->log_lv));
log_error("Failed to initialize log device");
return 0;
}
}
@@ -1112,9 +1170,8 @@ static int _remove_mirror_images(struct logical_volume *lv,
if (removed)
*removed = old_area_count - new_area_count;
log_very_verbose(FMTu32 " image(s) removed from %s.",
old_area_count - new_area_count,
display_lvname(lv));
log_very_verbose(FMTu32 " image(s) removed from %s",
old_area_count - new_area_count, lv->name);
return 1;
}
@@ -1296,15 +1353,15 @@ static int _replace_mirror_images(struct lv_segment *mirrored_seg,
/* FIXME: Use lvconvert rather than duplicating its code */
if (mirrored_seg->area_count < num_mirrors) {
log_warn("WARNING: Failed to replace mirror device in %s.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Failed to replace mirror device in %s/%s",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
if ((mirrored_seg->area_count > 1) && !mirrored_seg->log_lv)
log_warn("WARNING: Use 'lvconvert -m %d %s --corelog' to replace failed devices.",
num_mirrors - 1, display_lvname(lv));
log_warn("WARNING: Use 'lvconvert -m %d %s/%s --corelog' to replace failed devices",
num_mirrors - 1, lv->vg->name, lv->name);
else
log_warn("WARNING: Use 'lvconvert -m %d %s' to replace failed devices.",
num_mirrors - 1, display_lvname(lv));
log_warn("WARNING: Use 'lvconvert -m %d %s/%s' to replace failed devices",
num_mirrors - 1, lv->vg->name, lv->name);
r = 0;
/* REMEMBER/FIXME: set in_sync to 0 if a new mirror device was added */
@@ -1317,11 +1374,11 @@ static int _replace_mirror_images(struct lv_segment *mirrored_seg,
*/
if ((mirrored_seg->area_count > 1) && !mirrored_seg->log_lv &&
(log_policy != MIRROR_REMOVE)) {
log_warn("WARNING: Failed to replace mirror log device in %s.",
display_lvname(lv));
log_warn("WARNING: Failed to replace mirror log device in %s/%s",
lv->vg->name, lv->name);
log_warn("WARNING: Use 'lvconvert -m %d %s' to replace failed devices.",
mirrored_seg->area_count - 1 , display_lvname(lv));
log_warn("WARNING: Use 'lvconvert -m %d %s/%s' to replace failed devices",
mirrored_seg->area_count - 1 , lv->vg->name, lv->name);
r = 0;
}
@@ -1354,8 +1411,8 @@ int reconfigure_mirror_images(struct lv_segment *mirrored_seg, uint32_t num_mirr
/* Unable to remove bad devices */
return 0;
log_warn("WARNING: Bad device removed from mirror volume %s.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Bad device removed from mirror volume, %s/%s",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
log_policy = _get_mirror_log_fault_policy(mirrored_seg->lv->vg->cmd);
dev_policy = _get_mirror_device_fault_policy(mirrored_seg->lv->vg->cmd);
@@ -1367,20 +1424,20 @@ int reconfigure_mirror_images(struct lv_segment *mirrored_seg, uint32_t num_mirr
if (!r)
/* Failed to replace device(s) */
log_warn("WARNING: Unable to find substitute device for mirror volume %s.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Unable to find substitute device for mirror volume, %s/%s",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
else if (r > 0)
/* Success in replacing device(s) */
log_warn("WARNING: Mirror volume %s restored - substitute for failed device found.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Mirror volume, %s/%s restored - substitute for failed device found.",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
else
/* Bad device removed, but not replaced because of policy */
if (mirrored_seg->area_count == 1) {
log_warn("WARNING: Mirror volume %s converted to linear due to device failure.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Mirror volume, %s/%s converted to linear due to device failure.",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
} else if (had_log && !mirrored_seg->log_lv) {
log_warn("WARNING: Mirror volume %s disk log removed due to device failure.",
display_lvname(mirrored_seg->lv));
log_warn("WARNING: Mirror volume, %s/%s disk log removed due to device failure.",
mirrored_seg->lv->vg->name, mirrored_seg->lv->name);
}
/*
* If we made it here, we at least removed the bad device.
@@ -1399,11 +1456,15 @@ static int _create_mimage_lvs(struct alloc_handle *ah,
int log)
{
uint32_t m, first_area;
char img_name[NAME_LEN];
char *img_name;
size_t len;
len = strlen(lv->name) + 32;
img_name = alloca(len);
if (dm_snprintf(img_name, sizeof(img_name), "%s_mimage_%%d", lv->name) < 0) {
log_error("Failed to build new mirror image name for %s.",
display_lvname(lv));
if (dm_snprintf(img_name, len, "%s_mimage_%%d", lv->name) < 0) {
log_error("img_name allocation failed. "
"Remove new LV and retry.");
return 0;
}
@@ -1673,26 +1734,23 @@ static int _add_mirrors_that_preserve_segments(struct logical_volume *lv,
if (!(ah = allocate_extents(lv->vg, NULL, segtype, 1, mirrors, 0, 0,
lv->le_count, allocatable_pvs, alloc, 0,
parallel_areas))) {
log_error("Unable to allocate mirror extents for %s.",
display_lvname(lv));
log_error("Unable to allocate mirror extents for %s.", lv->name);
return 0;
}
if (flags & MIRROR_BY_SEG) {
if (!lv_add_mirror_areas(ah, lv, 0, adjusted_region_size)) {
log_error("Failed to add mirror areas to %s.",
display_lvname(lv));
log_error("Failed to add mirror areas to %s", lv->name);
r = 0;
}
} else if (flags & MIRROR_BY_SEGMENTED_LV) {
if (!lv_add_segmented_mirror_image(ah, lv, 0,
adjusted_region_size)) {
log_error("Failed to add mirror areas to %s.",
display_lvname(lv));
log_error("Failed to add mirror areas to %s", lv->name);
r = 0;
}
} else {
log_error(INTERNAL_ERROR "Unknown mirror flag.");
log_error(INTERNAL_ERROR "Unknown mirror flag");
r = 0;
}
alloc_destroy(ah);
@@ -1726,7 +1784,7 @@ int remove_mirror_log(struct cmd_context *cmd,
/* Unimplemented features */
if (dm_list_size(&lv->segments) != 1) {
log_error("Multiple-segment mirror is not supported.");
log_error("Multiple-segment mirror is not supported");
return 0;
}
@@ -1738,19 +1796,19 @@ int remove_mirror_log(struct cmd_context *cmd,
return 0;
}
} else if (lv_is_active(lv)) {
log_error("Unable to determine sync status of "
"remotely active mirror volume %s.", display_lvname(lv));
log_error("Unable to determine sync status of"
" remotely active mirror, %s", lv->name);
return 0;
} else if (vg_is_clustered(vg)) {
log_error("Unable to convert the log of an inactive "
"cluster mirror volume %s.", display_lvname(lv));
"cluster mirror, %s", lv->name);
return 0;
} else if (force || yes_no_prompt("Full resync required to convert inactive "
"mirror volume %s to core log. "
"Proceed? [y/n]: ", display_lvname(lv)) == 'y')
} else if (force || yes_no_prompt("Full resync required to convert "
"inactive mirror %s to core log. "
"Proceed? [y/n]: ", lv->name) == 'y')
sync_percent = 0;
else {
log_error("Logical volume %s NOT converted.", display_lvname(lv));
log_error("Logical volume %s NOT converted.", lv->name);
return 0;
}
@@ -1776,10 +1834,14 @@ static struct logical_volume *_create_mirror_log(struct logical_volume *lv,
const char *suffix)
{
struct logical_volume *log_lv;
char log_name[NAME_LEN];
char *log_name;
size_t len;
if (dm_snprintf(log_name, sizeof(log_name), "%s%s", lv_name, suffix) < 0) {
log_error("Failed to build new mirror log name for %s.", lv_name);
len = strlen(lv_name) + 32;
log_name = alloca(len); /* alloca never fails */
if (dm_snprintf(log_name, len, "%s%s", lv_name, suffix) < 0) {
log_error("log_name allocation failed.");
return NULL;
}
@@ -1810,7 +1872,7 @@ static int _form_mirror(struct cmd_context *cmd, struct alloc_handle *ah,
if (dm_list_size(&lv->segments) != 1 ||
seg_type(first_seg(lv), 0) != AREA_LV)
if (!insert_layer_for_lv(cmd, lv, 0, "_mimage_%d"))
return_0;
return 0;
/*
* create mirror image LVs
@@ -1821,9 +1883,7 @@ static int _form_mirror(struct cmd_context *cmd, struct alloc_handle *ah,
return_0;
if (!lv_add_mirror_lvs(lv, img_lvs, mirrors,
/* Pass through MIRRORED & LOCKED status flag
* TODO: Any other would be needed ?? */
MIRROR_IMAGE | (lv->status & (MIRRORED | LOCKED)),
MIRROR_IMAGE | (lv->status & LOCKED),
region_size)) {
log_error("Aborting. Failed to add mirror segment. "
"Remove new LV and retry.");
@@ -1900,49 +1960,6 @@ int attach_mirror_log(struct lv_segment *seg, struct logical_volume *log_lv)
return add_seg_to_segs_using_this_lv(log_lv, seg);
}
/* Prepare disk mirror log for raid1->mirror conversion */
struct logical_volume *prepare_mirror_log(struct logical_volume *lv,
int in_sync, uint32_t region_size,
struct dm_list *allocatable_pvs,
alloc_policy_t alloc)
{
struct cmd_context *cmd = lv->vg->cmd;
const struct segment_type *segtype;
struct dm_list *parallel_areas;
struct alloc_handle *ah;
struct logical_volume *log_lv;
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0, 0)))
return_NULL;
if (!(segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_MIRROR)))
return_NULL;
/* Allocate destination extents */
if (!(ah = allocate_extents(lv->vg, NULL, segtype,
0, 0, 1, region_size,
lv->le_count, allocatable_pvs,
alloc, 0, parallel_areas))) {
log_error("Unable to allocate extents for mirror log.");
return NULL;
}
if (!(log_lv = _create_mirror_log(lv, ah, alloc, lv->name, "_mlog"))) {
log_error("Failed to create mirror log.");
goto out;
}
if (!_init_mirror_log(cmd, log_lv, in_sync, &lv->tags, 1)) {
log_error("Failed to initialise mirror log.");
log_lv = NULL;
goto out;
}
out:
alloc_destroy(ah);
return log_lv;
}
int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
uint32_t log_count, uint32_t region_size,
struct dm_list *allocatable_pvs, alloc_policy_t alloc)
@@ -1957,25 +1974,25 @@ int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
int r = 0;
if (vg_is_clustered(lv->vg) && (log_count > 1)) {
log_error("Log type, \"mirrored\", is unavailable to cluster mirrors.");
log_error("Log type, \"mirrored\", is unavailable to cluster mirrors");
return 0;
}
if (dm_list_size(&lv->segments) != 1) {
log_error("Multiple-segment mirror is not supported.");
log_error("Multiple-segment mirror is not supported");
return 0;
}
if (lv_is_active_but_not_locally(lv)) {
log_error("Unable to convert the log of a mirror, %s, that is "
"active remotely but not locally.", display_lvname(lv));
"active remotely but not locally", lv->name);
return 0;
}
log_lv = first_seg(lv)->log_lv;
old_log_count = (log_lv) ? lv_mirror_count(log_lv) : 0;
if (old_log_count == log_count) {
log_verbose("Mirror %s already has a %s log.", display_lvname(lv),
log_verbose("Mirror already has a %s log",
!log_count ? "core" :
(log_count == 1) ? "disk" : "mirrored");
return 1;
@@ -1990,15 +2007,16 @@ int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
if (activation() && segtype->ops->target_present &&
!segtype->ops->target_present(cmd, NULL, NULL)) {
log_error("%s: Required device-mapper target(s) not "
"detected in your kernel.", segtype->name);
"detected in your kernel", segtype->name);
return 0;
}
/* allocate destination extents */
if (!(ah = allocate_extents(lv->vg, NULL, segtype,
0, 0, log_count - old_log_count, region_size,
lv->le_count, allocatable_pvs,
alloc, 0, parallel_areas))) {
ah = allocate_extents(lv->vg, NULL, segtype,
0, 0, log_count - old_log_count, region_size,
lv->le_count, allocatable_pvs,
alloc, 0, parallel_areas);
if (!ah) {
log_error("Unable to allocate extents for mirror log.");
return 0;
}
@@ -2059,9 +2077,10 @@ int add_mirror_images(struct cmd_context *cmd, struct logical_volume *lv,
if (!(segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_MIRROR)))
return_0;
if (!(ah = allocate_extents(lv->vg, NULL, segtype,
stripes, mirrors, log_count, region_size, lv->le_count,
allocatable_pvs, alloc, 0, parallel_areas))) {
ah = allocate_extents(lv->vg, NULL, segtype,
stripes, mirrors, log_count, region_size, lv->le_count,
allocatable_pvs, alloc, 0, parallel_areas);
if (!ah) {
log_error("Unable to allocate extents for mirror(s).");
return 0;
}
@@ -2117,7 +2136,7 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
struct dm_list *pvs, alloc_policy_t alloc, uint32_t flags)
{
if (!mirrors && !log_count) {
log_error("No conversion is requested.");
log_error("No conversion is requested");
return 0;
}
@@ -2137,7 +2156,7 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
* log daemon is multi-threaded.
*/
if (log_count > 1) {
log_error("Log type, \"mirrored\", is unavailable to cluster mirrors.");
log_error("Log type, \"mirrored\", is unavailable to cluster mirrors");
return 0;
}
}
@@ -2155,12 +2174,12 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
if (flags & MIRROR_BY_SEG) {
if (log_count) {
log_error("Persistent log is not supported on "
"segment-by-segment mirroring.");
"segment-by-segment mirroring");
return 0;
}
if (stripes > 1) {
log_error("Striped-mirroring is not supported on "
"segment-by-segment mirroring.");
"segment-by-segment mirroring");
return 0;
}
@@ -2170,7 +2189,7 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
} else if (flags & MIRROR_BY_SEGMENTED_LV) {
if (stripes > 1) {
log_error("Striped-mirroring is not supported on "
"segment-by-segment mirroring.");
"segment-by-segment mirroring");
return 0;
}
@@ -2186,27 +2205,26 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
pvs, alloc, log_count);
}
log_error("Unsupported mirror conversion type.");
log_error("Unsupported mirror conversion type");
return 0;
}
int lv_split_mirror_images(struct logical_volume *lv, const char *split_name,
uint32_t split_count, struct dm_list *removable_pvs)
{
int r;
int historical;
if (lv_name_is_used_in_vg(lv->vg, split_name, &historical)) {
log_error("%sLogical Volume \"%s\" already exists in "
"volume group \"%s\".", historical ? "historical " : "",
"volume group \"%s\"", historical ? "historical " : "",
split_name, lv->vg->name);
return 0;
}
/* Can't split a mirror that is not in-sync... unless force? */
if (!_mirrored_lv_in_sync(lv)) {
log_error("Unable to split mirror %s that is not in-sync.",
display_lvname(lv));
log_error("Unable to split mirror that is not in-sync.");
return 0;
}
@@ -2220,7 +2238,8 @@ int lv_split_mirror_images(struct logical_volume *lv, const char *split_name,
* is being implemented. For now, we force the user to
* come up with a name for their LV.
*/
if (!_split_mirror_images(lv, split_name, split_count, removable_pvs))
r = _split_mirror_images(lv, split_name, split_count, removable_pvs);
if (!r)
return_0;
return 1;
@@ -2242,18 +2261,18 @@ int lv_remove_mirrors(struct cmd_context *cmd __attribute__((unused)),
struct lv_segment *seg;
if (!mirrors && !log_count) {
log_error("No conversion is requested.");
log_error("No conversion is requested");
return 0;
}
seg = first_seg(lv);
if (!seg_is_mirrored(seg)) {
log_error("Not a mirror segment.");
log_error("Not a mirror segment");
return 0;
}
if (lv_mirror_count(lv) <= mirrors) {
log_error("Removing more than existing: %d <= %d.",
log_error("Removing more than existing: %d <= %d",
seg->area_count, mirrors);
return 0;
}
@@ -2269,7 +2288,7 @@ int lv_remove_mirrors(struct cmd_context *cmd __attribute__((unused)),
/* MIRROR_BY_SEG */
if (log_count) {
log_error("Persistent log is not supported on "
"segment-by-segment mirroring.");
"segment-by-segment mirroring");
return 0;
}
return remove_mirrors_from_segments(lv, new_mirrors, status_mask);

File diff suppressed because it is too large.

View File

@@ -50,8 +50,7 @@ struct dev_manager;
#define SEG_RAID0 0x0000000000040000ULL
#define SEG_RAID0_META 0x0000000000080000ULL
#define SEG_RAID1 0x0000000000100000ULL
#define SEG_RAID10_NEAR 0x0000000000200000ULL
#define SEG_RAID10 SEG_RAID10_NEAR
#define SEG_RAID10 0x0000000000200000ULL
#define SEG_RAID4 0x0000000000400000ULL
#define SEG_RAID5_N 0x0000000000800000ULL
#define SEG_RAID5_LA 0x0000000001000000ULL
@@ -133,10 +132,6 @@ struct dev_manager;
#define segtype_is_raid6_nr(segtype) ((segtype)->flags & SEG_RAID6_NR ? 1 : 0)
#define segtype_is_raid6_n_6(segtype) ((segtype)->flags & SEG_RAID6_N_6 ? 1 : 0)
#define segtype_is_raid6_zr(segtype) ((segtype)->flags & SEG_RAID6_ZR ? 1 : 0)
#define segtype_is_raid6_ls_6(segtype) ((segtype)->flags & SEG_RAID6_LS_6 ? 1 : 0)
#define segtype_is_raid6_rs_6(segtype) ((segtype)->flags & SEG_RAID6_RS_6 ? 1 : 0)
#define segtype_is_raid6_la_6(segtype) ((segtype)->flags & SEG_RAID6_LA_6 ? 1 : 0)
#define segtype_is_raid6_ra_6(segtype) ((segtype)->flags & SEG_RAID6_RA_6 ? 1 : 0)
#define segtype_is_any_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10_near(segtype) segtype_is_raid10(segtype)
@@ -149,12 +144,6 @@ struct dev_manager;
#define segtype_is_virtual(segtype) ((segtype)->flags & SEG_VIRTUAL ? 1 : 0)
#define segtype_is_unknown(segtype) ((segtype)->flags & SEG_UNKNOWN ? 1 : 0)
#define segtype_can_split(segtype) ((segtype)->flags & SEG_CAN_SPLIT ? 1 : 0)
#define segtype_cannot_be_zeroed(segtype) ((segtype)->flags & SEG_CANNOT_BE_ZEROED ? 1 : 0)
#define segtype_monitored(segtype) ((segtype)->flags & SEG_MONITORED ? 1 : 0)
#define segtype_only_exclusive(segtype) ((segtype)->flags & SEG_ONLY_EXCLUSIVE ? 1 : 0)
#define segtype_can_error_when_full(segtype) ((segtype)->flags & SEG_CAN_ERROR_WHEN_FULL ? 1 : 0)
#define segtype_supports_stripe_size(segtype) \
((segtype_is_striped(segtype) || segtype_is_mirror(segtype) || \
segtype_is_cache(segtype) || segtype_is_cache_pool(segtype) || \
@@ -199,11 +188,11 @@ struct dev_manager;
#define seg_is_thin_volume(seg) segtype_is_thin_volume((seg)->segtype)
#define seg_is_virtual(seg) segtype_is_virtual((seg)->segtype)
#define seg_unknown(seg) segtype_is_unknown((seg)->segtype)
#define seg_can_split(seg) segtype_can_split((seg)->segtype)
#define seg_cannot_be_zeroed(seg) segtype_cannot_be_zeroed((seg)->segtype)
#define seg_monitored(seg) segtype_monitored((seg)->segtype)
#define seg_only_exclusive(seg) segtype_only_exclusive((seg)->segtype)
#define seg_can_error_when_full(seg) segtype_can_error_when_full((seg)->segtype)
#define seg_can_split(seg) ((seg)->segtype->flags & SEG_CAN_SPLIT ? 1 : 0)
#define seg_cannot_be_zeroed(seg) ((seg)->segtype->flags & SEG_CANNOT_BE_ZEROED ? 1 : 0)
#define seg_monitored(seg) ((seg)->segtype->flags & SEG_MONITORED ? 1 : 0)
#define seg_only_exclusive(seg) ((seg)->segtype->flags & SEG_ONLY_EXCLUSIVE ? 1 : 0)
#define seg_can_error_when_full(seg) ((seg)->segtype->flags & SEG_CAN_ERROR_WHEN_FULL ? 1 : 0)
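/* All three spellings test the same flag bit, e.g. (sketch):
 *	seg_can_split(seg)
 * ==	segtype_can_split((seg)->segtype)
 * ==	((seg)->segtype->flags & SEG_CAN_SPLIT ? 1 : 0)
 */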
struct segment_type {
struct dm_list list; /* Internal */

View File

@@ -121,7 +121,7 @@ int lv_is_visible(const struct logical_volume *lv)
if (lv_is_historical(lv))
return 1;
if (lv_is_snapshot(lv))
if (lv->status & SNAPSHOT)
return 0;
if (lv_is_cow(lv)) {

View File

@@ -112,7 +112,7 @@ static takeover_fn_t _takeover_fns[][11] = {
/* raid1 */ { r1__lin, r1__str, r1__mir, r1__r0, r1__r0m, r1__r1, r1__r45, X , r1__r10, X , X },
/* raid4/5 */ { r45_lin, r45_str, r45_mir, r45_r0, r45_r0m, r45_r1, r45_r54, r45_r6, X , X , X },
/* raid6 */ { X , r6__str, X , r6__r0, r6__r0m, X , r6__r45, X , X , X , X },
/* raid10 */ { r10_lin, r10_str, r10_mir, r10_r0, r10_r0m, r10_r1, X , X , X , X , X },
/* raid10 */ // { r10_lin, r10_str, r10_mir, r10_r0, r10_r0m, r10_r1, X , X , r10_r10, r10_r01, X },
/* raid01 */ // { X , r01_str, X , X , X , X , X , X , r01_r10, r01_r01, X },
/* other */ { X , X , X , X , X , X , X , X , X , X , X },
};
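/* A minimal standalone sketch of this 2-D dispatch pattern (hypothetical
 * names, not the lvm2 API): a conversion is looked up by indexing the
 * table with the current and requested segment types. */
#include <stdio.h>
typedef int (*takeover_fn_t)(const char *from, const char *to);
static int _ok(const char *f, const char *t) { printf("convert %s -> %s\n", f, t); return 1; }
static int _no(const char *f, const char *t) { printf("%s -> %s unsupported\n", f, t); return 0; }
enum { T_LINEAR, T_RAID1, T_COUNT };
static const char *_names[T_COUNT] = { "linear", "raid1" };
static takeover_fn_t _fns[T_COUNT][T_COUNT] = {
	/* linear */ { _no, _ok },
	/* raid1  */ { _ok, _no },
};
int main(void)
{
	return !_fns[T_LINEAR][T_RAID1](_names[T_LINEAR], _names[T_RAID1]);
}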

View File

@@ -99,10 +99,6 @@ int attach_thin_external_origin(struct lv_segment *seg,
external_lv->name);
external_lv->status &= ~LVM_WRITE;
}
// TODO: should we mark even origin read-only ?
//if (lv_is_cache(external_lv)) /* read-only corigin of cache LV */
// seg_lv(first_seg(external_lv), 0)->status &= ~LVM_WRITE;
}
return 1;

View File

@@ -15,6 +15,7 @@
#include "lib.h"
#include "config.h"
#include "lvm-file.h"
#include "lvm-flock.h"
#include "lvm-signal.h"
#include "locking.h"

View File

@@ -35,47 +35,6 @@
#define uninitialized_var(x) x = x
#endif
/*
* GCC 3.4 adds a __builtin_clz, which uses the count leading zeros (clz)
* instruction on arches that have one. Provide a fallback using shifts
* and comparisons for older compilers.
*/
#ifdef HAVE___BUILTIN_CLZ
#define clz(x) __builtin_clz((x))
#else /* ifdef HAVE___BUILTIN_CLZ */
unsigned _dm_clz(unsigned x)
{
int n;
if ((int)x <= 0) return (~x >> 26) & 32;
n = 1;
if ((x >> 16) == 0) {
n = n + 16;
x = x << 16;
}
if ((x >> 24) == 0) {
n = n + 8;
x = x << 8;
}
if ((x >> 28) == 0) {
n = n + 4;
x = x << 4;
}
if ((x >> 30) == 0) {
n = n + 2;
x = x << 2;
}
n = n - (x >> 31);
return n;
}
#define clz(x) _dm_clz((x))
#endif /* ifdef HAVE___BUILTIN_CLZ */
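As a quick sanity check of the fallback, the following illustrative harness (not part of libdevmapper, and assuming the clz() definition above is in scope) exercises _dm_clz at the edges. Note that clz(0) is undefined for the GCC builtin, while the fallback above returns 32:

        #include <assert.h>

        int main(void)
        {
                assert(clz(0x80000000u) == 0);  /* top bit set: no leading zeros */
                assert(clz(0x00010000u) == 15); /* bit 16 set: 15 leading zeros */
                assert(clz(1u) == 31);          /* only bit 0 set */
                return 0;
        }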
#define KERNEL_VERSION(major, minor, release) (((major) << 16) + ((minor) << 8) + (release))
/* Define some portable printing types */

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2011-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -87,13 +87,13 @@ static int _raid_text_import_areas(struct lv_segment *seg,
}
/* Metadata device comes first. */
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
}
if (!seg_is_raid0(seg)) {
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
}
if (strstr(lv->name, "_rmeta_")) {
if (!set_lv_segment_area_lv(seg, s, lv, 0, RAID_META))
return_0;
cv = cv->next;
@@ -167,6 +167,7 @@ static int _raid_text_import(struct lv_segment *seg,
if (seg_is_any_raid0(seg))
seg->area_len /= seg->area_count;
seg->status |= RAID;
return 1;
}
@@ -341,69 +342,13 @@ static int _raid_target_percent(void **target_state,
*total_denominator += denominator;
if (seg)
seg->extents_copied = (uint64_t) seg->area_len * dm_make_percent(numerator, denominator) / DM_PERCENT_100;
seg->extents_copied = seg->area_len * numerator / denominator;
*percent = dm_make_percent(numerator, denominator);
return 1;
}
static int _raid_transient_status(struct dm_pool *mem,
struct lv_segment *seg,
char *params)
{
int failed = 0, r = 0;
unsigned i;
struct lvinfo info;
struct logical_volume *lv;
struct dm_status_raid *sr;
log_debug("Raid transient status %s.", params);
if (!dm_get_status_raid(mem, params, &sr))
return_0;
if (sr->dev_count != seg->area_count) {
log_error("Active raid has a wrong number of raid images!");
log_error("Metadata says %u, kernel says %u.",
seg->area_count, sr->dev_count);
goto out;
}
if (seg->meta_areas)
for (i = 0; i < seg->area_count; ++i) {
lv = seg_metalv(seg, i);
if (!lv_info(lv->vg->cmd, lv, 0, &info, 0, 0)) {
log_error("Check for existence of raid meta %s failed.",
display_lvname(lv));
goto out;
}
}
for (i = 0; i < seg->area_count; ++i) {
lv = seg_lv(seg, i);
if (!lv_info(lv->vg->cmd, lv, 0, &info, 0, 0)) {
log_error("Check for existence of raid image %s failed.",
display_lvname(lv));
goto out;
}
if (sr->dev_health[i] == 'D') {
lv->status |= PARTIAL_LV;
++failed;
}
}
/* Update PARTIAL_LV flags across the VG */
if (failed)
vg_mark_partial_lvs(lv->vg, 0);
r = 1;
out:
dm_pool_free(mem, sr);
return r;
}
static int _raid_target_present(struct cmd_context *cmd,
const struct lv_segment *seg __attribute__((unused)),
unsigned *attributes)
@@ -515,7 +460,6 @@ static struct segtype_handler _raid_ops = {
#ifdef DEVMAPPER_SUPPORT
.target_percent = _raid_target_percent,
.target_present = _raid_target_present,
.check_transient_status = _raid_transient_status,
.modules_needed = _raid_modules_needed,
# ifdef DMEVENTD
.target_monitored = _raid_target_monitored,
@@ -537,20 +481,14 @@ static const struct raid_type {
{ SEG_TYPE_NAME_RAID10, 0, SEG_RAID10 | SEG_AREAS_MIRRORED },
{ SEG_TYPE_NAME_RAID4, 1, SEG_RAID4 },
{ SEG_TYPE_NAME_RAID5, 1, SEG_RAID5 },
{ SEG_TYPE_NAME_RAID5_N, 1, SEG_RAID5_N },
{ SEG_TYPE_NAME_RAID5_LA, 1, SEG_RAID5_LA },
{ SEG_TYPE_NAME_RAID5_LS, 1, SEG_RAID5_LS },
{ SEG_TYPE_NAME_RAID5_RA, 1, SEG_RAID5_RA },
{ SEG_TYPE_NAME_RAID5_RS, 1, SEG_RAID5_RS },
{ SEG_TYPE_NAME_RAID6, 2, SEG_RAID6 },
{ SEG_TYPE_NAME_RAID6_N_6, 2, SEG_RAID6_N_6 },
{ SEG_TYPE_NAME_RAID6_NC, 2, SEG_RAID6_NC },
{ SEG_TYPE_NAME_RAID6_NR, 2, SEG_RAID6_NR },
{ SEG_TYPE_NAME_RAID6_ZR, 2, SEG_RAID6_ZR },
{ SEG_TYPE_NAME_RAID6_LS_6, 2, SEG_RAID6_LS_6 },
{ SEG_TYPE_NAME_RAID6_RS_6, 2, SEG_RAID6_RS_6 },
{ SEG_TYPE_NAME_RAID6_LA_6, 2, SEG_RAID6_LA_6 },
{ SEG_TYPE_NAME_RAID6_RA_6, 2, SEG_RAID6_RA_6 }
{ SEG_TYPE_NAME_RAID6_ZR, 2, SEG_RAID6_ZR }
};
static struct segment_type *_init_raid_segtype(struct cmd_context *cmd,

View File

@@ -21,19 +21,9 @@
* determines the order the entries appear in this file.
*
* When adding new entries take care to use the existing style.
*
* Do not interleave fields from different report types - for example,
* if you have a field of type "LVS" add it in between "LVS type fields"
* and "End of LVS type fields" comment. If you interleaved fields of
* different types here in this file, they would end up interleaved in
* the <reporting_command> -o help output too which may be confusing
* for users.
*
* Displayed fields names normally have a type prefix and use underscores.
*
* Field-specific internal functions names normally match the displayed
* field names but without underscores.
*
* Help text ends with a full stop.
*/
@@ -42,15 +32,13 @@
*/
/* *INDENT-OFF* */
/*
* LVS type fields
*/
FIELD(LVS, lv, STR, "LV UUID", lvid, 38, lvuuid, lv_uuid, "Unique identifier.", 0)
FIELD(LVS, lv, STR, "LV", lvid, 4, lvname, lv_name, "Name. LVs created for internal use are enclosed in brackets.", 0)
FIELD(LVS, lv, STR, "LV", lvid, 4, lvfullname, lv_full_name, "Full name of LV including its VG, namely VG/LV.", 0)
FIELD(LVS, lv, STR, "Path", lvid, 0, lvpath, lv_path, "Full pathname for LV. Blank for internal LVs.", 0)
FIELD(LVS, lv, STR, "DMPath", lvid, 0, lvdmpath, lv_dm_path, "Internal device-mapper pathname for LV (in /dev/mapper directory).", 0)
FIELD(LVS, lv, STR, "Parent", lvid, 0, lvparent, lv_parent, "For LVs that are components of another LV, the parent LV.", 0)
FIELD(LVSINFOSTATUS, lv, STR, "Attr", lvid, 0, lvstatus, lv_attr, "Various attributes - see man page.", 0)
FIELD(LVS, lv, STR_LIST, "Layout", lvid, 10, lvlayout, lv_layout, "LV layout.", 0)
FIELD(LVS, lv, STR_LIST, "Role", lvid, 10, lvrole, lv_role, "LV role.", 0)
FIELD(LVS, lv, BIN, "InitImgSync", lvid, 10, lvinitialimagesync, lv_initial_image_sync, "Set if mirror/RAID images underwent initial resynchronization.", 0)
@@ -60,6 +48,8 @@ FIELD(LVS, lv, BIN, "Converting", lvid, 0, lvconverting, lv_converting, "Set if
FIELD(LVS, lv, STR, "AllocPol", lvid, 10, lvallocationpolicy, lv_allocation_policy, "LV allocation policy.", 0)
FIELD(LVS, lv, BIN, "AllocLock", lvid, 10, lvallocationlocked, lv_allocation_locked, "Set if LV is locked against allocation changes.", 0)
FIELD(LVS, lv, BIN, "FixMin", lvid, 10, lvfixedminor, lv_fixed_minor, "Set if LV has fixed minor number assigned.", 0)
FIELD(LVS, lv, BIN, "MergeFailed", lvid, 15, lvmergefailed, lv_merge_failed, "Set if snapshot merge failed.", 0)
FIELD(LVS, lv, BIN, "SnapInvalid", lvid, 15, lvsnapshotinvalid, lv_snapshot_invalid, "Set if snapshot LV is invalid.", 0)
FIELD(LVS, lv, BIN, "SkipAct", lvid, 15, lvskipactivation, lv_skip_activation, "Set if LV is skipped on activation.", 0)
FIELD(LVS, lv, STR, "WhenFull", lvid, 15, lvwhenfull, lv_when_full, "For thin pools, behavior when full.", 0)
FIELD(LVS, lv, STR, "Active", lvid, 0, lvactive, lv_active, "Active state of the LV.", 0)
@@ -79,6 +69,11 @@ FIELD(LVS, lv, STR_LIST, "Ancestors", lvid, 0, lvancestors, lv_ancestors, "LV an
FIELD(LVS, lv, STR_LIST, "FAncestors", lvid, 0, lvfullancestors, lv_full_ancestors, "LV ancestors including stored history of the ancestry chain.", 0)
FIELD(LVS, lv, STR_LIST, "Descendants", lvid, 0, lvdescendants, lv_descendants, "LV descendants ignoring any stored history of the ancestry chain.", 0)
FIELD(LVS, lv, STR_LIST, "FDescendants", lvid, 0, lvfulldescendants, lv_full_descendants, "LV descendants including stored history of the ancestry chain.", 0)
FIELD(LVSSTATUS, lv, PCT, "Data%", lvid, 6, datapercent, data_percent, "For snapshot, cache and thin pools and volumes, the percentage full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Snap%", lvid, 6, snpercent, snap_percent, "For snapshots, the percentage full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Meta%", lvid, 6, metadatapercent, metadata_percent, "For cache and thin pools, the percentage of metadata full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Cpy%Sync", lvid, 0, copypercent, copy_percent, "For Cache, RAID, mirrors and pvmove, current percentage in-sync.", 0)
FIELD(LVSSTATUS, lv, PCT, "Cpy%Sync", lvid, 0, copypercent, sync_percent, "For Cache, RAID, mirrors and pvmove, current percentage in-sync.", 0)
FIELD(LVS, lv, NUM, "Mismatches", lvid, 0, raidmismatchcount, raid_mismatch_count, "For RAID, number of mismatches found or repaired.", 0)
FIELD(LVS, lv, STR, "SyncAction", lvid, 0, raidsyncaction, raid_sync_action, "For RAID, the current synchronization action being performed.", 0)
FIELD(LVS, lv, NUM, "WBehind", lvid, 0, raidwritebehind, raid_write_behind, "For RAID1, the number of outstanding writes allowed to writemostly devices.", 0)
@@ -104,13 +99,7 @@ FIELD(LVS, lv, TIM, "RTime", lvid, 26, lvtimeremoved, lv_time_removed, "Removal
FIELD(LVS, lv, STR, "Host", lvid, 10, lvhost, lv_host, "Creation host of the LV, if known.", 0)
FIELD(LVS, lv, STR_LIST, "Modules", lvid, 0, modules, lv_modules, "Kernel device-mapper modules required for this LV.", 0)
FIELD(LVS, lv, BIN, "Historical", lvid, 0, lvhistorical, lv_historical, "Set if the LV is historical.", 0)
/*
* End of LVS type fields
*/
/*
* LVSINFO type fields
*/
FIELD(LVSINFO, lv, SNUM, "KMaj", lvid, 0, lvkmaj, lv_kernel_major, "Currently assigned major number or -1 if LV is not active.", 0)
FIELD(LVSINFO, lv, SNUM, "KMin", lvid, 0, lvkmin, lv_kernel_minor, "Currently assigned minor number or -1 if LV is not active.", 0)
FIELD(LVSINFO, lv, SIZ, "KRahead", lvid, 0, lvkreadahead, lv_kernel_read_ahead, "Currently-in-use read ahead setting in current units.", 0)
@@ -119,18 +108,7 @@ FIELD(LVSINFO, lv, BIN, "Suspended", lvid, 10, lvsuspended, lv_suspended, "Set i
FIELD(LVSINFO, lv, BIN, "LiveTable", lvid, 20, lvlivetable, lv_live_table, "Set if LV has live table present.", 0)
FIELD(LVSINFO, lv, BIN, "InactiveTable", lvid, 20, lvinactivetable, lv_inactive_table, "Set if LV has inactive table present.", 0)
FIELD(LVSINFO, lv, BIN, "DevOpen", lvid, 10, lvdeviceopen, lv_device_open, "Set if LV device is open.", 0)
/*
* End of LVSINFO type fields
*/
/*
* LVSSTATUS type fields
*/
FIELD(LVSSTATUS, lv, PCT, "Data%", lvid, 6, datapercent, data_percent, "For snapshot, cache and thin pools and volumes, the percentage full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Snap%", lvid, 6, snpercent, snap_percent, "For snapshots, the percentage full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Meta%", lvid, 6, metadatapercent, metadata_percent, "For cache and thin pools, the percentage of metadata full if LV is active.", 0)
FIELD(LVSSTATUS, lv, PCT, "Cpy%Sync", lvid, 0, copypercent, copy_percent, "For Cache, RAID, mirrors and pvmove, current percentage in-sync.", 0)
FIELD(LVSSTATUS, lv, PCT, "Cpy%Sync", lvid, 0, copypercent, sync_percent, "For Cache, RAID, mirrors and pvmove, current percentage in-sync.", 0)
FIELD(LVSSTATUS, lv, NUM, "CacheTotalBlocks", lvid, 0, cache_total_blocks, cache_total_blocks, "Total cache blocks.", 0)
FIELD(LVSSTATUS, lv, NUM, "CacheUsedBlocks", lvid, 16, cache_used_blocks, cache_used_blocks, "Used cache blocks.", 0)
FIELD(LVSSTATUS, lv, NUM, "CacheDirtyBlocks", lvid, 0, cache_dirty_blocks, cache_dirty_blocks, "Dirty cache blocks.", 0)
@@ -143,23 +121,7 @@ FIELD(LVSSTATUS, lv, STR, "KCachePolicy", lvid, 18, kernel_cache_policy, kernel_
FIELD(LVSSTATUS, lv, STR, "Health", lvid, 15, lvhealthstatus, lv_health_status, "LV health status.", 0)
FIELD(LVSSTATUS, lv, STR, "KDiscards", lvid, 0, kdiscards, kernel_discards, "For thin pools, how discards are handled in kernel.", 0)
FIELD(LVSSTATUS, lv, BIN, "CheckNeeded", lvid, 15, lvcheckneeded, lv_check_needed, "For thin pools and cache volumes, whether metadata check is needed.", 0)
FIELD(LVSSTATUS, lv, BIN, "MergeFailed", lvid, 15, lvmergefailed, lv_merge_failed, "Set if snapshot merge failed.", 0)
FIELD(LVSSTATUS, lv, BIN, "SnapInvalid", lvid, 15, lvsnapshotinvalid, lv_snapshot_invalid, "Set if snapshot LV is invalid.", 0)
/*
* End of LVSSTATUS type fields
*/
/*
* LVSINFOSTATUS type fields
*/
FIELD(LVSINFOSTATUS, lv, STR, "Attr", lvid, 0, lvstatus, lv_attr, "Various attributes - see man page.", 0)
/*
* End of LVSINFOSTATUS type fields
*/
/*
* LABEL type fields
*/
FIELD(LABEL, label, STR, "Fmt", type, 0, pvfmt, pv_fmt, "Type of metadata.", 0)
FIELD(LABEL, label, STR, "PV UUID", type, 38, pvuuid, pv_uuid, "Unique identifier.", 0)
FIELD(LABEL, label, SIZ, "DevSize", dev, 0, devsize, dev_size, "Size of underlying device in current units.", 0)
@@ -169,13 +131,7 @@ FIELD(LABEL, label, STR, "Min", dev, 0, devminor, pv_minor, "Device minor number
FIELD(LABEL, label, SIZ, "PMdaFree", type, 9, pvmdafree, pv_mda_free, "Free metadata area space on this device in current units.", 0)
FIELD(LABEL, label, SIZ, "PMdaSize", type, 9, pvmdasize, pv_mda_size, "Size of smallest metadata area on this device in current units.", 0)
FIELD(LABEL, label, NUM, "PExtVsn", type, 0, pvextvsn, pv_ext_vsn, "PV header extension version.", 0)
/*
* End of LABEL type fields
*/
/*
* PVS type fields
*/
FIELD(PVS, pv, NUM, "1st PE", pe_start, 7, size64, pe_start, "Offset to the start of data on the underlying device.", 0)
FIELD(PVS, pv, SIZ, "PSize", id, 0, pvsize, pv_size, "Size of PV in current units.", 0)
FIELD(PVS, pv, SIZ, "PFree", id, 0, pvfree, pv_free, "Total amount of unallocated space in current units.", 0)
@@ -193,13 +149,7 @@ FIELD(PVS, pv, SIZ, "BA Start", ba_start, 0, size64, pv_ba_start, "Offset to the
FIELD(PVS, pv, SIZ, "BA Size", ba_size, 0, size64, pv_ba_size, "Size of PV Bootloader Area in current units.", 0)
FIELD(PVS, pv, BIN, "PInUse", id, 0, pvinuse, pv_in_use, "Set if PV is used.", 0)
FIELD(PVS, pv, BIN, "Duplicate", id, 0, pvduplicate, pv_duplicate, "Set if PV is an unchosen duplicate.", 0)
/*
* End of PVS type fields
*/
/*
* VGS type fields
*/
FIELD(VGS, vg, STR, "Fmt", cmd, 0, vgfmt, vg_fmt, "Type of metadata.", 0)
FIELD(VGS, vg, STR, "VG UUID", id, 38, uuid, vg_uuid, "Unique identifier.", 0)
FIELD(VGS, vg, STR, "VG", name, 0, string, vg_name, "Name.", 0)
@@ -233,13 +183,7 @@ FIELD(VGS, vg, NUM, "#VMdaUse", cmd, 0, vgmdasused, vg_mda_used_count, "Number o
FIELD(VGS, vg, SIZ, "VMdaFree", cmd, 9, vgmdafree, vg_mda_free, "Free metadata area space for this VG in current units.", 0)
FIELD(VGS, vg, SIZ, "VMdaSize", cmd, 9, vgmdasize, vg_mda_size, "Size of smallest metadata area for this VG in current units.", 0)
FIELD(VGS, vg, NUM, "#VMdaCps", cmd, 0, vgmdacopies, vg_mda_copies, "Target number of in use metadata areas in the VG.", 1)
/*
* End of VGS type fields
*/
/*
* SEGS type fields
*/
FIELD(SEGS, seg, STR, "Type", list, 0, segtype, segtype, "Type of LV segment.", 0)
FIELD(SEGS, seg, NUM, "#Str", area_count, 0, uint32, stripes, "Number of stripes or mirror legs.", 0)
FIELD(SEGS, seg, SIZ, "Stripe", stripe_size, 0, size32, stripe_size, "For stripes, amount of data placed on one device before switching to the next.", 0)
@@ -264,16 +208,7 @@ FIELD(SEGS, seg, STR_LIST, "Metadata Devs", list, 0, metadatadevices, metadata_d
FIELD(SEGS, seg, STR, "Monitor", list, 0, segmonitor, seg_monitor, "Dmeventd monitoring status of the segment.", 0)
FIELD(SEGS, seg, STR, "CachePolicy", list, 0, cache_policy, cache_policy, "The cache policy (cached segments only).", 0)
FIELD(SEGS, seg, STR_LIST, "CacheSettings", list, 0, cache_settings, cache_settings, "Cache settings/parameters (cached segments only).", 0)
/*
* End of SEGS type fields
*/
/*
* PVSEGS type fields
*/
FIELD(PVSEGS, pvseg, NUM, "Start", pe, 0, uint32, pvseg_start, "Physical Extent number of start of segment.", 0)
FIELD(PVSEGS, pvseg, NUM, "SSize", len, 0, uint32, pvseg_size, "Number of extents in segment.", 0)
/*
* End of PVSEGS type fields
*/
/* *INDENT-ON* */

View File

@@ -2936,16 +2936,12 @@ static int _metadatapercent_disp(struct dm_report *rh,
const void *data, void *private)
{
const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
dm_percent_t percent;
dm_percent_t percent = DM_PERCENT_INVALID;
switch (lvdm->seg_status.type) {
case SEG_STATUS_CACHE:
case SEG_STATUS_THIN_POOL:
if (lv_is_thin_pool(lvdm->lv) ||
lv_is_cache(lvdm->lv) ||
lv_is_used_cache_pool(lvdm->lv))
percent = lvseg_percent_with_info_and_seg_status(lvdm, PERCENT_GET_METADATA);
break;
default:
percent = DM_PERCENT_INVALID;
}
return dm_report_field_percent(rh, field, &percent);
}
@@ -3367,26 +3363,30 @@ static int _lvmergefailed_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
const struct logical_volume *lv = (const struct logical_volume *) data;
dm_percent_t snap_percent;
int merge_failed;
if (lvdm->seg_status.type != SEG_STATUS_SNAPSHOT)
if (!lv_is_cow(lv) || !lv_snapshot_percent(lv, &snap_percent))
return _binary_undef_disp(rh, mem, field, private);
return _binary_disp(rh, mem, field, lvdm->seg_status.snapshot->merge_failed,
GET_FIRST_RESERVED_NAME(lv_merge_failed_y), private);
merge_failed = snap_percent == LVM_PERCENT_MERGE_FAILED;
return _binary_disp(rh, mem, field, merge_failed, GET_FIRST_RESERVED_NAME(lv_merge_failed_y), private);
}
static int _lvsnapshotinvalid_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_with_info_and_seg_status *lvdm = (const struct lv_with_info_and_seg_status *) data;
const struct logical_volume *lv = (const struct logical_volume *) data;
dm_percent_t snap_percent;
int snap_invalid;
if (lvdm->seg_status.type != SEG_STATUS_SNAPSHOT)
if (!lv_is_cow(lv))
return _binary_undef_disp(rh, mem, field, private);
return _binary_disp(rh, mem, field, lvdm->seg_status.snapshot->invalid,
GET_FIRST_RESERVED_NAME(lv_snapshot_invalid_y), private);
snap_invalid = !lv_snapshot_percent(lv, &snap_percent) || snap_percent == DM_PERCENT_INVALID;
return _binary_disp(rh, mem, field, snap_invalid, GET_FIRST_RESERVED_NAME(lv_snapshot_invalid_y), private);
}
static int _lvsuspended_disp(struct dm_report *rh, struct dm_pool *mem,

View File

@@ -159,11 +159,6 @@ static int _striped_merge_segments(struct lv_segment *seg1, struct lv_segment *s
}
#ifdef DEVMAPPER_SUPPORT
static int _striped_target_status_compatible(const char *type)
{
return (strcmp(type, TARGET_NAME_LINEAR) == 0);
}
static int _striped_add_target_line(struct dev_manager *dm,
struct dm_pool *mem __attribute__((unused)),
struct cmd_context *cmd __attribute__((unused)),
@@ -223,7 +218,6 @@ static struct segtype_handler _striped_ops = {
.text_export = _striped_text_export,
.merge_segments = _striped_merge_segments,
#ifdef DEVMAPPER_SUPPORT
.target_status_compatible = _striped_target_status_compatible,
.add_target_line = _striped_add_target_line,
.target_present = _striped_target_present,
#endif

View File

@@ -42,7 +42,7 @@ static int _pthread_create(pthread_t *t, void *(*fun)(void *), void *arg, int st
/*
* We use a smaller stack since it gets preallocated in its entirety
*/
pthread_attr_setstacksize(&attr, stacksize + getpagesize());
pthread_attr_setstacksize(&attr, stacksize);
return pthread_create(t, &attr, fun, arg);
}
#endif

View File

@@ -1,5 +0,0 @@
dm_bit_get_last
dm_bit_get_prev
dm_stats_update_regions_from_fd
dm_bitset_parse_list
dm_stats_bind_from_fd

View File

@@ -76,13 +76,6 @@ static int _test_word(uint32_t test, int bit)
return (tb ? ffs(tb) + bit - 1 : -1);
}
static int _test_word_rev(uint32_t test, int bit)
{
uint32_t tb = test << (DM_BITS_PER_INT - 1 - bit);
return (tb ? bit - clz(tb) : -1);
}
int dm_bit_get_next(dm_bitset_t bs, int last_bit)
{
int bit, word;
@@ -108,45 +101,15 @@ int dm_bit_get_next(dm_bitset_t bs, int last_bit)
return -1;
}
int dm_bit_get_prev(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit--; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit >= 0) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word_rev(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = (last_bit & ~(DM_BITS_PER_INT - 1)) - 1;
}
return -1;
}
int dm_bit_get_first(dm_bitset_t bs)
{
return dm_bit_get_next(bs, -1);
}
int dm_bit_get_last(dm_bitset_t bs)
{
return dm_bit_get_prev(bs, bs[0] + 1);
}
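The dm_bit_get_last()/dm_bit_get_prev() pair removed here mirrored the forward dm_bit_get_first()/dm_bit_get_next() walk, enabling a highest-to-lowest iteration over set bits. An illustrative caller (assuming libdevmapper linkage; a NULL pool makes dm_bitset_create() use plain allocation):

        #include <stdio.h>
        #include "libdevmapper.h"

        int main(void)
        {
                dm_bitset_t bs = dm_bitset_create(NULL, 64);
                int bit;

                dm_bit_set(bs, 3);
                dm_bit_set(bs, 42);

                /* Walks set bits from highest to lowest: prints 42, then 3. */
                for (bit = dm_bit_get_last(bs); bit >= 0;
                     bit = dm_bit_get_prev(bs, bit))
                        printf("bit %d is set\n", bit);

                dm_bitset_destroy(bs);
                return 0;
        }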
/*
* Based on the Linux kernel __bitmap_parselist from lib/bitmap.c
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits)
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem)
{
unsigned a, b;
int c, old_c, totaldigits, ndigits, nmaskbits;
@@ -222,9 +185,6 @@ scan:
} while (len && c == ',');
if (!mask) {
if (min_num_bits && (nmaskbits < min_num_bits))
nmaskbits = min_num_bits;
if (!(mask = dm_bitset_create(mem, nmaskbits)))
goto_bad;
str = start;
@@ -241,19 +201,3 @@ bad:
}
return NULL;
}
#if defined(__GNUC__)
/*
* Maintain backward compatibility with older versions that did not
* accept a 'min_num_bits' argument to dm_bitset_parse_list().
*/
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem);
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem)
{
return dm_bitset_parse_list(str, mem, 0);
}
DM_EXPORT_SYMBOL(dm_bitset_parse_list, 1_02_129);
#else /* if defined(__GNUC__) */
#endif
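For reference, the two-argument form of dm_bitset_parse_list() kept by this change accepts the kernel-style list syntax ("N", "N-M", comma separated). An illustrative caller, assuming libdevmapper linkage:

        #include <stdio.h>
        #include "libdevmapper.h"

        int main(void)
        {
                /* "0-2,5" sets bits 0, 1, 2 and 5; NULL pool => plain allocation. */
                dm_bitset_t bs = dm_bitset_parse_list("0-2,5", NULL);

                if (!bs)
                        return 1;
                printf("bit 5 is %s\n", dm_bit(bs, 5) ? "set" : "clear");
                dm_bitset_destroy(bs);
                return 0;
        }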

View File

@@ -1851,10 +1851,10 @@ static struct dm_ioctl *_do_dm_ioctl(struct dm_task *dmt, unsigned command,
dmi->flags &= ~DM_EXISTS_FLAG; /* FIXME */
else {
if (_log_suppress || dmt->ioctl_errno == EINTR)
log_verbose("device-mapper: %s ioctl on %s %s%s%.0d%s%.0d%s%s "
log_verbose("device-mapper: %s ioctl on %s%s%s%.0d%s%.0d%s%s "
"failed: %s",
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
dmt->major > 0 ? "(" : "",
dmt->major > 0 ? dmt->major : 0,
dmt->major > 0 ? ":" : "",
@@ -1863,10 +1863,10 @@ static struct dm_ioctl *_do_dm_ioctl(struct dm_task *dmt, unsigned command,
dmt->major > 0 ? ")" : "",
strerror(dmt->ioctl_errno));
else
log_error("device-mapper: %s ioctl on %s %s%s%.0d%s%.0d%s%s "
log_error("device-mapper: %s ioctl on %s%s%s%.0d%s%.0d%s%s "
"failed: %s",
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
dmi->name, dmi->uuid,
dmt->major > 0 ? "(" : "",
dmt->major > 0 ? dmt->major : 0,
dmt->major > 0 ? ":" : "",

View File

@@ -327,9 +327,7 @@ struct dm_status_raid {
uint64_t mismatch_count;
uint32_t dev_count;
char *raid_type;
/* A - alive, a - alive not in-sync, D - dead/failed */
char *dev_health;
/* idle, frozen, resync, recover, check, repair */
char *sync_action;
};
@@ -517,16 +515,6 @@ int dm_stats_bind_name(struct dm_stats *dms, const char *name);
*/
int dm_stats_bind_uuid(struct dm_stats *dms, const char *uuid);
/*
* Bind a dm_stats handle to the device backing the file referenced
* by the specified file descriptor.
*
* File descriptor fd must reference a regular file, open for reading,
* in a local file system, backed by a device-mapper device, that
* supports the FIEMAP ioctl, and that returns data describing the
* physical location of extents.
*/
int dm_stats_bind_from_fd(struct dm_stats *dms, int fd);
/*
* Test whether the running kernel supports the precise_timestamps
* feature. Presence of this feature also implies histogram support.
@@ -1148,9 +1136,9 @@ for (dm_stats_walk_init((dms), DM_STATS_WALK_AREA), \
*/
#define dm_stats_foreach_group(dms) \
for (dm_stats_walk_init((dms), DM_STATS_WALK_GROUP), \
dm_stats_walk_start(dms); \
!dm_stats_walk_end(dms); \
dm_stats_walk_next(dms))
dm_stats_group_walk_start(dms); \
!dm_stats_group_walk_end(dms); \
dm_stats_group_walk_next(dms))
/*
* Start a walk iterating over the regions contained in dm_stats handle
@@ -1307,66 +1295,29 @@ int dm_stats_get_group_descriptor(const struct dm_stats *dms,
* filesystem and optionally place them into a group.
*
* File descriptor fd must reference a regular file, open for reading,
* in a local file system that supports the FIEMAP ioctl, and that
* in a local file system that supports the FIEMAP ioctl and that
* returns data describing the physical location of extents.
*
* The file descriptor can be closed by the caller following the call
* to dm_stats_create_regions_from_fd().
*
* The function returns a pointer to an array of uint64_t containing
* the IDs of the newly created regions. The array is terminated by the
* value DM_STATS_REGIONS_ALL and should be freed using dm_free() when
* no longer required.
*
* Unless nogroup is non-zero the regions will be placed into a group
* and the group alias set to the value supplied (if alias is NULL no
* group alias will be assigned).
*
* On success the function returns a pointer to an array of uint64_t
* containing the IDs of the newly created regions. The region_id
* array is terminated by the value DM_STATS_REGION_NOT_PRESENT and
* should be freed using dm_free() when no longer required.
*
* On error NULL is returned.
*
* Following a call to dm_stats_create_regions_from_fd() the handle
* is guaranteed to be in a listed state, and to contain any region
* and group identifiers created by the operation.
* and the group alias is set to the value supplied.
*
* The group_id for the new group is equal to the region_id value in
* the first array element.
*
* File mapped histograms will be supported in a future version.
*/
uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
int group, int precise,
struct dm_histogram *bounds,
const char *alias);
/*
* Update a group of regions that correspond to the extents of a file
* in the filesystem, adding and removing regions to account for
* allocation changes in the underlying file.
*
* File descriptor fd must reference a regular file, open for reading,
* in a local file system that supports the FIEMAP ioctl, and that
* returns data describing the physical location of extents.
*
* The file descriptor can be closed by the caller following the call
* to dm_stats_update_regions_from_fd().
*
* On success the function returns a pointer to an array of uint64_t
* containing the IDs of the updated regions (including any existing
* regions that were not modified by the call).
*
* The region_id array is terminated by the special value
* DM_STATS_REGION_NOT_PRESENT and should be freed using dm_free()
* when no longer required.
*
* On error NULL is returned.
*
* Following a call to dm_stats_update_regions_from_fd() the handle
* is guaranteed to be in a listed state, and to contain any region
* and group identifiers created by the operation.
*
* This function cannot be used with file mapped regions that are
* not members of a group: either group the regions, or remove them
* and re-map them with dm_stats_create_regions_from_fd().
*/
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t group_id);
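Taken together, the create/update pair documented in the hunk above supports a "filemap" workflow: map a file's extents into a grouped set of regions once, then refresh the group as the file's allocation changes. A hedged sketch using only the signatures shown in this header (error handling trimmed; "demo" is an arbitrary group alias):

        #include <fcntl.h>
        #include <unistd.h>
        #include "libdevmapper.h"

        static int map_then_refresh(struct dm_stats *dms, const char *path)
        {
                uint64_t *ids, group_id;
                int fd = open(path, O_RDONLY);

                if (fd < 0)
                        return 0;

                /* group=1 places all file regions into one group named "demo". */
                if (!(ids = dm_stats_create_regions_from_fd(dms, fd, 1, 0,
                                                            NULL, "demo"))) {
                        close(fd);
                        return 0;
                }
                group_id = ids[0];      /* group_id equals the first region_id */
                dm_free(ids);
                close(fd);

                /* Later, after the file grows or shrinks, refresh the group. */
                if ((fd = open(path, O_RDONLY)) < 0)
                        return 0;
                ids = dm_stats_update_regions_from_fd(dms, fd, group_id);
                close(fd);
                if (!ids)
                        return 0;
                dm_free(ids);
                return 1;
        }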
/*
* Call this to actually run the ioctl.
@@ -2111,8 +2062,6 @@ void dm_bit_and(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2);
void dm_bit_union(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2);
int dm_bit_get_first(dm_bitset_t bs);
int dm_bit_get_next(dm_bitset_t bs, int last_bit);
int dm_bit_get_last(dm_bitset_t bs);
int dm_bit_get_prev(dm_bitset_t bs, int last_bit);
#define DM_BITS_PER_INT (sizeof(int) * CHAR_BIT)
@@ -2132,7 +2081,7 @@ int dm_bit_get_prev(dm_bitset_t bs, int last_bit);
memset((bs) + 1, 0, ((*(bs) / DM_BITS_PER_INT) + 1) * sizeof(int))
#define dm_bit_copy(bs1, bs2) \
memcpy((bs1) + 1, (bs2) + 1, ((*(bs2) / DM_BITS_PER_INT) + 1) * sizeof(int))
memcpy((bs1) + 1, (bs2) + 1, ((*(bs1) / DM_BITS_PER_INT) + 1) * sizeof(int))
/*
* Parse a string representation of a bitset into a dm_bitset_t. The
@@ -2142,8 +2091,7 @@ int dm_bit_get_prev(dm_bitset_t bs, int last_bit);
* dm_malloc(). Otherwise the bitset will be allocated using the supplied
* dm_pool.
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits);
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem);
/* Returns number of set bits */
static inline unsigned hweight32(uint32_t i)

View File

@@ -173,16 +173,14 @@ static void _log_to_default_log(int level,
const char *file, int line, int dm_errno_or_class,
const char *f, ...)
{
int n;
va_list ap;
char buf[2 * PATH_MAX + 256]; /* big enough for most messages */
va_start(ap, f);
n = vsnprintf(buf, sizeof(buf), f, ap);
vsnprintf(buf, sizeof(buf), f, ap);
va_end(ap);
if (n > 0) /* Could be truncated */
dm_log(level, file, line, "%s", buf);
dm_log(level, file, line, "%s", buf);
}
/*
@@ -197,16 +195,14 @@ __attribute__((format(printf, 4, 5)))
static void _log_to_default_log_with_errno(int level,
const char *file, int line, const char *f, ...)
{
int n;
va_list ap;
char buf[2 * PATH_MAX + 256]; /* big enough for most messages */
va_start(ap, f);
n = vsnprintf(buf, sizeof(buf), f, ap);
vsnprintf(buf, sizeof(buf), f, ap);
va_end(ap);
if (n > 0) /* Could be truncated */
dm_log_with_errno(level, file, line, 0, "%s", buf);
dm_log_with_errno(level, file, line, 0, "%s", buf);
}
void dm_log_init(dm_log_fn fn)

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2005-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
@@ -47,19 +47,13 @@ enum {
SEG_RAID1,
SEG_RAID10,
SEG_RAID4,
SEG_RAID5_N,
SEG_RAID5_LA,
SEG_RAID5_RA,
SEG_RAID5_LS,
SEG_RAID5_RS,
SEG_RAID6_N_6,
SEG_RAID6_ZR,
SEG_RAID6_NR,
SEG_RAID6_NC,
SEG_RAID6_LS_6,
SEG_RAID6_RS_6,
SEG_RAID6_LA_6,
SEG_RAID6_RA_6,
};
/* FIXME Add crypt and multipath support */
@@ -87,20 +81,13 @@ static const struct {
{ SEG_RAID1, "raid1"},
{ SEG_RAID10, "raid10"},
{ SEG_RAID4, "raid4"},
{ SEG_RAID5_N, "raid5_n"},
{ SEG_RAID5_LA, "raid5_la"},
{ SEG_RAID5_RA, "raid5_ra"},
{ SEG_RAID5_LS, "raid5_ls"},
{ SEG_RAID5_RS, "raid5_rs"},
{ SEG_RAID6_N_6,"raid6_n_6"},
{ SEG_RAID6_ZR, "raid6_zr"},
{ SEG_RAID6_NR, "raid6_nr"},
{ SEG_RAID6_NC, "raid6_nc"},
{ SEG_RAID6_LS_6, "raid6_ls_6"},
{ SEG_RAID6_RS_6, "raid6_rs_6"},
{ SEG_RAID6_LA_6, "raid6_la_6"},
{ SEG_RAID6_RA_6, "raid6_ra_6"},
/*
* WARNING: Since 'raid' target overloads this 1:1 mapping table
@@ -2153,19 +2140,13 @@ static int _emit_areas_line(struct dm_task *dmt __attribute__((unused)),
case SEG_RAID1:
case SEG_RAID10:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
if (!area->dev_node) {
EMIT_PARAMS(*pos, " -");
break;
@@ -2607,19 +2588,13 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
case SEG_RAID1:
case SEG_RAID10:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
target_type_is_raid = 1;
r = _raid_emit_segment_line(dmt, major, minor, seg, seg_start,
params, paramsize);
@@ -2803,10 +2778,6 @@ static int _dm_tree_revert_activated(struct dm_tree_node *parent)
dm_list_iterate_items_gen(child, &parent->activated, activated_list) {
log_debug_activation("Reverting %s.", child->name);
if (child->callback) {
log_debug_activation("Dropping callback for %s.", child->name);
child->callback = NULL;
}
if (!_deactivate_node(child->name, child->info.major, child->info.minor,
&child->dtree->cookie, child->udev_flags, 0)) {
log_error("Unable to deactivate %s (%" PRIu32
@@ -2874,8 +2845,8 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
else if (child->props.size_changed < 0)
dnode->props.size_changed = -1;
/* No resume for a device without parents or with unchanged or smaller size */
if (!dm_tree_node_num_children(child, 1) || (child->props.size_changed <= 0))
/* Resume device immediately if it has parents and its size changed */
if (!dm_tree_node_num_children(child, 1) || !child->props.size_changed)
continue;
if (!node_created && (dm_list_size(&child->props.segs) == 1)) {
@@ -3894,19 +3865,13 @@ int dm_tree_node_add_null_area(struct dm_tree_node *node, uint64_t offset)
case SEG_RAID0_META:
case SEG_RAID1:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
break;
default:
log_error("dm_tree_node_add_null_area() called on an unsupported segment type");

View File

@@ -95,8 +95,8 @@ struct report_group_item {
uint32_t finished_count;
} store;
struct report_group_item *parent;
unsigned output_done:1;
unsigned needs_closing:1;
int output_done:1;
int needs_closing:1;
void *data;
};
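A side note on the flag declarations just above: with a plain int bitfield, the single bit is (on typical ABIs) a sign bit, so the field holds 0 or -1 rather than 0 or 1, and comparisons like "flag == 1" can fail; unsigned is the safer spelling. A tiny demonstration:

        #include <stdio.h>

        struct s_flags { int done:1; };         /* signedness implementation-defined */
        struct u_flags { unsigned done:1; };

        int main(void)
        {
                struct s_flags s = { .done = 1 };
                struct u_flags u = { .done = 1 };

                /* Typically prints "int:1 = -1, unsigned:1 = 1". */
                printf("int:1 = %d, unsigned:1 = %u\n", s.done, (unsigned) u.done);
                return 0;
        }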
@@ -3062,31 +3062,26 @@ static void _get_final_time(time_range_t range, struct tm *tm,
tm_up.tm_sec += 1;
break;
}
/* fall through */
case RANGE_MINUTE:
if (tm_up.tm_min < 59) {
tm_up.tm_min += 1;
break;
}
/* fall through */
case RANGE_HOUR:
if (tm_up.tm_hour < 23) {
tm_up.tm_hour += 1;
break;
}
/* fall through */
case RANGE_DAY:
if (tm_up.tm_mday < _get_days_in_month(tm_up.tm_mon, tm_up.tm_year)) {
tm_up.tm_mday += 1;
break;
}
/* fall through */
case RANGE_MONTH:
if (tm_up.tm_mon < 11) {
tm_up.tm_mon += 1;
break;
}
/* fall through */
case RANGE_YEAR:
tm_up.tm_year += 1;
break;
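The deliberate fall-throughs above implement a carry chain: to turn a partially specified time such as "2016-11" into an exclusive upper bound, the least significant given field is incremented, and overflow propagates into the next coarser field (second -> minute -> hour -> day -> month -> year). The production code carries explicitly because it tracks which fields were specified and month lengths itself; a standalone sketch of the same idea can lean on mktime(3) normalization instead:

        #include <stdio.h>
        #include <time.h>

        /* Exclusive upper bound of the month containing tm, via mktime() carry. */
        static time_t month_upper_bound(struct tm tm)
        {
                tm.tm_mon += 1;         /* may become 12; mktime() rolls the year */
                tm.tm_mday = 1;
                tm.tm_hour = tm.tm_min = tm.tm_sec = 0;
                tm.tm_isdst = -1;       /* let mktime() determine DST */
                return mktime(&tm);
        }

        int main(void)
        {
                struct tm tm = { .tm_year = 116, .tm_mon = 11, .tm_mday = 15 };
                time_t t = month_upper_bound(tm);       /* 2016-12-15 -> 2017-01-01 */

                printf("%s", ctime(&t));
                return 0;
        }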
@@ -4209,7 +4204,7 @@ static void _recalculate_fields(struct dm_report *rh)
{
struct row *row;
struct dm_report_field *field;
int len;
size_t len;
dm_list_iterate_items(row, &rh->rows) {
dm_list_iterate_items(field, &row->fields) {

File diff suppressed because it is too large

View File

@@ -468,12 +468,10 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
unsigned base = BASE_UNKNOWN;
unsigned s;
int precision;
double d;
uint64_t byte = UINT64_C(0);
uint64_t units = UINT64_C(1024);
char *size_buf = NULL;
char new_unit_type = '\0', unit_type_buf[2];
const char *prefix = "";
const char * const size_str[][3] = {
/* BASE_UNKNOWN */
{" ", " ", " "}, /* [0] */
@@ -572,7 +570,7 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
byte = unit_factor;
} else {
/* Human-readable style */
if (unit_type == 'H' || unit_type == 'R') {
if (unit_type == 'H') {
units = UINT64_C(1000);
base = BASE_1000;
} else {
@@ -588,16 +586,6 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
for (s = 0; s < NUM_UNIT_PREFIXES && size < byte; s++)
byte /= units;
if ((s < NUM_UNIT_PREFIXES) &&
((unit_type == 'R') || (unit_type == 'r'))) {
/* When the rounding would cause a difference, add the '<' prefix,
* i.e. 2043M is more than 1.9949G, so it prints <2.00G.
* This version uses 2-digit fixed precision. */
d = 100. * (double) size / byte;
if (!_close_enough(floorl(d), nearbyintl(d)))
prefix = "<";
}
include_suffix = 1;
}
@@ -611,7 +599,7 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
precision = 2;
}
snprintf(size_buf, SIZE_BUF - 1, "%s%.*f%s", prefix, precision,
snprintf(size_buf, SIZE_BUF - 1, "%.*f%s", precision,
(double) size / byte, include_suffix ? size_str[base + s][suffix_type] : "");
return size_buf;
@@ -626,7 +614,7 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
uint64_t multiplier;
if (endptr)
*endptr = units;
*endptr = (char *) units;
if (isdigit(*units)) {
custom_value = strtod(units, &ptr);
@@ -651,8 +639,6 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
switch (*units) {
case 'h':
case 'H':
case 'r':
case 'R':
multiplier = v = UINT64_C(1);
*unit_type = *units;
break;
@@ -709,7 +695,7 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
}
if (endptr)
*endptr = units + 1;
*endptr = (char *) units + 1;
if (_close_enough(custom_value, 0.))
return v * multiplier; /* Use integer arithmetic */

View File

@@ -19,7 +19,6 @@
#ifdef DEBUG
#include <ctype.h>
__attribute__ ((__unused__))
static void _regex_print(struct rx_node *rx, int depth, unsigned show_nodes)
{
int i, numchars;

View File

@@ -40,11 +40,6 @@ SED = @SED@
CFLOW_CMD = @CFLOW_CMD@
AWK = @AWK@
CHMOD = @CHMOD@
EGREP = @EGREP@
GREP = @GREP@
SORT = @SORT@
WC = @WC@
PYTHON2 = @PYTHON2@
PYTHON3 = @PYTHON3@
PYCOMPILE = $(top_srcdir)/autoconf/py-compile
@@ -517,9 +512,9 @@ ifeq (,$(firstword $(EXPORTED_SYMBOLS)))
) > $@
else
set -e;\
R=$$($(SORT) $^ | uniq -u);\
R=$$(sort $^ | uniq -u);\
test -z "$$R" || { echo "Mismatch between symbols in shared library and lists in .exported_symbols.* files: $$R"; false; } ;\
( for i in $$(echo $(EXPORTED_SYMBOLS) | tr ' ' '\n' | $(SORT) -rnt_ -k5 ); do\
( for i in $$(echo $(EXPORTED_SYMBOLS) | tr ' ' '\n' | sort -rnt_ -k5 ); do\
echo "$${i##*.} {"; echo " global:";\
$(SED) "s/^/ /;s/$$/;/" $$i;\
echo "};";\

View File

@@ -31,20 +31,18 @@ LVMRAIDMAN = lvmraid.7
MAN5=lvm.conf.5
MAN7=lvmsystemid.7 lvmreport.7
MAN8=lvm.8 lvmconf.8 lvmdump.8
MAN8DM=dmsetup.8 dmstats.8
MAN8CLUSTER=
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
MAN8GEN=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
lvcreate.8 lvchange.8 lvmconfig.8 lvconvert.8 lvdisplay.8 lvextend.8 \
MAN8=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
lvchange.8 lvmconfig.8 lvconvert.8 lvcreate.8 lvdisplay.8 lvextend.8 \
lvm.8 lvmchange.8 lvmconf.8 lvmdiskscan.8 lvmdump.8 lvmsadc.8 lvmsar.8 \
lvreduce.8 lvremove.8 lvrename.8 lvresize.8 lvs.8 \
lvscan.8 pvchange.8 pvck.8 pvcreate.8 pvdisplay.8 pvmove.8 pvremove.8 \
pvresize.8 pvs.8 pvscan.8 vgcfgbackup.8 vgcfgrestore.8 vgchange.8 \
vgck.8 vgcreate.8 vgconvert.8 vgdisplay.8 vgexport.8 vgextend.8 \
vgimport.8 vgimportclone.8 vgmerge.8 vgmknodes.8 vgreduce.8 vgremove.8 \
vgrename.8 vgs.8 vgscan.8 vgsplit.8 \
lvmsar.8 lvmsadc.8 lvmdiskscan.8 lvmchange.8
vgrename.8 vgs.8 vgscan.8 vgsplit.8
MAN8DM=dmsetup.8 dmstats.8
MAN8CLUSTER=
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
ifeq ($(MAKECMDGOALS),all_man)
MAN_ALL="yes"
@@ -115,8 +113,8 @@ MAN8DIR=$(mandir)/man8
include $(top_builddir)/make.tmpl
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM) *.gen man-generator
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM)
DISTCLEAN_TARGETS+=$(FSADMMAN) $(BLKDEACTIVATEMAN) $(DMEVENTDMAN) \
$(LVMETADMAN) $(LVMPOLLDMAN) $(LVMLOCKDMAN) $(CLVMDMAN) $(CMIRRORDMAN) \
$(LVMCACHEMAN) $(LVMTHINMAN) $(LVMDBUSDMAN) $(LVMRAIDMAN)
@@ -127,11 +125,11 @@ all: man device-mapper
device-mapper: $(MAN8DM)
man: $(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8CLUSTER) $(MAN8SYSTEMD_GENERATORS)
man: $(MAN5) $(MAN7) $(MAN8) $(MAN8CLUSTER) $(MAN8SYSTEMD_GENERATORS)
all_man: man
$(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8DM) $(MAN8CLUSTER): Makefile
$(MAN5) $(MAN7) $(MAN8) $(MAN8DM) $(MAN8CLUSTER): Makefile
Makefile: Makefile.in
@:
@@ -142,17 +140,12 @@ Makefile: Makefile.in
*) echo "Creating $@" ; $(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $< > $@ ;; \
esac
man-generator:
$(CC) -DMAN_PAGE_GENERATOR -I$(top_builddir)/tools $(CFLAGS) $(top_srcdir)/tools/command.c -o $@
- ./man-generator lvmconfig > test.gen
if [ ! -s test.gen ] ; then cp genfiles/*.gen $(top_builddir)/man; fi;
ccmd: ../tools/create-commands.c
$(CC) ../tools/create-commands.c -o ccmd
$(MAN8GEN): man-generator
echo "Generating $@" ;
if [ ! -e $@.gen ]; then ./man-generator $(basename $@) $(top_srcdir)/man/$@.des > $@.gen; fi
if [ -f $(top_srcdir)/man/$@.end ]; then cat $(top_srcdir)/man/$@.end >> $@.gen; fi;
cat $(top_srcdir)/man/see_also.end >> $@.gen
$(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $@.gen > $@
generate: ccmd
./ccmd --output man -s 0 -p 1 -c lvcreate ../tools/command-lines.in > lvcreate.8.a
cat lvcreate.8.a lvcreate.8.b > lvcreate.8.in
install_man5: $(MAN5)
$(INSTALL) -d $(MAN5DIR)
@@ -162,10 +155,9 @@ install_man7: $(MAN7)
$(INSTALL) -d $(MAN7DIR)
$(INSTALL_DATA) $(MAN7) $(MAN7DIR)/
install_man8: $(MAN8) $(MAN8GEN)
install_man8: $(MAN8)
$(INSTALL) -d $(MAN8DIR)
$(INSTALL_DATA) $(MAN8) $(MAN8DIR)/
$(INSTALL_DATA) $(MAN8GEN) $(MAN8DIR)/
install_lvm2: install_man5 install_man7 install_man8

View File

This timeout will be ignored if you start \fBclvmd\fP with the \fB\-d\fP option.
.br
Display the version of the cluster LVM daemon.
.
.SH NOTES
.
.SS Activation
.
In a clustered VG, clvmd is used for activation, and the following values are
possible with \fBlvchange/vgchange -a\fP:
.IP \fBy\fP|\fBsy\fP
clvmd activates the LV in shared mode (with a shared lock),
allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
an exclusive lock is automatically used instead.
clvmd attempts to activate the LV concurrently on all nodes.
.IP \fBey\fP
clvmd activates the LV in exclusive mode (with an exclusive lock),
allowing a single node to activate the LV.
clvmd attempts to activate the LV concurrently on all nodes, but only
one will succeed.
.IP \fBly\fP
clvmd attempts to activate the LV only on the local node.
If the LV type allows concurrent access, then shared mode is used,
otherwise exclusive.
.IP \fBn\fP
clvmd deactivates the LV on all nodes.
.IP \fBln\fP
clvmd deactivates the LV on the local node.
.
.SH ENVIRONMENT VARIABLES
.TP
.B LVM_CLVMD_BINARY

View File

@@ -23,6 +23,40 @@ dmeventd is the event monitoring daemon for device-mapper devices.
Library plugins can register and carry out actions triggered when
particular events occur.
.
.SH LVM PLUGINS
.
.HP
.IR Mirror
.br
Attempts to handle device failure automatically. See
.BR lvm.conf (5).
.
.HP
.IR Raid
.br
Attempts to handle device failure automatically. See
.BR lvm.conf (5).
.
.HP
.IR Snapshot
.br
Monitors how full a snapshot is becoming and emits a warning to
syslog when it exceeds 80% full.
The warning is repeated when 85%, 90% and 95% of the snapshot is filled.
See
.BR lvm.conf (5).
A snapshot that runs out of space becomes invalid, and if it is mounted,
it is unmounted if possible.
.
.HP
.IR Thin
.br
Monitors how full the thin pool data and metadata volumes are becoming
and emits a warning to syslog when either exceeds 80% full.
The warning is repeated when 85%, 90% and 95% of the thin pool is filled.
See
.BR lvm.conf (5).
If the thin pool runs out of space, thin volumes are unmounted if possible.
.
.SH OPTIONS
.
@@ -70,80 +104,6 @@ events to monitor from the currently running daemon.
.br
Show version of dmeventd.
.
.SH LVM PLUGINS
.
.HP
.BR Mirror
.br
Attempts to handle device failure automatically. See
.BR lvm.conf (5).
.
.HP
.BR Raid
.br
Attempts to handle device failure automatically. See
.BR lvm.conf (5).
.
.HP
.BR Snapshot
.br
Monitors how full a snapshot is becoming and emits a warning to
syslog when it exceeds 80% full.
The warning is repeated when 85%, 90% and 95% of the snapshot is filled.
See
.BR lvm.conf (5).
A snapshot that runs out of space becomes invalid, and if it is mounted,
it is unmounted if possible.
.
.HP
.BR Thin
.br
Monitors how full the thin pool data and metadata volumes are becoming
and emits a warning to syslog when either exceeds 80% full.
The warning is repeated when more than 85%, 90% and 95%
of the thin pool is filled. See
.BR lvm.conf (5).
When a thin pool fills over 50% (data or metadata), the thin plugin calls
the configured \fIdmeventd/thin_command\fP at every 5% increase.
With the default setting it calls the internal
\fBlvm lvextend --use-policies\fP to resize the thin pool
once it has been filled above the configured threshold
\fIactivation/thin_pool_autoextend_threshold\fP.
If the command fails, the dmeventd thin plugin will keep
retrying execution with an increasing delay between
retries, up to 42 minutes.
The user may also configure an external command to support more advanced
maintenance operations of a thin pool.
Such an external command can, for example, remove some unneeded snapshots
or use \fBfstrim\fP(8) to recover free space in a thin pool,
but it can also use \fBlvextend --use-policies\fP if other actions
have not released enough space.
The command is executed with the environment variable
\fBLVM_RUN_BY_DMEVENTD=1\fP so any lvm2 command executed
in this environment will not try to interact with dmeventd.
To see how full the thin pool is, the command may check the
two environment variables
\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_METADATA\fP.
The command can also read status with tools like \fBlvs\fP(8).
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B DMEVENTD_THIN_POOL_DATA
The variable is set by the thin plugin and is available to the executed
program. The value presents the actual usage of the thin pool data volume.
The variable is not set when an error event is processed.
.TP
.B DMEVENTD_THIN_POOL_METADATA
The variable is set by the thin plugin and is available to the executed
program. The value presents the actual usage of the thin pool metadata
volume. The variable is not set when an error event is processed.
.TP
.B LVM_RUN_BY_DMEVENTD
The variable is set by the thin plugin to prohibit recursive interaction
with dmeventd by any lvm2 command executed from
the thin_command environment.
.
.SH SEE ALSO
.
.BR lvm (8),

View File

@@ -27,9 +27,9 @@ dmsetup \(em low level logical volume management
. IR uuid ]
. RB \%[ \-\-addnodeoncreate | \-\-addnodeonresume ]
. RB \%[ \-n | \-\-notable | \-\-table
. IR \%table | table_file ]
. RI \%{ table | table_file }]
. RB [ \-\-readahead
. RB \%[ + ] \fIsectors | auto | none ]
. RB \%{[ + ] \fIsectors | auto | none }]
. ad b
..
.CMD_CREATE
@@ -41,7 +41,7 @@ dmsetup \(em low level logical volume management
. BR deps
. RB [ \-o
. IR options ]
. RI [ device_name ...]
. RI [ device_name ]
. ad b
..
.CMD_DEPS
@@ -58,7 +58,7 @@ dmsetup \(em low level logical volume management
.B dmsetup
.de CMD_INFO
. BR info
. RI [ device_name ...]
. RI [ device_name ]
..
.CMD_INFO
.
@@ -92,7 +92,7 @@ dmsetup \(em low level logical volume management
. BR load
. IR device_name
. RB [ \-\-table
. IR table | table_file ]
. RI { table | table_file }]
. ad b
..
.CMD_LOAD
@@ -117,7 +117,7 @@ dmsetup \(em low level logical volume management
.B dmsetup
.de CMD_MANGLE
. BR mangle
. RI [ device_name ...]
. RI [ device_name ]
..
.CMD_MANGLE
.
@@ -135,7 +135,7 @@ dmsetup \(em low level logical volume management
.B dmsetup
.de CMD_MKNODES
. BR mknodes
. RI [ device_name ...]
. RI [ device_name ]
..
.CMD_MKNODES
.
@@ -146,7 +146,7 @@ dmsetup \(em low level logical volume management
. BR reload
. IR device_name
. RB [ \-\-table
. IR table | table_file ]
. RI { table | table_file }]
. ad b
..
.CMD_RELOAD
@@ -159,7 +159,7 @@ dmsetup \(em low level logical volume management
. RB [ \-f | \-\-force ]
. RB [ \-\-retry ]
. RB [ \-\-deferred ]
. IR device_name ...
. IR device_name
. ad b
..
.CMD_REMOVE
@@ -197,12 +197,12 @@ dmsetup \(em low level logical volume management
.de CMD_RESUME
. ad l
. BR resume
. IR device_name ...
. IR device_name
. RB [ \-\-addnodeoncreate | \-\-addnodeonresume ]
. RB [ \-\-noflush ]
. RB [ \-\-nolockfs ]
. RB \%[ \-\-readahead
. RB \%[ + ] \fIsectors | auto | none ]
. RB \%{[ + ] \fIsectors | auto | none }]
. ad b
..
.CMD_RESUME
@@ -247,7 +247,7 @@ dmsetup \(em low level logical volume management
. RB [ \-\-target
. IR target_type ]
. RB [ \-\-noflush ]
. RI [ device_name ...]
. RI [ device_name ]
. ad b
..
.CMD_STATUS
@@ -259,7 +259,7 @@ dmsetup \(em low level logical volume management
. BR suspend
. RB [ \-\-nolockfs ]
. RB [ \-\-noflush ]
. IR device_name ...
. IR device_name
. ad b
..
.CMD_SUSPEND
@@ -272,7 +272,7 @@ dmsetup \(em low level logical volume management
. RB [ \-\-target
. IR target_type ]
. RB [ \-\-showkeys ]
. RI [ device_name ...]
. RI [ device_name ]
. ad b
..
.CMD_TABLE
@@ -354,7 +354,7 @@ dmsetup \(em low level logical volume management
.de CMD_WIPE_TABLE
. ad l
. BR wipe_table
. IR device_name ...
. IR device_name
. RB [ \-f | \-\-force ]
. RB [ \-\-noflush ]
. RB [ \-\-nolockfs ]
@@ -447,7 +447,7 @@ The default interval is one second.
.
.HP
.BR \-\-manglename
.BR auto | hex | none
.RB { auto | hex | none }
.br
Mangle any character not on a whitelist using mangling_mode when
processing device-mapper device names and UUIDs. The names and UUIDs
@@ -529,7 +529,7 @@ Specify which fields to display.
.
.HP
.BR \-\-readahead
.RB [ + ] \fIsectors | auto | none
.RB {[ + ] \fIsectors | auto | none }
.br
Specify read ahead size in units of sectors.
The default value is \fBauto\fP which allows the kernel to choose
@@ -820,10 +820,8 @@ Outputs the current table for the device in a format that can be fed
back in using the create or load commands.
With \fB\-\-target\fP, only information relating to the specified target type
is displayed.
Real encryption keys are suppressed in the table output for the crypt
target unless the \fB\-\-showkeys\fP parameter is supplied. Kernel key
references prefixed with \fB:\fP are not affected by the parameter and are
always displayed.
Encryption keys are suppressed in the table output for the crypt
target unless the \fB\-\-showkeys\fP parameter is supplied.
.
.HP
.CMD_TARGETS

View File

@@ -44,7 +44,7 @@ dmstats \(em device-mapper statistics management
.B dmsetup
.B stats
.I command
[OPTIONS]
.RB [ options ]
.sp
.
.PD 0
@@ -53,14 +53,12 @@ dmstats \(em device-mapper statistics management
.de CMD_COMMAND
. ad l
. IR command
. IR device_name " |"
. BR \-\-major
. RI [ device_name
. RB | \-\-uuid
. IR uuid | \fB\-\-major
. IR major
. BR \-\-minor
. IR minor " |"
. BR \-u | \-\-uuid
. IR uuid
. RB \%[ \-v | \-\-verbose]
. IR minor ]
. ad b
..
.CMD_COMMAND
@@ -82,7 +80,9 @@ dmstats \(em device-mapper statistics management
.de CMD_CREATE
. ad l
. BR create
. IR device_name... | file_path... | \fB\-\-alldevices
. RB [ device_name...
. RB | file_path... ]
. RB [ \-\-alldevices ]
. RB [ \-\-areas
. IR nr_areas | \fB\-\-areasize
. IR area_size ]
@@ -108,7 +108,8 @@ dmstats \(em device-mapper statistics management
.de CMD_DELETE
. ad l
. BR delete
. IR device_name | \fB\-\-alldevices
. RI [ device_name ]
. RB [ \-\-alldevices ]
. OPT_PROGRAMS
. OPT_REGIONS
. ad b
@@ -120,9 +121,10 @@ dmstats \(em device-mapper statistics management
.de CMD_GROUP
. ad l
. BR group
. RI [ device_name | \fB\-\-alldevices ]
. RI [ device_name ]
. RB [ \-\-alias
. IR name ]
. RB [ \-\-alldevices ]
. RB [ \-\-regions
. IR regions ]
. ad b
@@ -151,7 +153,7 @@ dmstats \(em device-mapper statistics management
. OPT_OBJECTS
. RB \%[ \-\-nosuffix ]
. RB [ \-\-notimesuffix ]
. RB \%[ \-v | \-\-verbose]
. RB \%[ \-v | \-\-verbose [ \-v | \-\-verbose ]]
. ad b
..
.CMD_LIST
@@ -201,23 +203,13 @@ dmstats \(em device-mapper statistics management
.de CMD_UNGROUP
. ad l
. BR ungroup
. RI [ device_name | \fB\-\-alldevices ]
. RI [ device_name ]
. RB [ \-\-alldevices ]
. RB [ \-\-groupid
. IR id ]
. ad b
..
.CMD_UNGROUP
.HP
.B dmstats
.de CMD_UPDATE_FILEMAP
. ad l
. BR update_filemap
. IR file_path
. RB [ \-\-groupid
. IR id ]
. ad b
..
.CMD_UPDATE_FILEMAP
.
.PD
.ad b
@@ -674,21 +666,6 @@ Remove an existing group and return all the group's regions to their
original state.
The group to be removed is specified using \fB\-\-groupid\fP.
.HP
.CMD_UPDATE_FILEMAP
.br
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP
that were previously created with \fB\-\-filemap\fP. This will add
and remove regions to reflect changes in the allocated extents of
the file on disk since the time that it was created or last updated.
Use of this command is not normally needed since the \fBdmfilemapd\fP
daemon will automatically monitor filemap groups and perform these
updates when required.
If a filemapped group was created with \fB\-\-nomonitor\fP, or the
daemon has been killed, the \fBupdate_filemap\fP command can be used to
manually force an update.
.
.SH REGIONS, AREAS, AND GROUPS
.
@@ -899,7 +876,7 @@ Count of writes merged this interval.
.B write_sector_count
Count of 512 byte sectors written this interval.
.TP
.B write_time
.B write_nsecs
Accumulated duration of all write requests (ns).
.TP
.B in_progress_count

View File

@@ -1,2 +0,0 @@
lvchange changes LV attributes in the VG, changes LV activation in the
kernel, and includes other utilities for LV maintenance.

View File

@@ -1,6 +0,0 @@
.SH EXAMPLES
Change LV permission to read-only:
.sp
.B lvchange \-pr vg00/lvol1

man/lvchange.8.in (new file, 491 lines)
View File

@@ -0,0 +1,491 @@
.TH LVCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.de UNITS
..
.
.SH NAME
.
lvchange \(em change attributes of a logical volume
.
.SH SYNOPSIS
.
.ad l
.B lvchange
.RB [ \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }]
.RB [ \-\-activationmode
.RB { complete | degraded | partial }]
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-k | \-\-setactivationskip
.RB { y | n }]
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-A | \-\-autobackup
.RB { y | n }]
.RB [ \-\-rebuild
.IR PhysicalVolume ]
.RB [ \-\-cachemode
.RB { passthrough | writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR Policy ]
.RB [ \-\-cachesettings
.IR Key \fB= Value ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-C | \-\-contiguous
.RB { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-deltag
.IR Tag ]
.RB [ \-\-detachprofile ]
.RB [ \-\-discards
.RB { ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RB { y | n }]
.RB [ \-h | \-? | \-\-help ]
.RB \%[ \-\-ignorelockingfailure ]
.RB \%[ \-\-ignoremonitoring ]
.RB \%[ \-\-ignoreskippedcluster ]
.RB \%[ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-\-monitor
.RB { y | n }]
.RB [ \-\-noudevsync ]
.RB [ \-P | \-\-partial ]
.RB [ \-p | \-\-permission
.RB { r | rw }]
.RB [ \-M | \-\-persistent
.RB { y | n }
.RB [ \-\-major
.IR Major ]
.RB [ \-\-minor
.IR Minor ]]
.RB [ \-\-poll
.RB { y | n }]
.RB [ \-\- [ raid ] maxrecoveryrate
.IR Rate ]
.RB [ \-\- [ raid ] minrecoveryrate
.IR Rate ]
.RB [ \-\- [ raid ] syncaction
.RB { check | repair }]
.RB [ \-\- [ raid ] writebehind
.IR IOCount ]
.RB [ \-\- [ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]]
.RB [ \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }]
.RB [ \-\-refresh ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-\-resync ]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-\-sysinit ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-Z | \-\-zero
.RB { y | n }]
.RI [ LogicalVolumePath ...]
.ad b
.
.SH DESCRIPTION
.
lvchange allows you to change the attributes of logical volumes,
including making them known to the kernel and ready for use.
.
.SH OPTIONS
.
See \fBlvm\fP(8) for common options.
.
.HP
.BR \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }
.br
Controls the availability of the logical volumes for use.
Communicates with the kernel device-mapper driver via
libdevmapper to activate (\fB\-ay\fP) or deactivate (\fB\-an\fP) the
logical volumes.
.br
Activation of a logical volume creates a symbolic link
\fI/dev/VolumeGroupName/LogicalVolumeName\fP pointing to the device node.
This link is removed on deactivation.
All software and scripts should access the device through
this symbolic link and present this as the name of the device.
The location and name of the underlying device node may depend on
the distribution and configuration (e.g. udev) and might change
from release to release.
.br
If the autoactivation option is used (\fB\-aay\fP),
the logical volume is activated only if it matches an item in
the \fBactivation/auto_activation_volume_list\fP
set in \fBlvm.conf\fP(5).
If this list is not set, then all volumes are considered for
activation. The \fB\-aay\fP option should also be used during system
boot so that it is possible to select which volumes to activate using
the \fBactivation/auto_activation_volume_list\fP setting.
.br
In a clustered VG, clvmd is used for activation, and the
following options are possible:
With \fB\-aey\fP, clvmd activates the LV in exclusive mode
(with an exclusive lock), allowing a single node to activate the LV.
With \fB\-asy\fP, clvmd activates the LV in shared mode
(with a shared lock), allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
the '\fBs\fP' option is ignored and an exclusive lock is used.
With \fB\-ay\fP (no mode specified), clvmd activates the LV in shared mode
if the LV type allows concurrent access, such as a linear LV.
Otherwise, clvmd activates the LV in exclusive mode.
With \fB\-aey\fP, \fB\-asy\fP, and \fB\-ay\fP, clvmd attempts to activate the LV
on all nodes. If exclusive mode is used, then only one of the
nodes will be successful.
With \fB\-an\fP, clvmd attempts to deactivate the LV on all nodes.
With \fB\-aly\fP, clvmd activates the LV only on the local node, and \fB\-aln\fP
deactivates only on the local node. If the LV type allows concurrent
access, then shared mode is used, otherwise exclusive.
LVs with snapshots are always activated exclusively because they can only
be used on one node at once.
For local VGs \fB\-ay\fP, \fB\-aey\fP, and \fB\-asy\fP are all equivalent.
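.sp
For example (the LV name \fIvg00/lvol1\fP is illustrative), the following
commands activate an LV, activate it exclusively in a clustered VG, and
deactivate it:
.sp
.B lvchange \-ay vg00/lvol1
.br
.B lvchange \-aey vg00/lvol1
.br
.B lvchange \-an vg00/lvol1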
.
.HP
.BR \-\-activationmode
.RB { complete | degraded | partial }
.br
The activation mode determines whether logical volumes are allowed to
activate when there are physical volumes missing (e.g. due to a device
failure). \fBcomplete\fP is the most restrictive; allowing only those
logical volumes to be activated that are not affected by the missing
PVs. \fBdegraded\fP allows RAID logical volumes to be activated even if
they have PVs missing. (Note that the "\fImirror\fP" segment type is not
considered a RAID logical volume. The "\fIraid1\fP" segment type should
be used instead.) Finally, \fBpartial\fP allows any logical volume to
be activated even if portions are missing due to a missing or failed
PV. This last option should only be used when performing recovery or
repair operations. \fBdegraded\fP is the default mode. To change it,
modify \fBactivation_mode\fP in \fBlvm.conf\fP(5).
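.sp
For example, to activate an illustrative RAID LV even though some of its
PVs are missing:
.sp
.B lvchange \-ay \-\-activationmode degraded vg00/raidlv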
.
.HP
.BR \-K | \-\-ignoreactivationskip
.br
Ignore the flag to skip Logical Volumes during activation.
.
.HP
.BR \-k | \-\-setactivationskip
.RB { y | n }
.br
Controls whether Logical Volumes are persistently flagged to be
skipped during activation. By default, thin snapshot volumes are
flagged for activation skip. To activate such volumes,
an extra \fB\-\-ignoreactivationskip\fP option must be used.
The flag is not applied during deactivation. To see whether
the flag is attached, use \fBlvs\fP(8) command where the state
of the flag is reported within \fBlv_attr\fP bits.
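.sp
For example, an LV flagged for activation skip, such as a thin snapshot,
can be activated by adding \fB\-K\fP (the LV name \fIvg00/snap1\fP is
illustrative):
.sp
.B lvchange \-ay \-K vg00/snap1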
.
.HP
.BR \-\-cachemode
.RB { passthrough | writeback | writethrough }
.br
Specifying a cache mode determines when the writes to a cache LV
are considered complete. When \fBwriteback\fP is specified, a write is
considered complete as soon as it is stored in the cache pool LV.
If \fBwritethrough\fP is specified, a write is considered complete only
when it has been stored in the cache pool LV and on the origin LV.
While \fBwritethrough\fP may be slower for writes, it is more
resilient if something should happen to a device associated with the
cache pool LV. With \fBpassthrough\fP mode, all reads are served
from the origin LV (all reads miss the cache) and all writes are
forwarded to the origin LV; additionally, write hits cause cache
block invalidates. See \fBlvmcache\fP(7) for more details.
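.sp
For example, to switch an illustrative cache LV to writethrough mode:
.sp
.B lvchange \-\-cachemode writethrough vg00/cachelv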
.
.HP
.BR \-\-cachepolicy
.IR Policy ,
.BR \-\-cachesettings
.IR Key \fB= Value
.br
Only applicable to cached LVs; see also \fBlvmcache\fP(7). Sets
the cache policy and its associated tunable settings. In most use-cases,
default values should be adequate.
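.sp
For example, to set the \fBsmq\fP policy on an illustrative cache LV, and
to adjust one of its tunables (the setting key shown is only an example;
see \fBlvmcache\fP(7) for the available keys):
.sp
.B lvchange \-\-cachepolicy smq vg00/cachelv
.br
.B lvchange \-\-cachesettings migration_threshold=2048 vg00/cachelv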
.
.HP
.BR \-C | \-\-contiguous
.RB { y | n }
.br
Tries to set or reset the contiguous allocation policy for
logical volumes. It is only possible to change a non-contiguous
logical volume's allocation policy to contiguous if all of the
allocated physical extents are already contiguous.
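.sp
For example, to request contiguous allocation for \fIvg00/lvol1\fP:
.sp
.B lvchange \-C y vg00/lvol1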
.
.HP
.BR \-\-detachprofile
.br
Detach any metadata configuration profiles attached to given
Logical Volumes. See \fBlvm.conf\fP(5) for more information
about metadata profiles.
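.sp
For example, to detach any attached profile from \fIvg00/lvol1\fP:
.sp
.B lvchange \-\-detachprofile vg00/lvol1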
.
.HP
.BR \-\-discards
.RB { ignore | nopassdown | passdown }
.br
Set this to \fBignore\fP to ignore any discards received by a
thin pool Logical Volume. Set to \fBnopassdown\fP to process such
discards within the thin pool itself and allow the no-longer-needed
extents to be overwritten by new data. Set to \fBpassdown\fP (the
default) to process them both within the thin pool itself and to
pass them down the underlying device.
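.sp
For example, to stop passing discards down to the devices under an
illustrative thin pool \fIvg00/pool0\fP while still processing them
within the pool:
.sp
.B lvchange \-\-discards nopassdown vg00/pool0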
.
.HP
.BR \-\-errorwhenfull
.RB { y | n }
.br
Sets thin pool behavior when data space is exhausted. See
.BR lvcreate (8)
for information.
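.sp
For example, to make an illustrative thin pool return errors immediately
when its data space runs out:
.sp
.B lvchange \-\-errorwhenfull y vg00/pool0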
.
.HP
.BR \-\-ignoremonitoring
.br
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
Do not use this if dmeventd is already monitoring a device.
.
.HP
.BR \-\-major
.IR Major
.br
Sets the major number. This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.
.HP
.BR \-\-minor
.IR Minor
.br
Set the minor number.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Uses and attaches the \fIProfileName\fP configuration profile to the
logical volume metadata. The next time the logical volume is processed,
the profile is automatically applied. If the volume group also has a
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about metadata profiles.
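.sp
For example, to attach a profile (the profile name is illustrative)
to \fIvg00/lvol1\fP:
.sp
.B lvchange \-\-metadataprofile thin_performance vg00/lvol1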
.
.HP
.BR \-\-monitor
.RB { y | n }
.br
Start or stop monitoring a mirrored or snapshot logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\%\fBmirror_image_fault_policy\fP and \fBmirror_log_fault_policy\fP
set in \fBlvm.conf\fP(5).
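.sp
For example, to start dmeventd monitoring of an illustrative mirror LV:
.sp
.B lvchange \-\-monitor y vg00/mirrorlv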
.
.HP
.BR \-\-noudevsync
.br
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.
.HP
.BR \-p | \-\-permission
.RB { r | rw }
.br
Change access permission to read-only or read/write.
.
.HP
.BR \-M | \-\-persistent
.RB { y | n }
.br
Set to \fBy\fP to make the minor number specified persistent.
Change of persistent numbers is not supported for pool volumes.
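.sp
For example, to persistently assign minor number 63 (an illustrative
value) to \fIvg00/lvol1\fP:
.sp
.B lvchange \-My \-\-minor 63 vg00/lvol1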
.
.HP
.BR \-\-poll
.RB { y | n }
.br
Without polling, a logical volume's backgrounded transformation process
will never complete. If there is an incomplete pvmove or lvconvert (for
example, on rebooting after a crash), use \fB\-\-poll y\fP to restart the
process from its last checkpoint. However, it may not be appropriate to
poll a logical volume immediately when it is activated; use
\fB\-\-poll n\fP to defer and then \fB\-\-poll y\fP to restart the process.
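.sp
For example, to restart an interrupted transformation process on
\fIvg00/lvol1\fP:
.sp
.B lvchange \-\-poll y vg00/lvol1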
.
.HP
.BR \-\- [ raid ] rebuild
.BR \fIPhysicalVolume
.br
This option can be repeated multiple times.
It selects the PhysicalVolume(s) to be rebuilt in a RaidLV.
Use this option instead of
.BR \-\-resync
or
.BR \-\- [ raid ] syncaction
\fBrepair\fP when the PVs holding corrupted data are known, so that their
data is reconstructed rather than reconstructing the default (rotating) data.
.br
E.g. in a raid1 mirror, the master leg on /dev/sda may hold corrupt data due
to a known transient disk error, thus
.br
\fBlvchange --rebuild /dev/sda LV\fP
.br
will request the master leg to be rebuilt rather than rebuilding
all other legs from the master.
On a raid5 with rotating data and parity,
.br
\fBlvchange --rebuild /dev/sda LV\fP
.br
will rebuild all data and parity blocks in the stripe on /dev/sda.
.
.HP
.BR \-\- [ raid ] maxrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
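.sp
For example, to cap recovery at 128KiB/sec per device on an illustrative
RAID LV:
.sp
.B lvchange \-\-maxrecoveryrate 128K vg00/raidlv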
.
.HP
.BR \-\- [ raid ] minrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
.
.HP
.BR \-\- [ raid ] syncaction
.RB { check | repair }
.br
This argument is used to initiate various RAID synchronization operations.
The \fBcheck\fP and \fBrepair\fP options provide a way to check the
integrity of a RAID logical volume (often referred to as "scrubbing").
These options cause the RAID logical volume to
read all of the data and parity blocks in the array and check for any
discrepancies (e.g. mismatches between mirrors or incorrect parity values).
If \fBcheck\fP is used, the discrepancies will be counted but not repaired.
If \fBrepair\fP is used, the discrepancies will be corrected as they are
encountered. The \fBlvs\fP(8) command can be used to show the number of
discrepancies found or repaired.
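.sp
For example, to scrub an illustrative RAID LV and then display the
number of discrepancies found (see \fBlvs\fP(8) for report fields):
.sp
.B lvchange \-\-syncaction check vg00/raidlv
.br
.B lvs \-o +raid_mismatch_count vg00/raidlv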
.
.HP
.BR \-\- [ raid ] writebehind
.IR IOCount
.br
Specify the maximum number of outstanding writes that are allowed to
devices in a RAID1 logical volume that are marked as write-mostly.
Once this value is exceeded, writes become synchronous (i.e. all writes
to the constituent devices must complete before the array signals the
write has completed). Setting the value to zero clears the preference
and allows the system to choose the value arbitrarily.
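.sp
For example, to allow at most 512 outstanding writes to write-mostly
devices in an illustrative raid1 LV:
.sp
.B lvchange \-\-writebehind 512 vg00/raid1lv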
.
.HP
.BR \-\- [ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]
.br
Mark a device in a RAID1 logical volume as write-mostly. All reads
to these drives will be avoided unless absolutely necessary. This keeps
the number of I/Os to the drive to a minimum. The default behavior is to
set the write-mostly attribute for the specified physical volume in the
logical volume. The write-mostly flag can also be removed by
appending "\fB:n\fP" to the physical volume, or toggled by specifying
"\fB:t\fP". The \fB\-\-writemostly\fP argument can be specified more than once
in a single command, making it possible to toggle the write-mostly attributes
of all the physical volumes in a logical volume at once.
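.sp
For example, to mark one device of an illustrative raid1 LV write-mostly
and clear the flag on another in the same command (the PV names are
illustrative):
.sp
.B lvchange \-\-writemostly /dev/sdb1:y \-\-writemostly /dev/sdc1:n vg00/raid1lv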
.
.HP
.BR \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }
.br
Set the read-ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120 sectors.
The default value is "\fBauto\fP" which allows the kernel to choose
a suitable value automatically.
"\fBnone\fP" is equivalent to specifying zero.
.
.HP
.BR \-\-refresh
.br
If the logical volume is active, reload its metadata.
This is not necessary in normal operation, but may be useful
if something has gone wrong or if you're doing clustering
manually without a clustered lock manager.
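.sp
For example:
.sp
.B lvchange \-\-refresh vg00/lvol1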
.
.HP
.BR \-\-resync
.br
Forces the complete resynchronization of a mirror. In normal
circumstances you should not need this option because synchronization
happens automatically. Data is read from the primary mirror device
and copied to the others, so this can take a considerable amount of
time - and during this time you are without a complete redundant copy
of your data.
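.sp
For example, to force a full resynchronization of an illustrative
mirror LV:
.sp
.B lvchange \-\-resync vg00/mirrorlv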
.
.HP
.BR \-\-sysinit
.br
Indicates that \fBlvchange\fP(8) is being invoked from early system
initialisation scripts (e.g. rc.sysinit or an initrd),
before writeable filesystems are available. As such,
some functionality needs to be disabled and this option
acts as a shortcut which selects an appropriate set of options. Currently
this is equivalent to using \fB\-\-ignorelockingfailure\fP,
\fB\-\-ignoremonitoring\fP, \fB\-\-poll n\fP and setting the
\fBLVM_SUPPRESS_LOCKING_FAILURE_MESSAGES\fP
environment variable.
If \fB\-\-sysinit\fP is used while \fBlvmetad\fP(8) is enabled and running,
autoactivation is preferred over manual activation via a direct
lvchange call.
Logical volumes are autoactivated according to
\fBauto_activation_volume_list\fP set in \fBlvm.conf\fP(5).
.
.HP
.BR \-Z | \-\-zero
.RB { y | n }
.br
Set the zeroing mode for a thin pool. Note: blocks that were already
provisioned while the pool was in non-zero mode are not cleared in their
unwritten parts when zeroing is later set to \fBy\fP.
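.sp
For example, to disable zeroing of provisioned blocks for an illustrative
thin pool:
.sp
.B lvchange \-Z n vg00/pool0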
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES
Suppress locking failure messages.
.
.SH EXAMPLES
.
Change the permission on volume lvol1 in volume group vg00 to read-only:
.sp
.B lvchange \-pr vg00/lvol1
.
.SH SEE ALSO
.
.nh
.BR lvm (8),
.BR lvmetad (8),
.BR lvs (8),
.BR lvcreate (8),
.BR vgchange (8),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvm.conf (5)
