Mirror of git://sourceware.org/git/lvm2.git (synced 2025-10-16 23:33:16 +03:00)

Compare commits: dev-bmr-dm ... dev-dct-cm (32 commits)
Commit SHA1s in this comparison:

5ffedbcec1
417c3dece8
311d3c7e8f
c025c8b46f
2baabc823f
d48dd64e35
eee727883b
fa3a14df14
43480f222f
d7651132a7
61e4fbca11
13818a4630
003437673d
d4784802de
24b93c6e35
98f007034e
d5ac07f818
bd8965194f
1eaad64ded
f16abdce55
44058693d5
715c959983
7a6a99a488
7ef774e2c2
88db11bece
1c5014eda9
0c5cd2ae9f
96227b575d
29da4255b2
106ef06c2c
6467dea0db
2da7b10988
@@ -59,8 +59,6 @@ liblvm: lib
daemons: lib libdaemon tools
tools: lib libdaemon device-mapper
po: tools daemons
man: tools
all_man: tools
scripts: liblvm libdm

lib.device-mapper: include.device-mapper
README (5 lines changed)
@@ -6,12 +6,11 @@ Installation instructions are in INSTALL.
There is no warranty - see COPYING and COPYING.LIB.

Tarballs are available from:
ftp://sourceware.org/pub/lvm2/
ftp://sources.redhat.com/pub/lvm2/

The source code is stored in git:
https://sourceware.org/git/?p=lvm2.git
git clone git://sourceware.org/git/lvm2.git
http://git.fedorahosted.org/git/lvm2.git
git clone git://git.fedorahosted.org/git/lvm2.git

Mailing list for general discussion related to LVM2:
linux-lvm@redhat.com
WHATS_NEW (19 lines changed)
@@ -1,24 +1,5 @@
Version 2.02.169 -
=====================================
Upstream git moved to https://sourceware.org/git/?p=lvm2
Support conversion of raid type, stripesize and number of disks
Reject writemostly/writebehind in lvchange during resynchronization.
Deactivate active origin first before removal for improved workflow.
Fix regression of accepting options --type and -m with lvresize (2.02.158).
Add lvconvert --swapmetadata, new specific way to swap pool metadata LVs.
Add lvconvert --startpoll, new specific way to start polling conversions.
Add lvconvert --mergethin, new specific way to merge thin snapshots.
Add lvconvert --mergemirrors, new specific way to merge split mirrors.
Add lvconvert --mergesnapshot, new specific way to combine cow LVs.
Split up lvconvert code based on command definitions.
Split up lvchange code based on command definitions.
Generate help output and man pages from command definitions.
Verify all command line items against command definition.
Match every command run to one command definition.
Specify every allowed command definition/syntax in command-lines.in.
Add extra memory page when limiting pthread stack size in clvmd.
Support striped/raid0* <-> raid10_near conversions
Support shrinking of RaidLvs
Support region size changes on existing RaidLVs
Avoid parallel usage of cpg_mcast_joined() in clvmd with corosync.
Support raid6_{ls,rs,la,ra}_6 segment types and conversions from/to it.
@@ -1,8 +1,5 @@
Version 1.02.138 -
=====================================
Add extra memory page when limiting pthread stack size in dmeventd.
Avoids immediate resume when preloaded device is smaller.
Do not suppress kernel key description in dmsetup table output.
Support configurable command executed from dmeventd thin plugin.
Support new R|r human readable units output format.
Thin dmeventd plugin reacts faster on lvextend failure path with umount.
@@ -2054,7 +2054,6 @@ dmeventd {
	# or metadata volume gets above 50%.
	# Command which starts with 'lvm ' prefix is internal lvm command.
	# You can write your own handler to customise behaviour in more details.
	# User handler is specified with the full path starting with '/'.
	# This configuration option has an automatic default value.
	# thin_command = "lvm lvextend --use-policies"
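As a rough illustration of the two forms the comments above describe, a minimal sketch of how this option might be written in lvm.conf (the section name and default command come from the hunk above; the external handler path is hypothetical):

	dmeventd {
		# Internal form: the "lvm " prefix runs an internal lvm command.
		thin_command = "lvm lvextend --use-policies"

		# External form: an absolute path to a user-supplied handler,
		# e.g. a hypothetical /usr/local/sbin/thin-handler script.
		# thin_command = "/usr/local/sbin/thin-handler"
	}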
configure (vendored, 240 lines changed)
@@ -702,7 +702,6 @@ BLKDEACTIVATE
FSADM
ELDFLAGS
DM_LIB_PATCHLEVEL
DMFILEMAPD
DMEVENTD_PATH
DMEVENTD
DL_LIBS
@@ -738,7 +737,6 @@ CLDNOWHOLEARCHIVE
CLDFLAGS
CACHE
BUILD_NOTIFYDBUS
BUILD_DMFILEMAPD
BUILD_LOCKDDLM
BUILD_LOCKDSANLOCK
BUILD_LVMLOCKD
@@ -823,8 +821,6 @@ HAVE_PIE
POW_LIB
LIBOBJS
ALLOCA
SORT
WC
CHMOD
CSCOPE_CMD
CFLOW_CMD
@@ -962,7 +958,6 @@ enable_use_lvmetad
with_lvmetad_pidfile
enable_use_lvmpolld
with_lvmpolld_pidfile
enable_dmfilemapd
enable_notify_dbus
enable_blkid_wiping
enable_udev_systemd_background_jobs
@@ -1697,7 +1692,6 @@ Optional Features:
  --disable-use-lvmlockd  disable usage of LVM lock daemon
  --disable-use-lvmetad   disable usage of LVM Metadata Daemon
  --disable-use-lvmpolld  disable usage of LVM Poll Daemon
  --enable-dmfilemapd     enable the dmstats filemap daemon
  --enable-notify-dbus    enable LVM notification using dbus
  --disable-blkid_wiping  disable libblkid detection of signatures when wiping
                          and use native code instead
@@ -5240,202 +5234,6 @@ else
|
||||
CHMOD="$ac_cv_path_CHMOD"
|
||||
fi
|
||||
|
||||
if test -n "$ac_tool_prefix"; then
|
||||
# Extract the first word of "${ac_tool_prefix}wc", so it can be a program name with args.
|
||||
set dummy ${ac_tool_prefix}wc; ac_word=$2
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
|
||||
$as_echo_n "checking for $ac_word... " >&6; }
|
||||
if ${ac_cv_path_WC+:} false; then :
|
||||
$as_echo_n "(cached) " >&6
|
||||
else
|
||||
case $WC in
|
||||
[\\/]* | ?:[\\/]*)
|
||||
ac_cv_path_WC="$WC" # Let the user override the test with a path.
|
||||
;;
|
||||
*)
|
||||
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
|
||||
for as_dir in $PATH
|
||||
do
|
||||
IFS=$as_save_IFS
|
||||
test -z "$as_dir" && as_dir=.
|
||||
for ac_exec_ext in '' $ac_executable_extensions; do
|
||||
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
|
||||
ac_cv_path_WC="$as_dir/$ac_word$ac_exec_ext"
|
||||
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
|
||||
break 2
|
||||
fi
|
||||
done
|
||||
done
|
||||
IFS=$as_save_IFS
|
||||
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
WC=$ac_cv_path_WC
|
||||
if test -n "$WC"; then
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $WC" >&5
|
||||
$as_echo "$WC" >&6; }
|
||||
else
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
|
||||
$as_echo "no" >&6; }
|
||||
fi
|
||||
|
||||
|
||||
fi
|
||||
if test -z "$ac_cv_path_WC"; then
|
||||
ac_pt_WC=$WC
|
||||
# Extract the first word of "wc", so it can be a program name with args.
|
||||
set dummy wc; ac_word=$2
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
|
||||
$as_echo_n "checking for $ac_word... " >&6; }
|
||||
if ${ac_cv_path_ac_pt_WC+:} false; then :
|
||||
$as_echo_n "(cached) " >&6
|
||||
else
|
||||
case $ac_pt_WC in
|
||||
[\\/]* | ?:[\\/]*)
|
||||
ac_cv_path_ac_pt_WC="$ac_pt_WC" # Let the user override the test with a path.
|
||||
;;
|
||||
*)
|
||||
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
|
||||
for as_dir in $PATH
|
||||
do
|
||||
IFS=$as_save_IFS
|
||||
test -z "$as_dir" && as_dir=.
|
||||
for ac_exec_ext in '' $ac_executable_extensions; do
|
||||
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
|
||||
ac_cv_path_ac_pt_WC="$as_dir/$ac_word$ac_exec_ext"
|
||||
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
|
||||
break 2
|
||||
fi
|
||||
done
|
||||
done
|
||||
IFS=$as_save_IFS
|
||||
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
ac_pt_WC=$ac_cv_path_ac_pt_WC
|
||||
if test -n "$ac_pt_WC"; then
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_WC" >&5
|
||||
$as_echo "$ac_pt_WC" >&6; }
|
||||
else
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
|
||||
$as_echo "no" >&6; }
|
||||
fi
|
||||
|
||||
if test "x$ac_pt_WC" = x; then
|
||||
WC=""
|
||||
else
|
||||
case $cross_compiling:$ac_tool_warned in
|
||||
yes:)
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
|
||||
$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
|
||||
ac_tool_warned=yes ;;
|
||||
esac
|
||||
WC=$ac_pt_WC
|
||||
fi
|
||||
else
|
||||
WC="$ac_cv_path_WC"
|
||||
fi
|
||||
|
||||
if test -n "$ac_tool_prefix"; then
|
||||
# Extract the first word of "${ac_tool_prefix}sort", so it can be a program name with args.
|
||||
set dummy ${ac_tool_prefix}sort; ac_word=$2
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
|
||||
$as_echo_n "checking for $ac_word... " >&6; }
|
||||
if ${ac_cv_path_SORT+:} false; then :
|
||||
$as_echo_n "(cached) " >&6
|
||||
else
|
||||
case $SORT in
|
||||
[\\/]* | ?:[\\/]*)
|
||||
ac_cv_path_SORT="$SORT" # Let the user override the test with a path.
|
||||
;;
|
||||
*)
|
||||
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
|
||||
for as_dir in $PATH
|
||||
do
|
||||
IFS=$as_save_IFS
|
||||
test -z "$as_dir" && as_dir=.
|
||||
for ac_exec_ext in '' $ac_executable_extensions; do
|
||||
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
|
||||
ac_cv_path_SORT="$as_dir/$ac_word$ac_exec_ext"
|
||||
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
|
||||
break 2
|
||||
fi
|
||||
done
|
||||
done
|
||||
IFS=$as_save_IFS
|
||||
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
SORT=$ac_cv_path_SORT
|
||||
if test -n "$SORT"; then
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $SORT" >&5
|
||||
$as_echo "$SORT" >&6; }
|
||||
else
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
|
||||
$as_echo "no" >&6; }
|
||||
fi
|
||||
|
||||
|
||||
fi
|
||||
if test -z "$ac_cv_path_SORT"; then
|
||||
ac_pt_SORT=$SORT
|
||||
# Extract the first word of "sort", so it can be a program name with args.
|
||||
set dummy sort; ac_word=$2
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
|
||||
$as_echo_n "checking for $ac_word... " >&6; }
|
||||
if ${ac_cv_path_ac_pt_SORT+:} false; then :
|
||||
$as_echo_n "(cached) " >&6
|
||||
else
|
||||
case $ac_pt_SORT in
|
||||
[\\/]* | ?:[\\/]*)
|
||||
ac_cv_path_ac_pt_SORT="$ac_pt_SORT" # Let the user override the test with a path.
|
||||
;;
|
||||
*)
|
||||
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
|
||||
for as_dir in $PATH
|
||||
do
|
||||
IFS=$as_save_IFS
|
||||
test -z "$as_dir" && as_dir=.
|
||||
for ac_exec_ext in '' $ac_executable_extensions; do
|
||||
if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
|
||||
ac_cv_path_ac_pt_SORT="$as_dir/$ac_word$ac_exec_ext"
|
||||
$as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
|
||||
break 2
|
||||
fi
|
||||
done
|
||||
done
|
||||
IFS=$as_save_IFS
|
||||
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
ac_pt_SORT=$ac_cv_path_ac_pt_SORT
|
||||
if test -n "$ac_pt_SORT"; then
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_pt_SORT" >&5
|
||||
$as_echo "$ac_pt_SORT" >&6; }
|
||||
else
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
|
||||
$as_echo "no" >&6; }
|
||||
fi
|
||||
|
||||
if test "x$ac_pt_SORT" = x; then
|
||||
SORT=""
|
||||
else
|
||||
case $cross_compiling:$ac_tool_warned in
|
||||
yes:)
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
|
||||
$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
|
||||
ac_tool_warned=yes ;;
|
||||
esac
|
||||
SORT=$ac_pt_SORT
|
||||
fi
|
||||
else
|
||||
SORT="$ac_cv_path_SORT"
|
||||
fi
|
||||
|
||||
|
||||
################################################################################
|
||||
ac_header_dirent=no
|
||||
@@ -12078,21 +11876,6 @@ cat >>confdefs.h <<_ACEOF
|
||||
_ACEOF
|
||||
|
||||
|
||||
################################################################################
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build dmfilemapd" >&5
|
||||
$as_echo_n "checking whether to build dmfilemapd... " >&6; }
|
||||
# Check whether --enable-dmfilemapd was given.
|
||||
if test "${enable_dmfilemapd+set}" = set; then :
|
||||
enableval=$enable_dmfilemapd; DMFILEMAPD=$enableval
|
||||
fi
|
||||
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $DMFILEMAPD" >&5
|
||||
$as_echo "$DMFILEMAPD" >&6; }
|
||||
BUILD_DMFILEMAPD=$DMFILEMAPD
|
||||
|
||||
$as_echo "#define DMFILEMAPD 1" >>confdefs.h
|
||||
|
||||
|
||||
################################################################################
|
||||
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build notifydbus" >&5
|
||||
$as_echo_n "checking whether to build notifydbus... " >&6; }
|
||||
@@ -15142,24 +14925,6 @@ done
|
||||
|
||||
fi
|
||||
|
||||
if test "$DMFILEMAPD" = yes; then
|
||||
for ac_header in sys/inotify.h
|
||||
do :
|
||||
ac_fn_c_check_header_mongrel "$LINENO" "sys/inotify.h" "ac_cv_header_sys_inotify_h" "$ac_includes_default"
|
||||
if test "x$ac_cv_header_sys_inotify_h" = xyes; then :
|
||||
cat >>confdefs.h <<_ACEOF
|
||||
#define HAVE_SYS_INOTIFY_H 1
|
||||
_ACEOF
|
||||
|
||||
else
|
||||
hard_bailout
|
||||
fi
|
||||
|
||||
done
|
||||
|
||||
fi
|
||||
|
||||
|
||||
################################################################################
|
||||
if test -n "$ac_tool_prefix"; then
|
||||
# Extract the first word of "${ac_tool_prefix}modprobe", so it can be a program name with args.
|
||||
@@ -15619,13 +15384,11 @@ LVM_LIBAPI=`echo "$VER" | $AWK -F '[()]' '{print $2}'`
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
################################################################################
|
||||
ac_config_files="$ac_config_files Makefile make.tmpl daemons/Makefile daemons/clvmd/Makefile daemons/cmirrord/Makefile daemons/dmeventd/Makefile daemons/dmeventd/libdevmapper-event.pc daemons/dmeventd/plugins/Makefile daemons/dmeventd/plugins/lvm2/Makefile daemons/dmeventd/plugins/raid/Makefile daemons/dmeventd/plugins/mirror/Makefile daemons/dmeventd/plugins/snapshot/Makefile daemons/dmeventd/plugins/thin/Makefile daemons/dmfilemapd/Makefile daemons/lvmdbusd/Makefile daemons/lvmdbusd/path.py daemons/lvmetad/Makefile daemons/lvmpolld/Makefile daemons/lvmlockd/Makefile conf/Makefile conf/example.conf conf/lvmlocal.conf conf/command_profile_template.profile conf/metadata_profile_template.profile include/.symlinks include/Makefile lib/Makefile lib/format1/Makefile lib/format_pool/Makefile lib/locking/Makefile lib/mirror/Makefile lib/replicator/Makefile include/lvm-version.h lib/raid/Makefile lib/snapshot/Makefile lib/thin/Makefile lib/cache_segtype/Makefile libdaemon/Makefile libdaemon/client/Makefile libdaemon/server/Makefile libdm/Makefile libdm/libdevmapper.pc liblvm/Makefile liblvm/liblvm2app.pc man/Makefile po/Makefile python/Makefile python/setup.py scripts/blkdeactivate.sh scripts/blk_availability_init_red_hat scripts/blk_availability_systemd_red_hat.service scripts/clvmd_init_red_hat scripts/cmirrord_init_red_hat scripts/com.redhat.lvmdbus1.service scripts/dm_event_systemd_red_hat.service scripts/dm_event_systemd_red_hat.socket scripts/lvm2_cluster_activation_red_hat.sh scripts/lvm2_cluster_activation_systemd_red_hat.service scripts/lvm2_clvmd_systemd_red_hat.service scripts/lvm2_cmirrord_systemd_red_hat.service scripts/lvm2_lvmdbusd_systemd_red_hat.service scripts/lvm2_lvmetad_init_red_hat scripts/lvm2_lvmetad_systemd_red_hat.service scripts/lvm2_lvmetad_systemd_red_hat.socket scripts/lvm2_lvmpolld_init_red_hat scripts/lvm2_lvmpolld_systemd_red_hat.service scripts/lvm2_lvmpolld_systemd_red_hat.socket scripts/lvm2_lvmlockd_systemd_red_hat.service scripts/lvm2_lvmlocking_systemd_red_hat.service scripts/lvm2_monitoring_init_red_hat scripts/lvm2_monitoring_systemd_red_hat.service scripts/lvm2_pvscan_systemd_red_hat@.service scripts/lvm2_tmpfiles_red_hat.conf scripts/lvmdump.sh scripts/Makefile test/Makefile test/api/Makefile test/unit/Makefile tools/Makefile udev/Makefile unit-tests/datastruct/Makefile unit-tests/regex/Makefile unit-tests/mm/Makefile"
|
||||
ac_config_files="$ac_config_files Makefile make.tmpl daemons/Makefile daemons/clvmd/Makefile daemons/cmirrord/Makefile daemons/dmeventd/Makefile daemons/dmeventd/libdevmapper-event.pc daemons/dmeventd/plugins/Makefile daemons/dmeventd/plugins/lvm2/Makefile daemons/dmeventd/plugins/raid/Makefile daemons/dmeventd/plugins/mirror/Makefile daemons/dmeventd/plugins/snapshot/Makefile daemons/dmeventd/plugins/thin/Makefile daemons/lvmdbusd/Makefile daemons/lvmdbusd/path.py daemons/lvmetad/Makefile daemons/lvmpolld/Makefile daemons/lvmlockd/Makefile conf/Makefile conf/example.conf conf/lvmlocal.conf conf/command_profile_template.profile conf/metadata_profile_template.profile include/.symlinks include/Makefile lib/Makefile lib/format1/Makefile lib/format_pool/Makefile lib/locking/Makefile lib/mirror/Makefile lib/replicator/Makefile include/lvm-version.h lib/raid/Makefile lib/snapshot/Makefile lib/thin/Makefile lib/cache_segtype/Makefile libdaemon/Makefile libdaemon/client/Makefile libdaemon/server/Makefile libdm/Makefile libdm/libdevmapper.pc liblvm/Makefile liblvm/liblvm2app.pc man/Makefile po/Makefile python/Makefile python/setup.py scripts/blkdeactivate.sh scripts/blk_availability_init_red_hat scripts/blk_availability_systemd_red_hat.service scripts/clvmd_init_red_hat scripts/cmirrord_init_red_hat scripts/com.redhat.lvmdbus1.service scripts/dm_event_systemd_red_hat.service scripts/dm_event_systemd_red_hat.socket scripts/lvm2_cluster_activation_red_hat.sh scripts/lvm2_cluster_activation_systemd_red_hat.service scripts/lvm2_clvmd_systemd_red_hat.service scripts/lvm2_cmirrord_systemd_red_hat.service scripts/lvm2_lvmdbusd_systemd_red_hat.service scripts/lvm2_lvmetad_init_red_hat scripts/lvm2_lvmetad_systemd_red_hat.service scripts/lvm2_lvmetad_systemd_red_hat.socket scripts/lvm2_lvmpolld_init_red_hat scripts/lvm2_lvmpolld_systemd_red_hat.service scripts/lvm2_lvmpolld_systemd_red_hat.socket scripts/lvm2_lvmlockd_systemd_red_hat.service scripts/lvm2_lvmlocking_systemd_red_hat.service scripts/lvm2_monitoring_init_red_hat scripts/lvm2_monitoring_systemd_red_hat.service scripts/lvm2_pvscan_systemd_red_hat@.service scripts/lvm2_tmpfiles_red_hat.conf scripts/lvmdump.sh scripts/Makefile test/Makefile test/api/Makefile test/unit/Makefile tools/Makefile udev/Makefile unit-tests/datastruct/Makefile unit-tests/regex/Makefile unit-tests/mm/Makefile"
|
||||
|
||||
cat >confcache <<\_ACEOF
|
||||
# This file is a shell script that caches the results of configure
|
||||
@@ -16333,7 +16096,6 @@ do
|
||||
"daemons/dmeventd/plugins/mirror/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/mirror/Makefile" ;;
|
||||
"daemons/dmeventd/plugins/snapshot/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/snapshot/Makefile" ;;
|
||||
"daemons/dmeventd/plugins/thin/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/thin/Makefile" ;;
|
||||
"daemons/dmfilemapd/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmfilemapd/Makefile" ;;
|
||||
"daemons/lvmdbusd/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/lvmdbusd/Makefile" ;;
|
||||
"daemons/lvmdbusd/path.py") CONFIG_FILES="$CONFIG_FILES daemons/lvmdbusd/path.py" ;;
|
||||
"daemons/lvmetad/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/lvmetad/Makefile" ;;
|
||||
|
configure.in (19 lines changed)
@@ -86,8 +86,6 @@ AC_PROG_RANLIB
AC_PATH_TOOL(CFLOW_CMD, cflow)
AC_PATH_TOOL(CSCOPE_CMD, cscope)
AC_PATH_TOOL(CHMOD, chmod)
AC_PATH_TOOL(WC, wc)
AC_PATH_TOOL(SORT, sort)

################################################################################
dnl -- Check for header files.
@@ -1271,16 +1269,6 @@ fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMPOLLD, [$DEFAULT_USE_LVMPOLLD],
	[Use lvmpolld by default.])

################################################################################
dnl -- Check dmfilemapd
AC_MSG_CHECKING(whether to build dmfilemapd)
AC_ARG_ENABLE(dmfilemapd, AC_HELP_STRING([--enable-dmfilemapd],
	      [enable the dmstats filemap daemon]),
	      DMFILEMAPD=$enableval)
AC_MSG_RESULT($DMFILEMAPD)
BUILD_DMFILEMAPD=$DMFILEMAPD
AC_DEFINE([DMFILEMAPD], 1, [Define to 1 to enable the device-mapper filemap daemon.])

################################################################################
dnl -- Build notifydbus
AC_MSG_CHECKING(whether to build notifydbus)
@@ -1865,10 +1853,6 @@ if test "$UDEV_SYNC" = yes; then
	AC_CHECK_HEADERS(sys/ipc.h sys/sem.h,,hard_bailout)
fi

if test "$DMFILEMAPD" = yes; then
	AC_CHECK_HEADERS([sys/inotify.h],,hard_bailout)
fi

################################################################################
AC_PATH_TOOL(MODPROBE_CMD, modprobe)

@@ -2008,7 +1992,6 @@ AC_SUBST(BUILD_LVMPOLLD)
AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
AC_SUBST(BUILD_LOCKDDLM)
AC_SUBST(BUILD_DMFILEMAPD)
AC_SUBST(BUILD_NOTIFYDBUS)
AC_SUBST(CACHE)
AC_SUBST(CFLAGS)
@@ -2058,7 +2041,6 @@ AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(DMEVENTD)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DMFILEMAPD)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
AC_SUBST(FSADM)
@@ -2174,7 +2156,6 @@ daemons/dmeventd/plugins/raid/Makefile
daemons/dmeventd/plugins/mirror/Makefile
daemons/dmeventd/plugins/snapshot/Makefile
daemons/dmeventd/plugins/thin/Makefile
daemons/dmfilemapd/Makefile
daemons/lvmdbusd/Makefile
daemons/lvmdbusd/path.py
daemons/lvmetad/Makefile
@@ -48,12 +48,8 @@ ifeq ("@BUILD_LVMDBUSD@", "yes")
  SUBDIRS += lvmdbusd
endif

ifeq ("@BUILD_DMFILEMAPD@", "yes")
  SUBDIRS += dmfilemapd
endif

ifeq ($(MAKECMDGOALS),distclean)
  SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd dmfilemapd
  SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd
endif

include $(top_builddir)/make.tmpl
@@ -517,7 +517,7 @@ int main(int argc, char *argv[])
	/* Initialise the LVM thread variables */
	dm_list_init(&lvm_cmd_head);
	if (pthread_attr_init(&stack_attr) ||
	    pthread_attr_setstacksize(&stack_attr, STACK_SIZE + getpagesize())) {
	    pthread_attr_setstacksize(&stack_attr, STACK_SIZE)) {
		log_sys_error("pthread_attr_init", "");
		exit(1);
	}
@@ -468,7 +468,7 @@ static int _pthread_create_smallstack(pthread_t *t, void *(*fun)(void *), void *
	/*
	 * We use a smaller stack since it gets preallocated in its entirety
	 */
	pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE + getpagesize());
	pthread_attr_setstacksize(&attr, THREAD_STACK_SIZE);

	/*
	 * If no-one will be waiting, we need to detach.
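Both hunks above toggle the same pattern: a fixed per-thread stack request with one extra memory page added on top, matching the "Add extra memory page when limiting pthread stack size" changelog entries. A minimal, self-contained sketch of that pattern (the STACK_SIZE value, worker function, and error handling are placeholders, not the daemon code):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define STACK_SIZE (128 * 1024)	/* placeholder value, not the daemons' constant */

static void *worker(void *arg)
{
	(void) arg;
	return NULL;
}

int main(void)
{
	pthread_attr_t attr;
	pthread_t t;

	/* As in the hunks above: request the intended stack size plus one
	 * extra page so the usable stack is not trimmed below STACK_SIZE. */
	if (pthread_attr_init(&attr) ||
	    pthread_attr_setstacksize(&attr, STACK_SIZE + (size_t) getpagesize())) {
		fprintf(stderr, "failed to set up thread attributes\n");
		return 1;
	}

	if (pthread_create(&t, &attr, worker, NULL)) {
		fprintf(stderr, "pthread_create failed\n");
		return 1;
	}

	pthread_join(t, NULL);
	pthread_attr_destroy(&attr);
	return 0;
}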
@@ -184,12 +184,16 @@ int register_device(const char *device,
		goto_bad;

	if (!dmeventd_lvm2_command(state->mem, state->cmd_lvscan, sizeof(state->cmd_lvscan),
				   "lvscan --cache", device))
				   "lvscan --cache", device)) {
		dmeventd_lvm2_exit_with_pool(state);
		goto_bad;
	}

	if (!dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
				   "lvconvert --repair --use-policies", device))
				   "lvconvert --repair --use-policies", device)) {
		dmeventd_lvm2_exit_with_pool(state);
		goto_bad;
	}

	*user = state;

@@ -199,9 +203,6 @@ int register_device(const char *device,
bad:
	log_error("Failed to monitor mirror %s.", device);

	if (state)
		dmeventd_lvm2_exit_with_pool(state);

	return 0;
}
@@ -140,8 +140,10 @@ int register_device(const char *device,
				   "lvscan --cache", device) ||
	    !dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
				   "lvconvert --config devices{ignore_suspended_devices=1} "
				   "--repair --use-policies", device))
				   "--repair --use-policies", device)) {
		dmeventd_lvm2_exit_with_pool(state);
		goto_bad;
	}

	*user = state;

@@ -151,9 +153,6 @@ int register_device(const char *device,
bad:
	log_error("Failed to monitor RAID %s.", device);

	if (state)
		dmeventd_lvm2_exit_with_pool(state);

	return 0;
}
@@ -254,8 +254,10 @@ int register_device(const char *device,

	if (!dmeventd_lvm2_command(state->mem, state->cmd_lvextend,
				   sizeof(state->cmd_lvextend),
				   "lvextend --use-policies", device))
				   "lvextend --use-policies", device)) {
		dmeventd_lvm2_exit_with_pool(state);
		goto_bad;
	}

	state->percent_check = CHECK_MINIMUM;
	*user = state;
@@ -266,9 +268,6 @@ int register_device(const char *device,
bad:
	log_error("Failed to monitor snapshot %s.", device);

	if (state)
		dmeventd_lvm2_exit_with_pool(state);

	return 0;
}
@@ -18,6 +18,7 @@

#include <sys/wait.h>
#include <stdarg.h>
#include <pthread.h>

/* TODO - move this mountinfo code into library to be reusable */
#ifdef __linux__
@@ -58,8 +59,8 @@ struct dso_state {
	int restore_sigset;
	sigset_t old_sigset;
	pid_t pid;
	char *argv[3];
	char *cmd_str;
	char **argv;
	char cmd_str[1024];
};

DM_EVENT_LOG_FN("thin")
@@ -85,7 +86,7 @@ static int _run_command(struct dso_state *state)
	} else {
		/* For an error event it's for a user to check status and decide */
		env[1] = NULL;
		log_debug("Error event processing.");
		log_debug("Error event processing");
	}

	log_verbose("Executing command: %s", state->cmd_str);
@@ -115,7 +116,7 @@ static int _use_policy(struct dm_task *dmt, struct dso_state *state)
#if THIN_DEBUG
	log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
	if (state->argv[0])
	if (state->argv)
		return _run_command(state);

	if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
@@ -352,41 +353,34 @@ int register_device(const char *device,
		    void **user)
{
	struct dso_state *state;
	int maxcmd;
	char *str;
	char cmd_str[PATH_MAX + 128 + 2]; /* cmd ' ' vg/lv \0 */

	if (!dmeventd_lvm2_init_with_pool("thin_pool_state", state))
		goto_bad;

	if (!dmeventd_lvm2_command(state->mem, cmd_str, sizeof(cmd_str),
				   "_dmeventd_thin_command", device))
	if (!dmeventd_lvm2_command(state->mem, state->cmd_str,
				   sizeof(state->cmd_str),
				   "_dmeventd_thin_command", device)) {
		dmeventd_lvm2_exit_with_pool(state);
		goto_bad;
	}

	if (strncmp(cmd_str, "lvm ", 4) == 0) {
		if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str + 4))) {
			log_error("Failed to copy lvm command.");
			goto bad;
		}
	} else if (cmd_str[0] == '/') {
		if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str))) {
			log_error("Failed to copy thin command.");
	if (strncmp(state->cmd_str, "lvm ", 4)) {
		maxcmd = 2; /* space for last NULL element */
		for (str = state->cmd_str; *str; str++)
			if (*str == ' ')
				maxcmd++;
		if (!(str = dm_pool_strdup(state->mem, state->cmd_str)) ||
		    !(state->argv = dm_pool_zalloc(state->mem, maxcmd * sizeof(char *)))) {
			log_error("Failed to allocate memory for command.");
			goto bad;
		}

		/* Find last space before 'vg/lv' */
		if (!(str = strrchr(state->cmd_str, ' ')))
			goto inval;

		if (!(state->argv[0] = dm_pool_strndup(state->mem, state->cmd_str,
						       str - state->cmd_str))) {
			log_error("Failed to copy command.");
			goto bad;
		}

		state->argv[1] = str + 1; /* 1 argument - vg/lv */
		dm_split_words(str, maxcmd - 1, 0, state->argv);
		_init_thread_signals(state);
	} else /* Unuspported command format */
		goto inval;
	} else
		memmove(state->cmd_str, state->cmd_str + 4, strlen(state->cmd_str + 4) + 1);

	state->pid = -1;
	*user = state;
@@ -394,14 +388,9 @@ int register_device(const char *device,
	log_info("Monitoring thin pool %s.", device);

	return 1;
inval:
	log_error("Invalid command for monitoring: %s.", cmd_str);
bad:
	log_error("Failed to monitor thin pool %s.", device);

	if (state)
		dmeventd_lvm2_exit_with_pool(state);

	return 0;
}
daemons/dmfilemapd/.gitignore (vendored, 1 line changed)
@@ -1 +0,0 @@
dmfilemapd
@@ -1,69 +0,0 @@
|
||||
#
|
||||
# Copyright (C) 2016 Red Hat, Inc. All rights reserved.
|
||||
#
|
||||
# This file is part of the device-mapper userspace tools.
|
||||
#
|
||||
# This copyrighted material is made available to anyone wishing to use,
|
||||
# modify, copy, or redistribute it subject to the terms and conditions
|
||||
# of the GNU Lesser General Public License v.2.1.
|
||||
#
|
||||
# You should have received a copy of the GNU Lesser General Public License
|
||||
# along with this program; if not, write to the Free Software Foundation,
|
||||
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
|
||||
srcdir = @srcdir@
|
||||
top_srcdir = @top_srcdir@
|
||||
top_builddir = @top_builddir@
|
||||
|
||||
SOURCES = dmfilemapd.c
|
||||
|
||||
TARGETS = dmfilemapd
|
||||
|
||||
.PHONY: install_dmeventd install_dmeventd_static
|
||||
|
||||
INSTALL_DMFILEMAPD_TARGETS = install_dmfilemapd_dynamic
|
||||
|
||||
CLEAN_TARGETS = dmfilemapd.static
|
||||
|
||||
CFLOW_LIST = $(SOURCES)
|
||||
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
|
||||
CFLOW_TARGET = dmfilemapd
|
||||
|
||||
include $(top_builddir)/make.tmpl
|
||||
|
||||
all: device-mapper
|
||||
device-mapper: $(TARGETS)
|
||||
|
||||
LIBS += -ldevmapper
|
||||
LVMLIBS += -ldevmapper-event $(PTHREAD_LIBS)
|
||||
|
||||
CFLAGS_dmeventd.o += $(EXTRA_EXEC_CFLAGS)
|
||||
|
||||
dmfilemapd: $(LIB_SHARED) dmfilemapd.o
|
||||
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) -L. -o $@ dmfilemapd.o \
|
||||
$(DL_LIBS) $(LVMLIBS) $(LIBS) -rdynamic
|
||||
|
||||
dmfilemapd.static: $(LIB_STATIC) dmfilemapd.o $(interfacebuilddir)/libdevmapper.a
|
||||
$(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -static -L. -L$(interfacebuilddir) -o $@ \
|
||||
dmfilemapd.o $(DL_LIBS) $(LVMLIBS) $(LIBS) $(STATIC_LIBS)
|
||||
|
||||
ifneq ("$(CFLOW_CMD)", "")
|
||||
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
|
||||
-include $(top_builddir)/libdm/libdevmapper.cflow
|
||||
-include $(top_builddir)/lib/liblvm-internal.cflow
|
||||
-include $(top_builddir)/lib/liblvm2cmd.cflow
|
||||
-include $(top_builddir)/daemons/dmfilemapd/$(LIB_NAME).cflow
|
||||
endif
|
||||
|
||||
install_dmfilemapd_dynamic: dmfilemapd
|
||||
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
|
||||
|
||||
install_dmfilemapd_static: dmfilemapd.static
|
||||
$(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F)
|
||||
|
||||
install_dmfilemapd: $(INSTALL_DMEVENTD_TARGETS)
|
||||
|
||||
install: install_dmfilemapd
|
||||
|
||||
install_device-mapper: install_dmfilemapd
|
||||
|
@@ -1,764 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2016 Red Hat, Inc. All rights reserved.
|
||||
*
|
||||
* This file is part of the device-mapper userspace tools.
|
||||
*
|
||||
* It includes tree drawing code based on pstree: http://psmisc.sourceforge.net/
|
||||
*
|
||||
* This copyrighted material is made available to anyone wishing to use,
|
||||
* modify, copy, or redistribute it subject to the terms and conditions
|
||||
* of the GNU General Public License v.2.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; if not, write to the Free Software Foundation,
|
||||
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include "tool.h"
|
||||
|
||||
#include "dm-logging.h"
|
||||
|
||||
#include <sys/types.h>
|
||||
#include <sys/stat.h>
|
||||
#include <unistd.h>
|
||||
#include <fcntl.h>
|
||||
#include <sys/inotify.h>
|
||||
#include <dirent.h>
|
||||
#include <ctype.h>
|
||||
|
||||
#ifdef __linux__
|
||||
# include "kdev_t.h"
|
||||
#else
|
||||
# define MAJOR(x) major((x))
|
||||
# define MINOR(x) minor((x))
|
||||
# define MKDEV(x,y) makedev((x),(y))
|
||||
#endif
|
||||
|
||||
/* limit to two updates/sec */
|
||||
#define FILEMAPD_WAIT_USECS 500000
|
||||
|
||||
/* how long to wait for unlinked files */
|
||||
#define FILEMAPD_NOFILE_WAIT_USECS 100000
|
||||
#define FILEMAPD_NOFILE_WAIT_TRIES 10
|
||||
|
||||
struct filemap_monitor {
|
||||
dm_filemapd_mode_t mode;
|
||||
/* group_id to update */
|
||||
uint64_t group_id;
|
||||
char *path;
|
||||
int inotify_fd;
|
||||
int inotify_watch_fd;
|
||||
/* file to monitor */
|
||||
int fd;
|
||||
|
||||
/* monitoring heuristics */
|
||||
int64_t blocks; /* allocated blocks, from stat.st_blocks */
|
||||
int64_t nr_regions;
|
||||
int deleted;
|
||||
};
|
||||
|
||||
static int _foreground;
|
||||
static int _verbose;
|
||||
|
||||
const char *const _usage = "dmfilemapd <fd> <group_id> <path> <mode> "
|
||||
"[<foreground>[<log_level>]]";
|
||||
|
||||
/*
|
||||
* Daemon logging. By default, all messages are thrown away: messages
|
||||
* are only written to the terminal if the daemon is run in the foreground.
|
||||
*/
|
||||
__attribute__((format(printf, 5, 0)))
|
||||
static void _dmfilemapd_log_line(int level,
|
||||
const char *file __attribute__((unused)),
|
||||
int line __attribute__((unused)),
|
||||
int dm_errno_or_class,
|
||||
const char *f, va_list ap)
|
||||
{
|
||||
static int _abort_on_internal_errors = -1;
|
||||
FILE *out = log_stderr(level) ? stderr : stdout;
|
||||
|
||||
level = log_level(level);
|
||||
|
||||
if (level <= _LOG_WARN || _verbose) {
|
||||
if (level < _LOG_WARN)
|
||||
out = stderr;
|
||||
vfprintf(out, f, ap);
|
||||
fputc('\n', out);
|
||||
}
|
||||
|
||||
if (_abort_on_internal_errors < 0)
|
||||
/* Set when env DM_ABORT_ON_INTERNAL_ERRORS is not "0" */
|
||||
_abort_on_internal_errors =
|
||||
strcmp(getenv("DM_ABORT_ON_INTERNAL_ERRORS") ? : "0", "0");
|
||||
|
||||
if (_abort_on_internal_errors &&
|
||||
!strncmp(f, INTERNAL_ERROR, sizeof(INTERNAL_ERROR) - 1))
|
||||
abort();
|
||||
}
|
||||
|
||||
__attribute__((format(printf, 5, 6)))
|
||||
static void _dmfilemapd_log_with_errno(int level,
|
||||
const char *file, int line,
|
||||
int dm_errno_or_class,
|
||||
const char *f, ...)
|
||||
{
|
||||
va_list ap;
|
||||
|
||||
va_start(ap, f);
|
||||
_dmfilemapd_log_line(level, file, line, dm_errno_or_class, f, ap);
|
||||
va_end(ap);
|
||||
}
|
||||
|
||||
/*
|
||||
* Only used for reporting errors before daemonise().
|
||||
*/
|
||||
__attribute__((format(printf, 1, 2)))
|
||||
static void _early_log(const char *fmt, ...)
|
||||
{
|
||||
va_list ap;
|
||||
|
||||
va_start(ap, fmt);
|
||||
vfprintf(stderr, fmt, ap);
|
||||
fputc('\n', stderr);
|
||||
va_end(ap);
|
||||
}
|
||||
|
||||
static void _setup_logging(void)
|
||||
{
|
||||
dm_log_init_verbose(_verbose - 1);
|
||||
dm_log_with_errno_init(_dmfilemapd_log_with_errno);
|
||||
}
|
||||
|
||||
#define PROC_FD_DELETED_STR " (deleted)"
|
||||
/*
|
||||
* Scan the /proc/<pid>/fd directory for pid and check for an fd
|
||||
* symlink whose contents match path.
|
||||
*/
|
||||
static int _is_open_in_pid(pid_t pid, const char *path)
|
||||
{
|
||||
char deleted_path[PATH_MAX + sizeof(PROC_FD_DELETED_STR)];
|
||||
struct dirent *pid_dp = NULL;
|
||||
char path_buf[PATH_MAX];
|
||||
char link_buf[PATH_MAX];
|
||||
DIR *pid_d = NULL;
|
||||
ssize_t len;
|
||||
|
||||
if (pid == getpid())
|
||||
return 0;
|
||||
|
||||
if (dm_snprintf(path_buf, sizeof(path_buf), "/proc/%d/fd", pid) < 0) {
|
||||
log_error("Could not format pid path.");
|
||||
goto bad;
|
||||
}
|
||||
|
||||
/*
|
||||
* Test for the kernel 'file (deleted)' form when scanning.
|
||||
*/
|
||||
if (dm_snprintf(deleted_path, sizeof(deleted_path), "%s%s",
|
||||
path, PROC_FD_DELETED_STR) < 0) {
|
||||
log_error("Could not format check path.");
|
||||
}
|
||||
|
||||
pid_d = opendir(path_buf);
|
||||
if (!pid_d) {
|
||||
log_error("Could not open proc path: %s.", path_buf);
|
||||
goto bad;
|
||||
}
|
||||
|
||||
while ((pid_dp = readdir(pid_d)) != NULL) {
|
||||
if (pid_dp->d_name[0] == '.')
|
||||
continue;
|
||||
if ((len = readlinkat(dirfd(pid_d), pid_dp->d_name, link_buf,
|
||||
sizeof(link_buf))) < 0) {
|
||||
log_error("readlink failed for /proc/%d/fd/.", pid);
|
||||
goto bad;
|
||||
}
|
||||
link_buf[len] = '\0';
|
||||
if (!strcmp(deleted_path, link_buf)) {
|
||||
closedir(pid_d);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
bad:
|
||||
closedir(pid_d);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Attempt to guess detect whether a file is open by any process by
|
||||
* scanning symbolic links in /proc/<pid>/fd.
|
||||
*
|
||||
* This is a heuristic since it cannot guarantee to detect brief
|
||||
* access in all cases: a process that opens and then closes the
|
||||
* file rapidly may never be seen by the scan.
|
||||
*
|
||||
* The method will also give false-positives if a process exists
|
||||
* that has a deleted file open that had the same path, but a
|
||||
* different inode number, to the file being monitored.
|
||||
*
|
||||
* For this reason the daemon only uses _is_open() for unlinked
|
||||
* files when the mode is DM_FILEMAPD_FOLLOW_INODE, since these
|
||||
* files can no longer be newly opened by processes.
|
||||
*
|
||||
* In this situation !is_open(path) provides an indication that
|
||||
* the daemon should shut down: the file has been unlinked form
|
||||
* the file system and we appear to hold the final reference.
|
||||
*/
|
||||
static int _is_open(const char *path)
|
||||
{
|
||||
struct dirent *proc_dp = NULL;
|
||||
DIR *proc_d = NULL;
|
||||
pid_t pid;
|
||||
|
||||
proc_d = opendir("/proc");
|
||||
if (!proc_d)
|
||||
return 0;
|
||||
while ((proc_dp = readdir(proc_d)) != NULL) {
|
||||
if (!isdigit(proc_dp->d_name[0]))
|
||||
continue;
|
||||
pid = strtol(proc_dp->d_name, NULL, 10);
|
||||
if (!pid)
|
||||
continue;
|
||||
if (_is_open_in_pid(pid, path)) {
|
||||
closedir(proc_d);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
closedir(proc_d);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void _filemap_monitor_wait(uint64_t usecs)
|
||||
{
|
||||
if (_verbose) {
|
||||
if (usecs == FILEMAPD_WAIT_USECS)
|
||||
log_very_verbose("waiting for FILEMAPD_WAIT");
|
||||
if (usecs == FILEMAPD_NOFILE_WAIT_USECS)
|
||||
log_very_verbose("waiting for FILEMAPD_NOFILE_WAIT");
|
||||
}
|
||||
usleep((useconds_t) usecs);
|
||||
}
|
||||
|
||||
static int _parse_args(int argc, char **argv, struct filemap_monitor *fm)
|
||||
{
|
||||
char *endptr;
|
||||
|
||||
/* we don't care what is in argv[0]. */
|
||||
argc--;
|
||||
argv++;
|
||||
|
||||
if (argc < 5) {
|
||||
_early_log("Wrong number of arguments.");
|
||||
_early_log("usage: %s", _usage);
|
||||
return 1;
|
||||
}
|
||||
|
||||
memset(fm, 0, sizeof(*fm));
|
||||
|
||||
/*
|
||||
* We don't know the true nr_regions at daemon start time,
|
||||
* and it is not worth a dm_stats_list()/group walk to count:
|
||||
* we can assume that there is at least one region or the
|
||||
* daemon would not have been started.
|
||||
*
|
||||
* A correct value will be obtained following the first update
|
||||
* of the group's regions.
|
||||
*/
|
||||
fm->nr_regions = 1;
|
||||
|
||||
/* parse <fd> */
|
||||
fm->fd = strtol(argv[0], &endptr, 10);
|
||||
if (*endptr) {
|
||||
_early_log("Could not parse file descriptor: %s", argv[0]);
|
||||
return 0;
|
||||
}
|
||||
|
||||
argc--;
|
||||
argv++;
|
||||
|
||||
/* parse <group_id> */
|
||||
fm->group_id = strtoull(argv[0], &endptr, 10);
|
||||
if (*endptr) {
|
||||
_early_log("Could not parse group identifier: %s", argv[0]);
|
||||
return 0;
|
||||
}
|
||||
|
||||
argc--;
|
||||
argv++;
|
||||
|
||||
/* parse <path> */
|
||||
if (!argv[0] || !strlen(argv[0])) {
|
||||
_early_log("Path argument is required.");
|
||||
return 0;
|
||||
}
|
||||
fm->path = dm_strdup(argv[0]);
|
||||
if (!fm->path) {
|
||||
_early_log("Could not allocate memory for path argument.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
argc--;
|
||||
argv++;
|
||||
|
||||
/* parse <mode> */
|
||||
if (!argv[0] || !strlen(argv[0])) {
|
||||
_early_log("Mode argument is required.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
fm->mode = dm_filemapd_mode_from_string(argv[0]);
|
||||
if (fm->mode == DM_FILEMAPD_FOLLOW_NONE)
|
||||
return 0;
|
||||
|
||||
argc--;
|
||||
argv++;
|
||||
|
||||
/* parse [<foreground>[<verbose>]] */
|
||||
if (argc) {
|
||||
_foreground = strtol(argv[0], &endptr, 10);
|
||||
if (*endptr) {
|
||||
_early_log("Could not parse debug argument: %s.",
|
||||
argv[0]);
|
||||
return 0;
|
||||
}
|
||||
argc--;
|
||||
argv++;
|
||||
if (argc) {
|
||||
_verbose = strtol(argv[0], &endptr, 10);
|
||||
if (*endptr) {
|
||||
_early_log("Could not parse verbose "
|
||||
"argument: %s", argv[0]);
|
||||
return 0;
|
||||
}
|
||||
if (_verbose < 0 || _verbose > 3) {
|
||||
_early_log("Verbose argument out of range: %d.",
|
||||
_verbose);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int _filemap_fd_check_changed(struct filemap_monitor *fm)
|
||||
{
|
||||
int64_t blocks, old_blocks;
|
||||
struct stat buf;
|
||||
|
||||
if (fm->fd < 0) {
|
||||
log_error("Filemap fd is not open.");
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (fstat(fm->fd, &buf)) {
|
||||
log_error("Failed to fstat filemap file descriptor.");
|
||||
return -1;
|
||||
}
|
||||
|
||||
blocks = buf.st_blocks;
|
||||
|
||||
/* first check? */
|
||||
if (fm->blocks < 0)
|
||||
old_blocks = buf.st_blocks;
|
||||
else
|
||||
old_blocks = fm->blocks;
|
||||
|
||||
fm->blocks = blocks;
|
||||
|
||||
return (fm->blocks != old_blocks);
|
||||
}
|
||||
|
||||
static void _filemap_monitor_end_notify(struct filemap_monitor *fm)
|
||||
{
|
||||
inotify_rm_watch(fm->inotify_fd, fm->inotify_watch_fd);
|
||||
if (close(fm->inotify_fd))
|
||||
log_error("Error closing inotify fd.");
|
||||
}
|
||||
|
||||
static int _filemap_monitor_set_notify(struct filemap_monitor *fm)
|
||||
{
|
||||
int inotify_fd, watch_fd;
|
||||
|
||||
/*
|
||||
* Set IN_NONBLOCK since we do not want to block in event read()
|
||||
* calls. Do not set IN_CLOEXEC as dmfilemapd is single-threaded
|
||||
* and does not fork or exec.
|
||||
*/
|
||||
if ((inotify_fd = inotify_init1(IN_NONBLOCK)) < 0) {
|
||||
_early_log("Failed to initialise inotify.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if ((watch_fd = inotify_add_watch(inotify_fd, fm->path,
|
||||
IN_MODIFY | IN_DELETE_SELF)) < 0) {
|
||||
_early_log("Failed to add inotify watch.");
|
||||
return 0;
|
||||
}
|
||||
fm->inotify_fd = inotify_fd;
|
||||
fm->inotify_watch_fd = watch_fd;
|
||||
return 1;
|
||||
}
|
||||
|
||||
static void _filemap_monitor_close_fd(struct filemap_monitor *fm)
|
||||
{
|
||||
if (close(fm->fd))
|
||||
log_error("Error closing file descriptor.");
|
||||
fm->fd = -1;
|
||||
}
|
||||
|
||||
static int _filemap_monitor_reopen_fd(struct filemap_monitor *fm)
|
||||
{
|
||||
int tries = FILEMAPD_NOFILE_WAIT_TRIES;
|
||||
|
||||
/*
|
||||
* In DM_FILEMAPD_FOLLOW_PATH mode, inotify watches must be
|
||||
* re-established whenever the file at the watched path is
|
||||
* changed.
|
||||
*
|
||||
* FIXME: stat file and skip if inode is unchanged.
|
||||
*/
|
||||
_filemap_monitor_end_notify(fm);
|
||||
if (fm->fd > 0)
|
||||
log_error("Filemap file descriptor already open.");
|
||||
|
||||
while ((fm->fd < 0) && --tries)
|
||||
if (((fm->fd = open(fm->path, O_RDONLY)) < 0) && tries)
|
||||
_filemap_monitor_wait(FILEMAPD_NOFILE_WAIT_USECS);
|
||||
|
||||
if (!tries && (fm->fd < 0)) {
|
||||
log_error("Could not re-open file descriptor.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
return _filemap_monitor_set_notify(fm);
|
||||
}
|
||||
|
||||
static int _filemap_monitor_get_events(struct filemap_monitor *fm)
|
||||
{
|
||||
/* alignment as per man(7) inotify */
|
||||
char buf[sizeof(struct inotify_event) + NAME_MAX + 1]
|
||||
__attribute__ ((aligned(__alignof__(struct inotify_event))));
|
||||
|
||||
struct inotify_event *event;
|
||||
int check = 0;
|
||||
ssize_t len;
|
||||
char *ptr;
|
||||
|
||||
if (fm->mode == DM_FILEMAPD_FOLLOW_PATH)
|
||||
_filemap_monitor_close_fd(fm);
|
||||
|
||||
len = read(fm->inotify_fd, (void *) &buf, sizeof(buf));
|
||||
|
||||
/* no events to read? */
|
||||
if (len < 0 && (errno == EAGAIN))
|
||||
goto out;
|
||||
|
||||
/* interrupted by signal? */
|
||||
if (len < 0 && (errno == EINTR))
|
||||
goto out;
|
||||
|
||||
if (len < 0)
|
||||
return -1;
|
||||
|
||||
if (!len)
|
||||
goto out;
|
||||
|
||||
for (ptr = buf; ptr < buf + len; ptr += sizeof(*event) + event->len) {
|
||||
event = (struct inotify_event *) ptr;
|
||||
if (event->mask & IN_DELETE_SELF)
|
||||
fm->deleted = 1;
|
||||
if (event->mask & IN_MODIFY)
|
||||
check = 1;
|
||||
/*
|
||||
* Event IN_IGNORED is generated when a file has been deleted
|
||||
* and IN_DELETE_SELF generated, and indicates that the file
|
||||
* watch has been automatically removed.
|
||||
*
|
||||
* This can only happen for the DM_FILEMAPD_FOLLOW_PATH mode,
|
||||
* since inotify IN_DELETE events are generated at the time
|
||||
* the inode is destroyed: DM_FILEMAPD_FOLLOW_INODE will hold
|
||||
* the file descriptor open, meaning that the event will not
|
||||
* be generated until after the daemon closes the file.
|
||||
*
|
||||
* The event is ignored here since inotify monitoring will
|
||||
* be reestablished (or the daemon will terminate) following
|
||||
* deletion of a DM_FILEMAPD_FOLLOW_PATH monitored file.
|
||||
*/
|
||||
if (event->mask & IN_IGNORED)
|
||||
log_very_verbose("Inotify watch removed: IN_IGNORED "
|
||||
"in event->mask");
|
||||
}
|
||||
|
||||
out:
|
||||
/*
|
||||
* Re-open file descriptor if required and log disposition.
|
||||
*/
|
||||
if (fm->mode == DM_FILEMAPD_FOLLOW_PATH)
|
||||
if (!_filemap_monitor_reopen_fd(fm))
|
||||
return -1;
|
||||
|
||||
log_very_verbose("exiting _filemap_monitor_get_events() with "
|
||||
"deleted=%d, check=%d", fm->deleted, check);
|
||||
return check;
|
||||
}
|
||||
|
||||
static void _filemap_monitor_destroy(struct filemap_monitor *fm)
|
||||
{
|
||||
if (fm->fd > 0) {
|
||||
_filemap_monitor_end_notify(fm);
|
||||
if (close(fm->fd))
|
||||
log_error("Error closing fd %d.", fm->fd);
|
||||
}
|
||||
}
|
||||
|
||||
static int _filemap_monitor_check_same_file(int fd1, int fd2)
|
||||
{
|
||||
struct stat buf1, buf2;
|
||||
|
||||
if ((fd1 < 0) || (fd2 < 0))
|
||||
return 0;
|
||||
|
||||
if (fstat(fd1, &buf1)) {
|
||||
log_error("Failed to fstat file descriptor %d", fd1);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (fstat(fd2, &buf2)) {
|
||||
log_error("Failed to fstat file descriptor %d", fd2);
|
||||
return -1;
|
||||
}
|
||||
|
||||
return ((buf1.st_dev == buf2.st_dev) && (buf1.st_ino == buf2.st_ino));
|
||||
}
|
||||
|
||||
static int _filemap_monitor_check_file_unlinked(struct filemap_monitor *fm)
|
||||
{
|
||||
char path_buf[PATH_MAX];
|
||||
char link_buf[PATH_MAX];
|
||||
int same, fd, len;
|
||||
|
||||
fm->deleted = 0;
|
||||
|
||||
if ((fd = open(fm->path, O_RDONLY)) < 0)
|
||||
goto check_unlinked;
|
||||
|
||||
if ((same = _filemap_monitor_check_same_file(fm->fd, fd)) < 0)
|
||||
return 0;
|
||||
|
||||
if (close(fd))
|
||||
log_error("Error closing fd %d", fd);
|
||||
|
||||
if (same)
|
||||
return 1;
|
||||
|
||||
check_unlinked:
|
||||
/*
|
||||
* The file has been unlinked from its original location: test
|
||||
* whether it is still reachable in the filesystem, or if it is
|
||||
* unlinked and anonymous.
|
||||
*/
|
||||
if (dm_snprintf(path_buf, sizeof(path_buf),
|
||||
"/proc/%d/fd/%d", getpid(), fm->fd) < 0) {
|
||||
log_error("Could not format pid path.");
|
||||
return 0;
|
||||
}
|
||||
if ((len = readlink(path_buf, link_buf, sizeof(link_buf))) < 0) {
|
||||
log_error("readlink failed for /proc/%d/fd/%d.",
|
||||
getpid(), fm->fd);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Try to re-open the file, from the path now reported in /proc/pid/fd.
|
||||
*/
|
||||
if ((fd = open(link_buf, O_RDONLY)) < 0)
|
||||
fm->deleted = 1;
|
||||
|
||||
if ((same = _filemap_monitor_check_same_file(fm->fd, fd)) < 0)
|
||||
return 0;
|
||||
|
||||
if ((fd > 0) && close(fd))
|
||||
log_error("Error closing fd %d", fd);
|
||||
|
||||
/* Should not happen with normal /proc. */
|
||||
if ((fd > 0) && !same) {
|
||||
log_error("File descriptor mismatch: %d and %s (read from %s) "
|
||||
"are not the same file!", fm->fd, link_buf, path_buf);
|
||||
return 0;
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int _daemonise(struct filemap_monitor *fm)
|
||||
{
|
||||
pid_t pid = 0, sid;
|
||||
int fd;
|
||||
|
||||
if (!(sid = setsid())) {
|
||||
_early_log("setsid failed.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if ((pid = fork()) < 0) {
|
||||
_early_log("Failed to fork daemon process.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (pid > 0) {
|
||||
if (_verbose)
|
||||
_early_log("Started dmfilemapd with pid=%d", pid);
|
||||
exit(0);
|
||||
}
|
||||
|
||||
if (chdir("/")) {
|
||||
_early_log("Failed to change directory.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (!_verbose) {
|
||||
if (close(STDIN_FILENO))
|
||||
_early_log("Error closing stdin");
|
||||
if (close(STDOUT_FILENO))
|
||||
_early_log("Error closing stdout");
|
||||
if (close(STDERR_FILENO))
|
||||
_early_log("Error closing stderr");
|
||||
if ((open("/dev/null", O_RDONLY) < 0) ||
|
||||
(open("/dev/null", O_WRONLY) < 0) ||
|
||||
(open("/dev/null", O_WRONLY) < 0)) {
|
||||
_early_log("Error opening stdio streams.");
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
for (fd = sysconf(_SC_OPEN_MAX) - 1; fd > STDERR_FILENO; fd--) {
|
||||
if (fd == fm->fd)
|
||||
continue;
|
||||
close(fd);
|
||||
}
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int _update_regions(struct dm_stats *dms, struct filemap_monitor *fm)
|
||||
{
|
||||
uint64_t *regions = NULL, *region, nr_regions = 0;
|
||||
|
||||
regions = dm_stats_update_regions_from_fd(dms, fm->fd, fm->group_id);
|
||||
if (!regions) {
|
||||
log_error("Failed to update filemap regions for group_id="
|
||||
FMTu64 ".", fm->group_id);
|
||||
return 0;
|
||||
}
|
||||
|
||||
for (region = regions; *region != DM_STATS_REGIONS_ALL; region++)
|
||||
nr_regions++;
|
||||
|
||||
if (regions[0] != fm->group_id) {
|
||||
log_warn("group_id changed from " FMTu64 " to " FMTu64,
|
||||
fm->group_id, regions[0]);
|
||||
fm->group_id = regions[0];
|
||||
}
|
||||
|
||||
fm->nr_regions = nr_regions;
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int _dmfilemapd(struct filemap_monitor *fm)
|
||||
{
|
||||
int running = 1, check = 0, open = 0;
|
||||
struct dm_stats *dms;
|
||||
|
||||
dms = dm_stats_create("dmstats"); /* FIXME */
|
||||
if (!dm_stats_bind_from_fd(dms, fm->fd)) {
|
||||
log_error("Could not bind dm_stats handle to file descriptor "
|
||||
"%d", fm->fd);
|
||||
goto bad;
|
||||
}
|
||||
|
||||
if (!_filemap_monitor_set_notify(fm))
|
||||
goto bad;
|
||||
|
||||
do {
|
||||
if (!dm_stats_list(dms, NULL)) {
|
||||
log_error("Failed to list stats handle.");
|
||||
goto bad;
|
||||
}
|
||||
|
||||
if (!dm_stats_group_present(dms, fm->group_id)) {
|
||||
log_info("Filemap group removed: exiting.");
|
||||
running = 0;
|
||||
continue;
|
||||
}
|
||||
|
||||
if ((check = _filemap_monitor_get_events(fm)) < 0)
|
||||
goto bad;
|
||||
|
||||
if (!check)
|
||||
goto wait;
|
||||
|
||||
if ((check = _filemap_fd_check_changed(fm)) < 0)
|
||||
goto bad;
|
||||
|
||||
if (!check)
|
||||
goto wait;
|
||||
|
||||
if (!_update_regions(dms, fm))
|
||||
goto bad;
|
||||
|
||||
wait:
|
||||
_filemap_monitor_wait(FILEMAPD_WAIT_USECS);
|
||||
|
||||
running = !!fm->nr_regions;
|
||||
|
||||
/* mode=inode termination condions */
|
||||
if (fm->mode == DM_FILEMAPD_FOLLOW_INODE) {
|
||||
if (!_filemap_monitor_check_file_unlinked(fm))
|
||||
goto bad;
|
||||
if (fm->deleted && !(open = _is_open(fm->path))) {
|
||||
log_info("File unlinked and closed: exiting.");
|
||||
running = 0;
|
||||
} else if (fm->deleted && open)
|
||||
log_verbose("File unlinked and open: "
|
||||
"continuing.");
|
||||
}
|
||||
|
||||
} while (running);
|
||||
|
||||
_filemap_monitor_destroy(fm);
|
||||
dm_stats_destroy(dms);
|
||||
return 0;
|
||||
|
||||
bad:
|
||||
_filemap_monitor_destroy(fm);
|
||||
dm_stats_destroy(dms);
|
||||
log_error("Exiting");
|
||||
return 1;
|
||||
}
|
||||
|
||||
static const char * _mode_names[] = {
|
||||
"inode",
|
||||
"path"
|
||||
};
|
||||
|
||||
/*
|
||||
* dmfilemapd <fd> <group_id> <path> <mode> [<foreground>[<log_level>]]
|
||||
*/
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
struct filemap_monitor fm;
|
||||
|
||||
if (!_parse_args(argc, argv, &fm))
|
||||
return 1;
|
||||
|
||||
_setup_logging();
|
||||
|
||||
log_info("Starting dmfilemapd with fd=%d, group_id=" FMTu64 " "
|
||||
"mode=%s, path=%s", fm.fd, fm.group_id,
|
||||
_mode_names[fm.mode], fm.path);
|
||||
|
||||
if (!_foreground && !_daemonise(&fm))
|
||||
return 1;
|
||||
|
||||
return _dmfilemapd(&fm);
|
||||
}
|
@@ -19,12 +19,10 @@

#define MIN_ARGV_SIZE 8

static const char *const polling_ops[] = {
	[PVMOVE] = LVMPD_REQ_PVMOVE,
	[CONVERT] = LVMPD_REQ_CONVERT,
	[MERGE] = LVMPD_REQ_MERGE,
	[MERGE_THIN] = LVMPD_REQ_MERGE_THIN
};
static const char *const const polling_ops[] = { [PVMOVE] = LVMPD_REQ_PVMOVE,
						  [CONVERT] = LVMPD_REQ_CONVERT,
						  [MERGE] = LVMPD_REQ_MERGE,
						  [MERGE_THIN] = LVMPD_REQ_MERGE_THIN };

const char *polling_op(enum poll_type type)
{
@@ -127,9 +127,6 @@
/* Path to dmeventd pidfile. */
#undef DMEVENTD_PIDFILE

/* Define to 1 to enable the device-mapper filemap daemon. */
#undef DMFILEMAPD

/* Define to enable compat protocol */
#undef DM_COMPAT

@@ -1,6 +1,6 @@
|
||||
/*
|
||||
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
|
||||
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
|
||||
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
|
||||
*
|
||||
* This file is part of LVM2.
|
||||
*
|
||||
@@ -272,18 +272,10 @@ int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt)
|
||||
{
|
||||
return 0;
|
||||
@@ -992,30 +984,6 @@ int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
return lv_mirror_percent(lv->vg->cmd, lv, 0, percent, NULL);
}

int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset)
{
int r;
struct dev_manager *dm;
struct dm_status_raid *status;

if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;

log_debug_activation("Checking raid data offset and dev sectors for LV %s/%s",
lv->vg->name, lv->name);
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;

if (!(r = dev_manager_raid_status(dm, lv, &status)))
stack;

*data_offset = status->data_offset;

dev_manager_destroy(dm);

return r;
}

int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
{
int r;

@@ -1045,32 +1013,6 @@ int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
return r;
}

int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt)
{
struct dev_manager *dm;
struct dm_status_raid *status;

*dev_cnt = 0;

if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;

log_debug_activation("Checking raid device count for LV %s/%s",
lv->vg->name, lv->name);
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;

if (!dev_manager_raid_status(dm, lv, &status)) {
dev_manager_destroy(dm);
return_0;
}
*dev_cnt = status->dev_count;

dev_manager_destroy(dm);

return 1;
}

int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt)
{
struct dev_manager *dm;
@@ -2006,13 +1948,16 @@ int monitor_dev_for_events(struct cmd_context *cmd, const struct logical_volume

/* Check [un]monitor results */
/* Try a couple times if pending, but not forever... */
for (i = 0;; i++) {
for (i = 0; i < 40; i++) {
pending = 0;
monitored = seg->segtype->ops->target_monitored(seg, &pending);
if (!pending || i >= 40)
if (pending ||
(!monitored && monitor) ||
(monitored && !monitor))
log_very_verbose("%s %smonitoring still pending: waiting...",
display_lvname(lv), monitor ? "" : "un");
else
break;
log_very_verbose("%s %smonitoring still pending: waiting...",
display_lvname(lv), monitor ? "" : "un");
usleep(10000 * i);
}
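
The hunk above bounds the previously open-ended wait: at most 40 polls, sleeping an extra 10 ms per iteration while the (un)monitoring request is still pending. A minimal standalone sketch of that retry pattern (the poll_fn callback and all names here are invented for illustration, not part of LVM2):

#include <unistd.h>

/* Poll a condition up to max_tries times, backing off linearly by
 * step_us per attempt; returns 1 once ready, 0 if still pending. */
static int wait_until_ready(int (*poll_fn)(void *), void *ctx,
                            unsigned max_tries, useconds_t step_us)
{
	unsigned i;

	for (i = 0; i < max_tries; i++) {
		if (poll_fn(ctx))
			return 1;            /* no longer pending */
		usleep(step_us * i);         /* 0, step, 2*step, ... */
	}

	return 0;                            /* give up after max_tries */
}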

@@ -168,8 +168,6 @@ int lv_snapshot_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
int wait, dm_percent_t *percent, uint32_t *event_nr);
int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt);
int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset);
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health);
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt);
int lv_raid_sync_action(const struct logical_volume *lv, char **sync_action);

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -214,14 +214,6 @@ typedef enum {
STATUS, /* DM_DEVICE_STATUS ioctl */
} info_type_t;

/* Return length of segment depending on type and reshape_len */
static uint32_t _seg_len(const struct lv_segment *seg)
{
uint32_t reshape_len = seg_is_raid(seg) ? ((seg->area_count - seg->segtype->parity_devs) * seg->reshape_len) : 0;

return seg->len - reshape_len;
}

static int _info_run(const char *dlid, struct dm_info *dminfo,
uint32_t *read_ahead,
struct lv_seg_status *seg_status,
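
To make the arithmetic in _seg_len() above concrete, a small worked example with invented numbers: for a raid5 segment with area_count = 4, parity_devs = 1 and reshape_len = 2 extents per image, the hidden reshape space is (4 - 1) * 2 = 6 extents, so a segment whose len is 106 reports a usable length of 100. Non-RAID segments keep reshape_len at 0 and are unaffected.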
@@ -258,7 +250,7 @@ static int _info_run(const char *dlid, struct dm_info *dminfo,
if (seg_status && dminfo->exists) {
start = length = seg_status->seg->lv->vg->extent_size;
start *= seg_status->seg->le;
length *= _seg_len(seg_status->seg);
length *= seg_status->seg->len;

do {
target = dm_get_next_target(dmt, target, &target_start,

@@ -1316,13 +1308,14 @@ int dev_manager_raid_message(struct dev_manager *dm,
return 0;
}

/* These are the supported RAID messages for dm-raid v1.9.0 */
/* These are the supported RAID messages for dm-raid v1.5.0 */
if (strcmp(msg, "idle") &&
strcmp(msg, "frozen") &&
strcmp(msg, "resync") &&
strcmp(msg, "recover") &&
strcmp(msg, "check") &&
strcmp(msg, "repair")) {
strcmp(msg, "repair") &&
strcmp(msg, "reshape")) {
log_error(INTERNAL_ERROR "Unknown RAID message: %s.", msg);
return 0;
}

@@ -2221,7 +2214,7 @@ static char *_add_error_or_zero_device(struct dev_manager *dm, struct dm_tree *d
struct lv_segment *seg_i;
struct dm_info info;
int segno = -1, i = 0;
uint64_t size = (uint64_t) _seg_len(seg) * seg->lv->vg->extent_size;
uint64_t size = (uint64_t) seg->len * seg->lv->vg->extent_size;

dm_list_iterate_items(seg_i, &seg->lv->segments) {
if (seg == seg_i) {

@@ -2507,7 +2500,7 @@ static int _add_target_to_dtree(struct dev_manager *dm,
return seg->segtype->ops->add_target_line(dm, dm->mem, dm->cmd,
&dm->target_state, seg,
laopts, dnode,
extent_size * _seg_len(seg),
extent_size * seg->len,
&dm->pvmove_mirror_count);
}

@@ -2700,7 +2693,7 @@ static int _add_segment_to_dtree(struct dev_manager *dm,
/* Replace target and all its used devs with error mapping */
log_debug_activation("Using error for pending delete %s.",
display_lvname(seg->lv));
if (!dm_tree_node_add_error_target(dnode, (uint64_t)seg->lv->vg->extent_size * _seg_len(seg)))
if (!dm_tree_node_add_error_target(dnode, (uint64_t)seg->lv->vg->extent_size * seg->len))
return_0;
} else if (!_add_target_to_dtree(dm, dnode, seg, laopts))
return_0;

@@ -3172,6 +3165,7 @@ static int _tree_action(struct dev_manager *dm, const struct logical_volume *lv,
log_error(INTERNAL_ERROR "_tree_action: Action %u not supported.", action);
goto out;
}

r = 1;

out:
@@ -1864,9 +1864,7 @@ cfg(dmeventd_thin_command_CFG, "thin_command", dmeventd_CFG_SECTION, CFG_DEFAULT
"The plugin runs command with each 5% increment when thin-pool data volume\n"
"or metadata volume gets above 50%.\n"
"Command which starts with 'lvm ' prefix is internal lvm command.\n"
"You can write your own handler to customise behaviour in more details.\n"
"User handler is specified with the full path starting with '/'.\n")
/* TODO: systemd service handler */
"You can write your own handler to customise behaviour in more details.\n")

cfg(dmeventd_executable_CFG, "executable", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_PATH, vsn(2, 2, 73), "@DMEVENTD_PATH@", 0, NULL,
"The full path to the dmeventd binary.\n")

@@ -71,7 +71,7 @@
* FIXME: Increase these to 64 and further to the MD maximum
* once the SubLVs split and name shift got enhanced
*/
#define DEFAULT_RAID1_MAX_IMAGES 64
#define DEFAULT_RAID1_MAX_IMAGES 10
#define DEFAULT_RAID_MAX_IMAGES 64
#define DEFAULT_ALLOCATION_STRIPE_ALL_DEVICES 0 /* Don't stripe across all devices if not -i/--stripes given */

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -225,8 +225,8 @@ static int _read_linear(struct cmd_context *cmd, struct lv_map *lvm)
while (le < lvm->lv->le_count) {
len = _area_length(lvm, le);

if (!(seg = alloc_lv_segment(segtype, lvm->lv, le, len, 0, 0, 0,
NULL, 1, len, 0, 0, 0, 0, NULL))) {
if (!(seg = alloc_lv_segment(segtype, lvm->lv, le, len, 0, 0,
NULL, 1, len, 0, 0, 0, NULL))) {
log_error("Failed to allocate linear segment.");
return 0;
}

@@ -297,10 +297,10 @@ static int _read_stripes(struct cmd_context *cmd, struct lv_map *lvm)

if (!(seg = alloc_lv_segment(segtype, lvm->lv,
lvm->stripes * first_area_le,
lvm->stripes * area_len, 0,
lvm->stripes * area_len,
0, lvm->stripe_size, NULL,
lvm->stripes,
area_len, 0, 0, 0, 0, NULL))) {
area_len, 0, 0, 0, NULL))) {
log_error("Failed to allocate striped segment.");
return 0;
}
@@ -1,6 +1,6 @@
/*
* Copyright (C) 1997-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -192,9 +192,9 @@ static int _add_stripe_seg(struct dm_pool *mem,
return_0;

if (!(seg = alloc_lv_segment(segtype, lv, *le_cur,
area_len * usp->num_devs, 0, 0,
area_len * usp->num_devs, 0,
usp->striping, NULL, usp->num_devs,
area_len, 0, 0, 0, 0, NULL))) {
area_len, 0, 0, 0, NULL))) {
log_error("Unable to allocate striped lv_segment structure");
return 0;
}

@@ -232,8 +232,8 @@ static int _add_linear_seg(struct dm_pool *mem,
area_len = (usp->devs[j].blocks) / POOL_PE_SIZE;

if (!(seg = alloc_lv_segment(segtype, lv, *le_cur,
area_len, 0, 0, usp->striping,
NULL, 1, area_len, 0,
area_len, 0, usp->striping,
NULL, 1, area_len,
POOL_PE_SIZE, 0, 0, NULL))) {
log_error("Unable to allocate linear lv_segment "
"structure");

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2009 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -583,10 +583,8 @@ static int _print_segment(struct formatter *f, struct volume_group *vg,
outf(f, "start_extent = %u", seg->le);
outsize(f, (uint64_t) seg->len * vg->extent_size,
"extent_count = %u", seg->len);

outnl(f);
if (seg->reshape_len)
outsize(f, (uint64_t) seg->reshape_len * vg->extent_size,
"reshape_count = %u", seg->reshape_len);
outf(f, "type = \"%s\"", seg->segtype->name);

if (!_out_list(f, &seg->tags, "tags"))

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2013 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -61,9 +61,6 @@ static const struct flag _lv_flags[] = {
{LOCKED, "LOCKED", STATUS_FLAG},
{LV_NOTSYNCED, "NOTSYNCED", STATUS_FLAG},
{LV_REBUILD, "REBUILD", STATUS_FLAG},
{LV_RESHAPE_DELTA_DISKS_PLUS, "RESHAPE_DELTA_DISKS_PLUS", STATUS_FLAG},
{LV_RESHAPE_DELTA_DISKS_MINUS, "RESHAPE_DELTA_DISKS_MINUS", STATUS_FLAG},
{LV_REMOVE_AFTER_RESHAPE, "REMOVE_AFTER_RESHAPE", STATUS_FLAG},
{LV_WRITEMOSTLY, "WRITEMOSTLY", STATUS_FLAG},
{LV_ACTIVATION_SKIP, "ACTIVATION_SKIP", COMPATIBLE_FLAG},
{LV_ERROR_WHEN_FULL, "ERROR_WHEN_FULL", COMPATIBLE_FLAG},
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -354,7 +354,7 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
struct lv_segment *seg;
const struct dm_config_node *sn_child = sn->child;
const struct dm_config_value *cv;
uint32_t area_extents, start_extent, extent_count, reshape_count, data_copies;
uint32_t start_extent, extent_count;
struct segment_type *segtype;
const char *segtype_str;

@@ -375,12 +375,6 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
return 0;
}

if (!_read_int32(sn_child, "reshape_count", &reshape_count))
reshape_count = 0;

if (!_read_int32(sn_child, "data_copies", &data_copies))
data_copies = 1;

segtype_str = SEG_TYPE_NAME_STRIPED;

if (!dm_config_get_str(sn_child, "type", &segtype_str)) {

@@ -395,11 +389,9 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
!segtype->ops->text_import_area_count(sn_child, &area_count))
return_0;

area_extents = segtype->parity_devs ?
raid_rimage_extents(segtype, extent_count, area_count - segtype->parity_devs, data_copies) : extent_count;
if (!(seg = alloc_lv_segment(segtype, lv, start_extent,
extent_count, reshape_count, 0, 0, NULL, area_count,
area_extents, data_copies, 0, 0, 0, NULL))) {
extent_count, 0, 0, NULL, area_count,
extent_count, 0, 0, 0, NULL))) {
log_error("Segment allocation failed");
return 0;
}
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -1104,19 +1104,6 @@ int lv_raid_healthy(const struct logical_volume *lv)
return 1;
}

/* Helper: check for any sub LVs after a disk removing reshape */
static int _sublvs_remove_after_reshape(const struct logical_volume *lv)
{
uint32_t s;
struct lv_segment *seg = first_seg(lv);

for (s = seg->area_count -1; s; s--)
if (seg_lv(seg, s)->status & LV_REMOVE_AFTER_RESHAPE)
return 1;

return 0;
}

char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_with_info_and_seg_status *lvdm)
{
const struct logical_volume *lv = lvdm->lv;

@@ -1282,8 +1269,6 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
repstr[8] = 'p';
else if (lv_is_raid_type(lv)) {
uint64_t n;
char *sync_action;

if (!activation())
repstr[8] = 'X'; /* Unknown */
else if (!lv_raid_healthy(lv))

@@ -1291,17 +1276,8 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
else if (lv_is_raid(lv)) {
if (lv_raid_mismatch_count(lv, &n) && n)
repstr[8] = 'm'; /* RAID has 'm'ismatches */
else if (lv_raid_sync_action(lv, &sync_action) &&
!strcmp(sync_action, "reshape"))
repstr[8] = 's'; /* LV is re(s)haping */
else if (_sublvs_remove_after_reshape(lv))
repstr[8] = 'R'; /* sub-LV got freed from raid set by reshaping
and has to be 'R'emoved */
} else if (lv->status & LV_WRITEMOSTLY)
repstr[8] = 'w'; /* sub-LV has 'w'ritemostly */
else if (lv->status & LV_REMOVE_AFTER_RESHAPE)
repstr[8] = 'R'; /* sub-LV got freed from raid set by reshaping
and has to be 'R'emoved */
} else if (lvdm->seg_status.type == SEG_STATUS_CACHE) {
if (lvdm->seg_status.cache->fail)
repstr[8] = 'F';
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2003-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -21,13 +21,11 @@
struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
struct logical_volume *lv,
uint32_t le, uint32_t len,
uint32_t reshape_len,
uint64_t status,
uint32_t stripe_size,
struct logical_volume *log_lv,
uint32_t area_count,
uint32_t area_len,
uint32_t data_copies,
uint32_t chunk_size,
uint32_t region_size,
uint32_t extents_copied,
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2014 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -897,9 +897,8 @@ static uint32_t _round_to_stripe_boundary(struct volume_group *vg, uint32_t exte
/* Round up extents to stripe divisible amount */
if ((size_rest = extents % stripes)) {
new_extents += extend ? stripes - size_rest : -size_rest;
log_print_unless_silent("Rounding size %s (%u extents) %s to stripe boundary size %s(%u extents).",
log_print_unless_silent("Rounding size %s (%u extents) up to stripe boundary size %s (%u extents).",
display_size(vg->cmd, (uint64_t) extents * vg->extent_size), extents,
new_extents < extents ? "down" : "up",
display_size(vg->cmd, (uint64_t) new_extents * vg->extent_size), new_extents);
}
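
A quick worked example for the rounding above (numbers invented): with stripes = 4 and extents = 103, size_rest = 103 % 4 = 3; an extend rounds up to 103 + (4 - 3) = 104 extents, while a reduce rounds down to 103 - 3 = 100 extents, so the result is always divisible by the stripe count.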

@@ -912,13 +911,11 @@ static uint32_t _round_to_stripe_boundary(struct volume_group *vg, uint32_t exte
struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
struct logical_volume *lv,
uint32_t le, uint32_t len,
uint32_t reshape_len,
uint64_t status,
uint32_t stripe_size,
struct logical_volume *log_lv,
uint32_t area_count,
uint32_t area_len,
uint32_t data_copies,
uint32_t chunk_size,
uint32_t region_size,
uint32_t extents_copied,

@@ -952,12 +949,10 @@ struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
seg->lv = lv;
seg->le = le;
seg->len = len;
seg->reshape_len = reshape_len;
seg->status = status;
seg->stripe_size = stripe_size;
seg->area_count = area_count;
seg->area_len = area_len;
seg->data_copies = data_copies ? : lv_raid_data_copies(segtype, area_count);
seg->chunk_size = chunk_size;
seg->region_size = region_size;
seg->extents_copied = extents_copied;

@@ -978,37 +973,6 @@ struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
return seg;
}

/*
* Temporary helper to return number of data copies for
* RAID segment @seg until seg->data_copies got added
*/
static uint32_t _raid_data_copies(struct lv_segment *seg)
{
/*
* FIXME: needs to change once more than 2 are supported.
* I.e. use seg->data_copies then
*/
if (seg_is_raid10(seg))
return 2;
else if (seg_is_raid1(seg))
return seg->area_count;

return seg->segtype->parity_devs + 1;
}

/* Data image count for RAID segment @seg */
static uint32_t _raid_stripes_count(struct lv_segment *seg)
{
/*
* FIXME: raid10 needs to change once more than
* 2 data_copies and odd # of legs supported.
*/
if (seg_is_raid10(seg))
return seg->area_count / _raid_data_copies(seg);

return seg->area_count - seg->segtype->parity_devs;
}
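
A few example layouts (invented) evaluated with the two helpers above:

/* raid1,  area_count = 3, parity_devs = 0  ->  data copies 3, stripes 3
 * raid5,  area_count = 5, parity_devs = 1  ->  data copies 2, stripes 4
 * raid10, area_count = 6                   ->  data copies 2, stripes 3 */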

static int _release_and_discard_lv_segment_area(struct lv_segment *seg, uint32_t s,
uint32_t area_reduction, int with_discard)
{

@@ -1049,39 +1013,32 @@ static int _release_and_discard_lv_segment_area(struct lv_segment *seg, uint32_t
}

if (lv_is_raid_image(lv)) {
/* Calculate the amount of extents to reduce per rmate/rimage LV */
uint32_t rimage_extents;
struct lv_segment *seg1 = first_seg(lv);

/* FIXME: avoid extra seg_is_*() conditionals here */
rimage_extents = raid_rimage_extents(seg1->segtype, area_reduction, seg_is_any_raid0(seg) ? 0 : _raid_stripes_count(seg),
seg_is_raid10(seg) ? 1 :_raid_data_copies(seg));
if (!rimage_extents)
/*
* FIXME: Use lv_reduce not lv_remove
* We use lv_remove for now, because I haven't figured out
* why lv_reduce won't remove the LV.
lv_reduce(lv, area_reduction);
*/
if (area_reduction != seg->area_len) {
log_error("Unable to reduce RAID LV - operation not implemented.");
return 0;

if (seg->meta_areas) {
uint32_t meta_area_reduction;
struct logical_volume *mlv;
struct volume_group *vg = lv->vg;

if (seg_metatype(seg, s) != AREA_LV ||
!(mlv = seg_metalv(seg, s)))
} else {
if (!lv_remove(lv)) {
log_error("Failed to remove RAID image %s.",
display_lvname(lv));
return 0;

meta_area_reduction = raid_rmeta_extents_delta(vg->cmd, lv->le_count, lv->le_count - rimage_extents,
seg->region_size, vg->extent_size);
/* Limit for raid0_meta not having region size set */
if (meta_area_reduction > mlv->le_count ||
!(lv->le_count - rimage_extents))
meta_area_reduction = mlv->le_count;

if (meta_area_reduction &&
!lv_reduce(mlv, meta_area_reduction))
return_0; /* FIXME: any upper level reporting */
}
}

if (!lv_reduce(lv, rimage_extents))
return_0; /* FIXME: any upper level reporting */
/* Remove metadata area if image has been removed */
if (seg->meta_areas && seg_metalv(seg, s) && (area_reduction == seg->area_len)) {
if (!lv_reduce(seg_metalv(seg, s),
seg_metalv(seg, s)->le_count)) {
log_error("Failed to remove RAID meta-device %s.",
display_lvname(seg_metalv(seg, s)));
return 0;
}
}

return 1;
}
@@ -1261,7 +1218,7 @@ static uint32_t _calc_area_multiple(const struct segment_type *segtype,
* the 'stripes' argument will always need to
* be given.
*/
if (segtype_is_raid10(segtype)) {
if (!strcmp(segtype->name, _lv_type_names[LV_TYPE_RAID10])) {
if (!stripes)
return area_count / 2;
return stripes;

@@ -1281,17 +1238,16 @@ static uint32_t _calc_area_multiple(const struct segment_type *segtype,
static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
{
uint32_t area_reduction, s;
uint32_t areas = (seg->area_count / (seg_is_raid10(seg) ? seg->data_copies : 1)) - seg->segtype->parity_devs;

/* Caller must ensure exact divisibility */
if (seg_is_striped(seg) || seg_is_striped_raid(seg)) {
if (reduction % areas) {
if (seg_is_striped(seg)) {
if (reduction % seg->area_count) {
log_error("Segment extent reduction %" PRIu32
" not divisible by #stripes %" PRIu32,
reduction, seg->area_count);
return 0;
}
area_reduction = reduction / areas;
area_reduction = (reduction / seg->area_count);
} else
area_reduction = reduction;

@@ -1300,11 +1256,7 @@ static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
return_0;

seg->len -= reduction;

if (seg_is_raid(seg))
seg->area_len = seg->len;
else
seg->area_len -= area_reduction;
seg->area_len -= area_reduction;

return 1;
}
@@ -1314,13 +1266,11 @@ static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
*/
static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
{
struct lv_segment *seg = first_seg(lv);;
struct lv_segment *seg;
uint32_t count = extents;
uint32_t reduction;
struct logical_volume *pool_lv;
struct logical_volume *external_lv = NULL;
int is_raid10 = seg_is_any_raid10(seg) && seg->reshape_len;
uint32_t data_copies = seg->data_copies;

if (lv_is_merging_origin(lv)) {
log_debug_metadata("Dropping snapshot merge of %s to removed origin %s.",

@@ -1383,18 +1333,8 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
count -= reduction;
}

seg = first_seg(lv);

if (is_raid10) {
lv->le_count -= extents * data_copies;
if (seg)
seg->len = seg->area_len = lv->le_count;
} else
lv->le_count -= extents;

lv->le_count -= extents;
lv->size = (uint64_t) lv->le_count * lv->vg->extent_size;
if (seg)
seg->extents_copied = seg->len;

if (!delete)
return 1;

@@ -1505,12 +1445,6 @@ int lv_refresh_suspend_resume(const struct logical_volume *lv)
*/
int lv_reduce(struct logical_volume *lv, uint32_t extents)
{
struct lv_segment *seg = first_seg(lv);

/* Ensure stripe boundary extents on RAID LVs */
if (lv_is_raid(lv) && extents != lv->le_count)
extents =_round_to_stripe_boundary(lv->vg, extents,
seg_is_raid1(seg) ? 0 : _raid_stripes_count(seg), 0);
return _lv_reduce(lv, extents, 1);
}
@@ -1812,10 +1746,10 @@ static int _setup_alloced_segment(struct logical_volume *lv, uint64_t status,
area_multiple = _calc_area_multiple(segtype, area_count, 0);
extents = aa[0].len * area_multiple;

if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents, 0,
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents,
status, stripe_size, NULL,
area_count,
aa[0].len, 0, 0u, region_size, 0u, NULL))) {
aa[0].len, 0u, region_size, 0u, NULL))) {
log_error("Couldn't allocate new LV segment.");
return 0;
}

@@ -3253,9 +3187,9 @@ int lv_add_virtual_segment(struct logical_volume *lv, uint64_t status,
seg->area_len += extents;
seg->len += extents;
} else {
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents, 0,
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents,
status, 0, NULL, 0,
extents, 0, 0, 0, 0, NULL))) {
extents, 0, 0, 0, NULL))) {
log_error("Couldn't allocate new %s segment.", segtype->name);
return 0;
}

@@ -3374,24 +3308,19 @@ static struct alloc_handle *_alloc_init(struct cmd_context *cmd,

if (segtype_is_raid(segtype)) {
if (metadata_area_count) {
uint32_t cur_rimage_extents, new_rimage_extents;

if (metadata_area_count != area_count)
log_error(INTERNAL_ERROR
"Bad metadata_area_count");
ah->metadata_area_count = area_count;
ah->alloc_and_split_meta = 1;

ah->log_len = RAID_METADATA_AREA_LEN;

/* Calculate log_len (i.e. length of each rmeta device) for RAID */
cur_rimage_extents = raid_rimage_extents(segtype, existing_extents, stripes, mirrors);
new_rimage_extents = raid_rimage_extents(segtype, existing_extents + new_extents, stripes, mirrors),
ah->log_len = raid_rmeta_extents_delta(cmd, cur_rimage_extents, new_rimage_extents,
region_size, extent_size);
ah->metadata_area_count = metadata_area_count;
ah->alloc_and_split_meta = !!ah->log_len;
/*
* We need 'log_len' extents for each
* RAID device's metadata_area
*/
total_extents += ah->log_len * (segtype_is_raid1(segtype) ? 1 : ah->area_multiple);
total_extents += (ah->log_len * ah->area_multiple);
} else {
ah->log_area_count = 0;
ah->log_len = 0;

@@ -3581,10 +3510,10 @@ static struct lv_segment *_convert_seg_to_mirror(struct lv_segment *seg,
}

if (!(newseg = alloc_lv_segment(get_segtype_from_string(seg->lv->vg->cmd, SEG_TYPE_NAME_MIRROR),
seg->lv, seg->le, seg->len, 0,
seg->lv, seg->le, seg->len,
seg->status, seg->stripe_size,
log_lv,
seg->area_count, seg->area_len, 0,
seg->area_count, seg->area_len,
seg->chunk_size, region_size,
seg->extents_copied, NULL))) {
log_error("Couldn't allocate converted LV segment.");

@@ -3686,8 +3615,8 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
}

if (!(new_seg = alloc_lv_segment(segtype, copy_lv,
seg->le, seg->len, 0, PVMOVE, 0,
NULL, 1, seg->len, 0,
seg->le, seg->len, PVMOVE, 0,
NULL, 1, seg->len,
0, 0, 0, NULL)))
return_0;
@@ -3882,9 +3811,9 @@ static int _lv_insert_empty_sublvs(struct logical_volume *lv,
/*
* First, create our top-level segment for our top-level LV
*/
if (!(mapseg = alloc_lv_segment(segtype, lv, 0, 0, 0, lv->status,
if (!(mapseg = alloc_lv_segment(segtype, lv, 0, 0, lv->status,
stripe_size, NULL,
devices, 0, 0, 0, region_size, 0, NULL))) {
devices, 0, 0, region_size, 0, NULL))) {
log_error("Failed to create mapping segment for %s.",
display_lvname(lv));
return 0;

@@ -3944,7 +3873,7 @@ bad:
static int _lv_extend_layered_lv(struct alloc_handle *ah,
struct logical_volume *lv,
uint32_t extents, uint32_t first_area,
uint32_t mirrors, uint32_t stripes, uint32_t stripe_size)
uint32_t stripes, uint32_t stripe_size)
{
const struct segment_type *segtype;
struct logical_volume *sub_lv, *meta_lv;

@@ -3972,7 +3901,7 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
for (fa = first_area, s = 0; s < seg->area_count; s++) {
if (is_temporary_mirror_layer(seg_lv(seg, s))) {
if (!_lv_extend_layered_lv(ah, seg_lv(seg, s), extents / area_multiple,
fa, mirrors, stripes, stripe_size))
fa, stripes, stripe_size))
return_0;
fa += lv_mirror_count(seg_lv(seg, s));
continue;

@@ -3986,8 +3915,6 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
return 0;
}

last_seg(lv)->data_copies = mirrors;

/* Extend metadata LVs only on initial creation */
if (seg_is_raid_with_meta(seg) && !lv->le_count) {
if (!seg->meta_areas) {

@@ -4084,15 +4011,25 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
lv_set_hidden(seg_metalv(seg, s));
}

seg->area_len += extents / area_multiple;
seg->len += extents;
if (seg_is_raid(seg))
seg->area_len = seg->len;
else
seg->area_len += extents / area_multiple;

if (!_setup_lv_size(lv, lv->le_count + extents))
return_0;

/*
* The MD bitmap is limited to being able to track 2^21 regions.
* The region_size must be adjusted to meet that criteria
* unless raid0/raid0_meta, which doesn't have a bitmap.
*/
if (seg_is_raid(seg) && !seg_is_any_raid0(seg))
while (seg->region_size < (lv->size / (1 << 21))) {
seg->region_size *= 2;
log_very_verbose("Adjusting RAID region_size from %uS to %uS"
" to support large LV size",
seg->region_size/2, seg->region_size);
}

return 1;
}
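
A worked example for the 2^21-region limit handled just above (numbers invented): for lv->size = 4 TiB = 2^33 sectors, lv->size / (1 << 21) is 4096 sectors. Starting from a hypothetical region_size of 1024 sectors (512 KiB), the loop doubles it twice, 1024 -> 2048 -> 4096, at which point the LV spans exactly 2^21 regions and fits the MD bitmap.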

@@ -4119,7 +4056,6 @@ int lv_extend(struct logical_volume *lv,
uint32_t sub_lv_count;
uint32_t old_extents;
uint32_t new_extents; /* Total logical size after extension. */
uint64_t raid_size;

log_very_verbose("Adding segment of type %s to LV %s.", segtype->name, lv->name);

@@ -4141,22 +4077,6 @@ int lv_extend(struct logical_volume *lv,
}
/* FIXME log_count should be 1 for mirrors */

if (segtype_is_raid(segtype) && !segtype_is_any_raid0(segtype)) {
raid_size = ((uint64_t) lv->le_count + extents) * lv->vg->extent_size;

/*
* The MD bitmap is limited to being able to track 2^21 regions.
* The region_size must be adjusted to meet that criteria
* unless raid0/raid0_meta, which doesn't have a bitmap.
*/

region_size = raid_ensure_min_region_size(lv, raid_size, region_size);

if (first_seg(lv))
first_seg(lv)->region_size = region_size;

}

if (!(ah = allocate_extents(lv->vg, lv, segtype, stripes, mirrors,
log_count, region_size, extents,
allocatable_pvs, alloc, approx_alloc, NULL)))

@@ -4195,7 +4115,7 @@ int lv_extend(struct logical_volume *lv,
}

if (!(r = _lv_extend_layered_lv(ah, lv, new_extents - lv->le_count, 0,
mirrors, stripes, stripe_size)))
stripes, stripe_size)))
goto_out;

/*

@@ -4737,11 +4657,6 @@ static uint32_t lvseg_get_stripes(struct lv_segment *seg, uint32_t *stripesize)
return seg->area_count;
}

if (seg_is_raid(seg)) {
*stripesize = seg->stripe_size;
return _raid_stripes_count(seg);
}

*stripesize = 0;
return 0;
}

@@ -5407,7 +5322,6 @@ int lv_resize(struct logical_volume *lv,
struct logical_volume *lock_lv = (struct logical_volume*) lv_lock_holder(lv);
struct logical_volume *aux_lv = NULL; /* Note: aux_lv never resizes fs */
struct lvresize_params aux_lp;
struct lv_segment *seg = first_seg(lv);
int activated = 0;
int ret = 0;
int status;

@@ -5415,17 +5329,6 @@ int lv_resize(struct logical_volume *lv,
if (!_lvresize_check(lv, lp))
return_0;

if (seg->reshape_len) {
/* Prevent resizing on out-of-sync reshapable raid */
if (!lv_raid_in_sync(lv)) {
log_error("Can't resize reshaping LV %s.", display_lvname(lv));
return 0;
}
/* Remove any striped raid reshape space for LV resizing */
if (!lv_raid_free_reshape_space(lv))
return_0;
}

if (lp->use_policies) {
lp->extents = 0;
lp->sign = SIGN_PLUS;

@@ -5460,11 +5363,6 @@ int lv_resize(struct logical_volume *lv,
}
}

/* Ensure stripe boundary extents! */
if (!lp->percent && lv_is_raid(lv))
lp->extents =_round_to_stripe_boundary(lv->vg, lp->extents,
seg_is_raid1(seg) ? 0 : _raid_stripes_count(seg),
lp->resize == LV_REDUCE ? 0 : 1);
if (aux_lv && !_lvresize_prepare(&aux_lv, &aux_lp, pvh))
return_0;
@@ -5937,7 +5835,6 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
int ask_discard;
struct lv_list *lvl;
struct seg_list *sl;
struct lv_segment *seg = first_seg(lv);
int is_last_pool = lv_is_pool(lv);

vg = lv->vg;

@@ -6044,13 +5941,6 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
is_last_pool = 1;
}

/* Special case removing a striped raid LV with allocated reshape space */
if (seg && seg->reshape_len) {
if (!(seg->segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_STRIPED)))
return_0;
lv->le_count = seg->len = seg->area_len = seg_lv(seg, 0)->le_count * seg->area_count;
}

/* Used cache pool, COW or historical LV cannot be activated */
if ((!lv_is_cache_pool(lv) || dm_list_empty(&lv->segs_using_this_lv)) &&
!lv_is_cow(lv) && !lv_is_historical(lv) &&

@@ -6272,21 +6162,12 @@ int lv_remove_with_dependencies(struct cmd_context *cmd, struct logical_volume *
/* Remove snapshot LVs first */
if ((force == PROMPT) &&
/* Active snapshot already needs to confirm each active LV */
(yes_no_prompt("Do you really want to remove%s "
"%sorigin logical volume %s with %u snapshot(s)? [y/n]: ",
lv_is_active(lv) ? " active" : "",
vg_is_clustered(lv->vg) ? "clustered " : "",
display_lvname(lv),
lv->origin_count) == 'n'))
!lv_is_active(lv) &&
yes_no_prompt("Removing origin %s will also remove %u "
"snapshots(s). Proceed? [y/n]: ",
lv->name, lv->origin_count) == 'n')
goto no_remove;

if (!deactivate_lv(cmd, lv)) {
stack;
goto no_remove;
}
log_verbose("Removing origin logical volume %s with %u snapshots(s).",
display_lvname(lv), lv->origin_count);

dm_list_iterate_safe(snh, snht, &lv->snapshot_segs)
if (!lv_remove_with_dependencies(cmd, dm_list_struct_base(snh, struct lv_segment,
origin_list)->cow,
@@ -6352,6 +6233,7 @@ static int _lv_update_and_reload(struct logical_volume *lv, int origin_only)

log_very_verbose("Updating logical volume %s on disk(s)%s.",
display_lvname(lock_lv), origin_only ? " (origin only)": "");

if (!vg_write(vg))
return_0;

@@ -6818,8 +6700,8 @@ struct logical_volume *insert_layer_for_lv(struct cmd_context *cmd,
return_NULL;

/* allocate a new linear segment */
if (!(mapseg = alloc_lv_segment(segtype, lv_where, 0, layer_lv->le_count, 0,
status, 0, NULL, 1, layer_lv->le_count, 0,
if (!(mapseg = alloc_lv_segment(segtype, lv_where, 0, layer_lv->le_count,
status, 0, NULL, 1, layer_lv->le_count,
0, 0, 0, NULL)))
return_NULL;

@@ -6875,8 +6757,8 @@ static int _extend_layer_lv_for_segment(struct logical_volume *layer_lv,

/* allocate a new segment */
if (!(mapseg = alloc_lv_segment(segtype, layer_lv, layer_lv->le_count,
seg->area_len, 0, status, 0,
NULL, 1, seg->area_len, 0, 0, 0, 0, seg)))
seg->area_len, status, 0,
NULL, 1, seg->area_len, 0, 0, 0, seg)))
return_0;

/* map the new segment to the original underlying are */

@@ -236,7 +236,7 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
if (!seg->areas)
raid_seg_error("zero areas");

if (seg->extents_copied > seg->len)
if (seg->extents_copied > seg->area_len)
raid_seg_error_val("extents_copied too large", seg->extents_copied);

/* Default < 10, change once raid1 split shift and rename SubLVs works! */

@@ -475,7 +475,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
struct lv_segment *seg, *seg2;
uint32_t le = 0;
unsigned seg_count = 0, seg_found, external_lv_found = 0;
uint32_t data_rimage_count, s;
uint32_t area_multiplier, s;
struct seg_list *sl;
struct glv_list *glvl;
int error_count = 0;

@@ -498,13 +498,13 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
inc_error_count;
}

data_rimage_count = seg->area_count - seg->segtype->parity_devs;
/* FIXME: raid varies seg->area_len? */
if (seg->len != seg->area_len &&
seg->len != seg->area_len * data_rimage_count) {
log_error("LV %s: segment %u with len=%u "
" has inconsistent area_len %u",
lv->name, seg_count, seg->len, seg->area_len);
area_multiplier = segtype_is_striped(seg->segtype) ?
seg->area_count : 1;

if (seg->area_len * area_multiplier != seg->len) {
log_error("LV %s: segment %u has inconsistent "
"area_len %u",
lv->name, seg_count, seg->area_len);
inc_error_count;
}
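
A small worked example for the consistency rule checked above (numbers invented): a striped segment with 3 stripes and area_len = 100 must have len = 100 * 3 = 300, while a linear segment (one area) has a multiplier of 1 and its len must equal area_len; anything else is reported as an inconsistent area_len.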

@@ -766,10 +766,10 @@ static int _lv_split_segment(struct logical_volume *lv, struct lv_segment *seg,

/* Clone the existing segment */
if (!(split_seg = alloc_lv_segment(seg->segtype,
seg->lv, seg->le, seg->len, seg->reshape_len,
seg->lv, seg->le, seg->len,
seg->status, seg->stripe_size,
seg->log_lv,
seg->area_count, seg->area_len, seg->data_copies,
seg->area_count, seg->area_len,
seg->chunk_size, seg->region_size,
seg->extents_copied, seg->pvmove_source_seg))) {
log_error("Couldn't allocate cloned LV segment.");

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*

@@ -137,11 +137,7 @@
e.g. to prohibit allocation of a RAID image
on a PV already holing an image of the RAID set */
#define LOCKD_SANLOCK_LV UINT64_C(0x0080000000000000) /* LV - Internal use only */
#define LV_RESHAPE_DELTA_DISKS_PLUS UINT64_C(0x0100000000000000) /* LV reshape flag delta disks plus image(s) */
#define LV_RESHAPE_DELTA_DISKS_MINUS UINT64_C(0x0200000000000000) /* LV reshape flag delta disks minus image(s) */

#define LV_REMOVE_AFTER_RESHAPE UINT64_C(0x0400000000000000) /* LV needs to be removed after a shrinking reshape */
/* Next unused flag: UINT64_C(0x0800000000000000) */
/* Next unused flag: UINT64_C(0x0100000000000000) */

/* Format features flags */
#define FMT_SEGMENTS 0x00000001U /* Arbitrary segment params? */

@@ -450,7 +446,6 @@ struct lv_segment {
const struct segment_type *segtype;
uint32_t le;
uint32_t len;
uint32_t reshape_len; /* For RAID: user hidden additional out of place reshaping length off area_len and len */

uint64_t status;

@@ -459,7 +454,6 @@ struct lv_segment {
uint32_t writebehind; /* For RAID (RAID1 only) */
uint32_t min_recovery_rate; /* For RAID */
uint32_t max_recovery_rate; /* For RAID */
uint32_t data_offset; /* For RAID: data offset in sectors on each data component image */
uint32_t area_count;
uint32_t area_len;
uint32_t chunk_size; /* For snapshots/thin_pool. In sectors. */

@@ -470,7 +464,6 @@ struct lv_segment {
struct logical_volume *cow;
struct dm_list origin_list;
uint32_t region_size; /* For mirrors, replicators - in sectors */
uint32_t data_copies; /* For RAID: number of data copies (e.g. 3 for RAID 6 */
uint32_t extents_copied;/* Number of extents synced for raids/mirrors */
struct logical_volume *log_lv;
struct lv_segment *pvmove_source_seg;
@@ -916,8 +909,8 @@ struct lvcreate_params {
int wipe_signatures; /* all */
int32_t major; /* all */
int32_t minor; /* all */
int log_count; /* mirror/RAID */
int nosync; /* mirror/RAID */
int log_count; /* mirror */
int nosync; /* mirror */
int pool_metadata_spare; /* pools */
int type; /* type arg is given */
int temporary; /* temporary LV */

@@ -948,15 +941,15 @@ struct lvcreate_params {
#define PASS_ARG_ZERO 0x08
int passed_args;

uint32_t stripes; /* striped/RAID */
uint32_t stripe_size; /* striped/RAID */
uint32_t stripes; /* striped */
uint32_t stripe_size; /* striped */
uint32_t chunk_size; /* snapshot */
uint32_t region_size; /* mirror/RAID */
uint32_t region_size; /* mirror */

unsigned stripes_supplied; /* striped/RAID */
unsigned stripe_size_supplied; /* striped/RAID */
unsigned stripes_supplied; /* striped */
unsigned stripe_size_supplied; /* striped */

uint32_t mirrors; /* mirror/RAID */
uint32_t mirrors; /* mirror */

uint32_t min_recovery_rate; /* RAID */
uint32_t max_recovery_rate; /* RAID */

@@ -1212,9 +1205,7 @@ struct logical_volume *first_replicator_dev(const struct logical_volume *lv);
int lv_is_raid_with_tracking(const struct logical_volume *lv);
uint32_t lv_raid_image_count(const struct logical_volume *lv);
int lv_raid_change_image_count(struct logical_volume *lv,
uint32_t new_count,
uint32_t new_region_size,
struct dm_list *allocate_pvs);
uint32_t new_count, struct dm_list *allocate_pvs);
int lv_raid_split(struct logical_volume *lv, const char *split_name,
uint32_t new_count, struct dm_list *splittable_pvs);
int lv_raid_split_and_track(struct logical_volume *lv,

@@ -1233,17 +1224,8 @@ int lv_raid_replace(struct logical_volume *lv, int force,
struct dm_list *remove_pvs, struct dm_list *allocate_pvs);
int lv_raid_remove_missing(struct logical_volume *lv);
int partial_raid_lv_supports_degraded_activation(const struct logical_volume *lv);
uint32_t raid_rmeta_extents_delta(struct cmd_context *cmd,
uint32_t rimage_extents_cur, uint32_t rimage_extents_new,
uint32_t region_size, uint32_t extent_size);
uint32_t raid_rimage_extents(const struct segment_type *segtype,
uint32_t extents, uint32_t stripes, uint32_t data_copies);
uint32_t raid_ensure_min_region_size(const struct logical_volume *lv, uint64_t raid_size, uint32_t region_size);
int lv_raid_change_region_size(struct logical_volume *lv,
int yes, int force, uint32_t new_region_size);
int lv_raid_in_sync(const struct logical_volume *lv);
uint32_t lv_raid_data_copies(const struct segment_type *segtype, uint32_t area_count);
int lv_raid_free_reshape_space(const struct logical_volume *lv);
/* -- metadata/raid_manip.c */

/* ++ metadata/cache_manip.c */

@@ -1256,7 +1256,7 @@ uint32_t extents_from_percent_size(struct volume_group *vg, const struct dm_list
}
break;
}
/* fall through to use all PVs in VG like %FREE */
/* Fall back to use all PVs in VG like %FREE */
case PERCENT_FREE:
if (!(extents = vg->free_count)) {
log_error("No free extents in Volume group %s.", vg->name);

@@ -6388,7 +6388,7 @@ int vg_strip_outdated_historical_lvs(struct volume_group *vg) {
* Removal time in the future? Not likely,
* but skip this item in any case.
*/
if (current_time < (time_t) glvl->glv->historical->timestamp_removed)
if ((current_time) < glvl->glv->historical->timestamp_removed)
continue;

if ((current_time - glvl->glv->historical->timestamp_removed) > threshold) {

File diff suppressed because it is too large
@@ -43,8 +43,7 @@ struct segment_type *get_segtype_from_flag(struct cmd_context *cmd, uint64_t fla
{
struct segment_type *segtype;

/* Iterate backwards to provide aliases; e.g. raid5 instead of raid5_ls */
dm_list_iterate_back_items(segtype, &cmd->segtypes)
dm_list_iterate_items(segtype, &cmd->segtypes)
if (flag & segtype->flags)
return segtype;

@@ -50,8 +50,7 @@ struct dev_manager;
#define SEG_RAID0 0x0000000000040000ULL
#define SEG_RAID0_META 0x0000000000080000ULL
#define SEG_RAID1 0x0000000000100000ULL
#define SEG_RAID10_NEAR 0x0000000000200000ULL
#define SEG_RAID10 SEG_RAID10_NEAR
#define SEG_RAID10 0x0000000000200000ULL
#define SEG_RAID4 0x0000000000400000ULL
#define SEG_RAID5_N 0x0000000000800000ULL
#define SEG_RAID5_LA 0x0000000001000000ULL

@@ -140,11 +139,7 @@ struct dev_manager;
#define segtype_is_any_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10_near(segtype) segtype_is_raid10(segtype)
/* FIXME: once raid10_offset supported */
#define segtype_is_raid10_offset(segtype) 0 // ((segtype)->flags & SEG_RAID10_OFFSET ? 1 : 0)
#define segtype_is_raid_with_meta(segtype) (segtype_is_raid(segtype) && !segtype_is_raid0(segtype))
#define segtype_is_striped_raid(segtype) (segtype_is_raid(segtype) && !segtype_is_raid1(segtype))
#define segtype_is_reshapable_raid(segtype) ((segtype_is_striped_raid(segtype) && !segtype_is_any_raid0(segtype)) || segtype_is_raid10_near(segtype) || segtype_is_raid10_offset(segtype))
#define segtype_is_snapshot(segtype) ((segtype)->flags & SEG_SNAPSHOT ? 1 : 0)
#define segtype_is_striped(segtype) ((segtype)->flags & SEG_AREAS_STRIPED ? 1 : 0)
#define segtype_is_thin(segtype) ((segtype)->flags & (SEG_THIN_POOL|SEG_THIN_VOLUME) ? 1 : 0)

@@ -194,8 +189,6 @@ struct dev_manager;
#define seg_is_raid10(seg) segtype_is_raid10((seg)->segtype)
#define seg_is_raid10_near(seg) segtype_is_raid10_near((seg)->segtype)
#define seg_is_raid_with_meta(seg) segtype_is_raid_with_meta((seg)->segtype)
#define seg_is_striped_raid(seg) segtype_is_striped_raid((seg)->segtype)
#define seg_is_reshapable_raid(seg) segtype_is_reshapable_raid((seg)->segtype)
#define seg_is_replicator(seg) ((seg)->segtype->flags & SEG_REPLICATOR ? 1 : 0)
#define seg_is_replicator_dev(seg) ((seg)->segtype->flags & SEG_REPLICATOR_DEV ? 1 : 0)
#define seg_is_snapshot(seg) segtype_is_snapshot((seg)->segtype)

@@ -286,7 +279,6 @@ struct segment_type *init_unknown_segtype(struct cmd_context *cmd,
#define RAID_FEATURE_RAID0 (1U << 1) /* version 1.7 */
#define RAID_FEATURE_RESHAPING (1U << 2) /* version 1.8 */
#define RAID_FEATURE_RAID4 (1U << 3) /* ! version 1.8 or 1.9.0 */
#define RAID_FEATURE_RESHAPE (1U << 4) /* version 1.10.1 */

#ifdef RAID_INTERNAL
int init_raid_segtypes(struct cmd_context *cmd, struct segtype_library *seglib);
@@ -238,8 +238,8 @@ static struct lv_segment *_alloc_snapshot_seg(struct logical_volume *lv)
return NULL;
}

if (!(seg = alloc_lv_segment(segtype, lv, 0, lv->le_count, 0, 0, 0,
NULL, 0, lv->le_count, 0, 0, 0, 0, NULL))) {
if (!(seg = alloc_lv_segment(segtype, lv, 0, lv->le_count, 0, 0,
NULL, 0, lv->le_count, 0, 0, 0, NULL))) {
log_error("Couldn't allocate new snapshot segment.");
return NULL;
}

@@ -58,13 +58,13 @@
#define r1__r0m _takeover_from_raid1_to_raid0_meta
#define r1__r1 _takeover_from_raid1_to_raid1
#define r1__r10 _takeover_from_raid1_to_raid10
#define r1__r5 _takeover_from_raid1_to_raid5
#define r1__r45 _takeover_from_raid1_to_raid45
#define r1__str _takeover_from_raid1_to_striped
#define r45_lin _takeover_from_raid45_to_linear
#define r45_mir _takeover_from_raid45_to_mirrored
#define r45_r0 _takeover_from_raid45_to_raid0
#define r45_r0m _takeover_from_raid45_to_raid0_meta
#define r5_r1 _takeover_from_raid5_to_raid1
#define r45_r1 _takeover_from_raid45_to_raid1
#define r45_r54 _takeover_from_raid45_to_raid54
#define r45_r6 _takeover_from_raid45_to_raid6
#define r45_str _takeover_from_raid45_to_striped

@@ -109,10 +109,10 @@ static takeover_fn_t _takeover_fns[][11] = {
/* mirror */ { X , X , N , mir_r0, mir_r0m, mir_r1, mir_r45, X , mir_r10, X , X },
/* raid0 */ { r0__lin, r0__str, r0__mir, N , r0__r0m, r0__r1, r0__r45, r0__r6, r0__r10, X , X },
/* raid0_meta */ { r0m_lin, r0m_str, r0m_mir, r0m_r0, N , r0m_r1, r0m_r45, r0m_r6, r0m_r10, X , X },
/* raid1 */ { r1__lin, r1__str, r1__mir, r1__r0, r1__r0m, r1__r1, r1__r5, X , r1__r10, X , X },
/* raid4/5 */ { r45_lin, r45_str, r45_mir, r45_r0, r45_r0m, r5_r1 , r45_r54, r45_r6, X , X , X },
/* raid1 */ { r1__lin, r1__str, r1__mir, r1__r0, r1__r0m, r1__r1, r1__r45, X , r1__r10, X , X },
/* raid4/5 */ { r45_lin, r45_str, r45_mir, r45_r0, r45_r0m, r45_r1, r45_r54, r45_r6, X , X , X },
/* raid6 */ { X , r6__str, X , r6__r0, r6__r0m, X , r6__r45, X , X , X , X },
/* raid10 */ { r10_lin, r10_str, r10_mir, r10_r0, r10_r0m, r10_r1, X , X , X , X , X },
/* raid10 */ // { r10_lin, r10_str, r10_mir, r10_r0, r10_r0m, r10_r1, X , X , r10_r10, r10_r01, X },
/* raid01 */ // { X , r01_str, X , X , X , X , X , X , r01_r10, r01_r01, X },
/* other */ { X , X , X , X , X , X , X , X , X , X , X },
};

@@ -15,6 +15,7 @@

#include "lib.h"
#include "config.h"
#include "lvm-file.h"
#include "lvm-flock.h"
#include "lvm-signal.h"
#include "locking.h"

189 lib/raid/raid.c
@@ -137,7 +137,6 @@ static int _raid_text_import(struct lv_segment *seg,
} raid_attr_import[] = {
{ "region_size", &seg->region_size },
{ "stripe_size", &seg->stripe_size },
{ "data_copies", &seg->data_copies },
{ "writebehind", &seg->writebehind },
{ "min_recovery_rate", &seg->min_recovery_rate },
{ "max_recovery_rate", &seg->max_recovery_rate },

@@ -147,10 +146,6 @@ static int _raid_text_import(struct lv_segment *seg,
for (i = 0; i < DM_ARRAY_SIZE(raid_attr_import); i++, aip++) {
if (dm_config_has_node(sn, aip->name)) {
if (!dm_config_get_uint32(sn, aip->name, aip->var)) {
if (!strcmp(aip->name, "data_copies")) {
*aip->var = 0;
continue;
}
log_error("Couldn't read '%s' for segment %s of logical volume %s.",
aip->name, dm_config_parent_name(sn), seg->lv->name);
return 0;

@@ -170,9 +165,6 @@ static int _raid_text_import(struct lv_segment *seg,
return 0;
}

if (seg->data_copies < 2)
seg->data_copies = lv_raid_data_copies(seg->segtype, seg->area_count);

if (seg_is_any_raid0(seg))
seg->area_len /= seg->area_count;

@@ -191,31 +183,18 @@ static int _raid_text_export_raid0(const struct lv_segment *seg, struct formatte

static int _raid_text_export_raid(const struct lv_segment *seg, struct formatter *f)
{
int raid0 = seg_is_any_raid0(seg);

if (raid0)
outfc(f, (seg->area_count == 1) ? "# linear" : NULL,
"stripe_count = %u", seg->area_count);

else {
outf(f, "device_count = %u", seg->area_count);
if (seg_is_any_raid10(seg) && seg->data_copies > 0)
outf(f, "data_copies = %" PRIu32, seg->data_copies);
if (seg->region_size)
outf(f, "region_size = %" PRIu32, seg->region_size);
}
outf(f, "device_count = %u", seg->area_count);

if (seg->stripe_size)
outf(f, "stripe_size = %" PRIu32, seg->stripe_size);

if (!raid0) {
if (seg_is_raid1(seg) && seg->writebehind)
outf(f, "writebehind = %" PRIu32, seg->writebehind);
if (seg->min_recovery_rate)
outf(f, "min_recovery_rate = %" PRIu32, seg->min_recovery_rate);
if (seg->max_recovery_rate)
outf(f, "max_recovery_rate = %" PRIu32, seg->max_recovery_rate);
}
if (seg->region_size)
outf(f, "region_size = %" PRIu32, seg->region_size);
if (seg->writebehind)
outf(f, "writebehind = %" PRIu32, seg->writebehind);
if (seg->min_recovery_rate)
outf(f, "min_recovery_rate = %" PRIu32, seg->min_recovery_rate);
if (seg->max_recovery_rate)
outf(f, "max_recovery_rate = %" PRIu32, seg->max_recovery_rate);

return out_areas(f, seg, "raid");
}

@@ -237,16 +216,14 @@ static int _raid_add_target_line(struct dev_manager *dm __attribute__((unused)),
struct dm_tree_node *node, uint64_t len,
uint32_t *pvmove_mirror_count __attribute__((unused)))
{
int delta_disks = 0, delta_disks_minus = 0, delta_disks_plus = 0, data_offset = 0;
uint32_t s;
uint64_t flags = 0;
uint64_t rebuilds[RAID_BITMAP_SIZE];
uint64_t writemostly[RAID_BITMAP_SIZE];
struct dm_tree_node_raid_params_v2 params;
uint64_t rebuilds = 0;
uint64_t writemostly = 0;
struct dm_tree_node_raid_params params;
int raid0 = seg_is_any_raid0(seg);

memset(&params, 0, sizeof(params));
memset(&rebuilds, 0, sizeof(rebuilds));
memset(&writemostly, 0, sizeof(writemostly));

if (!seg->area_count) {
log_error(INTERNAL_ERROR "_raid_add_target_line called "
@@ -255,85 +232,64 @@ static int _raid_add_target_line(struct dev_manager *dm __attribute__((unused)),
}

/*
* 253 device restriction imposed by kernel due to MD and dm-raid bitfield limitation in superblock.
* It is not strictly a userspace limitation.
* 64 device restriction imposed by kernel as well. It is
* not strictly a userspace limitation.
*/
if (seg->area_count > DEFAULT_RAID_MAX_IMAGES) {
log_error("Unable to handle more than %u devices in a "
"single RAID array", DEFAULT_RAID_MAX_IMAGES);
if (seg->area_count > 64) {
log_error("Unable to handle more than 64 devices in a "
"single RAID array");
return 0;
}

if (!seg_is_any_raid0(seg)) {
if (!raid0) {
if (!seg->region_size) {
log_error("Missing region size for raid segment in %s.",
seg_lv(seg, 0)->name);
log_error("Missing region size for mirror segment.");
return 0;
}

for (s = 0; s < seg->area_count; s++) {
uint64_t status = seg_lv(seg, s)->status;
for (s = 0; s < seg->area_count; s++)
if (seg_lv(seg, s)->status & LV_REBUILD)
rebuilds |= 1ULL << s;

if (status & LV_REBUILD)
rebuilds[s/64] |= 1ULL << (s%64);

if (status & LV_RESHAPE_DELTA_DISKS_PLUS) {
delta_disks++;
delta_disks_plus++;
} else if (status & LV_RESHAPE_DELTA_DISKS_MINUS) {
delta_disks--;
delta_disks_minus++;
}

if (delta_disks_plus && delta_disks_minus) {
log_error(INTERNAL_ERROR "Invalid request for delta disks minus and delta disks plus!");
return 0;
}

if (status & LV_WRITEMOSTLY)
writemostly[s/64] |= 1ULL << (s%64);
}

data_offset = seg->data_offset;
for (s = 0; s < seg->area_count; s++)
if (seg_lv(seg, s)->status & LV_WRITEMOSTLY)
writemostly |= 1ULL << s;

if (mirror_in_sync())
flags = DM_NOSYNC;
}

params.raid_type = lvseg_name(seg);

if (seg->segtype->parity_devs) {
/* RAID 4/5/6 */
params.mirrors = 1;
params.stripes = seg->area_count - seg->segtype->parity_devs;
} else if (seg_is_any_raid0(seg)) {
params.mirrors = 1;
params.stripes = seg->area_count;
} else if (seg_is_any_raid10(seg)) {
params.data_copies = seg->data_copies;
params.stripes = seg->area_count;
} else {
/* RAID 1 */
params.mirrors = seg->data_copies;
params.stripes = 1;
params.writebehind = seg->writebehind;
memcpy(params.writemostly, writemostly, sizeof(params.writemostly));
}

/* RAID 0 doesn't have a bitmap, thus no region_size, rebuilds etc. */
if (!seg_is_any_raid0(seg)) {
params.region_size = seg->region_size;
memcpy(params.rebuilds, rebuilds, sizeof(params.rebuilds));
params.min_recovery_rate = seg->min_recovery_rate;
params.max_recovery_rate = seg->max_recovery_rate;
params.delta_disks = delta_disks;
params.data_offset = data_offset;
}

params.stripe_size = seg->stripe_size;
params.flags = flags;

if (!dm_tree_node_add_raid_target_with_params_v2(node, len, &params))
if (raid0) {
params.mirrors = 1;
params.stripes = seg->area_count;
} else if (seg->segtype->parity_devs) {
/* RAID 4/5/6 */
params.mirrors = 1;
params.stripes = seg->area_count - seg->segtype->parity_devs;
} else if (seg_is_raid10(seg)) {
/* RAID 10 only supports 2 mirrors now */
params.mirrors = 2;
params.stripes = seg->area_count / 2;
} else {
/* RAID 1 */
params.mirrors = seg->area_count;
params.stripes = 1;
params.writebehind = seg->writebehind;
}

if (!raid0) {
params.region_size = seg->region_size;
params.rebuilds = rebuilds;
params.writemostly = writemostly;
params.min_recovery_rate = seg->min_recovery_rate;
|
||||
params.max_recovery_rate = seg->max_recovery_rate;
|
||||
}
|
||||
|
||||
if (!dm_tree_node_add_raid_target_with_params(node, len, ¶ms))
|
||||
return_0;
|
||||
|
||||
return add_areas_line(dm, seg, node, 0u, seg->area_count);
|
||||
@@ -448,32 +404,19 @@ out:
|
||||
return r;
|
||||
}
|
||||
|
||||
/* Define raid feature based on the tuple(major, minor, patchlevel) of raid target */
|
||||
struct raid_feature {
|
||||
uint32_t maj;
|
||||
uint32_t min;
|
||||
uint32_t patchlevel;
|
||||
unsigned raid_feature;
|
||||
const char *feature;
|
||||
};
|
||||
|
||||
/* Return true if tuple(@maj, @min, @patchlevel) is greater/equal to @*feature members */
|
||||
static int _check_feature(const struct raid_feature *feature, uint32_t maj, uint32_t min, uint32_t patchlevel)
|
||||
{
|
||||
return (maj > feature->maj) ||
|
||||
(maj == feature->maj && min > feature->min) ||
(maj == feature->maj && min == feature->min && patchlevel >= feature->patchlevel);
|
||||
}
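To make the intended semantics concrete, here is a minimal, self-contained sketch (not part of the patch; the values are made up) of the lexicographic comparison after the fix above:

#include <assert.h>
#include <stdint.h>

struct ver { uint32_t maj, min, patchlevel; };

/* Same lexicographic "kernel version >= feature version" test as above. */
static int ver_ge(struct ver k, struct ver f)
{
	return (k.maj > f.maj) ||
	       (k.maj == f.maj && k.min > f.min) ||
	       (k.maj == f.maj && k.min == f.min && k.patchlevel >= f.patchlevel);
}

int main(void)
{
	struct ver reshape = { 1, 10, 1 };	/* RAID_FEATURE_RESHAPE entry */

	assert(ver_ge((struct ver){ 1, 10, 1 }, reshape));	/* exact match qualifies */
	assert(!ver_ge((struct ver){ 1, 10, 0 }, reshape));	/* older patchlevel does not */
	assert(ver_ge((struct ver){ 1, 11, 0 }, reshape));	/* newer minor qualifies */
	return 0;
}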
|
||||
static int _raid_target_present(struct cmd_context *cmd,
|
||||
const struct lv_segment *seg __attribute__((unused)),
|
||||
unsigned *attributes)
|
||||
{
|
||||
/* List of features with their kernel target version */
|
||||
const struct raid_feature _features[] = {
|
||||
{ 1, 3, 0, RAID_FEATURE_RAID10, SEG_TYPE_NAME_RAID10 },
|
||||
{ 1, 7, 0, RAID_FEATURE_RAID0, SEG_TYPE_NAME_RAID0 },
|
||||
{ 1, 10, 1, RAID_FEATURE_RESHAPE, "reshaping" },
|
||||
static const struct feature {
|
||||
uint32_t maj;
|
||||
uint32_t min;
|
||||
unsigned raid_feature;
|
||||
const char *feature;
|
||||
} _features[] = {
|
||||
{ 1, 3, RAID_FEATURE_RAID10, SEG_TYPE_NAME_RAID10 },
|
||||
{ 1, 7, RAID_FEATURE_RAID0, SEG_TYPE_NAME_RAID0 },
|
||||
};
|
||||
|
||||
static int _raid_checked = 0;
|
||||
@@ -495,19 +438,13 @@ static int _raid_target_present(struct cmd_context *cmd,
|
||||
return_0;
|
||||
|
||||
for (i = 0; i < DM_ARRAY_SIZE(_features); ++i)
|
||||
if (_check_feature(_features + i, maj, min, patchlevel))
|
||||
if ((maj > _features[i].maj) ||
|
||||
(maj == _features[i].maj && min >= _features[i].min))
|
||||
_raid_attrs |= _features[i].raid_feature;
|
||||
else
|
||||
log_very_verbose("Target raid does not support %s.",
|
||||
_features[i].feature);
|
||||
|
||||
/*
|
||||
* Separate check for proper raid4 mapping support
*
|
||||
* If we get more of these range checks, avoid them
|
||||
* altogether by enhancing 'struct raid_feature'
|
||||
* and _check_feature() to handle them.
|
||||
*/
|
||||
if (!(maj == 1 && (min == 8 || (min == 9 && patchlevel == 0))))
|
||||
_raid_attrs |= RAID_FEATURE_RAID4;
|
||||
else
|
||||
|
@@ -69,7 +69,7 @@ FIELD(LVS, lv, BIN, "ActExcl", lvid, 10, lvactiveexclusively, lv_active_exclusiv
|
||||
FIELD(LVS, lv, SNUM, "Maj", major, 0, int32, lv_major, "Persistent major number or -1 if not persistent.", 0)
|
||||
FIELD(LVS, lv, SNUM, "Min", minor, 0, int32, lv_minor, "Persistent minor number or -1 if not persistent.", 0)
|
||||
FIELD(LVS, lv, SIZ, "Rahead", lvid, 0, lvreadahead, lv_read_ahead, "Read ahead setting in current units.", 0)
|
||||
FIELD(LVS, lv, SIZ, "LSize", lvid, 0, lv_size, lv_size, "Size of LV in current units.", 0)
|
||||
FIELD(LVS, lv, SIZ, "LSize", size, 0, size64, lv_size, "Size of LV in current units.", 0)
|
||||
FIELD(LVS, lv, SIZ, "MSize", lvid, 0, lvmetadatasize, lv_metadata_size, "For thin and cache pools, the size of the LV that holds the metadata.", 0)
|
||||
FIELD(LVS, lv, NUM, "#Seg", lvid, 0, lvsegcount, seg_count, "Number of segments in LV.", 0)
|
||||
FIELD(LVS, lv, STR, "Origin", lvid, 0, origin, origin, "For snapshots and thins, the origin device of this LV.", 0)
|
||||
@@ -241,16 +241,9 @@ FIELD(VGS, vg, NUM, "#VMdaCps", cmd, 0, vgmdacopies, vg_mda_copies, "Target numb
|
||||
* SEGS type fields
|
||||
*/
|
||||
FIELD(SEGS, seg, STR, "Type", list, 0, segtype, segtype, "Type of LV segment.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#Str", list, 0, seg_stripes, stripes, "Number of stripes or mirror/raid1 legs.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#DStr", list, 0, seg_data_stripes, data_stripes, "Number of data stripes or mirror/raid1 legs.", 0)
|
||||
FIELD(SEGS, seg, SIZ, "RSize", list, 0, seg_reshape_len, reshape_len, "Size of out-of-place reshape space in current units.", 0)
|
||||
FIELD(SEGS, seg, NUM, "RSize", list, 0, seg_reshape_len_le, reshape_len_le, "Size of out-of-place reshape space in logical extents.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#Cpy", list, 0, seg_data_copies, data_copies, "Number of data copies.", 0)
|
||||
FIELD(SEGS, seg, NUM, "DOff", list, 0, seg_data_offset, data_offset, "Data offset on each image device.", 0)
|
||||
FIELD(SEGS, seg, NUM, "NOff", list, 0, seg_new_data_offset, new_data_offset, "New data offset after any reshape on each image device.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#Par", list, 0, seg_parity_chunks, parity_chunks, "Number of (rotating) parity chunks.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#Str", area_count, 0, uint32, stripes, "Number of stripes or mirror legs.", 0)
|
||||
FIELD(SEGS, seg, SIZ, "Stripe", stripe_size, 0, size32, stripe_size, "For stripes, amount of data placed on one device before switching to the next.", 0)
|
||||
FIELD(SEGS, seg, SIZ, "Region", region_size, 0, size32, region_size, "For mirrors/raids, the unit of data per leg when synchronizing devices.", 0)
|
||||
FIELD(SEGS, seg, SIZ, "Region", region_size, 0, size32, region_size, "For mirrors, the unit of data copied when synchronising devices.", 0)
|
||||
FIELD(SEGS, seg, SIZ, "Chunk", list, 0, chunksize, chunk_size, "For snapshots, the unit of data used when tracking changes.", 0)
|
||||
FIELD(SEGS, seg, NUM, "#Thins", list, 0, thincount, thin_count, "For thin pools, the number of thin volumes in this pool.", 0)
|
||||
FIELD(SEGS, seg, STR, "Discards", list, 0, discards, discards, "For thin pools, how discards are handled.", 0)
|
||||
|
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright (C) 2010-2017 Red Hat, Inc. All rights reserved.
|
||||
* Copyright (C) 2010-2013 Red Hat, Inc. All rights reserved.
|
||||
*
|
||||
* This file is part of LVM2.
|
||||
*
|
||||
@@ -446,22 +446,8 @@ GET_VG_NUM_PROPERTY_FN(vg_missing_pv_count, vg_missing_pv_count(vg))
|
||||
/* LVSEG */
|
||||
GET_LVSEG_STR_PROPERTY_FN(segtype, lvseg_segtype_dup(lvseg->lv->vg->vgmem, lvseg))
|
||||
#define _segtype_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(data_copies, lvseg->data_copies)
|
||||
#define _data_copies_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(reshape_len, lvseg->reshape_len)
|
||||
#define _reshape_len_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(reshape_len_le, lvseg->reshape_len)
|
||||
#define _reshape_len_le_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(data_offset, lvseg->data_offset)
|
||||
#define _data_offset_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(new_data_offset, lvseg->data_offset)
|
||||
#define _new_data_offset_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(parity_chunks, lvseg->data_offset)
|
||||
#define _parity_chunks_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(stripes, lvseg->area_count)
|
||||
#define _stripes_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(data_stripes, lvseg->area_count)
|
||||
#define _data_stripes_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(stripe_size, (SECTOR_SIZE * lvseg->stripe_size))
|
||||
#define _stripe_size_set prop_not_implemented_set
|
||||
GET_LVSEG_NUM_PROPERTY_FN(region_size, (SECTOR_SIZE * lvseg->region_size))
|
||||
|
@@ -2296,22 +2296,6 @@ static int _size64_disp(struct dm_report *rh __attribute__((unused)),
|
||||
return _field_set_value(field, repstr, sortval);
|
||||
}
|
||||
|
||||
static int _lv_size_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct logical_volume *lv = (const struct logical_volume *) data;
|
||||
const struct lv_segment *seg = first_seg(lv);
|
||||
uint64_t size = lv->le_count;
|
||||
|
||||
if (!lv_is_raid_image(lv))
|
||||
size -= seg->reshape_len * (seg->area_count > 2 ? seg->area_count : 1);
|
||||
|
||||
size *= lv->vg->extent_size;
|
||||
|
||||
return _size64_disp(rh, mem, field, &size, private);
|
||||
}
|
||||
|
||||
static int _uint32_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private __attribute__((unused)))
|
||||
@@ -2428,197 +2412,6 @@ static int _segstartpe_disp(struct dm_report *rh,
|
||||
return dm_report_field_uint32(rh, field, &seg->le);
|
||||
}
|
||||
|
||||
/* Helper: get used stripes = total stripes minus any to remove after reshape */
static int _get_seg_used_stripes(const struct lv_segment *seg)
|
||||
{
|
||||
uint32_t s;
|
||||
uint32_t stripes = seg->area_count;
|
||||
|
||||
for (s = seg->area_count - 1; stripes && s; s--) {
|
||||
if (seg_type(seg, s) == AREA_LV &&
|
||||
seg_lv(seg, s)->status & LV_REMOVE_AFTER_RESHAPE)
|
||||
stripes--;
|
||||
else
|
||||
break;
|
||||
}
|
||||
|
||||
return stripes;
|
||||
}
|
||||
|
||||
static int _seg_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = ((const struct lv_segment *) data);
|
||||
|
||||
return dm_report_field_uint32(rh, field, &seg->area_count);
|
||||
}
|
||||
|
||||
/* Report the number of data stripes, which is less than total stripes (e.g. 2 less for raid6) */
|
||||
static int _seg_data_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = (const struct lv_segment *) data;
|
||||
uint32_t stripes = _get_seg_used_stripes(seg) - seg->segtype->parity_devs;
|
||||
|
||||
/* FIXME: in case of odd numbers of raid10 stripes */
|
||||
if (seg_is_raid10(seg))
|
||||
stripes /= seg->data_copies;
|
||||
|
||||
return dm_report_field_uint32(rh, field, &stripes);
|
||||
}
|
||||
|
||||
/* Helper: return the top-level, reshapable raid LV in case @seg belongs to a raid rimage LV */
static struct logical_volume *_lv_for_raid_image_seg(const struct lv_segment *seg, struct dm_pool *mem)
|
||||
{
|
||||
char *lv_name;
|
||||
|
||||
if (seg_is_reshapable_raid(seg))
|
||||
return seg->lv;
|
||||
|
||||
if (seg->lv &&
|
||||
lv_is_raid_image(seg->lv) && !seg->le &&
|
||||
(lv_name = dm_pool_strdup(mem, seg->lv->name))) {
|
||||
char *p = strchr(lv_name, '_');
|
||||
|
||||
if (p) {
|
||||
/* Handle duplicated sub LVs */
|
||||
if (strstr(p, "_dup_"))
|
||||
p = strchr(p + 5, '_');
|
||||
|
||||
if (p) {
|
||||
struct lv_list *lvl;
|
||||
|
||||
*p = '\0';
|
||||
if ((lvl = find_lv_in_vg(seg->lv->vg, lv_name)) &&
|
||||
seg_is_reshapable_raid(first_seg(lvl->lv)))
|
||||
return lvl->lv;
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* Helper: return @seg if it, or the top-level raid LV it belongs to, is reshapable; otherwise NULL */
static const struct lv_segment *_get_reshapable_seg(const struct lv_segment *seg, struct dm_pool *mem)
|
||||
{
|
||||
return _lv_for_raid_image_seg(seg, mem) ? seg : NULL;
|
||||
}
|
||||
|
||||
/* Display segment reshape length in current units */
|
||||
static int _seg_reshape_len_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);
|
||||
|
||||
if (seg) {
|
||||
uint32_t reshape_len = seg->reshape_len * seg->area_count * seg->lv->vg->extent_size;
|
||||
|
||||
return _size32_disp(rh, mem, field, &reshape_len, private);
|
||||
}
|
||||
|
||||
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
|
||||
}
|
||||
|
||||
/* Display segment reshape length in logical extents */
static int _seg_reshape_len_le_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);
|
||||
|
||||
if (seg) {
|
||||
uint32_t reshape_len = seg->reshape_len * seg->area_count;
|
||||
return dm_report_field_uint32(rh, field, &reshape_len);
|
||||
}
|
||||
|
||||
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
|
||||
}
|
||||
|
||||
/* Display segment data copies (e.g. 3 for raid6) */
|
||||
static int _seg_data_copies_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = (const struct lv_segment *) data;
|
||||
|
||||
if (seg->data_copies)
|
||||
return dm_report_field_uint32(rh, field, &seg->data_copies);
|
||||
|
||||
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
|
||||
}
|
||||
|
||||
/* Helper: display segment data offset/new data offset in sectors */
|
||||
static int _segdata_offset(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private, int new_data_offset)
|
||||
{
|
||||
const struct lv_segment *seg = (const struct lv_segment *) data;
|
||||
struct logical_volume *lv;
|
||||
|
||||
if ((lv = _lv_for_raid_image_seg(seg, mem))) {
|
||||
uint64_t data_offset;
|
||||
|
||||
if (lv_raid_data_offset(lv, &data_offset)) {
|
||||
if (new_data_offset && !lv_raid_image_in_sync(seg->lv))
|
||||
data_offset = data_offset ? 0 : seg->reshape_len * lv->vg->extent_size;
|
||||
|
||||
return dm_report_field_uint64(rh, field, &data_offset);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
|
||||
}
|
||||
|
||||
static int _seg_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
return _segdata_offset(rh, mem, field, data, private, 0);
|
||||
}
|
||||
|
||||
static int _seg_new_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
return _segdata_offset(rh, mem, field, data, private, 1);
|
||||
}
|
||||
|
||||
static int _seg_parity_chunks_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
{
|
||||
const struct lv_segment *seg = (const struct lv_segment *) data;
|
||||
uint32_t parity_chunks = seg->segtype->parity_devs ?: seg->data_copies - 1;
|
||||
|
||||
if (parity_chunks) {
|
||||
uint32_t s, resilient_sub_lvs = 0;
|
||||
|
||||
for (s = 0; s < seg->area_count; s++) {
|
||||
if (seg_type(seg, s) == AREA_LV) {
|
||||
struct lv_segment *seg1 = first_seg(seg_lv(seg, s));
|
||||
|
||||
if (seg1->segtype->parity_devs ||
|
||||
seg1->data_copies > 1)
|
||||
resilient_sub_lvs++;
|
||||
}
|
||||
}
|
||||
|
||||
if (resilient_sub_lvs && resilient_sub_lvs == seg->area_count)
|
||||
parity_chunks++;
|
||||
|
||||
return dm_report_field_uint32(rh, field, &parity_chunks);
|
||||
}
|
||||
|
||||
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
|
||||
}
|
||||
|
||||
static int _segsize_disp(struct dm_report *rh, struct dm_pool *mem,
|
||||
struct dm_report_field *field,
|
||||
const void *data, void *private)
|
||||
|
@@ -42,7 +42,7 @@ static int _pthread_create(pthread_t *t, void *(*fun)(void *), void *arg, int st
|
||||
/*
|
||||
* We use a smaller stack since it gets preallocated in its entirety
|
||||
*/
|
||||
pthread_attr_setstacksize(&attr, stacksize + getpagesize());
|
||||
pthread_attr_setstacksize(&attr, stacksize);
|
||||
return pthread_create(t, &attr, fun, arg);
|
||||
}
|
||||
#endif
|
||||
|
@@ -1,8 +1,5 @@
|
||||
dm_bit_get_last
|
||||
dm_bit_get_prev
|
||||
dm_filemapd_mode_from_string
|
||||
dm_stats_update_regions_from_fd
|
||||
dm_bitset_parse_list
|
||||
dm_stats_bind_from_fd
|
||||
dm_stats_start_filemapd
|
||||
dm_tree_node_add_raid_target_with_params_v2
|
||||
|
@@ -1,6 +1,6 @@
|
||||
/*
|
||||
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
|
||||
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
|
||||
* Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
|
||||
* Copyright (C) 2006 Rackable Systems All rights reserved.
|
||||
*
|
||||
* This file is part of the device-mapper userspace tools.
|
||||
@@ -331,7 +331,6 @@ struct dm_status_raid {
|
||||
char *dev_health;
|
||||
/* idle, frozen, resync, recover, check, repair */
|
||||
char *sync_action;
|
||||
uint64_t data_offset; /* RAID out-of-place reshaping */
|
||||
};
|
||||
|
||||
int dm_get_status_raid(struct dm_pool *mem, const char *params,
|
||||
@@ -1369,69 +1368,6 @@ uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
uint64_t group_id);
|
||||
|
||||
|
||||
/*
|
||||
* The file map monitoring daemon can monitor files in two distinct
|
||||
* ways: the mode affects the behaviour of the daemon when a file
|
||||
* under monitoring is renamed or unlinked, and the conditions which
|
||||
* cause the daemon to terminate.
|
||||
*
|
||||
* In both modes, the daemon will always shut down when the group
|
||||
* being monitored is deleted.
|
||||
*
|
||||
* Follow inode:
|
||||
* The daemon follows the inode of the file, as it was at the time the
|
||||
* daemon started. The file descriptor referencing the file is kept
|
||||
* open at all times, and the daemon will exit when it detects that
|
||||
* the file has been unlinked and it is the last holder of a reference
|
||||
* to the file.
|
||||
*
|
||||
* This mode is useful if the file is expected to be renamed, or moved
|
||||
* within the file system, while it is being monitored.
|
||||
*
|
||||
* Follow path:
|
||||
* The daemon follows the path that was given on the daemon command
|
||||
* line. The file descriptor referencing the file is re-opened on each
|
||||
* iteration of the daemon, and the daemon will exit if no file exists
|
||||
* at this location (a tolerance is allowed so that a brief delay
|
||||
* between unlink() and creat() is permitted).
|
||||
*
|
||||
* This mode is useful if the file is updated by unlinking the original
|
||||
* and placing a new file at the same path.
|
||||
*/
|
||||
|
||||
typedef enum {
|
||||
DM_FILEMAPD_FOLLOW_INODE,
|
||||
DM_FILEMAPD_FOLLOW_PATH,
|
||||
DM_FILEMAPD_FOLLOW_NONE
|
||||
} dm_filemapd_mode_t;
|
||||
|
||||
/*
|
||||
* Parse a string representation of a dmfilemapd mode.
|
||||
*
|
||||
* Returns a valid dm_filemapd_mode_t value on success, or
|
||||
* DM_FILEMAPD_FOLLOW_NONE on error.
|
||||
*/
|
||||
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str);
|
||||
|
||||
/*
|
||||
* Start the dmfilemapd filemap monitoring daemon for the specified
|
||||
* file descriptor, group, and file system path. The daemon will
|
||||
* monitor the file for allocation changes, and when a change is
|
||||
* detected, call dm_stats_update_regions_from_fd() to update the
|
||||
* mapped regions for the file.
|
||||
*
|
||||
* The mode parameter controls the behaviour of the daemon when the
|
||||
* file being monitored is unlinked or moved: see the comments for
|
||||
* dm_filemapd_mode_t for a full description and possible values.
|
||||
*
|
||||
* The daemon can be stopped at any time by sending SIGTERM to the
|
||||
* daemon pid.
|
||||
*/
|
||||
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
|
||||
dm_filemapd_mode_t mode, unsigned foreground,
|
||||
unsigned verbose);
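A minimal usage sketch of this API (hypothetical path and group id; it assumes the group was created earlier, e.g. with dm_stats_create_regions_from_fd()):

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include "libdevmapper.h"

/* Hypothetical: start dmfilemapd in follow-inode mode for an existing group. */
static int start_monitor_example(uint64_t group_id)
{
	const char *path = "/var/log/big.log";	/* made-up example path */
	int fd = open(path, O_RDONLY);
	dm_filemapd_mode_t mode = dm_filemapd_mode_from_string("inode");

	if (fd < 0)
		return 0;

	if (mode == DM_FILEMAPD_FOLLOW_NONE ||
	    !dm_stats_start_filemapd(fd, group_id, path, mode, 0, 0)) {
		fprintf(stderr, "failed to start dmfilemapd\n");
		close(fd);
		return 0;
	}

	return 1;
}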
|
||||
/*
|
||||
* Call this to actually run the ioctl.
|
||||
*/
|
||||
@@ -1802,11 +1738,6 @@ int dm_tree_node_add_raid_target(struct dm_tree_node *node,
|
||||
*/
|
||||
#define DM_CACHE_METADATA_MAX_SECTORS DM_THIN_METADATA_MAX_SECTORS
|
||||
|
||||
/*
|
||||
* Define number of elements in rebuild and writemostly arrays
|
||||
* 'of struct dm_tree_node_raid_params'.
|
||||
*/
|
||||
|
||||
struct dm_tree_node_raid_params {
|
||||
const char *raid_type;
|
||||
|
||||
@@ -1818,70 +1749,25 @@ struct dm_tree_node_raid_params {
|
||||
/*
|
||||
* 'rebuilds' and 'writemostly' are bitfields that signify
|
||||
* which devices in the array are to be rebuilt or marked
|
||||
* writemostly. The kernel supports up to 253 legs.
|
||||
* We limit ourselves by choosing a lower value
|
||||
* for DEFAULT_RAID{1}_MAX_IMAGES in defaults.h.
|
||||
* writemostly. By choosing a 'uint64_t', we limit ourself
|
||||
* to RAID arrays with 64 devices.
|
||||
*/
|
||||
uint64_t rebuilds;
|
||||
uint64_t writemostly;
|
||||
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
|
||||
uint32_t sync_daemon_sleep; /* ms (kernel default = 5sec) */
|
||||
uint32_t max_recovery_rate; /* kB/sec/disk */
|
||||
uint32_t min_recovery_rate; /* kB/sec/disk */
|
||||
uint32_t stripe_cache; /* sectors */
|
||||
|
||||
uint64_t flags; /* [no]sync */
|
||||
uint32_t reserved2;
|
||||
};
|
||||
|
||||
/*
|
||||
* Version 2 of above node raid params struct to keep API compatibility.
*
|
||||
* Extended for more than 64 legs (max 253 in the MD kernel runtime!),
|
||||
* delta_disks for disk add/remove reshaping,
|
||||
* data_offset for out-of-place reshaping
|
||||
* and data_copies for odd number of raid10 legs.
|
||||
*/
|
||||
#define RAID_BITMAP_SIZE 4 /* 4 * 64 bit elements in rebuilds/writemostly arrays */
|
||||
struct dm_tree_node_raid_params_v2 {
|
||||
const char *raid_type;
|
||||
|
||||
uint32_t stripes;
|
||||
uint32_t mirrors;
|
||||
uint32_t region_size;
|
||||
uint32_t stripe_size;
|
||||
|
||||
int delta_disks; /* +/- number of disks to add/remove (reshaping) */
|
||||
int data_offset; /* data offset to set (out-of-place reshaping) */
|
||||
|
||||
/*
|
||||
* 'rebuilds' and 'writemostly' are bitfields that signify
|
||||
* which devices in the array are to be rebuilt or marked
|
||||
* writemostly. The kernel supports up to 253 legs.
|
||||
* We limit ourselves by choosing a lower value
* for DEFAULT_RAID_MAX_IMAGES.
|
||||
*/
|
||||
uint64_t rebuilds[RAID_BITMAP_SIZE];
|
||||
uint64_t writemostly[RAID_BITMAP_SIZE];
|
||||
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
|
||||
uint32_t data_copies; /* RAID # of data copies */
|
||||
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
|
||||
uint32_t sync_daemon_sleep; /* ms (kernel default = 5sec) */
|
||||
uint32_t max_recovery_rate; /* kB/sec/disk */
|
||||
uint32_t min_recovery_rate; /* kB/sec/disk */
|
||||
uint32_t stripe_cache; /* sectors */
|
||||
|
||||
uint64_t flags; /* [no]sync */
|
||||
uint64_t reserved2;
|
||||
};
|
||||
|
||||
int dm_tree_node_add_raid_target_with_params(struct dm_tree_node *node,
|
||||
uint64_t size,
|
||||
const struct dm_tree_node_raid_params *p);
|
||||
|
||||
/* Version 2 API function taking dm_tree_node_raid_params_v2 for aforementioned extensions. */
|
||||
int dm_tree_node_add_raid_target_with_params_v2(struct dm_tree_node *node,
|
||||
uint64_t size,
|
||||
const struct dm_tree_node_raid_params_v2 *p);
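A hedged usage sketch for the v2 call (values are made up; note the same s/64, s%64 indexing into the rebuilds bitmap that _raid_add_target_line() uses):

#include <string.h>
#include <stdint.h>
#include "libdevmapper.h"

/* Hypothetical: load a 3-leg raid1 mapping, asking the kernel to rebuild leg 2. */
static int load_raid1_example(struct dm_tree_node *node, uint64_t len)
{
	struct dm_tree_node_raid_params_v2 params;
	uint32_t s = 2;				/* image index to rebuild */

	memset(&params, 0, sizeof(params));
	params.raid_type = "raid1";
	params.mirrors = 3;
	params.stripes = 1;
	params.region_size = 1024;		/* sectors, made up */
	params.rebuilds[s / 64] |= 1ULL << (s % 64);	/* RAID_BITMAP_SIZE words */

	if (!dm_tree_node_add_raid_target_with_params_v2(node, len, &params))
		return 0;

	/* metadata/data areas are added afterwards with dm_tree_node_add_target_area(). */
	return 1;
}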
|
||||
/* Cache feature_flags */
|
||||
#define DM_CACHE_FEATURE_WRITEBACK 0x00000001
|
||||
#define DM_CACHE_FEATURE_WRITETHROUGH 0x00000002
|
||||
|
@@ -205,14 +205,11 @@ struct load_segment {
|
||||
struct dm_tree_node *replicator;/* Replicator-dev */
|
||||
uint64_t rdevice_index; /* Replicator-dev */
|
||||
|
||||
int delta_disks; /* raid reshape number of disks */
|
||||
int data_offset; /* raid reshape data offset on disk to set */
|
||||
uint64_t rebuilds[RAID_BITMAP_SIZE]; /* raid */
|
||||
uint64_t writemostly[RAID_BITMAP_SIZE]; /* raid */
|
||||
uint64_t rebuilds; /* raid */
|
||||
uint64_t writemostly; /* raid */
|
||||
uint32_t writebehind; /* raid */
|
||||
uint32_t max_recovery_rate; /* raid kB/sec/disk */
|
||||
uint32_t min_recovery_rate; /* raid kB/sec/disk */
|
||||
uint32_t data_copies; /* raid10 data_copies */
|
||||
|
||||
struct dm_tree_node *metadata; /* Thin_pool + Cache */
|
||||
struct dm_tree_node *pool; /* Thin_pool, Thin */
|
||||
@@ -2356,21 +2353,16 @@ static int _mirror_emit_segment_line(struct dm_task *dmt, struct load_segment *s
|
||||
return 1;
|
||||
}
|
||||
|
||||
static int _2_if_value(unsigned p)
|
||||
{
|
||||
return p ? 2 : 0;
|
||||
}
|
||||
/* Is parameter non-zero? */
|
||||
#define PARAM_IS_SET(p) ((p) ? 1 : 0)
|
||||
|
||||
/* Return 2 * number of bits set in @bits (an array of RAID_BITMAP_SIZE 64-bit words) */
static int _get_params_count(uint64_t *bits)
|
||||
/* Return 2 * number of bits set in the single 64-bit @bits value */
static int _get_params_count(uint64_t bits)
|
||||
{
|
||||
int r = 0;
|
||||
int i = RAID_BITMAP_SIZE;
|
||||
|
||||
while (i--) {
|
||||
r += 2 * hweight32(bits[i] & 0xFFFFFFFF);
|
||||
r += 2 * hweight32(bits[i] >> 32);
|
||||
}
|
||||
r += 2 * hweight32(bits & 0xFFFFFFFF);
|
||||
r += 2 * hweight32(bits >> 32);
|
||||
|
||||
return r;
|
||||
}
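A quick worked example of that counting (illustrative only; it uses the GCC/Clang __builtin_popcountll builtin instead of the internal hweight32 helper): every set bit produces a keyword plus an index on the table line, hence the factor of two.

#include <stdint.h>

/* Hypothetical: bits 0, 5 and 70 set -> "rebuild 0 rebuild 5 rebuild 70" -> 6 params. */
static int count_rebuild_params_example(void)
{
	uint64_t rebuilds[RAID_BITMAP_SIZE] = { 0 };
	int i, count = 0;

	rebuilds[0] = (1ULL << 0) | (1ULL << 5);
	rebuilds[70 / 64] |= 1ULL << (70 % 64);

	for (i = 0; i < RAID_BITMAP_SIZE; i++)
		count += 2 * __builtin_popcountll(rebuilds[i]);

	return count;	/* 6 */
}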
@@ -2381,60 +2373,32 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
|
||||
size_t paramsize)
|
||||
{
|
||||
uint32_t i;
|
||||
uint32_t area_count = seg->area_count / 2;
|
||||
int param_count = 1; /* mandatory 'chunk size'/'stripe size' arg */
|
||||
int pos = 0;
|
||||
unsigned type;
|
||||
|
||||
if (seg->area_count % 2)
|
||||
return 0;
|
||||
unsigned type = seg->type;
|
||||
|
||||
if ((seg->flags & DM_NOSYNC) || (seg->flags & DM_FORCESYNC))
|
||||
param_count++;
|
||||
|
||||
param_count += _2_if_value(seg->data_offset) +
|
||||
_2_if_value(seg->delta_disks) +
|
||||
_2_if_value(seg->region_size) +
|
||||
_2_if_value(seg->writebehind) +
|
||||
_2_if_value(seg->min_recovery_rate) +
|
||||
_2_if_value(seg->max_recovery_rate) +
|
||||
_2_if_value(seg->data_copies > 1);
|
||||
param_count += 2 * (PARAM_IS_SET(seg->region_size) +
|
||||
PARAM_IS_SET(seg->writebehind) +
|
||||
PARAM_IS_SET(seg->min_recovery_rate) +
|
||||
PARAM_IS_SET(seg->max_recovery_rate));
|
||||
|
||||
/* rebuilds and writemostly are BITMAP_SIZE * 64 bits */
|
||||
/* rebuilds and writemostly are 64 bits */
|
||||
param_count += _get_params_count(seg->rebuilds);
|
||||
param_count += _get_params_count(seg->writemostly);
|
||||
|
||||
if ((seg->type == SEG_RAID1) && seg->stripe_size)
|
||||
log_info("WARNING: Ignoring RAID1 stripe size");
|
||||
if ((type == SEG_RAID1) && seg->stripe_size)
|
||||
log_error("WARNING: Ignoring RAID1 stripe size");
|
||||
|
||||
/* Kernel only expects "raid0", not "raid0_meta" */
|
||||
type = seg->type;
|
||||
if (type == SEG_RAID0_META)
|
||||
type = SEG_RAID0;
|
||||
#if 0
|
||||
/* Kernel only expects "raid10", not "raid10_{far,offset}" */
|
||||
else if (type == SEG_RAID10_FAR ||
|
||||
type == SEG_RAID10_OFFSET) {
|
||||
param_count += 2;
|
||||
type = SEG_RAID10_NEAR;
|
||||
}
|
||||
#endif
|
||||
|
||||
EMIT_PARAMS(pos, "%s %d %u",
|
||||
// type == SEG_RAID10_NEAR ? "raid10" : _dm_segtypes[type].target,
|
||||
type == SEG_RAID10 ? "raid10" : _dm_segtypes[type].target,
|
||||
EMIT_PARAMS(pos, "%s %d %u", _dm_segtypes[type].target,
|
||||
param_count, seg->stripe_size);
|
||||
|
||||
#if 0
|
||||
if (seg->type == SEG_RAID10_FAR)
|
||||
EMIT_PARAMS(pos, " raid10_format far");
|
||||
else if (seg->type == SEG_RAID10_OFFSET)
|
||||
EMIT_PARAMS(pos, " raid10_format offset");
|
||||
#endif
|
||||
|
||||
if (seg->data_copies > 1 && type == SEG_RAID10)
|
||||
EMIT_PARAMS(pos, " raid10_copies %u", seg->data_copies);
|
||||
|
||||
if (seg->flags & DM_NOSYNC)
|
||||
EMIT_PARAMS(pos, " nosync");
|
||||
else if (seg->flags & DM_FORCESYNC)
|
||||
@@ -2443,38 +2407,27 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
|
||||
if (seg->region_size)
|
||||
EMIT_PARAMS(pos, " region_size %u", seg->region_size);
|
||||
|
||||
/* If seg-data_offset == 1, kernel needs a zero offset to adjust to it */
|
||||
if (seg->data_offset)
|
||||
EMIT_PARAMS(pos, " data_offset %d", seg->data_offset == 1 ? 0 : seg->data_offset);
|
||||
|
||||
if (seg->delta_disks)
|
||||
EMIT_PARAMS(pos, " delta_disks %d", seg->delta_disks);
|
||||
|
||||
for (i = 0; i < area_count; i++)
|
||||
if (seg->rebuilds[i/64] & (1ULL << (i%64)))
|
||||
for (i = 0; i < (seg->area_count / 2); i++)
|
||||
if (seg->rebuilds & (1ULL << i))
|
||||
EMIT_PARAMS(pos, " rebuild %u", i);
|
||||
|
||||
for (i = 0; i < area_count; i++)
|
||||
if (seg->writemostly[i/64] & (1ULL << (i%64)))
|
||||
EMIT_PARAMS(pos, " write_mostly %u", i);
|
||||
|
||||
if (seg->writebehind)
|
||||
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
|
||||
|
||||
/*
|
||||
* Has to be before "min_recovery_rate" or the kernel's
* check will fail when both are set and min > previous max
|
||||
*/
|
||||
if (seg->max_recovery_rate)
|
||||
EMIT_PARAMS(pos, " max_recovery_rate %u",
|
||||
seg->max_recovery_rate);
|
||||
|
||||
if (seg->min_recovery_rate)
|
||||
EMIT_PARAMS(pos, " min_recovery_rate %u",
|
||||
seg->min_recovery_rate);
|
||||
|
||||
if (seg->max_recovery_rate)
|
||||
EMIT_PARAMS(pos, " max_recovery_rate %u",
|
||||
seg->max_recovery_rate);
|
||||
|
||||
for (i = 0; i < (seg->area_count / 2); i++)
|
||||
if (seg->writemostly & (1ULL << i))
|
||||
EMIT_PARAMS(pos, " write_mostly %u", i);
|
||||
|
||||
if (seg->writebehind)
|
||||
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
|
||||
|
||||
/* Print number of metadata/data device pairs */
|
||||
EMIT_PARAMS(pos, " %u", area_count);
|
||||
EMIT_PARAMS(pos, " %u", seg->area_count/2);
|
||||
|
||||
if (_emit_areas_line(dmt, seg, params, paramsize, &pos) <= 0)
|
||||
return_0;
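For orientation, the kind of target line this function ends up emitting (illustration only; device numbers are made up and the format follows the kernel's dm-raid documentation rather than this patch):

/*
 * Illustration only: a two-image raid1 LV with a 1024-sector region size and
 * metadata subLVs would be loaded with a table line roughly like
 *
 *   raid1 3 0 region_size 1024 2 253:4 253:5 253:6 253:7
 *
 * i.e. <raid_type> <#raid_params> <chunk_size> [optional args] <#dev pairs>
 * followed by the <metadata_dev> <data_dev> pairs from _emit_areas_line().
 */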
@@ -2921,8 +2874,8 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
|
||||
else if (child->props.size_changed < 0)
|
||||
dnode->props.size_changed = -1;
|
||||
|
||||
/* No resume for a device without parents or with unchanged or smaller size */
|
||||
if (!dm_tree_node_num_children(child, 1) || (child->props.size_changed <= 0))
|
||||
/* Resume device immediately if it has parents and its size changed */
|
||||
if (!dm_tree_node_num_children(child, 1) || !child->props.size_changed)
|
||||
continue;
|
||||
|
||||
if (!node_created && (dm_list_size(&child->props.segs) == 1)) {
|
||||
@@ -3314,10 +3267,8 @@ int dm_tree_node_add_raid_target_with_params(struct dm_tree_node *node,
|
||||
seg->region_size = p->region_size;
|
||||
seg->stripe_size = p->stripe_size;
|
||||
seg->area_count = 0;
|
||||
memset(seg->rebuilds, 0, sizeof(seg->rebuilds));
|
||||
seg->rebuilds[0] = p->rebuilds;
|
||||
memset(seg->writemostly, 0, sizeof(seg->writemostly));
|
||||
seg->writemostly[0] = p->writemostly;
|
||||
seg->rebuilds = p->rebuilds;
|
||||
seg->writemostly = p->writemostly;
|
||||
seg->writebehind = p->writebehind;
|
||||
seg->min_recovery_rate = p->min_recovery_rate;
|
||||
seg->max_recovery_rate = p->max_recovery_rate;
|
||||
@@ -3345,47 +3296,6 @@ int dm_tree_node_add_raid_target(struct dm_tree_node *node,
|
||||
return dm_tree_node_add_raid_target_with_params(node, size, ¶ms);
|
||||
}
|
||||
|
||||
/*
|
||||
* Version 2 of dm_tree_node_add_raid_target() allowing for:
|
||||
*
|
||||
* - maximum 253 legs in a raid set (MD kernel limitation)
|
||||
* - delta_disks for disk add/remove reshaping
|
||||
* - data_offset for out-of-place reshaping
|
||||
* - data_copies to cope with odd numbers of raid10 disks
*/
|
||||
int dm_tree_node_add_raid_target_with_params_v2(struct dm_tree_node *node,
|
||||
uint64_t size,
|
||||
const struct dm_tree_node_raid_params_v2 *p)
|
||||
{
|
||||
unsigned i;
|
||||
struct load_segment *seg = NULL;
|
||||
|
||||
for (i = 0; i < DM_ARRAY_SIZE(_dm_segtypes) && !seg; ++i)
|
||||
if (!strcmp(p->raid_type, _dm_segtypes[i].target))
|
||||
if (!(seg = _add_segment(node,
|
||||
_dm_segtypes[i].type, size)))
|
||||
return_0;
|
||||
if (!seg) {
|
||||
log_error("Unsupported raid type %s.", p->raid_type);
|
||||
return 0;
|
||||
}
|
||||
|
||||
seg->region_size = p->region_size;
|
||||
seg->stripe_size = p->stripe_size;
|
||||
seg->area_count = 0;
|
||||
seg->delta_disks = p->delta_disks;
|
||||
seg->data_offset = p->data_offset;
|
||||
memcpy(seg->rebuilds, p->rebuilds, sizeof(seg->rebuilds));
|
||||
memcpy(seg->writemostly, p->writemostly, sizeof(seg->writemostly));
|
||||
seg->writebehind = p->writebehind;
|
||||
seg->data_copies = p->data_copies;
|
||||
seg->min_recovery_rate = p->min_recovery_rate;
|
||||
seg->max_recovery_rate = p->max_recovery_rate;
|
||||
seg->flags = p->flags;
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
int dm_tree_node_add_cache_target(struct dm_tree_node *node,
|
||||
uint64_t size,
|
||||
uint64_t feature_flags, /* DM_CACHE_FEATURE_* */
|
||||
|
@@ -3062,31 +3062,26 @@ static void _get_final_time(time_range_t range, struct tm *tm,
|
||||
tm_up.tm_sec += 1;
|
||||
break;
|
||||
}
|
||||
/* fall through */
|
||||
case RANGE_MINUTE:
|
||||
if (tm_up.tm_min < 59) {
|
||||
tm_up.tm_min += 1;
|
||||
break;
|
||||
}
|
||||
/* fall through */
|
||||
case RANGE_HOUR:
|
||||
if (tm_up.tm_hour < 23) {
|
||||
tm_up.tm_hour += 1;
|
||||
break;
|
||||
}
|
||||
/* fall through */
|
||||
case RANGE_DAY:
|
||||
if (tm_up.tm_mday < _get_days_in_month(tm_up.tm_mon, tm_up.tm_year)) {
|
||||
tm_up.tm_mday += 1;
|
||||
break;
|
||||
}
|
||||
/* fall through */
|
||||
case RANGE_MONTH:
|
||||
if (tm_up.tm_mon < 11) {
|
||||
tm_up.tm_mon += 1;
|
||||
break;
|
||||
}
|
||||
/* fall through */
|
||||
case RANGE_YEAR:
|
||||
tm_up.tm_year += 1;
|
||||
break;
|
||||
@@ -4209,7 +4204,7 @@ static void _recalculate_fields(struct dm_report *rh)
|
||||
{
|
||||
struct row *row;
|
||||
struct dm_report_field *field;
|
||||
int len;
|
||||
size_t len;
|
||||
|
||||
dm_list_iterate_items(row, &rh->rows) {
|
||||
dm_list_iterate_items(field, &row->fields) {
|
||||
|
@@ -402,7 +402,7 @@ static int _stats_bound(const struct dm_stats *dms)
|
||||
if (dms->bind_major > 0 || dms->bind_name || dms->bind_uuid)
|
||||
return 1;
|
||||
/* %p format specifier expects a void pointer. */
|
||||
log_debug("Stats handle at %p is not bound.", dms);
|
||||
log_debug("Stats handle at %p is not bound.", (void *) dms);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -3294,7 +3294,7 @@ static void _sum_histogram_bins(const struct dm_stats *dms,
|
||||
struct dm_stats_region *region;
|
||||
struct dm_histogram_bin *bins;
|
||||
struct dm_histogram *dmh_cur;
|
||||
int bin;
|
||||
uint64_t bin;
|
||||
|
||||
region = &dms->regions[region_id];
|
||||
dmh_cur = region->counters[area_id].histogram;
|
||||
@@ -3857,9 +3857,9 @@ struct _extent {
|
||||
*/
|
||||
static int _extent_start_compare(const void *p1, const void *p2)
|
||||
{
|
||||
const struct _extent *r1, *r2;
|
||||
r1 = (const struct _extent *) p1;
|
||||
r2 = (const struct _extent *) p2;
|
||||
struct _extent *r1, *r2;
|
||||
r1 = (struct _extent *) p1;
|
||||
r2 = (struct _extent *) p2;
|
||||
|
||||
if (r1->start < r2->start)
|
||||
return -1;
|
||||
@@ -3868,6 +3868,37 @@ static int _extent_start_compare(const void *p1, const void *p2)
|
||||
return 1;
|
||||
}
|
||||
|
||||
/*
|
||||
* Resize the group bitmap corresponding to group_id so that it can
|
||||
* contain at least num_regions members.
|
||||
*/
|
||||
static int _stats_resize_group(struct dm_stats_group *group, int num_regions)
|
||||
{
|
||||
int last_bit = dm_bit_get_last(group->regions);
|
||||
dm_bitset_t new, old;
|
||||
|
||||
if (last_bit >= num_regions) {
|
||||
log_error("Cannot resize group bitmap to %d with bit %d set.",
|
||||
num_regions, last_bit);
|
||||
return 0;
|
||||
}
|
||||
|
||||
log_very_verbose("Resizing group bitmap from %d to %d (last_bit: %d).",
|
||||
group->regions[0], num_regions, last_bit);
|
||||
|
||||
new = dm_bitset_create(NULL, num_regions);
|
||||
if (!new) {
|
||||
log_error("Could not allocate memory for new group bitmap.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
old = group->regions;
|
||||
dm_bit_copy(new, old);
|
||||
group->regions = new;
|
||||
dm_bitset_destroy(old);
|
||||
return 1;
|
||||
}
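The grow-and-copy pattern in isolation, as a minimal sketch using the same libdevmapper bitset calls (sizes are arbitrary):

#include "libdevmapper.h"

/* Hypothetical: grow a bitset from 8 to 64 bits while keeping existing members. */
static dm_bitset_t grow_bitset_example(void)
{
	dm_bitset_t old_bits = dm_bitset_create(NULL, 8);
	dm_bitset_t new_bits = dm_bitset_create(NULL, 64);

	if (!old_bits || !new_bits)
		return NULL;

	dm_bit_set(old_bits, 3);
	dm_bit_copy(new_bits, old_bits);	/* bit 3 stays set in the larger set */
	dm_bitset_destroy(old_bits);

	return new_bits;
}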
|
||||
static int _stats_create_group(struct dm_stats *dms, dm_bitset_t regions,
|
||||
const char *alias, uint64_t *group_id)
|
||||
{
|
||||
@@ -3972,7 +4003,7 @@ merge:
|
||||
static void _stats_copy_histogram_bounds(struct dm_histogram *to,
|
||||
struct dm_histogram *from)
|
||||
{
|
||||
int i;
|
||||
uint64_t i;
|
||||
|
||||
to->nr_bins = from->nr_bins;
|
||||
|
||||
@@ -3988,7 +4019,7 @@ static void _stats_copy_histogram_bounds(struct dm_histogram *to,
|
||||
static int _stats_check_histogram_bounds(struct dm_histogram *h1,
|
||||
struct dm_histogram *h2)
|
||||
{
|
||||
int i;
|
||||
uint64_t i;
|
||||
|
||||
if (!h1 || !h2)
|
||||
return 0;
|
||||
@@ -4171,37 +4202,6 @@ int dm_stats_get_group_descriptor(const struct dm_stats *dms,
|
||||
}
|
||||
|
||||
#ifdef HAVE_LINUX_FIEMAP_H
|
||||
/*
|
||||
* Resize the group bitmap corresponding to group_id so that it can
|
||||
* contain at least num_regions members.
|
||||
*/
|
||||
static int _stats_resize_group(struct dm_stats_group *group, int num_regions)
|
||||
{
|
||||
int last_bit = dm_bit_get_last(group->regions);
|
||||
dm_bitset_t new, old;
|
||||
|
||||
if (last_bit >= num_regions) {
|
||||
log_error("Cannot resize group bitmap to %d with bit %d set.",
|
||||
num_regions, last_bit);
|
||||
return 0;
|
||||
}
|
||||
|
||||
log_very_verbose("Resizing group bitmap from %d to %d (last_bit: %d).",
|
||||
group->regions[0], num_regions, last_bit);
|
||||
|
||||
new = dm_bitset_create(NULL, num_regions);
|
||||
if (!new) {
|
||||
log_error("Could not allocate memory for new group bitmap.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
old = group->regions;
|
||||
dm_bit_copy(new, old);
|
||||
group->regions = new;
|
||||
dm_bitset_destroy(old);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/*
|
||||
* Group a table of region_ids corresponding to the extents of a file.
|
||||
*/
|
||||
@@ -4557,7 +4557,7 @@ static int _stats_unmap_regions(struct dm_stats *dms, uint64_t group_id,
|
||||
log_error("Could not finalize region extent table.");
|
||||
goto out;
|
||||
}
|
||||
log_very_verbose("Kept " FMTi64 " of " FMTi64 " old extents",
|
||||
log_very_verbose("Kept %ld of %ld old extents",
|
||||
nr_kept, nr_old);
|
||||
log_very_verbose("Found " FMTu64 " new extents",
|
||||
*count - nr_kept);
|
||||
@@ -4725,7 +4725,7 @@ static uint64_t *_stats_map_file_regions(struct dm_stats *dms, int fd,
|
||||
|
||||
dm_pool_free(extent_mem, extents);
|
||||
dm_pool_destroy(extent_mem);
|
||||
|
||||
dm_free(hist_arg);
|
||||
return regions;
|
||||
|
||||
out_remove:
|
||||
@@ -4842,7 +4842,7 @@ uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
if (!bounds) {
|
||||
log_error("Could not allocate memory for group "
|
||||
"histogram bounds.");
|
||||
goto out;
|
||||
return NULL;
|
||||
}
|
||||
_stats_copy_histogram_bounds(bounds,
|
||||
dms->regions[group_id].bounds);
|
||||
@@ -4869,160 +4869,10 @@ uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
bad:
|
||||
_stats_cleanup_region_ids(dms, regions, count);
|
||||
dm_free(bounds);
|
||||
dm_free(regions);
|
||||
out:
|
||||
dm_free((char *) alias);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
#ifdef DMFILEMAPD
|
||||
static const char *_filemapd_mode_names[] = {
|
||||
"inode",
|
||||
"path",
|
||||
NULL
|
||||
};
|
||||
|
||||
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str)
|
||||
{
|
||||
dm_filemapd_mode_t mode = DM_FILEMAPD_FOLLOW_INODE;
|
||||
const char **mode_name;
|
||||
|
||||
if (mode_str) {
|
||||
for (mode_name = _filemapd_mode_names; *mode_name; mode_name++)
|
||||
if (!strcmp(*mode_name, mode_str))
|
||||
break;
|
||||
if (*mode_name)
|
||||
mode = DM_FILEMAPD_FOLLOW_INODE
|
||||
+ (mode_name - _filemapd_mode_names);
|
||||
else {
|
||||
log_error("Could not parse dmfilemapd mode: %s",
|
||||
mode_str);
|
||||
return DM_FILEMAPD_FOLLOW_NONE;
|
||||
}
|
||||
}
|
||||
return mode;
|
||||
}
|
||||
|
||||
#define DM_FILEMAPD "dmfilemapd"
|
||||
#define NR_FILEMAPD_ARGS 6
|
||||
/*
|
||||
* Start dmfilemapd to monitor the specified file descriptor, and to
|
||||
* update the group given by 'group_id' when the file's allocation
|
||||
* changes.
|
||||
*
|
||||
* usage: dmfilemapd <fd> <group_id> <mode> [<foreground>[<log_level>]]
|
||||
*/
|
||||
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
|
||||
dm_filemapd_mode_t mode, unsigned foreground,
|
||||
unsigned verbose)
|
||||
{
|
||||
char fd_str[8], group_str[8], fg_str[2], verb_str[2];
|
||||
const char *mode_str = _filemapd_mode_names[mode];
|
||||
char *args[NR_FILEMAPD_ARGS + 1];
|
||||
pid_t pid = 0;
|
||||
int argc = 0;
|
||||
|
||||
if (fd < 0) {
|
||||
log_error("dmfilemapd file descriptor must be "
|
||||
"non-negative: %d", fd);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (mode < DM_FILEMAPD_FOLLOW_INODE
|
||||
|| mode > DM_FILEMAPD_FOLLOW_PATH) {
|
||||
log_error("Invalid dmfilemapd mode argument: "
|
||||
"Must be DM_FILEMAPD_FOLLOW_INODE or "
|
||||
"DM_FILEMAPD_FOLLOW_PATH");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (foreground > 1) {
|
||||
log_error("Invalid dmfilemapd foreground argument. "
|
||||
"Must be 0 or 1: %d.", foreground);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (verbose > 3) {
|
||||
log_error("Invalid dmfilemapd verbose argument. "
|
||||
"Must be 0..3: %d.", verbose);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* set argv[0] */
|
||||
args[argc++] = (char *) DM_FILEMAPD;
|
||||
|
||||
/* set <fd> */
|
||||
if ((dm_snprintf(fd_str, sizeof(fd_str), "%d", fd)) < 0) {
|
||||
log_error("Could not format fd argument.");
|
||||
return 0;
|
||||
}
|
||||
args[argc++] = fd_str;
|
||||
|
||||
/* set <group_id> */
|
||||
if ((dm_snprintf(group_str, sizeof(group_str), FMTu64, group_id)) < 0) {
|
||||
log_error("Could not format group_id argument.");
|
||||
return 0;
|
||||
}
|
||||
args[argc++] = group_str;
|
||||
|
||||
/* set <path> */
|
||||
args[argc++] = (char *) path;
|
||||
|
||||
/* set <mode> */
|
||||
args[argc++] = (char *) mode_str;
|
||||
|
||||
/* set <foreground> */
|
||||
if ((dm_snprintf(fg_str, sizeof(fg_str), "%u", foreground)) < 0) {
|
||||
log_error("Could not format foreground argument.");
|
||||
return 0;
|
||||
}
|
||||
args[argc++] = fg_str;
|
||||
|
||||
/* set <verbose> */
|
||||
if ((dm_snprintf(verb_str, sizeof(verb_str), "%u", verbose)) < 0) {
|
||||
log_error("Could not format verbose argument.");
|
||||
return 0;
|
||||
}
|
||||
args[argc++] = verb_str;
|
||||
|
||||
/* terminate args[argc] */
|
||||
args[argc] = NULL;
|
||||
|
||||
log_very_verbose("Spawning daemon as '%s %d " FMTu64 " %s %s %u %u'",
|
||||
*args, fd, group_id, path, mode_str,
|
||||
foreground, verbose);
|
||||
|
||||
if (!foreground && ((pid = fork()) < 0)) {
|
||||
log_error("Failed to fork filemapd process.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (pid > 0) {
|
||||
log_very_verbose("Forked filemapd process as pid %d", pid);
|
||||
return 1;
|
||||
}
|
||||
|
||||
execvp(args[0], args);
|
||||
log_error("execvp() failed.");
|
||||
if (!foreground)
|
||||
_exit(127);
|
||||
return 0;
|
||||
}
|
||||
# else /* !DMFILEMAPD */
|
||||
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str)
|
||||
{
|
||||
return 0;
|
||||
};
|
||||
|
||||
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
|
||||
dm_filemapd_mode_t mode, unsigned foreground,
|
||||
unsigned verbose)
|
||||
{
|
||||
log_error("dmfilemapd support disabled.");
|
||||
return 0;
|
||||
}
|
||||
#endif /* DMFILEMAPD */
|
||||
|
||||
#else /* HAVE_LINUX_FIEMAP */
|
||||
|
||||
uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
@@ -5040,13 +4890,6 @@ uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
|
||||
log_error("File mapping requires FIEMAP ioctl support.");
|
||||
return 0;
|
||||
}
|
||||
|
||||
int dm_stats_start_filemapd(struct dm_stats *dms, int fd, uint64_t group_id,
|
||||
const char *path)
|
||||
{
|
||||
log_error("File mapping requires FIEMAP ioctl support.");
|
||||
return 0;
|
||||
}
|
||||
#endif /* HAVE_LINUX_FIEMAP */
|
||||
|
||||
/*
|
||||
|
@@ -626,7 +626,7 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
|
||||
uint64_t multiplier;
|
||||
|
||||
if (endptr)
|
||||
*endptr = units;
|
||||
*endptr = (char *) units;
|
||||
|
||||
if (isdigit(*units)) {
|
||||
custom_value = strtod(units, &ptr);
|
||||
@@ -709,7 +709,7 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
|
||||
}
|
||||
|
||||
if (endptr)
|
||||
*endptr = units + 1;
|
||||
*endptr = (char *) units + 1;
|
||||
|
||||
if (_close_enough(custom_value, 0.))
|
||||
return v * multiplier; /* Use integer arithmetic */
|
||||
|
@@ -89,8 +89,6 @@ static unsigned _count_fields(const char *p)
|
||||
* <raid_type> <#devs> <health_str> <sync_ratio>
|
||||
* Versions 1.5.0+ (6 fields):
|
||||
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt>
|
||||
* Versions 1.9.0+ (7 fields):
|
||||
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt> <data_offset>
|
||||
*/
|
||||
int dm_get_status_raid(struct dm_pool *mem, const char *params,
|
||||
struct dm_status_raid **status)
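For illustration, a hedged sketch of consuming a 1.9.0+ status line with this function (the params string below is a made-up example of the 7-field form described above):

#include <stdio.h>
#include <inttypes.h>
#include "libdevmapper.h"

/* Hypothetical: read health and data_offset from a dm-raid status line. */
static void raid_status_example(struct dm_pool *mem)
{
	const char *params = "raid1 2 AA 100/100 idle 0 0";	/* made-up 7-field status */
	struct dm_status_raid *s = NULL;

	if (dm_get_status_raid(mem, params, &s))
		printf("health=%s data_offset=%" PRIu64 "\n",
		       s->dev_health, s->data_offset);
}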
@@ -149,22 +147,6 @@ int dm_get_status_raid(struct dm_pool *mem, const char *params,
|
||||
if (sscanf(p, "%s %" PRIu64, s->sync_action, &s->mismatch_count) != 2)
|
||||
goto_bad;
|
||||
|
||||
if (num_fields < 7)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
* All pre-1.9.0 version parameters are read. Now we check
|
||||
* for additional 1.9.0+ parameters (i.e. nr_fields at least 7).
|
||||
*
|
||||
* Note that data_offset will be 0 if the
|
||||
* kernel returns a pre-1.9.0 status.
|
||||
*/
|
||||
msg_fields = "<data_offset>";
|
||||
if (!(p = _skip_fields(params, 6))) /* skip pre-1.9.0 params */
|
||||
goto bad;
|
||||
if (sscanf(p, "%" PRIu64, &s->data_offset) != 1)
|
||||
goto bad;
|
||||
|
||||
out:
|
||||
*status = s;
|
||||
|
||||
|
@@ -40,11 +40,6 @@ SED = @SED@
|
||||
CFLOW_CMD = @CFLOW_CMD@
|
||||
AWK = @AWK@
|
||||
CHMOD = @CHMOD@
|
||||
EGREP = @EGREP@
|
||||
GREP = @GREP@
|
||||
SORT = @SORT@
|
||||
WC = @WC@
|
||||
|
||||
PYTHON2 = @PYTHON2@
|
||||
PYTHON3 = @PYTHON3@
|
||||
PYCOMPILE = $(top_srcdir)/autoconf/py-compile
|
||||
@@ -517,9 +512,9 @@ ifeq (,$(firstword $(EXPORTED_SYMBOLS)))
|
||||
) > $@
|
||||
else
|
||||
set -e;\
|
||||
R=$$($(SORT) $^ | uniq -u);\
|
||||
R=$$(sort $^ | uniq -u);\
|
||||
test -z "$$R" || { echo "Mismatch between symbols in shared library and lists in .exported_symbols.* files: $$R"; false; } ;\
|
||||
( for i in $$(echo $(EXPORTED_SYMBOLS) | tr ' ' '\n' | $(SORT) -rnt_ -k5 ); do\
|
||||
( for i in $$(echo $(EXPORTED_SYMBOLS) | tr ' ' '\n' | sort -rnt_ -k5 ); do\
|
||||
echo "$${i##*.} {"; echo " global:";\
|
||||
$(SED) "s/^/ /;s/$$/;/" $$i;\
|
||||
echo "};";\
|
||||
|
@@ -45,9 +45,6 @@ MAN8GEN=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
|
||||
vgimport.8 vgimportclone.8 vgmerge.8 vgmknodes.8 vgreduce.8 vgremove.8 \
|
||||
vgrename.8 vgs.8 vgscan.8 vgsplit.8 \
|
||||
lvmsar.8 lvmsadc.8 lvmdiskscan.8 lvmchange.8
|
||||
MAN8DM=dmsetup.8 dmstats.8 dmfilemapd.8
|
||||
MAN8CLUSTER=
|
||||
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
|
||||
|
||||
ifeq ($(MAKECMDGOALS),all_man)
|
||||
MAN_ALL="yes"
|
||||
@@ -147,12 +144,10 @@ Makefile: Makefile.in
|
||||
|
||||
man-generator:
|
||||
$(CC) -DMAN_PAGE_GENERATOR -I$(top_builddir)/tools $(CFLAGS) $(top_srcdir)/tools/command.c -o $@
|
||||
- ./man-generator --primary lvmconfig > test.gen
|
||||
if [ ! -s test.gen ] ; then cp genfiles/*.gen $(top_builddir)/man; fi;
|
||||
|
||||
$(MAN8GEN): man-generator
|
||||
echo "Generating $@" ;
|
||||
if [ ! -e $@.gen ]; then ./man-generator --primary $(basename $@) $(top_srcdir)/man/$@.des > $@.gen; ./man-generator --secondary $(basename $@) >> $@.gen; fi
|
||||
./man-generator $(basename $@) > $@.gen
|
||||
if [ -f $(top_srcdir)/man/$@.end ]; then cat $(top_srcdir)/man/$@.end >> $@.gen; fi;
|
||||
cat $(top_srcdir)/man/see_also.end >> $@.gen
|
||||
$(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $@.gen > $@
|
||||
|
@@ -23,6 +23,40 @@ dmeventd is the event monitoring daemon for device-mapper devices.
|
||||
Library plugins can register and carry out actions triggered when
|
||||
particular events occur.
|
||||
.
|
||||
.SH LVM PLUGINS
|
||||
.
|
||||
.HP
|
||||
.IR Mirror
|
||||
.br
|
||||
Attempts to handle device failure automatically. See
|
||||
.BR lvm.conf (5).
|
||||
.
|
||||
.HP
|
||||
.IR Raid
|
||||
.br
|
||||
Attempts to handle device failure automatically. See
|
||||
.BR lvm.conf (5).
|
||||
.
|
||||
.HP
|
||||
.IR Snapshot
|
||||
.br
|
||||
Monitors how full a snapshot is becoming and emits a warning to
|
||||
syslog when it exceeds 80% full.
|
||||
The warning is repeated when 85%, 90% and 95% of the snapshot is filled.
|
||||
See
|
||||
.BR lvm.conf (5).
|
||||
A snapshot which runs out of space becomes invalid and, if it is mounted,
it is unmounted if possible.
|
||||
.
|
||||
.HP
|
||||
.IR Thin
|
||||
.br
|
||||
Monitors how full a thin pool's data and metadata volumes are becoming and emits
a warning to syslog when usage exceeds 80%.
|
||||
The warning is repeated when 85%, 90% and 95% of the thin pool is filled.
|
||||
See
|
||||
.BR lvm.conf (5).
|
||||
If the thin pool runs out of space, thin volumes are unmounted if possible.
.
|
||||
.SH OPTIONS
|
||||
.
|
||||
@@ -70,80 +104,6 @@ events to monitor from the currently running daemon.
|
||||
.br
|
||||
Show version of dmeventd.
|
||||
.
|
||||
.SH LVM PLUGINS
|
||||
.
|
||||
.HP
|
||||
.BR Mirror
|
||||
.br
|
||||
Attempts to handle device failure automatically. See
|
||||
.BR lvm.conf (5).
|
||||
.
|
||||
.HP
|
||||
.BR Raid
|
||||
.br
|
||||
Attempts to handle device failure automatically. See
|
||||
.BR lvm.conf (5).
|
||||
.
|
||||
.HP
|
||||
.BR Snapshot
|
||||
.br
|
||||
Monitors how full a snapshot is becoming and emits a warning to
|
||||
syslog when it exceeds 80% full.
|
||||
The warning is repeated when 85%, 90% and 95% of the snapshot is filled.
|
||||
See
|
||||
.BR lvm.conf (5).
|
||||
A snapshot which runs out of space becomes invalid and, if it is mounted,
it is unmounted if possible.
|
||||
.
|
||||
.HP
|
||||
.BR Thin
|
||||
.br
|
||||
Monitors how full a thin pool's data and metadata volumes are becoming and emits
a warning to syslog when usage exceeds 80%.
The warning is repeated when more than 85%, 90% and 95%
of the thin pool is filled. See
|
||||
.BR lvm.conf (5).
|
||||
When a thin pool fills over 50% (data or metadata), the thin plugin calls the
configured \fIdmeventd/thin_command\fP with every 5% increase.
|
||||
With default setting it calls internal
|
||||
\fBlvm lvextend --use-policies\fP to resize thin pool
|
||||
when it's been filled above configured threshold
|
||||
\fIactivation/thin_pool_autoextend_threshold\fP.
|
||||
If the command fails, the dmeventd thin plugin will keep
retrying execution with an increasing time delay between
retries, up to 42 minutes.
The user may also configure an external command to support more advanced
maintenance operations of a thin pool.
Such an external command can e.g. remove some unneeded snapshots,
use \fBfstrim\fP(8) to recover free space in a thin pool,
|
||||
but can also use \fBlvextend --use-policies\fP if other actions
have not released enough space.
The command is executed with the environment variable
|
||||
\fBLVM_RUN_BY_DMEVENTD=1\fP so any lvm2 command executed
|
||||
in this environment will not try to interact with dmeventd.
|
||||
To see the fullness of a thin pool, the command may check these
two environment variables
\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_METADATA\fP.
|
||||
Command can also read status with tools like \fBlvs\fP(8).
|
||||
.
|
||||
.SH ENVIRONMENT VARIABLES
|
||||
.
|
||||
.TP
|
||||
.B DMEVENTD_THIN_POOL_DATA
|
||||
The variable is set by the thin plugin and is available to the executed program. Its value reflects the
actual usage of the thin pool data volume. The variable is not set when an error event
is processed.
|
||||
.TP
|
||||
.B DMEVENTD_THIN_POOL_DATA
|
||||
Variable is set by thin plugin and is available to executed program. Value present
|
||||
actual usage of thin pool metadata volume. Variable is not set when error event
|
||||
is processed.
|
||||
.TP
|
||||
.B LVM_RUN_BY_DMEVENTD
|
||||
The variable is set by the thin plugin to prohibit recursive interaction
with dmeventd by any executed lvm2 command from
|
||||
a thin_command environment.
|
||||
.
|
||||
.SH SEE ALSO
|
||||
.
|
||||
.BR lvm (8),
|
||||
|
@@ -1,212 +0,0 @@
|
||||
.TH DMFILEMAPD 8 "Dec 17 2016" "Linux" "MAINTENANCE COMMANDS"
|
||||
|
||||
.de OPT_FD
|
||||
. RB [ file_descriptor ]
|
||||
..
|
||||
.
|
||||
.de OPT_GROUP
|
||||
. RB [ group_id ]
|
||||
..
|
||||
.de OPT_PATH
|
||||
. RB [ path ]
|
||||
..
|
||||
.
|
||||
.de OPT_MODE
|
||||
. RB [ mode ]
|
||||
..
|
||||
.
|
||||
.de OPT_DEBUG
|
||||
. RB [ foreground [ verbose ] ]
|
||||
..
|
||||
.
|
||||
.SH NAME
|
||||
.
|
||||
dmfilemapd \(em device-mapper filemap monitoring daemon
|
||||
.
|
||||
.SH SYNOPSIS
|
||||
.
|
||||
.de CMD_DMFILEMAPD
|
||||
. ad l
|
||||
. IR dmfilemapd
|
||||
. OPT_FD
|
||||
. OPT_GROUP
|
||||
. OPT_PATH
|
||||
. OPT_MODE
|
||||
. OPT_DEBUG
|
||||
. ad b
|
||||
..
|
||||
.CMD_DMFILEMAPD
|
||||
.
|
||||
.PD
|
||||
.ad b
|
||||
.
|
||||
.SH DESCRIPTION
|
||||
.
|
||||
The dmfilemapd daemon monitors groups of \fIdmstats\fP regions that
|
||||
correspond to the extents of a file, adding and removing regions to
|
||||
reflect the changing state of the file on-disk.
|
||||
|
||||
The daemon is normally launched automatically by the \fBdmstats
|
||||
create\fP command, but can be run manually, either to create a new
|
||||
daemon where one did not previously exist, or to change the options
|
||||
previously used, by killing the existing daemon and starting a new
|
||||
one.
|
||||
.
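.P
For example, creating a file mapped region group with \fBdmstats\fP
normally starts the daemon automatically (the file path here is
illustrative only):
.br
#
.B dmstats create \-\-filemap /var/lib/libvirt/images/vm.img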
|
||||
.SH OPTIONS
|
||||
.
|
||||
.HP
|
||||
.BR file_descriptor
|
||||
.br
|
||||
Specify the file descriptor number for the file to be monitored.
|
||||
The file descriptor must reference a regular file, open for reading,
|
||||
in a local file system that supports the FIEMAP ioctl, and that
|
||||
returns data describing the physical location of extents.
|
||||
|
||||
The process that executes \fBdmfilemapd\fP is responsible for
|
||||
opening the file descriptor that is handed to the daemon.
|
||||
.
|
||||
.HP
|
||||
.BR group_id
|
||||
.br
|
||||
The \fBdmstats\fP group identifier of the group that \fBdmfilemapd\fP
|
||||
should update. The group must exist and it should correspond to
|
||||
a set of regions created by a previous filemap operation.
|
||||
.
|
||||
.HP
|
||||
.BR path
|
||||
.br
|
||||
The path to the file being monitored, at the time that it was
|
||||
opened. The use of \fBpath\fP by the daemon differs, depending
|
||||
on the filemap following mode in use; see \fBMODES\fP and the
|
||||
\fBmode\fP option for more information.
|
||||
|
||||
.br
|
||||
.HP
|
||||
.BR mode
|
||||
.br
|
||||
The filemap monitoring mode the daemon should use: either "inode"
|
||||
(\fBDM_FILEMAP_FOLLOW_INODE\fP), or "path"
|
||||
(\fBDM_FILEMAP_FOLLOW_PATH\fP), to enable follow-inode or
|
||||
follow-path mode respectively.
|
||||
.
|
||||
.HP
|
||||
.BR [foreground]
|
||||
.br
|
||||
If set to 1, disable forking and allow the daemon to run in the
|
||||
foreground.
|
||||
.
|
||||
.HP
|
||||
.BR [verbose]
|
||||
Control daemon logging. If set to zero, the daemon will close all
|
||||
stdio streams and run silently. If \fBverbose\fP is a number
|
||||
between 1 and 3, stdio will be retained and the daemon will log
|
||||
messages to stdout and stderr that match the specified verbosity
|
||||
level.
|
||||
.
|
||||
.
|
||||
.SH MODES
|
||||
.
|
||||
The file map monitoring daemon can monitor files in two distinct
|
||||
ways: the mode affects the behaviour of the daemon when a file
|
||||
under monitoring is renamed or unlinked, and the conditions which
|
||||
cause the daemon to terminate.
|
||||
|
||||
In both modes, the daemon will always shut down when the group
|
||||
being monitored is deleted.
|
||||
|
||||
.P
|
||||
.B Follow inode
|
||||
.P
|
||||
The daemon follows the inode of the file, as it was at the time the
|
||||
daemon started. The file descriptor referencing the file is kept
|
||||
open at all times, and the daemon will exit when it detects that
|
||||
the file has been unlinked and it is the last holder of a reference
|
||||
to the file.
|
||||
|
||||
This mode is useful if the file is expected to be renamed, or moved
|
||||
within the file system, while it is being monitored.
|
||||
|
||||
.P
|
||||
.B Follow path
|
||||
.P
|
||||
The daemon follows the path that was given on the daemon command
|
||||
line. The file descriptor referencing the file is re-opened on each
|
||||
iteration of the daemon, and the daemon will exit if no file exists
|
||||
at this location (a tolerance is allowed so that a brief delay
|
||||
between removal and replacement is permitted).
|
||||
|
||||
This mode is useful if the file is updated by unlinking the original
|
||||
and placing a new file at the same path.
|
||||
.
|
||||
.SH LIMITATIONS
|
||||
.
|
||||
The daemon attempts to maintain good synchronisation between the file
|
||||
extents and the regions contained in the group; however, since the
|
||||
daemon can only react to new allocations once they have been written,
|
||||
there are inevitably some IO events that cannot be counted when a
|
||||
file is growing, particularly if the file is being extended by a
|
||||
single thread writing beyond EOF (for example, the \fBdd\fP program).
|
||||
|
||||
There is a further loss of events in that there is currently no way
|
||||
to atomically resize a \fBdmstats\fP region and preserve its current
|
||||
counter values. This affects files when they grow by extending the
|
||||
final extent, rather than allocating a new extent: any events that
|
||||
had accumulated in the region between any prior operation and the
|
||||
resize are lost.
|
||||
|
||||
File mapping is currently most effective in cases where the majority
|
||||
of IO does not trigger extent allocation. Future updates may address
|
||||
these limitations when kernel support is available.
|
||||
.
|
||||
.SH EXAMPLES
|
||||
.
|
||||
Normally the daemon is started automatically by the \fBdmstats\fP
|
||||
\fBcreate\fP or \fBupdate_filemap\fP commands but it can be run
|
||||
manually for debugging or testing purposes.
|
||||
.P
|
||||
Start the daemon in the background, in follow-path mode
|
||||
.br
|
||||
#
|
||||
.B dmfilemapd 3 0 vm.img path 0 0 3< vm.img
|
||||
.br
|
||||
.P
|
||||
Start the daemon in follow-inode mode, disable forking and enable
|
||||
verbose logging
|
||||
.br
|
||||
#
|
||||
.B dmfilemapd 3 0 vm.img inode 1 3 3< vm.img
|
||||
.br
|
||||
Starting dmfilemapd with fd=3, group_id=0 mode=inode, path=vm.img
|
||||
.br
|
||||
dm version [ opencount flush ] [16384] (*1)
|
||||
.br
|
||||
dm info (253:0) [ opencount flush ] [16384] (*1)
|
||||
.br
|
||||
dm message (253:0) [ opencount flush ] @stats_list dmstats [16384] (*1)
|
||||
.br
|
||||
Read alias 'vm.img' from aux_data
|
||||
.br
|
||||
Found group_id 0: alias="vm.img"
|
||||
.br
|
||||
dm_stats_walk_init: initialised flags to 4000000000000
|
||||
.br
|
||||
starting stats walk with GROUP
|
||||
.br
|
||||
exiting _filemap_monitor_get_events() with deleted=0, check=0
|
||||
.br
|
||||
waiting for FILEMAPD_WAIT
|
||||
.br
|
||||
.P
|
||||
.
|
||||
.SH AUTHORS
|
||||
.
|
||||
Bryn M. Reeves <bmr@redhat.com>
|
||||
.
|
||||
.SH SEE ALSO
|
||||
.
|
||||
.BR dmstats (8)
|
||||
|
||||
LVM2 resource page: https://www.sourceware.org/lvm2/
|
||||
.br
|
||||
Device-mapper resource page: http://sources.redhat.com/dm/
|
||||
.br
|
@@ -27,9 +27,9 @@ dmsetup \(em low level logical volume management
|
||||
. IR uuid ]
|
||||
. RB \%[ \-\-addnodeoncreate | \-\-addnodeonresume ]
|
||||
. RB \%[ \-n | \-\-notable | \-\-table
|
||||
. IR \%table | table_file ]
|
||||
. RI \%{ table | table_file }]
|
||||
. RB [ \-\-readahead
|
||||
. RB \%[ + ] \fIsectors | auto | none ]
|
||||
. RB \%{[ + ] \fIsectors | auto | none }]
|
||||
. ad b
|
||||
..
|
||||
.CMD_CREATE
|
||||
@@ -41,7 +41,7 @@ dmsetup \(em low level logical volume management
|
||||
. BR deps
|
||||
. RB [ \-o
|
||||
. IR options ]
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
. ad b
|
||||
..
|
||||
.CMD_DEPS
|
||||
@@ -58,7 +58,7 @@ dmsetup \(em low level logical volume management
|
||||
.B dmsetup
|
||||
.de CMD_INFO
|
||||
. BR info
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
..
|
||||
.CMD_INFO
|
||||
.
|
||||
@@ -92,7 +92,7 @@ dmsetup \(em low level logical volume management
|
||||
. BR load
|
||||
. IR device_name
|
||||
. RB [ \-\-table
|
||||
. IR table | table_file ]
|
||||
. RI { table | table_file }]
|
||||
. ad b
|
||||
..
|
||||
.CMD_LOAD
|
||||
@@ -117,7 +117,7 @@ dmsetup \(em low level logical volume management
|
||||
.B dmsetup
|
||||
.de CMD_MANGLE
|
||||
. BR mangle
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
..
|
||||
.CMD_MANGLE
|
||||
.
|
||||
@@ -135,7 +135,7 @@ dmsetup \(em low level logical volume management
|
||||
.B dmsetup
|
||||
.de CMD_MKNODES
|
||||
. BR mknodes
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
..
|
||||
.CMD_MKNODES
|
||||
.
|
||||
@@ -146,7 +146,7 @@ dmsetup \(em low level logical volume management
|
||||
. BR reload
|
||||
. IR device_name
|
||||
. RB [ \-\-table
|
||||
. IR table | table_file ]
|
||||
. RI { table | table_file }]
|
||||
. ad b
|
||||
..
|
||||
.CMD_RELOAD
|
||||
@@ -159,7 +159,7 @@ dmsetup \(em low level logical volume management
|
||||
. RB [ \-f | \-\-force ]
|
||||
. RB [ \-\-retry ]
|
||||
. RB [ \-\-deferred ]
|
||||
. IR device_name ...
|
||||
. IR device_name
|
||||
. ad b
|
||||
..
|
||||
.CMD_REMOVE
|
||||
@@ -197,12 +197,12 @@ dmsetup \(em low level logical volume management
|
||||
.de CMD_RESUME
|
||||
. ad l
|
||||
. BR resume
|
||||
. IR device_name ...
|
||||
. IR device_name
|
||||
. RB [ \-\-addnodeoncreate | \-\-addnodeonresume ]
|
||||
. RB [ \-\-noflush ]
|
||||
. RB [ \-\-nolockfs ]
|
||||
. RB \%[ \-\-readahead
|
||||
. RB \%[ + ] \fIsectors | auto | none ]
|
||||
. RB \%{[ + ] \fIsectors | auto | none }]
|
||||
. ad b
|
||||
..
|
||||
.CMD_RESUME
|
||||
@@ -247,7 +247,7 @@ dmsetup \(em low level logical volume management
|
||||
. RB [ \-\-target
|
||||
. IR target_type ]
|
||||
. RB [ \-\-noflush ]
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
. ad b
|
||||
..
|
||||
.CMD_STATUS
|
||||
@@ -259,7 +259,7 @@ dmsetup \(em low level logical volume management
|
||||
. BR suspend
|
||||
. RB [ \-\-nolockfs ]
|
||||
. RB [ \-\-noflush ]
|
||||
. IR device_name ...
|
||||
. IR device_name
|
||||
. ad b
|
||||
..
|
||||
.CMD_SUSPEND
|
||||
@@ -272,7 +272,7 @@ dmsetup \(em low level logical volume management
|
||||
. RB [ \-\-target
|
||||
. IR target_type ]
|
||||
. RB [ \-\-showkeys ]
|
||||
. RI [ device_name ...]
|
||||
. RI [ device_name ]
|
||||
. ad b
|
||||
..
|
||||
.CMD_TABLE
|
||||
@@ -354,7 +354,7 @@ dmsetup \(em low level logical volume management
|
||||
.de CMD_WIPE_TABLE
|
||||
. ad l
|
||||
. BR wipe_table
|
||||
. IR device_name ...
|
||||
. IR device_name
|
||||
. RB [ \-f | \-\-force ]
|
||||
. RB [ \-\-noflush ]
|
||||
. RB [ \-\-nolockfs ]
|
||||
@@ -447,7 +447,7 @@ The default interval is one second.
|
||||
.
|
||||
.HP
|
||||
.BR \-\-manglename
|
||||
.BR auto | hex | none
|
||||
.RB { auto | hex | none }
|
||||
.br
|
||||
Mangle any character not on a whitelist using mangling_mode when
|
||||
processing device-mapper device names and UUIDs. The names and UUIDs
|
||||
@@ -529,7 +529,7 @@ Specify which fields to display.
|
||||
.
|
||||
.HP
|
||||
.BR \-\-readahead
|
||||
.RB [ + ] \fIsectors | auto | none
|
||||
.RB {[ + ] \fIsectors | auto | none }
|
||||
.br
|
||||
Specify read ahead size in units of sectors.
|
||||
The default value is \fBauto\fP which allows the kernel to choose
|
||||
@@ -820,10 +820,8 @@ Outputs the current table for the device in a format that can be fed
|
||||
back in using the create or load commands.
|
||||
With \fB\-\-target\fP, only information relating to the specified target type
|
||||
is displayed.
|
||||
Real encryption keys are suppressed in the table output for the crypt
|
||||
target unless the \fB\-\-showkeys\fP parameter is supplied. Kernel key
|
||||
references prefixed with \fB:\fP are not affected by the parameter and get
|
||||
displayed always.
|
||||
Encryption keys are suppressed in the table output for the crypt
|
||||
target unless the \fB\-\-showkeys\fP parameter is supplied.
|
||||
.
|
||||
.HP
|
||||
.CMD_TARGETS
|
||||
|
199
man/dmstats.8.in
199
man/dmstats.8.in
@@ -14,9 +14,6 @@
|
||||
. RB [ \-\-region ]
|
||||
. RB [ \-\-group ]
|
||||
..
|
||||
.de OPT_FOREGROUND
|
||||
. RB [ \-\-foreground ]
|
||||
..
|
||||
.
|
||||
.\" Print units suffix, use with arg to print human
|
||||
.\" man2html can't handle too many changes per command
|
||||
@@ -47,7 +44,7 @@ dmstats \(em device-mapper statistics management
|
||||
.B dmsetup
|
||||
.B stats
|
||||
.I command
|
||||
[OPTIONS]
|
||||
.RB [ options ]
|
||||
.sp
|
||||
.
|
||||
.PD 0
|
||||
@@ -56,13 +53,13 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_COMMAND
|
||||
. ad l
|
||||
. IR command
|
||||
. IR device_name " |"
|
||||
. BR \-\-major
|
||||
. RI [ device_name |
|
||||
. RB [ \-u | \-\-uuid
|
||||
. IR uuid ]
|
||||
. RB | [ \-\-major
|
||||
. IR major
|
||||
. BR \-\-minor
|
||||
. IR minor " |"
|
||||
. BR \-u | \-\-uuid
|
||||
. IR uuid
|
||||
. IR minor ]
|
||||
. RB \%[ \-v | \-\-verbose]
|
||||
. ad b
|
||||
..
|
||||
@@ -85,17 +82,15 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_CREATE
|
||||
. ad l
|
||||
. BR create
|
||||
. IR device_name... | file_path... | \fB\-\-alldevices
|
||||
. RB [ device_name...
|
||||
. RB | file_path...
|
||||
. RB | [ \-\-alldevices ]]
|
||||
. RB [ \-\-areas
|
||||
. IR nr_areas | \fB\-\-areasize
|
||||
. IR area_size ]
|
||||
. RB [ \-\-bounds
|
||||
. IR \%histogram_boundaries ]
|
||||
. RB [ \-\-filemap ]
|
||||
. RB [ \-\-follow
|
||||
. IR follow_mode ]
|
||||
. OPT_FOREGROUND
|
||||
. RB [ \-\-nomonitor ]
|
||||
. RB [ \-\-nogroup ]
|
||||
. RB [ \-\-precise ]
|
||||
. RB [ \-\-start
|
||||
@@ -115,7 +110,8 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_DELETE
|
||||
. ad l
|
||||
. BR delete
|
||||
. IR device_name | \fB\-\-alldevices
|
||||
. RI [ device_name ]
|
||||
. RB [ \-\-alldevices ]
|
||||
. OPT_PROGRAMS
|
||||
. OPT_REGIONS
|
||||
. ad b
|
||||
@@ -127,9 +123,10 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_GROUP
|
||||
. ad l
|
||||
. BR group
|
||||
. RI [ device_name | \fB\-\-alldevices ]
|
||||
. RI [ device_name ]
|
||||
. RB [ \-\-alias
|
||||
. IR name ]
|
||||
. RB [ \-\-alldevices ]
|
||||
. RB [ \-\-regions
|
||||
. IR regions ]
|
||||
. ad b
|
||||
@@ -208,7 +205,8 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_UNGROUP
|
||||
. ad l
|
||||
. BR ungroup
|
||||
. RI [ device_name | \fB\-\-alldevices ]
|
||||
. RI [ device_name ]
|
||||
. RB [ \-\-alldevices ]
|
||||
. RB [ \-\-groupid
|
||||
. IR id ]
|
||||
. ad b
|
||||
@@ -219,12 +217,9 @@ dmstats \(em device-mapper statistics management
|
||||
.de CMD_UPDATE_FILEMAP
|
||||
. ad l
|
||||
. BR update_filemap
|
||||
. IR file_path
|
||||
. RI file_path
|
||||
. RB [ \-\-groupid
|
||||
. IR id ]
|
||||
. RB [ \-\-follow
|
||||
. IR follow_mode ]
|
||||
. OPT_FOREGROUND
|
||||
. ad b
|
||||
..
|
||||
.CMD_UPDATE_FILEMAP
|
||||
@@ -324,60 +319,6 @@ create regions corresponding to the locations of the on-disk extents
|
||||
allocated to the file(s).
|
||||
.
|
||||
.HP
|
||||
.BR \-\-nomonitor
|
||||
.br
|
||||
Disable the \fBdmfilemapd\fP daemon when creating new file mapped
|
||||
groups. Normally the device-mapper filemap monitoring daemon,
|
||||
\fBdmfilemapd\fP, is started for each file mapped group to update the
|
||||
set of regions as the file changes on-disk: use of this option
|
||||
disables this behaviour.
|
||||
|
||||
Regions in the group may still be updated with the
|
||||
\fBupdate_filemap\fP command, or by starting the daemon manually.
|
||||
.
|
||||
.HP
|
||||
.BR \-\-follow
|
||||
.IR follow_mode
|
||||
.br
|
||||
Specify the \fBdmfilemapd\fP file following mode. The file map
|
||||
monitoring daemon can monitor files in two distinct ways: the mode
|
||||
affects the behaviour of the daemon when a file under monitoring is
|
||||
renamed or unlinked, and the conditions which cause the daemon to
|
||||
terminate.
|
||||
|
||||
The \fBfollow_mode\fP argument is either "inode", for follow-inode
|
||||
mode, or "path", for follow-path.
|
||||
|
||||
If follow-inode mode is used, the daemon will hold the file open, and
|
||||
continue to update regions from the same file descriptor. This means
|
||||
that the mapping will follow rename, move (within the same file
|
||||
system), and unlink operations. This mode is useful if the file is
|
||||
expected to be moved, renamed, or unlinked while it is being
|
||||
monitored.
|
||||
|
||||
In follow-inode mode, the daemon will exit once it detects that the
|
||||
file has been unlinked and it is the last holder of a reference to it.
|
||||
|
||||
If follow-path is used, the daemon will re-open the provided path on
|
||||
each monitoring iteration. This means that the group will be updated
|
||||
to reflect a new file being moved to the same path as the original
|
||||
file. This mode is useful for files that are expected to be updated
|
||||
via unlink and rename.
|
||||
|
||||
In follow-path mode, the daemon will exit if the file is removed and
|
||||
not replaced within a brief tolerance interval.
|
||||
|
||||
In either mode, the daemon exits automatically if the monitored group
|
||||
is removed.
|
||||
.
|
||||
.HP
|
||||
.BR \-\-foreground
|
||||
.br
|
||||
Specify that the \fBdmfilemapd\fP daemon should run in the foreground.
|
||||
The daemon will not fork into the background, and will replace the
|
||||
\fBdmstats\fP command that started it.
|
||||
.
|
||||
.HP
|
||||
.BR \-\-groupid
|
||||
.IR id
|
||||
.br
|
||||
@@ -632,11 +573,6 @@ By default regions that map a file are placed into a group and the
|
||||
group alias is set to the basename of the file. This behaviour can be
|
||||
overridden with the \fB\-\-alias\fP and \fB\-\-nogroup\fP options.
|
||||
|
||||
Creating a group that maps a file automatically starts a daemon,
|
||||
\fBdmfilemapd\fP to monitor the file and update the mapping as the
|
||||
extents allocated to the file change. This behaviour can be disabled
|
||||
using the \fB\-\-nomonitor\fP option.
|
||||
|
||||
Use the \fB\-\-group\fP option to only display information for groups
|
||||
when listing and reporting.
|
||||
.
|
||||
@@ -747,23 +683,17 @@ The group to be removed is specified using \fB\-\-groupid\fP.
|
||||
.CMD_UPDATE_FILEMAP
|
||||
.br
|
||||
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP,
|
||||
that were previously created with \fB\-\-filemap\fP, either directly,
|
||||
or by starting the monitoring daemon, \fBdmfilemapd\fP.
|
||||
|
||||
This will add and remove regions to reflect changes in the allocated
|
||||
extents of the file on-disk, since the time that it was created or last
|
||||
updated.
|
||||
that were previously created with \fB\-\-filemap\fP. This will add
|
||||
and remove regions to reflect changes in the allocated extents of
|
||||
the file on-disk, since the time that it was created or last updated.
|
||||
|
||||
Use of this command is not normally needed since the \fBdmfilemapd\fP
|
||||
daemon will automatically monitor filemap groups and perform these
|
||||
updates when required.
|
||||
|
||||
If a filemapped group was created with \fB\-\-nomonitor\fP, or the
|
||||
If a filemapped group was created with \fB\-\-nominitor\fP, or the
|
||||
daemon has been killed, the \fBupdate_filemap\fP can be used to
|
||||
manually force an update or start a new daemon.
|
||||
|
||||
Use \fB\-\-nomonitor\fP to force a direct update and disable starting
|
||||
the monitoring daemon.
|
||||
manually force an update.
|
||||
.
|
||||
.SH REGIONS, AREAS, AND GROUPS
|
||||
.
|
||||
@@ -825,93 +755,6 @@ containing device.
|
||||
The \fBgroup_id\fP should be treated as an opaque identifier used to
|
||||
reference the group.
|
||||
.
|
||||
.SH FILE MAPPING
|
||||
.
|
||||
Using \fB\-\-filemap\fP, it is possible to create regions that
|
||||
correspond to the extents of a file in the file system. This allows
|
||||
IO statistics to be monitored on a per-file basis, for example to
|
||||
observe large database files, virtual machine images, or other files
|
||||
of interest.
|
||||
|
||||
To be able to use file mapping, the file must be backed by a
|
||||
device-mapper device, and in a file system that supports the FIEMAP
|
||||
ioctl (and which returns data describing the physical location of
|
||||
extents). This currently includes \fBxfs(5)\fP and \fBext4(5)\fP.
|
||||
|
||||
By default the regions making up a file are placed together in a
|
||||
group, and the group alias is set to the \fBbasename(3)\fP of the
|
||||
file. This allows statistics to be reported for the file as a whole,
|
||||
aggregating values for the regions making up the group. To see only
|
||||
the whole file (group) when using the \fBlist\fP and \fBreport\fP
|
||||
commands, use \fB\-\-group\fP.
|
||||
|
||||
Since it is possible for the file to change after the initial
|
||||
group of regions is created, the \fBupdate_filemap\fP command, and
|
||||
\fBdmfilemapd\fP daemon are provided to update file mapped groups
|
||||
either manually or automatically.
|
||||
.
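.P
For example, a large file could be mapped and then reported on as a
single group roughly as follows (the file path is illustrative only):
.br
#
.B dmstats create \-\-filemap /var/lib/mysql/ibdata1
.br
#
.B dmstats report \-\-group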
|
||||
.P
|
||||
.B File follow modes
|
||||
.P
|
||||
The file map monitoring daemon can monitor files in two distinct ways:
|
||||
follow-inode mode, and follow-path mode.
|
||||
|
||||
The mode affects the behaviour of the daemon when a file under
|
||||
monitoring is renamed or unlinked, and the conditions which cause the
|
||||
daemon to terminate.
|
||||
|
||||
If follow-inode mode is used, the daemon will hold the file open, and
|
||||
continue to update regions from the same file descriptor. This means
|
||||
that the mapping will follow rename, move (within the same file
|
||||
system), and unlink operations. This mode is useful if the file is
|
||||
expected to be moved, renamed, or unlinked while it is being
|
||||
monitored.
|
||||
|
||||
In follow-inode mode, the daemon will exit once it detects that the
|
||||
file has been unlinked and it is the last holder of a reference to it.
|
||||
|
||||
If follow-path is used, the daemon will re-open the provided path on
|
||||
each monitoring iteration. This means that the group will be updated
|
||||
to reflect a new file being moved to the same path as the original
|
||||
file. This mode is useful for files that are expected to be updated
|
||||
via unlink and rename.
|
||||
|
||||
In follow-path mode, the daemon will exit if the file is removed and
|
||||
not replaced within a brief tolerance interval (one second).
|
||||
|
||||
To stop the daemon, delete the group containing the mapped regions:
|
||||
the daemon will automatically shut down.
|
||||
|
||||
The daemon can also be safely killed at any time and the group kept:
|
||||
if the file is still being allocated the mapping will become
|
||||
progressively out-of-date as extents are added and removed (in this
|
||||
case the daemon can be re-started or the group updated manually with
|
||||
the \fBupdate_filemap\fP command).
|
||||
|
||||
See the \fBcreate\fP command and \fB\-\-filemap\fP, \fB\-\-follow\fP,
|
||||
and \fB\-\-nomonitor\fP options for further information.
|
||||
.
|
||||
.P
|
||||
.B Limitations
|
||||
.P
|
||||
The daemon attempts to maintain good synchronisation between the file
|
||||
extents and the regions contained in the group, however, since it can
|
||||
only react to new allocations once they have been written, there are
|
||||
inevitably some IO events that cannot be counted when a file is
|
||||
growing, particularly if the file is being extended by a single thread
|
||||
writing beyond end-of-file (for example, the \fBdd\fP program).
|
||||
|
||||
There is a further loss of events in that there is currently no way
|
||||
to atomically resize a \fBdmstats\fP region and preserve its current
|
||||
counter values. This affects files when they grow by extending the
|
||||
final extent, rather than allocating a new extent: any events that
|
||||
had accumulated in the region between any prior operation and the
|
||||
resize are lost.
|
||||
|
||||
File mapping is currently most effective in cases where the majority
|
||||
of IO does not trigger extent allocation. Future updates may address
|
||||
these limitations when kernel support is available.
|
||||
.
|
||||
.SH REPORT FIELDS
|
||||
.
|
||||
The dmstats report provides several types of field that may be added to
|
||||
|
@@ -1,4 +1,4 @@
|
||||
.SH EXAMPLES
|
||||
.EXAMPLES
|
||||
|
||||
Change LV permission to read-only:
|
||||
.sp
|
||||
|
@@ -7,6 +7,21 @@ To display the current LV type, run the command:
|
||||
.B lvs \-o name,segtype
|
||||
.I LV
|
||||
|
||||
A command to change the LV type uses the general pattern:
|
||||
|
||||
.B lvconvert \-\-type
|
||||
.I NewType LV
|
||||
|
||||
LVs with the following types can be modified by lvconvert:
|
||||
.B striped,
|
||||
.B snapshot,
|
||||
.B mirror,
|
||||
.B raid*,
|
||||
.B thin,
|
||||
.B cache,
|
||||
.B thin\-pool,
|
||||
.B cache\-pool.
|
||||
|
||||
The
|
||||
.B linear
|
||||
type is equivalent to the
|
||||
@@ -20,6 +35,12 @@ type is deprecated and the
|
||||
.B raid1
|
||||
type should be used. They are both implementations of mirroring.
|
||||
|
||||
The
|
||||
.B raid*
|
||||
type refers to one of many raid levels, e.g.
|
||||
.B raid1,
|
||||
.B raid5.
|
||||
|
||||
In some cases, an LV is a single device mapper (dm) layer above physical
|
||||
devices. In other cases, hidden LVs (dm devices) are layered between the
|
||||
visible LV and physical devices. LVs in the middle layers are called sub LVs.
|
||||
@@ -27,39 +48,6 @@ A command run on a visible LV sometimes operates on a sub LV rather than
|
||||
the specified LV. In other cases, a sub LV must be specified directly on
|
||||
the command line.
|
||||
|
||||
Striped raid types are
|
||||
.B raid0/raid0_meta
|
||||
,
|
||||
.B raid5
|
||||
(an alias for raid5_ls),
|
||||
.B raid6
|
||||
(an alias for raid6_zr) and
|
||||
.B raid10
|
||||
(an alias for raid10_near).
|
||||
|
||||
As opposed to mirroring, raid5 and raid6 stripe data and calculate parity
|
||||
blocks. The parity blocks can be used for data block recovery in case devices
|
||||
fail. At most one device in a raid5 LV may fail, and two in the case
|
||||
of raid6. Striped raid types typically rotate the parity blocks for performance
|
||||
reasons thus avoiding contention on a single device. Layouts of raid5 rotating
|
||||
parity blocks can be one of left-asymmetric (raid5_la), left-symmetric (raid5_ls
|
||||
with alias raid5), right-asymmetric (raid5_ra), right-symmetric (raid5_rs) and raid5_n,
|
||||
which doesn't rotate parity blocks. Any \"_n\" layouts allow for conversion between
|
||||
raid levels (raid5_n -> raid6 or raid5_n -> striped/raid0/raid0_meta).
|
||||
raid6 layouts are zero-restart (raid6_zr with alias raid6), next-restart (raid6_nr),
|
||||
next-continue (raid6_nc). Additionally, special raid6 layouts for raid level conversions
|
||||
between raid5 and raid6 are raid6_ls_6, raid6_rs_6, raid6_la_6 and raid6_ra_6. Those
|
||||
correspond to their raid5 counterparts (e.g. raid5_rs can be directly converted to raid6_rs_6
|
||||
and vice-versa).
|
||||
raid10 (an alias for raid10_near) is currently limited to one data copy and an even number of
|
||||
sub LVs. This is a mirror group layout, thus a single sub LV may fail per mirror group
|
||||
without data loss.
|
||||
Striped raid types support converting the layout, their stripesize
|
||||
and their number of stripes.
|
||||
|
||||
The striped raid types combined with raid1 allow for conversion from linear -> striped/raid0/raid0_meta
|
||||
and vice-versa by e.g. linear <-> raid1 <-> raid5_n (then adding stripes) <-> striped/raid0/raid0_meta.
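.P
A sketch of such a chain, converting a linear LV to a 4-stripe striped
LV, might look like the following (the LV name and stripe count are
illustrative, and each step may need to finish synchronizing before the
next one is accepted):
.br
.B lvconvert \-\-type raid1 \-\-mirrors 1 vg/lvol1
.br
.B lvconvert \-\-type raid5_n vg/lvol1
.br
.B lvconvert \-\-stripes 4 vg/lvol1
.br
.B lvconvert \-\-type striped vg/lvol1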
|
||||
|
||||
Sub LVs can be displayed with the command
|
||||
.B lvs -a
|
||||
|
||||
|
@@ -1,85 +1,64 @@
|
||||
.SH NOTES
|
||||
|
||||
This previous command syntax would perform two different operations:
|
||||
.br
|
||||
\fBlvconvert --thinpool\fP \fILV1\fP \fB--poolmetadata\fP \fILV2\fP
|
||||
.br
|
||||
If LV1 was not a thin pool, the command would convert LV1 to
|
||||
a thin pool, optionally using a specified LV for metadata.
|
||||
But, if LV1 was already a thin pool, the command would swap
|
||||
the current metadata LV with LV2 (for repair purposes.)
|
||||
|
||||
In the same way, this previous command syntax would perform two different
|
||||
operations:
|
||||
.br
|
||||
\fBlvconvert --cachepool\fP \fILV1\fP \fB--poolmetadata\fP \fILV2\fP
|
||||
.br
|
||||
If LV1 was not a cache pool, the command would convert LV1 to
|
||||
a cache pool, optionally using a specified LV for metadata.
|
||||
But, if LV1 was already a cache pool, the command would swap
|
||||
the current metadata LV with LV2 (for repair purposes.)
|
||||
|
||||
.SH EXAMPLES
|
||||
|
||||
Convert a linear LV to a two-way mirror LV.
|
||||
Convert a linear LV to a two-way mirror LV:
|
||||
.br
|
||||
.B lvconvert \-\-type mirror \-\-mirrors 1 vg/lvol1
|
||||
|
||||
Convert a linear LV to a two-way RAID1 LV.
|
||||
Convert a linear LV to a two-way RAID1 LV:
|
||||
.br
|
||||
.B lvconvert \-\-type raid1 \-\-mirrors 1 vg/lvol1
|
||||
|
||||
Convert a mirror LV to use an in\-memory log.
|
||||
Convert a mirror LV to use an in\-memory log:
|
||||
.br
|
||||
.B lvconvert \-\-mirrorlog core vg/lvol1
|
||||
|
||||
Convert a mirror LV to use a disk log.
|
||||
Convert a mirror LV to use a disk log:
|
||||
.br
|
||||
.B lvconvert \-\-mirrorlog disk vg/lvol1
|
||||
|
||||
Convert a mirror or raid1 LV to a linear LV.
|
||||
Convert a mirror or raid1 LV to a linear LV:
|
||||
.br
|
||||
.B lvconvert --type linear vg/lvol1
|
||||
|
||||
Convert a mirror LV to a raid1 LV with the same number of images.
|
||||
Convert a mirror LV to a raid1 LV with the same number of images:
|
||||
.br
|
||||
.B lvconvert \-\-type raid1 vg/lvol1
|
||||
|
||||
Convert a linear LV to a two-way mirror LV, allocating new extents from specific
|
||||
PV ranges.
|
||||
PV ranges:
|
||||
.br
|
||||
.B lvconvert \-\-mirrors 1 vg/lvol1 /dev/sda:0\-15 /dev/sdb:0\-15
|
||||
|
||||
Convert a mirror LV to a linear LV, freeing physical extents from a specific PV.
|
||||
Convert a mirror LV to a linear LV, freeing physical extents from a specific PV:
|
||||
.br
|
||||
.B lvconvert \-\-type linear vg/lvol1 /dev/sda
|
||||
|
||||
Split one image from a mirror or raid1 LV, making it a new LV.
|
||||
Split one image from a mirror or raid1 LV, making it a new LV:
|
||||
.br
|
||||
.B lvconvert \-\-splitmirrors 1 \-\-name lv_split vg/lvol1
|
||||
|
||||
Split one image from a raid1 LV, and track changes made to the raid1 LV
|
||||
while the split image remains detached.
|
||||
while the split image remains detached:
|
||||
.br
|
||||
.B lvconvert \-\-splitmirrors 1 \-\-trackchanges vg/lvol1
|
||||
|
||||
Merge an image (that was previously created with \-\-splitmirrors and
|
||||
\-\-trackchanges) back into the original raid1 LV.
|
||||
\-\-trackchanges) back into the original raid1 LV:
|
||||
.br
|
||||
.B lvconvert \-\-mergemirrors vg/lvol1_rimage_1
|
||||
|
||||
Replace PV /dev/sdb1 with PV /dev/sdf1 in a raid1/4/5/6/10 LV.
|
||||
Replace PV /dev/sdb1 with PV /dev/sdf1 in a raid1/4/5/6/10 LV:
|
||||
.br
|
||||
.B lvconvert \-\-replace /dev/sdb1 vg/lvol1 /dev/sdf1
|
||||
|
||||
Replace 3 PVs /dev/sd[b-d]1 with PVs /dev/sd[f-h]1 in a raid1 LV.
|
||||
Replace 3 PVs /dev/sd[b-d]1 with PVs /dev/sd[f-h]1 in a raid1 LV:
|
||||
.br
|
||||
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 \-\-replace /dev/sdd1
|
||||
.RS
|
||||
.B vg/lvol1 /dev/sd[fgh]1
|
||||
.RE
|
||||
|
||||
Replace the maximum of 2 PVs /dev/sd[bc]1 with PVs /dev/sd[gh]1 in a raid6 LV.
|
||||
Replace the maximum of 2 PVs /dev/sd[bc]1 with PVs /dev/sd[gh]1 in a raid6 LV:
|
||||
.br
|
||||
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
|
||||
|
||||
@@ -90,7 +69,7 @@ is used as an external read\-only origin for the new thin LV.
|
||||
|
||||
Convert an LV into a thin LV in the specified thin pool. The existing LV
|
||||
is used as an external read\-only origin for the new thin LV, and is
|
||||
renamed "external".
|
||||
renamed "external":
|
||||
.br
|
||||
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1
|
||||
.RS
|
||||
@@ -98,19 +77,19 @@ renamed "external".
|
||||
.RE
|
||||
|
||||
Convert an LV to a cache pool LV using another specified LV for cache pool
|
||||
metadata.
|
||||
metadata:
|
||||
.br
|
||||
.B lvconvert \-\-type cache-pool \-\-poolmetadata vg/poolmeta1 vg/lvol1
|
||||
|
||||
Convert an LV to a cache LV using the specified cache pool and chunk size.
|
||||
Convert an LV to a cache LV using the specified cache pool and chunk size:
|
||||
.br
|
||||
.B lvconvert \-\-type cache \-\-cachepool vg/cpool1 \-c 128 vg/lvol1
|
||||
|
||||
Detach and keep the cache pool from a cache LV.
|
||||
Detach and keep the cache pool from a cache LV:
|
||||
.br
|
||||
.B lvconvert \-\-splitcache vg/lvol1
|
||||
|
||||
Detach and remove the cache pool from a cache LV.
|
||||
Detach and remove the cache pool from a cache LV:
|
||||
.br
|
||||
.B lvconvert \-\-uncache vg/lvol1
|
||||
|
||||
|
@@ -26,14 +26,3 @@ virtual size rather than a physical size. A cache LV is the combination of
|
||||
a standard LV with a cache pool, used to cache active portions of the LV
|
||||
to improve performance.
|
||||
|
||||
.SS Usage notes
|
||||
|
||||
In the usage section below, \fB--size\fP \fISize\fP can be replaced
|
||||
with \fB--extents\fP \fINumber\fP. See both descriptions
|
||||
in the options section.
|
||||
|
||||
In the usage section below, \fB--name\fP is omitted from the required
|
||||
options, even though it is typically used. When the name is not
|
||||
specified, a new LV name is generated with the "lvol" prefix and a unique
|
||||
numeric suffix. Also see the description in the options section.
|
||||
|
||||
|
@@ -1,12 +1,5 @@
|
||||
lvextend extends the size of an LV. This requires allocating logical
|
||||
extents from the VG's free physical extents. If the extension adds a new
|
||||
LV segment, the new segment will use the existing segment type of the LV.
|
||||
|
||||
Extending a copy\-on\-write snapshot LV adds space for COW blocks.
|
||||
|
||||
Use \fBlvconvert\fP(8) to change the number of data images in a RAID or
|
||||
extents from the VG's free physical extents. A copy\-on\-write snapshot LV
|
||||
can also be extended to provide more space to hold COW blocks. Use
|
||||
\fBlvconvert\fP(8) to change the number of data images in a RAID or
|
||||
mirrored LV.
|
||||
|
||||
In the usage section below, \fB--size\fP \fISize\fP can be replaced
|
||||
with \fB--extents\fP \fINumber\fP. See both descriptions
|
||||
in the options section.
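.P
For example, a hypothetical LV could be grown by 10 GiB with:
.br
.B lvextend \-L +10g vg00/lvol1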
|
||||
|
@@ -1,5 +0,0 @@
|
||||
This command is the same as \fBlvmconfig\fP(8).
|
||||
|
||||
lvm config produces formatted output from the LVM configuration tree. The
|
||||
sources of the configuration data include \fBlvm.conf\fP(5) and command
|
||||
line settings from \-\-config.
|
@@ -1,5 +0,0 @@
|
||||
This command is the same as \fBlvmconfig\fP(8).
|
||||
|
||||
lvm dumpconfig produces formatted output from the LVM configuration tree. The
|
||||
sources of the configuration data include \fBlvm.conf\fP(5) and command
|
||||
line settings from \-\-config.
|
@@ -1,5 +1,6 @@
|
||||
.SH NOTES
|
||||
|
||||
.IP \[bu] 3
|
||||
To find the name of the pvmove LV that was created by an original
|
||||
\fBpvmove /dev/name\fP command, use the command:
|
||||
.br
|
||||
@@ -7,27 +8,27 @@ To find the name of the pvmove LV that was created by an original
|
||||
|
||||
.SH EXAMPLES
|
||||
|
||||
Continue polling a pvmove operation.
|
||||
Continue polling a pvmove operation:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation pvmove vg00/pvmove0
|
||||
|
||||
Abort a pvmove operation.
|
||||
Abort a pvmove operation:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation pvmove --abort vg00/pvmove0
|
||||
|
||||
Continue polling a mirror conversion.
|
||||
Continue polling a mirror conversion:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation convert vg00/lvmirror
|
||||
|
||||
Continue mirror repair.
|
||||
Continue mirror repair:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation convert vg/damaged_mirror --handlemissingpvs
|
||||
|
||||
Continue snapshot merge.
|
||||
Continue snapshot merge:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation merge vg/snapshot_old
|
||||
|
||||
Continue thin snapshot merge.
|
||||
Continue thin snapshot merge:
|
||||
.br
|
||||
.B lvm lvpoll --polloperation merge_thin vg/thin_snapshot
|
||||
|
||||
|
108
man/lvm.8.in
108
man/lvm.8.in
@@ -484,70 +484,48 @@ directly.
|
||||
.SH SEE ALSO
|
||||
.
|
||||
.nh
|
||||
.BR lvm (8)
|
||||
.BR lvm.conf (5)
|
||||
.BR lvmconfig (8)
|
||||
|
||||
.BR pvchange (8)
|
||||
.BR pvck (8)
|
||||
.BR pvcreate (8)
|
||||
.BR pvdisplay (8)
|
||||
.BR pvmove (8)
|
||||
.BR pvremove (8)
|
||||
.BR pvresize (8)
|
||||
.BR pvs (8)
|
||||
.BR pvscan (8)
|
||||
|
||||
.BR vgcfgbackup (8)
|
||||
.BR vgcfgrestore (8)
|
||||
.BR vgchange (8)
|
||||
.BR vgck (8)
|
||||
.BR vgcreate (8)
|
||||
.BR vgconvert (8)
|
||||
.BR vgdisplay (8)
|
||||
.BR vgexport (8)
|
||||
.BR vgextend (8)
|
||||
.BR vgimport (8)
|
||||
.BR vgimportclone (8)
|
||||
.BR vgmerge (8)
|
||||
.BR vgmknodes (8)
|
||||
.BR vgreduce (8)
|
||||
.BR vgremove (8)
|
||||
.BR vgrename (8)
|
||||
.BR vgs (8)
|
||||
.BR vgscan (8)
|
||||
.BR vgsplit (8)
|
||||
|
||||
.BR lvcreate (8)
|
||||
.BR lvchange (8)
|
||||
.BR lvconvert (8)
|
||||
.BR lvdisplay (8)
|
||||
.BR lvextend (8)
|
||||
.BR lvreduce (8)
|
||||
.BR lvremove (8)
|
||||
.BR lvrename (8)
|
||||
.BR lvresize (8)
|
||||
.BR lvs (8)
|
||||
.BR lvscan (8)
|
||||
|
||||
.BR lvm2-activation-generator (8)
|
||||
.BR blkdeactivate (8)
|
||||
.BR lvmdump (8)
|
||||
|
||||
.BR dmeventd (8)
|
||||
.BR lvmetad (8)
|
||||
.BR lvmpolld (8)
|
||||
.BR lvmlockd (8)
|
||||
.BR lvmlockctl (8)
|
||||
.BR clvmd (8)
|
||||
.BR cmirrord (8)
|
||||
.BR lvmdbusd (8)
|
||||
|
||||
.BR lvmsystemid (7)
|
||||
.BR lvmreport (7)
|
||||
.BR lvmraid (7)
|
||||
.BR lvmthin (7)
|
||||
.BR lvmcache (7)
|
||||
|
||||
.BR lvm.conf (5),
|
||||
.BR lvmcache (7),
|
||||
.BR lvmreport (7),
|
||||
.BR lvmthin (7),
|
||||
.BR clvmd (8),
|
||||
.BR dmsetup (8),
|
||||
.BR lvchange (8),
|
||||
.BR lvcreate (8),
|
||||
.BR lvdisplay (8),
|
||||
.BR lvextend (8),
|
||||
.BR lvmchange (8),
|
||||
.BR lvmconfig (8),
|
||||
.BR lvmdiskscan (8),
|
||||
.BR lvreduce (8),
|
||||
.BR lvremove (8),
|
||||
.BR lvrename (8),
|
||||
.BR lvresize (8),
|
||||
.BR lvs (8),
|
||||
.BR lvscan (8),
|
||||
.BR pvchange (8),
|
||||
.BR pvck (8),
|
||||
.BR pvcreate (8),
|
||||
.BR pvdisplay (8),
|
||||
.BR pvmove (8),
|
||||
.BR pvremove (8),
|
||||
.BR pvs (8),
|
||||
.BR pvscan (8),
|
||||
.BR vgcfgbackup (8),
|
||||
.BR vgchange (8),
|
||||
.BR vgck (8),
|
||||
.BR vgconvert (8),
|
||||
.BR vgcreate (8),
|
||||
.BR vgdisplay (8),
|
||||
.BR vgextend (8),
|
||||
.BR vgimport (8),
|
||||
.BR vgimportclone (8),
|
||||
.BR vgmerge (8),
|
||||
.BR vgmknodes (8),
|
||||
.BR vgreduce (8),
|
||||
.BR vgremove (8),
|
||||
.BR vgrename (8),
|
||||
.BR vgs (8),
|
||||
.BR vgscan (8),
|
||||
.BR vgsplit (8),
|
||||
.BR readline (3)
|
||||
|
@@ -19,11 +19,6 @@ LVM RAID uses both Device Mapper (DM) and Multiple Device (MD) drivers
|
||||
from the Linux kernel. DM is used to create and manage visible LVM
|
||||
devices, and MD is used to place data on physical devices.
|
||||
|
||||
LVM creates hidden LVs (dm devices) layered between the visible LV and
|
||||
physical devices. LVs in those middle layers are called sub LVs.
|
||||
For LVM raid, a sub LV pair to store data and metadata (raid superblock
|
||||
and bitmap) is created per raid image/leg (see lvs command examples below).
|
||||
|
||||
.SH Create a RAID LV
|
||||
|
||||
To create a RAID LV, use lvcreate and specify an LV type.
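.P
For example, a hypothetical two-image raid1 LV of one gigabyte could be
created with:
.nf
# lvcreate --type raid1 --mirrors 1 -L 1g -n my_lv vg00
.fi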
|
||||
@@ -82,7 +77,7 @@ data that is written to one device before moving to the next.
|
||||
|
||||
Also called mirroring, raid1 uses multiple devices to duplicate LV data.
|
||||
The LV data remains available if all but one of the devices fail.
|
||||
The minimum number of devices (i.e. sub LV pairs) required is 2.
|
||||
The minimum number of devices required is 2.
|
||||
|
||||
.B lvcreate \-\-type raid1
|
||||
[\fB\-\-mirrors\fP \fINumber\fP]
|
||||
@@ -103,8 +98,8 @@ original and one mirror image.
|
||||
|
||||
\&
|
||||
|
||||
raid4 is a form of striping that uses an extra, first device dedicated to
|
||||
storing parity blocks. The LV data remains available if one device fails. The
|
||||
raid4 is a form of striping that uses an extra device dedicated to storing
|
||||
parity blocks. The LV data remains available if one device fails. The
|
||||
parity is used to recalculate data that is lost from a single device. The
|
||||
minimum number of devices required is 3.
|
||||
|
||||
@@ -136,10 +131,10 @@ stored on the same device.
|
||||
\&
|
||||
|
||||
raid5 is a form of striping that uses an extra device for storing parity
|
||||
blocks. LV data and parity blocks are stored on each device, typically in
|
||||
a rotating pattern for performance reasons. The LV data remains available
|
||||
if one device fails. The parity is used to recalculate data that is lost
|
||||
from a single device. The minimum number of devices required is 3.
|
||||
blocks. LV data and parity blocks are stored on each device. The LV data
|
||||
remains available if one device fails. The parity is used to recalculate
|
||||
data that is lost from a single device. The minimum number of devices
|
||||
required is 3.
|
||||
|
||||
.B lvcreate \-\-type raid5
|
||||
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
|
||||
@@ -172,8 +167,7 @@ parity 0 with data restart.) See \fBRAID5 variants\fP below.
|
||||
\&
|
||||
|
||||
raid6 is a form of striping like raid5, but uses two extra devices for
|
||||
parity blocks. LV data and parity blocks are stored on each device, typically
|
||||
in a rotating pattern for performance reasons. The
|
||||
parity blocks. LV data and parity blocks are stored on each device. The
|
||||
LV data remains available if up to two devices fail. The parity is used
|
||||
to recalculate data that is lost from one or two devices. The minimum
|
||||
number of devices required is 5.
|
||||
@@ -925,6 +919,7 @@ Convert the linear LV to raid1 with three images
|
||||
# lvconvert --type raid1 --mirrors 2 vg/my_lv
|
||||
.fi
|
||||
|
||||
.ig
|
||||
4. Converting an LV from \fBstriped\fP (with 4 stripes) to \fBraid6_nc\fP.
|
||||
|
||||
.nf
|
||||
@@ -932,9 +927,9 @@ Start with a striped LV:
|
||||
|
||||
# lvcreate --stripes 4 -L64M -n my_lv vg
|
||||
|
||||
Convert the striped LV to raid6_n_6:
|
||||
Convert the striped LV to raid6_nc:
|
||||
|
||||
# lvconvert --type raid6 vg/my_lv
|
||||
# lvconvert --type raid6_nc vg/my_lv
|
||||
|
||||
# lvs -a -o lv_name,segtype,sync_percent,data_copies
|
||||
LV Type Cpy%Sync #Cpy
|
||||
@@ -959,12 +954,14 @@ existing stripe devices. It then creates 2 additional MetaLV/DataLV pairs
|
||||
|
||||
If rotating data/parity is required, such as with raid6_nr, it must be
|
||||
done by reshaping (see below).
|
||||
..
|
||||
|
||||
|
||||
.SH RAID Reshaping
|
||||
|
||||
RAID reshaping is changing attributes of a RAID LV while keeping the same
|
||||
RAID level. This includes changing RAID layout, stripe size, or number of
|
||||
RAID level, i.e. changes that do not involve changing the number of
|
||||
devices. This includes changing RAID layout, stripe size, or number of
|
||||
stripes.
|
||||
|
||||
When changing the RAID layout or stripe size, no new SubLVs (MetaLVs or
|
||||
@@ -978,12 +975,15 @@ partially updated and corrupted. Instead, an existing stripe is quiesced,
|
||||
read, changed in layout, and the new stripe written to free space. Once
|
||||
that is done, the new stripe is unquiesced and used.)
|
||||
|
||||
(The reshaping features are planned for a future release.)
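.P
When the reshaping features become available, a reshape is expected to
be requested with lvconvert on the existing LV, for example (a sketch
only; the exact options accepted depend on the release):
.nf
# lvconvert --stripesize 128k vg/my_lv
# lvconvert --stripes 5 vg/my_lv
.fi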
|
||||
|
||||
.ig
|
||||
.SS Examples
|
||||
|
||||
1. Converting raid6_n_6 to raid6_nr with rotating data/parity.
|
||||
|
||||
This conversion naturally follows a previous conversion from striped/raid0
|
||||
to raid6_n_6 (shown above). It completes the transition to a more
|
||||
This conversion naturally follows a previous conversion from striped to
|
||||
raid6_n_6 (shown above). It completes the transition to a more
|
||||
traditional RAID6.
|
||||
|
||||
.nf
|
||||
@@ -1029,13 +1029,15 @@ traditional RAID6.
|
||||
The DataLVs are larger (additional segment in each) which provides space
|
||||
for out-of-place reshaping. The result is:
|
||||
|
||||
FIXME: did the lv name change from my_lv to r?
|
||||
.br
|
||||
FIXME: should we change device names in the example to sda,sdb,sdc?
|
||||
.br
|
||||
FIXME: include -o devices or seg_pe_ranges above also?
|
||||
|
||||
.nf
|
||||
# lvs -a -o lv_name,segtype,seg_pe_ranges,dataoffset
|
||||
LV Type PE Ranges Doff
|
||||
LV Type PE Ranges data
|
||||
r raid6_nr r_rimage_0:0-32 \\
|
||||
r_rimage_1:0-32 \\
|
||||
r_rimage_2:0-32 \\
|
||||
@@ -1091,15 +1093,19 @@ RAID5 right asymmetric
|
||||
\[bu]
|
||||
Rotating parity 0 with data continuation
|
||||
|
||||
.ig
|
||||
raid5_n
|
||||
.br
|
||||
\[bu]
|
||||
RAID5 parity n
|
||||
RAID5 striping
|
||||
.br
|
||||
\[bu]
|
||||
Dedicated parity device n used for striped/raid0 conversions
|
||||
Same layout as raid4 with a dedicated parity N with striped data.
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
..
|
||||
|
||||
.SH RAID6 Variants
|
||||
|
||||
@@ -1138,24 +1144,7 @@ RAID6 N continue
|
||||
\[bu]
|
||||
Rotating parity N with data continuation
|
||||
|
||||
raid6_n_6
|
||||
.br
|
||||
\[bu]
|
||||
RAID6 last parity devices
|
||||
.br
|
||||
\[bu]
|
||||
Dedicated last parity devices used for striped/raid0 conversions
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
|
||||
raid6_{ls,rs,la,ra}_6
|
||||
.br
|
||||
\[bu]
|
||||
RAID6 last parity device
|
||||
.br
|
||||
\[bu]
|
||||
Dedicated last parity device used for conversions from/to raid5_{ls,rs,la,ra}
|
||||
|
||||
.ig
|
||||
raid6_n_6
|
||||
.br
|
||||
\[bu]
|
||||
@@ -1165,7 +1154,8 @@ RAID6 N continue
|
||||
Fixed P-Syndrome N-1 and Q-Syndrome N with striped data
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
|
||||
raid6_ls_6
|
||||
.br
|
||||
@@ -1176,7 +1166,8 @@ RAID6 N continue
|
||||
Same as raid5_ls for N-1 disks with fixed Q-Syndrome N
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
|
||||
raid6_la_6
|
||||
.br
|
||||
@@ -1187,7 +1178,8 @@ RAID6 N continue
|
||||
Same as raid5_la for N-1 disks with fixed Q-Syndrome N
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
|
||||
raid6_rs_6
|
||||
.br
|
||||
@@ -1198,7 +1190,8 @@ RAID6 N continue
|
||||
Same as raid5_rs for N-1 disks with fixed Q-Syndrome N
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
|
||||
raid6_ra_6
|
||||
.br
|
||||
@@ -1209,7 +1202,9 @@ RAID6 N continue
|
||||
Same as raid5_ra for N-1 disks with fixed Q-Syndrome N
|
||||
.br
|
||||
\[bu]
|
||||
Used for RAID Takeover
|
||||
Used for
|
||||
.B RAID Takeover
|
||||
..
|
||||
|
||||
|
||||
.ig
|
||||
|
@@ -1,3 +0,0 @@
|
||||
lvmsadc is not currently supported in LVM. The device-mapper statistics
|
||||
facility provides similar performance metrics using the \fBdmstats(8)\fP
|
||||
command.
|
@@ -1,3 +0,0 @@
|
||||
lvmsar is not currently supported in LVM. The device-mapper statistics
|
||||
facility provides similar performance metrics using the \fBdmstats(8)\fP
|
||||
command.
|
@@ -278,6 +278,22 @@ or vgchange to activate thin snapshots with the "k" attribute.
|
||||
|
||||
\&
|
||||
|
||||
.SS Alternate syntax for specifying type thin\-pool
|
||||
|
||||
\&
|
||||
|
||||
The fully specified syntax for creating a thin pool LV shown above is:
|
||||
|
||||
.B lvconvert \-\-type thin-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
|
||||
|
||||
An alternate syntax may be used for the same operation:
|
||||
|
||||
.B lvconvert \-\-thinpool VG/ThinDataLV \-\-poolmetadata VG/ThinMetaLV
|
||||
|
||||
The thin-pool type is inferred by lvm; the \-\-thinpool option is not an
|
||||
alias for \-\-type thin\-pool.
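.P
For example, with a hypothetical data LV and metadata LV the fully
specified form becomes:

.B lvconvert \-\-type thin-pool \-\-poolmetadata vg00/meta0 vg00/data0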
|
||||
|
||||
|
||||
.SS Automatic pool metadata LV
|
||||
|
||||
\&
|
||||
|
@@ -12,8 +12,3 @@ system.
|
||||
Sizes will be rounded if necessary. For example, the LV size must be an
|
||||
exact number of extents, and the size of a striped segment must be a
|
||||
multiple of the number of stripes.
|
||||
|
||||
In the usage section below, \fB--size\fP \fISize\fP can be replaced
|
||||
with \fB--extents\fP \fINumber\fP. See both descriptions
|
||||
in the options section.
|
||||
|
||||
|
@@ -6,11 +6,6 @@ removal. LVs cannot be deactivated or removed while they are open (e.g.
|
||||
if they contain a mounted filesystem). Removing an origin LV will also
|
||||
remove all dependent snapshots.
|
||||
|
||||
When a single force option is used, LVs are removed without confirmation,
|
||||
and the command will try to deactivate unused LVs.
|
||||
|
||||
To remove damaged LVs, two force options may be required (\fB-ff\fP).
|
||||
|
||||
\fBHistorical LVs\fP
|
||||
|
||||
If the configuration setting \fBmetadata/record_lvs_history\fP is enabled
|
||||
|
@@ -1,7 +1,2 @@
|
||||
lvresize resizes an LV in the same way as lvextend and lvreduce. See
|
||||
\fBlvextend\fP(8) and \fBlvreduce\fP(8) for more information.
|
||||
|
||||
In the usage section below, \fB--size\fP \fISize\fP can be replaced
|
||||
with \fB--extents\fP \fINumber\fP. See both descriptions
|
||||
in the options section.
|
||||
|
||||
|
@@ -56,7 +56,6 @@ Inconsistencies are detected by initiating a "check" on a RAID logical volume.
|
||||
(The scrubbing operations, "check" and "repair", can be performed on a RAID
|
||||
logical volume via the 'lvchange' command.) (w)ritemostly signifies the
|
||||
devices in a RAID 1 logical volume that have been marked write-mostly.
|
||||
(R)emove after reshape signifies freed striped raid images to be removed.
|
||||
.IP
|
||||
Related to Thin pool Logical Volumes: (F)ailed, out of (D)ata space,
|
||||
(M)etadata read only.
|
||||
|
@@ -16,6 +16,3 @@ data on that disk. This can be done by zeroing the first sector with:
|
||||
Use \fBvgcreate\fP(8) to create a new VG on the PV, or \fBvgextend\fP(8)
|
||||
to add the PV to an existing VG.
|
||||
|
||||
The force option will create a PV without confirmation. Repeating the
|
||||
force option (\fB-ff\fP) will forcibly create a PV, overriding checks that
|
||||
normally prevent it, e.g. if the PV is already in a VG.
|
||||
|
@@ -1,16 +0,0 @@
|
||||
pvmove moves the allocated physical extents (PEs) on a source PV to one or
|
||||
more destination PVs. You can optionally specify a source LV in which
|
||||
case only extents used by that LV will be moved to free (or specified)
|
||||
extents on the destination PV. If no destination PV is specified, the
|
||||
normal allocation rules for the VG are used.
|
||||
|
||||
If pvmove is interrupted for any reason (e.g. the machine crashes) then
|
||||
run pvmove again without any PV arguments to restart any operations that
|
||||
were in progress from the last checkpoint. Alternatively, use the abort
|
||||
option at any time to abort the operation. The resulting location of LVs
|
||||
after an abort depends on whether the atomic option was used.
|
||||
|
||||
More than one pvmove can run concurrently if they are moving data from
|
||||
different source PVs, but additional pvmoves will ignore any LVs already
|
||||
in the process of being changed, so some data might not get moved.
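.P
For example (the device names are illustrative only):
.br
.B pvmove /dev/sdb1 /dev/sdc1
.br
.B pvmove \-\-abort
.br
The first command moves all allocated extents off /dev/sdb1 to
/dev/sdc1; the second aborts any pvmove operations in progress.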
|
||||
|
@@ -1,6 +1,6 @@
|
||||
.SH NOTES
|
||||
|
||||
pvmove works as follows:
|
||||
.
|
||||
\fBpvmove\fP works as follows:
|
||||
|
||||
1. A temporary 'pvmove' LV is created to store details of all the data
|
||||
movements required.
|
||||
@@ -9,7 +9,7 @@ movements required.
|
||||
according to the command line arguments.
|
||||
For each piece of data found, a new segment is added to the end of the
|
||||
pvmove LV.
|
||||
This segment takes the form of a temporary mirror to copy the data
|
||||
This segment takes the form of a temporary mirror to copy the data
|
||||
from the original location to a newly allocated location.
|
||||
The original LV is updated to use the new temporary mirror segment
|
||||
in the pvmove LV instead of accessing the data directly.
|
||||
|
@@ -1,7 +0,0 @@
|
||||
pvremove wipes the label on a device so that LVM will no longer recognise
|
||||
it as a PV.
|
||||
|
||||
A PV cannot be removed from a VG while it is used by an active LV.
|
||||
|
||||
Repeat the force option (\fB-ff\fP) to forcibly remove a PV belonging to
|
||||
an existing VG. Normally, \fBvgreduce\fP(8) should be used instead.
|
@@ -1,2 +0,0 @@
|
||||
pvresize resizes a PV. The PV may already be in a VG and may have active
|
||||
LVs allocated on it.
|
@@ -1,5 +1,6 @@
|
||||
.SH NOTES
|
||||
|
||||
.IP \[bu] 3
|
||||
pvresize will refuse to shrink a PV if it has allocated extents beyond the
|
||||
new end.
|
||||
|
||||
|
@@ -1 +0,0 @@
|
||||
pvs produces formatted output about PVs.
|
@@ -1,6 +1,6 @@
|
||||
pvscan scans all supported LVM block devices in the system for PVs.
|
||||
.SH NOTES
|
||||
|
||||
\fBScanning with lvmetad\fP
|
||||
.SS Scanning with lvmetad
|
||||
|
||||
pvscan operates differently when used with the
|
||||
.BR lvmetad (8)
|
||||
@@ -64,9 +64,7 @@ be temporarily disabled if they are seen.
|
||||
To notify lvmetad about a device that is no longer present, the major and
|
||||
minor numbers must be given, not the path.
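.P
For example, assuming the \fImajor\fP:\fIminor\fP form of the device
argument, a device that disappeared as 8:16 could be dropped from the
lvmetad cache with (the numbers are illustrative only):
.br
.B pvscan \-\-cache 8:16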
|
||||
|
||||
.P
|
||||
|
||||
\fBAutomatic activation\fP
|
||||
.SS Automatic activation
|
||||
|
||||
When event-driven system services detect a new LVM device, the first step
|
||||
is to automatically scan and cache the metadata from the device. This is
|
@@ -1,16 +0,0 @@
|
||||
vgcfgbackup creates back up files containing metadata of VGs.
|
||||
If no VGs are named, back up files are created for all VGs.
|
||||
See \fBvgcfgrestore\fP for information on using the back up
|
||||
files.
|
||||
|
||||
In a default installation, each VG is backed up into a separate file
|
||||
bearing the name of the VG in the directory \fI#DEFAULT_BACKUP_DIR#\fP.
|
||||
|
||||
To use an alternative back up file, use \fB\-f\fP. In this case, when
|
||||
backing up multiple VGs, the file name is treated as a template, with %s
|
||||
replaced by the VG name.
|
||||
|
||||
NB. This DOES NOT back up the data content of LVs.
|
||||
|
||||
It may also be useful to regularly back up the files in
|
||||
\fI#DEFAULT_SYS_DIR#\fP.
|
@@ -1,11 +0,0 @@
|
||||
vgcfgrestore restores the metadata of a VG from a text back up file
|
||||
produced by \fBvgcfgbackup\fP. This writes VG metadata onto the devices
|
||||
specified in the back up file.
|
||||
|
||||
A back up file can be specified with \fB\-\-file\fP. If no backup file is
|
||||
specified, the most recent one is used. Use \fB\-\-list\fP for a list of
|
||||
the available back up and archive files of a VG.
|
||||
|
||||
WARNING: When a VG contains thin pools, changes to thin metadata cannot be
|
||||
reverted, and data loss may occur if thin metadata has changed. The force
|
||||
option is required to restore in this case.
|
@@ -1,9 +1,11 @@
|
||||
.SH NOTES
|
||||
|
||||
.IP \[bu] 3
|
||||
To replace PVs, \fBvgdisplay \-\-partial \-\-verbose\fP will show the
|
||||
UUIDs and sizes of any PVs that are no longer present. If a PV in the VG
|
||||
is lost and you wish to substitute another of the same size, use
|
||||
\fBpvcreate \-\-restorefile filename \-\-uuid uuid\fP (plus additional
|
||||
arguments as appropriate) to initialise it with the same UUID as the
|
||||
missing PV. Repeat for all other missing PVs in the VG. Then use
|
||||
\fBvgcfgrestore \-\-file filename\fP to restore the VG's metadata.
|
||||
\fBvgcfgrestore \-\-file filename\fP to restore the volume group's
|
||||
metadata.
|
||||
|
@@ -1,2 +0,0 @@
|
||||
vgchange changes VG attributes, changes LV activation in the kernel, and
|
||||
includes other utilities for VG maintenance.
|
@@ -1,10 +1,4 @@
.SH NOTES

If vgchange recognizes COW snapshot LVs that were dropped because they ran
out of space, it displays a message informing the administrator that the
snapshots should be removed.

.SH EXAMPLES
.EXAMPLES

Activate all LVs in all VGs on all existing devices.
.br
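A minimal sketch of the activation example referenced above; the VG name vg00 is hypothetical:

# Activate all LVs in all VGs on all existing devices.
vgchange -a y

# Deactivate all LVs in one VG.
vgchange -a n vg00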
@@ -1 +0,0 @@
vgck checks LVM metadata for consistency.
@@ -1,7 +0,0 @@
vgconvert converts VG metadata from one format to another. The new
metadata format must be able to fit into the space provided by the old
format.

Because the LVM1 format should no longer be used, this command is no
longer needed in general.
@@ -1,4 +0,0 @@
vgcreate creates a new VG on block devices. If the devices were not
previously initialized as PVs with \fBpvcreate\fP(8), vgcreate will
initialize them, making them PVs. The pvcreate options for initializing
devices are also available with vgcreate.
@@ -1,4 +1,4 @@
.SH EXAMPLES
.EXAMPLES

Create a VG with two PVs, using the default physical extent size.
.br
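A minimal sketch of the example described above; the VG and device names are hypothetical:

# Create a VG with two PVs, using the default physical extent size.
vgcreate vg00 /dev/sdb1 /dev/sdc1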
@@ -1,4 +0,0 @@
vgdisplay shows the attributes of VGs, and the associated PVs and LVs.

\fBvgs\fP(8) is a preferred alternative that shows the same information
and more, using a more compact and configurable output format.
@@ -1,8 +1,14 @@
vgexport makes inactive VGs unknown to the system. In this state, all the
PVs in the VG can be moved to a different system, from which
.SH NOTES
.
.IP \[bu] 3
vgexport can make inactive VG(s) unknown to the system. In this state,
all the PVs in the VG can be moved to a different system, from which
\fBvgimport\fP can then be run.

.IP \[bu] 3
Most LVM tools ignore exported VGs.

.IP \[bu] 3
vgexport clears the VG system ID, and vgimport sets the VG system ID to
match the host running vgimport (if the host has a system ID).
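A minimal sketch of the export step described in these notes; the VG name vg00 is hypothetical:

# Deactivate the VG and mark it exported before moving its PVs to another host.
vgchange -a n vg00
vgexport vg00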
@@ -1,11 +0,0 @@
vgextend adds one or more PVs to a VG. This increases the space available
for LVs in the VG.

Also, PVs that have gone missing and then returned, e.g. due to a
transient device failure, can be added back to the VG without
re-initializing them (see \-\-restoremissing).

If the specified PVs have not yet been initialized with pvcreate, vgextend
will initialize them. In this case pvcreate options can be used, e.g.
\-\-labelsector, \-\-metadatasize, \-\-metadataignore,
\-\-pvmetadatacopies, \-\-dataalignment, \-\-dataalignmentoffset.
@@ -1,3 +1,10 @@
.SH NOTES

If the specified PVs have not yet been initialized with pvcreate,
vgextend will initialize them. In this case the PV options apply,
e.g. \-\-labelsector, \-\-metadatasize, \-\-metadataignore,
\-\-pvmetadatacopies, \-\-dataalignment, \-\-dataalignmentoffset.

.SH EXAMPLES

Add two PVs to a VG.
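A minimal sketch of the extend and restore-missing cases described above; the VG and device names are hypothetical:

# Add two PVs to a VG.
vgextend vg00 /dev/sdd1 /dev/sde1

# Re-add a PV that went missing due to a transient failure, without wiping it.
vgextend --restoremissing vg00 /dev/sdd1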
@@ -1,5 +0,0 @@
vgimport makes exported VGs known to the system again, perhaps after
moving the PVs from a different system.

vgexport clears the VG system ID, and vgimport sets the VG system ID to
match the host running vgimport (if the host has a system ID).
9
man/vgimport.8.end
Normal file
@@ -0,0 +1,9 @@
.SH NOTES
.
.IP \[bu] 3
vgimport makes exported VG(s) known to the system again, perhaps
after moving the PVs from a different system.

.IP \[bu] 3
vgexport clears the VG system ID, and vgimport sets the VG system ID
to match the host running vgimport (if the host has a system ID).
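A minimal sketch of the import side of the workflow described above; the VG name vg00 is hypothetical:

# After attaching the moved PVs to the new host, import and activate the VG.
vgimport vg00
vgchange -a y vg00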
@@ -1,6 +0,0 @@
vgimportclone imports a VG from duplicated PVs, e.g. created by a hardware
snapshot of existing PVs.

A duplicated VG cannot be used until it is made to coexist with the original
VG. vgimportclone renames the VG associated with the specified PVs and
changes the associated VG and PV UUIDs.
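A minimal sketch of the rename-and-reuse step described above; the new VG name and device names are hypothetical, and the \-\-basevgname option is used here on the assumption that this version provides it:

# Import a hardware-snapshot copy of a VG under a new name so it can
# coexist with the original VG.
vgimportclone --basevgname vg00_snap /dev/sdk1 /dev/sdl1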
Some files were not shown because too many files have changed in this diff.