mirror of git://sourceware.org/git/lvm2.git

Compare commits


71 Commits

Author SHA1 Message Date
Alasdair G Kergon
396377bc03 pre-release
Removing some unused newlines and changing some incorrect "can't
release until this is fixed" comments.  Rename license.txt to make
it clear it's merely an included file, not itself a licence.
2017-03-28 16:11:35 +01:00
Alasdair G Kergon
b9399f2148 man: pre-generated files weren't committed 2017-03-28 01:32:59 +01:00
Heinz Mauelshagen
19a72e601f man: fix / typo 2017-03-28 00:27:04 +02:00
Heinz Mauelshagen
7f31261844 man: enhance man postprocessing regexp 2017-03-28 00:17:43 +02:00
Zdenek Kabelac
88e408b8ed tests: update to better fit
Dying is automatic on an 'error' result.
Clean up everything on the 'regular' code path.
2017-03-27 20:50:19 +02:00
Zdenek Kabelac
e3a3cf01eb cleanup: use more common FMTd64 type
We use 'd' for plain signed integers.
2017-03-27 20:50:19 +02:00
Mikulas Patocka
78d004efa8 build: fix x32 arch
This patch fixes lvm2 compilation on the x32 arch.
(x32 uses 64-bit x86 CPU features but runs in a 32-bit address
space, so it consumes less memory in VMs.)

On the x32 arch 'time_t' is 64-bit while 'long' is 32-bit.
2017-03-27 20:50:19 +02:00
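The portable fix pattern, visible in the format1 and format_text diffs further down, is to cast time_t to a fixed-width integer and print it with the matching format macro (lvm2's FMTu64 corresponds to the standard PRIu64). A minimal standalone sketch:

    #include <inttypes.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = time(NULL);

        /* Wrong on x32: 'time_t' is 64-bit but "%lu" expects a 32-bit 'long'. */
        /* printf("creation_time = %lu\n", t); */

        /* Portable: cast to a fixed-width type and use the matching macro. */
        printf("creation_time = %" PRIu64 "\n", (uint64_t)t);
        return 0;
    }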
Heinz Mauelshagen
36cac41115 man-generator/man/help: simplify hyphen escaping
Commits a29bb6a14b ... 5c199d99f4 narrowed down on addressing
the escaping of hyphens in the dynamic creation of manuals whilst
avoiding them when creating help texts.  This led to a sequence of
hyphens slipping through, addressed by additional patches in the
aforementioned commit series.

On the other hand, postprocessing both the dynamically generated
(man-generator) and the statically provided manuals catches all
hyphens in need of escaping.

Changes:
- revert the above commits whilst keeping the man-generator
  streamlining and the detection of any '\' when generating
  help texts, in order to avoid escapes slipping in

- Dynamically escape hyphens in manual pages using sed(1)
  in the respective Makefile targets

- remove any manually added escaping of hyphens from any
  static manual sources or headers
2017-03-27 16:49:39 +02:00
Heinz Mauelshagen
6165e09221 lvchange: reject setting all raid1 images to writemostly
raid1 doesn't allow setting all images to writemostly because at
least one image is required to receive any written data immediately.

The dm-raid target will detect such an invalid request and
fail it with a kernel error message.

Reject such a request in userspace, displaying a respective error message.
2017-03-26 20:28:04 +02:00
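The userspace check boils down to counting writemostly images and rejecting the request when no regular leg would remain. A hedged sketch of that logic (the flags array and function names are illustrative, not the lvm2 API):

    #include <stdio.h>

    /* Hypothetical flags array: flags[i] is 1 if image i of the raid1 LV
     * is (or would become) writemostly after the lvchange request. */
    static int validate_writemostly(const int *flags, unsigned image_count)
    {
        unsigned i, wm = 0;

        for (i = 0; i < image_count; i++)
            if (flags[i])
                wm++;

        if (wm == image_count) {
            fprintf(stderr, "Can't set all images of a raid1 LV to writemostly: "
                            "at least one image must receive writes immediately.\n");
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        int all_wm[3] = { 1, 1, 1 };

        return validate_writemostly(all_wm, 3) ? 0 : 1;   /* rejected -> exit 1 */
    }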
Heinz Mauelshagen
5c199d99f4 man: a few more missed '-' to escape 2017-03-25 03:40:02 +01:00
Heinz Mauelshagen
66b2084b96 man-generator: more escaped '-' 2017-03-24 18:57:45 +01:00
Heinz Mauelshagen
d823c65d50 man-generator: fix buffer length calculation 2017-03-24 18:33:03 +01:00
Alasdair G Kergon
25c841af00 make: Don't hard-code SHELL as /bin/sh 2017-03-24 15:29:17 +00:00
Heinz Mauelshagen
4046f9bd95 man/help: avoid escaping of '-' with --help 2017-03-24 15:14:21 +01:00
Heinz Mauelshagen
e9433a9de9 WHATS_NEW: man-generator escape '-' 2017-03-24 14:31:19 +01:00
Heinz Mauelshagen
9354ce6045 man-generator: cleanup escape '-' 2017-03-24 14:27:59 +01:00
Heinz Mauelshagen
10e0a5066e man-generator: emit escaped '-' 2017-03-24 04:00:47 +01:00
Heinz Mauelshagen
5eec3de41f man: escape all single '-' 2017-03-24 02:46:11 +01:00
Heinz Mauelshagen
93467f0d9f man: revert erroneous '-' escapes in Makefile.in 2017-03-24 01:39:50 +01:00
Heinz Mauelshagen
a29bb6a14b man: escape all double '-' 2017-03-24 01:03:58 +01:00
Alasdair G Kergon
2eaca7ab63 tools: Reinstate lvm script processing.
We check for a script if the command isn't recognised (ENO_SUCH_CMD).
(Also added a few comments and fixed some whitespace.)
2017-03-23 23:20:53 +00:00
Heinz Mauelshagen
fe3b9bb7d4 libdm: typo 2017-03-24 00:12:41 +01:00
David Teigland
6471bb2c41 commands: improve error message for unknown command
when running "lvm foo".
2017-03-23 03:35:06 -05:00
David Teigland
0dabe7237c commands: fix commands run with path basename
The recent command definitions commit took the command
name from argv[0] without applying basename to the value,
so a pathname, e.g. /usr/sbin/lvm, would cause lvm to not
recognize the command name.
2017-03-23 03:06:07 -05:00
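Applying basename to argv[0] before the command-name lookup is a small string operation; a sketch of the idea (not the exact lvm2 code):

    #include <stdio.h>
    #include <string.h>

    /* Reduce argv[0] to its final path component before command lookup,
     * so "/usr/sbin/lvm" and "lvm" resolve to the same command name. */
    static const char *cmd_basename(const char *argv0)
    {
        const char *slash = strrchr(argv0, '/');

        return slash ? slash + 1 : argv0;
    }

    int main(int argc, char **argv)
    {
        (void)argc;
        printf("command name: %s\n", cmd_basename(argv[0]));
        return 0;
    }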
Alasdair G Kergon
e8362b4cb7 tools: Show configuration command line in lvm version.
Also update configure.in with some items recently added to the tree.
2017-03-23 01:01:35 +00:00
Heinz Mauelshagen
b84bf3e8cd raid: adjust to misordered raid table line output
This commit supersedes the reverted commit 1e4462dbfb
to avoid changes to liblvm and the libdm API completely.

The libdevmapper interface compares the existing table line retrieved from
the kernel to the newly created table line to decide if it can suppress a reload.
Any difference between the input and output table lines is taken to be a
change, thus causing a table reload.

The dm-raid target started to misorder the raid parameters (e.g. 'raid10_copies')
starting with dm-raid target version 1.9.0 up to (excluding) 1.11.0.  This causes
runtime failures (limited to raid10 in tests so far) and needs to be reversed to allow
e.g. old lvm2 userspace to run properly.

Check for the aforementioned version range in libdm and adjust creation of the table line
to the respective (mis)ordered sequence inside and the correct order outside the range
(as described for the raid target in the kernel's Documentation/device-mapper/dm-raid.txt).
2017-03-23 01:20:00 +01:00
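The version gate reduces to one predicate: only dm-raid target versions from 1.9.0 up to (excluding) 1.11.0 emit the misordered parameter sequence. A sketch of that check (the full emission logic appears in the libdm diff further down):

    #include <stdint.h>
    #include <stdio.h>

    /* 1 if this dm-raid target version emits the misordered parameter
     * sequence (>= 1.9.0 and < 1.11.0), 0 if it follows the order
     * documented in Documentation/device-mapper/dm-raid.txt. */
    static int raid_params_misordered(uint32_t maj, uint32_t min)
    {
        return maj == 1 && min >= 9 && min < 11;
    }

    int main(void)
    {
        printf("1.8.x: %d  1.9.0: %d  1.10.1: %d  1.11.0: %d\n",
               raid_params_misordered(1, 8), raid_params_misordered(1, 9),
               raid_params_misordered(1, 10), raid_params_misordered(1, 11));
        return 0;   /* prints: 1.8.x: 0  1.9.0: 1  1.10.1: 1  1.11.0: 0 */
    }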
Heinz Mauelshagen
1bf90dac77 Revert "raid: adjust to misordered raid table line output"
This reverts commit 1e4462dbfb
in favour of an enhanced solution avoiding changes in liblvm
completely by checking the target versions in libdm and emitting
the respective parameter lines.
2017-03-23 01:19:41 +01:00
David Teigland
14c4d32247 commands: fix combined thin pool and vol create defs
Fixes command defs related to creating a new thin pool and
then a new thin lv in the new pool.

1. lvcreate --size --virtualsize --thinpool
   Needs a cmd def, it was missing.
   The def is unique by the three required
   options: size, virtualsize and thinpool.

2. lvcreate --size --virtualsize --thinpool VG
   Needs a cmd def, it was missing.
   The def is unique by the three required
   options: size, virtualsize and thinpool,
   and one required positional arg: VG.

3. lvcreate --thin --virtualsize --size LV_new|VG
   This existing def should not accept an optional
   --type thin, which if used makes it indistinct
   from another def.

4. lvcreate --size --virtualsize VG
   This existing def should not accept an optional
   --type thin or --thin, which if used makes it
   indistinct from other defs (e.g. 3).
2017-03-21 22:04:01 -05:00
David Teigland
3be2e61c9f Revert "lvcreate: continue to accept --thinpool with -L and -V but not -T"
This reverts commit 642d682d8d.

Using the thinpool option with this cmd def makes it
indistinct from other cmd defs where thinpool is a
required option.
2017-03-21 22:01:19 -05:00
Heinz Mauelshagen
7126fb13e7 metadata: cleanup flags definition to be consistent
Use shift bitops throughout segtype.h.
2017-03-22 00:29:49 +01:00
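The cleanup replaces hand-written hex constants with shift expressions. Both spellings denote the same bit; the shift form is simply harder to mistype. A quick static check using one constant pair from the segtype.h diff further down:

    #include <assert.h>

    #define SEG_RAID6_N_6_HEX   0x0000000800000000ULL
    #define SEG_RAID6_N_6_SHIFT (1ULL << 35)

    int main(void)
    {
        /* Identical in value; the shift makes the bit position explicit. */
        assert(SEG_RAID6_N_6_HEX == SEG_RAID6_N_6_SHIFT);
        return 0;
    }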
Heinz Mauelshagen
1810162b51 WHATS_NEW: adjust to misordered raid parameters 2017-03-21 18:18:58 +01:00
Heinz Mauelshagen
1e4462dbfb raid: adjust to misordered raid table line output
The libdevmapper interface compares the existing table line retrieved from
the kernel to the newly created table line to decide if it can suppress a reload.
Any difference between the input and output table lines is taken to be a
change, thus causing a table reload.

The dm-raid target started to misorder the raid parameters (e.g. 'raid10_copies')
starting with dm-raid target version 1.9.0 up to (excluding) 1.11.0.  This causes
runtime failures (limited to raid10 in tests so far) and needs to be reversed to allow
e.g. old lvm2 userspace to run properly.

Check for the aforementioned version range and adjust creation of the table line
to the respective (mis)ordered sequence inside and the correct order outside the range
(as described for the raid target in the kernel's Documentation/device-mapper/dm-raid.txt).
2017-03-21 18:17:42 +01:00
Alasdair G Kergon
642d682d8d lvcreate: continue to accept --thinpool with -L and -V but not -T
lvcreate --thinpool POOL1 -L 100M --virtualsize 100M snapper_thinp

https://bugzilla.redhat.com/1434027

(The general rule is that a command is accepted if it is unambiguous.
The combination -L -V --thinpool uniquely identifies the operation.)
2017-03-20 22:04:37 +00:00
Alasdair G Kergon
b3e833c777 man-generator: Remove unused variable.
man-generator.c:2976:6: warning: variable "sep" set but not used
2017-03-20 16:55:30 +00:00
Tony Asleson
862ca6e8b7 lvmdbusd: Rename ee to got_external_event
This variable is global; make it more descriptive.
2017-03-20 10:08:39 -05:00
Tony Asleson
b65a9230a3 lvmdbusd: Update state during pv move
Periodically update the state during pv move so that all the different
dbus objects reflect something close to reality during the process.
2017-03-20 10:08:39 -05:00
Tony Asleson
3ead4fb7ac lvmdbusd: Limit state refreshes for udev events
Udev events can come in like a flood when something changes.  It really
doesn't do us any good to refresh the state of the service numerous times
when one would suffice. We had something like this before, but it was
removed when we added the refresh thread.  However, we have since learned
that we need to sequence events in the correct order and block dbus
operations if we believe the state has been affected, thus udev events are
being processed on the main work queue.  This change limits spurious
work items from getting to the queue.
2017-03-20 10:08:39 -05:00
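The limiting scheme is a pending-counter: an event queues a refresh only when none is already pending, and the counter drops when the refresh runs. The daemon itself is Python (see the udevwatch diff further down); a C rendering of the same scheme, with hypothetical names:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t udev_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned udev_pending;

    /* Called for every udev event: only the first event of a burst
     * queues a state refresh; the rest are dropped until it runs. */
    static int udev_add(void)
    {
        int queued = 0;

        pthread_mutex_lock(&udev_lock);
        if (udev_pending == 0) {
            udev_pending = 1;
            queued = 1;     /* caller puts one refresh on the work queue */
        }
        pthread_mutex_unlock(&udev_lock);
        return queued;
    }

    /* Called when the queued refresh actually runs. */
    static void udev_complete(void)
    {
        pthread_mutex_lock(&udev_lock);
        if (udev_pending > 0)
            udev_pending--;
        pthread_mutex_unlock(&udev_lock);
    }

    int main(void)
    {
        int i, queued = 0;

        for (i = 0; i < 100; i++)   /* a burst of 100 events ... */
            queued += udev_add();
        udev_complete();
        printf("refreshes queued: %d\n", queued);   /* ... queues just 1 */
        return 0;
    }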
Tony Asleson
7eeb093fdd lvmdbusd: Call add_no_notify for *move commands
This was missed when "add_no_notify" was added, causing
extra external events to be processed when we did moves.
2017-03-20 10:08:38 -05:00
Tony Asleson
2dc71fc291 lvmdbusd: Only disable notify_dbus after getting external event
If we always disable the sending of notify dbus events, then in the case
where all the users are lvm dbus users we will be in udev handling mode
until at least one external lvm command occurs.  Instead, we will not disable
notify dbus until after we get at least one external event.  This makes the
service get into the correct mode of operation faster.
2017-03-20 10:08:38 -05:00
Zdenek Kabelac
17b56a4aed tests: early detect leaking error dev
lvconvert should not leak an 'error' device.

(This patch does not fix the problem, it just makes it more easily visible
instead of a more confusing 'clvmd' trace.)
2017-03-20 14:18:50 +01:00
David Teigland
07040942ed man: advise against mirrored mirror log 2017-03-17 11:54:39 -05:00
David Teigland
8d7be8f5df help: align option list in pv/lv/vgchange cases
Align one-required options as is done for
optional options.
2017-03-17 11:23:38 -05:00
Heinz Mauelshagen
fec2ea76cf raid: check target version for shrink support
Starting with dm-raid target version 1.9.0, shrinking of mapped devices is supported.
Check that this support is present in lvresize and lvreduce.

Related: rhbz1394048
2017-03-17 16:46:33 +01:00
Heinz Mauelshagen
17a8f3d6f0 raid: conditionally reject convert to striped/raid; fix
Fix a logic flaw introduced in commit 17bee733d1
preventing e.g. striped -> raid5 conversions.

Related: rhbz1191935
Related: rhbz1366296
2017-03-17 16:03:35 +01:00
Heinz Mauelshagen
6ebf39da91 WHATS_NEW: conditionally reject raid convert 2017-03-17 14:51:10 +01:00
Heinz Mauelshagen
76709aaf39 raid: cleanup; remove unused function
Remove unused function (lv_has_constant_stripes() is used instead).
2017-03-17 14:24:44 +01:00
Zdenek Kabelac
07ea9887d3 tests: raise min dm cache version
Since we want to test different cache policies (mq & smq) with profiles,
raise the version to 1.8.

TODO: maybe split into more tests so older targets can be tested here as well.
N.B.: passthrough is also not supported with version 1.3
2017-03-17 14:22:33 +01:00
Zdenek Kabelac
4a271e7ee7 properties: only thin-pool provides discards
Querying a non-thin-pool segment for the discards property may lead
to an internal error if the segment has an 'out-of-range' value set,
so only thin-pool is allowed; for others it returns NULL.
2017-03-17 14:22:33 +01:00
Zdenek Kabelac
d211c98581 lvm2app: correct internal API changes
The internal library gets more strict about setting the discards and
zero_new_blocks parameters.
2017-03-17 14:22:33 +01:00
Heinz Mauelshagen
e0ea569045 raid: cleanup
Move function _raid45_to_raid54_wrapper() to avoid a superfluous declaration.
2017-03-17 14:14:42 +01:00
Heinz Mauelshagen
1520fec3e8 raid: name variables consistently
Related: rhbz1191935
Related: rhbz1366296
2017-03-17 14:04:03 +01:00
Heinz Mauelshagen
17bee733d1 raid: conditionally reject convert to striped/raid0*
If SubLVs to be removed still exist after an image-removing
conversion (i.e. "lvconvert --yes --force --stripes N "
with N < total stripes), any request to convert to a different
striped/raid* level has to be rejected until those freed
SubLVs have been removed by running the aforementioned lvconvert again.

Add tests to check that conversion to striped/raid* gets rejected.
Enhance a test comment.

Related: rhbz1191935
Related: rhbz1366296
2017-03-17 13:58:54 +01:00
Alasdair G Kergon
5e7bc8d854 man: Build man-generator in tools dir.
Use ln to make a copy of command.c for compilation with different DEFS,
then handle dependencies the normal way.
2017-03-16 23:10:40 +00:00
Alasdair G Kergon
270ed9bc90 man: Preserve template variables in pre-generated pages. 2017-03-16 23:08:59 +00:00
Alasdair G Kergon
0c74afa1c6 make.tmpl: Support per-object DEFS.
Same as CFLAGS.
2017-03-16 23:03:03 +00:00
Alasdair G Kergon
2d00767394 tools: Avoid man-generator compilation warnings.
Remove unused variables and make functions with missing prototypes static.
2017-03-16 22:39:04 +00:00
Heinz Mauelshagen
ad4158bac7 man: lvmraid(7) clarifications 2017-03-16 23:10:57 +01:00
Heinz Mauelshagen
4a3e30d102 WHATS_NEW: ensure raid6 upconversion restrictions 2017-03-16 22:33:08 +01:00
Heinz Mauelshagen
b917b12e2c WHATS_NEW: adjust mirror+raid DSOs to lvconvert --repair 2017-03-16 22:27:30 +01:00
Heinz Mauelshagen
b0336e8b3c lvconvert: ensure upconversion restrictions
Ensure minimum number of 3 data stripes on conversions to raid6.

Add test for it.

Resolves: rhbz1432675
2017-03-16 22:10:32 +01:00
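The area-count arithmetic behind the restriction: raid6 carries two parity devices, so requiring at least 3 data stripes means at least 3 areas when taking over from striped/raid0, and 4 when coming from raid4/raid5 (where one existing area is parity). A hypothetical helper mirroring the check in the lv_raid.c diff further down:

    #include <stdint.h>
    #include <stdio.h>

    /* raid6 takeover needs at least 3 data stripes: 3 areas in general,
     * 4 when the source is raid4/raid5 (3 data + 1 parity there becomes
     * 3 data + 2 parity in raid6). */
    static int raid6_takeover_allowed(uint32_t area_count, int from_raid45)
    {
        uint32_t min_areas = from_raid45 ? 4 : 3;

        if (area_count < min_areas) {
            fprintf(stderr, "Minimum of %u stripes needed for conversion to raid6.\n",
                    min_areas);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        return raid6_takeover_allowed(4, 1) ? 0 : 1;   /* raid5 with 4 areas: OK */
    }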
Heinz Mauelshagen
76b843a4bf test: adjust to proper dm-raid target version
Adjust to the final target version 1.10.1 supporting reshape
properly and to recently changed report field specifications
(e.g. reshape_len_le) to allow these tests to run.

Lower the mirror region size to suit the tiny test VG.
2017-03-16 21:17:58 +01:00
Heinz Mauelshagen
a37bb7b2a5 dmeventd: adjust mirror/raid DSOs to new repair design
Previous commit 506d88a2ec introduced disabling lvmetad on repairs.

Avoid calling lvscan and using any --config options altogether
in the mirror and raid DSOs.

Related: rhbz1380521
2017-03-16 21:05:05 +01:00
David Teigland
73d028023a lvmetad: fix bug in snprintf of disable reason 2017-03-16 12:15:40 -05:00
David Teigland
c8719d4e94 WHATS_NEW: disable lvmetad for repair 2017-03-16 11:56:19 -05:00
David Teigland
506d88a2ec lvconvert: disable lvmetad for repair
Repairing missing devices does not work reliably
with lvmetad, so disable lvmetad before repair.
A standard lvmetad refresh (pvscan --cache) will
enable lvmetad again.
2017-03-16 11:50:36 -05:00
Heinz Mauelshagen
e5b6f2685a dmeventd: reintroduce fix mirror DSO to work with lvmetad
Commit 07ded8059c assumed that the mirror is blocked, which is not the case.

It is accessible, degraded and in need of repair because some of its legs
(partially) failed.  Any auto-repair via dmeventd fails though because
of lvmetad not providing proper data about the failed PV(s).  That's why
this workaround got introduced in commit 76f6951c3e until we get to
the lvmetad interaction core issue.

Note that any mirror auto-repair failure is caused by such unsolved lvmetad
interaction problems, so disabling lvmetad works as a last resort, as
elaborated on in the related bz.

Reintroducing the interim solution.

Resolves: rhbz1380521
2017-03-16 14:19:06 +01:00
Marian Csontos
a87715b6fd spec: Disable sanlock only, keep lockd_dlm enabled 2017-03-16 13:03:25 +01:00
Marian Csontos
19b65a3d76 spec: Replace remaining %define by %global 2017-03-16 13:03:24 +01:00
Marian Csontos
7067514c9b spec: Use %global instead of %define for constants
Using %define is now discouraged by the Fedora Packaging Guidelines.
2017-03-16 13:03:24 +01:00
Marian Csontos
5ba82a16db spec: Update requirements for lockd
lockd requires sanlock >= 3.3.0
2017-03-16 13:03:24 +01:00
Marian Csontos
cf0bf4b314 spec: Profiles are not %config(noreplace)
These files are just examples and should not be edited by the user.
2017-03-16 13:02:24 +01:00
147 changed files with 2067 additions and 1728 deletions

View File

@@ -1 +1 @@
2.02.169(2)-git (2016-11-30)
2.02.169(2)-git (2017-03-28)

View File

@@ -1 +1 @@
1.02.138-git (2016-11-30)
1.02.138-git (2017-03-28)

View File

@@ -1,10 +1,17 @@
Version 2.02.169 -
=====================================
Version 2.02.169 - 28th March 2017
==================================
Automatically decide whether '-' in a man page is a hyphen or a minus sign.
Add build-time configuration command line to 'lvm version' output.
Handle known table line parameter order change in specific raid target vsns.
Conditionally reject raid convert to striped/raid0* after reshape.
Ensure raid6 upconversion restrictions.
Adjust mirror & raid dmeventd plugins for new lvconvert --repair behaviour.
Disable lvmetad when lvconvert --repair is run.
Remove obsolete lvmchange binary - convert to built-in command.
Lvdisplay [-m] shows more informations for cached volumes.
Show more information for cached volumes in lvdisplay [-m].
Add option for lvcreate/lvconvert --cachemetadataformat auto|1|2.
Support cache segment with configurable metadata format.
Add allocation/cache_metadata_format profilable setttings.
Add allocation/cache_metadata_format profilable settings.
Use function cache_set_params() for both lvcreate and lvconvert.
Skip rounding on cache chunk size boudary when create cache LV.
Improve cache_set_params support for chunk_size selection.
@@ -14,7 +21,7 @@ Version 2.02.169 -
Support conversion of raid type, stripesize and number of disks
Reject writemostly/writebehind in lvchange during resynchronization.
Deactivate active origin first before removal for improved workflow.
Fix regression of accepting options --type and -m with lvresize (2.02.158).
Fix regression of accepting both --type and -m with lvresize. (2.02.158)
Add lvconvert --swapmetadata, new specific way to swap pool metadata LVs.
Add lvconvert --startpoll, new specific way to start polling conversions.
Add lvconvert --mergethin, new specific way to merge thin snapshots.
@@ -27,9 +34,9 @@ Version 2.02.169 -
Match every command run to one command definition.
Specify every allowed command definition/syntax in command-lines.in.
Add extra memory page when limiting pthread stack size in clvmd.
Support striped/raid0* <-> raid10_near conversions
Support shrinking of RaidLvs
Support region size changes on existing RaidLVs
Support striped/raid0* <-> raid10_near conversions.
Support shrinking of RaidLVs.
Support region size changes on existing RaidLVs.
Avoid parallel usage of cpg_mcast_joined() in clvmd with corosync.
Support raid6_{ls,rs,la,ra}_6 segment types and conversions from/to it.
Support raid6_n_6 segment type and conversions from/to it.

View File

@@ -1,23 +1,24 @@
Version 1.02.138 -
=====================================
Version 1.02.138 - 28th March 2017
==================================
Support additional raid5/6 configurations.
Provide dm_tree_node_add_cache_target@base compatible symbol.
Support DM_CACHE_FEATURE_METADATA2, new cache metadata format 2.
Improve code to handle mode mask for cache nodes.
Cache status check for passthrough also require trailing space.
Add extra memory page when limiting pthread stack size in dmeventd.
Avoids immediate resume when preloaded device is smaller.
Do not suppress kernel key description in dmsetup table output.
Do not suppress kernel key description in dmsetup table output for dm-crypt.
Support configurable command executed from dmeventd thin plugin.
Support new R|r human readable units output format.
Thin dmeventd plugin reacts faster on lvextend failure path with umount.
Add dm_stats_bind_from_fd() to bind a stats handle from a file descriptor.
Do not try call callback when reverting activation on error path.
Fix file mapping for extents with physically adjacent extents.
Fix file mapping for extents with physically adjacent extents in dmstats.
Validation vsnprintf result in runtime translate of dm_log (1.02.136).
Separate filemap extent allocation from region table.
Fix segmentation fault when filemap region creation fails.
Fix performance of region cleanup for failed filemap creation.
Fix very slow region deletion with many regions.
Separate filemap extent allocation from region table in dmstats.
Fix segmentation fault when filemap region creation fails in dmstats.
Fix performance of region cleanup for failed filemap creation in dmstats.
Fix very slow region deletion with many regions in dmstats.
Version 1.02.137 - 30th November 2016
=====================================

configure vendored
View File

@@ -3015,6 +3015,7 @@ ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $
ac_compiler_gnu=$ac_cv_c_compiler_gnu
CONFIGURE_LINE="$0 $@"
ac_config_headers="$ac_config_headers include/configure.h"
@@ -6078,7 +6079,7 @@ fi
done
for ac_header in termios.h sys/statvfs.h sys/timerfd.h linux/magic.h linux/fiemap.h
for ac_header in termios.h sys/statvfs.h sys/timerfd.h sys/vfs.h linux/magic.h linux/fiemap.h
do :
as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh`
ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default"
@@ -6271,6 +6272,26 @@ _ACEOF
fi
ac_fn_c_check_member "$LINENO" "struct stat" "st_blocks" "ac_cv_member_struct_stat_st_blocks" "$ac_includes_default"
if test "x$ac_cv_member_struct_stat_st_blocks" = xyes; then :
cat >>confdefs.h <<_ACEOF
#define HAVE_STRUCT_STAT_ST_BLOCKS 1
_ACEOF
$as_echo "#define HAVE_ST_BLOCKS 1" >>confdefs.h
else
case " $LIBOBJS " in
*" fileblocks.$ac_objext "* ) ;;
*) LIBOBJS="$LIBOBJS fileblocks.$ac_objext"
;;
esac
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether struct tm is in sys/time.h or time.h" >&5
$as_echo_n "checking whether struct tm is in sys/time.h or time.h... " >&6; }
if ${ac_cv_struct_tm+:} false; then :
@@ -15473,6 +15494,12 @@ LVM_MINOR=`echo "$VER" | $AWK -F '.' '{print $2}'`
LVM_PATCHLEVEL=`echo "$VER" | $AWK -F '[(.]' '{print $3}'`
LVM_LIBAPI=`echo "$VER" | $AWK -F '[()]' '{print $2}'`
cat >>confdefs.h <<_ACEOF
#define LVM_CONFIGURE_LINE "$CONFIGURE_LINE"
_ACEOF
################################################################################

View File

@@ -15,6 +15,7 @@ AC_PREREQ(2.69)
################################################################################
dnl -- Process this file with autoconf to produce a configure script.
AC_INIT
CONFIGURE_LINE="$0 $@"
AC_CONFIG_SRCDIR([lib/device/dev-cache.h])
AC_CONFIG_HEADERS([include/configure.h])
@@ -105,7 +106,7 @@ AC_CHECK_HEADERS([assert.h ctype.h dirent.h errno.h fcntl.h float.h \
sys/time.h sys/types.h sys/utsname.h sys/wait.h time.h \
unistd.h], , [AC_MSG_ERROR(bailing out)])
AC_CHECK_HEADERS(termios.h sys/statvfs.h sys/timerfd.h linux/magic.h linux/fiemap.h)
AC_CHECK_HEADERS(termios.h sys/statvfs.h sys/timerfd.h sys/vfs.h linux/magic.h linux/fiemap.h)
case "$host_os" in
linux*)
@@ -120,6 +121,7 @@ AC_C_CONST
AC_C_INLINE
AC_CHECK_MEMBERS([struct stat.st_rdev])
AC_CHECK_TYPES([ptrdiff_t])
AC_STRUCT_ST_BLOCKS
AC_STRUCT_TM
AC_TYPE_OFF_T
AC_TYPE_PID_T
@@ -2001,6 +2003,8 @@ LVM_MINOR=`echo "$VER" | $AWK -F '.' '{print $2}'`
LVM_PATCHLEVEL=`echo "$VER" | $AWK -F '[[(.]]' '{print $3}'`
LVM_LIBAPI=`echo "$VER" | $AWK -F '[[()]]' '{print $2}'`
AC_DEFINE_UNQUOTED(LVM_CONFIGURE_LINE, "$CONFIGURE_LINE", [configure command line used])
################################################################################
AC_SUBST(APPLIB)
AC_SUBST(AWK)

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2015 Red Hat, Inc. All rights reserved.
* Copyright (C) 2005-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -25,7 +25,6 @@
struct dso_state {
struct dm_pool *mem;
char cmd_lvscan[512];
char cmd_lvconvert[512];
};
@@ -99,21 +98,14 @@ static int _get_mirror_event(struct dso_state *state, char *params)
return r;
}
static int _remove_failed_devices(const char *cmd_lvscan, const char *cmd_lvconvert,
const char *device)
static int _remove_failed_devices(const char *cmd_lvconvert, const char *device)
{
if (!dmeventd_lvm2_run_with_lock(cmd_lvscan))
log_warn("WARNING: Re-scan of mirrored device %s failed.", device);
/* if repair goes OK, report success even if lvscan has failed */
if (!dmeventd_lvm2_run_with_lock(cmd_lvconvert)) {
log_error("Repair of mirrored device %s failed.", device);
return 0;
}
if (!dmeventd_lvm2_run_with_lock(cmd_lvscan))
log_warn("WARNING: Re-scan of mirrored device %s failed.", device);
log_info("Repair of mirrored device %s finished successfully.", device);
return 1;
@@ -154,9 +146,7 @@ void process_event(struct dm_task *dmt,
break;
case ME_FAILURE:
log_error("Device failure in %s.", device);
if (!_remove_failed_devices(state->cmd_lvscan,
state->cmd_lvconvert,
device))
if (!_remove_failed_devices(state->cmd_lvconvert, device))
/* FIXME Why are all the error return codes unused? Get rid of them? */
log_error("Failed to remove faulty devices in %s.",
device);

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2005-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -22,7 +22,6 @@
struct dso_state {
struct dm_pool *mem;
char cmd_lvscan[512];
char cmd_lvconvert[512];
uint64_t raid_devs[RAID_DEVS_ELEMS];
int failed;
@@ -74,8 +73,6 @@ static int _process_raid_event(struct dso_state *state, char *params, const char
goto out; /* already reported */
state->failed = 1;
if (!dmeventd_lvm2_run_with_lock(state->cmd_lvscan))
log_warn("WARNING: Re-scan of RAID device %s failed.", device);
/* if repair goes OK, report success even if lvscan has failed */
if (!dmeventd_lvm2_run_with_lock(state->cmd_lvconvert)) {
@@ -136,9 +133,7 @@ int register_device(const char *device,
if (!dmeventd_lvm2_init_with_pool("raid_state", state))
goto_bad;
if (!dmeventd_lvm2_command(state->mem, state->cmd_lvscan, sizeof(state->cmd_lvscan),
"lvscan --cache", device) ||
!dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
if (!dmeventd_lvm2_command(state->mem, state->cmd_lvconvert, sizeof(state->cmd_lvconvert),
"lvconvert --config devices{ignore_suspended_devices=1} "
"--repair --use-policies", device))
goto_bad;

View File

@@ -11,7 +11,8 @@ import subprocess
from . import cfg
from .cmdhandler import options_to_cli_args
import dbus
from .utils import pv_range_append, pv_dest_ranges, log_error, log_debug
from .utils import pv_range_append, pv_dest_ranges, log_error, log_debug,\
add_no_notify
import os
import threading
@@ -42,6 +43,10 @@ def _move_merge(interface_name, command, job_state):
# the command always as we will be getting periodic output from them on
# the status of the long running operation.
command.insert(0, cfg.LVM_CMD)
# Instruct lvm to not register an event with us
command = add_no_notify(command)
process = subprocess.Popen(command, stdout=subprocess.PIPE,
env=os.environ,
stderr=subprocess.PIPE, close_fds=True)
@@ -59,6 +64,10 @@ def _move_merge(interface_name, command, job_state):
(device, ignore, percentage) = line_str.split(':')
job_state.Percent = round(
float(percentage.strip()[:-1]), 1)
# While the move is in progress we need to periodically update
# the state to reflect where everything is at.
cfg.load()
except ValueError:
log_error("Trying to parse percentage which failed for %s" %
line_str)

View File

@@ -26,7 +26,7 @@ bus = None
args = None
# Set to true if we are depending on external events for updates
ee = False
got_external_event = False
# Shared state variable across all processes
run = multiprocessing.Value('i', 1)

View File

@@ -206,7 +206,7 @@ class Manager(AutomatedProperties):
utils.log_debug("ExternalEvent received, disabling "
"udev monitoring")
# We are dependent on external events now to stay current!
cfg.ee = True
cfg.got_external_event = True
r = RequestEntry(
-1, Manager._external_event, (command,), None, None, False)

View File

@@ -16,9 +16,33 @@ from . import utils
observer = None
observer_lock = threading.RLock()
_udev_lock = threading.RLock()
_udev_count = 0
def udev_add():
global _udev_count
with _udev_lock:
if _udev_count == 0:
_udev_count += 1
# Place this on the queue so any other operations will sequence
# behind it
r = RequestEntry(
-1, _udev_event, (), None, None, False)
cfg.worker_q.put(r)
def udev_complete():
global _udev_count
with _udev_lock:
if _udev_count > 0:
_udev_count -= 1
def _udev_event():
utils.log_debug("Processing udev event")
udev_complete()
cfg.load()
@@ -44,10 +68,7 @@ def filter_event(action, device):
refresh = True
if refresh:
# Place this on the queue so any other operations will sequence behind it
r = RequestEntry(
-1, _udev_event, (), None, None, False)
cfg.worker_q.put(r)
udev_add()
def add():

View File

@@ -510,16 +510,19 @@ def add_no_notify(cmdline):
:rtype: list
"""
if 'help' in cmdline:
return cmdline
# Only after we have seen an external event will be disable lvm from sending
# us one when we call lvm
if cfg.got_external_event:
if 'help' in cmdline:
return cmdline
if '--config' in cmdline:
for i, arg in enumerate(cmdline):
if arg == '--config':
cmdline[i] += "global/notify_dbus=0"
break
else:
cmdline.extend(['--config', 'global/notify_dbus=0'])
if '--config' in cmdline:
for i, arg in enumerate(cmdline):
if arg == '--config':
cmdline[i] += "global/notify_dbus=0"
break
else:
cmdline.extend(['--config', 'global/notify_dbus=0'])
return cmdline

View File

@@ -25,6 +25,7 @@
#define LVMETAD_DISABLE_REASON_LVM1 "LVM1"
#define LVMETAD_DISABLE_REASON_DUPLICATES "DUPLICATES"
#define LVMETAD_DISABLE_REASON_VGRESTORE "VGRESTORE"
#define LVMETAD_DISABLE_REASON_REPAIR "REPAIR"
struct volume_group;

View File

@@ -203,8 +203,9 @@ struct vg_info {
#define GLFL_DISABLE_REASON_LVM1 0x00000008
#define GLFL_DISABLE_REASON_DUPLICATES 0x00000010
#define GLFL_DISABLE_REASON_VGRESTORE 0x00000020
#define GLFL_DISABLE_REASON_REPAIR 0x00000040
#define GLFL_DISABLE_REASON_ALL (GLFL_DISABLE_REASON_DIRECT | GLFL_DISABLE_REASON_LVM1 | GLFL_DISABLE_REASON_DUPLICATES | GLFL_DISABLE_REASON_VGRESTORE)
#define GLFL_DISABLE_REASON_ALL (GLFL_DISABLE_REASON_DIRECT | GLFL_DISABLE_REASON_REPAIR | GLFL_DISABLE_REASON_LVM1 | GLFL_DISABLE_REASON_DUPLICATES | GLFL_DISABLE_REASON_VGRESTORE)
#define VGFL_INVALID 0x00000001
@@ -2355,6 +2356,8 @@ static response set_global_info(lvmetad_state *s, request r)
if ((reason = daemon_request_str(r, "disable_reason", NULL))) {
if (strstr(reason, LVMETAD_DISABLE_REASON_DIRECT))
reason_flags |= GLFL_DISABLE_REASON_DIRECT;
if (strstr(reason, LVMETAD_DISABLE_REASON_REPAIR))
reason_flags |= GLFL_DISABLE_REASON_REPAIR;
if (strstr(reason, LVMETAD_DISABLE_REASON_LVM1))
reason_flags |= GLFL_DISABLE_REASON_LVM1;
if (strstr(reason, LVMETAD_DISABLE_REASON_DUPLICATES))
@@ -2418,8 +2421,9 @@ static response get_global_info(lvmetad_state *s, request r)
pid = (int)daemon_request_int(r, "pid", 0);
if (s->flags & GLFL_DISABLE) {
snprintf(reason, REASON_BUF_SIZE - 1, "%s%s%s%s",
snprintf(reason, REASON_BUF_SIZE - 1, "%s%s%s%s%s",
(s->flags & GLFL_DISABLE_REASON_DIRECT) ? LVMETAD_DISABLE_REASON_DIRECT "," : "",
(s->flags & GLFL_DISABLE_REASON_REPAIR) ? LVMETAD_DISABLE_REASON_REPAIR "," : "",
(s->flags & GLFL_DISABLE_REASON_LVM1) ? LVMETAD_DISABLE_REASON_LVM1 "," : "",
(s->flags & GLFL_DISABLE_REASON_DUPLICATES) ? LVMETAD_DISABLE_REASON_DUPLICATES "," : "",
(s->flags & GLFL_DISABLE_REASON_VGRESTORE) ? LVMETAD_DISABLE_REASON_VGRESTORE "," : "");

View File

@@ -491,6 +491,9 @@
/* Define to 1 if you have the <sys/file.h> header file. */
#undef HAVE_SYS_FILE_H
/* Define to 1 if you have the <sys/inotify.h> header file. */
#undef HAVE_SYS_INOTIFY_H
/* Define to 1 if you have the <sys/ioctl.h> header file. */
#undef HAVE_SYS_IOCTL_H
@@ -626,6 +629,9 @@
/* Define to 1 to include code that uses lvmpolld. */
#undef LVMPOLLD_SUPPORT
/* configure command line used */
#undef LVM_CONFIGURE_LINE
/* Path to lvm binary. */
#undef LVM_PATH

lib/cache/lvmetad.c vendored
View File

@@ -66,7 +66,7 @@ static int _log_debug_inequality(const char *name, struct dm_config_node *a, str
log_debug_lvmetad("VG %s metadata inequality at %s / %s: %s / %s",
name, a->key, b->key, av->v.str, bv->v.str);
else if (a->v->type == DM_CFG_INT && b->v->type == DM_CFG_INT)
log_debug_lvmetad("VG %s metadata inequality at %s / %s: " FMTi64 " / " FMTi64,
log_debug_lvmetad("VG %s metadata inequality at %s / %s: " FMTd64 " / " FMTd64,
name, a->key, b->key, av->v.i, bv->v.i);
else
log_debug_lvmetad("VG %s metadata inequality at %s / %s: type %d / type %d",
@@ -2874,6 +2874,9 @@ int lvmetad_is_disabled(struct cmd_context *cmd, const char **reason)
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_DIRECT)) {
*reason = "the disable flag was set directly";
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_REPAIR)) {
*reason = "a repair command was run";
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_LVM1)) {
*reason = "LVM1 metadata was found";

View File

@@ -128,8 +128,8 @@ int import_pv(const struct format_type *fmt, struct dm_pool *mem,
int generate_lvm1_system_id(struct cmd_context *cmd, char *s, const char *prefix)
{
if (dm_snprintf(s, NAME_LEN, "%s%s%lu",
prefix, cmd->hostname, time(NULL)) < 0) {
if (dm_snprintf(s, NAME_LEN, "%s%s" FMTu64,
prefix, cmd->hostname, (uint64_t)time(NULL)) < 0) {
log_error("Generated LVM1 format system_id too long");
return 0;
}

View File

@@ -350,7 +350,7 @@ static int _print_header(struct cmd_context *cmd, struct formatter *f,
_utsname.version, _utsname.machine);
if (cmd->system_id && *cmd->system_id)
outf(f, "creation_host_system_id = \"%s\"", cmd->system_id);
outf(f, "creation_time = %lu\t# %s", t, ctime(&t));
outf(f, "creation_time = " FMTu64 "\t# %s", (uint64_t)t, ctime(&t));
return 1;
}

View File

@@ -220,7 +220,12 @@ char *lvseg_segtype_dup(struct dm_pool *mem, const struct lv_segment *seg)
char *lvseg_discards_dup(struct dm_pool *mem, const struct lv_segment *seg)
{
return dm_pool_strdup(mem, get_pool_discards_name(seg->discards));
if (lv_is_thin_pool(seg->lv))
return dm_pool_strdup(mem, get_pool_discards_name(seg->discards));
log_error("Cannot query non thin-pool segment of LV %s for discards property.",
display_lvname(seg->lv));
return NULL;
}
char *lvseg_kernel_discards_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_with_info_and_seg_status *lvdm)

View File

@@ -4773,6 +4773,19 @@ static int _lvresize_check(struct logical_volume *lv,
return 0;
}
if (lv_is_raid(lv) &&
lp->resize == LV_REDUCE) {
unsigned attrs;
const struct segment_type *segtype = first_seg(lv)->segtype;
if (!segtype->ops->target_present ||
!segtype->ops->target_present(lv->vg->cmd, NULL, &attrs) ||
!(attrs & RAID_FEATURE_SHRINK)) {
log_error("RAID module does not support shrinking.");
return 0;
}
}
if (lp->use_policies && !lv_is_cow(lv) && !lv_is_thin_pool(lv)) {
log_error("Policy-based resize is supported only for snapshot and thin pool volumes.");
return 0;

View File

@@ -39,22 +39,6 @@ static int _check_restriping(uint32_t new_stripes, struct logical_volume *lv)
return 1;
}
__attribute__ ((__unused__))
/* Check that all lv has segments have exactly the required number of areas */
static int _check_num_areas_in_lv_segments(struct logical_volume *lv, unsigned num_areas)
{
struct lv_segment *seg;
dm_list_iterate_items(seg, &lv->segments)
if (seg->area_count != num_areas) {
log_error("For this operation LV %s needs exactly %u data areas per segment.",
display_lvname(lv), num_areas);
return 0;
}
return 1;
}
/*
* Check if reshape is supported in the kernel.
*/
@@ -171,6 +155,33 @@ char *top_level_lv_name(struct volume_group *vg, const char *lv_name)
return new_lv_name;
}
/* Get available and removed SubLVs for @lv */
static int _get_available_removed_sublvs(const struct logical_volume *lv, uint32_t *available_slvs, uint32_t *removed_slvs)
{
uint32_t s;
struct lv_segment *seg = first_seg(lv);
*available_slvs = 0;
*removed_slvs = 0;
if (!lv_is_raid(lv))
return 1;
for (s = 0; s < seg->area_count; s++) {
struct logical_volume *slv;
if (seg_type(seg, s) != AREA_LV || !(slv = seg_lv(seg, s))) {
log_error(INTERNAL_ERROR "Missing image sub lv in area %" PRIu32 " of LV %s.",
s, display_lvname(lv));
return_0;
}
(slv->status & LV_REMOVE_AFTER_RESHAPE) ? (*removed_slvs)++ : (*available_slvs)++;
}
return 1;
}
static int _lv_is_raid_with_tracking(const struct logical_volume *lv,
struct logical_volume **tracking)
{
@@ -1821,7 +1832,7 @@ static int _raid_reshape_remove_images(struct logical_volume *lv,
const unsigned new_stripes, const unsigned new_stripe_size,
struct dm_list *allocate_pvs, struct dm_list *removal_lvs)
{
uint32_t active_lvs, current_le_count, reduced_le_count, removed_lvs, s;
uint32_t available_slvs, current_le_count, reduced_le_count, removed_slvs, s;
uint64_t extend_le_count;
unsigned devs_health, devs_in_sync;
struct lv_segment *seg = first_seg(lv);
@@ -1916,26 +1927,15 @@ static int _raid_reshape_remove_images(struct logical_volume *lv,
* -> remove the freed up images and reduce LV size
*
*/
for (active_lvs = removed_lvs = s = 0; s < seg->area_count; s++) {
struct logical_volume *slv;
if (!seg_lv(seg, s) || !(slv = seg_lv(seg, s))) {
log_error("Missing image sub lv off LV %s.", display_lvname(lv));
return 0;
}
if (slv->status & LV_REMOVE_AFTER_RESHAPE)
removed_lvs++;
else
active_lvs++;
}
if (!_get_available_removed_sublvs(lv, &available_slvs, &removed_slvs))
return_0;
if (devs_in_sync != new_image_count) {
log_error("No correct kernel/lvm active LV count on %s.", display_lvname(lv));
return 0;
}
if (active_lvs + removed_lvs != old_image_count) {
if (available_slvs + removed_slvs != old_image_count) {
log_error ("No correct kernel/lvm total LV count on %s.", display_lvname(lv));
return 0;
}
@@ -2305,7 +2305,8 @@ static int _raid_reshape(struct logical_volume *lv,
} if (!_vg_write_commit_backup(lv->vg))
return 0;
return 1; // force_repair ? _lv_cond_repair(lv) : 1;
return 1;
/* FIXME force_repair ? _lv_cond_repair(lv) : 1; */
}
/*
@@ -2325,6 +2326,8 @@ static int _raid_reshape(struct logical_volume *lv,
* 1 -> allowed reshape request
* 2 -> prohibited reshape request
* 3 -> allowed region size change request
*
* FIXME Use alternative mechanism - separate parameter or enum.
*/
static int _reshape_requested(const struct logical_volume *lv, const struct segment_type *segtype,
const int data_copies, const uint32_t region_size,
@@ -2364,33 +2367,6 @@ static int _reshape_requested(const struct logical_volume *lv, const struct segm
display_lvname(lv));
return 2;
}
#if 0
if ((_lv_is_duplicating(lv) || lv_is_duplicated(lv)) &&
((seg_is_raid1(seg) ? 0 : (stripes != _data_rimages_count(seg, seg->area_count))) ||
data_copies != seg->data_copies))
goto err;
if ((!seg_is_striped(seg) && segtype_is_raid10_far(segtype)) ||
(seg_is_raid10_far(seg) && !segtype_is_striped(segtype))) {
if (data_copies == seg->data_copies &&
region_size == seg->region_size) {
log_error("Can't convert %sraid10_far.",
seg_is_raid10_far(seg) ? "" : "to ");
goto err;
}
}
if (seg_is_raid10_far(seg)) {
if (stripes != _data_rimages_count(seg, seg->area_count)) {
log_error("Can't change stripes in raid10_far.");
goto err;
}
if (stripe_size != seg->stripe_size) {
log_error("Can't change stripe size in raid10_far.");
goto err;
}
}
#endif
if (seg_is_any_raid10(seg) && seg->area_count > 2 &&
stripes && stripes < seg->area_count - seg->segtype->parity_devs) {
@@ -2401,46 +2377,8 @@ static int _reshape_requested(const struct logical_volume *lv, const struct segm
if (data_copies != seg->data_copies) {
if (seg_is_raid10_near(seg))
return 0;
#if 0
if (seg_is_raid10_far(seg))
return segtype_is_raid10_far(segtype) ? 1 : 0;
if (seg_is_raid10_offset(seg)) {
log_error("Can't change number of data copies on %s LV %s.",
lvseg_name(seg), display_lvname(lv));
goto err;
}
#endif
}
#if 0
/* raid10_{near,offset} case */
if ((seg_is_raid10_near(seg) && segtype_is_raid10_offset(segtype)) ||
(seg_is_raid10_offset(seg) && segtype_is_raid10_near(segtype))) {
if (stripes >= seg->area_count)
return 1;
goto err;
}
/*
* raid10_far is not reshapable in MD at all;
* lvm/dm adds reshape capability to add/remove data_copies
*/
if (seg_is_raid10_far(seg) && segtype_is_raid10_far(segtype)) {
if (stripes && stripes == seg->area_count &&
data_copies > 1 &&
data_copies <= seg->area_count &&
data_copies != seg->data_copies)
return 1;
goto err;
} else if (seg_is_any_raid10(seg) && segtype_is_any_raid10(segtype) &&
data_copies > 1 && data_copies != seg->data_copies)
goto err;
#endif
/* Change layout (e.g. raid5_ls -> raid5_ra) keeping # of stripes */
if (seg->segtype != segtype) {
if (stripes && stripes != _data_rimages_count(seg, seg->area_count))
@@ -2459,12 +2397,6 @@ static int _reshape_requested(const struct logical_volume *lv, const struct segm
return (stripes || stripe_size) ? 1 : 0;
err:
#if 0
if (lv_is_duplicated(lv))
log_error("Conversion of duplicating sub LV %s rejected.", display_lvname(lv));
else
log_error("Use \"lvconvert --duplicate --type %s ... %s.", segtype->name, display_lvname(lv));
#endif
return 2;
}
@@ -4827,8 +4759,80 @@ static int _shift_parity_dev(struct lv_segment *seg)
return 1;
}
/*
* raid4 <-> raid5_n helper
*
* On conversions between raid4 and raid5_n, the parity SubLVs need
* to be switched between beginning and end of the segment areas.
*
* The metadata devices reflect the previous positions within the RaidLV,
* thus need to be cleared in order to allow the kernel to start the new
* mapping and recreate metadata with the proper new position stored.
*/
static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
{
struct lv_segment *seg = first_seg(lv);
struct dm_list removal_lvs;
uint32_t region_size = seg->region_size;
dm_list_init(&removal_lvs);
if (!(seg_is_raid4(seg) && segtype_is_raid5_n(new_segtype)) &&
!(seg_is_raid5_n(seg) && segtype_is_raid4(new_segtype))) {
log_error("LV %s has to be of type raid4 or raid5_n to allow for this conversion.",
display_lvname(lv));
return 0;
}
/* Necessary when convering to raid0/striped w/o redundancy. */
if (!_raid_in_sync(lv)) {
log_error("Unable to convert %s while it is not in-sync.",
display_lvname(lv));
return 0;
}
log_debug_metadata("Converting LV %s from %s to %s.", display_lvname(lv),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID4 : SEG_TYPE_NAME_RAID5_N),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID5_N : SEG_TYPE_NAME_RAID4));
/* Archive metadata */
if (!archive(lv->vg))
return_0;
if (!_rename_area_lvs(lv, "_")) {
log_error("Failed to rename %s LV %s MetaLVs.", lvseg_name(seg), display_lvname(lv));
return 0;
}
if (!_clear_meta_lvs(lv))
return_0;
/* Shift parity SubLV pair "PDD..." <-> "DD...P" on raid4 <-> raid5_n conversion */
if( !_shift_parity_dev(seg))
return 0;
/* Don't resync */
init_mirror_in_sync(1);
seg->region_size = new_region_size ?: region_size;
seg->segtype = new_segtype;
if (!_lv_update_reload_fns_reset_eliminate_lvs(lv, 0, &removal_lvs, NULL))
return_0;
init_mirror_in_sync(0);
if (!_rename_area_lvs(lv, NULL)) {
log_error("Failed to rename %s LV %s MetaLVs.", lvseg_name(seg), display_lvname(lv));
return 0;
}
if (!lv_update_and_reload(lv))
return_0;
return 1;
}
/* raid45610 -> raid0* / stripe, raid5_n -> raid4 */
static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS);
static int _takeover_downconvert_wrapper(TAKEOVER_FN_ARGS)
{
int rename_sublvs = 0;
@@ -4974,79 +4978,6 @@ static int _takeover_downconvert_wrapper(TAKEOVER_FN_ARGS)
return 1;
}
/*
* raid4 <-> raid5_n helper
*
* On conversions between raid4 and raid5_n, the parity SubLVs need
* to be switched between beginning and end of the segment areas.
*
* The metadata devices reflect the previous positions within the RaidLV,
* thus need to be cleared in order to allow the kernel to start the new
* mapping and recreate metadata with the proper new position stored.
*/
static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
{
struct lv_segment *seg = first_seg(lv);
struct dm_list removal_lvs;
uint32_t region_size = seg->region_size;
dm_list_init(&removal_lvs);
if (!(seg_is_raid4(seg) && segtype_is_raid5_n(new_segtype)) &&
!(seg_is_raid5_n(seg) && segtype_is_raid4(new_segtype))) {
log_error("LV %s has to be of type raid4 or raid5_n to allow for this conversion.",
display_lvname(lv));
return 0;
}
/* Necessary when convering to raid0/striped w/o redundancy. */
if (!_raid_in_sync(lv)) {
log_error("Unable to convert %s while it is not in-sync.",
display_lvname(lv));
return 0;
}
log_debug_metadata("Converting LV %s from %s to %s.", display_lvname(lv),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID4 : SEG_TYPE_NAME_RAID5_N),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID5_N : SEG_TYPE_NAME_RAID4));
/* Archive metadata */
if (!archive(lv->vg))
return_0;
if (!_rename_area_lvs(lv, "_")) {
log_error("Failed to rename %s LV %s MetaLVs.", lvseg_name(seg), display_lvname(lv));
return 0;
}
if (!_clear_meta_lvs(lv))
return_0;
/* Shift parity SubLV pair "PDD..." <-> "DD...P" on raid4 <-> raid5_n conversion */
if( !_shift_parity_dev(seg))
return 0;
/* Don't resync */
init_mirror_in_sync(1);
seg->region_size = new_region_size ?: region_size;
seg->segtype = new_segtype;
if (!_lv_update_reload_fns_reset_eliminate_lvs(lv, 0, &removal_lvs, NULL))
return_0;
init_mirror_in_sync(0);
if (!_rename_area_lvs(lv, NULL)) {
log_error("Failed to rename %s LV %s MetaLVs.", lvseg_name(seg), display_lvname(lv));
return 0;
}
if (!lv_update_and_reload(lv))
return_0;
return 1;
}
static int _striped_to_raid0_wrapper(struct logical_volume *lv,
const struct segment_type *new_segtype,
uint32_t new_stripes,
@@ -5087,6 +5018,19 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
return 0;
}
if (segtype_is_any_raid6(new_segtype)) {
uint32_t min_areas = 3;
if (seg_is_raid4(seg) || seg_is_any_raid5(seg))
min_areas = 4;
if (seg->area_count < min_areas) {
log_error("Minimum of %" PRIu32 " stripes needed for conversion from %s to %s.",
min_areas, lvseg_name(seg), new_segtype->name);
return 0;
}
}
if (seg_is_any_raid5(seg) && segtype_is_any_raid6(new_segtype) && seg->area_count < 4) {
log_error("Minimum of 3 stripes needed for conversion from %s to %s.",
lvseg_name(seg), new_segtype->name);
@@ -5952,6 +5896,7 @@ int lv_raid_convert(struct logical_volume *lv,
uint32_t new_image_count = seg->area_count;
uint32_t region_size = new_region_size;
uint32_t data_copies = seg->data_copies;
uint32_t available_slvs, removed_slvs;
takeover_fn_t takeover_fn;
new_segtype = new_segtype ? : seg->segtype;
@@ -6003,6 +5948,17 @@ int lv_raid_convert(struct logical_volume *lv,
return 0;
}
/* Prohibit any takeover in case sub LVs to be removed still exist after a previous reshape */
if (!_get_available_removed_sublvs(lv, &available_slvs, &removed_slvs))
return 0;
if (removed_slvs) {
log_error("Can't convert %s LV %s to %s containing sub LVs to remove after a reshape.",
lvseg_name(seg), display_lvname(lv), new_segtype->name);
log_error("Run \"lvconvert --stripes %" PRIu32 " %s\" first.",
seg->area_count - removed_slvs - 1, display_lvname(lv));
return 0;
}
/*
* Check acceptible options mirrors, region_size,
* stripes and/or stripe_size have been provided.

View File

@@ -28,50 +28,50 @@ struct dm_config_node;
struct dev_manager;
/* Feature flags */
#define SEG_CAN_SPLIT 0x0000000000000001ULL
#define SEG_AREAS_STRIPED 0x0000000000000002ULL
#define SEG_AREAS_MIRRORED 0x0000000000000004ULL
#define SEG_SNAPSHOT 0x0000000000000008ULL
#define SEG_FORMAT1_SUPPORT 0x0000000000000010ULL
#define SEG_VIRTUAL 0x0000000000000020ULL
#define SEG_CANNOT_BE_ZEROED 0x0000000000000040ULL
#define SEG_MONITORED 0x0000000000000080ULL
#define SEG_REPLICATOR 0x0000000000000100ULL
#define SEG_REPLICATOR_DEV 0x0000000000000200ULL
#define SEG_RAID 0x0000000000000400ULL
#define SEG_THIN_POOL 0x0000000000000800ULL
#define SEG_THIN_VOLUME 0x0000000000001000ULL
#define SEG_CACHE 0x0000000000002000ULL
#define SEG_CACHE_POOL 0x0000000000004000ULL
#define SEG_MIRROR 0x0000000000008000ULL
#define SEG_ONLY_EXCLUSIVE 0x0000000000010000ULL /* In cluster only exlusive activation */
#define SEG_CAN_ERROR_WHEN_FULL 0x0000000000020000ULL
#define SEG_CAN_SPLIT (1ULL << 0)
#define SEG_AREAS_STRIPED (1ULL << 1)
#define SEG_AREAS_MIRRORED (1ULL << 2)
#define SEG_SNAPSHOT (1ULL << 3)
#define SEG_FORMAT1_SUPPORT (1ULL << 4)
#define SEG_VIRTUAL (1ULL << 5)
#define SEG_CANNOT_BE_ZEROED (1ULL << 6)
#define SEG_MONITORED (1ULL << 7)
#define SEG_REPLICATOR (1ULL << 8)
#define SEG_REPLICATOR_DEV (1ULL << 9)
#define SEG_RAID (1ULL << 10)
#define SEG_THIN_POOL (1ULL << 11)
#define SEG_THIN_VOLUME (1ULL << 12)
#define SEG_CACHE (1ULL << 13)
#define SEG_CACHE_POOL (1ULL << 14)
#define SEG_MIRROR (1ULL << 15)
#define SEG_ONLY_EXCLUSIVE (1ULL << 16) /* In cluster only exlusive activation */
#define SEG_CAN_ERROR_WHEN_FULL (1ULL << 17)
#define SEG_RAID0 0x0000000000040000ULL
#define SEG_RAID0_META 0x0000000000080000ULL
#define SEG_RAID1 0x0000000000100000ULL
#define SEG_RAID10_NEAR 0x0000000000200000ULL
#define SEG_RAID0 (1ULL << 18)
#define SEG_RAID0_META (1ULL << 19)
#define SEG_RAID1 (1ULL << 20)
#define SEG_RAID10_NEAR (1ULL << 21)
#define SEG_RAID10 SEG_RAID10_NEAR
#define SEG_RAID4 0x0000000000400000ULL
#define SEG_RAID5_N 0x0000000000800000ULL
#define SEG_RAID5_LA 0x0000000001000000ULL
#define SEG_RAID5_LS 0x0000000002000000ULL
#define SEG_RAID5_RA 0x0000000004000000ULL
#define SEG_RAID5_RS 0x0000000008000000ULL
#define SEG_RAID4 (1ULL << 22)
#define SEG_RAID5_N (1ULL << 23)
#define SEG_RAID5_LA (1ULL << 24)
#define SEG_RAID5_LS (1ULL << 25)
#define SEG_RAID5_RA (1ULL << 26)
#define SEG_RAID5_RS (1ULL << 27)
#define SEG_RAID5 SEG_RAID5_LS
#define SEG_RAID6_NC 0x0000000010000000ULL
#define SEG_RAID6_NR 0x0000000020000000ULL
#define SEG_RAID6_ZR 0x0000000040000000ULL
#define SEG_RAID6_LA_6 0x0000000080000000ULL
#define SEG_RAID6_LS_6 0x0000000100000000ULL
#define SEG_RAID6_RA_6 0x0000000200000000ULL
#define SEG_RAID6_RS_6 0x0000000400000000ULL
#define SEG_RAID6_N_6 0x0000000800000000ULL
#define SEG_RAID6_NC (1ULL << 28)
#define SEG_RAID6_NR (1ULL << 29)
#define SEG_RAID6_ZR (1ULL << 30)
#define SEG_RAID6_LA_6 (1ULL << 31)
#define SEG_RAID6_LS_6 (1ULL << 32)
#define SEG_RAID6_RA_6 (1ULL << 33)
#define SEG_RAID6_RS_6 (1ULL << 34)
#define SEG_RAID6_N_6 (1ULL << 35)
#define SEG_RAID6 SEG_RAID6_ZR
#define SEG_STRIPED_TARGET 0x0000008000000000ULL
#define SEG_STRIPED_TARGET (1ULL << 39)
#define SEG_UNKNOWN 0x8000000000000000ULL
#define SEG_UNKNOWN (1ULL << 63)
#define SEG_TYPE_NAME_LINEAR "linear"
#define SEG_TYPE_NAME_STRIPED "striped"
@@ -141,7 +141,7 @@ struct dev_manager;
#define segtype_is_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10_near(segtype) segtype_is_raid10(segtype)
/* FIXME: once raid10_offset supported */
#define segtype_is_raid10_offset(segtype) 0 // ((segtype)->flags & SEG_RAID10_OFFSET ? 1 : 0)
#define segtype_is_raid10_offset(segtype) 0 /* FIXME ((segtype)->flags & SEG_RAID10_OFFSET ? 1 : 0 */
#define segtype_is_raid_with_meta(segtype) (segtype_is_raid(segtype) && !segtype_is_raid0(segtype))
#define segtype_is_striped_raid(segtype) (segtype_is_raid(segtype) && !segtype_is_raid1(segtype))
#define segtype_is_reshapable_raid(segtype) ((segtype_is_striped_raid(segtype) && !segtype_is_any_raid0(segtype)) || segtype_is_raid10_near(segtype) || segtype_is_raid10_offset(segtype))
@@ -286,7 +286,8 @@ struct segment_type *init_unknown_segtype(struct cmd_context *cmd,
#define RAID_FEATURE_RAID0 (1U << 1) /* version 1.7 */
#define RAID_FEATURE_RESHAPING (1U << 2) /* version 1.8 */
#define RAID_FEATURE_RAID4 (1U << 3) /* ! version 1.8 or 1.9.0 */
#define RAID_FEATURE_RESHAPE (1U << 4) /* version 1.10.1 */
#define RAID_FEATURE_SHRINK (1U << 4) /* version 1.9.0 */
#define RAID_FEATURE_RESHAPE (1U << 5) /* version 1.10.1 */
#ifdef RAID_INTERNAL
int init_raid_segtypes(struct cmd_context *cmd, struct segtype_library *seglib);

View File

@@ -100,9 +100,10 @@ int attach_thin_external_origin(struct lv_segment *seg,
external_lv->status &= ~LVM_WRITE;
}
// TODO: should we mark even origin read-only ?
//if (lv_is_cache(external_lv)) /* read-only corigin of cache LV */
// seg_lv(first_seg(external_lv), 0)->status &= ~LVM_WRITE;
/* FIXME Mark origin read-only?
if (lv_is_cache(external_lv)) // read-only corigin of cache LV
seg_lv(first_seg(external_lv), 0)->status &= ~LVM_WRITE;
*/
}
return 1;

View File

@@ -473,6 +473,7 @@ static int _raid_target_present(struct cmd_context *cmd,
const struct raid_feature _features[] = {
{ 1, 3, 0, RAID_FEATURE_RAID10, SEG_TYPE_NAME_RAID10 },
{ 1, 7, 0, RAID_FEATURE_RAID0, SEG_TYPE_NAME_RAID0 },
{ 1, 9, 0, RAID_FEATURE_SHRINK, "shrinking" },
{ 1, 10, 1, RAID_FEATURE_RESHAPE, "reshaping" },
};

View File

@@ -1008,7 +1008,7 @@ static int _translate_time_items(struct dm_report *rh, struct time_info *info,
dm_pool_free(info->mem, info->ti_list);
info->ti_list = NULL;
if (dm_snprintf(buf, sizeof(buf), "@%ld:@%ld", t1, t2) == -1) {
if (dm_snprintf(buf, sizeof(buf), "@" FMTd64 ":@" FMTd64, (int64_t)t1, (int64_t)t2) == -1) {
log_error("_translate_time_items: dm_snprintf failed");
return 0;
}
@@ -1063,10 +1063,10 @@ static void *_lv_time_handler_get_dynamic_value(struct dm_report *rh,
struct dm_pool *mem,
const char *data_in)
{
time_t t1, t2;
int64_t t1, t2;
time_t *result;
if (sscanf(data_in, "@%ld:@%ld", &t1, &t2) != 2) {
if (sscanf(data_in, "@" FMTd64 ":@" FMTd64, &t1, &t2) != 2) {
log_error("Failed to get value for parsed time specification.");
return NULL;
}
@@ -1076,8 +1076,8 @@ static void *_lv_time_handler_get_dynamic_value(struct dm_report *rh,
return NULL;
}
result[0] = t1;
result[1] = t2;
result[0] = (time_t) t1; /* Validate range for 32b arch ? */
result[1] = (time_t) t2;
return result;
}

View File

@@ -1734,9 +1734,11 @@ static int _dm_tree_deactivate_children(struct dm_tree_node *dnode,
!child->callback(child, DM_NODE_CALLBACK_DEACTIVATED,
child->callback_data))
stack;
// FIXME: We need to let lvremove pass,
// so for now deactivation ignores check result
//r = 0; // FIXME: _node_clear_table() without callback ?
/* FIXME Deactivation must currently ignore failure
* here so that lvremove can continue: we need an
* alternative way to handle this state without
* setting r=0. Or better, skip calling thin_check
* entirely if the device is about to be removed. */
if (dm_tree_node_num_children(child, 0) &&
!_dm_tree_deactivate_children(child, uuid_prefix, uuid_prefix_len, level + 1))
@@ -2375,6 +2377,51 @@ static int _get_params_count(uint64_t *bits)
return r;
}
/*
* Get target version (major, minor and patchlevel) for @target_name
*
* FIXME: this function is derived from liblvm.
* Integrate with move of liblvm functions
* to libdm in future library layer purge
* (e.g. expose as API dm_target_version()?)
*/
static int _target_version(const char *target_name, uint32_t *maj,
uint32_t *min, uint32_t *patchlevel)
{
int r = 0;
struct dm_task *dmt;
struct dm_versions *target, *last_target = NULL;
log_very_verbose("Getting target version for %s", target_name);
if (!(dmt = dm_task_create(DM_DEVICE_LIST_VERSIONS)))
return_0;
if (!dm_task_run(dmt)) {
log_debug_activation("Failed to get %s target versions", target_name);
/* Assume this was because LIST_VERSIONS isn't supported */
maj = min = patchlevel = 0;
r = 1;
} else
for (target = dm_task_get_versions(dmt);
target != last_target;
last_target = target, target = (struct dm_versions *)((char *) target + target->next))
if (!strcmp(target_name, target->name)) {
*maj = target->version[0];
*min = target->version[1];
*patchlevel = target->version[2];
log_very_verbose("Found %s target "
"v%" PRIu32 ".%" PRIu32 ".%" PRIu32 ".",
target_name, *maj, *min, *patchlevel);
r = 1;
break;
}
dm_task_destroy(dmt);
return r;
}
static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
uint32_t minor, struct load_segment *seg,
uint64_t *seg_start, char *params,
@@ -2382,6 +2429,7 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
{
uint32_t i;
uint32_t area_count = seg->area_count / 2;
uint32_t maj, min, patchlevel;
int param_count = 1; /* mandatory 'chunk size'/'stripe size' arg */
int pos = 0;
unsigned type;
@@ -2411,67 +2459,95 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
type = seg->type;
if (type == SEG_RAID0_META)
type = SEG_RAID0;
#if 0
/* Kernel only expects "raid10", not "raid10_{far,offset}" */
else if (type == SEG_RAID10_FAR ||
type == SEG_RAID10_OFFSET) {
param_count += 2;
type = SEG_RAID10_NEAR;
}
#endif
EMIT_PARAMS(pos, "%s %d %u",
// type == SEG_RAID10_NEAR ? "raid10" : _dm_segtypes[type].target,
type == SEG_RAID10 ? "raid10" : _dm_segtypes[type].target,
param_count, seg->stripe_size);
#if 0
if (seg->type == SEG_RAID10_FAR)
EMIT_PARAMS(pos, " raid10_format far");
else if (seg->type == SEG_RAID10_OFFSET)
EMIT_PARAMS(pos, " raid10_format offset");
#endif
if (seg->data_copies > 1 && type == SEG_RAID10)
EMIT_PARAMS(pos, " raid10_copies %u", seg->data_copies);
if (seg->flags & DM_NOSYNC)
EMIT_PARAMS(pos, " nosync");
else if (seg->flags & DM_FORCESYNC)
EMIT_PARAMS(pos, " sync");
if (seg->region_size)
EMIT_PARAMS(pos, " region_size %u", seg->region_size);
/* If seg->data_offset == 1, kernel needs a zero offset to adjust to it */
if (seg->data_offset)
EMIT_PARAMS(pos, " data_offset %d", seg->data_offset == 1 ? 0 : seg->data_offset);
if (seg->delta_disks)
EMIT_PARAMS(pos, " delta_disks %d", seg->delta_disks);
for (i = 0; i < area_count; i++)
if (seg->rebuilds[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " rebuild %u", i);
for (i = 0; i < area_count; i++)
if (seg->writemostly[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " write_mostly %u", i);
if (seg->writebehind)
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
if (!_target_version("raid", &maj, &min, &patchlevel))
return_0;
/*
* Has to be before "min_recovery_rate" or the kernel's
* check will fail when both are set and min > previous max
* Target versions prior to 1.9.0 and >= 1.11.0 emit
* parameters in the order given in the kernel target documentation
*/
if (seg->max_recovery_rate)
EMIT_PARAMS(pos, " max_recovery_rate %u",
seg->max_recovery_rate);
if (maj > 1 || (maj == 1 && (min < 9 || min >= 11))) {
if (seg->flags & DM_NOSYNC)
EMIT_PARAMS(pos, " nosync");
else if (seg->flags & DM_FORCESYNC)
EMIT_PARAMS(pos, " sync");
if (seg->min_recovery_rate)
EMIT_PARAMS(pos, " min_recovery_rate %u",
seg->min_recovery_rate);
for (i = 0; i < area_count; i++)
if (seg->rebuilds[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " rebuild %u", i);
if (seg->min_recovery_rate)
EMIT_PARAMS(pos, " min_recovery_rate %u",
seg->min_recovery_rate);
if (seg->max_recovery_rate)
EMIT_PARAMS(pos, " max_recovery_rate %u",
seg->max_recovery_rate);
for (i = 0; i < area_count; i++)
if (seg->writemostly[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " write_mostly %u", i);
if (seg->writebehind)
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
if (seg->region_size)
EMIT_PARAMS(pos, " region_size %u", seg->region_size);
if (seg->data_copies > 1 && type == SEG_RAID10)
EMIT_PARAMS(pos, " raid10_copies %u", seg->data_copies);
if (seg->delta_disks)
EMIT_PARAMS(pos, " delta_disks %d", seg->delta_disks);
/* If seg->data_offset == 1, kernel needs a zero offset to adjust to it */
if (seg->data_offset)
EMIT_PARAMS(pos, " data_offset %d", seg->data_offset == 1 ? 0 : seg->data_offset);
/* Target version >= 1.9.0 && < 1.11.0 had a table line parameter ordering flaw */
} else {
if (seg->data_copies > 1 && type == SEG_RAID10)
EMIT_PARAMS(pos, " raid10_copies %u", seg->data_copies);
if (seg->flags & DM_NOSYNC)
EMIT_PARAMS(pos, " nosync");
else if (seg->flags & DM_FORCESYNC)
EMIT_PARAMS(pos, " sync");
if (seg->region_size)
EMIT_PARAMS(pos, " region_size %u", seg->region_size);
/* If seg->data_offset == 1, kernel needs a zero offset to adjust to it */
if (seg->data_offset)
EMIT_PARAMS(pos, " data_offset %d", seg->data_offset == 1 ? 0 : seg->data_offset);
if (seg->delta_disks)
EMIT_PARAMS(pos, " delta_disks %d", seg->delta_disks);
for (i = 0; i < area_count; i++)
if (seg->rebuilds[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " rebuild %u", i);
for (i = 0; i < area_count; i++)
if (seg->writemostly[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " write_mostly %u", i);
if (seg->writebehind)
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
if (seg->max_recovery_rate)
EMIT_PARAMS(pos, " max_recovery_rate %u",
seg->max_recovery_rate);
if (seg->min_recovery_rate)
EMIT_PARAMS(pos, " min_recovery_rate %u",
seg->min_recovery_rate);
}
/* Print number of metadata/data device pairs */
EMIT_PARAMS(pos, " %u", area_count);
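The maj/min test above selects between two parameter orderings because dm-raid 1.9.0 through 1.10.x emitted the optional table parameters in a different order than the kernel target documentation. A small standalone sketch of that gate as a predicate (the helper name is hypothetical, not libdevmapper API):

#include <stdbool.h>
#include <stdio.h>

/* True when dm-raid emits optional parameters in the documented order;
 * versions 1.9.0 through 1.10.x used a legacy ordering instead. */
static bool raid_documented_param_order(unsigned maj, unsigned min)
{
	return maj > 1 || (maj == 1 && (min < 9 || min >= 11));
}

int main(void)
{
	static const struct { unsigned maj, min; } v[] = {
		{ 1, 8 }, { 1, 9 }, { 1, 10 }, { 1, 11 }, { 2, 0 },
	};

	for (unsigned i = 0; i < sizeof(v) / sizeof(v[0]); i++)
		printf("raid v%u.%u.*: %s order\n", v[i].maj, v[i].min,
		       raid_documented_param_order(v[i].maj, v[i].min)
		       ? "documented" : "legacy");
	return 0;
}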
@@ -2742,7 +2818,7 @@ static int _emit_segment(struct dm_task *dmt, uint32_t major, uint32_t minor,
struct load_segment *seg, uint64_t *seg_start)
{
char *params;
size_t paramsize = 4096;
size_t paramsize = 4096; /* FIXME: too small for long RAID lines when > 64 devices supported */
int ret;
do {
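The FIXME above flags the fixed 4096-byte params buffer, and the do-loop (truncated here) retries the formatting pass when the line overflows it. A generic grow-and-retry sketch of that technique using vsnprintf's return value (plain C for illustration, not the libdm implementation):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Format into a heap buffer, growing it when vsnprintf reports truncation. */
static char *format_params(const char *fmt, ...)
{
	size_t size = 4096;	/* initial guess, matching the hunk above */
	char *buf = NULL;

	for (;;) {
		char *tmp;
		va_list ap;
		int n;

		if (!(tmp = realloc(buf, size))) {
			free(buf);
			return NULL;
		}
		buf = tmp;

		va_start(ap, fmt);
		n = vsnprintf(buf, size, fmt, ap);
		va_end(ap);

		if (n < 0) {	/* encoding error */
			free(buf);
			return NULL;
		}
		if ((size_t) n < size)
			return buf;	/* the whole line fit */

		size = (size_t) n + 1;	/* truncated: grow to the exact need */
	}
}

int main(void)
{
	char *line = format_params("raid10 %d %u region_size %u", 3, 64u, 1024u);

	if (!line)
		return 1;
	puts(line);
	free(line);
	return 0;
}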

View File

@@ -680,8 +680,8 @@ static void _check_group_regions_present(struct dm_stats *dms,
for (; i > 0; i = dm_bit_get_next(regions, i))
if (!_stats_region_present(&dms->regions[i])) {
log_warn("Group descriptor " FMTi64 " contains "
"non-existent region_id " FMTi64 ".",
log_warn("Group descriptor " FMTd64 " contains "
"non-existent region_id " FMTd64 ".",
group_id, i);
dm_bit_clear(regions, i);
}
@@ -4563,7 +4563,7 @@ static int _stats_unmap_regions(struct dm_stats *dms, uint64_t group_id,
log_error("Could not finalize region extent table.");
goto out;
}
log_very_verbose("Kept " FMTi64 " of " FMTi64 " old extents",
log_very_verbose("Kept " FMTd64 " of " FMTd64 " old extents",
nr_kept, nr_old);
log_very_verbose("Found " FMTu64 " new extents",
*count - nr_kept);

View File

@@ -566,7 +566,21 @@ static lv_create_params_t _lvm_lv_params_create_thin_pool(vg_t vg,
if (lvcp) {
lvcp->vg = vg;
lvcp->lvp.discards = (thin_discards_t) discard;
switch (discard) {
case LVM_THIN_DISCARDS_IGNORE:
lvcp->lvp.discards = THIN_DISCARDS_IGNORE;
break;
case LVM_THIN_DISCARDS_NO_PASSDOWN:
lvcp->lvp.discards = THIN_DISCARDS_NO_PASSDOWN;
break;
case LVM_THIN_DISCARDS_PASSDOWN:
lvcp->lvp.discards = THIN_DISCARDS_PASSDOWN;
break;
default:
log_error("Invalid discard argument %d for thin pool creation.", discard);
return NULL;
}
lvcp->lvp.zero_new_blocks = THIN_ZERO_YES;
if (chunk_size)
lvcp->lvp.chunk_size = chunk_size;
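The new switch replaces a bare enum-to-enum cast: each public LVM_THIN_DISCARDS_* value is mapped explicitly, and anything else is rejected instead of being propagated as garbage. A standalone sketch of the pattern, with hypothetical enum names standing in for the liblvm and internal types:

#include <stdio.h>

enum api_discards { API_IGNORE, API_NO_PASSDOWN, API_PASSDOWN };
enum internal_discards { INT_IGNORE, INT_NO_PASSDOWN, INT_PASSDOWN };

/* Map a caller-supplied value, rejecting anything outside the enum. */
static int map_discards(int in, enum internal_discards *out)
{
	switch (in) {
	case API_IGNORE:      *out = INT_IGNORE; break;
	case API_NO_PASSDOWN: *out = INT_NO_PASSDOWN; break;
	case API_PASSDOWN:    *out = INT_PASSDOWN; break;
	default:
		fprintf(stderr, "Invalid discard argument %d.\n", in);
		return 0;	/* reject instead of propagating junk */
	}
	return 1;
}

int main(void)
{
	enum internal_discards d;

	printf("valid:   %d\n", map_discards(API_PASSDOWN, &d));
	printf("invalid: %d\n", map_discards(42, &d));
	return 0;
}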

View File

@@ -13,7 +13,7 @@
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
SHELL = /bin/sh
SHELL = @SHELL@
@SET_MAKE@
@@ -416,7 +416,7 @@ endif
.LIBPATTERNS = lib%.so lib%.a
%.o: %.c
$(CC) -c $(INCLUDES) $(DEFS) $(WFLAGS) $(WCFLAGS) $(CFLAGS) $(CFLAGS_$@) $< -o $@
$(CC) -c $(INCLUDES) $(DEFS) $(DEFS_$@) $(WFLAGS) $(WCFLAGS) $(CFLAGS) $(CFLAGS_$@) $< -o $@
%.o: %.cpp
$(CXX) -c $(INCLUDES) $(DEFS) $(WFLAGS) $(CXXFLAGS) $(CXXFLAGS_$@) $< -o $@

View File

@@ -117,13 +117,13 @@ MAN5DIR=$(mandir)/man5
MAN7DIR=$(mandir)/man7
MAN8DIR=$(mandir)/man8
MANGENERATOR=./man-generator
MANGENERATOR=$(top_builddir)/tools/man-generator
TESTMAN=test.gen
include $(top_builddir)/make.tmpl
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8:%.8_gen=%.8) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM) $(MANGENERATOR) $(TESTMAN)
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8:%.8=%.8_gen) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM) $(TESTMAN)
DISTCLEAN_TARGETS+=$(FSADMMAN) $(BLKDEACTIVATEMAN) $(DMEVENTDMAN) \
$(LVMETADMAN) $(LVMPOLLDMAN) $(LVMLOCKDMAN) $(CLVMDMAN) $(CMIRRORDMAN) \
$(LVMCACHEMAN) $(LVMTHINMAN) $(LVMDBUSDMAN) $(LVMRAIDMAN) \
@@ -141,21 +141,20 @@ all_man: man
$(MAN5) $(MAN7) $(MAN8) $(MAN8DM) $(MAN8CLUSTER) $(MAN8SYSTEMD_GENERATORS): Makefile
$(MANGENERATOR): Makefile
$(CC) -DMAN_PAGE_GENERATOR -I$(top_builddir)/tools $(CFLAGS) $(top_srcdir)/tools/command.c -o $@
# Test whether or not the man page generator works
$(TESTMAN): $(MANGENERATOR)
$(TESTMAN): $(MANGENERATOR) Makefile
- $(MANGENERATOR) --primary lvmconfig > $@
SEE_ALSO=$(srcdir)/see_also.end
.PRECIOUS: %.8_gen
%.8_gen: $(srcdir)/%.8_des $(srcdir)/%.8_end $(MANGENERATOR) $(TESTMAN)
( \
if [ ! -s $(TESTMAN) ] ; then \
echo "Copying pre-generated $@" ; \
echo "Copying pre-generated template $@" ; \
else \
echo "Generating $@" ; \
echo "Generating template $@" ; \
fi \
)
( \
@@ -174,17 +173,44 @@ define SUBSTVARS
echo "Generating $@" ; $(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+/data/lvmtest/usr/sbin/clvmd+;s+#LVM_PATH#+/data/lvmtest/sbin/lvm+;s+#DEFAULT_RUN_DIR#+/var/run/lvm+;s+#DEFAULT_PID_DIR#+/var/run+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $< > $@
endef
# Escape any '-':
#
# - multiple (>= 2)
# - in ' -'
# - in (cache|thin)-*
# - in numerical ranges
# - a single one in '\\f.-'
define ESCAPEHYPHENS
sed -i -e "s+\([^\\]\)-\{7\}+\1\\\-\\\-\\\-\\\-\\\-\\\-\\\-+g" \
-e "s+\([^\\]\)-\{6\}+\1\\\-\\\-\\\-\\\-\\\-\\\-+g" \
-e "s+\([^\\]\)-\{5\}+\1\\\-\\\-\\\-\\\-\\\-+g" \
-e "s+\([^\\]\)-\{4\}+\1\\\-\\\-\\\-\\\-+g" \
-e "s+\([^\\]\)-\{3\}+\1\\\-\\\-\\\-+g" \
-e "s+\([^\\]\)-\{2\}+\1\\\-\\\-+g" \
-e "s+^-\{2\}+\\\-\\\-+g" \
-e "s+ -+ \\\-+g" \
-e "s+\(cache\)-\([[:alpha:]]\{1,\}\)+\1\\\-\2+g" \
-e "s+\(thin\)-\([[:alpha:]]\{1,\}\)+\1\\\-\2+g" \
-e "s+\([0-9]\)-\([0-9]\)+\1\\\-\2+g" \
-e "s+\(\\\f.\)-\([^-]\)+\1\\\-\2+g" \
-e "s+\([[:digit:]]\{4\}\)\\\-\([[:digit:]]\{2\}\)\\\-\([[:digit:]]\{2\}\)+\1-\2-\3+g" $@
endef
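For illustration only, the first rule above (escaping every '-' in a run of two or more, since troff renders an unescaped '-' as a hyphen rather than a minus sign) can be expressed in C. This is a hedged re-implementation sketch, not part of the build; it omits the macro's context-sensitive cases and the date un-escaping pass.

#include <stdio.h>
#include <string.h>

/* Prefix each '-' in a run of two or more with a backslash. */
static void escape_hyphen_runs(const char *in, char *out, size_t outsz)
{
	size_t o = 0;

	for (size_t i = 0; in[i] && o + 3 < outsz; i++) {
		/* Count the run of hyphens starting here. */
		size_t run = strspn(in + i, "-");

		if (run >= 2) {
			while (run-- && o + 3 < outsz) {
				out[o++] = '\\';
				out[o++] = '-';
				i++;
			}
			i--;	/* the loop increment rewinds the overshoot */
		} else
			out[o++] = in[i];
	}
	out[o] = '\0';
}

int main(void)
{
	char buf[128];

	escape_hyphen_runs("use --force or -f", buf, sizeof(buf));
	printf("%s\n", buf);	/* prints: use \-\-force or -f */
	return 0;
}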
%.5: $(srcdir)/%.5_main
$(SUBSTVARS)
$(ESCAPEHYPHENS)
%.7: $(srcdir)/%.7_main
$(SUBSTVARS)
$(ESCAPEHYPHENS)
%.8: $(srcdir)/%.8_main
$(SUBSTVARS)
$(ESCAPEHYPHENS)
%.8: %.8_gen
$(SUBSTVARS)
$(ESCAPEHYPHENS)
install_man5: $(MAN5)
$(INSTALL) -d $(MAN5DIR)
@@ -222,7 +248,7 @@ install_all_man: install install_systemd_generators
pregenerated_man: all
for i in $(srcdir)/*.8_des; do \
CMD=`basename $$i .8_des`; \
cat $${CMD}.8 > $(srcdir)/$$CMD.8_pregen ; \
cat $${CMD}.8_gen > $(srcdir)/$$CMD.8_pregen ; \
done
generate: pregenerated_man

View File

@@ -3,13 +3,13 @@
blkdeactivate \(em utility to deactivate block devices
.SH SYNOPSIS
.B blkdeactivate
.RB [ \-d \ \fIdm_options\fP ]
.RB [ \-e ]
.RB [ \-h ]
.RB [ \-l \ \fIlvm_options\fP ]
.RB [ \-m \ \fImpath_options\fP ]
.RB [ \-u ]
.RB [ \-v ]
.RB [ -d \ \fIdm_options\fP ]
.RB [ -e ]
.RB [ -h ]
.RB [ -l \ \fIlvm_options\fP ]
.RB [ -m \ \fImpath_options\fP ]
.RB [ -u ]
.RB [ -v ]
.RI [ device ]
.SH DESCRIPTION
The blkdeactivate utility deactivates block devices. For mounted
@@ -22,7 +22,7 @@ based devices are handled using the \fBdmsetup\fP(8) command.
MD devices are handled using the \fBmdadm\fP(8) command.
.SH OPTIONS
.TP
.BR \-d ", " \-\-dmoption \ \fIdm_options\fP
.BR -d ", " --dmoption \ \fIdm_options\fP
Comma-separated list of device-mapper specific options.
Accepted \fBdmsetup\fP(8) options are:
.RS
@@ -32,16 +32,16 @@ Retry removal several times in case of failure.
Force device removal.
.RE
.TP
.BR \-e ", " \-\-errors
.BR -e ", " --errors
Show errors reported from tools called by \fBblkdeactivate\fP. Without this
option, any error messages from these external tools are suppressed and the
\fBblkdeactivate\fP itself provides only a summary message to indicate
the device was skipped.
.TP
.BR \-h ", " \-\-help
.BR -h ", " --help
Display the help text.
.TP
.BR \-l ", " \-\-lvmoption \ \fIlvm_options\fP
.BR -l ", " --lvmoption \ \fIlvm_options\fP
Comma-separated list of LVM specific options:
.RS
.IP \fIretry\fP
@@ -52,7 +52,7 @@ Deactivating the Volume Group as a whole is quicker than deactivating
each Logical Volume separately.
.RE
.TP
.BR \-m ", " \-\-mpathoption \ \fImpath_options\fP
.BR -m ", " --mpathoption \ \fImpath_options\fP
Comma-separated list of device-mapper multipath specific options:
.RS
.IP \fIdisablequeueing\fP
@@ -62,12 +62,12 @@ all the paths are unavailable for any underlying device-mapper multipath
device.
.RE
.TP
.BR \-u ", " \-\-umount
.BR -u ", " --umount
Unmount a mounted device before trying to deactivate it.
Without this option, a mounted device is not deactivated.
.TP
.BR \-v ", " \-\-verbose
Run in verbose mode. Use \-\-vv for even more verbose mode.
.BR -v ", " --verbose
Run in verbose mode. Use --vv for even more verbose mode.
.SH EXAMPLES
.
Deactivate all supported block devices found in the system, skipping mounted
@@ -81,14 +81,14 @@ Deactivate all supported block devices found in the system, unmounting any
mounted devices first, if possible.
.BR
#
.B blkdeactivate \-u
.B blkdeactivate -u
.BR
.P
Deactivate the device /dev/vg/lvol0 together with all its holders, unmounting
any mounted devices first, if possible.
.BR
#
.B blkdeactivate \-u /dev/vg/lvol0
.B blkdeactivate -u /dev/vg/lvol0
.BR
.P
Deactivate all supported block devices found in the system. If the deactivation
@@ -96,14 +96,14 @@ of a device-mapper device fails, retry it. Deactivate the whole
Volume Group at once when processing an LVM Logical Volume.
.BR
#
.B blkdeactivate \-u \-d retry \-l wholevg
.B blkdeactivate -u -d retry -l wholevg
.BR
.P
Deactivate all supported block devices found in the system. If the deactivation
of a device-mapper device fails, retry it and force removal.
.BR
#
.B blkdeactivate \-d force,retry
.B blkdeactivate -d force,retry
.
.SH SEE ALSO
.BR dmsetup (8),

View File

@@ -8,22 +8,22 @@ clvmd \(em cluster LVM daemon
.
.ad l
.B clvmd
.RB [ \-C ]
.RB [ \-d
.RB [ -C ]
.RB [ -d
.RI [ value ]]
.RB [ \-E
.RB [ -E
.IR lock_uuid ]
.RB [ \-f ]
.RB [ \-h ]
.RB [ \-I
.RB [ -f ]
.RB [ -h ]
.RB [ -I
.IR cluster_manager ]
.RB [ \-R ]
.RB [ \-S ]
.RB [ \-t
.RB [ -R ]
.RB [ -S ]
.RB [ -t
.IR timeout ]
.RB [ \-T
.RB [ -T
.IR start_timeout ]
.RB [ \-V ]
.RB [ -V ]
.ad b
.
.SH DESCRIPTION
@@ -35,12 +35,12 @@ if a node in the cluster does not have this daemon running.
.SH OPTIONS
.
.HP
.BR \-C
.BR -C
.br
Only valid if \fB\-d\fP is also specified.
Only valid if \fB-d\fP is also specified.
Tells all clvmds in a cluster to enable/disable debug logging.
Without this switch, only the local clvmd will change its debug level to that
given with \fB\-d\fP.
given with \fB-d\fP.
.br
This does not work correctly if specified on the command-line that starts clvmd.
If you want to start clvmd \fBand\fP
@@ -48,14 +48,14 @@ enable cluster-wide logging then the command needs to be issued twice, eg:
.br
.BR clvmd
.br
.BR clvmd\ \-d2
.BR clvmd\ -d2
.
.HP
.BR \-d
.BR -d
.RI [ value ]
.br
Set debug logging level.
If \fB\-d\fP is specified without a \fIvalue\fP
If \fB-d\fP is specified without a \fIvalue\fP
then 1 is assumed. \fIValue\fP can be:
.PD 0
.IP
@@ -63,30 +63,30 @@ then 1 is assumed. \fIValue\fP can be:
\(em Disabled
.IP
.BR 1
\(em Sends debug logs to stderr (implies \fB\-f\fP)
\(em Sends debug logs to stderr (implies \fB-f\fP)
.IP
.BR 2
\(em Sends debug logs to \fBsyslog\fP(3)
.PD
.
.HP
.BR \-E
.BR -E
.IR lock_uuid
.br
Pass lock uuid to be reacquired exclusively when clvmd is restarted.
.
.HP
.BR \-f
.BR -f
.br
Don't fork, run in the foreground.
.
.HP
.BR \-h
.BR -h
.br
Show help information.
.
.HP
.BR \-I
.BR -I
.IR cluster_manager
.br
Selects the cluster manager to use for locking and internal
@@ -94,24 +94,24 @@ communications. As it is quite possible to have multiple managers available on
the same system you might have to manually specify this option to override the
search.
By default, omitting \fB-I\fP is equivalent to \fB\-Iauto\fP.
By default, omitting \fB-I\fP is equivalent to \fB-Iauto\fP.
Clvmd will use the first cluster manager that succeeds,
and it checks them in a predefined order
.BR cman ,
.BR corosync ,
.BR openais .
The available managers will be listed by order as part of the
\fBclvmd \-h\fP output.
\fBclvmd -h\fP output.
.
.HP
.BR \-R
.BR -R
.br
Tells all the running instances of \fBclvmd\fP in the cluster to reload their device cache and
re-read the lvm configuration file \fBlvm.conf\fP(5). This command should be run whenever the
devices on a cluster system are changed.
.
.HP
.BR \-S
.BR -S
.br
Tells the running \fBclvmd\fP to exit and reexecute itself, for example at the
end of a package upgrade. The new instance is instructed to reacquire
@@ -120,7 +120,7 @@ methods of restarting the daemon have the side effect of changing
exclusive LV locks into shared locks.)
.
.HP
.BR \-t
.BR -t
.IR timeout
.br
Specifies the \fItimeout\fP for commands to run around the cluster. This should not
@@ -129,7 +129,7 @@ may need to increase this on systems with very large disk farms.
The default is 60 seconds.
.
.HP
.BR \-T
.BR -T
.IR start_timeout
.br
Specifies the start timeout for \fBclvmd\fP daemon startup. If the
@@ -147,10 +147,10 @@ The default is \fB0\fP (no timeout) and the value is in seconds. Don't set this
small or you will experience spurious errors. 10 or 20 seconds might be
sensible.
This timeout will be ignored if you start \fBclvmd\fP with the \fB\-d\fP option.
This timeout will be ignored if you start \fBclvmd\fP with the \fB-d\fP option.
.
.HP
.BR \-V
.BR -V
.br
Display the version of the cluster LVM daemon.
.

View File

@@ -3,7 +3,7 @@
cmirrord \(em cluster mirror log daemon
.SH SYNOPSIS
\fBcmirrord\fR [\fB\-f\fR] [\fB\-h\fR]
\fBcmirrord\fR [\fB-f\fR] [\fB-h\fR]
.SH DESCRIPTION
\fBcmirrord\fP is the daemon that tracks mirror log information in a cluster.
@@ -26,9 +26,9 @@ ignored. Active cluster mirrors should be shutdown before stopping the cluster
mirror log daemon.
.SH OPTIONS
.IP "\fB\-f\fR, \fB\-\-foreground\fR" 4
.IP "\fB-f\fR, \fB--foreground\fR" 4
Do not fork and log to the terminal.
.IP "\fB\-h\fR, \fB\-\-help\fR" 4
.IP "\fB-h\fR, \fB--help\fR" 4
Print usage.
.SH SEE ALSO

View File

@@ -7,15 +7,15 @@ dmeventd \(em Device-mapper event daemon
.SH SYNOPSIS
.
.B dmeventd
.RB [ \-d
.RB [ \-d
.RB [ \-d ]]]
.RB [ \-f ]
.RB [ \-h ]
.RB [ \-l ]
.RB [ \-R ]
.RB [ \-V ]
.RB [ \-? ]
.RB [ -d
.RB [ -d
.RB [ -d ]]]
.RB [ -f ]
.RB [ -h ]
.RB [ -l ]
.RB [ -R ]
.RB [ -V ]
.RB [ -? ]
.
.SH DESCRIPTION
.
@@ -27,46 +27,46 @@ particular events occur.
.SH OPTIONS
.
.HP
.BR \-d
.BR -d
.br
Repeat from 1 to 3 times (
.BR \-d ,
.BR \-dd ,
.BR \-ddd
.BR -d ,
.BR -dd ,
.BR -ddd
) to increase the detail of
debug messages sent to syslog.
Each extra d adds more debugging information.
.
.HP
.BR \-f
.BR -f
.br
Don't fork, run in the foreground.
.
.HP
.BR \-h
.BR -h
.br
Show help information.
.
.HP
.BR \-l
.BR -l
.br
Log through stdout and stderr instead of syslog.
This option works only with option \-f, otherwise it is ignored.
This option works only with option -f, otherwise it is ignored.
.
.HP
.BR \-?
.BR -?
.br
Show help information on stderr.
.
.HP
.BR \-R
.BR -R
.br
Replace a running dmeventd instance. The running dmeventd must be version
2.02.77 or newer. The new dmeventd instance will obtain a list of devices and
events to monitor from the currently running daemon.
.
.HP
.BR \-V
.BR -V
.br
Show version of dmeventd.
.

View File

@@ -23,12 +23,12 @@ dmsetup \(em low level logical volume management
. ad l
. BR create
. IR device_name
. RB [ -u | \-\-uuid
. RB [ -u | --uuid
. IR uuid ]
. RB \%[ \-\-addnodeoncreate | \-\-addnodeonresume ]
. RB \%[ \-n | \-\-notable | \-\-table
. RB \%[ --addnodeoncreate | --addnodeonresume ]
. RB \%[ -n | --notable | --table
. IR \%table | table_file ]
. RB [ \-\-readahead
. RB [ --readahead
. RB \%[ + ] \fIsectors | auto | none ]
. ad b
..
@@ -39,7 +39,7 @@ dmsetup \(em low level logical volume management
.de CMD_DEPS
. ad l
. BR deps
. RB [ \-o
. RB [ -o
. IR options ]
. RI [ device_name ...]
. ad b
@@ -50,7 +50,7 @@ dmsetup \(em low level logical volume management
.B dmsetup
.de CMD_HELP
. BR help
. RB [ \-c | \-C | \-\-columns ]
. RB [ -c | -C | --columns ]
..
.CMD_HELP
.
@@ -67,18 +67,18 @@ dmsetup \(em low level logical volume management
.de CMD_INFOLONG
. ad l
. BR info
. BR \-c | \-C | \-\-columns
. RB [ \-\-count
. BR -c | -C | --columns
. RB [ --count
. IR count ]
. RB [ \-\-interval
. RB [ --interval
. IR seconds ]
. RB \%[ \-\-nameprefixes ]
. RB \%[ \-\-noheadings ]
. RB [ \-o
. RB \%[ --nameprefixes ]
. RB \%[ --noheadings ]
. RB [ -o
. IR fields ]
. RB [ \-O | \-\-sort
. RB [ -O | --sort
. IR sort_fields ]
. RB [ \-\-separator
. RB [ --separator
. IR separator ]
. RI [ device_name ]
. ad b
@@ -91,7 +91,7 @@ dmsetup \(em low level logical volume management
. ad l
. BR load
. IR device_name
. RB [ \-\-table
. RB [ --table
. IR table | table_file ]
. ad b
..
@@ -102,12 +102,12 @@ dmsetup \(em low level logical volume management
.de CMD_LS
. ad l
. BR ls
. RB [ \-\-target
. RB [ --target
. IR target_type ]
. RB [ \-\-exec
. RB [ --exec
. IR command ]
. RB [ \-\-tree ]
. RB [ \-o
. RB [ --tree ]
. RB [ -o
. IR options ]
. ad b
..
@@ -145,7 +145,7 @@ dmsetup \(em low level logical volume management
. ad l
. BR reload
. IR device_name
. RB [ \-\-table
. RB [ --table
. IR table | table_file ]
. ad b
..
@@ -156,9 +156,9 @@ dmsetup \(em low level logical volume management
.de CMD_REMOVE
. ad l
. BR remove
. RB [ \-f | \-\-force ]
. RB [ \-\-retry ]
. RB [ \-\-deferred ]
. RB [ -f | --force ]
. RB [ --retry ]
. RB [ --deferred ]
. IR device_name ...
. ad b
..
@@ -168,8 +168,8 @@ dmsetup \(em low level logical volume management
.B dmsetup
.de CMD_REMOVE_ALL
. BR remove_all
. RB [ \-f | \-\-force ]
. RB [ \-\-deferred ]
. RB [ -f | --force ]
. RB [ --deferred ]
..
.CMD_REMOVE_ALL
.
@@ -187,7 +187,7 @@ dmsetup \(em low level logical volume management
.de CMD_RENAME_UUID
. BR rename
. IR device_name
. BR \-\-setuuid
. BR --setuuid
. IR uuid
..
.CMD_RENAME_UUID
@@ -198,10 +198,10 @@ dmsetup \(em low level logical volume management
. ad l
. BR resume
. IR device_name ...
. RB [ \-\-addnodeoncreate | \-\-addnodeonresume ]
. RB [ \-\-noflush ]
. RB [ \-\-nolockfs ]
. RB \%[ \-\-readahead
. RB [ --addnodeoncreate | --addnodeonresume ]
. RB [ --noflush ]
. RB [ --nolockfs ]
. RB \%[ --readahead
. RB \%[ + ] \fIsectors | auto | none ]
. ad b
..
@@ -244,9 +244,9 @@ dmsetup \(em low level logical volume management
.de CMD_STATUS
. ad l
. BR status
. RB [ \-\-target
. RB [ --target
. IR target_type ]
. RB [ \-\-noflush ]
. RB [ --noflush ]
. RI [ device_name ...]
. ad b
..
@@ -257,8 +257,8 @@ dmsetup \(em low level logical volume management
.de CMD_SUSPEND
. ad l
. BR suspend
. RB [ \-\-nolockfs ]
. RB [ \-\-noflush ]
. RB [ --nolockfs ]
. RB [ --noflush ]
. IR device_name ...
. ad b
..
@@ -269,9 +269,9 @@ dmsetup \(em low level logical volume management
.de CMD_TABLE
. ad l
. BR table
. RB [ \-\-target
. RB [ --target
. IR target_type ]
. RB [ \-\-showkeys ]
. RB [ --showkeys ]
. RI [ device_name ...]
. ad b
..
@@ -342,7 +342,7 @@ dmsetup \(em low level logical volume management
.de CMD_WAIT
. ad l
. BR wait
. RB [ \-\-noflush ]
. RB [ --noflush ]
. IR device_name
. RI [ event_nr ]
. ad b
@@ -355,9 +355,9 @@ dmsetup \(em low level logical volume management
. ad l
. BR wipe_table
. IR device_name ...
. RB [ \-f | \-\-force ]
. RB [ \-\-noflush ]
. RB [ \-\-nolockfs ]
. RB [ -f | --force ]
. RB [ --noflush ]
. RB [ --nolockfs ]
. ad b
..
.CMD_WIPE_TABLE
@@ -383,70 +383,70 @@ The second argument is the logical device name or uuid.
Invoking the dmsetup tool as \fBdevmap_name\fP
(which is not normally distributed and is supported
only for historical reasons) is equivalent to
.BI \%dmsetup\ info\ \-c\ \-\-noheadings\ \-j \ major\ \-m \ minor \c
.BI \%dmsetup\ info\ -c\ --noheadings\ -j \ major\ -m \ minor \c
\fR.
.\" dot above here fixes -Thtml rendering for next HP option
.
.SH OPTIONS
.
.HP
.BR \-\-addnodeoncreate
.BR --addnodeoncreate
.br
Ensure \fI/dev/mapper\fP node exists after \fBdmsetup create\fP.
.
.HP
.BR \-\-addnodeonresume
.BR --addnodeonresume
.br
Ensure \fI/dev/mapper\fP node exists after \fBdmsetup resume\fP (default with udev).
.
.HP
.BR \-\-checks
.BR --checks
.br
Perform additional checks on the operations requested and report
potential problems. Useful when debugging scripts.
In some cases these checks may slow down operations noticeably.
.
.HP
.BR \-c | \-C | \-\-columns
.BR -c | -C | --columns
.br
Display output in columns rather than as Field: Value lines.
.
.HP
.BR \-\-count
.BR --count
.IR count
.br
Specify the number of times to repeat a report. Set this to zero
to continue until interrupted. The default interval is one second.
.
.HP
.BR \-f | \-\-force
.BR -f | --force
.br
Try harder to complete operation.
.
.HP
.BR \-h | \-\-help
.BR -h | --help
.br
Outputs a summary of the commands available, optionally including
the list of report fields (synonym with \fBhelp\fP command).
.
.HP
.BR \-\-inactive
.BR --inactive
.br
When returning any table information from the kernel, report on the
inactive table instead of the live table.
Requires kernel driver version 4.16.0 or above.
.
.HP
.BR \-\-interval
.BR --interval
.IR seconds
.br
Specify the interval in seconds between successive iterations for
repeating reports. If \fB\-\-interval\fP is specified but \fB\-\-count\fP
repeating reports. If \fB--interval\fP is specified but \fB--count\fP
is not, reports will continue to repeat until interrupted.
The default interval is one second.
.
.HP
.BR \-\-manglename
.BR --manglename
.BR auto | hex | none
.br
Mangle any character not on a whitelist using mangling_mode when
@@ -466,69 +466,69 @@ Mangling mode could be also set through
environment variable.
.
.HP
.BR \-j | \-\-major
.BR -j | --major
.IR major
.br
Specify the major number.
.
.HP
.BR \-m | \-\-minor
.BR -m | --minor
.IR minor
.br
Specify the minor number.
.
.HP
.BR \-n | \-\-notable
.BR -n | --notable
.br
When creating a device, don't load any table.
.
.HP
.BR \-\-nameprefixes
.BR --nameprefixes
.br
Add a "DM_" prefix plus the field name to the output. Useful with
\fB\-\-noheadings\fP to produce a list of
\fB--noheadings\fP to produce a list of
field=value pairs that can be used to set environment variables
(for example, in
.BR udev (7)
rules).
.
.HP
.BR \-\-noheadings
.BR --noheadings
Suppress the headings line when using columnar output.
.
.HP
.BR \-\-noflush
.BR --noflush
Do not flush outstanding I/O when suspending a device, or do not
commit thin-pool metadata when obtaining thin-pool status.
.
.HP
.BR \-\-nolockfs
.BR --nolockfs
.br
Do not attempt to synchronize the filesystem, e.g. when suspending a device.
.
.HP
.BR \-\-noopencount
.BR --noopencount
.br
Tell the kernel not to supply the open reference count for the device.
.
.HP
.BR \-\-noudevrules
.BR --noudevrules
.br
Do not allow udev to manage nodes for devices in device-mapper directory.
.
.HP
.BR \-\-noudevsync
.BR --noudevsync
.br
Do not synchronise with udev when creating, renaming or removing devices.
.
.HP
.BR \-o | \-\-options
.BR -o | --options
.IR options
.br
Specify which fields to display.
.
.HP
.BR \-\-readahead
.BR --readahead
.RB [ + ] \fIsectors | auto | none
.br
Specify read ahead size in units of sectors.
@@ -539,12 +539,12 @@ smaller than the value chosen by the kernel.
The value \fBnone\fP is equivalent to specifying zero.
.
.HP
.BR \-r | \-\-readonly
.BR -r | --readonly
.br
Set the table being loaded read-only.
.
.HP
.BR \-S | \-\-select
.BR -S | --select
.IR selection
.br
Display only rows that match \fIselection\fP criteria. All rows are displayed
@@ -557,14 +557,14 @@ selection operators, check the output of \fBdmsetup\ info\ -c\ -S\ help\fP
command.
.
.HP
.BR \-\-table
.BR --table
.IR table
.br
Specify a one-line table directly on the command line.
See below for more information on the table format.
.
.HP
.BR \-\-udevcookie
.BR --udevcookie
.IR cookie
.br
Use cookie for udev synchronisation.
@@ -573,29 +573,29 @@ multiple different devices. It's not adviced to combine different
operations on a single device.
.
.HP
.BR \-u | \-\-uuid
.BR -u | --uuid
.br
Specify the \fIuuid\fP.
.
.HP
.BR \-y | \-\-yes
.BR -y | --yes
.br
Answer yes to all prompts automatically.
.
.HP
.BR \-v | \-\-verbose
.RB [ \-v | \-\-verbose ]
.BR -v | --verbose
.RB [ -v | --verbose ]
.br
Produce additional output.
.
.HP
.BR \-\-verifyudev
.BR --verifyudev
.br
If udev synchronisation is enabled, verify that udev operations get performed
correctly and try to fix up the device nodes afterwards if not.
.
.HP
.BR \-\-version
.BR --version
.br
Display the library and kernel driver version.
.br
@@ -612,7 +612,7 @@ Destroys the table in the inactive table slot for device_name.
.br
Creates a device with the given name.
If \fItable\fP or \fItable_file\fP is supplied, the table is loaded and made live.
Otherwise a table is read from standard input unless \fB\-\-notable\fP is used.
Otherwise a table is read from standard input unless \fB--notable\fP is used.
The optional \fIuuid\fP can be used in place of
device_name in subsequent dmsetup commands.
If successful the device will appear in table and for live
@@ -682,7 +682,7 @@ Device names on output can be customised by following options:
\fBdevno\fP (major and minor pair, used by default),
\fBblkdevname\fP (block device name),
\fBdevname\fP (map name for device-mapper devices, equal to blkdevname otherwise).
\fB\-\-tree\fP displays dependencies between devices as a tree.
\fB--tree\fP displays dependencies between devices as a tree.
It accepts a comma-separated list of \fIoptions\fP.
Some specify the information displayed against each node:
.BR device / nodevice ;
@@ -705,11 +705,11 @@ If neither is supplied, reads a table from standard input.
Ensure existing device-mapper \fIdevice_name\fP and UUID is in the correct mangled
form containing only whitelisted characters (supported by udev) and do
a rename if necessary. Any character not on the whitelist will be mangled
based on the \fB\-\-manglename\fP setting. Automatic rename works only for device
based on the \fB--manglename\fP setting. Automatic rename works only for device
names and not for device UUIDs because the kernel does not allow changing
the UUID of active devices. Any incorrect UUIDs are reported only and they
must be manually corrected by deactivating the device first and then
reactivating it with proper mangling mode used (see also \fB\-\-manglename\fP).
reactivating it with proper mangling mode used (see also \fB--manglename\fP).
.
.HP
.CMD_MESSAGE
@@ -728,16 +728,16 @@ driver, adding, changing or removing nodes as necessary.
.CMD_REMOVE
.br
Removes a device. It will no longer be visible to dmsetup. Open devices
cannot be removed, but adding \fB\-\-force\fP will replace the table with one
that fails all I/O. \fB\-\-deferred\fP will enable deferred removal of open
cannot be removed, but adding \fB--force\fP will replace the table with one
that fails all I/O. \fB--deferred\fP will enable deferred removal of open
devices - the device will be removed when the last user closes it. The deferred
removal feature is supported since version 4.27.0 of the device-mapper
driver available in upstream kernel version 3.13. (Use \fBdmsetup version\fP
to check this.) If an attempt to remove a device fails, perhaps because a process run
from a quick udev rule temporarily opened the device, the \fB\-\-retry\fP
from a quick udev rule temporarily opened the device, the \fB--retry\fP
option will cause the operation to be retried for a few seconds before failing.
Do NOT combine
\fB\-\-force\fP and \fB\-\-udevcookie\fP, as udev may start to process udev
\fB--force\fP and \fB--udevcookie\fP, as udev may start to process udev
rules in the middle of error target replacement, with nondeterministic
results.
.
@@ -746,8 +746,8 @@ result.
.br
Attempts to remove all device definitions, i.e. reset the driver. This also runs
\fBmknodes\fP afterwards. Use with care! Open devices cannot be removed, but
adding \fB\-\-force\fP will replace the table with one that fails all I/O.
\fB\-\-deferred\fP will enable deferred removal of open devices - the device
adding \fB--force\fP will replace the table with one that fails all I/O.
\fB--deferred\fP will enable deferred removal of open devices - the device
will be removed when the last user closes it. The deferred removal feature is
supported since version 4.27.0 of the device-mapper driver available in
upstream kernel version 3.13.
@@ -797,8 +797,8 @@ for more details.
.CMD_STATUS
.br
Outputs status information for each of the device's targets.
With \fB\-\-target\fP, only information relating to the specified target type
any is displayed. With \fB\-\-noflush\fP, the thin target (from version 1.3.0)
With \fB--target\fP, only information relating to the specified target type,
if any, is displayed. With \fB--noflush\fP, the thin target (from version 1.3.0)
doesn't commit any outstanding changes to disk before reporting its statistics.
.HP
@@ -808,9 +808,9 @@ Suspends a device. Any I/O that has already been mapped by the device
but has not yet completed will be flushed. Any further I/O to that
device will be postponed for as long as the device is suspended.
If there's a filesystem on the device which supports the operation,
an attempt will be made to sync it first unless \fB\-\-nolockfs\fP is specified.
an attempt will be made to sync it first unless \fB--nolockfs\fP is specified.
Some targets such as recent (October 2006) versions of multipath may support
the \fB\-\-noflush\fP option. This lets outstanding I/O that has not yet reached the
the \fB--noflush\fP option. This lets outstanding I/O that has not yet reached the
device remain unflushed.
.
.HP
@@ -818,10 +818,10 @@ device to remain unflushed.
.br
Outputs the current table for the device in a format that can be fed
back in using the create or load commands.
With \fB\-\-target\fP, only information relating to the specified target type
With \fB--target\fP, only information relating to the specified target type
is displayed.
Real encryption keys are suppressed in the table output for the crypt
target unless the \fB\-\-showkeys\fP parameter is supplied. Kernel key
target unless the \fB--showkeys\fP parameter is supplied. Kernel key
references prefixed with \fB:\fP are not affected by the parameter and are
always displayed.
.
@@ -855,7 +855,7 @@ The output is a cookie value. Normally we don't need to create cookies since
dmsetup creates and destroys them for each action automatically. However, we can
generate one explicitly to group several actions together and use only one
cookie instead. We can define a cookie to use for each relevant command by using
\fB\-\-udevcookie\fP option. Alternatively, we can export this value into the environment
\fB--udevcookie\fP option. Alternatively, we can export this value into the environment
of the dmsetup process as \fBDM_UDEV_COOKIE\fP variable and it will be used automatically
with all subsequent commands until it is unset.
Invoking this command will create a system-wide semaphore that needs to be cleaned
@@ -888,10 +888,10 @@ Outputs version information.
.CMD_WAIT
.br
Sleeps until the event counter for device_name exceeds event_nr.
Use \fB\-v\fP to see the event number returned.
Use \fB-v\fP to see the event number returned.
To wait until the next event is triggered, use \fBinfo\fP to find
the last event number.
With \fB\-\-noflush\fP, the thin target (from version 1.3.0) doesn't commit
With \fB--noflush\fP, the thin target (from version 1.3.0) doesn't commit
any outstanding changes to disk before reporting its statistics.
.
.HP
@@ -1005,11 +1005,11 @@ Defaults to "\fI/dev\fP" and must be an absolute path.
.TP
.B DM_UDEV_COOKIE
A cookie to use for all relevant commands to synchronize with udev processing.
It is an alternative to using \fB\-\-udevcookie\fP option.
It is an alternative to using \fB--udevcookie\fP option.
.TP
.B DM_DEFAULT_NAME_MANGLING_MODE
A default mangling mode. Defaults to "\fB#DEFAULT_MANGLING#\fP"
and it is an alternative to using \fB\-\-manglename\fP option.
and it is an alternative to using \fB--manglename\fP option.
.
.SH AUTHORS
.

View File

@@ -1,21 +1,21 @@
.TH DMSTATS 8 "Jun 23 2016" "Linux" "MAINTENANCE COMMANDS"
.de OPT_PROGRAMS
. RB \%[ \-\-allprograms | \-\-programid
. RB \%[ --allprograms | --programid
. IR id ]
..
.
.de OPT_REGIONS
. RB \%[ \-\-allregions | \-\-regionid
. RB \%[ --allregions | --regionid
. IR id ]
..
.de OPT_OBJECTS
. RB [ \-\-area ]
. RB [ \-\-region ]
. RB [ \-\-group ]
. RB [ --area ]
. RB [ --region ]
. RB [ --group ]
..
.de OPT_FOREGROUND
. RB [ \-\-foreground ]
. RB [ --foreground ]
..
.
.\" Print units suffix, use with arg to print human
@@ -57,13 +57,13 @@ dmstats \(em device-mapper statistics management
. ad l
. IR command
. IR device_name " |"
. BR \-\-major
. BR --major
. IR major
. BR \-\-minor
. BR --minor
. IR minor " |"
. BR \-u | \-\-uuid
. BR -u | --uuid
. IR uuid
. RB \%[ \-v | \-\-verbose]
. RB \%[ -v | --verbose]
. ad b
..
.CMD_COMMAND
@@ -85,26 +85,26 @@ dmstats \(em device-mapper statistics management
.de CMD_CREATE
. ad l
. BR create
. IR device_name... | file_path... | \fB\-\-alldevices
. RB [ \-\-areas
. IR nr_areas | \fB\-\-areasize
. IR device_name... | file_path... | \fB--alldevices
. RB [ --areas
. IR nr_areas | \fB--areasize
. IR area_size ]
. RB [ \-\-bounds
. RB [ --bounds
. IR \%histogram_boundaries ]
. RB [ \-\-filemap ]
. RB [ \-\-follow
. RB [ --filemap ]
. RB [ --follow
. IR follow_mode ]
. OPT_FOREGROUND
. RB [ \-\-nomonitor ]
. RB [ \-\-nogroup ]
. RB [ \-\-precise ]
. RB [ \-\-start
. RB [ --nomonitor ]
. RB [ --nogroup ]
. RB [ --precise ]
. RB [ --start
. IR start_sector
. BR \-\-length
. IR length | \fB\-\-segments ]
. RB \%[ \-\-userdata
. BR --length
. IR length | \fB--segments ]
. RB \%[ --userdata
. IR user_data ]
. RB [ \-\-programid
. RB [ --programid
. IR id ]
. ad b
..
@@ -115,7 +115,7 @@ dmstats \(em device-mapper statistics management
.de CMD_DELETE
. ad l
. BR delete
. IR device_name | \fB\-\-alldevices
. IR device_name | \fB--alldevices
. OPT_PROGRAMS
. OPT_REGIONS
. ad b
@@ -127,10 +127,10 @@ dmstats \(em device-mapper statistics management
.de CMD_GROUP
. ad l
. BR group
. RI [ device_name | \fB\-\-alldevices ]
. RB [ \-\-alias
. RI [ device_name | \fB--alldevices ]
. RB [ --alias
. IR name ]
. RB [ \-\-regions
. RB [ --regions
. IR regions ]
. ad b
..
@@ -140,7 +140,7 @@ dmstats \(em device-mapper statistics management
.de CMD_HELP
. ad l
. BR help
. RB [ \-c | \-C | \-\-columns ]
. RB [ -c | -C | --columns ]
. ad b
..
.CMD_HELP
@@ -151,14 +151,14 @@ dmstats \(em device-mapper statistics management
. ad l
. BR list
. RI [ device_name ]
. RB [ \-\-histogram ]
. RB [ --histogram ]
. OPT_PROGRAMS
. RB [ \-\-units
. RB [ --units
. IR units ]
. OPT_OBJECTS
. RB \%[ \-\-nosuffix ]
. RB [ \-\-notimesuffix ]
. RB \%[ \-v | \-\-verbose]
. RB \%[ --nosuffix ]
. RB [ --notimesuffix ]
. RB \%[ -v | --verbose]
. ad b
..
.CMD_LIST
@@ -169,7 +169,7 @@ dmstats \(em device-mapper statistics management
. ad l
. BR print
. RI [ device_name ]
. RB [ \-\-clear ]
. RB [ --clear ]
. OPT_PROGRAMS
. OPT_REGIONS
. ad b
@@ -182,24 +182,24 @@ dmstats \(em device-mapper statistics management
. ad l
. BR report
. RI [ device_name ]
. RB [ \-\-interval
. RB [ --interval
. IR seconds ]
. RB [ \-\-count
. RB [ --count
. IR count ]
. RB [ \-\-units
. RB [ --units
. IR units ]
. RB [ \-\-histogram ]
. RB [ --histogram ]
. OPT_PROGRAMS
. OPT_REGIONS
. OPT_OBJECTS
. RB [ \-O | \-\-sort
. RB [ -O | --sort
. IR sort_fields ]
. RB [ \-S | \-\-select
. RB [ -S | --select
. IR selection ]
. RB [ \-\-units
. RB [ --units
. IR units ]
. RB [ \-\-nosuffix ]
. RB \%[ \-\-notimesuffix ]
. RB [ --nosuffix ]
. RB \%[ --notimesuffix ]
. ad b
..
.CMD_REPORT
@@ -208,8 +208,8 @@ dmstats \(em device-mapper statistics management
.de CMD_UNGROUP
. ad l
. BR ungroup
. RI [ device_name | \fB\-\-alldevices ]
. RB [ \-\-groupid
. RI [ device_name | \fB--alldevices ]
. RB [ --groupid
. IR id ]
. ad b
..
@@ -220,9 +220,9 @@ dmstats \(em device-mapper statistics management
. ad l
. BR update_filemap
. IR file_path
. RB [ \-\-groupid
. RB [ --groupid
. IR id ]
. RB [ \-\-follow
. RB [ --follow
. IR follow_mode ]
. OPT_FOREGROUND
. ad b
@@ -248,47 +248,47 @@ control, and reporting behaviour.
When no device argument is given dmstats will by default operate on all
device-mapper devices present. The \fBcreate\fP and \fBdelete\fP
commands require the use of \fB\-\-alldevices\fP when used in this way.
commands require the use of \fB--alldevices\fP when used in this way.
.
.SH OPTIONS
.
.HP
.BR \-\-alias
.BR --alias
.IR name
.br
Specify an alias name for a group.
.
.HP
.BR \-\-alldevices
.BR --alldevices
.br
If no device arguments are given, allow operation on all devices when
creating or deleting regions.
.
.HP
.BR \-\-allprograms
.BR --allprograms
.br
Include regions from all program IDs for list and report operations.
.br
.HP
.BR \-\-allregions
.BR --allregions
.br
Include all present regions for commands that normally accept a single
region identifier.
.
.HP
.BR \-\-area
.BR --area
.br
When performing a list or report, include objects of type area in the
results.
.
.HP
.BR \-\-areas
.BR --areas
.IR nr_areas
.br
Specify the number of statistics areas to create within a new region.
.
.HP
.BR \-\-areasize
.BR --areasize
.IR area_size \c
.RB [ \c
.UNITS
@@ -298,25 +298,25 @@ optional suffix selects units of:
.HELP_UNITS
.
.HP
.BR \-\-clear
.BR --clear
.br
When printing statistics counters, also atomically reset them to zero.
.
.HP
.BR \-\-count
.BR --count
.IR count
.br
Specify the iteration count for repeating reports. If the count
argument is zero reports will continue to repeat until interrupted.
.
.HP
.BR \-\-group
.BR --group
.br
When performing a list or report, include objects of type group in the
results.
.
.HP
.BR \-\-filemap
.BR --filemap
.br
Instead of creating regions on a device as specified by command line
options, open the file found at each \fBfile_path\fP argument, and
@@ -324,7 +324,7 @@ create regions corresponding to the locations of the on-disk extents
allocated to the file(s).
.
.HP
.BR \-\-nomonitor
.BR --nomonitor
.br
Disable the \fBdmfilemapd\fP daemon when creating new file mapped
groups. Normally the device-mapper filemap monitoring daemon,
@@ -336,7 +336,7 @@ Regions in the group may still be updated with the
\fBupdate_filemap\fP command, or by starting the daemon manually.
.
.HP
.BR \-\-follow
.BR --follow
.IR follow_mode
.br
Specify the \fBdmfilemapd\fP file following mode. The file map
@@ -371,20 +371,20 @@ In either mode, the daemon exits automatically if the monitored group
is removed.
.
.HP
.BR \-\-foreground
.BR --foreground
.br
Specify that the \fBdmfilemapd\fP daemon should run in the foreground.
The daemon will not fork into the background, and will replace the
\fBdmstats\fP command that started it.
.
.HP
.BR \-\-groupid
.BR --groupid
.IR id
.br
Specify the group to operate on.
.
.HP
.BR \-\-bounds
.BR --bounds
.IR histogram_boundaries \c
.RB [ ns | us | ms | s ]
.br
@@ -398,22 +398,22 @@ or \fBs\fP may be given after each value to specify units of
nanoseconds, microseconds, milliseconds or seconds respectively.
.
.HP
.BR \-\-histogram
.BR --histogram
.br
When used with the \fBreport\fP and \fBlist\fP commands select default
fields that emphasize latency histogram data.
.
.HP
.BR \-\-interval
.BR --interval
.IR seconds
.br
Specify the interval in seconds between successive iterations for
repeating reports. If \fB\-\-interval\fP is specified but
\fB\-\-count\fP is not,
repeating reports. If \fB--interval\fP is specified but
\fB--count\fP is not,
reports will continue to repeat until interrupted.
.
.HP
.BR \-\-length
.BR --length
.IR length \c
.RB [ \c
.UNITS
@@ -423,55 +423,55 @@ suffix selects units of:
.HELP_UNITS
.
.HP
.BR \-j | \-\-major
.BR -j | --major
.IR major
.br
Specify the major number.
.
.HP
.BR \-m | \-\-minor
.BR -m | --minor
.IR minor
.br
Specify the minor number.
.
.HP
.BR \-\-nogroup
.BR --nogroup
.br
When creating regions mapping the extents of a file in the file
system, do not create a group or set an alias.
.
.HP
.BR \-\-nosuffix
.BR --nosuffix
.br
Suppress the suffix on output sizes. Use with \fB\-\-units\fP
Suppress the suffix on output sizes. Use with \fB--units\fP
(except h and H) if processing the output.
.
.HP
.BR \-\-notimesuffix
.BR --notimesuffix
.br
Suppress the suffix on output time values. Histogram boundary values
will be reported in units of nanoseconds.
.
.HP
.BR \-o | \-\-options
.BR -o | --options
.br
Specify which report fields to display.
.
.HP
.BR \-O | \-\-sort
.BR -O | --sort
.IR sort_fields
.br
Sort output according to the list of fields given. Precede any
sort field with '\fB-\fP' for a reverse sort on that column.
.
.HP
.BR \-\-precise
.BR --precise
.br
Attempt to use nanosecond precision counters when creating new
statistics regions.
.
.HP
.BR \-\-programid
.BR --programid
.IR id
.br
Specify a program ID string. When creating new statistics regions this
@@ -480,19 +480,19 @@ program ID in order to select only regions with a matching value. The
default program ID for dmstats-managed regions is "dmstats".
.
.HP
.BR \-\-region
.BR --region
.br
When performing a list or report, include objects of type region in the
results.
.
.HP
.BR \-\-regionid
.BR --regionid
.IR id
.br
Specify the region to operate on.
.
.HP
.BR \-\-regions
.BR --regions
.IR region_list
.br
Specify a list of regions to group. The group list is a comma-separated
@@ -500,23 +500,23 @@ list of region identifiers. Continuous sequences of identifiers may be
expressed as a hyphen-separated range, for example: '1-10'.
.
.HP
.BR \-\-relative
.BR --relative
.br
If displaying the histogram report show relative (percentage) values
instead of absolute counts.
.
.HP
.BR \-S | \-\-select
.BR -S | --select
.IR selection
.br
Display only rows that match \fIselection\fP criteria. All rows are displayed
with the additional "selected" column (\fB-o selected\fP) showing 1 if the row
matches the \fIselection\fP and 0 otherwise. The selection criteria are defined by
the \fIselection\fP and 0 otherwise. The selection criteria are defined by
specifying column names and their valid values while making use of
supported comparison operators.
.
.HP
.BR \-\-start
.BR --start
.IR start \c
.RB [ \c
.UNITS
@@ -526,18 +526,18 @@ optional suffix selects units of:
.HELP_UNITS
.
.HP
.BR \-\-segments
.BR --segments
.br
When used with \fBcreate\fP, create a new statistics region for each
target contained in the given device(s). This causes a separate region
to be allocated for each segment of the device.
The newly created regions are automatically placed into a group unless
the \fB\-\-nogroup\fP option is given. When grouping is enabled a group
alias may be specified using the \fB\-\-alias\fP option.
the \fB--nogroup\fP option is given. When grouping is enabled a group
alias may be specified using the \fB--alias\fP option.
.
.HP
.BR \-\-units
.BR --units
.RI [ units ] \c
.RB [ h | H | \c
.UNITS
@@ -546,10 +546,10 @@ Set the display units for report output.
All sizes are output in these units:
.RB ( h )uman-readable,
.HELP_UNITS
Can also specify custom units e.g. \fB\-\-units\ 3M\fP.
Can also specify custom units e.g. \fB--units\ 3M\fP.
.
.HP
.BR \-\-userdata
.BR --userdata
.IR user_data
.br
Specify user data (a word) to be stored with a new region. The value
@@ -558,12 +558,12 @@ information), and stored with the region in the aux_data field provided
by the kernel. Whitespace is not permitted.
.
.HP
.BR \-u | \-\-uuid
.BR -u | --uuid
.br
Specify the uuid.
.
.HP
.BR \-v | \-\-verbose " [" \-v | \-\-verbose ]
.BR -v | --verbose " [" -v | --verbose ]
.br
Produce additional output.
.
@@ -580,17 +580,17 @@ regions (with the exception of in-flight IO counters).
.br
Creates one or more new statistics regions on the specified device(s).
The region will span the entire device unless \fB\-\-start\fP and
\fB\-\-length\fP or \fB\-\-segments\fP are given. The \fB\-\-start\fP and
\fB\-\-length\fP options allow a region of arbitrary length to be placed
at an arbitrary offset into the device. The \fB\-\-segments\fP option
The region will span the entire device unless \fB--start\fP and
\fB--length\fP or \fB--segments\fP are given. The \fB--start\fP and
\fB--length\fP options allow a region of arbitrary length to be placed
at an arbitrary offset into the device. The \fB--segments\fP option
causes a new region to be created for each target in the corresponding
device-mapper device's table.
If the \fB\-\-precise\fP option is used the command will attempt to
If the \fB--precise\fP option is used the command will attempt to
create a region using nanosecond precision counters.
If \fB\-\-bounds\fP is given a latency histogram will be tracked for
If \fB--bounds\fP is given a latency histogram will be tracked for
the new region. The boundaries of the histogram bins are given as a
comma separated list of latency values. There is an implicit lower bound
of zero on the first bin and an implicit upper bound of infinity (or the
@@ -601,7 +601,7 @@ ms, or s may be given after each value to specify units of nanoseconds,
microseconds, milliseconds or seconds respectively, so for example, 10ms
is equivalent to 10000000. Latency values with a precision of less than
one millisecond can only be used when precise timestamps are enabled: if
\fB\-\-precise\fP is not given and values less than one millisecond are
\fB--precise\fP is not given and values less than one millisecond are
used it will be enabled automatically.
An optional \fBprogram_id\fP or \fBuser_data\fP string may be associated
@@ -616,7 +616,7 @@ By default dmstats creates regions with a \fBprogram_id\fP of
On success the \fBregion_id\fP of the newly created region is printed
to stdout.
If the \fB\-\-filemap\fP option is given with a regular file, or list
If the \fB--filemap\fP option is given with a regular file, or list
of files, as the \fBfile_path\fP argument, instead of creating regions
with parameters specified on the command line, \fBdmstats\fP will open
the files located at \fBfile_path\fP and create regions corresponding to
@@ -624,20 +624,20 @@ the physical extents allocated to the file. This can be used to monitor
statistics for individual files in the file system, for example, virtual
machine images, swap areas, or large database files.
To work with the \fB\-\-filemap\fP option, files must be located on a
To work with the \fB--filemap\fP option, files must be located on a
local file system, backed by a device-mapper device, that supports
physical extent data using the FIEMAP ioctl (e.g. Ext4 and XFS).
By default regions that map a file are placed into a group and the
group alias is set to the basename of the file. This behaviour can be
overridden with the \fB\-\-alias\fP and \fB\-\-nogroup\fP options.
overridden with the \fB--alias\fP and \fB--nogroup\fP options.
Creating a group that maps a file automatically starts a daemon,
\fBdmfilemapd\fP to monitor the file and update the mapping as the
extents allocated to the file change. This behaviour can be disabled
using the \fB\-\-nomonitor\fP option.
using the \fB--nomonitor\fP option.
Use the \fB\-\-group\fP option to only display information for groups
Use the \fB--group\fP option to only display information for groups
when listing and reporting.
.
.HP
@@ -648,12 +648,12 @@ by the region are released and the region will not appear in the output
of subsequent list, print, or report operations.
All regions registered on a device may be removed using
\fB\-\-allregions\fP.
\fB--allregions\fP.
To remove all regions on all devices both \fB\-\-allregions\fP and
\fB\-\-alldevices\fP must be used.
To remove all regions on all devices both \fB--allregions\fP and
\fB--alldevices\fP must be used.
If a \fB\-\-groupid\fP is given instead of a \fB\-\-regionid\fP the
If a \fB--groupid\fP is given instead of a \fB--regionid\fP the
command will attempt to delete the group and all regions that it
contains.
@@ -666,8 +666,8 @@ will also be removed.
Combine one or more statistics regions on the specified device into a
group.
The list of regions to be grouped is specified with \fB\-\-regions\fP
and an optional alias may be assigned with \fB\-\-alias\fP. The set of
The list of regions to be grouped is specified with \fB--regions\fP
and an optional alias may be assigned with \fB--alias\fP. The set of
regions is given as a comma-separated list of region identifiers. A
continuous range of identifers spanning from \fBR1\fP to \fBR2\fP may
be expressed as '\fBR1\fP-\fBR2\fP'.
@@ -693,21 +693,21 @@ the list of report fields.
.CMD_LIST
.br
List the statistics regions, areas, or groups registered on the device.
If the \fB\-\-allprograms\fP switch is given all regions will be listed
If the \fB--allprograms\fP switch is given all regions will be listed
regardless of region program ID values.
By default only regions and groups are included in list output. If
\fB\-v\fP or \fB\-\-verbose\fP is given the report will also include a
\fB-v\fP or \fB--verbose\fP is given the report will also include a
row of information for each configured group and for each area contained
in each region displayed.
Regions that contain a single area are by default omitted from the
verbose list since their properties are identical to the area that they
contain (to view all regions regardless of the number of areas present
use \fB\-\-region\fP). To also view the areas contained within regions
use \fB\-\-area\fP.
use \fB--region\fP). To also view the areas contained within regions
use \fB--area\fP.
If \fB\-\-histogram\fP is given the report will include the bin count
If \fB--histogram\fP is given the report will include the bin count
and latency boundary values for any configured histograms.
.HP
.CMD_PRINT
@@ -720,20 +720,20 @@ present regions.
.br
Start a report for the specified object or for all present objects. If
the count argument is specified, the report will repeat at a fixed
interval set by the \fB\-\-interval\fP option. The default interval is
interval set by the \fB--interval\fP option. The default interval is
one second.
If the \fB\-\-allprograms\fP switch is given, all regions will be
If the \fB--allprograms\fP switch is given, all regions will be
listed, regardless of region program ID values.
If the \fB\-\-histogram\fP is given the report will include the histogram
If the \fB--histogram\fP is given the report will include the histogram
values and latency boundaries.
If the \fB\-\-relative\fP is used the default histogram field displays
If the \fB--relative\fP is used the default histogram field displays
bin values as a percentage of the total number of I/Os.
Object types (areas, regions and groups) to include in the report are
selected using the \fB\-\-area\fP, \fB\-\-region\fP, and \fB\-\-group\fP
selected using the \fB--area\fP, \fB--region\fP, and \fB--group\fP
options.
.
.HP
@@ -742,12 +742,12 @@ options.
Remove an existing group and return all the group's regions to their
original state.
The group to be removed is specified using \fB\-\-groupid\fP.
The group to be removed is specified using \fB--groupid\fP.
.HP
.CMD_UPDATE_FILEMAP
.br
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP,
that were previously created with \fB\-\-filemap\fP, either directly,
that were previously created with \fB--filemap\fP, either directly,
or by starting the monitoring daemon, \fBdmfilemapd\fP.
This will add and remove regions to reflect changes in the allocated
@@ -758,11 +758,11 @@ Use of this command is not normally needed since the \fBdmfilemapd\fP
daemon will automatically monitor filemap groups and perform these
updates when required.
If a filemapped group was created with \fB\-\-nomonitor\fP, or the
If a filemapped group was created with \fB--nomonitor\fP, or the
daemon has been killed, the \fBupdate_filemap\fP can be used to
manually force an update or start a new daemon.
Use \fB\-\-nomonitor\fP to force a direct update and disable starting
Use \fB--nomonitor\fP to force a direct update and disable starting
the monitoring daemon.
.
.SH REGIONS, AREAS, AND GROUPS
@@ -786,8 +786,8 @@ The group metadata is stored with the first (lowest numbered)
the group and other group members will be returned to their prior
state.
By default new regions span the entire device. The \fB\-\-start\fP and
\fB\-\-length\fP options allow a region of any size to be placed at any
By default new regions span the entire device. The \fB--start\fP and
\fB--length\fP options allow a region of any size to be placed at any
location on the device.
Using offsets it is possible to create regions that map individual
@@ -798,7 +798,7 @@ and data aggregation.
A region may be either divided into the specified number of equal-sized
areas, or into areas of the given size by specifying one of
\fB\-\-areas\fP or \fB\-\-areasize\fP when creating a region with the
\fB--areas\fP or \fB--areasize\fP when creating a region with the
\fBcreate\fP command. Depending on the size of the areas and the device
region, the final area within the region may be smaller than requested.
.P
@@ -827,7 +827,7 @@ reference the group.
.
.SH FILE MAPPING
.
Using \fB\-\-filemap\fP, it is possible to create regions that
Using \fB--filemap\fP, it is possible to create regions that
correspond to the extents of a file in the file system. This allows
IO statistics to be monitored on a per-file basis, for example to
observe large database files, virtual machine images, or other files
@@ -843,7 +843,7 @@ group, and the group alias is set to the \fBbasename(3)\fP of the
file. This allows statistics to be reported for the file as a whole,
aggregating values for the regions making up the group. To see only
the whole file (group) when using the \fBlist\fP and \fBreport\fP
commands, use \fB\-\-group\fP.
commands, use \fB--group\fP.
Since it is possible for the file to change after the initial
group of regions is created, the \fBupdate_filemap\fP command, and
@@ -888,8 +888,8 @@ progressively out-of-date as extents are added and removed (in this
case the daemon can be re-started or the group updated manually with
the \fBupdate_filemap\fP command).
See the \fBcreate\fP command and \fB\-\-filemap\fP, \fB\-\-follow\fP,
and \fB\-\-nomonitor\fP options for further information.
See the \fBcreate\fP command and \fB--filemap\fP, \fB--follow\fP,
and \fB--nomonitor\fP options for further information.
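A short end-to-end sketch, with a hypothetical file path, might be:
# dmstats create --filemap /var/lib/images/vm1.img
# dmstats report --group
The first command creates one region per file extent plus a group aliased
to the file's basename; the second reports only the group aggregate.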
.
.P
.B Limitations
@@ -984,11 +984,11 @@ when a statistics region is created.
.TP
.B region_start
The region start location. Display units are selected by the
\fB\-\-units\fP option.
\fB--units\fP option.
.TP
.B region_len
The length of the region. Display units are selected by the
\fB\-\-units\fP option.
\fB--units\fP option.
.TP
.B area_id
Area identifier. Area identifiers are assigned by the device-mapper
@@ -1001,11 +1001,11 @@ identifiers exist.
.TP
.B area_start
The area start location. Display units are selected by the
\fB\-\-units\fP option.
\fB--units\fP option.
.TP
.B area_len
The length of the area. Display units are selected by the
\fB\-\-units\fP option.
\fB--units\fP option.
.TP
.B area_count
The number of areas in this region.
@@ -1157,7 +1157,7 @@ vg00/lvol1: Created new region with 1 area(s) as region ID 0
Create a 32M region 1G into device d0
.br
#
.B dmstats create \-\-start 1G \-\-length 32M d0
.B dmstats create --start 1G --length 32M d0
.br
d0: Created new region with 1 area(s) as region ID 0
.P
@@ -1165,7 +1165,7 @@ Create a whole-device region with 8 areas on every device
.br
.br
#
.B dmstats create \-\-areas 8
.B dmstats create --areas 8
.br
vg00-lvol1: Created new region with 8 area(s) as region ID 0
.br
@@ -1183,21 +1183,21 @@ Delete all regions on all devices
.br
.br
#
.B dmstats delete \-\-alldevices \-\-allregions
.B dmstats delete --alldevices --allregions
.P
Create a whole-device region with areas 10GiB in size on vg00/lvol1
using dmsetup
.br
.br
#
.B dmsetup stats create \-\-areasize 10G vg00/lvol1
.B dmsetup stats create --areasize 10G vg00/lvol1
.br
vg00-lvol1: Created new region with 5 area(s) as region ID 1
.P
Create a 1GiB region with 16 areas at the start of vg00/lvol1
.br
#
.B dmstats create \-\-start 0 \-\-len 1G \-\-areas=16 vg00/lvol1
.B dmstats create --start 0 --len 1G --areas=16 vg00/lvol1
.br
vg00-lvol1: Created new region with 16 area(s) as region ID 0
.P
@@ -1218,7 +1218,7 @@ Display five statistics reports for vg00/lvol1 at an interval of one second
.br
.br
#
.B dmstats report \-\-interval 1 \-\-count 5 vg00/lvol1
.B dmstats report --interval 1 --count 5 vg00/lvol1
.br
#
.B dmstats report
@@ -1235,7 +1235,7 @@ Create one region for each target contained in device vg00/lvol1
.br
.br
#
.B dmstats create \-\-segments vg00/lvol1
.B dmstats create --segments vg00/lvol1
.br
vg00-lvol1: Created new region with 1 area(s) as region ID 0
.br
@@ -1262,7 +1262,7 @@ images/vm3.img: Created new group with 2 region(s) as group ID 1560.
Print raw counters for region 4 on device d0
.br
#
.B dmstats print \-\-regionid 4 d0
.B dmstats print --regionid 4 d0
.br
2097152+65536 0 0 0 0 29 0 264 701 0 41 701 0 41
.

View File

@@ -35,32 +35,32 @@ filesystem.
.SH OPTIONS
.
.HP
.BR \-e | \-\-ext\-offline
.BR -e | --ext-offline
.br
Unmount the ext2/ext3/ext4 filesystem before doing the resize.
.
.HP
.BR \-f | \-\-force
.BR -f | --force
.br
Bypass some sanity checks.
.
.HP
.BR \-h | \-\-help
.BR -h | --help
.br
Display the help text.
.
.HP
.BR \-n | \-\-dry\-run
.BR -n | --dry-run
.br
Print commands without running them.
.
.HP
.BR \-v | \-\-verbose
.BR -v | --verbose
.br
Be more verbose.
.
.HP
.BR \-y | \-\-yes
.BR -y | --yes
.br
Answer "yes" at any prompts.
.
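As a hedged illustration of combining these switches (the device path and
size are hypothetical), \fB-n\fP prints the commands a resize would run
without executing them:
# fsadm -n -v resize /dev/vg/test 1000M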
@@ -87,7 +87,7 @@ If \fI/dev/vg/test\fP contains an ext2/ext3/ext4
filesystem, it will be unmounted prior to the resize.
All [y/n] questions will be answered 'y'.
.sp
.B fsadm \-e \-y resize /dev/vg/test 1000M
.B fsadm -e -y resize /dev/vg/test 1000M
.
.SH ENVIRONMENT VARIABLES
.

View File

@@ -1,4 +1,4 @@
.SH EXAMPLES
Change LV permission to read-only:
.sp
.B lvchange \-pr vg00/lvol1
.B lvchange -pr vg00/lvol1
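A related hedged sketch showing activation control with the same command
(LV name as above):
# lvchange -an vg00/lvol1
# lvchange -ay vg00/lvol1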

View File

@@ -1,6 +1,6 @@
.TH LVCHANGE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVCHANGE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvchange \- Change the attributes of logical volume(s)
lvchange - Change the attributes of logical volume(s)
.
.SH SYNOPSIS
\fBlvchange\fP \fIoption_args\fP \fIposition_args\fP
@@ -1092,7 +1092,7 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
.SH EXAMPLES
Change LV permission to read-only:
.sp
.B lvchange \-pr vg00/lvol1
.B lvchange -pr vg00/lvol1
.SH SEE ALSO
.BR lvm (8)

View File

@@ -4,7 +4,7 @@ The LV type is also called the segment type or segtype.
To display the current LV type, run the command:
.B lvs \-o name,segtype
.B lvs -o name,segtype
.I LV
The

View File

@@ -20,19 +20,19 @@ the current metadata LV with LV2 (for repair purposes.)
.SH EXAMPLES
Convert a linear LV to a two-way mirror LV.
.br
.B lvconvert \-\-type mirror \-\-mirrors 1 vg/lvol1
.B lvconvert --type mirror --mirrors 1 vg/lvol1
Convert a linear LV to a two-way RAID1 LV.
.br
.B lvconvert \-\-type raid1 \-\-mirrors 1 vg/lvol1
.B lvconvert --type raid1 --mirrors 1 vg/lvol1
Convert a mirror LV to use an in\-memory log.
Convert a mirror LV to use an in-memory log.
.br
.B lvconvert \-\-mirrorlog core vg/lvol1
.B lvconvert --mirrorlog core vg/lvol1
Convert a mirror LV to use a disk log.
.br
.B lvconvert \-\-mirrorlog disk vg/lvol1
.B lvconvert --mirrorlog disk vg/lvol1
Convert a mirror or raid1 LV to a linear LV.
.br
@@ -40,73 +40,73 @@ Convert a mirror or raid1 LV to a linear LV.
Convert a mirror LV to a raid1 LV with the same number of images.
.br
.B lvconvert \-\-type raid1 vg/lvol1
.B lvconvert --type raid1 vg/lvol1
Convert a linear LV to a two-way mirror LV, allocating new extents from specific
PV ranges.
.br
.B lvconvert \-\-mirrors 1 vg/lvol1 /dev/sda:0\-15 /dev/sdb:0\-15
.B lvconvert --mirrors 1 vg/lvol1 /dev/sda:0-15 /dev/sdb:0-15
Convert a mirror LV to a linear LV, freeing physical extents from a specific PV.
.br
.B lvconvert \-\-type linear vg/lvol1 /dev/sda
.B lvconvert --type linear vg/lvol1 /dev/sda
Split one image from a mirror or raid1 LV, making it a new LV.
.br
.B lvconvert \-\-splitmirrors 1 \-\-name lv_split vg/lvol1
.B lvconvert --splitmirrors 1 --name lv_split vg/lvol1
Split one image from a raid1 LV, and track changes made to the raid1 LV
while the split image remains detached.
.br
.B lvconvert \-\-splitmirrors 1 \-\-trackchanges vg/lvol1
.B lvconvert --splitmirrors 1 --trackchanges vg/lvol1
Merge an image (that was previously created with \-\-splitmirrors and
\-\-trackchanges) back into the original raid1 LV.
Merge an image (that was previously created with --splitmirrors and
--trackchanges) back into the original raid1 LV.
.br
.B lvconvert \-\-mergemirrors vg/lvol1_rimage_1
.B lvconvert --mergemirrors vg/lvol1_rimage_1
Replace PV /dev/sdb1 with PV /dev/sdf1 in a raid1/4/5/6/10 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 vg/lvol1 /dev/sdf1
.B lvconvert --replace /dev/sdb1 vg/lvol1 /dev/sdf1
Replace 3 PVs /dev/sd[b-d]1 with PVs /dev/sd[f-h]1 in a raid1 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 \-\-replace /dev/sdd1
.B lvconvert --replace /dev/sdb1 --replace /dev/sdc1 --replace /dev/sdd1
.RS
.B vg/lvol1 /dev/sd[fgh]1
.RE
Replace the maximum of 2 PVs /dev/sd[bc]1 with PVs /dev/sd[gh]1 in a raid6 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
.B lvconvert --replace /dev/sdb1 --replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV.
is used as an external read-only origin for the new thin LV.
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1 vg/lvol1
.B lvconvert --type thin --thinpool vg/tpool1 vg/lvol1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV, and is
is used as an external read-only origin for the new thin LV, and is
renamed "external".
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1
.B lvconvert --type thin --thinpool vg/tpool1
.RS
.B \-\-originname external vg/lvol1
.B --originname external vg/lvol1
.RE
Convert an LV to a cache pool LV using another specified LV for cache pool
metadata.
.br
.B lvconvert \-\-type cache-pool \-\-poolmetadata vg/poolmeta1 vg/lvol1
.B lvconvert --type cache-pool --poolmetadata vg/poolmeta1 vg/lvol1
Convert an LV to a cache LV using the specified cache pool and chunk size.
.br
.B lvconvert \-\-type cache \-\-cachepool vg/cpool1 \-c 128 vg/lvol1
.B lvconvert --type cache --cachepool vg/cpool1 -c 128 vg/lvol1
Detach and keep the cache pool from a cache LV.
.br
.B lvconvert \-\-splitcache vg/lvol1
.B lvconvert --splitcache vg/lvol1
Detach and remove the cache pool from a cache LV.
.br
.B lvconvert \-\-uncache vg/lvol1
.B lvconvert --uncache vg/lvol1

View File

@@ -1,6 +1,6 @@
.TH LVCONVERT 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVCONVERT 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvconvert \- Change logical volume layout
lvconvert - Change logical volume layout
.
.SH SYNOPSIS
\fBlvconvert\fP \fIoption_args\fP \fIposition_args\fP
@@ -240,7 +240,7 @@ The LV type is also called the segment type or segtype.
To display the current LV type, run the command:
.B lvs \-o name,segtype
.B lvs -o name,segtype
.I LV
The
@@ -1090,7 +1090,8 @@ storage space, usually on a separate device from the data being mirrored.
\fBcore\fP is not persistent; the log is kept only in memory.
In this case, the mirror must be synchronized (by copying LV data from
the first device to others) each time the LV is activated, e.g. after reboot.
\fBmirrored\fP is a persistent log that is itself mirrored.
\fBmirrored\fP is a persistent log that is itself mirrored, but
should be avoided. Instead, use the raid1 type for log redundancy.
.ad b
.HP
.ad l
@@ -1416,7 +1417,7 @@ Alternate command forms, advanced command usage, and listing of all valid syntax
.P
Convert LV to type mirror (also see type raid1),
.br
(also see lvconvert --mirrors).
(also see lvconvert \-\-mirrors).
.br
.P
\fBlvconvert\fP \fB--type\fP \fBmirror\fP \fILV\fP
@@ -1473,7 +1474,7 @@ Change the type of mirror log used by a mirror LV.
Convert LV to a thin LV, using the original LV as an external origin
.br
(infers --type thin).
(infers \-\-type thin).
.br
.P
\fBlvconvert\fP \fB-T\fP|\fB--thin\fP \fB--thinpool\fP \fILV\fP \fILV\fP\fI_linear_striped_cache_raid\fP
@@ -1520,7 +1521,7 @@ Convert LV to a thin LV, using the original LV as an external origin
.br
-
Convert LV to type cache (infers --type cache).
Convert LV to type cache (infers \-\-type cache).
.br
.P
\fBlvconvert\fP \fB-H\fP|\fB--cache\fP \fB--cachepool\fP \fILV\fP \fILV\fP\fI_linear_striped_thinpool_raid\fP
@@ -1605,11 +1606,11 @@ Swap metadata LV in a thin pool or cache pool (for repair only).
.br
-
Merge LV that was split from a mirror (variant, use --mergemirrors).
Merge LV that was split from a mirror (variant, use \-\-mergemirrors).
.br
Merge thin LV into its origin LV (variant, use --mergethin).
Merge thin LV into its origin LV (variant, use \-\-mergethin).
.br
Merge COW snapshot LV into its origin (variant, use --mergesnapshot).
Merge COW snapshot LV into its origin (variant, use \-\-mergesnapshot).
.br
.P
\fBlvconvert\fP \fB--merge\fP \fIVG\fP|\fILV\fP\fI_linear_striped_snapshot_thin_raid\fP|\fITag\fP ...
@@ -1685,7 +1686,7 @@ origin LV (first arg) to reverse a splitsnapshot command.
.br
-
Poll LV to continue conversion (also see --startpoll).
Poll LV to continue conversion (also see \-\-startpoll).
.br
.P
\fBlvconvert\fP \fILV\fP\fI_mirror_raid\fP
@@ -1718,19 +1719,19 @@ the current metadata LV with LV2 (for repair purposes.)
.SH EXAMPLES
Convert a linear LV to a two-way mirror LV.
.br
.B lvconvert \-\-type mirror \-\-mirrors 1 vg/lvol1
.B lvconvert --type mirror --mirrors 1 vg/lvol1
Convert a linear LV to a two-way RAID1 LV.
.br
.B lvconvert \-\-type raid1 \-\-mirrors 1 vg/lvol1
.B lvconvert --type raid1 --mirrors 1 vg/lvol1
Convert a mirror LV to use an in\-memory log.
Convert a mirror LV to use an in-memory log.
.br
.B lvconvert \-\-mirrorlog core vg/lvol1
.B lvconvert --mirrorlog core vg/lvol1
Convert a mirror LV to use a disk log.
.br
.B lvconvert \-\-mirrorlog disk vg/lvol1
.B lvconvert --mirrorlog disk vg/lvol1
Convert a mirror or raid1 LV to a linear LV.
.br
@@ -1738,76 +1739,76 @@ Convert a mirror or raid1 LV to a linear LV.
Convert a mirror LV to a raid1 LV with the same number of images.
.br
.B lvconvert \-\-type raid1 vg/lvol1
.B lvconvert --type raid1 vg/lvol1
Convert a linear LV to a two-way mirror LV, allocating new extents from specific
PV ranges.
.br
.B lvconvert \-\-mirrors 1 vg/lvol1 /dev/sda:0\-15 /dev/sdb:0\-15
.B lvconvert --mirrors 1 vg/lvol1 /dev/sda:0-15 /dev/sdb:0-15
Convert a mirror LV to a linear LV, freeing physical extents from a specific PV.
.br
.B lvconvert \-\-type linear vg/lvol1 /dev/sda
.B lvconvert --type linear vg/lvol1 /dev/sda
Split one image from a mirror or raid1 LV, making it a new LV.
.br
.B lvconvert \-\-splitmirrors 1 \-\-name lv_split vg/lvol1
.B lvconvert --splitmirrors 1 --name lv_split vg/lvol1
Split one image from a raid1 LV, and track changes made to the raid1 LV
while the split image remains detached.
.br
.B lvconvert \-\-splitmirrors 1 \-\-trackchanges vg/lvol1
.B lvconvert --splitmirrors 1 --trackchanges vg/lvol1
Merge an image (that was previously created with \-\-splitmirrors and
\-\-trackchanges) back into the original raid1 LV.
Merge an image (that was previously created with --splitmirrors and
--trackchanges) back into the original raid1 LV.
.br
.B lvconvert \-\-mergemirrors vg/lvol1_rimage_1
.B lvconvert --mergemirrors vg/lvol1_rimage_1
Replace PV /dev/sdb1 with PV /dev/sdf1 in a raid1/4/5/6/10 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 vg/lvol1 /dev/sdf1
.B lvconvert --replace /dev/sdb1 vg/lvol1 /dev/sdf1
Replace 3 PVs /dev/sd[b-d]1 with PVs /dev/sd[f-h]1 in a raid1 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 \-\-replace /dev/sdd1
.B lvconvert --replace /dev/sdb1 --replace /dev/sdc1 --replace /dev/sdd1
.RS
.B vg/lvol1 /dev/sd[fgh]1
.RE
Replace the maximum of 2 PVs /dev/sd[bc]1 with PVs /dev/sd[gh]1 in a raid6 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
.B lvconvert --replace /dev/sdb1 --replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV.
is used as an external read-only origin for the new thin LV.
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1 vg/lvol1
.B lvconvert --type thin --thinpool vg/tpool1 vg/lvol1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV, and is
is used as an external read-only origin for the new thin LV, and is
renamed "external".
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1
.B lvconvert --type thin --thinpool vg/tpool1
.RS
.B \-\-originname external vg/lvol1
.B --originname external vg/lvol1
.RE
Convert an LV to a cache pool LV using another specified LV for cache pool
metadata.
.br
.B lvconvert \-\-type cache-pool \-\-poolmetadata vg/poolmeta1 vg/lvol1
.B lvconvert --type cache-pool --poolmetadata vg/poolmeta1 vg/lvol1
Convert an LV to a cache LV using the specified cache pool and chunk size.
.br
.B lvconvert \-\-type cache \-\-cachepool vg/cpool1 \-c 128 vg/lvol1
.B lvconvert --type cache --cachepool vg/cpool1 -c 128 vg/lvol1
Detach and keep the cache pool from a cache LV.
.br
.B lvconvert \-\-splitcache vg/lvol1
.B lvconvert --splitcache vg/lvol1
Detach and remove the cache pool from a cache LV.
.br
.B lvconvert \-\-uncache vg/lvol1
.B lvconvert --uncache vg/lvol1
.SH SEE ALSO
.BR lvm (8)

View File

@@ -3,56 +3,56 @@
Create a striped LV with 3 stripes, a stripe size of 8KiB and a size of 100MiB.
The LV name is chosen by lvcreate.
.br
.B lvcreate \-i 3 \-I 8 \-L 100m vg00
.B lvcreate -i 3 -I 8 -L 100m vg00
Create a raid1 LV with two images, and a usable size of 500 MiB. This
operation requires two devices, one for each mirror image. RAID metadata
(superblock and bitmap) is also included on the two devices.
.br
.B lvcreate \-\-type raid1 \-m1 \-L 500m \-n mylv vg00
.B lvcreate --type raid1 -m1 -L 500m -n mylv vg00
Create a mirror LV with two images, and a usable size of 500 MiB.
This operation requires three devices: two for mirror images and
one for a disk log.
.br
.B lvcreate \-\-type mirror \-m1 \-L 500m \-n mylv vg00
.B lvcreate --type mirror -m1 -L 500m -n mylv vg00
Create a mirror LV with 2 images, and a usable size of 500 MiB.
This operation requires 2 devices because the log is in memory.
.br
.B lvcreate \-\-type mirror \-m1 \-\-mirrorlog core \-L 500m \-n mylv vg00
.B lvcreate --type mirror -m1 --mirrorlog core -L 500m -n mylv vg00
Create a copy\-on\-write snapshot of an LV:
Create a copy-on-write snapshot of an LV:
.br
.B lvcreate \-\-snapshot \-\-size 100m \-\-name mysnap vg00/mylv
.B lvcreate --snapshot --size 100m --name mysnap vg00/mylv
Create a copy\-on\-write snapshot with a size sufficient
Create a copy-on-write snapshot with a size sufficient
for overwriting 20% of the size of the original LV.
.br
.B lvcreate \-s \-l 20%ORIGIN \-n mysnap vg00/mylv
.B lvcreate -s -l 20%ORIGIN -n mysnap vg00/mylv
Create a sparse LV with 1TiB of virtual space, and actual space just under
100MiB.
.br
.B lvcreate \-\-snapshot \-\-virtualsize 1t \-\-size 100m \-\-name mylv vg00
.B lvcreate --snapshot --virtualsize 1t --size 100m --name mylv vg00
Create a linear LV with a usable size of 64MiB on specific physical extents.
.br
.B lvcreate \-L 64m \-n mylv vg00 /dev/sda:0\-7 /dev/sdb:0\-7
.B lvcreate -L 64m -n mylv vg00 /dev/sda:0-7 /dev/sdb:0-7
Create a RAID5 LV with a usable size of 5GiB, 3 stripes, a stripe size of
64KiB, using a total of 4 devices (including one for parity).
.br
.B lvcreate \-\-type raid5 \-L 5G \-i 3 \-I 64 \-n mylv vg00
.B lvcreate --type raid5 -L 5G -i 3 -I 64 -n mylv vg00
Create a RAID5 LV using all of the free space in the VG and spanning all the
PVs in the VG (note that the command will fail if there are more than 8 PVs in
the VG, in which case \fB\-i 7\fP must be used to get to the current maximum of
the VG, in which case \fB-i 7\fP must be used to get to the current maximum of
8 devices including parity for RaidLVs).
.br
.B lvcreate \-\-config allocation/raid_stripe_all_devices=1
.B lvcreate --config allocation/raid_stripe_all_devices=1
.RS
.B \-\-type raid5 \-l 100%FREE \-n mylv vg00
.B --type raid5 -l 100%FREE -n mylv vg00
.RE
Create a RAID10 LV with a usable size of 5GiB, using 2 stripes, each on
@@ -62,36 +62,36 @@ differently:
but \fB-m\fP specifies the number of images in addition
to the first image).
.br
.B lvcreate \-\-type raid10 \-L 5G \-i 2 \-m 1 \-n mylv vg00
.B lvcreate --type raid10 -L 5G -i 2 -m 1 -n mylv vg00
Create a 1TiB thin LV, first creating a new thin pool for it, where
the thin pool has 100MiB of space, uses 2 stripes, has a 64KiB stripe
size, and 256KiB chunk size.
.br
.B lvcreate \-\-type thin \-\-name mylv \-\-thinpool mypool
.B lvcreate --type thin --name mylv --thinpool mypool
.RS
.B \-V 1t \-L 100m \-i 2 \-I 64 \-c 256 vg00
.B -V 1t -L 100m -i 2 -I 64 -c 256 vg00
.RE
Create a thin snapshot of a thin LV (the size option must not be
used, otherwise a copy-on-write snapshot would be created).
.br
.B lvcreate \-\-snapshot \-\-name mysnap vg00/thinvol
.B lvcreate --snapshot --name mysnap vg00/thinvol
Create a thin snapshot of the read-only inactive LV named "origin"
which becomes an external origin for the thin snapshot LV.
.br
.B lvcreate \-\-snapshot \-\-name mysnap \-\-thinpool mypool vg00/origin
.B lvcreate --snapshot --name mysnap --thinpool mypool vg00/origin
Create a cache pool from a fast physical device. The cache pool can
then be used to cache an LV.
.br
.B lvcreate \-\-type cache-pool \-L 1G \-n my_cpool vg00 /dev/fast1
.B lvcreate --type cache-pool -L 1G -n my_cpool vg00 /dev/fast1
Create a cache LV, first creating a new origin LV on a slow physical device,
then combining the new origin LV with an existing cache pool.
.br
.B lvcreate \-\-type cache \-\-cachepool my_cpool
.B lvcreate --type cache --cachepool my_cpool
.RS
.B \-L 100G \-n mylv vg00 /dev/slow1
.B -L 100G -n mylv vg00 /dev/slow1
.RE

View File

@@ -1,6 +1,6 @@
.TH LVCREATE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVCREATE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvcreate \- Create a logical volume
lvcreate - Create a logical volume
.
.SH SYNOPSIS
\fBlvcreate\fP \fIoption_args\fP \fIposition_args\fP
@@ -301,7 +301,7 @@ Create a linear LV.
.RE
-
Create a striped LV (infers --type striped).
Create a striped LV (infers \-\-type striped).
.br
.P
\fBlvcreate\fP \fB-i\fP|\fB--stripes\fP \fINumber\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fIVG\fP
@@ -323,7 +323,7 @@ Create a striped LV (infers --type striped).
.RE
-
Create a raid1 or mirror LV (infers --type raid1|mirror).
Create a raid1 or mirror LV (infers \-\-type raid1|mirror).
.br
.P
\fBlvcreate\fP \fB-m\fP|\fB--mirrors\fP \fINumber\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fIVG\fP
@@ -570,7 +570,7 @@ Create a cache pool.
.RE
-
Create a thin LV in a thin pool (infers --type thin).
Create a thin LV in a thin pool (infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT] \fB--thinpool\fP \fILV\fP\fI_thinpool\fP \fIVG\fP
@@ -599,7 +599,7 @@ Create a thin LV in a thin pool (infers --type thin).
Create a thin LV that is a snapshot of an existing thin LV
.br
(infers --type thin).
(infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-s\fP|\fB--snapshot\fP \fILV\fP\fI_thin\fP
@@ -659,7 +659,7 @@ Create a thin LV that is a snapshot of an external origin LV.
Create a thin LV, first creating a thin pool for it,
.br
where the new thin pool is named by the --thinpool arg.
where the new thin pool is named by the \-\-thinpool arg.
.br
.P
\fBlvcreate\fP \fB--type\fP \fBthin\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT]
@@ -716,7 +716,7 @@ Create a cache LV, first creating a new origin LV,
.br
then combining it with the existing cache pool named
.br
by the --cachepool arg.
by the \-\-cachepool arg.
.br
.P
\fBlvcreate\fP \fB--type\fP \fBcache\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT]
@@ -927,7 +927,7 @@ New LVs are made active by default.
\fBn\fP makes the LV inactive, or unavailable, only when possible.
In some cases, creating an LV requires it to be active.
For example, COW snapshots of an active origin LV can only
be created in the active state (this does not apply to thin snapshots.)
be created in the active state (this does not apply to thin snapshots).
The --zero option normally requires the LV to be active.
If autoactivation \fBay\fP is used, the LV is only activated
if it matches an item in lvm.conf activation/auto_activation_volume_list.
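As a hedged sketch of this interaction (names hypothetical), creating an LV
inactive generally also requires disabling zeroing, since \fB--zero\fP needs
the LV active:
# lvcreate -an -Zn -L 1G -n lv1 vg00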
@@ -1200,7 +1200,8 @@ storage space, usually on a separate device from the data being mirrored.
\fBcore\fP is not persistent; the log is kept only in memory.
In this case, the mirror must be synchronized (by copying LV data from
the first device to others) each time the LV is activated, e.g. after reboot.
\fBmirrored\fP is a persistent log that is itself mirrored.
\fBmirrored\fP is a persistent log that is itself mirrored, but
should be avoided. Instead, use the raid1 type for log redundancy.
.ad b
.HP
.ad l
@@ -1394,7 +1395,7 @@ When creating a RAID 4/5/6 LV, this number does not include the extra
devices that are required for parity. The largest number depends on
the RAID type (raid0: 64, raid10: 32, raid4/5: 63, raid6: 62), and
when unspecified, the default depends on the RAID type
(raid0: 2, raid10: 4, raid4/5: 3, raid6: 5.)
(raid0: 2, raid10: 2, raid4/5: 3, raid6: 5.)
To stripe a new raid LV across all PVs by default,
see lvm.conf allocation/raid_stripe_all_devices.
.ad b
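For instance (VG name hypothetical), a raid6 LV with 5 stripes consumes
7 devices in total: 5 for data plus 2 for parity:
# lvcreate --type raid6 -i 5 -L 10G -n mylv vg00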
@@ -1606,7 +1607,7 @@ Create a linear LV.
.RE
-
Create a striped LV (also see lvcreate --stripes).
Create a striped LV (also see lvcreate \-\-stripes).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBstriped\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fIVG\fP
@@ -1632,7 +1633,7 @@ Create a striped LV (also see lvcreate --stripes).
.RE
-
Create a mirror LV (also see --type raid1).
Create a mirror LV (also see \-\-type raid1).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBmirror\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fIVG\fP
@@ -1672,7 +1673,7 @@ Create a mirror LV (also see --type raid1).
Create a COW snapshot LV of an origin LV
.br
(also see --snapshot).
(also see \-\-snapshot).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBsnapshot\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fILV\fP
@@ -1708,7 +1709,7 @@ Create a COW snapshot LV of an origin LV
Create a sparse COW snapshot LV of a virtual origin LV
.br
(also see --snapshot).
(also see \-\-snapshot).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBsnapshot\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT]
@@ -1766,7 +1767,7 @@ Create a sparse COW snapshot LV of a virtual origin LV.
.RE
-
Create a thin pool (infers --type thin-pool).
Create a thin pool (infers \-\-type thin-pool).
.br
.P
\fBlvcreate\fP \fB-T\fP|\fB--thin\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fIVG\fP
@@ -1816,9 +1817,9 @@ Create a thin pool (infers --type thin-pool).
.RE
-
Create a thin pool named by the --thinpool arg
Create a thin pool named by the \-\-thinpool arg
.br
(infers --type thin-pool).
(infers \-\-type thin-pool).
.br
.P
\fBlvcreate\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fB--thinpool\fP \fILV\fP\fI_new\fP \fIVG\fP
@@ -1872,9 +1873,9 @@ Create a thin pool named by the --thinpool arg
.RE
-
Create a cache pool named by the --cachepool arg
Create a cache pool named by the \-\-cachepool arg
.br
(variant, uses --cachepool in place of --name).
(variant, uses \-\-cachepool in place of \-\-name).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBcache-pool\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT]
@@ -1967,7 +1968,7 @@ Create a thin LV in a thin pool.
Create a thin LV in a thin pool named in the first arg
.br
(variant, also see --thinpool for naming pool).
(variant, also see \-\-thinpool for naming pool).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBthin\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT] \fILV\fP\fI_thinpool\fP
@@ -1992,7 +1993,7 @@ Create a thin LV in a thin pool named in the first arg
Create a thin LV in the thin pool named in the first arg
.br
(variant, infers --type thin, also see --thinpool for
(variant, infers \-\-type thin, also see \-\-thinpool for
.br
naming pool.)
.br
@@ -2046,7 +2047,7 @@ Create a thin LV that is a snapshot of an existing thin LV.
Create a thin LV that is a snapshot of an existing thin LV
.br
(infers --type thin).
(infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-T\fP|\fB--thin\fP \fILV\fP\fI_thin\fP
@@ -2071,7 +2072,7 @@ Create a thin LV that is a snapshot of an existing thin LV
Create a thin LV that is a snapshot of an external origin LV
.br
(infers --type thin).
(infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-s\fP|\fB--snapshot\fP \fB--thinpool\fP \fILV\fP\fI_thinpool\fP \fILV\fP
@@ -2096,14 +2097,14 @@ Create a thin LV that is a snapshot of an external origin LV
Create a thin LV, first creating a thin pool for it,
.br
where the new thin pool is named by the --thinpool arg
where the new thin pool is named by the \-\-thinpool arg
.br
(variant, infers --type thin).
(variant, infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-T\fP|\fB--thin\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT]
\fBlvcreate\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT] \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT]
.RS 5
\fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fB--thinpool\fP \fILV\fP\fI_new\fP
\fB--thinpool\fP \fILV\fP\fI_new\fP
.RE
.br
.RS 4
@@ -2112,6 +2113,10 @@ where the new thin pool is named by the --thinpool arg
.ad b
.br
.ad l
[ \fB-T\fP|\fB--thin\fP ]
.ad b
.br
.ad l
[ \fB-c\fP|\fB--chunksize\fP \fISize\fP[k|UNIT] ]
.ad b
.br
@@ -2124,7 +2129,60 @@ where the new thin pool is named by the --thinpool arg
.ad b
.br
.ad l
[ \fB--type\fP \fBthin\fP ]
[ \fB--poolmetadatasize\fP \fISize\fP[m|UNIT] ]
.ad b
.br
.ad l
[ \fB--poolmetadataspare\fP \fBy\fP|\fBn\fP ]
.ad b
.br
.ad l
[ \fB--discards\fP \fBpassdown\fP|\fBnopassdown\fP|\fBignore\fP ]
.ad b
.br
.ad l
[ \fB--errorwhenfull\fP \fBy\fP|\fBn\fP ]
.ad b
.br
[ COMMON_OPTIONS ]
.RE
.br
.RS 4
[ \fIPV\fP ... ]
.RE
-
Create a thin LV, first creating a thin pool for it,
.br
where the new thin pool is named by the \-\-thinpool arg
.br
(variant, infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT] \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT]
.RS 5
\fB--thinpool\fP \fILV\fP\fI_new\fP \fIVG\fP
.RE
.br
.RS 4
.ad l
[ \fB-l\fP|\fB--extents\fP \fINumber\fP[PERCENT] ]
.ad b
.br
.ad l
[ \fB-T\fP|\fB--thin\fP ]
.ad b
.br
.ad l
[ \fB-c\fP|\fB--chunksize\fP \fISize\fP[k|UNIT] ]
.ad b
.br
.ad l
[ \fB-i\fP|\fB--stripes\fP \fINumber\fP ]
.ad b
.br
.ad l
[ \fB-I\fP|\fB--stripesize\fP \fISize\fP[k|UNIT] ]
.ad b
.br
.ad l
@@ -2216,7 +2274,7 @@ where the new thin pool is named in the first arg,
.br
or the new thin pool name is generated when the first
.br
arg is a VG name (variant, infers --type thin).
arg is a VG name (variant, infers \-\-type thin).
.br
.P
\fBlvcreate\fP \fB-T\fP|\fB--thin\fP \fB-V\fP|\fB--virtualsize\fP \fISize\fP[m|UNIT]
@@ -2242,10 +2300,6 @@ arg is a VG name (variant, infers --type thin).
.ad b
.br
.ad l
[ \fB--type\fP \fBthin\fP ]
.ad b
.br
.ad l
[ \fB--poolmetadatasize\fP \fISize\fP[m|UNIT] ]
.ad b
.br
@@ -2271,13 +2325,13 @@ arg is a VG name (variant, infers --type thin).
Create a thin LV, first creating a thin pool for it
.br
(infers --type thin).
(infers \-\-type thin).
.br
Create a sparse snapshot of a virtual origin LV
.br
(infers --type snapshot).
(infers \-\-type snapshot).
.br
Chooses --type thin or --type snapshot according to
Chooses \-\-type thin or \-\-type snapshot according to
.br
config setting sparse_segtype_default.
.br
@@ -2290,10 +2344,6 @@ config setting sparse_segtype_default.
.ad b
.br
.ad l
[ \fB-T\fP|\fB--thin\fP ]
.ad b
.br
.ad l
[ \fB-s\fP|\fB--snapshot\fP ]
.ad b
.br
@@ -2310,7 +2360,7 @@ config setting sparse_segtype_default.
.ad b
.br
.ad l
[ \fB--type\fP \fBthin\fP ]
[ \fB--type\fP \fBsnapshot\fP ]
.ad b
.br
.ad l
@@ -2341,7 +2391,7 @@ Create a cache LV, first creating a new origin LV,
.br
then combining it with the existing cache pool named
.br
by the --cachepool arg (variant, infers --type cache).
by the \-\-cachepool arg (variant, infers \-\-type cache).
.br
.P
\fBlvcreate\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fB--cachepool\fP \fILV\fP\fI_cachepool\fP \fIVG\fP
@@ -2399,7 +2449,7 @@ Create a cache LV, first creating a new origin LV,
.br
then combining it with the existing cache pool named
.br
in the first arg (variant, also use --cachepool).
in the first arg (variant, also use \-\-cachepool).
.br
.P
\fBlvcreate\fP \fB--type\fP \fBcache\fP \fB-L\fP|\fB--size\fP \fISize\fP[m|UNIT] \fILV\fP\fI_cachepool\fP
@@ -2463,7 +2513,7 @@ first creating a new origin LV, then combining it with
.br
the existing cache pool named in the first arg
.br
(variant, infers --type cache, also use --cachepool).
(variant, infers \-\-type cache, also use \-\-cachepool).
.br
When LV is not a cache pool, convert the specified LV
.br
@@ -2528,56 +2578,56 @@ to type cache after creating a new cache pool LV to use
Create a striped LV with 3 stripes, a stripe size of 8KiB and a size of 100MiB.
The LV name is chosen by lvcreate.
.br
.B lvcreate \-i 3 \-I 8 \-L 100m vg00
.B lvcreate -i 3 -I 8 -L 100m vg00
Create a raid1 LV with two images, and a usable size of 500 MiB. This
operation requires two devices, one for each mirror image. RAID metadata
(superblock and bitmap) is also included on the two devices.
.br
.B lvcreate \-\-type raid1 \-m1 \-L 500m \-n mylv vg00
.B lvcreate --type raid1 -m1 -L 500m -n mylv vg00
Create a mirror LV with two images, and a usable size of 500 MiB.
This operation requires three devices: two for mirror images and
one for a disk log.
.br
.B lvcreate \-\-type mirror \-m1 \-L 500m \-n mylv vg00
.B lvcreate --type mirror -m1 -L 500m -n mylv vg00
Create a mirror LV with 2 images, and a usable size of 500 MiB.
This operation requires 2 devices because the log is in memory.
.br
.B lvcreate \-\-type mirror \-m1 \-\-mirrorlog core \-L 500m \-n mylv vg00
.B lvcreate --type mirror -m1 --mirrorlog core -L 500m -n mylv vg00
Create a copy\-on\-write snapshot of an LV:
Create a copy-on-write snapshot of an LV:
.br
.B lvcreate \-\-snapshot \-\-size 100m \-\-name mysnap vg00/mylv
.B lvcreate --snapshot --size 100m --name mysnap vg00/mylv
Create a copy\-on\-write snapshot with a size sufficient
Create a copy-on-write snapshot with a size sufficient
for overwriting 20% of the size of the original LV.
.br
.B lvcreate \-s \-l 20%ORIGIN \-n mysnap vg00/mylv
.B lvcreate -s -l 20%ORIGIN -n mysnap vg00/mylv
Create a sparse LV with 1TiB of virtual space, and actual space just under
100MiB.
.br
.B lvcreate \-\-snapshot \-\-virtualsize 1t \-\-size 100m \-\-name mylv vg00
.B lvcreate --snapshot --virtualsize 1t --size 100m --name mylv vg00
Create a linear LV with a usable size of 64MiB on specific physical extents.
.br
.B lvcreate \-L 64m \-n mylv vg00 /dev/sda:0\-7 /dev/sdb:0\-7
.B lvcreate -L 64m -n mylv vg00 /dev/sda:0-7 /dev/sdb:0-7
Create a RAID5 LV with a usable size of 5GiB, 3 stripes, a stripe size of
64KiB, using a total of 4 devices (including one for parity).
.br
.B lvcreate \-\-type raid5 \-L 5G \-i 3 \-I 64 \-n mylv vg00
.B lvcreate --type raid5 -L 5G -i 3 -I 64 -n mylv vg00
Create a RAID5 LV using all of the free space in the VG and spanning all the
PVs in the VG (note that the command will fail if there are more than 8 PVs in
the VG, in which case \fB\-i 7\fP must be used to get to the current maximum of
the VG, in which case \fB-i 7\fP must be used to get to the current maximum of
8 devices including parity for RaidLVs).
.br
.B lvcreate \-\-config allocation/raid_stripe_all_devices=1
.B lvcreate --config allocation/raid_stripe_all_devices=1
.RS
.B \-\-type raid5 \-l 100%FREE \-n mylv vg00
.B --type raid5 -l 100%FREE -n mylv vg00
.RE
Create a RAID10 LV with a usable size of 5GiB, using 2 stripes, each on
@@ -2587,38 +2637,38 @@ differently:
but \fB-m\fP specifies the number of images in addition
to the first image).
.br
.B lvcreate \-\-type raid10 \-L 5G \-i 2 \-m 1 \-n mylv vg00
.B lvcreate --type raid10 -L 5G -i 2 -m 1 -n mylv vg00
Create a 1TiB thin LV, first creating a new thin pool for it, where
the thin pool has 100MiB of space, uses 2 stripes, has a 64KiB stripe
size, and 256KiB chunk size.
.br
.B lvcreate \-\-type thin \-\-name mylv \-\-thinpool mypool
.B lvcreate --type thin --name mylv --thinpool mypool
.RS
.B \-V 1t \-L 100m \-i 2 \-I 64 \-c 256 vg00
.B -V 1t -L 100m -i 2 -I 64 -c 256 vg00
.RE
Create a thin snapshot of a thin LV (the size option must not be
used, otherwise a copy-on-write snapshot would be created).
.br
.B lvcreate \-\-snapshot \-\-name mysnap vg00/thinvol
.B lvcreate --snapshot --name mysnap vg00/thinvol
Create a thin snapshot of the read-only inactive LV named "origin"
which becomes an external origin for the thin snapshot LV.
.br
.B lvcreate \-\-snapshot \-\-name mysnap \-\-thinpool mypool vg00/origin
.B lvcreate --snapshot --name mysnap --thinpool mypool vg00/origin
Create a cache pool from a fast physical device. The cache pool can
then be used to cache an LV.
.br
.B lvcreate \-\-type cache-pool \-L 1G \-n my_cpool vg00 /dev/fast1
.B lvcreate --type cache-pool -L 1G -n my_cpool vg00 /dev/fast1
Create a cache LV, first creating a new origin LV on a slow physical device,
then combining the new origin LV with an existing cache pool.
.br
.B lvcreate \-\-type cache \-\-cachepool my_cpool
.B lvcreate --type cache --cachepool my_cpool
.RS
.B \-L 100G \-n mylv vg00 /dev/slow1
.B -L 100G -n mylv vg00 /dev/slow1
.RE
.SH SEE ALSO

View File

@@ -1,6 +1,6 @@
.TH LVDISPLAY 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVDISPLAY 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvdisplay \- Display information about a logical volume
lvdisplay - Display information about a logical volume
.
.SH SYNOPSIS
\fBlvdisplay\fP

View File

@@ -2,7 +2,7 @@ lvextend extends the size of an LV. This requires allocating logical
extents from the VG's free physical extents. If the extension adds a new
LV segment, the new segment will use the existing segment type of the LV.
Extending a copy\-on\-write snapshot LV adds space for COW blocks.
Extending a copy-on-write snapshot LV adds space for COW blocks.
Use \fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.

View File

@@ -1,14 +1,14 @@
.SH EXAMPLES
Extend the size of an LV by 54MiB, using a specific PV.
.br
.B lvextend \-L +54 vg01/lvol10 /dev/sdk3
.B lvextend -L +54 vg01/lvol10 /dev/sdk3
Extend the size of an LV by the amount of free
space on PV /dev/sdk3. This is equivalent to specifying
"\-l +100%PVS" on the command line.
"-l +100%PVS" on the command line.
.br
.B lvextend vg01/lvol01 /dev/sdk3
Extend an LV by 16MiB using specific physical extents.
.br
.B lvextend \-L+16m vg01/lvol01 /dev/sda:8\-9 /dev/sdb:8\-9
.B lvextend -L+16m vg01/lvol01 /dev/sda:8-9 /dev/sdb:8-9

View File

@@ -1,6 +1,6 @@
.TH LVEXTEND 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVEXTEND 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvextend \- Add space to a logical volume
lvextend - Add space to a logical volume
.
.SH SYNOPSIS
\fBlvextend\fP \fIoption_args\fP \fIposition_args\fP
@@ -126,7 +126,7 @@ lvextend extends the size of an LV. This requires allocating logical
extents from the VG's free physical extents. If the extension adds a new
LV segment, the new segment will use the existing segment type of the LV.
Extending a copy\-on\-write snapshot LV adds space for COW blocks.
Extending a copy-on-write snapshot LV adds space for COW blocks.
Use \fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.
@@ -528,7 +528,7 @@ When creating a RAID 4/5/6 LV, this number does not include the extra
devices that are required for parity. The largest number depends on
the RAID type (raid0: 64, raid10: 32, raid4/5: 63, raid6: 62), and
when unspecified, the default depends on the RAID type
(raid0: 2, raid10: 4, raid4/5: 3, raid6: 5.)
(raid0: 2, raid10: 2, raid4/5: 3, raid6: 5.)
To stripe a new raid LV across all PVs by default,
see lvm.conf allocation/raid_stripe_all_devices.
.ad b
@@ -633,17 +633,17 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
.SH EXAMPLES
Extend the size of an LV by 54MiB, using a specific PV.
.br
.B lvextend \-L +54 vg01/lvol10 /dev/sdk3
.B lvextend -L +54 vg01/lvol10 /dev/sdk3
Extend the size of an LV by the amount of free
space on PV /dev/sdk3. This is equivalent to specifying
"\-l +100%PVS" on the command line.
"-l +100%PVS" on the command line.
.br
.B lvextend vg01/lvol01 /dev/sdk3
Extend an LV by 16MiB using specific physical extents.
.br
.B lvextend \-L+16m vg01/lvol01 /dev/sda:8\-9 /dev/sdb:8\-9
.B lvextend -L+16m vg01/lvol01 /dev/sda:8-9 /dev/sdb:8-9
.SH SEE ALSO
.BR lvm (8)

View File

@@ -2,4 +2,4 @@ This command is the same as \fBlvmconfig\fP(8).
lvm config produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.

View File

@@ -1,6 +1,6 @@
.TH LVM CONFIG 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVM CONFIG 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvm config \- Display and manipulate configuration information
lvm config - Display and manipulate configuration information
.
.SH SYNOPSIS
\fBlvm config\fP
@@ -14,7 +14,7 @@ This command is the same as \fBlvmconfig\fP(8).
lvm config produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.
.SH USAGE
\fBlvm config\fP
.br

View File

@@ -2,4 +2,4 @@ This command is the same as \fBlvmconfig\fP(8).
lvm dumpconfig produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.

View File

@@ -1,6 +1,6 @@
.TH LVM DUMPCONFIG 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVM DUMPCONFIG 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvm dumpconfig \- Display and manipulate configuration information
lvm dumpconfig - Display and manipulate configuration information
.
.SH SYNOPSIS
\fBlvm dumpconfig\fP
@@ -14,7 +14,7 @@ This command is the same as \fBlvmconfig\fP(8).
lvm dumpconfig produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.
.SH USAGE
\fBlvm dumpconfig\fP
.br

View File

@@ -1,6 +1,6 @@
.TH LVM FULLREPORT 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVM FULLREPORT 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvm fullreport \- Display full report
lvm fullreport - Display full report
.
.SH SYNOPSIS
\fBlvm fullreport\fP

View File

@@ -1,6 +1,6 @@
.TH LVM LVPOLL 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVM LVPOLL 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvm lvpoll \- Continue already initiated poll operation on a logical volume
lvm lvpoll - Continue already initiated poll operation on a logical volume
.
.SH SYNOPSIS
\fBlvm lvpoll\fP \fIoption_args\fP \fIposition_args\fP

View File

@@ -36,7 +36,7 @@ as "vg0/lvol0". Where a list of VGs is required but is left empty,
a list of all VGs will be substituted. Where a list of LVs is required
but a VG is given, a list of all the LVs in that VG will be substituted.
So \fBlvdisplay vg0\fP will display all the LVs in "vg0".
Tags can also be used - see \fB\-\-addtag\fP below.
Tags can also be used - see \fB--addtag\fP below.
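A hedged sketch of tag use (tag, VG and LV names hypothetical); tags are
written with a leading @ on the command line:
# lvchange --addtag @database vg0/lvol0
# lvchange -ay @database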
.P
One advantage of using the built-in shell is that configuration
information gets cached internally between commands.
@@ -47,7 +47,7 @@ executed directly if the first line is #! followed by the absolute
path of \fBlvm\fP.
.P
Additional hyphens within option names are ignored. For example,
\fB\-\-readonly\fP and \fB\-\-read\-only\fP are both accepted.
\fB--readonly\fP and \fB--read-only\fP are both accepted.
.
.SH BUILT-IN COMMANDS
.
@@ -313,7 +313,7 @@ those ranges on the specified Physical Volumes are considered.
Then they try each allocation policy in turn, starting with the strictest
policy (\fBcontiguous\fP) and ending with the allocation policy specified
using \fB\-\-alloc\fP or set as the default for the particular Logical
using \fB--alloc\fP or set as the default for the particular Logical
Volume or Volume Group concerned. For each policy, working from the
lowest-numbered Logical Extent of the empty Logical Volume space that
needs to be filled, they allocate as much space as possible according to
@@ -371,7 +371,7 @@ restrictions described above applied to each step leave the tools no
discretion over the layout.
To view the way the allocation process currently works in any specific
case, read the debug logging output, for example by adding \fB\-vvvv\fP to
case, read the debug logging output, for example by adding \fB-vvvv\fP to
a command.
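For instance (names hypothetical, and assuming debug messages go to
standard error as usual), capturing the allocation decisions made for an
extension:
# lvextend -vvvv -L+1G vg0/lvol0 2> alloc-debug.log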
.
.SH LOGICAL VOLUME TYPES
@@ -409,7 +409,7 @@ File descriptor to use for report output from LVM commands.
.TP
.B LVM_COMMAND_PROFILE
Name of default command profile to use for LVM commands. This profile
is overridden by direct use of the \fB\-\-commandprofile\fP command line option.
is overridden by direct use of the \fB--commandprofile\fP command line option.
.TP
.B LVM_RUN_BY_DMEVENTD
This variable is normally set by the dmeventd plugin to inform the lvm2 command

View File

@@ -14,7 +14,7 @@ The settings defined in lvm.conf can be overridden by any
of these extended configuration methods:
.TP
.B direct config override on command line
The \fB\-\-config ConfigurationString\fP command line option takes the
The \fB--config ConfigurationString\fP command line option takes the
ConfigurationString as direct string representation of the configuration
to override the existing configuration. The ConfigurationString is of
exactly the same format as used in any LVM configuration file.
@@ -34,7 +34,7 @@ The \fBcommand profile\fP is used to override selected configuration
settings at the global LVM command level - it is applied at the very beginning
of LVM command execution and is used throughout the whole of the LVM
command's execution. The command profile is applied by using the
\fB\-\-commandprofile ProfileName\fP command line option that is recognised by
\fB--commandprofile ProfileName\fP command line option that is recognised by
all LVM2 commands.
The \fBmetadata profile\fP is used to override selected configuration
@@ -46,8 +46,8 @@ processed, the profile is applied automatically. If Volume Group and
any of its Logical Volumes have different profiles defined, the profile
defined for the Logical Volume is preferred. The metadata profile can be
attached/detached by using the \fBlvchange\fP and \fBvgchange\fP commands
and their \fB\-\-metadataprofile ProfileName\fP and
\fB\-\-detachprofile\fP options or the \fB\-\-metadataprofile\fP
and their \fB--metadataprofile ProfileName\fP and
\fB--detachprofile\fP options or the \fB--metadataprofile\fP
option during creation when using \fBvgcreate\fP or \fBlvcreate\fP command.
The \fBvgs\fP and \fBlvs\fP reporting commands provide \fB-o vg_profile\fP
and \fB-o lv_profile\fP output options to show the metadata profile
@@ -65,8 +65,8 @@ For this purpose, there's the \fBcommand_profile_template.profile\fP
(for metadata profiles) which contain all settings that are customizable
by profiles of a certain type. Users are encouraged to copy these template
profiles and edit them as needed. Alternatively, the
\fBlvmconfig \-\-file <ProfileName.profile> \-\-type profilable-command <section>\fP
or \fBlvmconfig \-\-file <ProfileName.profile> \-\-type profilable-metadata <section>\fP
\fBlvmconfig --file <ProfileName.profile> --type profilable-command <section>\fP
or \fBlvmconfig --file <ProfileName.profile> --type profilable-metadata <section>\fP
can be used to generate a configuration with profilable settings of either
type for a given section and save it to a new ProfileName.profile
(if the section is not specified, all profilable settings are reported).
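A hedged sketch of this workflow (profile, VG and LV names hypothetical;
the generated file is assumed to be placed in LVM's profile directory so
that commands can find it):
# lvmconfig --file mythin.profile --type profilable-metadata allocation
# lvchange --metadataprofile mythin vg00/lvol1
# lvs -o lv_profile vg00/lvol1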
@@ -166,30 +166,30 @@ See the man page
Command to print a list of all possible config settings, with their
default values:
.br
.B lvmconfig \-\-type default
.B lvmconfig --type default
Command to print a list of all possible config settings, with their
default values, and a full description of each as a comment:
.br
.B lvmconfig \-\-type default --withcomments
.B lvmconfig --type default --withcomments
Command to print a list of all possible config settings, with their
current values (configured, non-default values are shown):
.br
.B lvmconfig \-\-type current
.B lvmconfig --type current
Command to print all config settings that have been configured with a
different value than the default (configured, non-default values are
shown):
.br
.B lvmconfig \-\-type diff
.B lvmconfig --type diff
Command to print a single config setting, with its default value,
and a full description, where "Section" refers to the config section,
e.g. global, and "Setting" refers to the name of the specific setting,
e.g. umask:
.br
.B lvmconfig \-\-type default --withcomments Section/Setting
.B lvmconfig --type default --withcomments Section/Setting
.SH FILES

View File

@@ -1,6 +1,6 @@
.TH "LVM2-ACTIVATION-GENERATOR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH "NAME"
lvm2-activation-generator \- generator for systemd units to activate LVM2 volumes on boot
lvm2-activation-generator - generator for systemd units to activate LVM2 volumes on boot
.SH SYNOPSIS
.B #SYSTEMD_GENERATOR_DIR#/lvm2-activation-generator
.sp
@@ -12,7 +12,7 @@ option is used). Otherwise, if \fBlvmetad\fP(8) is enabled,
the lvm2-activation-generator exits immediately without generating
any systemd units, and LVM2 relies fully on event-based activation
to activate the LVM2 volumes instead, using the \fBpvscan\fP(8)
(pvscan \-\-cache -aay) call that is part of the \fBudev\fP(8) rules.
(pvscan --cache -aay) call that is part of the \fBudev\fP(8) rules.
These systemd units are generated by lvm2-activation-generator:
.sp

View File

@@ -36,11 +36,11 @@ The primary method for using a cache type logical volume:
Create an LV or identify an existing LV to be the origin LV.
.B lvcreate \-n OriginLV \-L LargeSize VG SlowPVs
.B lvcreate -n OriginLV -L LargeSize VG SlowPVs
.I Example
.br
# lvcreate \-n lvol0 \-L 100G vg
# lvcreate -n lvol0 -L 100G vg
.SS 1. create CacheDataLV
@@ -49,11 +49,11 @@ Create the cache data LV. This LV will hold data blocks from the
OriginLV. The size of this LV is the size of the cache and will be
reported as the size of the cache pool LV.
.B lvcreate \-n CacheDataLV \-L CacheSize VG FastPVs
.B lvcreate -n CacheDataLV -L CacheSize VG FastPVs
.I Example
.br
# lvcreate \-n cache0 \-L 10G vg /dev/fast
# lvcreate -n cache0 -L 10G vg /dev/fast
.SS 2. create CacheMetaLV
@@ -62,11 +62,11 @@ Create the cache metadata LV. This LV will hold cache pool metadata. The
size of this LV should be 1000 times smaller than the cache data LV, with
a minimum size of 8MiB.
.B lvcreate \-n CacheMetaLV \-L MetaSize VG FastPVs
.B lvcreate -n CacheMetaLV -L MetaSize VG FastPVs
.I Example
.br
# lvcreate \-n cache0meta \-L 12M vg /dev/fast
# lvcreate -n cache0meta -L 12M vg /dev/fast
.nf
# lvs -a vg
@@ -88,14 +88,14 @@ CacheDataLV is renamed CachePoolLV_cdata and becomes hidden.
.br
CacheMetaLV is renamed CachePoolLV_cmeta and becomes hidden.
.B lvconvert \-\-type cache-pool \-\-poolmetadata VG/CacheMetaLV
.B lvconvert --type cache-pool --poolmetadata VG/CacheMetaLV
.RS
.B VG/CacheDataLV
.RE
.I Example
.br
# lvconvert \-\-type cache\-pool \-\-poolmetadata vg/cache0meta vg/cache0
# lvconvert --type cache-pool --poolmetadata vg/cache0meta vg/cache0
.nf
# lvs -a vg
@@ -118,11 +118,11 @@ CacheLV takes the name of OriginLV.
.br
OriginLV is renamed OriginLV_corig and becomes hidden.
.B lvconvert \-\-type cache \-\-cachepool VG/CachePoolLV VG/OriginLV
.B lvconvert --type cache --cachepool VG/CachePoolLV VG/OriginLV
.I Example
.br
# lvconvert \-\-type cache \-\-cachepool vg/cache0 vg/lvol0
# lvconvert --type cache --cachepool vg/cache0 vg/lvol0
.nf
# lvs -a vg
@@ -198,21 +198,21 @@ pool sub-LVs redundant.
.I Example
.nf
0. Create an origin LV we wish to cache
# lvcreate \-L 10G \-n lv1 vg /dev/slow_devs
# lvcreate -L 10G -n lv1 vg /dev/slow_devs
1. Create a 2-way RAID1 cache data LV
# lvcreate \-\-type raid1 \-m 1 \-L 1G -n cache1 vg \\
# lvcreate --type raid1 -m 1 -L 1G -n cache1 vg \\
/dev/fast1 /dev/fast2
2. Create a 2-way RAID1 cache metadata LV
# lvcreate \-\-type raid1 \-m 1 \-L 8M -n cache1meta vg \\
# lvcreate --type raid1 -m 1 -L 8M -n cache1meta vg \\
/dev/fast1 /dev/fast2
3. Create a cache pool LV combining cache data LV and cache metadata LV
# lvconvert \-\-type cache\-pool \-\-poolmetadata vg/cache1meta vg/cache1
# lvconvert --type cache-pool --poolmetadata vg/cache1meta vg/cache1
4. Create a cached LV by combining the cache pool LV and origin LV
# lvconvert \-\-type cache \-\-cachepool vg/cache1 vg/lv1
# lvconvert --type cache --cachepool vg/cache1 vg/lv1
.fi
.SS Cache mode
@@ -229,11 +229,11 @@ from the cache pool back to the origin LV. This mode will increase
performance, but the loss of a device associated with the cache pool LV
can result in lost data.
With the \-\-cachemode option, the cache mode can be set when creating a
With the --cachemode option, the cache mode can be set when creating a
cache LV, or changed on an existing cache LV. The current cache mode of a
cache LV can be displayed with the cache_mode reporting option:
.B lvs \-o+cache_mode VG/CacheLV
.B lvs -o+cache_mode VG/CacheLV
.BR lvm.conf (5)
.B allocation/cache_mode
@@ -243,21 +243,21 @@ defines the default cache mode.
.I Example
.nf
0. Create an origin LV we wish to cache (yours may already exist)
# lvcreate \-L 10G \-n lv1 vg /dev/slow
# lvcreate -L 10G -n lv1 vg /dev/slow
1. Create a cache data LV
# lvcreate \-L 1G \-n cache1 vg /dev/fast
# lvcreate -L 1G -n cache1 vg /dev/fast
2. Create a cache metadata LV
# lvcreate \-L 8M \-n cache1meta vg /dev/fast
# lvcreate -L 8M -n cache1meta vg /dev/fast
3. Create a cache pool LV
# lvconvert \-\-type cache\-pool \-\-poolmetadata vg/cache1meta vg/cache1
# lvconvert --type cache-pool --poolmetadata vg/cache1meta vg/cache1
4. Create a cache LV by combining the cache pool LV and origin LV,
and use the writethrough cache mode.
# lvconvert \-\-type cache \-\-cachepool vg/cache1 \\
\-\-cachemode writethrough vg/lv1
# lvconvert --type cache --cachepool vg/cache1 \\
--cachemode writethrough vg/lv1
.fi
@@ -275,18 +275,18 @@ The "mq" policy has a number of tunable parameters. The defaults are
chosen to be suitable for the majority of systems, but in special
circumstances, changing the settings can improve performance.
With the \-\-cachepolicy and \-\-cachesettings options, the cache policy
With the --cachepolicy and --cachesettings options, the cache policy
and settings can be set when creating a cache LV, or changed on an
existing cache LV (both options can be used together). The current cache
policy and settings of a cache LV can be displayed with the cache_policy
and cache_settings reporting options:
.B lvs \-o+cache_policy,cache_settings VG/CacheLV
.B lvs -o+cache_policy,cache_settings VG/CacheLV
.I Example
.nf
Change the cache policy and settings of an existing cache LV.
# lvchange \-\-cachepolicy mq \-\-cachesettings \\
# lvchange --cachepolicy mq --cachesettings \\
\(aqmigration_threshold=2048 random_threshold=4\(aq vg/lv1
.fi
@@ -306,7 +306,7 @@ defines the default cache settings.
\&
The size of data blocks managed by a cache pool can be specified with the
\-\-chunksize option when the cache LV is created. The default unit
--chunksize option when the cache LV is created. The default unit
is KiB. The value must be a multiple of 32KiB between 32KiB and 1GiB.
Using a chunk size that is too large can result in wasteful use of the
@@ -318,7 +318,7 @@ CPU time searching for chunks, and excessive memory tracking chunks.
Command to display the cache pool LV chunk size:
.br
.B lvs \-o+chunksize VG/CacheLV
.B lvs -o+chunksize VG/CacheLV
.BR lvm.conf (5)
.B cache_pool_chunk_size
@@ -327,7 +327,7 @@ controls the default chunk size used when creating a cache LV.
The default value is shown by:
.br
.B lvmconfig \-\-type default allocation/cache_pool_chunk_size
.B lvmconfig --type default allocation/cache_pool_chunk_size
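For instance, a hedged sketch (names hypothetical) creating a cache pool
with a 256KiB chunk size, a multiple of 32KiB as required:
# lvcreate --type cache-pool -L 1G --chunksize 256 -n cpool0 vg /dev/fast1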
.SS Spare metadata LV
@@ -349,7 +349,7 @@ the same VG.
.B lvcreate -n CacheDataLV -L CacheSize VG
.br
.B lvconvert --type cache\-pool VG/CacheDataLV
.B lvconvert --type cache-pool VG/CacheDataLV
.SS Create a new cache LV without an existing origin LV
@@ -360,9 +360,9 @@ A cache LV can be created using an existing cache pool without an existing
origin LV. A new origin LV is created and linked to the cache pool in a
single step.
.B lvcreate \-\-type cache \-L LargeSize \-n CacheLV
.B lvcreate --type cache -L LargeSize -n CacheLV
.RS
.B \-\-cachepool VG/CachePoolLV VG SlowPVs
.B --cachepool VG/CachePoolLV VG SlowPVs
.RE
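For example (names hypothetical):
# lvcreate --type cache -L 100G -n lv1 --cachepool vg/cpool0 vg /dev/slow1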
@@ -374,7 +374,7 @@ A cache pool LV can be created with a single lvcreate command, rather than
using lvconvert on existing LVs. This one command creates a cache data
LV, a cache metadata LV, and combines the two into a cache pool LV.
.B lvcreate \-\-type cache\-pool \-L CacheSize \-n CachePoolLV VG FastPVs
.B lvcreate --type cache-pool -L CacheSize -n CachePoolLV VG FastPVs
.SS Convert existing LVs to cache types

View File

@@ -4,19 +4,19 @@
lvmconf \(em LVM configuration modifier
.SH "SYNOPSIS"
.B lvmconf
.RB [ \-\-disable-cluster ]
.RB [ \-\-enable-cluster ]
.RB [ \-\-enable-halvm ]
.RB [ \-\-disable-halvm ]
.RB [ \-\-file
.RB [ --disable-cluster ]
.RB [ --enable-cluster ]
.RB [ --enable-halvm ]
.RB [ --disable-halvm ]
.RB [ --file
.RI < configfile >]
.RB [ \-\-lockinglib
.RB [ --lockinglib
.RI < lib >]
.RB [ \-\-lockinglibdir
.RB [ --lockinglibdir
.RI < dir >]
.RB [ \-\-services ]
.RB [ \-\-mirrorservice ]
.RB [ \-\-startstopservices ]
.RB [ --services ]
.RB [ --mirrorservice ]
.RB [ --startstopservices ]
.SH "DESCRIPTION"
lvmconf is a script that modifies the locking configuration in
@@ -26,42 +26,42 @@ changes in the lvm configuration if needed.
.SH "OPTIONS"
.TP
.BR \-\-disable-cluster
.BR --disable-cluster
Set \fBlocking_type\fR to the default non-clustered type. Also reset
lvmetad use to its default.
.TP
.BR \-\-enable-cluster
.BR --enable-cluster
Set \fBlocking_type\fR to the default clustered type on this system.
Also disable lvmetad use as it is not yet supported in clustered environment.
.TP
.BR \-\-disable-halvm
.BR --disable-halvm
Set \fBlocking_type\fR to the default non-clustered type. Also reset
lvmetad use to its default.
.TP
.BR \-\-enable-halvm
.BR --enable-halvm
Set \fBlocking_type\fR suitable for HA LVM use.
Also disable lvmetad use as it is not yet supported in HA LVM environment.
.TP
.BR \-\-file " <" \fIconfigfile >
.BR --file " <" \fIconfigfile >
Apply the changes to \fIconfigfile\fP instead of the default
\fI#DEFAULT_SYS_DIR#/lvm.conf\fP.
.TP
.BR \-\-lockinglib " <" \fIlib >
.BR --lockinglib " <" \fIlib >
Set the external \fBlocking_library\fR to load if an external locking type is used.
.TP
.BR \-\-lockinglibdir " <" \fIdir >
.BR --lockinglibdir " <" \fIdir >
.TP
.BR \-\-services
.BR --services
In addition to setting the lvm configuration, also enable or disable related Systemd or SysV
clvmd and lvmetad services. This script does not configure services provided by cluster resource
agents.
.TP
.BR \-\-mirrorservice
Also enable or disable optional cmirrord service when handling services (applicable only with \-\-services).
.BR --mirrorservice
Also enable or disable optional cmirrord service when handling services (applicable only with --services).
.TP
.BR \-\-startstopservices
.BR --startstopservices
In addition to enabling or disabling related services, start or stop them immediately
(applicable only with \-\-services).
(applicable only with --services).
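As an illustrative sketch combining the options above, a host could be switched to clustered locking with the related services enabled and started immediately:
.nf
# lvmconf --enable-cluster --services --startstopservices
.fi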
.SH FILES
.I #DEFAULT_SYS_DIR#/lvm.conf


@@ -1,3 +1,3 @@
lvmconfig produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.


@@ -1,6 +1,6 @@
.TH LVMCONFIG 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVMCONFIG 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvmconfig \- Display and manipulate configuration information
lvmconfig - Display and manipulate configuration information
.
.SH SYNOPSIS
\fBlvmconfig\fP
@@ -12,7 +12,7 @@ lvmconfig \- Display and manipulate configuration information
.SH DESCRIPTION
lvmconfig produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.
line settings from --config.
.SH USAGE
\fBlvmconfig\fP
.br


@@ -8,8 +8,8 @@ lvmdbusd \(em LVM D-Bus daemon
.
.ad l
.B lvmdbusd
.RB [ \-\-debug \]
.RB [ \-\-udev \]
.RB [ --debug \]
.RB [ --udev \]
.ad b
.
.SH DESCRIPTION
@@ -22,12 +22,12 @@ as root.
.SH OPTIONS
.
.HP
.BR \-\-debug
.BR --debug
.br
Enable debug statements
.
.HP
.BR \-\-udev
.BR --udev
.br
Use udev events to trigger updates
.


@@ -1,6 +1,6 @@
.TH LVMDISKSCAN 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVMDISKSCAN 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvmdiskscan \- List devices that may be used as physical volumes
lvmdiskscan - List devices that may be used as physical volumes
.
.SH SYNOPSIS
\fBlvmdiskscan\fP


@@ -3,16 +3,16 @@
lvmdump \(em create lvm2 information dumps for diagnostic purposes
.SH SYNOPSIS
.B lvmdump
.RB [ \-a ]
.RB [ \-c ]
.RB [ \-d
.RB [ -a ]
.RB [ -c ]
.RB [ -d
.IR directory ]
.RB [ \-h ]
.RB [ \-l ]
.RB [ \-m ]
.RB [ \-p ]
.RB [ \-s ]
.RB [ \-u ]
.RB [ -h ]
.RB [ -l ]
.RB [ -m ]
.RB [ -p ]
.RB [ -s ]
.RB [ -u ]
.SH DESCRIPTION
lvmdump is a tool to dump various information concerning LVM2.
By default, it creates a tarball suitable for submission along
@@ -34,69 +34,69 @@ The content of the tarball is as follows:
.br
- list of files present in /sys/devices/virtual/block
.br
- if enabled with \-m, metadata dump will be also included
- if enabled with -m, metadata dump will be also included
.br
- if enabled with \-a, debug output of vgscan, pvscan and list of all available volume groups, physical volumes and logical volumes will be included
- if enabled with -a, debug output of vgscan, pvscan and list of all available volume groups, physical volumes and logical volumes will be included
.br
- if enabled with \-c, cluster status info
- if enabled with -c, cluster status info
.br
- if enabled with \-l, lvmetad state if running
- if enabled with -l, lvmetad state if running
.br
- if enabled with \-p, lvmpolld state if running
- if enabled with -p, lvmpolld state if running
.br
- if enabled with \-s, system info and context
- if enabled with -s, system info and context
.br
- if enabled with \-u, udev info and context
- if enabled with -u, udev info and context
.SH OPTIONS
.TP
.B \-a
.B -a
Advanced collection.
\fBWARNING\fR: if lvm is already hung, then this script may hang as well
if \fB\-a\fR is used.
if \fB-a\fR is used.
.TP
.B \-c
.B -c
If clvmd is running, gather cluster data as well.
.TP
.B \-d \fIdirectory
.B -d \fIdirectory
Dump into a directory instead of a tarball.
By default, lvmdump will produce a single compressed tarball containing
all the information. Using this option, it can be instructed to only
produce the raw dump tree, rooted in \fIdirectory\fP.
.TP
.B \-h
.B -h
Print help message.
.TP
.B \-l
.B -l
Include \fBlvmetad\fP(8) daemon dump if it is running. The dump contains
cached information that is currently stored in lvmetad: VG metadata,
PV metadata and various mappings in between these metadata for quick
access.
.TP
.B \-m
.B -m
Gather LVM metadata from the PVs.
This option generates a 1:1 dump of the metadata area from all PVs visible
to the system, which can cause the dump to increase in size considerably.
However, the metadata dump may represent a valuable diagnostic resource.
.TP
.B \-p
.B -p
Include \fBlvmpolld\fP(8) daemon dump if it is running. The dump contains
all in-progress operations currently monitored by the daemon, and partial
history for already finished polling operations whose results have not yet
been collected, including the reason they finished.
.TP
.B \-s
.B -s
Gather system info and context. Currently, this encompasses info gathered
by calling the lsblk command, and various systemd info and context: the overall
state of systemd units present in the system, more detailed status of units
controlling LVM functionality, and the content of the systemd journal for the
current boot.
.TP
.B \-u
.B -u
Gather udev info and context: /etc/udev/udev.conf file, udev daemon version
(output of 'udevadm info \-\-version' command), udev rules currently used in the system
(output of 'udevadm info --version' command), udev rules currently used in the system
(content of /lib/udev/rules.d and /etc/udev/rules.d directory),
list of files in /lib/udev directory and dump of current udev
database content (the output of 'udevadm info \-\-export\-db' command).
database content (the output of 'udevadm info --export-db' command).
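For example (the directory path is illustrative), a dump including metadata, system and udev context can be written as a raw tree instead of a tarball:
.nf
# lvmdump -m -s -u -d /tmp/lvmdump.d
.fi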
.SH ENVIRONMENT VARIABLES
.TP
\fBLVM_BINARY\fP


@@ -4,18 +4,18 @@ lvmetad \(em LVM metadata cache daemon
.SH SYNOPSIS
.B lvmetad
.RB [ \-l
.RB [ -l
.IR level [,level...]]
.RB [ \-p
.RB [ -p
.IR pidfile_path ]
.RB [ \-s
.RB [ -s
.IR socket_path ]
.RB [ \-t
.RB [ -t
.IR timeout_value ]
.RB [ \-f ]
.RB [ \-h ]
.RB [ \-V ]
.RB [ \-? ]
.RB [ -f ]
.RB [ -h ]
.RB [ -V ]
.RB [ -? ]
.SH DESCRIPTION
@@ -26,7 +26,7 @@ the normal work of the system. lvmetad can be a disadvantage when disk
event notifications from the system are unreliable.
lvmetad does not read metadata from disks itself. Instead, it relies on
an LVM command, like pvscan \-\-cache, to read metadata from disks and
an LVM command, like pvscan --cache, to read metadata from disks and
send it to lvmetad to be cached.
New LVM disks that appear on the system must be scanned before lvmetad
@@ -34,8 +34,8 @@ knows about them. If lvmetad does not know about a disk, then LVM
commands using lvmetad will also not know about it. When disks are added
or removed from the system, lvmetad must be updated.
lvmetad is usually combined with event\-based system services that
automatically run pvscan \-\-cache on disks added or removed. This way,
lvmetad is usually combined with event-based system services that
automatically run pvscan --cache on disks added or removed. This way,
the cache is automatically updated with metadata from new disks when they
appear. LVM udev rules and systemd services implement this automation.
Automatic scanning is usually combined with automatic activation. For
@@ -44,7 +44,7 @@ more information, see
If lvmetad is started or restarted after disks have been added to the
system, or if the global_filter has changed, the cache must be updated.
This can be done by running pvscan \-\-cache, or it will be done
This can be done by running pvscan --cache, or it will be done
automatically by the next LVM command that's run.
When lvmetad is not used, LVM commands revert to scanning disks for LVM
@@ -56,7 +56,7 @@ revert to scanning disks. A warning will also be printed which includes
the reason why lvmetad is not being used. The most common reason is the
existence of duplicate PVs (lvmetad cannot cache data for duplicate PVs.)
Once duplicates have been resolved, the lvmetad cache can be updated
with pvscan \-\-cache and commands will return to using the cache.
with pvscan --cache and commands will return to using the cache.
Use of lvmetad is enabled/disabled by:
.br
@@ -65,7 +65,7 @@ Use of lvmetad is enabled/disabled by:
For more information on this setting, see:
.br
.B lvmconfig \-\-withcomments global/use_lvmetad
.B lvmconfig --withcomments global/use_lvmetad
To hide disks from LVM at the system level (e.g. from lvmetad and pvscan), use:
.br
@@ -74,42 +74,42 @@ To ignore disks from LVM at the system level, e.g. lvmetad, pvscan use:
For more information on this setting, see
.br
.B lvmconfig \-\-withcomments devices/global_filter
.B lvmconfig --withcomments devices/global_filter
.SH OPTIONS
To run the daemon in a test environment, both the pidfile_path and the
socket_path should be changed from the defaults.
.TP
.B \-f
.B -f
Don't fork, but run in the foreground.
.TP
.BR \-h ", " \-?
.BR -h ", " -?
Show help information.
.TP
.B \-l \fIlevels
.B -l \fIlevels
Specify the levels of log messages to generate as a comma-separated list.
Messages are logged by syslog.
Additionally, when \-f is given they are also sent to standard error.
Additionally, when -f is given they are also sent to standard error.
Possible levels are: all, fatal, error, warn, info, wire, debug.
.TP
.B \-p \fIpidfile_path
.B -p \fIpidfile_path
Path to the pidfile. This overrides both the built-in default
(#DEFAULT_PID_DIR#/lvmetad.pid) and the environment variable
\fBLVM_LVMETAD_PIDFILE\fP. This file is used to prevent more
than one instance of the daemon running simultaneously.
.TP
.B \-s \fIsocket_path
.B -s \fIsocket_path
Path to the socket file. This overrides both the built-in default
(#DEFAULT_RUN_DIR#/lvmetad.socket) and the environment variable
\fBLVM_LVMETAD_SOCKET\fP. To communicate successfully with lvmetad,
all LVM2 processes should use the same socket path.
.TP
.B \-t \fItimeout_value
.B -t \fItimeout_value
The daemon may shut down after being idle for the given time (in seconds). When the
option is omitted or the value given is zero, the daemon never shuts down when idle.
.TP
.B \-V
.B -V
Display the version of lvmetad daemon.
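A sketch of running the daemon in a test environment as suggested above (paths and log levels are illustrative):
.nf
# lvmetad -f -l all -p /tmp/lvmetad.pid -s /tmp/lvmetad.socket
.fi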
.SH ENVIRONMENT VARIABLES
.TP


@@ -11,41 +11,41 @@ This command interacts with
lvmlockctl [options]
.B \-\-help | \-h
.B --help | -h
Show this help information.
.B \-\-quit | \-q
.B --quit | -q
Tell lvmlockd to quit.
.B \-\-info | \-i
.B --info | -i
Print lock state information from lvmlockd.
.B \-\-dump | \-d
.B --dump | -d
Print log buffer from lvmlockd.
.B \-\-wait | \-w 0|1
.B --wait | -w 0|1
Wait option for other commands.
.B \-\-force | \-f 0|1
.B --force | -f 0|1
Force option for other commands.
.B \-\-kill | \-k
.B --kill | -k
.I vgname
Kill access to the VG when sanlock cannot renew lease.
.B \-\-drop | \-r
.B --drop | -r
.I vgname
Clear locks for the VG when it is unused after kill (-k).
.B \-\-gl\-enable | \-E
.B --gl-enable | -E
.I vgname
Tell lvmlockd to enable the global lock in a sanlock VG.
.B \-\-gl\-disable | \-D
.B --gl-disable | -D
.I vgname
Tell lvmlockd to disable the global lock in a sanlock VG.
.B \-\-stop\-lockspaces | \-S
.B --stop-lockspaces | -S
Stop all lockspaces.
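For instance, a hypothetical sequence that inspects the lock state and then stops all lockspaces, forcing if necessary:
.nf
# lvmlockctl --info
# lvmlockctl --stop-lockspaces --force 1
.fi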
@@ -73,28 +73,28 @@ forcibly deactivate the VG. For more, see
.SS drop
This should only be run after a VG has been successfully deactivated
following an lvmlockctl \-\-kill command. It clears the stale lockspace
following an lvmlockctl --kill command. It clears the stale lockspace
from lvmlockd. In the future, this may become automatic along with an
automatic handling of \-\-kill. For more, see
automatic handling of --kill. For more, see
.BR lvmlockd (8).
.SS gl\-enable
.SS gl-enable
This enables the global lock in a sanlock VG. This is necessary if the VG
that previously held the global lock is removed. For more, see
.BR lvmlockd (8).
.SS gl\-disable
.SS gl-disable
This disables the global lock in a sanlock VG. This is necessary if the
global lock has mistakenly been enabled in more than one VG. The global
lock should be disabled in all but one sanlock VG. For more, see
.BR lvmlockd (8).
.SS stop\-lockspaces
.SS stop-lockspaces
This tells lvmlockd to stop all lockspaces. It can be useful to stop
lockspaces for VGs that the vgchange \-\-lock\-stop command can no longer
lockspaces for VGs that the vgchange --lock-stop command can no longer
see, or to stop the dlm global lockspace which is not directly stopped by
the vgchange command. The wait and force options can be used with this
command.


@@ -33,50 +33,50 @@ dlm: uses network communication and a cluster manager.
lvmlockd [options]
For default settings, see lvmlockd \-h.
For default settings, see lvmlockd -h.
.B \-\-help | \-h
.B --help | -h
Show this help information.
.B \-\-version | \-V
.B --version | -V
Show version of lvmlockd.
.B \-\-test | \-T
.B --test | -T
Test mode, do not call lock manager.
.B \-\-foreground | \-f
.B --foreground | -f
Don't fork.
.B \-\-daemon\-debug | \-D
.B --daemon-debug | -D
Don't fork and print debugging to stdout.
.B \-\-pid\-file | \-p
.B --pid-file | -p
.I path
Set path to the pid file.
.B \-\-socket\-path | \-s
.B --socket-path | -s
.I path
Set path to the socket to listen on.
.B \-\-syslog\-priority | \-S err|warning|debug
.B --syslog-priority | -S err|warning|debug
Write log messages from this level up to syslog.
.B \-\-gl\-type | \-g sanlock|dlm
.B --gl-type | -g sanlock|dlm
Set global lock type to be sanlock or dlm.
.B \-\-host\-id | \-i
.B --host-id | -i
.I num
Set the local sanlock host id.
.B \-\-host\-id\-file | \-F
.B --host-id-file | -F
.I path
A file containing the local sanlock host_id.
.B \-\-sanlock\-timeout | \-o
.B --sanlock-timeout | -o
.I seconds
Override the default sanlock I/O timeout.
.B \-\-adopt | \-A 0|1
.B --adopt | -A 0|1
Adopt locks from a previous instance of lvmlockd.
@@ -84,7 +84,7 @@ For default settings, see lvmlockd \-h.
.SS Initial set up
Using LVM with lvmlockd for the first time includes some one\-time set up
Using LVM with lvmlockd for the first time includes some one-time set up
steps:
.SS 1. choose a lock manager
@@ -111,7 +111,7 @@ use_lvmlockd = 1
.I sanlock
.br
Assign each host a unique host_id in the range 1\-2000 by setting
Assign each host a unique host_id in the range 1-2000 by setting
.br
/etc/lvm/lvmlocal.conf local/host_id
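A minimal sketch of the corresponding lvmlocal.conf fragment (the host_id value is illustrative):
.nf
local {
    host_id = 1
}
.fi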
@@ -133,7 +133,7 @@ systemctl start corosync dlm
.SS 5. create VG on shared devices
vgcreate \-\-shared <vgname> <devices>
vgcreate --shared <vgname> <devices>
The shared option sets the VG lock type to sanlock or dlm depending on
which lock manager is running. LVM commands will perform locking for the
@@ -141,7 +141,7 @@ VG using lvmlockd. lvmlockd will use the chosen lock manager.
.SS 6. start VG on all hosts
vgchange \-\-lock\-start
vgchange --lock-start
lvmlockd requires shared VGs to be started before they are used. This is
a lock manager operation to start (join) the VG lockspace, and it may take
@@ -156,7 +156,7 @@ LVs in a shared VG.
An LV activated exclusively on one host cannot be activated on another.
When multiple hosts need to use the same LV concurrently, the LV can be
activated with a shared lock (see lvchange options \-aey vs \-asy.)
activated with a shared lock (see lvchange options -aey vs -asy.)
(Shared locks are disallowed for certain LV types that cannot be used from
multiple hosts.)
@@ -177,7 +177,7 @@ start lvmlockd
start lock manager
.br
\[bu]
vgchange \-\-lock\-start
vgchange --lock-start
.br
\[bu]
activate LVs in shared VGs
@@ -189,7 +189,7 @@ The shut down sequence is the reverse:
deactivate LVs in shared VGs
.br
\[bu]
vgchange \-\-lock\-stop
vgchange --lock-stop
.br
\[bu]
stop lock manager
@@ -227,7 +227,7 @@ activate the VG will fail without the necessary locks.
A "local VG" is meant to be used by a single host. It has no lock type or
lock type "none". LVM commands and lvmlockd do not perform locking for
these VGs. A local VG typically exists on local (non\-shared) devices and
these VGs. A local VG typically exists on local (non-shared) devices and
cannot be used concurrently from different hosts.
If a local VG does exist on shared devices, it should be owned by a single
@@ -252,8 +252,8 @@ using lvmlockd. From a host not using lvmlockd, visible lockd VGs are
ignored in the same way as foreign VGs (see
.BR lvmsystemid (7).)
The \-\-shared option for reporting and display commands causes lockd VGs
to be displayed on a host not using lvmlockd, like the \-\-foreign option
The --shared option for reporting and display commands causes lockd VGs
to be displayed on a host not using lvmlockd, like the --foreign option
does for foreign VGs.
@@ -275,7 +275,7 @@ Creates a clvm VG when clvm is configured.
.P
.B vgcreate \-\-shared <vgname> <devices>
.B vgcreate --shared <vgname> <devices>
.IP \[bu] 2
Requires lvmlockd to be configured and running.
.IP \[bu] 2
@@ -288,7 +288,7 @@ lvmlockd obtains locks from the selected lock manager.
.P
.B vgcreate \-c|\-\-clustered y <vgname> <devices>
.B vgcreate -c|--clustered y <vgname> <devices>
.IP \[bu] 2
Requires clvm to be configured and running.
.IP \[bu] 2
@@ -343,29 +343,29 @@ global lock will be available, and LVM will be fully operational.
When a new lockd VG is created, its lockspace is automatically started on
the host that creates it. Other hosts need to run 'vgchange
\-\-lock\-start' to start the new VG before they can use it.
--lock-start' to start the new VG before they can use it.
From the 'vgs' command, lockd VGs are indicated by "s" (for shared) in the
sixth attr field. The specific lock type and lock args for a lockd VG can
be displayed with 'vgs \-o+locktype,lockargs'.
be displayed with 'vgs -o+locktype,lockargs'.
lockd VGs need to be "started" and "stopped", unlike other types of VGs.
See the following section for a full description of starting and stopping.
vgremove of a lockd VG will fail if other hosts have the VG started.
Run vgchange \-\-lock-stop <vgname> on all other hosts before vgremove.
Run vgchange --lock-stop <vgname> on all other hosts before vgremove.
(It may take several seconds before vgremove recognizes that all hosts
have stopped a sanlock VG.)
.SS starting and stopping VGs
Starting a lockd VG (vgchange \-\-lock\-start) causes the lock manager to
Starting a lockd VG (vgchange --lock-start) causes the lock manager to
start (join) the lockspace for the VG on the host where it is run. This
makes locks for the VG available to LVM commands on the host. Before a VG
is started, only LVM commands that read/display the VG are allowed to
continue without locks (and with a warning).
Stopping a lockd VG (vgchange \-\-lock\-stop) causes the lock manager to
Stopping a lockd VG (vgchange --lock-stop) causes the lock manager to
stop (leave) the lockspace for the VG on the host where it is run. This
makes locks for the VG inaccessible to the host. A VG cannot be stopped
while it has active LVs.
@@ -390,24 +390,24 @@ A lockd VG can be stopped if all LVs are deactivated.
All lockd VGs can be started/stopped using:
.br
vgchange \-\-lock-start
vgchange --lock-start
.br
vgchange \-\-lock-stop
vgchange --lock-stop
Individual VGs can be started/stopped using:
.br
vgchange \-\-lock\-start <vgname> ...
vgchange --lock-start <vgname> ...
.br
vgchange \-\-lock\-stop <vgname> ...
vgchange --lock-stop <vgname> ...
To make vgchange not wait for start to complete:
.br
vgchange \-\-lock\-start \-\-lock\-opt nowait ...
vgchange --lock-start --lock-opt nowait ...
lvmlockd can be asked directly to stop all lockspaces:
.br
lvmlockctl \-\-stop\-lockspaces
lvmlockctl --stop-lockspaces
To start only selected lockd VGs, use the lvm.conf
activation/lock_start_list. When defined, only VG names in this list are
@@ -429,7 +429,7 @@ Scripts or programs on a host that automatically start VGs will use the
"auto" option to indicate that the command is being run automatically by
the system:
vgchange \-\-lock\-start \-\-lock\-opt auto [<vgname> ...]
vgchange --lock-start --lock-opt auto [<vgname> ...]
Without any additional configuration, including the "auto" option has no
effect; all VGs are started unless restricted by lock_start_list.
@@ -545,7 +545,7 @@ If the situation arises where more than one sanlock VG contains a global
lock, the global lock should be manually disabled in all but one of them
with the command:
lvmlockctl \-\-gl\-disable <vgname>
lvmlockctl --gl-disable <vgname>
(The one VG with the global lock enabled must be visible to all hosts.)
@@ -555,7 +555,7 @@ and subsequent LVM commands will fail to acquire it. In this case, the
global lock needs to be manually enabled in one of the remaining sanlock
VGs with the command:
lvmlockctl \-\-gl\-enable <vgname>
lvmlockctl --gl-enable <vgname>
A small sanlock VG dedicated to holding the global lock can avoid the case
where the GL lock must be manually enabled after a vgremove.
@@ -593,7 +593,7 @@ cannot be acquired, the LV is not activated and an error is reported.
This would happen if the LV is active exclusively on another host. If the
LV type prohibits shared access, such as a snapshot, the command will
report an error and fail.
The shared mode is intended for a multi\-host/cluster application or
The shared mode is intended for a multi-host/cluster application or
file system.
LV types that cannot be used concurrently
from multiple hosts include thin, cache, raid, mirror, and snapshot.
@@ -638,18 +638,18 @@ acquired by other hosts. The VG must be forcibly deactivated on the host
with the expiring lease before other hosts can acquire its locks.
When the sanlock daemon detects that the lease storage is lost, it runs
the command lvmlockctl \-\-kill <vgname>. This command emits a syslog
the command lvmlockctl --kill <vgname>. This command emits a syslog
message stating that lease storage is lost for the VG and LVs must be
immediately deactivated.
If no LVs are active in the VG, then the lockspace with an expiring lease
will be removed, and errors will be reported when trying to use the VG.
Use the lvmlockctl \-\-drop command to clear the stale lockspace from
Use the lvmlockctl --drop command to clear the stale lockspace from
lvmlockd.
If the VG has active LVs when the lock storage is lost, the LVs must be
quickly deactivated before the lockspace lease expires. After all LVs are
deactivated, run lvmlockctl \-\-drop <vgname> to clear the expiring
deactivated, run lvmlockctl --drop <vgname> to clear the expiring
lockspace from lvmlockd. If all LVs in the VG are not deactivated within
about 40 seconds, sanlock will reset the host using the local watchdog.
The machine reset is effectively a severe form of "deactivating" LVs
@@ -692,7 +692,7 @@ vgchange --lock-stop <vgname>
.IP \[bu] 2
Change the VG lock type to none:
.br
vgchange \-\-lock\-type none <vgname>
vgchange --lock-type none <vgname>
.IP \[bu] 2
Change the dlm cluster name on the host or move the VG to the new cluster.
@@ -704,7 +704,7 @@ cat /sys/kernel/config/dlm/cluster/cluster_name
.IP \[bu] 2
Change the VG lock type back to dlm which sets the new cluster name:
.br
vgchange \-\-lock\-type dlm <vgname>
vgchange --lock-type dlm <vgname>
.IP \[bu] 2
Start the VG on hosts to use it:
@@ -728,12 +728,12 @@ cat /sys/kernel/config/dlm/cluster/cluster_name
.IP \[bu] 2
Change the VG lock type to none:
.br
vgchange \-\-lock\-type none \-\-force <vgname>
vgchange --lock-type none --force <vgname>
.IP \[bu] 2
Change the VG lock type back to dlm which sets the new cluster name:
.br
vgchange \-\-lock\-type dlm <vgname>
vgchange --lock-type dlm <vgname>
.IP \[bu] 2
Start the VG on hosts to use it:
@@ -749,18 +749,18 @@ lvmlockd must be configured and running as described in USAGE.
Change a local VG to a lockd VG with the command:
.br
vgchange \-\-lock\-type sanlock|dlm <vgname>
vgchange --lock-type sanlock|dlm <vgname>
Start the VG on hosts to use it:
.br
vgchange \-\-lock\-start <vgname>
vgchange --lock-start <vgname>
.SS changing a lockd VG to a local VG
Stop the lockd VG on all hosts, then run:
.br
vgchange \-\-lock\-type none <vgname>
vgchange --lock-type none <vgname>
To change a VG from one lockd type to another (i.e. between sanlock and
dlm), first change it to a local VG, then to the new type.
@@ -773,15 +773,15 @@ All LVs must be inactive to change the lock type.
First change the clvm VG to a local VG. Within a running clvm cluster,
change a clvm VG to a local VG with the command:
vgchange \-cn <vgname>
vgchange -cn <vgname>
If the clvm cluster is no longer running on any nodes, then extra options
can be used to forcibly make the VG local. Caution: this is only safe if
all nodes have stopped using the VG:
vgchange \-\-config 'global/locking_type=0 global/use_lvmlockd=0'
vgchange --config 'global/locking_type=0 global/use_lvmlockd=0'
.RS
\-cn <vgname>
-cn <vgname>
.RE
After the VG is local, follow the steps described in "changing a local VG
@@ -830,7 +830,7 @@ lvm.conf must be configured to use either lvmlockd (use_lvmlockd=1) or
clvmd (locking_type=3), but not both.
.IP \[bu] 2
vgcreate \-\-shared creates a lockd VG, and vgcreate \-\-clustered y
vgcreate --shared creates a lockd VG, and vgcreate --clustered y
creates a clvm VG.
.IP \[bu] 2
@@ -839,7 +839,7 @@ need for network clustering.
.IP \[bu] 2
lvmlockd defaults to the exclusive activation mode whenever the activation
mode is unspecified, i.e. \-ay means \-aey, not \-asy.
mode is unspecified, i.e. -ay means -aey, not -asy.
.IP \[bu] 2
lvmlockd commands always apply to the local host, and never have an effect
@@ -856,13 +856,13 @@ lvmlockd saves the cluster name for a lockd VG using dlm. Only hosts in
the matching cluster can use the VG.
.IP \[bu] 2
lvmlockd requires starting/stopping lockd VGs with vgchange \-\-lock-start
and \-\-lock-stop.
lvmlockd requires starting/stopping lockd VGs with vgchange --lock-start
and --lock-stop.
.IP \[bu] 2
vgremove of a sanlock VG may fail indicating that all hosts have not
stopped the VG lockspace. Stop the VG on all hosts using vgchange
\-\-lock-stop.
--lock-stop.
.IP \[bu] 2
vgreduce or pvmove of a PV in a sanlock VG will fail if it holds the


@@ -3,22 +3,22 @@
lvmpolld \(em LVM poll daemon
.SH SYNOPSIS
.B lvmpolld
.RB [ \-l | \-\-log
.RB [ -l | --log
.RI { all | wire | debug }]
.RB [ \-p | \-\-pidfile
.RB [ -p | --pidfile
.IR pidfile_path ]
.RB [ \-s | \-\-socket
.RB [ -s | --socket
.IR socket_path ]
.RB [ \-B | \-\-binary
.RB [ -B | --binary
.IR lvm_binary_path ]
.RB [ \-t | \-\-timeout
.RB [ -t | --timeout
.IR timeout_value ]
.RB [ \-f | \-\-foreground ]
.RB [ \-h | \-\-help ]
.RB [ \-V | \-\-version ]
.RB [ -f | --foreground ]
.RB [ -h | --help ]
.RB [ -V | --version ]
.B lvmpolld
.RB [ \-\-dump ]
.RB [ --dump ]
.SH DESCRIPTION
lvmpolld is the polling daemon for LVM. The daemon receives requests for polling
of already initialised operations originating in the LVM2 command line tool.
@@ -33,48 +33,48 @@ external factors.
lvmpolld is used by LVM only if it is enabled in \fBlvm.conf\fP(5) by
specifying the \fBglobal/use_lvmpolld\fP setting. If this is not defined in the
LVM configuration explicitly, then the default setting is used instead (see the
output of \fBlvmconfig \-\-type default global/use_lvmpolld\fP command).
output of \fBlvmconfig --type default global/use_lvmpolld\fP command).
.SH OPTIONS
To run the daemon in a test environment, both the pidfile_path and the
socket_path should be changed from the defaults.
.TP
.BR \-f ", " \-\-foreground
.BR -f ", " --foreground
Don't fork, but run in the foreground.
.TP
.BR \-h ", " \-\-help
.BR -h ", " --help
Show help information.
.TP
.IR \fB\-l\fP ", " \fB\-\-log\fP " {" all | wire | debug }
.IR \fB-l\fP ", " \fB--log\fP " {" all | wire | debug }
Select the type of log messages to generate.
Messages are logged by syslog.
Additionally, when \-f is given they are also sent to standard error.
Additionally, when -f is given they are also sent to standard error.
There are two classes of messages: wire and debug. Selecting 'all' supplies both
and is equivalent to a comma-separated list \-l wire,debug.
and is equivalent to a comma-separated list -l wire,debug.
.TP
.BR \-p ", " \-\-pidfile " " \fIpidfile_path
.BR -p ", " --pidfile " " \fIpidfile_path
Path to the pidfile. This overrides both the built-in default
(#DEFAULT_PID_DIR#/lvmpolld.pid) and the environment variable
\fBLVM_LVMPOLLD_PIDFILE\fP. This file is used to prevent more
than one instance of the daemon running simultaneously.
.TP
.BR \-s ", " \-\-socket " " \fIsocket_path
.BR -s ", " --socket " " \fIsocket_path
Path to the socket file. This overrides both the built-in default
(#DEFAULT_RUN_DIR#/lvmpolld.socket) and the environment variable
\fBLVM_LVMPOLLD_SOCKET\fP.
.TP
.BR \-t ", " \-\-timeout " " \fItimeout_value
.BR -t ", " --timeout " " \fItimeout_value
The daemon may shut down after being idle for the given time (in seconds). When the
option is omitted or the value given is zero, the daemon never shuts down when idle.
.TP
.BR \-B ", " \-\-binary " " \fIlvm_binary_path
.BR -B ", " --binary " " \fIlvm_binary_path
Optional path to alternative LVM binary (default: #LVM_PATH#). Use for
testing purposes only.
.TP
.BR \-V ", " \-\-version
.BR -V ", " --version
Display the version of lvmpolld daemon.
.TP
.B \-\-dump
.B --dump
Contact the running lvmpolld daemon to obtain the complete state and print it
out in a raw format.
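As with lvmetad, a test instance can be run in the foreground with non-default paths (values are illustrative):
.nf
# lvmpolld -f -l all -p /tmp/lvmpolld.pid -s /tmp/lvmpolld.socket
.fi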
.SH ENVIRONMENT VARIABLES


@@ -8,7 +8,8 @@ lvmraid \(em LVM RAID
LVM RAID is a way to create logical volumes (LVs) that use multiple physical
devices to improve performance or tolerate device failure. How blocks of
data in an LV are placed onto physical devices is determined by the RAID
level. RAID levels are commonly referred to by number, e.g. raid1, raid5.
level. RAID levels are commonly referred to by a level-specific number
suffixed to the string 'raid', e.g. raid1, raid5 or raid6.
Selecting a RAID level involves tradeoffs among physical device
requirements, fault tolerance, and performance. A description of the RAID
levels can be found at
@@ -31,12 +32,12 @@ The LV type corresponds to a RAID level.
The basic RAID levels that can be used are:
.B raid0, raid1, raid4, raid5, raid6, raid10.
.B lvcreate \-\-type
.B lvcreate --type
.I RaidLevel
[\fIOPTIONS\fP]
.B \-\-name
.B --name
.I Name
.B \-\-size
.B --size
.I Size
.I VG
[\fIPVs\fP]
@@ -58,17 +59,17 @@ Also called striping, raid0 spreads LV data across multiple devices in
units of stripe size. This is used to increase performance. LV data will
be lost if any of the devices fail.
.B lvcreate \-\-type raid0
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
.B lvcreate --type raid0
[\fB--stripes\fP \fINumber\fP \fB--stripesize\fP \fISize\fP]
\fIVG\fP
[\fIPVs\fP]
.HP
.B \-\-stripes
.B --stripes
specifies the number of devices to spread the LV across.
.HP
.B \-\-stripesize
.B --stripesize
specifies the size of each stripe in kilobytes. This is the amount of
data that is written to one device before moving to the next.
.P
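A hypothetical example (names and sizes are illustrative) spreading a 30GiB LV across three devices with 64KiB stripes:
.nf
# lvcreate --type raid0 --stripes 3 --stripesize 64 -n lv0 -L 30G vg
.fi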
@@ -85,15 +86,15 @@ Also called mirroring, raid1 uses multiple devices to duplicate LV data.
The LV data remains available if all but one of the devices fail.
The minimum number of devices (i.e. sub LV pairs) required is 2.
.B lvcreate \-\-type raid1
[\fB\-\-mirrors\fP \fINumber\fP]
.B lvcreate --type raid1
[\fB--mirrors\fP \fINumber\fP]
\fIVG\fP
[\fIPVs\fP]
.HP
.B \-\-mirrors
.B --mirrors
specifies the number of mirror images in addition to the original LV
image, e.g. \-\-mirrors 1 means there are two images of the data, the
image, e.g. --mirrors 1 means there are two images of the data, the
original and one mirror image.
.P
@@ -109,20 +110,20 @@ storing parity blocks. The LV data remains available if one device fails. The
parity is used to recalculate data that is lost from a single device. The
minimum number of devices required is 3.
.B lvcreate \-\-type raid4
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
.B lvcreate --type raid4
[\fB--stripes\fP \fINumber\fP \fB--stripesize\fP \fISize\fP]
\fIVG\fP
[\fIPVs\fP]
.HP
.B \-\-stripes
.B --stripes
specifies the number of devices to use for LV data. This does not include
the extra device lvm adds for storing parity blocks. A raid4 LV with
\fINumber\fP stripes requires \fINumber\fP+1 devices. \fINumber\fP must
be 2 or more.
.HP
.B \-\-stripesize
.B --stripesize
specifies the size of each stripe in kilobytes. This is the amount of
data that is written to one device before moving to the next.
.P
@@ -143,20 +144,20 @@ a rotating pattern for performance reasons. The LV data remains available
if one device fails. The parity is used to recalculate data that is lost
from a single device. The minimum number of devices required is 3.
.B lvcreate \-\-type raid5
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
.B lvcreate --type raid5
[\fB--stripes\fP \fINumber\fP \fB--stripesize\fP \fISize\fP]
\fIVG\fP
[\fIPVs\fP]
.HP
.B \-\-stripes
.B --stripes
specifies the number of devices to use for LV data. This does not include
the extra device lvm adds for storing parity blocks. A raid5 LV with
\fINumber\fP stripes requires \fINumber\fP+1 devices. \fINumber\fP must
be 2 or more.
.HP
.B \-\-stripesize
.B --stripesize
specifies the size of each stripe in kilobytes. This is the amount of
data that is written to one device before moving to the next.
.P
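For example (names and devices are hypothetical), a raid5 LV with 3 stripes requires 3+1 = 4 devices:
.nf
# lvcreate --type raid5 --stripes 3 --stripesize 64 \\
-n lv5 -L 30G vg /dev/sda /dev/sdb /dev/sdc /dev/sdd
.fi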
@@ -181,20 +182,20 @@ LV data remains available if up to two devices fail. The parity is used
to recalculate data that is lost from one or two devices. The minimum
number of devices required is 5.
.B lvcreate \-\-type raid6
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
.B lvcreate --type raid6
[\fB--stripes\fP \fINumber\fP \fB--stripesize\fP \fISize\fP]
\fIVG\fP
[\fIPVs\fP]
.HP
.B \-\-stripes
.B --stripes
specifies the number of devices to use for LV data. This does not include
the extra two devices lvm adds for storing parity blocks. A raid6 LV with
\fINumber\fP stripes requires \fINumber\fP+2 devices. \fINumber\fP must be
3 or more.
.HP
.B \-\-stripesize
.B --stripesize
specifies the size of each stripe in kilobytes. This is the amount of
data that is written to one device before moving to the next.
.P
@@ -215,24 +216,24 @@ raid10 is a combination of raid1 and raid0, striping data across mirrored
devices. LV data remains available if one or more devices remain in each
mirror set. The minimum number of devices required is 4.
.B lvcreate \-\-type raid10
.B lvcreate --type raid10
.RS
[\fB\-\-mirrors\fP \fINumberMirrors\fP]
[\fB--mirrors\fP \fINumberMirrors\fP]
.br
[\fB\-\-stripes\fP \fINumberStripes\fP \fB\-\-stripesize\fP \fISize\fP]
[\fB--stripes\fP \fINumberStripes\fP \fB--stripesize\fP \fISize\fP]
.br
\fIVG\fP
[\fIPVs\fP]
.RE
.HP
.B \-\-mirrors
.B --mirrors
specifies the number of mirror images within each stripe. e.g.
\-\-mirrors 1 means there are two images of the data, the original and one
--mirrors 1 means there are two images of the data, the original and one
mirror image.
.HP
.B \-\-stripes
.B --stripes
specifies the total number of devices to use in all raid1 images (not the
number of raid1 devices to spread the LV across, even though that is the
effective result). The number of devices in each raid1 mirror will be
@@ -240,7 +241,7 @@ NumberStripes/(NumberMirrors+1), e.g. mirrors 1 and stripes 4 will stripe
data across two raid1 mirrors, where each mirror is 2 devices.
.HP
.B \-\-stripesize
.B --stripesize
specifies the size of each stripe in kilobytes. This is the amount of
data that is written to one device before moving to the next.
.P
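A sketch following the arithmetic above (device names are hypothetical): mirrors 1 and stripes 4 yields two raid1 mirrors of 2 devices each:
.nf
# lvcreate --type raid10 --mirrors 1 --stripes 4 \\
-n lv10 -L 20G vg /dev/sda /dev/sdb /dev/sdc /dev/sdd
.fi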
@@ -273,16 +274,16 @@ written.
The RAID implementation keeps track of which parts of a RAID LV are
synchronized. This uses a bitmap saved in the RAID metadata. The bitmap
can exclude large parts of the LV from synchronization to reduce the
amount of work. Without this, the entire LV would need to be synchronized
every time it was activated. When a RAID LV is first created and
activated the first synchronization is called initialization.
amount of work after a crash. Without this, the entire LV would need
to be synchronized every time it was activated. When a RAID LV is
first created and activated, the first synchronization is called initialization.
Automatic synchronization happens when a RAID LV is activated, but it is
usually partial because the bitmaps reduce the areas that are checked.
A full sync may become necessary when devices in the RAID LV are changed.
A full sync becomes necessary when devices in the RAID LV are replaced.
The synchronization status of a RAID LV is reported by the
following command, where "image synced" means sync is complete:
following command, where "Cpy%Sync" = "100%" means sync is complete:
.B lvs -a -o name,sync_percent
@@ -300,13 +301,13 @@ excludes areas outside of the RAID write-intent bitmap.
The command to scrub a RAID LV can operate in two different modes:
.B lvchange \-\-syncaction
.B lvchange --syncaction
.BR check | repair
.IR VG / LV
.HP
.B check
Check mode is read\-only and only detects inconsistent areas in the RAID
Check mode is read-only and only detects inconsistent areas in the RAID
LV, it does not correct them.
.HP
@@ -320,7 +321,7 @@ Scrubbing can consume a lot of bandwidth and slow down application I/O on
the RAID LV. To control the I/O rate used for scrubbing, use:
.HP
.B \-\-maxrecoveryrate
.B --maxrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the maximum recovery rate for a RAID LV. \fIRate\fP is specified as
@@ -329,7 +330,7 @@ then KiB/sec/device is assumed. Setting the recovery rate to \fB0\fP
means it will be unbounded.
.HP
.BR \-\-minrecoveryrate
.BR --minrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the minimum recovery rate for a RAID LV. \fIRate\fP is specified as
@@ -371,7 +372,7 @@ not know which data is correct. The result may be consistent but
incorrect data. When two different blocks of data must be made
consistent, it chooses the block from the device that would be used during
RAID initialization. However, if the PV holding corrupt data is known,
lvchange \-\-rebuild can be used in place of scrubbing to reconstruct the
lvchange --rebuild can be used in place of scrubbing to reconstruct the
data on the bad device.
Future developments might include:
@@ -390,7 +391,7 @@ allowing it to be rewritten.
An LV is often a combination of other hidden LVs called SubLVs. The
SubLVs either use physical devices, or are built from other SubLVs
themselves. SubLVs hold LV data blocks, RAID parity blocks, and RAID
metadata. SubLVs are generally hidden, so the lvs \-a option is required
metadata. SubLVs are generally hidden, so the lvs -a option is required
to display them:
.B lvs -a -o name,segtype,devices
@@ -600,7 +601,7 @@ WARNING: Device for PV uItL3Z-wBME-DQy0-... not found or rejected ...
.fi
This warning will go away if the device returns or is removed from the
VG (see \fBvgreduce \-\-removemissing\fP).
VG (see \fBvgreduce --removemissing\fP).
.SS Activating an LV with missing devices
@@ -608,7 +609,7 @@ VG (see \fBvgreduce \-\-removemissing\fP).
A RAID LV that is missing devices may be activated or not, depending on
the "activation mode" used in lvchange:
.B lvchange \-ay \-\-activationmode
.B lvchange -ay --activationmode
.RB { complete | degraded | partial }
.IR VG / LV
@@ -641,25 +642,25 @@ lvmconfig --type default activation/activation_mode
Devices in a RAID LV can be replaced by other devices in the VG. When
replacing devices that are no longer visible on the system, use lvconvert
\-\-repair. When replacing devices that are still visible, use lvconvert
\-\-replace. The repair command will attempt to restore the same number
--repair. When replacing devices that are still visible, use lvconvert
--replace. The repair command will attempt to restore the same number
of data LVs that were previously in the LV. The replace option can be
repeated to replace multiple PVs. Replacement devices can be optionally
listed with either option.
.B lvconvert \-\-repair
.B lvconvert --repair
.IR VG / LV
[\fINewPVs\fP]
.B lvconvert \-\-replace
.B lvconvert --replace
\fIOldPV\fP
.IR VG / LV
[\fINewPV\fP]
.B lvconvert
.B \-\-replace
.B --replace
\fIOldPV1\fP
.B \-\-replace
.B --replace
\fIOldPV2\fP
...
.IR VG / LV
@@ -677,7 +678,7 @@ Restoring a device will usually require at least partial synchronization
in the RAID LV operating in degraded mode until it is reactivated. Use
the lvchange command to refresh an LV:
.B lvchange \-\-refresh
.B lvchange --refresh
.IR VG / LV
.nf
@@ -719,7 +720,7 @@ synchronization is started.
The specific command run by dmeventd to warn or repair is:
.br
.B lvconvert \-\-repair \-\-use\-policies
.B lvconvert --repair --use-policies
.IR VG / LV
@@ -735,7 +736,7 @@ This should be rare, and can be detected (see \fBScrubbing\fP).
If specific PVs in a RAID LV are known to have corrupt data, the data on
those PVs can be reconstructed with:
.B lvchange \-\-rebuild PV
.B lvchange --rebuild PV
.IR VG / LV
The rebuild option can be repeated with different PVs to replace the data
@@ -781,7 +782,7 @@ A RAID1 LV can be tuned so that certain devices are avoided for reading
while all devices are still written to.
.B lvchange
.BR \-\- [ raid ] writemostly
.BR -- [ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]
.IR VG / LV
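For instance (PV name hypothetical), one leg can be marked write-mostly and the flag later cleared:
.nf
# lvchange --writemostly /dev/sdb:y vg/lv
# lvchange --writemostly /dev/sdb:n vg/lv
.fi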
@@ -808,7 +809,7 @@ further writes become synchronous. When synchronous, a write to the LV
will not complete until writes to all the mirror images are complete.
.B lvchange
.BR \-\- [ raid ] writebehind
.BR -- [ raid ] writebehind
.IR IOCount
.IR VG / LV
@@ -824,8 +825,8 @@ synchronous.
RAID takeover is converting a RAID LV from one RAID level to another, e.g.
raid5 to raid6. Changing the RAID level is usually done to increase or
decrease resilience to device failures. This is done using lvconvert and
specifying the new RAID level as the LV type:
decrease resilience to device failures or to restripe LVs. This is done
using lvconvert and specifying the new RAID level as the LV type:
.B lvconvert --type
.I RaidLevel
@@ -865,7 +866,7 @@ blocks to a new image on a new device. Converting to a parity RAID level
requires reading all LV data blocks, calculating parity, and writing the
new parity blocks. Synchronization can take a long time and degrade
performance (rate controls also apply to conversion, see
\fB\-\-maxrecoveryrate\fP.)
\fB--maxrecoveryrate\fP.)
Warning: though it is possible to create \fBstriped\fP LVs with up to 128 stripes,
a maximum of 64 stripes can be converted to \fBraid0\fP, 63 to \fBraid4/5\fP and
@@ -1443,7 +1444,7 @@ Mind the fact that stripes are removed thus the capacity of the RaidLV will shri
"lvconvert --stripes 1 vg/lv" for converting to 1 stripe will inform upfront about
the reduced size to allow for resizing the content or growing the RaidLV before
actually converting to 1 stripe. The \fB\-\-force\fP option is needed to
actually converting to 1 stripe. The \fB--force\fP option is needed to
allow stripe removing conversions to prevent data loss.
Of course any interim step can be the intended last one (e.g. striped -> raid1).
@@ -1607,7 +1608,7 @@ Used for RAID Takeover
.ig
.SH RAID Duplication
RAID LV conversion (takeover or reshaping) can be done out\-of\-place by
RAID LV conversion (takeover or reshaping) can be done out-of-place by
copying the LV data onto new devices while changing the RAID properties.
Copying avoids modifying the original LV but requires additional devices.
Once the LV data has been copied/converted onto the new devices, there are
@@ -1626,23 +1627,23 @@ LV, leaving the original RAID LV unchanged on its original devices.
The command to start duplication is:
.B lvconvert \-\-type
.B lvconvert --type
.I RaidLevel
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
[\fB--stripes\fP \fINumber\fP \fB--stripesize\fP \fISize\fP]
.RS
.B \-\-duplicate
.B --duplicate
.IR VG / LV
[\fIPVs\fP]
.RE
.HP
.B \-\-duplicate
.B --duplicate
.br
Specifies that the LV conversion should be done out\-of\-place, copying
Specifies that the LV conversion should be done out-of-place, copying
LV data to new devices while converting.
.HP
.BR \-\-type , \-\-stripes , \-\-stripesize
.BR --type , --stripes , --stripesize
.br
Specifies the RAID properties to use when creating the copy.
@@ -1678,16 +1679,16 @@ devices (SubLV 0) or the new devices (SubLV 1).
To make the RAID LV use the data on the old devices, and drop the copy on
the new devices, specify the name of SubLV 0 (suffix _dup_0):
.B lvconvert \-\-unduplicate
.BI \-\-name
.B lvconvert --unduplicate
.BI --name
.IB LV _dup_0
.IR VG / LV
To make the RAID LV use the data copy on the new devices, and drop the old
devices, specify the name of SubLV 1 (suffix _dup_1):
.B lvconvert \-\-unduplicate
.BI \-\-name
.B lvconvert --unduplicate
.BI --name
.IB LV _dup_1
.IR VG / LV


@@ -99,21 +99,21 @@ The primary method for using lvm thin provisioning:
Create an LV that will hold thin pool data.
.B lvcreate \-n ThinDataLV \-L LargeSize VG
.B lvcreate -n ThinDataLV -L LargeSize VG
.I Example
.br
# lvcreate \-n pool0 \-L 10G vg
# lvcreate -n pool0 -L 10G vg
.SS 2. create ThinMetaLV
Create an LV that will hold thin pool metadata.
.B lvcreate \-n ThinMetaLV \-L SmallSize VG
.B lvcreate -n ThinMetaLV -L SmallSize VG
.I Example
.br
# lvcreate \-n pool0meta \-L 1G vg
# lvcreate -n pool0meta -L 1G vg
# lvs
LV VG Attr LSize
@@ -129,17 +129,17 @@ ThinMetaLV is renamed to hidden ThinPoolLV_tmeta.
The new ThinPoolLV takes the previous name of ThinDataLV.
.fi
.B lvconvert \-\-type thin-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
.B lvconvert --type thin-pool --poolmetadata VG/ThinMetaLV VG/ThinDataLV
.I Example
.br
# lvconvert \-\-type thin-pool \-\-poolmetadata vg/pool0meta vg/pool0
# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
# lvs vg/pool0
LV VG Attr LSize Pool Origin Data% Meta%
pool0 vg twi-a-tz-- 10.00g 0.00 0.00
# lvs \-a
# lvs -a
LV VG Attr LSize
pool0 vg twi-a-tz-- 10.00g
[pool0_tdata] vg Twi-ao---- 10.00g
@@ -157,17 +157,17 @@ The --thinpool argument specifies which thin pool will
contain the ThinLV.
.fi
.B lvcreate \-n ThinLV \-V VirtualSize \-\-thinpool ThinPoolLV VG
.B lvcreate -n ThinLV -V VirtualSize --thinpool ThinPoolLV VG
.I Example
.br
Create a thin LV in a thin pool:
.br
# lvcreate \-n thin1 \-V 1T \-\-thinpool pool0 vg
# lvcreate -n thin1 -V 1T --thinpool pool0 vg
Create another thin LV in the same thin pool:
.br
# lvcreate \-n thin2 \-V 1T \-\-thinpool pool0 vg
# lvcreate -n thin2 -V 1T --thinpool pool0 vg
# lvs vg/thin1 vg/thin2
LV VG Attr LSize Pool Origin Data%
@@ -179,28 +179,28 @@ Create another thin LV in the same thin pool:
Create snapshots of an existing ThinLV or SnapLV.
.br
Do not specify
.BR \-L ", " \-\-size
.BR -L ", " --size
when creating a thin snapshot.
.br
A size argument will cause an old COW snapshot to be created.
.B lvcreate \-n SnapLV \-\-snapshot VG/ThinLV
.B lvcreate -n SnapLV --snapshot VG/ThinLV
.br
.B lvcreate \-n SnapLV \-\-snapshot VG/PrevSnapLV
.B lvcreate -n SnapLV --snapshot VG/PrevSnapLV
.I Example
.br
Create first snapshot of an existing ThinLV:
.br
# lvcreate \-n thin1s1 \-s vg/thin1
# lvcreate -n thin1s1 -s vg/thin1
Create second snapshot of the same ThinLV:
.br
# lvcreate \-n thin1s2 \-s vg/thin1
# lvcreate -n thin1s2 -s vg/thin1
Create a snapshot of the first snapshot:
.br
# lvcreate \-n thin1s1s1 \-s vg/thin1s1
# lvcreate -n thin1s1s1 -s vg/thin1s1
# lvs vg/thin1s1 vg/thin1s2 vg/thin1s1s1
LV VG Attr LSize Pool Origin
@@ -211,14 +211,14 @@ Create a snapshot of the first snapshot:
.SS 6. activate SnapLV
Thin snapshots are created with the persistent "activation skip"
flag, indicated by the "k" attribute. Use \-K with lvchange
flag, indicated by the "k" attribute. Use -K with lvchange
or vgchange to activate thin snapshots with the "k" attribute.
.B lvchange \-ay \-K VG/SnapLV
.B lvchange -ay -K VG/SnapLV
.I Example
.br
# lvchange \-ay \-K vg/thin1s1
# lvchange -ay -K vg/thin1s1
# lvs vg/thin1s1
LV VG Attr LSize Pool Origin
@@ -226,7 +226,7 @@ or vgchange to activate thin snapshots with the "k" attribute.
.SH Thin Topics
.B Alternate syntax for specifying type thin\-pool
.B Alternate syntax for specifying type thin-pool
.br
.B Automatic pool metadata LV
.br
@@ -286,17 +286,17 @@ A thin data LV can be converted to a thin pool LV without specifying a
thin pool metadata LV. LVM automatically creates a metadata LV from the
same VG.
.B lvcreate \-n ThinDataLV \-L LargeSize VG
.B lvcreate -n ThinDataLV -L LargeSize VG
.br
.B lvconvert \-\-type thin\-pool VG/ThinDataLV
.B lvconvert --type thin-pool VG/ThinDataLV
.I Example
.br
.nf
# lvcreate \-n pool0 \-L 10G vg
# lvconvert \-\-type thin\-pool vg/pool0
# lvcreate -n pool0 -L 10G vg
# lvconvert --type thin-pool vg/pool0
# lvs \-a
# lvs -a
pool0 vg twi-a-tz-- 10.00g
[pool0_tdata] vg Twi-ao---- 10.00g
[pool0_tmeta] vg ewi-ao---- 16.00m
@@ -312,18 +312,18 @@ separate physical devices. To do that, specify the device name(s)
at the end of the lvcreate line. It can be especially helpful
to use fast devices for the metadata LV.
.B lvcreate \-n ThinDataLV \-L LargeSize VG LargePV
.B lvcreate -n ThinDataLV -L LargeSize VG LargePV
.br
.B lvcreate \-n ThinMetaLV \-L SmallSize VG SmallPV
.B lvcreate -n ThinMetaLV -L SmallSize VG SmallPV
.br
.B lvconvert \-\-type thin\-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
.B lvconvert --type thin-pool --poolmetadata VG/ThinMetaLV VG/ThinDataLV
.I Example
.br
.nf
# lvcreate \-n pool0 \-L 10G vg /dev/sdA
# lvcreate \-n pool0meta \-L 1G vg /dev/sdB
# lvconvert \-\-type thin\-pool \-\-poolmetadata vg/pool0meta vg/pool0
# lvcreate -n pool0 -L 10G vg /dev/sdA
# lvcreate -n pool0meta -L 1G vg /dev/sdB
# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
.fi
.BR lvm.conf (5)
@@ -340,18 +340,18 @@ controls the default PV usage for thin pool creation.
To tolerate device failures, use raid for the pool data LV and
pool metadata LV. This is especially recommended for pool metadata LVs.
.B lvcreate \-\-type raid1 \-m 1 \-n ThinMetaLV \-L SmallSize VG PVA PVB
.B lvcreate --type raid1 -m 1 -n ThinMetaLV -L SmallSize VG PVA PVB
.br
.B lvcreate \-\-type raid1 \-m 1 \-n ThinDataLV \-L LargeSize VG PVC PVD
.B lvcreate --type raid1 -m 1 -n ThinDataLV -L LargeSize VG PVC PVD
.br
.B lvconvert \-\-type thin\-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
.B lvconvert --type thin-pool --poolmetadata VG/ThinMetaLV VG/ThinDataLV
.I Example
.br
.nf
# lvcreate \-\-type raid1 \-m 1 \-n pool0 \-L 10G vg /dev/sdA /dev/sdB
# lvcreate \-\-type raid1 \-m 1 \-n pool0meta \-L 1G vg /dev/sdC /dev/sdD
# lvconvert \-\-type thin\-pool \-\-poolmetadata vg/pool0meta vg/pool0
# lvcreate --type raid1 -m 1 -n pool0 -L 10G vg /dev/sdA /dev/sdB
# lvcreate --type raid1 -m 1 -n pool0meta -L 1G vg /dev/sdC /dev/sdD
# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
.fi
@@ -361,7 +361,7 @@ pool metadata LV. This is especially recommended for pool metadata LVs.
The first time a thin pool LV is created, lvm will create a spare
metadata LV in the VG. This behavior can be controlled with the
option \-\-poolmetadataspare y|n. (Future thin pool creations will
option --poolmetadataspare y|n. (Future thin pool creations will
also attempt to create the pmspare LV if none exists.)
To create the pmspare ("pool metadata spare") LV, lvm first creates
@@ -376,11 +376,11 @@ explicitly.
.I Example
.br
.nf
# lvcreate \-n pool0 \-L 10G vg
# lvcreate \-n pool0meta \-L 1G vg
# lvconvert \-\-type thin\-pool \-\-poolmetadata vg/pool0meta vg/pool0
# lvcreate -n pool0 -L 10G vg
# lvcreate -n pool0meta -L 1G vg
# lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
# lvs \-a
# lvs -a
[lvol0_pmspare] vg ewi-------
pool0 vg twi---tz--
[pool0_tdata] vg Twi-------
@@ -424,7 +424,7 @@ details of damaged thin metadata to get the best advice on recovery.
Command to repair a thin pool:
.br
.B lvconvert \-\-repair VG/ThinPoolLV
.B lvconvert --repair VG/ThinPoolLV
Repair performs the following steps:
@@ -452,7 +452,7 @@ If metadata is manually restored with thin_repair directly,
the pool metadata LV can be manually swapped with another LV
containing new metadata:
.B lvconvert \-\-thinpool VG/ThinPoolLV \-\-poolmetadata VG/NewThinMetaLV
.B lvconvert --thinpool VG/ThinPoolLV --poolmetadata VG/NewThinMetaLV
.SS Activation of thin snapshots
@@ -474,29 +474,29 @@ by normal activation commands. The skipping behavior does not
apply to deactivation commands.
A snapshot LV with the "k" attribute can be activated using
the \-K (or \-\-ignoreactivationskip) option in addition to the
standard \-ay (or \-\-activate y) option.
the -K (or --ignoreactivationskip) option in addition to the
standard -ay (or --activate y) option.
Command to activate a thin snapshot LV:
.br
.B lvchange \-ay \-K VG/SnapLV
.B lvchange -ay -K VG/SnapLV
The persistent "activation skip" flag can be turned off during
lvcreate, or later with lvchange using the \-kn
(or \-\-setactivationskip n) option.
It can be turned on again with \-ky (or \-\-setactivationskip y).
lvcreate, or later with lvchange using the -kn
(or --setactivationskip n) option.
It can be turned on again with -ky (or --setactivationskip y).
When the "activation skip" flag is removed, normal activation
commands will activate the LV, and the \-K activation option is
commands will activate the LV, and the -K activation option is
not needed.
Command to create snapshot LV without the activation skip flag:
.br
.B lvcreate \-kn \-n SnapLV \-s VG/ThinLV
.B lvcreate -kn -n SnapLV -s VG/ThinLV
Command to remove the activation skip flag from a snapshot LV:
.br
.B lvchange \-kn VG/SnapLV
.B lvchange -kn VG/SnapLV
.BR lvm.conf (5)
.B auto_set_activation_skip
@@ -531,7 +531,7 @@ the thin pool LV.
Command to extend thin pool data space:
.br
.B lvextend \-L Size VG/ThinPoolLV
.B lvextend -L Size VG/ThinPoolLV
.I Example
.br
@@ -542,7 +542,7 @@ Command to extend thin pool data space:
pool0 vg twi-a-tz-- 10.00g 26.96
2. Double the amount of physical space in the thin pool LV.
# lvextend \-L+10G vg/pool0
# lvextend -L+10G vg/pool0
3. The percentage of used data blocks is half the previous value.
# lvs
@@ -560,24 +560,24 @@ fstrim on the file system using a thin LV.
\&
The available metadata space in a thin pool LV can be displayed
with the lvs \-o+metadata_percent command.
with the lvs -o+metadata_percent command.
Command to extend thin pool metadata space:
.br
.B lvextend \-\-poolmetadatasize Size VG/ThinPoolLV
.B lvextend --poolmetadatasize Size VG/ThinPoolLV
.I Example
.br
1. A thin pool LV is using 12.40% of its metadata blocks.
.nf
# lvs \-oname,size,data_percent,metadata_percent vg/pool0
# lvs -oname,size,data_percent,metadata_percent vg/pool0
LV LSize Data% Meta%
pool0 20.00g 13.48 12.40
.fi
2. Display a thin pool LV with its component thin data LV and thin metadata LV.
.nf
# lvs \-a \-oname,attr,size vg
# lvs -a -oname,attr,size vg
LV Attr LSize
pool0 twi-a-tz-- 20.00g
[pool0_tdata] Twi-ao---- 20.00g
@@ -586,12 +586,12 @@ Command to extend thin pool metadata space:
3. Double the amount of physical space in the thin metadata LV.
.nf
# lvextend \-\-poolmetadatasize +12M vg/pool0
# lvextend --poolmetadatasize +12M vg/pool0
.fi
4. The percentage of used metadata blocks is half the previous value.
.nf
# lvs \-a \-oname,size,data_percent,metadata_percent vg
# lvs -a -oname,size,data_percent,metadata_percent vg
LV LSize Data% Meta%
pool0 20.00g 13.48 6.20
[pool0_tdata] 20.00g
@@ -619,12 +619,12 @@ of the file system by 1%. Removing the 1G file restores the virtual
thin pool. The fstrim command restores the physical space to the thin pool.
.nf
# lvs \-a \-oname,attr,size,pool_lv,origin,data_percent,metadata_percent vg
# lvs -a -oname,attr,size,pool_lv,origin,data_percent,metadata_percent vg
LV Attr LSize Pool Origin Data% Meta%
pool0 twi-a-tz-- 10.00g 47.01 21.03
thin1 Vwi-aotz-- 100.00g pool0 2.70
# df \-h /mnt/X
# df -h /mnt/X
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg-thin1 99G 1.1G 93G 2% /mnt/X
@@ -634,7 +634,7 @@ Filesystem Size Used Avail Use% Mounted on
pool0 vg twi-a-tz-- 10.00g 57.01 25.26
thin1 vg Vwi-aotz-- 100.00g pool0 3.70
# df \-h /mnt/X
# df -h /mnt/X
/dev/mapper/vg-thin1 99G 2.1G 92G 3% /mnt/X
# rm /mnt/X/1Gfile
@@ -643,10 +643,10 @@ thin1 vg Vwi-aotz-- 100.00g pool0 3.70
pool0 vg twi-a-tz-- 10.00g 57.01 25.26
thin1 vg Vwi-aotz-- 100.00g pool0 3.70
# df \-h /mnt/X
# df -h /mnt/X
/dev/mapper/vg-thin1 99G 1.1G 93G 2% /mnt/X
# fstrim \-v /mnt/X
# fstrim -v /mnt/X
# lvs
pool0 vg twi-a-tz-- 10.00g 47.01 21.03
@@ -673,7 +673,7 @@ default.
Command to start or stop dmeventd monitoring a thin pool LV:
.br
.B lvchange \-\-monitor {y|n} VG/ThinPoolLV
.B lvchange --monitor {y|n} VG/ThinPoolLV
The current dmeventd monitoring status of a thin pool LV can be displayed
with the command lvs -o+seg_monitor.
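For example (a hedged sketch; the LV name and the reported value are
illustrative):
.br
.nf
# lvchange --monitor y vg/pool0
# lvs -o name,seg_monitor vg/pool0
  LV    Monitor
  pool0 monitored
.fi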
@@ -777,7 +777,7 @@ system. This can result in file system corruption for non-journaled file
systems that may require fsck. When a thin pool returns errors for writes
to a thin LV, any file system is subject to losing unsynced user data.
The 60 second timeout can be changed or disabled with the dm\-thin\-pool
The 60 second timeout can be changed or disabled with the dm-thin-pool
kernel module option
.B no_space_timeout.
This option sets the number of seconds that thin pools will queue writes.
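On a running system the parameter can usually be inspected or changed
through sysfs (an illustrative sketch, assuming the dm-thin-pool module is
loaded):
.br
.nf
# cat /sys/module/dm_thin_pool/parameters/no_space_timeout
60
# echo 0 > /sys/module/dm_thin_pool/parameters/no_space_timeout
.fi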
@@ -836,7 +836,7 @@ When metadata space is exhausted, the lvs command displays 100 under Meta%
for the thin pool LV:
.nf
# lvs \-o lv_name,size,data_percent,metadata_percent vg/pool0
# lvs -o lv_name,size,data_percent,metadata_percent vg/pool0
LV LSize Data% Meta%
pool0 100.00
.fi
@@ -850,11 +850,11 @@ repair.
1. Deactivate the thin pool LV, or reboot the system if this is not possible.
2. Repair thin pool with lvconvert \-\-repair.
2. Repair thin pool with lvconvert --repair.
.br
See "Metadata check and repair".
3. Extend pool metadata space with lvextend \-\-poolmetadatasize.
3. Extend pool metadata space with lvextend --poolmetadatasize.
.br
See "Manually manage free metadata space of a thin pool LV".
@@ -872,7 +872,7 @@ these presets. (See "Automatically extend thin pool LV".)
Command to extend a thin pool data LV using presets:
.br
.B lvextend \-\-use\-policies VG/ThinPoolLV
.B lvextend --use-policies VG/ThinPoolLV
The command uses these settings:
@@ -888,12 +888,12 @@ autoextend the LV by this much additional space.
To see the default values of these settings, run:
.B lvmconfig \-\-type default \-\-withcomment
.B lvmconfig --type default --withcomment
.RS
.B activation/thin_pool_autoextend_threshold
.RE
.B lvmconfig \-\-type default \-\-withcomment
.B lvmconfig --type default --withcomment
.RS
.B activation/thin_pool_autoextend_percent
.RE
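For example, the defaults can be printed directly (the values shown are
illustrative and depend on the installation):
.br
.nf
# lvmconfig --type default activation/thin_pool_autoextend_threshold
thin_pool_autoextend_threshold=100
# lvmconfig --type default activation/thin_pool_autoextend_percent
thin_pool_autoextend_percent=20
.fi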
@@ -919,12 +919,12 @@ For the profile_dir location, run:
.IP \[bu] 2
Attach the profile to an LV, using the command:
.br
.B lvchange \-\-metadataprofile ProfileName VG/ThinPoolLV
.B lvchange --metadataprofile ProfileName VG/ThinPoolLV
.IP \[bu] 2
Extend the LV using the profile settings:
.br
.B lvextend \-\-use\-policies VG/ThinPoolLV
.B lvextend --use-policies VG/ThinPoolLV
.P
@@ -954,7 +954,7 @@ file with the profile also needs to be moved.
.IP \[bu] 2
Only certain settings can be used in a VG or LV profile, see:
.br
.B lvmconfig \-\-type profilable-metadata.
.B lvmconfig --type profilable-metadata.
.IP \[bu] 2
An LV without a profile of its own will inherit the VG profile.
@@ -967,9 +967,9 @@ Remove a profile from an LV using the command:
.IP \[bu] 2
Commands can also have profiles applied to them. The settings that can be
applied to a command are different than the settings that can be applied
to a VG or LV. See lvmconfig \-\-type profilable\-command. To apply a
to a VG or LV. See lvmconfig --type profilable-command. To apply a
profile to a command, write a profile, save it in the profile directory,
and run the command using the option: \-\-commandprofile ProfileName.
and run the command using the option: --commandprofile ProfileName.
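A hedged sketch of a profile and its use (the file name, values, and pool
name are hypothetical):
.br
.nf
# cat /etc/lvm/profile/pool0extend.profile
activation {
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
# lvchange --metadataprofile pool0extend vg/pool0
# lvextend --use-policies vg/pool0
.fi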
.SS Zeroing
@@ -978,20 +978,20 @@ and run the command using the option: \-\-commandprofile ProfileName.
When a thin pool provisions a new data block for a thin LV, the
new block is first overwritten with zeros. The zeroing mode is
indicated by the "z" attribute displayed by lvs. The option \-Z
(or \-\-zero) can be added to commands to specify the zeroing mode.
indicated by the "z" attribute displayed by lvs. The option -Z
(or --zero) can be added to commands to specify the zeroing mode.
Command to set the zeroing mode when creating a thin pool LV:
.br
.B lvconvert \-\-type thin\-pool \-Z{y|n}
.B lvconvert --type thin-pool -Z{y|n}
.br
.RS
.B \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
.B --poolmetadata VG/ThinMetaLV VG/ThinDataLV
.RE
Command to change the zeroing mode of an existing thin pool LV:
.br
.B lvchange \-Z{y|n} VG/ThinPoolLV
.B lvchange -Z{y|n} VG/ThinPoolLV
If zeroing mode is changed from "n" to "y", previously provisioned
blocks are not zeroed.
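An illustrative check and change, reading the "z" attribute bit mentioned
above (names and attr strings are assumed):
.br
.nf
# lvs -o name,attr vg/pool0
  pool0 twi-a-tz--
# lvchange -Zn vg/pool0
# lvs -o name,attr vg/pool0
  pool0 twi-a-t---
.fi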
@@ -1024,27 +1024,27 @@ mode.
Command to display the current discard mode of a thin pool LV:
.br
.B lvs \-o+discards VG/ThinPoolLV
.B lvs -o+discards VG/ThinPoolLV
Command to set the discard mode when creating a thin pool LV:
.br
.B lvconvert \-\-discards {ignore|nopassdown|passdown}
.B lvconvert --discards {ignore|nopassdown|passdown}
.br
.RS
.B \-\-type thin\-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
.B --type thin-pool --poolmetadata VG/ThinMetaLV VG/ThinDataLV
.RE
Command to change the discard mode of an existing thin pool LV:
.br
.B lvchange \-\-discards {ignore|nopassdown|passdown} VG/ThinPoolLV
.B lvchange --discards {ignore|nopassdown|passdown} VG/ThinPoolLV
.I Example
.br
.nf
# lvs \-o name,discards vg/pool0
# lvs -o name,discards vg/pool0
pool0 passdown
# lvchange \-\-discards ignore vg/pool0
# lvchange --discards ignore vg/pool0
.fi
.BR lvm.conf (5)
@@ -1058,7 +1058,7 @@ controls the default discards mode used when creating a thin pool.
\&
The size of data blocks managed by a thin pool can be specified with the
\-\-chunksize option when the thin pool LV is created. The default unit
--chunksize option when the thin pool LV is created. The default unit
is KiB. The value must be a multiple of 64KiB between 64KiB and 1GiB.
When a thin pool is used primarily for the thin provisioning feature, a
@@ -1067,12 +1067,12 @@ reduces copying time and consumes less space.
Command to display the thin pool LV chunk size:
.br
.B lvs \-o+chunksize VG/ThinPoolLV
.B lvs -o+chunksize VG/ThinPoolLV
.I Example
.br
.nf
# lvs \-o name,chunksize
# lvs -o name,chunksize
pool0 64.00k
.fi
@@ -1083,7 +1083,7 @@ controls the default chunk size used when creating a thin pool.
The default value is shown by:
.br
.B lvmconfig \-\-type default allocation/thin_pool_chunk_size
.B lvmconfig --type default allocation/thin_pool_chunk_size
.SS Size of pool metadata LV
@@ -1096,10 +1096,10 @@ need a larger metadata LV. Thin pool metadata LV sizes can be from 2MiB
to 16GiB.
When using lvcreate to create what will become a thin metadata LV, the
size is specified with the \-L|\-\-size option.
size is specified with the -L|--size option.
When an LVM command automatically creates a thin metadata LV, the size is
specified with the \-\-poolmetadatasize option. When this option is not
specified with the --poolmetadatasize option. When this option is not
given, LVM automatically chooses a size based on the data size and chunk
size.
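For example, the metadata size can be fixed explicitly when the pool is
created (sizes here are only illustrative):
.br
.B lvcreate --type thin-pool -L 100G --poolmetadatasize 1G -n pool0 vg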
@@ -1119,14 +1119,14 @@ to take thin snapshots of external, read only LVs. Writes to the
snapshot are stored in the thin pool, and the external LV is used
to read unwritten parts of the thin snapshot.
.B lvcreate \-n SnapLV \-s VG/ExternalOriginLV \-\-thinpool VG/ThinPoolLV
.B lvcreate -n SnapLV -s VG/ExternalOriginLV --thinpool VG/ThinPoolLV
.I Example
.br
.nf
# lvchange \-an vg/lve
# lvchange \-\-permission r vg/lve
# lvcreate \-n snaplve \-s vg/lve \-\-thinpool vg/pool0
# lvchange -an vg/lve
# lvchange --permission r vg/lve
# lvcreate -n snaplve -s vg/lve --thinpool vg/pool0
# lvs vg/lve vg/snaplve
LV VG Attr LSize Pool Origin Data%
@@ -1144,29 +1144,29 @@ standard LV. At the same time, the existing LV is converted to a
read only external LV with a new name. Unwritten portions of the
thin LV are read from the external LV.
The new name given to the existing LV can be specified with
\-\-originname, otherwise the existing LV will be given a default
--originname, otherwise the existing LV will be given a default
name, e.g. lvol#.
Convert ExampleLV into a read only external LV with the new name
NewExternalOriginLV, and create a new thin LV that is given the previous
name of ExampleLV.
.B lvconvert \-\-type thin \-\-thinpool VG/ThinPoolLV
.B lvconvert --type thin --thinpool VG/ThinPoolLV
.br
.RS
.B \-\-originname NewExternalOriginLV VG/ExampleLV
.B --originname NewExternalOriginLV VG/ExampleLV
.RE
.I Example
.br
.nf
# lvcreate \-n lv_example \-L 10G vg
# lvcreate -n lv_example -L 10G vg
# lvs
lv_example vg -wi-a----- 10.00g
# lvconvert \-\-type thin \-\-thinpool vg/pool0
\-\-originname lv_external \-\-thin vg/lv_example
# lvconvert --type thin --thinpool vg/pool0
--originname lv_external --thin vg/lv_example
# lvs
LV VG Attr LSize Pool Origin
@@ -1184,18 +1184,18 @@ rather than using lvconvert on existing LVs.
This one command creates a thin data LV, a thin metadata LV,
and combines the two into a thin pool LV.
.B lvcreate \-\-type thin\-pool \-L LargeSize \-n ThinPoolLV VG
.B lvcreate --type thin-pool -L LargeSize -n ThinPoolLV VG
.I Example
.br
.nf
# lvcreate \-\-type thin\-pool \-L8M -n pool0 vg
# lvcreate --type thin-pool -L8M -n pool0 vg
# lvs vg/pool0
LV VG Attr LSize Pool Origin Data%
pool0 vg twi-a-tz-- 8.00m 0.00
# lvs \-a
# lvs -a
pool0 vg twi-a-tz-- 8.00m
[pool0_tdata] vg Twi-ao---- 8.00m
[pool0_tmeta] vg ewi-ao---- 8.00m
@@ -1211,27 +1211,27 @@ lvcreate command. This one command creates a thin data LV,
a thin metadata LV, combines the two into a thin pool LV,
and creates a thin LV in the new pool.
.br
\-L LargeSize specifies the physical size of the thin pool LV.
-L LargeSize specifies the physical size of the thin pool LV.
.br
\-V VirtualSize specifies the virtual size of the thin LV.
-V VirtualSize specifies the virtual size of the thin LV.
.B lvcreate \-\-type thin \-V VirtualSize \-L LargeSize
.B lvcreate --type thin -V VirtualSize -L LargeSize
.RS
.B \-n ThinLV \-\-thinpool VG/ThinPoolLV
.B -n ThinLV --thinpool VG/ThinPoolLV
.RE
Equivalent to:
.br
.B lvcreate \-\-type thin\-pool \-L LargeSize VG/ThinPoolLV
.B lvcreate --type thin-pool -L LargeSize VG/ThinPoolLV
.br
.B lvcreate \-n ThinLV \-V VirtualSize \-\-thinpool VG/ThinPoolLV
.B lvcreate -n ThinLV -V VirtualSize --thinpool VG/ThinPoolLV
.I Example
.br
.nf
# lvcreate \-L8M \-V2G \-n thin1 \-\-thinpool vg/pool0
# lvcreate -L8M -V2G -n thin1 --thinpool vg/pool0
# lvs \-a
# lvs -a
pool0 vg twi-a-tz-- 8.00m
[pool0_tdata] vg Twi-ao---- 8.00m
[pool0_tmeta] vg ewi-ao---- 8.00m
@@ -1244,7 +1244,7 @@ Equivalent to:
\&
A thin snapshot can be merged into its origin thin LV using the lvconvert
\-\-merge command. The result of a snapshot merge is that the origin thin
--merge command. The result of a snapshot merge is that the origin thin
LV takes the content of the snapshot LV, and the snapshot LV is removed.
Any content that was unique to the origin thin LV is lost after the merge.
@@ -1253,7 +1253,7 @@ LVs are open, e.g. mounted. If a merge is initiated while the LVs are open,
the effect of the merge is delayed until the origin thin LV is next
activated.
.B lvconvert \-\-merge VG/SnapLV
.B lvconvert --merge VG/SnapLV
.I Example
.br
@@ -1264,7 +1264,7 @@ activated.
thin1 vg Vwi-a-tz-- 100.00g pool0
thin1s1 vg Vwi-a-tz-k 100.00g pool0 thin1
# lvconvert \-\-merge vg/thin1s1
# lvconvert --merge vg/thin1s1
# lvs vg
LV VG Attr LSize Pool Origin
@@ -1292,7 +1292,7 @@ file1 file2 file3
# ls /mnt/Xs
file3 file4 file5
# lvconvert \-\-merge vg/thin1s1
# lvconvert --merge vg/thin1s1
Logical volume vg/thin1s1 contains a filesystem in use.
Delaying merge since snapshot is open.
Merging of thin snapshot thin1s1 will occur on next activation.
@@ -1300,7 +1300,7 @@ Merging of thin snapshot thin1s1 will occur on next activation.
# umount /mnt/X
# umount /mnt/Xs
# lvs \-a vg
# lvs -a vg
LV VG Attr LSize Pool Origin
pool0 vg twi-a-tz-- 10.00g
[pool0_tdata] vg Twi-ao---- 10.00g
@@ -1308,8 +1308,8 @@ Merging of thin snapshot thin1s1 will occur on next activation.
thin1 vg Owi-a-tz-- 100.00g pool0
[thin1s1] vg Swi-a-tz-k 100.00g pool0 thin1
# lvchange \-an vg/thin1
# lvchange \-ay vg/thin1
# lvchange -an vg/thin1
# lvchange -ay vg/thin1
# mount /dev/vg/thin1 /mnt/X
@@ -1330,18 +1330,18 @@ file system on the origin LV.
If the snapshot LV is writable, mounting will recover the log to clear the
dummy transaction, but will require skipping the uuid check:
mount /dev/VG/SnapLV /mnt \-o nouuid
mount /dev/VG/SnapLV /mnt -o nouuid
Or, the uuid can be changed on disk before mounting:
xfs_admin \-U generate /dev/VG/SnapLV
xfs_admin -U generate /dev/VG/SnapLV
.br
mount /dev/VG/SnapLV /mnt
If the snapshot LV is readonly, the log recovery and uuid check need to be
skipped while mounting readonly:
mount /dev/VG/SnapLV /mnt \-o ro,nouuid,norecovery
mount /dev/VG/SnapLV /mnt -o ro,nouuid,norecovery
.SH SEE ALSO
.BR lvm (8),


@@ -1,5 +1,5 @@
lvreduce reduces the size of an LV. The freed logical extents are returned
to the VG to be used by other LVs. A copy\-on\-write snapshot LV can also
to the VG to be used by other LVs. A copy-on-write snapshot LV can also
be reduced if less space is needed to hold COW blocks. Use
\fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.


@@ -2,4 +2,4 @@
Reduce the size of an LV by 3 logical extents:
.br
.B lvreduce \-l \-3 vg00/lvol1
.B lvreduce -l -3 vg00/lvol1


@@ -1,6 +1,6 @@
.TH LVREDUCE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVREDUCE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvreduce \- Reduce the size of a logical volume
lvreduce - Reduce the size of a logical volume
.
.SH SYNOPSIS
\fBlvreduce\fP \fIoption_args\fP \fIposition_args\fP
@@ -9,7 +9,7 @@ lvreduce \- Reduce the size of a logical volume
.br
.SH DESCRIPTION
lvreduce reduces the size of an LV. The freed logical extents are returned
to the VG to be used by other LVs. A copy\-on\-write snapshot LV can also
to the VG to be used by other LVs. A copy-on-write snapshot LV can also
be reduced if less space is needed to hold COW blocks. Use
\fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.
@@ -314,7 +314,7 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
Reduce the size of an LV by 3 logical extents:
.br
.B lvreduce \-l \-3 vg00/lvol1
.B lvreduce -l -3 vg00/lvol1
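Where the LV holds a file system, the -r|--resizefs option can shrink the
file system and the LV in a single step (an illustrative variant; the size
is assumed):
.br
.B lvreduce -r -L10G vg00/lvol1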
.SH SEE ALSO
.BR lvm (8)


@@ -1,7 +1,7 @@
.SH EXAMPLES
Remove an active LV without asking for confirmation.
.br
.B lvremove \-f vg00/lvol1
.B lvremove -f vg00/lvol1
Remove all LVs in the specified VG.
.br


@@ -1,6 +1,6 @@
.TH LVREMOVE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVREMOVE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvremove \- Remove logical volume(s) from the system
lvremove - Remove logical volume(s) from the system
.
.SH SYNOPSIS
\fBlvremove\fP \fIposition_args\fP
@@ -304,7 +304,7 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
.SH EXAMPLES
Remove an active LV without asking for confirmation.
.br
.B lvremove \-f vg00/lvol1
.B lvremove -f vg00/lvol1
Remove all LVs in the specified VG.
.br


@@ -1,6 +1,6 @@
.TH LVRENAME 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVRENAME 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvrename \- Rename a logical volume
lvrename - Rename a logical volume
.
.SH SYNOPSIS
\fBlvrename\fP \fIposition_args\fP


@@ -2,4 +2,4 @@
Extend an LV by 16MB using specific physical extents:
.br
.B lvresize \-L+16M vg1/lv1 /dev/sda:0\-1 /dev/sdb:0\-1
.B lvresize -L+16M vg1/lv1 /dev/sda:0-1 /dev/sdb:0-1


@@ -1,6 +1,6 @@
.TH LVRESIZE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVRESIZE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvresize \- Resize a logical volume
lvresize - Resize a logical volume
.
.SH SYNOPSIS
\fBlvresize\fP \fIoption_args\fP \fIposition_args\fP
@@ -470,7 +470,7 @@ When creating a RAID 4/5/6 LV, this number does not include the extra
devices that are required for parity. The largest number depends on
the RAID type (raid0: 64, raid10: 32, raid4/5: 63, raid6: 62), and
when unspecified, the default depends on the RAID type
(raid0: 2, raid10: 4, raid4/5: 3, raid6: 5.)
(raid0: 2, raid10: 2, raid4/5: 3, raid6: 5.)
To stripe a new raid LV across all PVs by default,
see lvm.conf allocation/raid_stripe_all_devices.
.ad b
@@ -569,7 +569,7 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
Extend an LV by 16MB using specific physical extents:
.br
.B lvresize \-L+16M vg1/lv1 /dev/sda:0\-1 /dev/sdb:0\-1
.B lvresize -L+16M vg1/lv1 /dev/sda:0-1 /dev/sdb:0-1
.SH SEE ALSO
.BR lvm (8)


@@ -1,6 +1,6 @@
.TH LVS 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVS 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvs \- Display information about logical volumes
lvs - Display information about logical volumes
.
.SH SYNOPSIS
\fBlvs\fP


@@ -1,6 +1,6 @@
.TH LVSCAN 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH LVSCAN 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
lvscan \- List all logical volumes in all volume groups
lvscan - List all logical volumes in all volume groups
.
.SH SYNOPSIS
\fBlvscan\fP \fIoption_args\fP


@@ -3,4 +3,4 @@
Disallow the allocation of physical extents on a PV (e.g. because of
disk errors, or because it will be removed after freeing it).
.br
.B pvchange \-x n /dev/sdk1
.B pvchange -x n /dev/sdk1


@@ -1,6 +1,6 @@
.TH PVCHANGE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVCHANGE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvchange \- Change attributes of physical volume(s)
pvchange - Change attributes of physical volume(s)
.
.SH SYNOPSIS
\fBpvchange\fP \fIoption_args\fP \fIposition_args\fP
@@ -369,7 +369,7 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
Disallow the allocation of physical extents on a PV (e.g. because of
disk errors, or because it will be removed after freeing it).
.br
.B pvchange \-x n /dev/sdk1
.B pvchange -x n /dev/sdk1
.SH SEE ALSO
.BR lvm (8)


@@ -2,7 +2,7 @@
If the partition table is corrupted or lost on /dev/sda, and you suspect
there was an LVM partition at approximately 100 MiB, then this
area of the disk can be scanned using the \fB\-\-labelsector\fP
area of the disk can be scanned using the \fB--labelsector\fP
parameter with a value of 204800 (100 * 1024 * 1024 / 512 = 204800).
.br
.B pvck \-\-labelsector 204800 /dev/sda
.B pvck --labelsector 204800 /dev/sda


@@ -1,6 +1,6 @@
.TH PVCK 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVCK 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvck \- Check the consistency of physical volume(s)
pvck - Check the consistency of physical volume(s)
.
.SH SYNOPSIS
\fBpvck\fP \fIposition_args\fP
@@ -204,10 +204,10 @@ For example, LVM_VG_NAME can generally be substituted for a required VG paramete
If the partition table is corrupted or lost on /dev/sda, and you suspect
there was an LVM partition at approximately 100 MiB, then this
area of the disk can be scanned using the \fB\-\-labelsector\fP
area of the disk can be scanned using the \fB--labelsector\fP
parameter with a value of 204800 (100 * 1024 * 1024 / 512 = 204800).
.br
.B pvck \-\-labelsector 204800 /dev/sda
.B pvck --labelsector 204800 /dev/sda
.SH SEE ALSO
.BR lvm (8)


@@ -9,4 +9,4 @@ partitioning (sector 7 is the lowest aligned logical block, the 4KiB
sectors start at LBA -1, and consequently sector 63 is aligned on a 4KiB
boundary) manually account for this when initializing for use by LVM.
.br
.B pvcreate \-\-dataalignmentoffset 7s /dev/sdb
.B pvcreate --dataalignmentoffset 7s /dev/sdb


@@ -1,6 +1,6 @@
.TH PVCREATE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVCREATE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvcreate \- Initialize physical volume(s) for use by LVM
pvcreate - Initialize physical volume(s) for use by LVM
.
.SH SYNOPSIS
\fBpvcreate\fP \fIposition_args\fP
@@ -420,7 +420,7 @@ partitioning (sector 7 is the lowest aligned logical block, the 4KiB
sectors start at LBA -1, and consequently sector 63 is aligned on a 4KiB
boundary) manually account for this when initializing for use by LVM.
.br
.B pvcreate \-\-dataalignmentoffset 7s /dev/sdb
.B pvcreate --dataalignmentoffset 7s /dev/sdb
.SH SEE ALSO
.BR lvm (8)

View File

@@ -1,6 +1,6 @@
.TH PVDISPLAY 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVDISPLAY 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvdisplay \- Display various attributes of physical volume(s)
pvdisplay - Display various attributes of physical volume(s)
.
.SH SYNOPSIS
\fBpvdisplay\fP


@@ -33,7 +33,7 @@ Note that this new process cannot support the original LVM1
type of on-disk metadata. Metadata can be converted using
\fBvgconvert\fP(8).
If the \fB\-\-atomic\fP option is used, a slightly different approach is
If the \fB--atomic\fP option is used, a slightly different approach is
used for the move. Again, a temporary 'pvmove' LV is created to store the
details of all the data movements required. This temporary LV contains
all the segments of the various LVs that need to be moved. However, in
@@ -57,13 +57,13 @@ Use a specific destination PV when moving physical extents.
Move extents belonging to a single LV.
.br
.B pvmove \-n lvol1 /dev/sdb1 /dev/sdc1
.B pvmove -n lvol1 /dev/sdb1 /dev/sdc1
Rather than moving the contents of an entire device, it is possible to
move a range of physical extents, for example numbers 1000 to 1999
inclusive on the specified PV.
.br
.B pvmove /dev/sdb1:1000\-1999
.B pvmove /dev/sdb1:1000-1999
A range of physical extents to move can be specified as start+length. For
example, starting from PE 1000. (Counting starts from 0, so this refers to the
@@ -74,18 +74,18 @@ example, starting from PE 1000. (Counting starts from 0, so this refers to the
Move a range of physical extents to a specific PV (which must have
sufficient free extents).
.br
.B pvmove /dev/sdb1:1000\-1999 /dev/sdc1
.B pvmove /dev/sdb1:1000-1999 /dev/sdc1
Move a range of physical extents to specific new extents on a new PV.
.br
.B pvmove /dev/sdb1:1000\-1999 /dev/sdc1:0\-999
.B pvmove /dev/sdb1:1000-1999 /dev/sdc1:0-999
If the source and destination are on the same disk, the
\fBanywhere\fP allocation policy is needed.
.br
.B pvmove \-\-alloc anywhere /dev/sdb1:1000\-1999 /dev/sdb1:0\-999
.B pvmove --alloc anywhere /dev/sdb1:1000-1999 /dev/sdb1:0-999
The part of a specific LV present within a range of physical
extents can also be picked out and moved.
.br
.B pvmove \-n lvol1 /dev/sdb1:1000\-1999 /dev/sdc1
.B pvmove -n lvol1 /dev/sdb1:1000-1999 /dev/sdc1
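For the atomic mode described above, the same kind of move can be requested
as a single transaction (an illustrative sketch):
.br
.B pvmove --atomic /dev/sdb1 /dev/sdc1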


@@ -1,6 +1,6 @@
.TH PVMOVE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVMOVE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvmove \- Move extents from one physical volume to another
pvmove - Move extents from one physical volume to another
.
.SH SYNOPSIS
\fBpvmove\fP \fIposition_args\fP
@@ -378,7 +378,7 @@ Note that this new process cannot support the original LVM1
type of on-disk metadata. Metadata can be converted using
\fBvgconvert\fP(8).
If the \fB\-\-atomic\fP option is used, a slightly different approach is
If the \fB--atomic\fP option is used, a slightly different approach is
used for the move. Again, a temporary 'pvmove' LV is created to store the
details of all the data movements required. This temporary LV contains
all the segments of the various LVs that need to be moved. However, in
@@ -402,13 +402,13 @@ Use a specific destination PV when moving physical extents.
Move extents belonging to a single LV.
.br
.B pvmove \-n lvol1 /dev/sdb1 /dev/sdc1
.B pvmove -n lvol1 /dev/sdb1 /dev/sdc1
Rather than moving the contents of an entire device, it is possible to
move a range of physical extents, for example numbers 1000 to 1999
inclusive on the specified PV.
.br
.B pvmove /dev/sdb1:1000\-1999
.B pvmove /dev/sdb1:1000-1999
A range of physical extents to move can be specified as start+length. For
example, starting from PE 1000. (Counting starts from 0, so this refers to the
@@ -419,21 +419,21 @@ example, starting from PE 1000. (Counting starts from 0, so this refers to the
Move a range of physical extents to a specific PV (which must have
sufficient free extents).
.br
.B pvmove /dev/sdb1:1000\-1999 /dev/sdc1
.B pvmove /dev/sdb1:1000-1999 /dev/sdc1
Move a range of physical extents to specific new extents on a new PV.
.br
.B pvmove /dev/sdb1:1000\-1999 /dev/sdc1:0\-999
.B pvmove /dev/sdb1:1000-1999 /dev/sdc1:0-999
If the source and destination are on the same disk, the
\fBanywhere\fP allocation policy is needed.
.br
.B pvmove \-\-alloc anywhere /dev/sdb1:1000\-1999 /dev/sdb1:0\-999
.B pvmove --alloc anywhere /dev/sdb1:1000-1999 /dev/sdb1:0-999
The part of a specific LV present within a range of physical
extents can also be picked out and moved.
.br
.B pvmove \-n lvol1 /dev/sdb1:1000\-1999 /dev/sdc1
.B pvmove -n lvol1 /dev/sdb1:1000-1999 /dev/sdc1
.SH SEE ALSO
.BR lvm (8)


@@ -1,6 +1,6 @@
.TH PVREMOVE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVREMOVE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvremove \- Remove LVM label(s) from physical volume(s)
pvremove - Remove LVM label(s) from physical volume(s)
.
.SH SYNOPSIS
\fBpvremove\fP \fIposition_args\fP


@@ -9,4 +9,4 @@ Expand a PV after enlarging the partition.
Shrink a PV prior to shrinking the partition (ensure that the PV size is
appropriate for the intended new partition size).
.br
.B pvresize \-\-setphysicalvolumesize 40G /dev/sda1
.B pvresize --setphysicalvolumesize 40G /dev/sda1


@@ -1,6 +1,6 @@
.TH PVRESIZE 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVRESIZE 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvresize \- Resize physical volume(s)
pvresize - Resize physical volume(s)
.
.SH SYNOPSIS
\fBpvresize\fP \fIposition_args\fP
@@ -225,7 +225,7 @@ Expand a PV after enlarging the partition.
Shrink a PV prior to shrinking the partition (ensure that the PV size is
appropriate for the intended new partition size).
.br
.B pvresize \-\-setphysicalvolumesize 40G /dev/sda1
.B pvresize --setphysicalvolumesize 40G /dev/sda1
.SH SEE ALSO
.BR lvm (8)


@@ -1,6 +1,6 @@
.TH PVS 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVS 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvs \- Display information about physical volumes
pvs - Display information about physical volumes
.
.SH SYNOPSIS
\fBpvs\fP


@@ -20,19 +20,19 @@ commands.
When lvmetad is used, LVM commands avoid scanning disks by reading
metadata from lvmetad. When new disks appear, they must be scanned so
their metadata can be cached in lvmetad. This is done by the command
pvscan \-\-cache, which scans disks and passes the metadata to lvmetad.
pvscan --cache, which scans disks and passes the metadata to lvmetad.
The pvscan \-\-cache command is typically run automatically by system
The pvscan --cache command is typically run automatically by system
services when a new device appears. Users do not generally need to run
this command if the system and lvmetad are running properly.
Many scripts contain unnecessary pvscan (or vgscan) commands for
historical reasons. To avoid disrupting the system with extraneous disk
scanning, an ordinary pvscan (without \-\-cache) will simply read metadata
scanning, an ordinary pvscan (without --cache) will simply read metadata
from lvmetad like other LVM commands. It does not do anything beyond
displaying the current state of the cache.
.IP \[bu] 2
When given specific device name arguments, pvscan \-\-cache will only
When given specific device name arguments, pvscan --cache will only
read the named devices.
.IP \[bu] 2
LVM udev rules and systemd services are used to initiate automatic device
@@ -50,7 +50,7 @@ For more information, see:
.IP \[bu] 2
If lvmetad is started or restarted after devices are visible, or
if the global_filter has changed, then all devices must be rescanned
for metadata with the command pvscan \-\-cache.
for metadata with the command pvscan --cache.
.IP \[bu] 2
lvmetad does not cache older metadata formats, e.g. lvm1, and will
be temporarily disabled if they are seen.
@@ -62,9 +62,9 @@ minor numbers must be given, not the path.
When event-driven system services detect a new LVM device, the first step
is to automatically scan and cache the metadata from the device. This is
done by pvscan \-\-cache. A second step is to automatically activate LVs
done by pvscan --cache. A second step is to automatically activate LVs
that are present on the new device. This auto-activation is done by the
same pvscan \-\-cache command when the option '\-a|\-\-activate ay' is
same pvscan --cache command when the option '-a|--activate ay' is
included.
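The form typically issued by the event services therefore resembles the
following (an assumed illustration; the device argument varies):
.br
.B pvscan --cache -aay /dev/sdc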
Auto-activation of VGs or LVs can be enabled/disabled using:
@@ -82,7 +82,7 @@ fully integrated with the event-driven system services.)
When a VG or LV is not auto-activated, traditional activation using
vgchange or lvchange -a|--activate is needed.
.IP \[bu] 2
pvscan auto-activation can only be done in combination with \-\-cache.
pvscan auto-activation can only be done in combination with --cache.
.IP \[bu] 2
Auto-activation is designated by the "a" argument in '-a|--activate ay'.
This is meant to distinguish system generated commands from explicit user

View File

@@ -1,6 +1,6 @@
.TH PVSCAN 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH PVSCAN 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
pvscan \- List all physical volumes
pvscan - List all physical volumes
.
.SH SYNOPSIS
\fBpvscan\fP \fIoption_args\fP
@@ -32,19 +32,19 @@ commands.
When lvmetad is used, LVM commands avoid scanning disks by reading
metadata from lvmetad. When new disks appear, they must be scanned so
their metadata can be cached in lvmetad. This is done by the command
pvscan \-\-cache, which scans disks and passes the metadata to lvmetad.
pvscan --cache, which scans disks and passes the metadata to lvmetad.
The pvscan \-\-cache command is typically run automatically by system
The pvscan --cache command is typically run automatically by system
services when a new device appears. Users do not generally need to run
this command if the system and lvmetad are running properly.
Many scripts contain unnecessary pvscan (or vgscan) commands for
historical reasons. To avoid disrupting the system with extraneous disk
scanning, an ordinary pvscan (without \-\-cache) will simply read metadata
scanning, an ordinary pvscan (without --cache) will simply read metadata
from lvmetad like other LVM commands. It does not do anything beyond
displaying the current state of the cache.
.IP \[bu] 2
When given specific device name arguments, pvscan \-\-cache will only
When given specific device name arguments, pvscan --cache will only
read the named devices.
.IP \[bu] 2
LVM udev rules and systemd services are used to initiate automatic device
@@ -62,7 +62,7 @@ For more information, see:
.IP \[bu] 2
If lvmetad is started or restarted after devices are visible, or
if the global_filter has changed, then all devices must be rescanned
for metadata with the command pvscan \-\-cache.
for metadata with the command pvscan --cache.
.IP \[bu] 2
lvmetad does not cache older metadata formats, e.g. lvm1, and will
be temporarily disabled if they are seen.
@@ -74,9 +74,9 @@ minor numbers must be given, not the path.
When event-driven system services detect a new LVM device, the first step
is to automatically scan and cache the metadata from the device. This is
done by pvscan \-\-cache. A second step is to automatically activate LVs
done by pvscan --cache. A second step is to automatically activate LVs
that are present on the new device. This auto-activation is done by the
same pvscan \-\-cache command when the option '\-a|\-\-activate ay' is
same pvscan --cache command when the option '-a|--activate ay' is
included.
Auto-activation of VGs or LVs can be enabled/disabled using:
@@ -94,7 +94,7 @@ fully integrated with the event-driven system services.)
When a VG or LV is not auto-activated, traditional activation using
vgchange or lvchange -a|--activate is needed.
.IP \[bu] 2
pvscan auto-activation can only be done in combination with \-\-cache.
pvscan auto-activation can only be done in combination with --cache.
.IP \[bu] 2
Auto-activation is designated by the "a" argument in '-a|--activate ay'.
This is meant to distinguish system generated commands from explicit user


@@ -6,7 +6,7 @@ files.
In a default installation, each VG is backed up into a separate file
bearing the name of the VG in the directory \fI#DEFAULT_BACKUP_DIR#\fP.
To use an alternative back up file, use \fB\-f\fP. In this case, when
To use an alternative back up file, use \fB-f\fP. In this case, when
backing up multiple VGs, the file name is treated as a template, with %s
replaced by the VG name.
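For example (an illustrative invocation; the path and VG names are assumed):
.br
.B vgcfgbackup -f /root/backup/%s.vg vg0 vg1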


@@ -1,6 +1,6 @@
.TH VGCFGBACKUP 8 "LVM TOOLS 2.02.169(2)-git (2016-11-30)" "Red Hat, Inc."
.TH VGCFGBACKUP 8 "LVM TOOLS #VERSION#" "Red Hat, Inc."
.SH NAME
vgcfgbackup \- Backup volume group configuration(s)
vgcfgbackup - Backup volume group configuration(s)
.
.SH SYNOPSIS
\fBvgcfgbackup\fP
@@ -16,16 +16,16 @@ See \fBvgcfgrestore\fP for information on using the back up
files.
In a default installation, each VG is backed up into a separate file
bearing the name of the VG in the directory \fI/etc/lvm/backup\fP.
bearing the name of the VG in the directory \fI#DEFAULT_BACKUP_DIR#\fP.
To use an alternative back up file, use \fB\-f\fP. In this case, when
To use an alternative back up file, use \fB-f\fP. In this case, when
backing up multiple VGs, the file name is treated as a template, with %s
replaced by the VG name.
NB. This DOES NOT back up the data content of LVs.
It may also be useful to regularly back up the files in
\fI/etc/lvm\fP.
\fI#DEFAULT_SYS_DIR#\fP.
.SH USAGE
\fBvgcfgbackup\fP
.br


@@ -2,8 +2,8 @@ vgcfgrestore restores the metadata of a VG from a text back up file
produced by \fBvgcfgbackup\fP. This writes VG metadata onto the devices
specified in the back up file.
A back up file can be specified with \fB\-\-file\fP. If no back up file is
specified, the most recent one is used. Use \fB\-\-list\fP for a list of
A back up file can be specified with \fB--file\fP. If no back up file is
specified, the most recent one is used. Use \fB--list\fP for a list of
the available back up and archive files of a VG.
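An illustrative sequence (the archive file name shown is hypothetical):
.br
.nf
# vgcfgrestore --list vg0
# vgcfgrestore --file /etc/lvm/archive/vg0_00005-1234567890.vg vg0
.fi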
WARNING: When a VG contains thin pools, changes to thin metadata cannot be


@@ -1,9 +1,9 @@
.SH NOTES
To replace PVs, \fBvgdisplay \-\-partial \-\-verbose\fP will show the
To replace PVs, \fBvgdisplay --partial --verbose\fP will show the
UUIDs and sizes of any PVs that are no longer present. If a PV in the VG
is lost and you wish to substitute another of the same size, use
\fBpvcreate \-\-restorefile filename \-\-uuid uuid\fP (plus additional
\fBpvcreate --restorefile filename --uuid uuid\fP (plus additional
arguments as appropriate) to initialise it with the same UUID as the
missing PV. Repeat for all other missing PVs in the VG. Then use
\fBvgcfgrestore \-\-file filename\fP to restore the VG's metadata.
\fBvgcfgrestore --file filename\fP to restore the VG's metadata.
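Sketching that sequence with placeholder values (the UUID and file names
must be taken from the vgdisplay output and the back up directory):
.br
.nf
# vgdisplay --partial --verbose vg0
# pvcreate --restorefile /etc/lvm/backup/vg0 --uuid MissingPVUUID /dev/sdb1
# vgcfgrestore --file /etc/lvm/backup/vg0 vg0
.fi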
