mirror of git://sourceware.org/git/lvm2.git synced 2025-10-05 07:33:15 +03:00

Compare commits


170 Commits

Author SHA1 Message Date
David Teigland
725526419a generate man pages 2017-02-10 15:04:15 -06:00
David Teigland
fc2d1fe5f0 args: use arg parsing function for region size
Consolidate the validation of the region size arg
in a new arg parsing function.
2017-02-10 14:07:27 -06:00
David Teigland
8207fcd664 lvconvert: remove code for changing region size
from the generic raid type conversion code.
2017-02-10 14:07:27 -06:00
David Teigland
5499fc3bb0 lvconvert: add command to change region size of a raid LV 2017-02-10 14:07:27 -06:00
David Teigland
7db338f51e vgchange: fix uint32 parsing of logicalvolume arg 2017-02-10 14:07:27 -06:00
David Teigland
2614f0b21d commands: recognize raid variations 2017-02-10 14:07:27 -06:00
David Teigland
4b0db1c54f commands: move command def parsing into lvm binary
It was previously done at build time by the ccmd binary.
2017-02-10 14:07:22 -06:00
David Teigland
a15bab6981 lvconvert: remove unused code
For "split" which is not an alias for splitmirrors.
2017-02-10 11:34:12 -06:00
David Teigland
cd16848d5e args: split is a synonym for splitcache
also tidy the other synonyms
2017-02-10 11:29:54 -06:00
David Teigland
c062fb0c11 man: lvmthin updates
Some minor changes to some of the command syntaxes
to use more standard forms.
2017-02-10 11:29:49 -06:00
David Teigland
4bafe21f2a ccmd: split into multiple files 2017-02-10 11:29:44 -06:00
David Teigland
52c67f8aff command struct: remove command name refs
Change run-time access to the command_name struct to use
cmd->cname instead of going indirectly through
cmd->command->cname. This removes the two run-time
fields from struct command.
2017-02-10 11:29:25 -06:00
David Teigland
49f918296c command.h comment tidying 2017-02-10 11:29:20 -06:00
David Teigland
eae06d48c2 lvm shell: clear argv for each command 2017-02-10 11:29:14 -06:00
David Teigland
366bfe2f2d help: accept positional args
lvm help <commandname> ...
2017-02-10 11:29:10 -06:00
David Teigland
0abdb1eaa4 fix lvmcmdline warning
declaration of ‘usage’ shadows a global declaration
2017-02-10 11:29:05 -06:00
David Teigland
77ad558808 man lvm: remove options
all options are now included in commands
2017-02-10 11:29:01 -06:00
David Teigland
0d1fa4602d args: add man page descriptions 2017-02-10 11:28:56 -06:00
David Teigland
bd643afd91 args: use uint32 arg for maxphysicalvolumes 2017-02-10 11:28:45 -06:00
David Teigland
3ea32b195d lvconvert: remove unused code 2017-02-10 11:28:40 -06:00
David Teigland
b81f8bca77 lvconvert: use command defs for raid/mirror types
All lvconvert functionality has been moved out of the
previous monolithic lvconvert code, except conversions
related to raid/mirror/striped/linear.  This switches
that remaining code to be based on command defs, and
standard process_each_lv arg processing.  This final
switch results in quite a bit of dead code that is
also removed.
2017-02-10 11:28:37 -06:00
David Teigland
9b10b7d920 tests: use swapmetadata
and some other pool/cache/thin related changes
2017-02-10 11:28:32 -06:00
David Teigland
1c76d55bb3 lvconvert: use command defs for mergemirrors
and route the generic --merge to one of the
specific merge functions
2017-02-10 11:28:28 -06:00
David Teigland
c7e20e8af5 toollib: find VG name in option values when needed 2017-02-10 11:28:21 -06:00
David Teigland
0715f2ef7f lvconvert: use command defs for thin/cache/pool creation
Everything related to thin and cache.
2017-02-10 11:28:17 -06:00
David Teigland
d5cac8af59 lvconvert: add startpoll command using command def
This is a new explicit version of 'lvconvert LV'
which has been an obscure way of triggering polling
to be restarted on an LV that was previously converted.
2017-02-10 11:28:13 -06:00
David Teigland
37439ff671 lvconvert: snapshot: use command definitions
Lift all the snapshot utilities (merge, split, combine)
out of the monolithic lvconvert implementation, using
the command definitions.  The old code associated with
these commands is now unused and will be removed separately.
2017-02-10 11:28:08 -06:00
David Teigland
573eec59d7 lvconvert: remove unused calls for repair and replace
repair and replace are no longer called from the
monolithic lvconvert code, so remove the unused code.
2017-02-10 11:28:01 -06:00
David Teigland
c0da182126 lvconvert: repair and replace: use command definitions
This lifts the lvconvert --repair and --replace commands
out of the monolithic lvconvert implementation.  The
previous calls into repair/replace can no longer be
reached and will be removed in a separate commit.
2017-02-10 11:27:56 -06:00
David Teigland
d835f1cc2e lvchange: make use of command definitions
Reorganize the lvchange code to take advantage of
the command definition, and remove the validation
that is done by the command definition rules.
2017-02-10 11:27:51 -06:00
David Teigland
f4f06a8434 process_each_lv: add check_single_lv function
The new check_single_lv() function is called prior to the
existing process_single_lv().  If the check function returns 0,
the LV will not be processed.

The check_single_lv function is meant to be a standard method
to validate the combination of specific command + specific LV,
and decide if the combination is allowed.  The check_single
function can be used by anything that calls process_each_lv.

As commands are migrated to take advantage of command
definitions, each command definition gets its own entry
point which calls process_each for itself, passing a
pair of check_single/process_single functions which can
be specific to the narrowly defined command def.
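
A rough illustration of that pairing (the names, the lv_is_thin_pool() check and the
exact process_each_lv() signature here are assumptions, not the actual lvm2 code):

  /* Illustrative sketch only: a per-command-def entry point passing a
   * check_single_lv/process_single_lv pair to process_each_lv(). */
  struct cmd_context;
  struct logical_volume;
  struct processing_handle;

  /* Decide whether this command may act on the given LV at all;
   * returning 0 silently skips the LV. */
  static int _mycmd_check_single_lv(struct cmd_context *cmd,
                                    struct logical_volume *lv,
                                    struct processing_handle *handle)
  {
          return lv_is_thin_pool(lv) ? 1 : 0;
  }

  /* Do the actual work for one LV that passed the check. */
  static int _mycmd_process_single_lv(struct cmd_context *cmd,
                                      struct logical_volume *lv,
                                      struct processing_handle *handle)
  {
          return ECMD_PROCESSED;
  }

  int mycmd(struct cmd_context *cmd, int argc, char **argv)
  {
          return process_each_lv(cmd, argc, argv, NULL, NULL, READ_FOR_UPDATE,
                                 NULL, &_mycmd_check_single_lv,
                                 &_mycmd_process_single_lv);
  }
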
2017-02-10 11:27:46 -06:00
David Teigland
250030c989 commands: new method for defining commands
. Define a prototype for every lvm command.
. Match every user command with one definition.
. Generate help text and man pages from them.

The new file command-lines.in defines a prototype for every
unique lvm command.  A unique lvm command is a unique
combination of: command name + required option args +
required positional args.  Each of these prototypes also
includes the optional option args and optional positional
args that the command will accept, a description, and a
unique string ID for the definition.  Any valid command
will match one of the prototypes.

Here's an example of the lvresize command definitions from
command-lines.in, there are three unique lvresize commands:

lvresize --size SizeMB LV
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync, --reportformat String, --resizefs,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SizeMB
OP: PV ...
ID: lvresize_by_size
DESC: Resize an LV by a specified size.

lvresize LV PV ...
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --resizefs, --stripes Number, --stripesize SizeKB
ID: lvresize_by_pv
DESC: Resize an LV by specified PV extents.
FLAGS: SECONDARY_SYNTAX

lvresize --poolmetadatasize SizeMB LV_thinpool
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat String, --stripes Number, --stripesize SizeKB
OP: PV ...
ID: lvresize_pool_metadata_by_size
DESC: Resize a pool metadata SubLV by a specified size.

The three commands have separate definitions because they have
different required parameters.  Required parameters are specified
on the first line of the definition.  Optional options are
listed after OO, and optional positional args are listed after OP.

This data is used to generate corresponding command definition
structures for lvm in command-lines.h.  usage/help output is also
auto generated, so it is always in sync with the definitions.

Every user-entered command is compared against the set of
command structures, and matched with one.  An error is
reported if an entered command does not have the required
parameters for any definition.  The closest match is printed
as a suggestion, and running lvresize --help will display
the usage for each possible lvresize command.

The prototype syntax used for help/man output includes
required --option and positional args on the first line,
and optional --option and positional args enclosed in [ ]
on subsequent lines.

  command_name <required_opt_args> <required_pos_args>
          [ <optional_opt_args> ]
          [ <optional_pos_args> ]

Command definitions that are not to be advertised/suggested
have the flag SECONDARY_SYNTAX.  These commands will not be
printed in the normal help output.

Man page prototypes are also generated from the same original
command definitions, and are always in sync with the code
and help text.

Very early in command execution, a matching command definition
is found.  lvm then knows the operation being done, and that
the provided args conform to the definition.  This will allow
lots of ad hoc checking/validation to be removed throughout
the code.

Each command definition can also be routed to a specific
function to implement it.  The function is associated with
an enum value for the command definition (generated from
the ID string.)  These per-command-definition implementation
functions have not yet been created, so all commands
currently fall back to the existing per-command-name
implementation functions.

Using per-command-definition functions will allow lots of
code to be removed which tries to figure out what the
command is meant to do.  This is currently based on ad hoc
and complicated option analysis.  When using the new
functions, what the command is doing is already known
from the associated command definition.
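
As a sketch of that routing idea (the table layout, enum values and handler names
are illustrative only, not the generated command-lines.h structures):

  /* Illustrative sketch: map a command definition ID (e.g. "lvresize_by_size")
   * to an enum value and a per-definition implementation function. */
  struct cmd_context;

  typedef int (*cmd_def_fn)(struct cmd_context *cmd, int argc, char **argv);

  enum cmd_def_id {
          lvresize_by_size_CMD,
          lvresize_by_pv_CMD,
          lvresize_pool_metadata_by_size_CMD,
          CMD_DEF_COUNT
  };

  /* Hypothetical per-definition handlers. */
  int lvresize_by_size_cmd(struct cmd_context *cmd, int argc, char **argv);
  int lvresize_by_pv_cmd(struct cmd_context *cmd, int argc, char **argv);

  static const cmd_def_fn _cmd_def_fn[CMD_DEF_COUNT] = {
          [lvresize_by_size_CMD] = lvresize_by_size_cmd,
          [lvresize_by_pv_CMD]   = lvresize_by_pv_cmd,
  };

  /* Once the entered command is matched to a definition, dispatch directly;
   * a NULL entry falls back to the per-command-name implementation. */
  static int _run_command_def(struct cmd_context *cmd, enum cmd_def_id id,
                              int argc, char **argv)
  {
          if (id < CMD_DEF_COUNT && _cmd_def_fn[id])
                  return _cmd_def_fn[id](cmd, argc, argv);
          return 0;
  }
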
2017-02-10 11:27:40 -06:00
David Teigland
bcb9dc93a4 lvmlockd: test mode doesn't work
The --test option is not yet compatible with shared VGs
because changes are made in lvmlockd that cannot be
reversed or faked.
2017-02-10 11:27:16 -06:00
Heinz Mauelshagen
46a772fbc4 lvconvert: add support to change RAID region size (fixup)
Commit cfb6ef654d introduced
support to change RAID region size.

Add:
- missing conditions to support any types to function with
  it in lv_raid_convert();  temporary workaround used until
  cli validation patches get merged
- tests requesting "-R " to lvconvert-raid-takeover.sh
  involving a cleanup of the script

Related: rhbz1392947
2017-02-07 16:52:04 +01:00
Heinz Mauelshagen
91c4bd14d0 lvconvert: add segtype raid5_n and conversions to/from it (cleanup)
Cleanups from Jon's review:
- enhance comment about mandatory raid4 <-> raid5_n activation w/o metadata SubLVs
- remove bogus segment flag setting
- fix sync-related comments on conversions to raid0/striped and amongst raid4/5
- add missing error message for non-synced conversion to raid0/striped

Related: rhbz1366296
2017-02-07 12:25:26 +01:00
Heinz Mauelshagen
cfb6ef654d lvconvert: add support to change RAID region size
Add:
- support to change region size of existing RaidLVs
  (all RAID LV types but raid0/raid0_meta)
- lvconvert-raid-regionsize.sh with test variations
  for different RAID types and region sizes

Resolves: rhbz1392947
2017-02-07 01:01:19 +01:00
Zdenek Kabelac
51d03acb1c tests: drop zeroing
Waiting for zeroing may take long enough for the 'raid' sync to finish.
So make the test run faster without zeroing and avoid giving the race
a chance to happen (i.e. lvcreate finishing only after the array
is already in sync).
2017-02-06 13:15:39 +01:00
Zdenek Kabelac
48abbdf452 tests: drop /tmp pollution
Drop forgotten extra metadata debug.
2017-02-06 11:43:07 +01:00
Zdenek Kabelac
811d137d3f cleanup: hide gcc warning
Gcc is not clever enough to see these vars are actually initialized in
the given code path, so let's just make sure they have a value.
2017-02-06 11:43:07 +01:00
Zdenek Kabelac
9fe8c2da36 debug: add space before uuid
With commit 8853462528 we added
uuid right after device name. Add space between them.
(Also fix some indenting)
2017-02-05 17:55:37 +01:00
Zdenek Kabelac
12bbfbe89d comment: update
Fix can -> cannot.
2017-02-05 17:55:37 +01:00
Zdenek Kabelac
fb3f4ed72d cleanup: rename to use track_ prefix
Since we use 'track_' prefix for other deps tracking,
convert skip_external_lv to use same logical meaning.
(just converts  1->0  0->1)
2017-02-05 17:55:37 +01:00
Zdenek Kabelac
dae4f53acb clvmd: add mutex protection for cpg_ call
The library for corosync multicasting does not support multithreaded
usage - add a local mutex to avoid parallel calls of cpg_mcast_joined().
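
A minimal sketch of such a serializing wrapper (the wrapper name is hypothetical;
only the idea of guarding cpg_mcast_joined() with a local mutex comes from this change):

  #include <pthread.h>
  #include <sys/uio.h>
  #include <corosync/cpg.h>

  static pthread_mutex_t _cpg_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* Serialize calls into the corosync CPG library, which is not thread safe. */
  static cs_error_t _cpg_mcast_locked(cpg_handle_t handle,
                                      const struct iovec *iov, unsigned iov_len)
  {
          cs_error_t err;

          pthread_mutex_lock(&_cpg_mutex);
          err = cpg_mcast_joined(handle, CPG_TYPE_AGREED, iov, iov_len);
          pthread_mutex_unlock(&_cpg_mutex);

          return err;
  }
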
2017-02-05 17:55:37 +01:00
Heinz Mauelshagen
a4bbaa3b89 lvconvert: add segtypes raid6_{ls,rs,la,ra}_6 and conversions to/from it
Add:
- support for segment types raid6_{ls,rs,la,ra}_6
  (striped raid with dedicated last Q-Syndrome SubLVs)
- conversion support from raid5_{ls,rs,la,ra} to/from raid6_{ls,rs,la,ra}_6
- setting convenient segtypes on conversions from/to raid4/5/6
- related tests to lvconvert-raid-takeover.sh factoring
  out _lvcreate,_lvconvert functions

Related: rhbz1366296
2017-02-05 00:56:27 +01:00
Heinz Mauelshagen
d8568552e4 WHATS_NEW: New segment type raid6_n_6 2017-02-04 14:09:26 +01:00
Heinz Mauelshagen
3673ce48e0 lvconvert: add segtype raid6_n_6 and conversions to/from it
Add:
- support for segment type raid6_n_6 (striped raid with dedicated last parity/Q-Syndrome SubLVs)
- conversion support from striped/raid0/raid0_meta/raid4 to/from raid6_n_6
- related tests to lvconvert-raid-takeover.sh

Related: rhbz1366296
2017-02-04 01:42:21 +01:00
Heinz Mauelshagen
96f331fe05 WHATS_NEW: New segment type raid5_n 2017-02-03 23:41:48 +01:00
Heinz Mauelshagen
7e92535d47 lvconvert: add segtype raid5_n and conversions to/from it
Change:
- missed a return_0
- use lvseg_name() rather than my own function

Related: rhbz1366296
2017-02-03 22:16:35 +01:00
Heinz Mauelshagen
60ddd05f16 lvconvert: add segtype raid5_n and conversions to/from it
Add:
- support for segment type raid5_n (striped raid with dedicated last parity SubLVs)
- conversion support from striped/raid0/raid0_meta/raid4 to/from raid5_n
- related tests to lvconvert-raid-takeover.sh

Related: rhbz1366296
2017-02-03 20:40:26 +01:00
Tony Asleson
875ce04c61 lvmdbusd: cmdhandler.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 19:05:41 -06:00
Tony Asleson
3eccbb4b47 lvmdbusd: manager.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 18:57:01 -06:00
Tony Asleson
945842fa68 lvmdbusd: lvmdb.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 18:56:39 -06:00
Tony Asleson
681d69c70a lvmdbusd: pv.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 16:51:00 -06:00
Tony Asleson
a010cede6e lvmdbusd: vg.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 16:37:03 -06:00
Tony Asleson
83a1907586 lvmdbusd: lv.py, remove duplicate code
Move similar code to common functions, less is more!
2017-02-01 15:38:55 -06:00
Bryn M. Reeves
2350ea0060 man: add update_filemap to dmstats.8.in 2017-01-25 16:15:21 +00:00
Bryn M. Reeves
b999d5f501 dmstats: allow --filemap groups to be updated
Add a new update_filemap command to dmstats that allows a filemap
group to be updated:

  # dmstats update_filemap --groupid 0 vm.img
  /var/lib/libvirt/images/vm.img: Updated group ID 0 with 137 region(s).

This will update the set of regions mapped to the file to reflect
the current file system allocation.

Currently this needs to be run manually - a future update will add
support for monitoring file maps via a daemon, allowing them to be
automatically updated when the underlying file is modified.
2017-01-25 16:15:21 +00:00
Bryn M. Reeves
e0d19feb85 libdm: add dm_stats_update_regions_from_fd()
Add a call to update the regions corresponding to a file mapped
group of regions. The regions to be updated must be grouped, to
allow us to correctly identify extents that have been deallocated
since the map was created.

Tables are built of the file extents, and the extents currently
mapped to dmstats regions: if a region no longer has a matching
file extent, it is deleted, and new regions are created for any
file extents without a matching region.

The FIEMAP call returns extents that are currently in-memory (or
journaled) and awaiting allocation in the file system. These have
the FIEMAP_EXTENT_UNKNOWN | FIEMAP_EXTENT_DELALLOC flag bits set
in the fe_flags field - these extents are skipped until they
have a known disk location.

Since it is possible for the 0th extent of the file to have been
deallocated, this must also handle the possible deletion and
re-creation of the group leader: if no other region allocation
is taking place the group identifier will not change.
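
The update logic can be pictured with this simplified sketch (the extent table type
and the delete_region()/create_region() helpers are stand-ins for the real libdm calls):

  #include <stdint.h>

  struct extent { uint64_t start, len; };

  int delete_region(uint64_t region_id);            /* hypothetical helpers */
  int create_region(uint64_t start, uint64_t len);

  static int _extent_present(const struct extent *tab, unsigned n,
                             const struct extent *e)
  {
          unsigned i;

          for (i = 0; i < n; i++)
                  if (tab[i].start == e->start && tab[i].len == e->len)
                          return 1;
          return 0;
  }

  /* 'mapped' holds the extents currently backing dmstats regions (with their
   * region_ids in 'ids'); 'file' holds the extents FIEMAP reports now. */
  static void _sync_regions(const struct extent *mapped, const uint64_t *ids,
                            unsigned n_mapped,
                            const struct extent *file, unsigned n_file)
  {
          unsigned i;

          /* Drop regions whose file extent has been deallocated. */
          for (i = 0; i < n_mapped; i++)
                  if (!_extent_present(file, n_file, &mapped[i]))
                          delete_region(ids[i]);

          /* Add regions for file extents that have no region yet. */
          for (i = 0; i < n_file; i++)
                  if (!_extent_present(mapped, n_mapped, &file[i]))
                          create_region(file[i].start, file[i].len);
  }
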
2017-01-25 16:15:21 +00:00
Bryn M. Reeves
1c00bb5da3 libdm: test for DM_STATS_GROUP_NOT_PRESENT in _stats_group_id_present
If the group_id passed to _stats_group_id_present is equal to the
special value DM_STATS_GROUP_NOT_PRESENT there is no need to perform
any further tests: return false immediately.
2017-01-25 15:29:35 +00:00
Bryn M. Reeves
ca427a711a libdm: fix stats comment formatting in libdevmapper.h 2017-01-24 09:29:31 +00:00
Zdenek Kabelac
ec93f37b86 toolcontext: action for LVM_RUN_BY_DMEVENTD env var
When LVM_RUN_BY_DMEVENTD is set to 1, ensure there will
be no interaction with dmeventd.
2017-01-23 14:55:47 +01:00
Zdenek Kabelac
836eb122ce dmeventd_thind: set LVM_RUN_BY_DMEVENTD
Set the LVM_RUN_BY_DMEVENTD envvar to expose that the command is running
from the dmeventd environment.
2017-01-23 14:55:47 +01:00
Zdenek Kabelac
4a7f2155c1 clean: move code to lib part
Move actual processing part of the lvm2_disable_dmeventd_monitoring()
into a /lib part so we can reuse the code later for other cases.
2017-01-23 14:55:28 +01:00
Zdenek Kabelac
2d48317d3a tests: umount when above 95
Add code to check if resulting data or metadata remained over 95%
and in such case invoke umount.
2017-01-21 22:53:57 +01:00
Zdenek Kabelac
e2fa90bf38 tests: properly quote heredoc
Prepend \$ for vars which should remain in script.
Also drop --lazy umount.
Move inittest call up, so mntdir and mntusedir have proper full path.
2017-01-21 19:28:06 +01:00
Zdenek Kabelac
1a2b88516b tests: implement umount in script
Since dmeventd no longer umounts thin devices, such logic
needs to be implemented by an external script.
Add a very simple one to start with.
2017-01-21 17:42:19 +01:00
Zdenek Kabelac
47c11c7b1c tests: ensure units in TiB 2017-01-21 17:42:19 +01:00
Zdenek Kabelac
2e0605d6db dmeventd_thin: internal command without lvm prefix
Internal command processing needs to go without 'lvm ' prefix.
2017-01-21 17:42:19 +01:00
Zdenek Kabelac
85dab3963f dmeventd_thin: enable support for external command
With this commit we start to support a configurable action
from thin-pool monitoring via 'dmeventd/thin_command'.
2017-01-21 00:01:05 +01:00
Zdenek Kabelac
8c4f3633ac dmeventd_thin: new logic for calling commands
For more advanced support we need better logic for calling a much more
advanced external script for thin-pool maintenance.

So this new code ensures:

When thin-pool data or metadata usage is above 50%,
the action is called at each 5% increment.
This is independent of autoextend_threshold.
The action only happens when the thin-pool is over the threshold
(so no action when it's at exactly e.g. 60%).
The only exception is a 100% full thin-pool - which invokes the 'last'
action.

Since thin-pool occupancy may also drop, the code needs
to handle a reduction of thin-pool occupancy as well.
So when usage drops from 90% to 50%, the thin-pool will call the
action again once it passes the 55% threshold.

This gives external commands plenty of options, e.g. to call 'fstrim'
before an actual resize is needed.
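
A condensed sketch of this trigger logic (names are illustrative, data and metadata
are not tracked separately here, and _run_command() stands in for executing the
configured command):

  /* Illustrative sketch of the 50%..100% / 5%-step trigger described above. */
  static int _next_threshold = 50;        /* next 5% step that triggers */

  void _run_command(const char *reason);  /* hypothetical */

  static void _check_usage(int used_percent)
  {
          if (used_percent >= 100) {
                  _run_command("last");   /* 100% full invokes the 'last' action */
                  _next_threshold = 100;
                  return;
          }

          /* Occupancy may also drop: re-arm lower thresholds, so e.g. falling
           * from 90% to 50% means the next action fires when passing 55%. */
          while (_next_threshold > 55 && used_percent < _next_threshold - 5)
                  _next_threshold -= 5;

          /* Act only when strictly over the threshold (nothing at exactly 60%),
           * then arm the next 5% step. */
          if (used_percent > _next_threshold) {
                  _run_command("threshold");
                  _next_threshold = (used_percent / 5) * 5 + 5;
          }
  }
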
2017-01-20 23:58:56 +01:00
Zdenek Kabelac
8b95551ade dmeventd_thin: drop umounting on error path
The default internal logic will stop trying any 'rescue' action
when the executed command fails.
This is now fully in the hands of the external script, if such
behaviour is needed.
2017-01-20 23:58:56 +01:00
Zdenek Kabelac
43e3268ada dmeventd_thin: rework failure handling
Instead of stopping monitoring after a couple of failed retries,
keep monitoring forever, just with larger delays between command
retries (ATM up to ~42 minutes).

So syslog is not spammed too often, yet commands get a chance to
be retried and eventually succeed...
2017-01-20 23:56:39 +01:00
Zdenek Kabelac
46c23dfb87 dmeventd_thin: SIGCHLD handler
To improve reaction time when a child finishes,
let's handle SIGCHLD in a particular thread.
Let's hope the kernel routes SIGCHLD to the matching thread.
2017-01-20 23:55:51 +01:00
Zdenek Kabelac
bc7a1d70d4 dmeventd_thin: init command
When the dmeventd configured command does not start with the 'lvm ' prefix,
it is treated as an 'external' command.
In this case we split the command by spaces into argv strings.
2017-01-20 23:55:50 +01:00
Zdenek Kabelac
14746a6c00 dmeventd_thin: add wait_pid
Add support for handling command exit.
2017-01-20 23:55:50 +01:00
Zdenek Kabelac
2e935c0967 dmeventd_thin: add run_command
Implement forking of an executable command.
When the command is forked, dmeventd may continue to monitor the device.
2017-01-20 23:55:50 +01:00
Zdenek Kabelac
e5bef50827 dmeventd_thin: better warning logic
When fullness passes WARN_THRESHOLD, print a warning;
when it drops below and crosses the threshold again, print the
warning again, but always only once.
2017-01-20 23:55:50 +01:00
Zdenek Kabelac
0d945ddbad dmeventd_thin: switch to struct percent
Later we can use stored percent values to pass them
to executed commands.
2017-01-20 23:55:50 +01:00
Zdenek Kabelac
eca964b554 dmeventd_thin: handling of internal command 2017-01-20 23:55:50 +01:00
Zdenek Kabelac
d80f9a107f lvmcmd2lib: support new command
Internal command which reads lvm.conf settings and passes them
via an envvar to the dmeventd monitoring thread.
2017-01-20 23:55:07 +01:00
Zdenek Kabelac
04a9cad499 config: new option dmeventd/thin_command
This setting allows configuring which command gets executed
when thin-pool fullness goes from 50% to 100%.
2017-01-20 23:53:26 +01:00
Zdenek Kabelac
ee754500db cleanup: update config doc 2017-01-20 23:52:40 +01:00
Zdenek Kabelac
f8234d6e5f libdm: add human R|readable units
When showing sizes with 'H|human' units we use standard rounding.
This however confuses users from time to time,
when the printed number uses some bigger unit i.e. GiB and just a
tiny fraction of space is missing.

So here is some real-life example with new 'r' unit.

$lvs

  LV    VG Attr       LSize  Pool Origin
  lvol0 vg -wi-a-----  1.99g
  lvol1 vg -wi-a----- <2.00g
  lvol2 vg -wi-a----- <2.01g

Meaning is - lvol1 has 'slightly' less than 2.00g - from the sign '<' the user
can be aware the LV doesn't have a full 2.00GiB in size, so he
will be less surprised when allocation of a 2G volume does not succeed.

$ vgs
  VG #PV #LV #SN Attr   VSize  VFree
  vg   2   2   0 wz--n- <6,00g <2,01g

Users needing the 'old' undecorated human unit simply continue
to use 'H|h' units.

The new R|r may further change when we recognize some
other way to improve readability.
2017-01-20 23:52:17 +01:00
Alasdair G Kergon
6a20b22151 devices: Recognise Veritas Dynamic Multipathing
VxDMP doesn't interact very well with udev so always set
  devices/obtain_device_list_from_udev = 0
in lvm.conf on these systems.
2017-01-10 22:23:23 +00:00
Zdenek Kabelac
15e657f110 tests: ignore racy test failure
When the test fails here, make it just a warning instead of failing the
whole test.
2017-01-06 23:39:53 +01:00
Zdenek Kabelac
d757b2431a tests: make test more race immune
Add more delay and increase raid size.
Speedup volume during wait for sync.
Drop --yes from lvcreate.
2017-01-06 23:39:53 +01:00
Zdenek Kabelac
a4be2be5a4 raid: postpone archiving until metadata are changed
Avoid archiving of lvm2 metadata when there is call of 'lvconvert --repair'
on healthy raid LV.
2017-01-06 23:39:04 +01:00
Zdenek Kabelac
0d2a9ebec6 vgchange: also -l is uint32 2017-01-06 21:51:36 +01:00
Zdenek Kabelac
8a93cde75e mirror: relax internal error for a while
With recent commit d6a74025df using
INTERNAL_ERROR while checking a layer LV - it's been noticed the mirror
logic currently doesn't do the correct thing during upconversion and
does a full try instead of checking only allocator capabilities.
This leads to invalid usage of the layer.

To keep existing code running before providing a fix, relax the
INTERNAL_ERROR to just an error and keep the 'code' running.

Once the mirror code is fixed, all these checks should be switched
back to internal errors.
2017-01-06 12:45:07 +01:00
Peter Rajnoha
d90320f4f1 blkdeactivate: also unmount mount point on top of MD device if using blkdeactivate -u
The blkdeactivate script processes MD devices too so we should unmount
any mount point on top of an MD device if blkdeactivate -u|--umount is
called.

Diagnosed and reported by: Rick Warner <rick@microway.com>
See also https://bugzilla.redhat.com/show_bug.cgi?id=1410585.
2017-01-06 11:16:07 +01:00
Zdenek Kabelac
b92a9c3e1a tests: slow down devs for raid more
Since we still experience occasional test failures - slow
things down even more to avoid the race.

Add support for 'quick' table changes between normal & delayed tables.
2017-01-05 15:54:14 +01:00
Zdenek Kabelac
c64f4447d9 tests: drop FIXME
Since we fixed core trouble with sequence of
suspend/resume/suspend without udev wait
we can drop 'should' and expect volume is still mounted.
2017-01-05 15:54:14 +01:00
Zdenek Kabelac
74969c9a38 report: report merged state for inactive LV
This was a missing piece in 77997c7673.
When the merging origin is inactive (while the driver is loaded) we
can already report merge-in-progress values, as there is
no way to activate the 'old state' now.
2017-01-05 15:54:14 +01:00
Zdenek Kabelac
d6a74025df debug: show proper error message for layer mismatch
Show a proper internal error for a failing command when there are some
inconsistencies in the sizes of an LV and its layer, instead of the rather
meaningless error code 5.

(Could be hit e.g. if a user tried to 'resize' a cached LV and then
uncache such an LV.)
2017-01-05 15:54:14 +01:00
Zdenek Kabelac
3e9c03cbbc cache: resize is still unsupported
During the rework of the resize code this validation check
was lost (in my resize branch). Upstream
still does not support resizing of any cache type LV
so it needs to be prevented.
2017-01-05 15:34:22 +01:00
Zdenek Kabelac
1f5dde38a7 cleanup: more use of lvseg_name
Use existing function lvseg_name().
2017-01-03 14:55:16 +01:00
Zdenek Kabelac
dc5bb12956 cleanup: use macros 2017-01-03 14:55:16 +01:00
Zdenek Kabelac
ee784fd28f cleanup: defines 2017-01-03 14:55:16 +01:00
Zdenek Kabelac
377288fe03 cleanup: reuse existing code 2017-01-03 14:55:16 +01:00
Zdenek Kabelac
95d5877f7a cache: add missing udev wait
When we need to clear dirty cache content of a cached LV, there
is a table reload which usually is shortly followed by the next metadata
change.  However udev can't (as of now) process a udev event
while the device is 'suspended'.

So whenever a sequence of 'suspend/resume/suspend' is needed,
we need to wait first for the 'resume' processing to finish before
starting the next 'suspend'. Otherwise there is a 'race' danger of triggering
an unwanted umount by systemd, as such an event will trigger
a SYSTEMD_READY=0 state for a moment for such a changed device.

Such a race is pretty ugly to trace so we may need to review more
sequences for a missing 'sync'.

(The other option is to enhance 'udev' rules processing to avoid
such dramatic actions happening for suspended devices).
2017-01-03 14:55:16 +01:00
Zdenek Kabelac
4fd41cf67f vgchange: max_pv limited to uint32
Solves: https://bugzilla.redhat.com/1280496

The only reasonable behaviour here is to error on
any number out of the accepted range (i.e. no numbers
wrapping around with some hidden logic).

As this is a plain bug there is no support for
backward compatibility, since no one should
set numbers >UINT32_MAX and expect 0 or an error
depending on how big the number was....

TODO: more fields might need to be converted.
2017-01-03 14:55:16 +01:00
Zdenek Kabelac
9f65a3f0c5 lvmcmdline: support uint32
Add a simple function to wrap usage for uint32 numbers only.
Unlike 'int_arg', which accepts the full range of a 64bit number,
this function will error on numbers out of this range:

   <0, UINT32_MAX>
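
A minimal sketch of such a wrapper (illustrative only; the real lvm2 helper is
wired into its arg-parsing machinery):

  #include <errno.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Parse a decimal argument, accepting only values in <0, UINT32_MAX>. */
  static int _uint32_arg(const char *str, uint32_t *val)
  {
          char *end = NULL;
          unsigned long long v;

          errno = 0;
          v = strtoull(str, &end, 10);
          if (errno || !end || end == str || *end || v > UINT32_MAX)
                  return 0;       /* reject non-numeric or out-of-range input */

          *val = (uint32_t)v;
          return 1;
  }
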
2017-01-03 14:55:16 +01:00
Bryn M. Reeves
e75f0b7c77 man: fix name of 'write_time' field in dmstats.8.in 2016-12-25 17:36:35 +00:00
Zdenek Kabelac
96a1943fb8 tests: update test
lvm2 now correctly reports thin_id after merging a thin LV,
but before the physical metadata update, as we know the merge has happened.
2016-12-23 13:16:35 +01:00
Zdenek Kabelac
14902d1739 validation: temporarily let pass linear with chunk_size
Old pool format seems to be setting chunk_size.
For now let validation pass with this.
2016-12-23 13:16:06 +01:00
Heinz Mauelshagen
95d68f1d0e lvchange: allow a transiently failed RaidLV to be refreshed
Add to commits 87117c2b25 and 0b8bf73a63 to avoid refreshing two
times altogether, thus avoiding issues related to clustered, remotely
activated RaidLV.  Avoid need to repeat "lvchange --refresh RaidLV"
two times as a workaround to refresh a RaidLV.  Fix handles removal
of temporary *-missing-* devices created for any missing segments
in RAID SubLVs during activation.

Because the kernel dm-raid target isn't able to handle transiently
failing devices properly we need
"[dm-devel][PATCH] dm raid: fix transient device failure processing"
as well.

test: add lvchange-raid-transient-failures.sh
      and enhance lvconvert-raid.sh

Resolves: rhbz1025322
Related:  rhbz1265191
Related:  rhbz1399844
Related:  rhbz1404425
2016-12-23 03:41:32 +01:00
Zdenek Kabelac
62be9c8de4 tests: use hold_device_open 2016-12-22 23:37:07 +01:00
Zdenek Kabelac
e1943fc07f tests: add device holding function
Hold device open with sleep and wait till sleep really opens
given devices.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
1053d46aff tests: workaround failure on fc23 2016-12-22 23:37:07 +01:00
Zdenek Kabelac
dd19b56985 thin: refresh status when error processing fails
When the thin-pool processes an event and 'lvextend --use-policies' fails,
rather capture up-to-date new info, as the fullness percentage may
have jumped noticeably. This way we can use 'more' correct numbers
when checking for thresholds.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
77997c7673 report: show proper info for merging origin
When there is 'merging' of an origin in progress, but the metadata still
provides both origin and snapshot, we should show data from the merged
snapshot.  This is important mainly for the thin case, where there was
a window where i.e. 'lvs -o+device_id' would report information
about the 'already gone' origin thin LV.

This race window is usually hard to trigger but can be occasionally hit.
Usually shortly after activation, but before the polling process manages
to update the metadata after the merge.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
2aee4769b4 snapshot: validate merge has started
Before starting the polling process, validate the merge has actually started
so there is no pointless invocation of lvmpolld.

This also fixes the message reported by the command, so the user has
correct info whether merging has already started or
if it's delayed until the next activation.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
95e3dd5fb1 lv: more exact check for merging origin
Merging origin has 'MERGE_LV' and should also have its merging snapshot.
2016-12-22 23:37:07 +01:00
Zdenek Kabelac
9491ab41cd validation: rework segment validation
Move individual segment validation to a separate function
executed for 'complete_vg'.

Move some 'extra' validation bits from 'raid' validation to global
segtype validation (so extending the existing normal validation).

TODO: still some tests are left to be moved.
Reduce some duplication in the validation process - there is still
some left though, so still room for improving overall speed.
2016-12-22 23:37:07 +01:00
Tony Asleson
eacff5c189 lvmdbustest: Print messages if timeout value > 10%
We will dump some informational messages if the time to return, when we
specify a timeout, exceeds the requested value by more than 10%.
2016-12-20 11:06:57 -06:00
Tony Asleson
a7e1f973cc lvmdbusd: Use timeout_add instead
The function timeout_add_seconds has quite a bit of variability.  Use
timeout_add, which specifies the timeout in ms instead of seconds.  Testing
shows that this is much more consistent, which should help clients that
are using shorter timeouts for the API and the connection.
2016-12-20 11:06:57 -06:00
Tony Asleson
75568294be lvmdbusd: Use cfg.reload() instead of dbo.refresh
We want to update the data and send out any signals as needed, not just
update the in memory database.
2016-12-20 11:06:57 -06:00
Tony Asleson
6fe6e8053a lvmdbusd: Remove un-needed main thread execution 2016-12-20 11:06:57 -06:00
Zdenek Kabelac
f47da0ad23 tests: usage of cached volume for snapshot 2016-12-19 14:41:43 +01:00
Zdenek Kabelac
0c56eb8f43 cache: support cached origin for snapshot
Enable 'lvcreate/lvconvert -s' for a cached LV
and the supported operations:

Create a snapshot of a cached LV.

Split/join a snapshot LV to a cached origin LV.
2016-12-19 14:41:42 +01:00
Zdenek Kabelac
eb3f83357a lvconvert: fix shown lv name for snapshot split
We can't keep 'display_lvname' for too long - it's using a
ring buffer and keeps a limited number of names. So it's
safe only across a few simple tests, but can't be used anymore
after some function calls.
(Fixes 00e641ef37)
2016-12-19 14:41:16 +01:00
Bryn M. Reeves
c90e9392e4 libdm: add dm_stats_bind_from_fd()
dmsetup already has a version of this function, and dmfilemapd will
need it too: move it to libdevmapper to avoid copying it around.
2016-12-18 20:47:17 +00:00
Bryn M. Reeves
009b711834 libdm: clear region table in dm_stats_list()
Call _stats_regions_destroy() from dm_stats_list() if dms->regions
is non-NULL. This avoids leaking any pool allocations and ensures
the handle is in a known state: if an error occurs during the list,
dms->regions will be NULL and the handle will appear empty.
2016-12-18 20:44:31 +00:00
Zdenek Kabelac
8d6ac1c3ba tests: using cached LV for external origin 2016-12-18 19:38:51 +01:00
Zdenek Kabelac
8c17233af5 debug: add debug message showing new lv
Make trace easier to follow knowing which LV was added to dtree.
2016-12-18 19:38:51 +01:00
Zdenek Kabelac
034931f68d activate: further _info API refinement
Another cleanup of internal _info() API simplifying code.
Also make sure 'error' on _info() call is properly passed upward
(return 0 is error path).
2016-12-18 19:38:51 +01:00
Zdenek Kabelac
79121416df thin: add comment with future extension
It could actually be better to use even the cache origin in
read-only mode, so there could not be some 'accidental'
change being done on this volume.

This however needs further tools enhancement - where we would need
to handle the whole subtree on 'lvchange -pr/-prw'.
2016-12-18 19:38:51 +01:00
Zdenek Kabelac
75f2388093 backup: show warning once per command
When a command calls backup() more than once (which is actually not
wanted) this warning message is shown repeatedly:

"WARNING: This metadata update is NOT backed up."

Instead now print the message just once and confuse the user less.
2016-12-18 19:38:30 +01:00
Zdenek Kabelac
5bb6266046 lvconvert: support cache to external origin conversion
Add this functionality to lvconvert:

'lvconvert --thin cachedLV --thinpool vg/poll'

Converts cachedLV to an external origin (which will be read-only).
A new thin volume is created in the thinpool LV and it uses the external
origin as the source for unprovisioned chunks.
This conversion happens online (while the volume is in use).
The thin LV remains fully writable.
The cached external origin can no longer be written, so the cache will be used
ONLY for read operations. For this limitation we require the cache mode
to be writethrough (as writeback cannot write to read-only volumes).

When the thinLV is later removed, the cached external origin is again
fully usable, just note the LV remains in 'read-only' mode.
When read-write is needed, 'lvchange -prw' has to be used.

A single external origin can be used by multiple thinLVs in
multiple different thin pools.
2016-12-18 19:35:27 +01:00
Zdenek Kabelac
69434c2eca cache: improve activation with -real
Since a cache volume may be converted from a normal LV to a -real layer LV,
we need to improve the logic for calling cache_check.

With this patch, we register the call for cache_check only when the metadata LV
is not yet present in the active table slot (which should match the initial
table load).
This avoids unwanted checking when the cache becomes a layer device
online.
2016-12-18 19:30:50 +01:00
Zdenek Kabelac
954c59779d libdm: drop callback on revert path
The system is likely in some very inconsistent state.
Do not try to make it even more problematic by trying
to invoke tools like thin_check via the callback.
2016-12-18 19:29:08 +01:00
Zdenek Kabelac
29b0e42be3 lv: fix lock holder for external origin
External origin could be reloaded via more locks.
It's actually even more complex than a thin-pool,
as it may be active on more nodes for linear LVs
(and maybe even more types).

External origin is always a read-only, thus unmodifiable,
device so there should not be a problem accessing it
through multiple nodes.

Also for a thin-pool, first check the presence of an active thin-pool.

FIXME:
It's not easy to detect on which nodes this device is active.
Thus manipulation with such a device may require checking every
node and its active state and refresh.

But since such a setup is quite complex to prepare and use,
hopefully there are no users trying to 'explore' this usage yet.
2016-12-18 19:25:25 +01:00
Zdenek Kabelac
a24eae6e82 cache: prepare status checking for layer
To be ready to show status of cache volume, call the status
with layer.  Layer is automatically detected in this case when
cache volume is used in 'layered' form (needs -real suffix).
2016-12-18 19:23:13 +01:00
Zdenek Kabelac
bf157ed833 cache: improve wait for cache clear
Avoid printing misleading message about single dirty block.
Instead properly detect condition where the 'cleaner' policy
needs to be installed without 'overloading' dirty variable.

Also print warning if we would be clearing read-only volume.
(it really shouldn't happen).
2016-12-18 19:22:11 +01:00
Zdenek Kabelac
36f609e513 validation: check external property is matching
Detect if number of external_count is matching
referencing devices for  external_origin LV.
2016-12-18 19:17:59 +01:00
Zdenek Kabelac
7db46c4a39 thin: reload external origin with last thin
External origin could be activated as a stand-alone device.
When the last thin LV is removed, the external origin is no longer
an external origin and its layer property is dropped.

Ensure dm table is correct by reloading external origin
(when it's active).
2016-12-18 19:13:34 +01:00
Zdenek Kabelac
c71fefad8d lvs: show status for layer
When an LV is an external origin, show info for the LV but
status for the -layer.  So we expose more info to a user,
as otherwise an active external origin is only a linear
mapping of the -real layer.

We do the same for i.e. an old snapshot origin.
2016-12-18 19:12:12 +01:00
Zdenek Kabelac
bdfc96cb08 raid: fix activation of tracked image
Activation of a raid LV has also brought up the split image with tracking
(without taking a lock for this).

So when the raid is now activated - such an image is not put into the
table (with _rmeta).  When the user needs such a device, just activate it.
2016-12-18 19:10:38 +01:00
Bryn M. Reeves
a15f0d181c dmstats: don't declare _start_timestamp if HAVE_SYS_TIMERFD_H
The _start_timestamp is not used by the TIMERFD clock.
2016-12-18 14:08:11 +00:00
Bryn M. Reeves
3e53adf7c0 dmstats: fix TIMERFD _timer_running() test 2016-12-18 14:07:25 +00:00
Bryn M. Reeves
5a4750d76c dmstats: fix interval number reporting with --count=0
When --count=0 interval numbers are miscalculated:

Interval     #18446744069414584325     time delta:    999920887ns
Interval     #18446744069414584325   current err:       -79113ns
End interval #18446744069414584325  duration:    999920887ns

This is because the command line argument is cast through the
uint32_t type, and fixed to UINT32_MAX:

  _count = ((uint32_t)_int_args[COUNT_ARG]) ? : UINT32_MAX;

We also need to handle --count=0 specially when calculating the
interval number: since intervals count from #1, this must account
for the implicit "minus one" when converting from zero to the
UINT64_MAX value used (which is too large to store in _int_args).
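
One way to express that special case, as a sketch (names are illustrative; the
real dmstats code stores the value differently):

  #include <stdint.h>

  static uint64_t _count;

  /* Keep the count in a 64-bit variable and only then map 0 to "run forever",
   * so interval arithmetic cannot wrap through uint32_t. */
  static void _set_count(uint64_t count_arg)
  {
          _count = count_arg ? : UINT64_MAX;
  }

  /* Intervals are numbered from #1, so account for the implicit minus one
   * when counting down from the "infinite" value. */
  static uint64_t _interval_num(uint64_t intervals_remaining)
  {
          return _count - intervals_remaining + 1;
  }
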
2016-12-18 13:03:45 +00:00
Bryn M. Reeves
5635cd3b03 dmstats: separate TIMERFD and usleep() exit conditions
The time management code mixes tests of the _timer_fd value with
code that should be timer agnostic: this causes problems for users
of the usleep() timer, since it cannot properly detect the start
of a new interval:

Beginning first interval
Interval     #18446744069414584348     time delta:   1000000000ns
Interval     #18446744069414584348   current err:            0ns
End interval #18446744069414584348  duration:   1000000000ns
Adjusted sample interval duration:   1000000000ns
[...]
Beginning first interval
Interval     #18446744069414584349     time delta:   1000000000ns
Interval     #18446744069414584349   current err:            0ns
End interval #18446744069414584349  duration:   1000000000ns
Adjusted sample interval duration:   1000000000ns

Separate these out, by defining a _timer_running() call that each
timer implements, and only define _timer_fd if we are compiling
with TIMERFD enabled.
2016-12-18 13:03:44 +00:00
Bryn M. Reeves
886b4f755d dmstats: use better interval estimate for usleep() timer
Although the usleep() interval timer is not used if the Linux
TIMERFD interface is available it should still provide reasonably
good timing.

Instead of trying to estimate the error from the duration of the
last sleep, peg it to the start time of the program, and use the
value of  ((start_time - now) % interval) to correct the current
interval duration.

This always pulls us back into sync at the end of each interval,
rather than relying on trying to incrementally adjust the time
duration at each interval start.

This greatly reduces drift when the usleep() clock is used.
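
Roughly, the correction amounts to this sketch (timestamps in nanoseconds; names
are illustrative):

  #include <stdint.h>

  /* Peg the usleep() timer to the program start time: the residue of
   * (now - start) modulo the interval says how far into the current interval
   * we already are, so the next sleep is shortened to end on the boundary. */
  static uint64_t _next_sleep_ns(uint64_t start_ns, uint64_t now_ns,
                                 uint64_t interval_ns)
  {
          uint64_t err_ns = (now_ns - start_ns) % interval_ns;

          return interval_ns - err_ns;
  }
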
2016-12-18 13:03:44 +00:00
Bryn M. Reeves
68ec42ebaf dmstats: improve tool help output and option coverage 2016-12-18 11:51:13 +00:00
Bryn M. Reeves
4f9d901c71 man: fix 'dmstats create' formatting in dmstats.8.in 2016-12-18 10:23:12 +00:00
Bryn M. Reeves
14be8c4fad man: fix 'dmstats list' option formatting in dmstats.8.in 2016-12-18 10:12:56 +00:00
Bryn M. Reeves
25dd3988c3 man: fix 'dmstats <command>' formatting in dmstats.8.in 2016-12-18 10:12:45 +00:00
Bryn M. Reeves
35791689ba libdm: use destination size as limit in dm_bit_copy()
The dm_bit_copy() macro uses the source (bs1) bitset size as the
limit for memcpy:

    memcpy((bs1) + 1, (bs2) + 1, ((*(bs1) / DM_BITS_PER_INT) + 1)..)

This is safe if the destination bitset is smaller than the source,
or if the two bitsets are of the same size.

With a destination that is larger (e.g. when resizing a bitmap to
add more capacity), the memcpy will overrun the source bitset and
set garbage bits in the destination.

There are nine uses of the macro currently (8 in libdm/regex, and
1 in daemons/cmirrord): in each case the two bitsets are always of
equal size so the behaviour is unchanged.

Fix the macro to use bs2's size to simplify resizing bitsets and
avoid the need for another copy macro.
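
The gist of the fix, sketched with generic dest/src naming rather than the macro's
bs1/bs2 arguments (not the verbatim libdm macro):

  #include <string.h>

  /* dm bitsets store their size in bits at element [0]; bit data follows.
   * Bounding the copy by the size stored in the bitset being copied from
   * means copying into a larger destination no longer reads past the end
   * of the source allocation. */
  #define BITS_PER_INT ((unsigned) (sizeof(int) * 8))

  #define bit_copy_sketch(dest, src) \
          memcpy((dest) + 1, (src) + 1, \
                 ((*(src) / BITS_PER_INT) + 1) * sizeof(int))
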
2016-12-14 11:28:11 +00:00
Zdenek Kabelac
0f98d5c2e6 cleanup: use existing function
Reuse existing code and some indent change.
2016-12-14 11:41:42 +01:00
Zdenek Kabelac
fecd043cca raid: split preserves local exclusive activation 2016-12-14 11:40:01 +01:00
Zdenek Kabelac
77e09c3fb4 raid: activation with list
Commit 0690392040 revealed a problem
in raid metadata manipulation.

We do two operations in one table reload:
- raid leg/image extraction
- rename remaining raid legs

This should be made in separate steps. Otherwise we do an
uncorrectable table change on error path (leaving tables
for admin and dmsetup).

As a hotfix - restore the previous logic and use a single
new function _lv_update_and_reload_list which activates exclusively
extracted LVs on the list before resuming suspended raid LV.
This restores the 'rename' functionality upon resume.

Also still preserve the 'origin_only' logic - although we know
it's not working properly for cluster and LV stacking.

Further fixes are needed.
2016-12-14 11:37:02 +01:00
Zdenek Kabelac
4a05f83278 configure: just move new macro to right file
aclocal is regenerated while acinclude is permanent.
Move new macro to permanent file.
2016-12-13 22:49:59 +01:00
Bryn M. Reeves
f4401fe351 libdm: ensure first extent is always counted
If FIEMAP returns a single extent after the first call, no extent
boundary is detected and the first extent is not counted by the
normal mechanism.

In this case, increment nr_extents at the same time the extent is
added to the region table, before returning.
2016-12-13 21:41:31 +00:00
Zdenek Kabelac
fce7449d73 cleanup: remove wrapping function
backup is not 'tested' for success and it should
actually happen just when the command is finished.
We do not aim to make backups with each inter-step
metadata change.
2016-12-13 22:07:52 +01:00
Zdenek Kabelac
c7da16e5f1 cleanup: log message updates 2016-12-13 22:07:52 +01:00
Zdenek Kabelac
a8f5e1f274 cleanup: more lv_is_ usage 2016-12-13 22:07:52 +01:00
Zdenek Kabelac
47b96c3537 cleanup: allocate NAME_LEN size for lv name 2016-12-13 22:07:52 +01:00
Zdenek Kabelac
d0fe3ec0c5 raid: avoid manipulation of segment status
RAID is an LV property.

TODO: only 2 flags are in seg->status: PVMOVE & MERGING.
At least the second one should soon be eliminated, as again
we merge an LV, not a segment.
2016-12-13 22:07:52 +01:00
Zdenek Kabelac
d1e398c474 segtype: check for seg type instead of status
RAID is an LV property - the LV has a single segment of raid type.
2016-12-13 22:07:52 +01:00
Zdenek Kabelac
0690392040 raid: improve table reload sequence
This is another place for the 'common' use pattern of
reload and activation of deleted devices.
(Moving the exclusive activation to _deactivate_and_remove_lvs()).

TODO: looks like half of the raid functions reload
just the 'origin' - and the other half the full LV.
2016-12-13 22:07:52 +01:00
Bryn M. Reeves
7dff632c11 libdm: add min_num_bits to dm_bitset_parse_list()
It's useful to be able to specify a minimum number of bits for a
new bitmap parsed from a list, e.g. to allow for expanding a
group without needing to copy/reallocate the bitmap.

Add a backwards compatible symbol for programs linked against old
versions of the library.
2016-12-13 21:02:18 +00:00
Bryn M. Reeves
e8d966bc31 libdm: use dm_bit_get_last() in _stats_group_tag_fill()
Instead of iterating over all bits, use dm_bit_get_last() to find
the last set bit in the group bitmap.
2016-12-13 21:02:18 +00:00
Bryn M. Reeves
5d1d65e735 libdm: add dm_bit_get_last()/dm_bit_get_prev()
It is sometimes convenient to iterate over the set bits in a dm
bitset in reverse order (from the highest set bit toward zero), or
to quickly find the last set bit.

Add dm_bit_get_last() and dm_bit_get_prev(), mirroring the existing
dm_bit_get_first() and dm_bit_get_next().

dm_bit_get_prev() uses __builtin_clz when available to efficiently
test the bitset in reverse.
2016-12-13 21:01:58 +00:00
Bryn M. Reeves
107bc13db3 util: add clz() and use __builtin_clz() if available
Add a macro for the clz (count leading zeros) operation.

Use the GCC __builtin_clz() for clz() if it is available and fall
back to a shift based implementation on systems that do not set
HAVE___BUILTIN_CLZ.
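
A sketch of what such an arrangement can look like (illustrative; the real lvm2
header may differ in detail):

  /* Count leading zeros of a non-zero 32-bit value: use the GCC builtin when
   * configure defined HAVE___BUILTIN_CLZ, else fall back to a shift loop. */
  #ifdef HAVE___BUILTIN_CLZ
  # define clz(x) __builtin_clz(x)
  #else
  static unsigned _clz_fallback(unsigned x)
  {
          unsigned n = 0;

          while (!(x & 0x80000000U)) {    /* undefined for x == 0, as is the builtin */
                  x <<= 1;
                  n++;
          }
          return n;
  }
  # define clz(x) _clz_fallback(x)
  #endif
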
2016-12-13 20:41:29 +00:00
Bryn M. Reeves
83a9cd5258 configure: check for __builtin_clz() 2016-12-13 20:38:40 +00:00
Bryn M. Reeves
930b0b4c9e libdm: fix start of file detection in _stats_map_extents() 2016-12-13 20:25:47 +00:00
Bryn M. Reeves
eb65572217 libdm: break up _stats_get_extents_for_file()
Split out the loop that iterates over each batch of FIEMAP
extent data from the function that sets up and calls the ioctl
to reduce nesting and simplify local variable use:

  _stats_get_extents_for_file()
  ->  _stats_map_extents()

The _stats_map_extents() function is responsible for detecting
eof and extent boundaries and adding whole, allocated extents
to the file extent table for region creation.
2016-12-13 20:25:45 +00:00
Bryn M. Reeves
ea9af3e290 libdm: fix dm_stats_foreach_group() macro 2016-12-13 20:01:00 +00:00
Bryn M. Reeves
8e33972828 libdm: check for non-existent region_id values in groups
Check that all region_id values specified in a group bitmap are
actually present: although this should not normally happen when
using the dmstats tool, it is possible as a result of manual
changes (or bugs) for a group descriptor to contain one or more
group_id values that do not exist.

Check for this situation when reading group descriptors, warn
the user, and clear these bits in the bitmap when
formatting it for output.
2016-12-13 15:37:48 +00:00
Bryn M. Reeves
99b6d82e2d libdm: fix segfault with invalid group descriptor
If a region has a DMS_GROUP tag in aux_data where the first
region_id in the bitmap is not the same as the containing region,
dmstats will segfault:

  # '2' is never a valid group bitset list for region_id == 0
  # dmsetup message vg_hex/root 0 "@stats_set_aux 0 DMS_GROUP=img:2#"

  # dmsetup message vg_hex/root 0 "@stats_list"
  0: 45383680+16384 16384 dmstats DMS_GROUP=img:2#
  1: 46071808+32768 32768 dmstats -
  2: 47382528+16384 16384 dmstats -

  # dmstats list
  Segmentation fault (core dumped)

The crash will occur in some arbitrary dm_stats_get_* property
method - this happens while processing the 1st region_id in the
bitset, because the region is marked as grouped, but there is
no group bitmap present at dms->groups[2]->regions.

Fix this by detecting a mismatch between the expected region_id
and dm_bit_get_first() for the parsed bitset during
_parse_aux_data_group().
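
The added check amounts to something like this sketch (illustrative; dm_bit_get_first()
is the real libdm call, the wrapper around it is hypothetical):

  #include <stdint.h>
  #include <libdevmapper.h>

  /* While parsing a DMS_GROUP descriptor for 'region_id', reject a bitset
   * whose first member is not the containing region itself. */
  static int _group_leader_ok(uint64_t region_id, dm_bitset_t members)
  {
          if ((uint64_t) dm_bit_get_first(members) != region_id)
                  return 0;       /* warn and ignore the malformed descriptor */

          return 1;
  }
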
2016-12-13 14:37:41 +00:00
225 changed files with 10362 additions and 13576 deletions


@@ -1,5 +1,37 @@
Version 2.02.169 -
=====================================
Support region size changes on existing RaidLVs
Avoid parallel usage of cpg_mcast_joined() in clvmd with corosync.
Support raid6_{ls,rs,la,ra}_6 segment types and conversions from/to it.
Support raid6_n_6 segment type and conversions from/to it.
Support raid5_n segment type and conversions from/to it.
Support new internal command _dmeventd_thin_command.
Introduce new dmeventd/thin_command configurable setting.
Use new default units 'r' for displaying sizes.
Also unmount mount point on top of MD device if using blkdeactivate -u.
Restore check preventing resize of cache type volumes (2.02.158).
Add missing udev sync when flushing dirty cache content.
vgchange -p accepts only uint32 numbers.
Report thin LV data for merged LV when the merge is in progress.
Detect if snapshot merge really started before polling for progress.
Checking LV for merging origin requires also it has merged snapshot.
Extend validation of metadata processing.
Enable usage of cached volumes as snapshot origin LV.
Fix displayed lv name when splitting snapshot (2.02.146).
Warn about command not making metadata backup just once per command.
Enable usage of cached volume as thin volume's external origin.
Support cache volume activation with -real layer.
Improve search of lock-holder for external origin and thin-pool.
Support status checking of cache volume used in layer.
Avoid shifting by one number of blocks when clearing dirty cache volume.
Extend metadata validation of external origin LV use count.
Fix dm table when the last user of active external origin is removed.
Improve reported lvs status for active external origin volume.
Fix table load for split RAID LV and require explicit activation.
Always activate split RAID LV exclusively locally.
Do not use LV RAID status bit for segment status.
Check segtype directly instead of checking RAID in segment status.
Reusing existing code for raid image removal.
Fix pvmove leaving -pvmove0 error device in clustered VG.
Avoid adding extra '_' at end of raid extracted images or metadata.
Optimize another _rmeta clearing code.


@@ -1,5 +1,10 @@
Version 1.02.138 -
=====================================
Support configurable command executed from dmeventd thin plugin.
Support new R|r human readable units output format.
Thin dmeventd plugin reacts faster on lvextend failure path with umount.
Add dm_stats_bind_from_fd() to bind a stats handle from a file descriptor.
Do not try call callback when reverting activation on error path.
Fix file mapping for extents with physically adjacent extents.
Validation vsnprintf result in runtime translate of dm_log (1.02.136).
Separate filemap extent allocation from region table.


@@ -61,3 +61,174 @@ AC_DEFUN([AC_TRY_LDFLAGS],
ifelse([$4], [], [:], [$4])
fi
])
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_gcc_builtin.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_GCC_BUILTIN(BUILTIN)
#
# DESCRIPTION
#
# This macro checks if the compiler supports one of GCC's built-in
# functions; many other compilers also provide those same built-ins.
#
# The BUILTIN parameter is the name of the built-in function.
#
# If BUILTIN is supported define HAVE_<BUILTIN>. Keep in mind that since
# builtins usually start with two underscores they will be copied over
# into the HAVE_<BUILTIN> definition (e.g. HAVE___BUILTIN_EXPECT for
# __builtin_expect()).
#
# The macro caches its result in the ax_cv_have_<BUILTIN> variable (e.g.
# ax_cv_have___builtin_expect).
#
# The macro currently supports the following built-in functions:
#
# __builtin_assume_aligned
# __builtin_bswap16
# __builtin_bswap32
# __builtin_bswap64
# __builtin_choose_expr
# __builtin___clear_cache
# __builtin_clrsb
# __builtin_clrsbl
# __builtin_clrsbll
# __builtin_clz
# __builtin_clzl
# __builtin_clzll
# __builtin_complex
# __builtin_constant_p
# __builtin_ctz
# __builtin_ctzl
# __builtin_ctzll
# __builtin_expect
# __builtin_ffs
# __builtin_ffsl
# __builtin_ffsll
# __builtin_fpclassify
# __builtin_huge_val
# __builtin_huge_valf
# __builtin_huge_vall
# __builtin_inf
# __builtin_infd128
# __builtin_infd32
# __builtin_infd64
# __builtin_inff
# __builtin_infl
# __builtin_isinf_sign
# __builtin_nan
# __builtin_nand128
# __builtin_nand32
# __builtin_nand64
# __builtin_nanf
# __builtin_nanl
# __builtin_nans
# __builtin_nansf
# __builtin_nansl
# __builtin_object_size
# __builtin_parity
# __builtin_parityl
# __builtin_parityll
# __builtin_popcount
# __builtin_popcountl
# __builtin_popcountll
# __builtin_powi
# __builtin_powif
# __builtin_powil
# __builtin_prefetch
# __builtin_trap
# __builtin_types_compatible_p
# __builtin_unreachable
#
# Unsuppored built-ins will be tested with an empty parameter set and the
# result of the check might be wrong or meaningless so use with care.
#
# LICENSE
#
# Copyright (c) 2013 Gabriele Svelto <gabriele.svelto@gmail.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 3
AC_DEFUN([AX_GCC_BUILTIN], [
AS_VAR_PUSHDEF([ac_var], [ax_cv_have_$1])
AC_CACHE_CHECK([for $1], [ac_var], [
AC_LINK_IFELSE([AC_LANG_PROGRAM([], [
m4_case([$1],
[__builtin_assume_aligned], [$1("", 0)],
[__builtin_bswap16], [$1(0)],
[__builtin_bswap32], [$1(0)],
[__builtin_bswap64], [$1(0)],
[__builtin_choose_expr], [$1(0, 0, 0)],
[__builtin___clear_cache], [$1("", "")],
[__builtin_clrsb], [$1(0)],
[__builtin_clrsbl], [$1(0)],
[__builtin_clrsbll], [$1(0)],
[__builtin_clz], [$1(0)],
[__builtin_clzl], [$1(0)],
[__builtin_clzll], [$1(0)],
[__builtin_complex], [$1(0.0, 0.0)],
[__builtin_constant_p], [$1(0)],
[__builtin_ctz], [$1(0)],
[__builtin_ctzl], [$1(0)],
[__builtin_ctzll], [$1(0)],
[__builtin_expect], [$1(0, 0)],
[__builtin_ffs], [$1(0)],
[__builtin_ffsl], [$1(0)],
[__builtin_ffsll], [$1(0)],
[__builtin_fpclassify], [$1(0, 1, 2, 3, 4, 0.0)],
[__builtin_huge_val], [$1()],
[__builtin_huge_valf], [$1()],
[__builtin_huge_vall], [$1()],
[__builtin_inf], [$1()],
[__builtin_infd128], [$1()],
[__builtin_infd32], [$1()],
[__builtin_infd64], [$1()],
[__builtin_inff], [$1()],
[__builtin_infl], [$1()],
[__builtin_isinf_sign], [$1(0.0)],
[__builtin_nan], [$1("")],
[__builtin_nand128], [$1("")],
[__builtin_nand32], [$1("")],
[__builtin_nand64], [$1("")],
[__builtin_nanf], [$1("")],
[__builtin_nanl], [$1("")],
[__builtin_nans], [$1("")],
[__builtin_nansf], [$1("")],
[__builtin_nansl], [$1("")],
[__builtin_object_size], [$1("", 0)],
[__builtin_parity], [$1(0)],
[__builtin_parityl], [$1(0)],
[__builtin_parityll], [$1(0)],
[__builtin_popcount], [$1(0)],
[__builtin_popcountl], [$1(0)],
[__builtin_popcountll], [$1(0)],
[__builtin_powi], [$1(0, 0)],
[__builtin_powif], [$1(0, 0)],
[__builtin_powil], [$1(0, 0)],
[__builtin_prefetch], [$1("")],
[__builtin_trap], [$1()],
[__builtin_types_compatible_p], [$1(int, int)],
[__builtin_unreachable], [$1()],
[m4_warn([syntax], [Unsupported built-in $1, the test may fail])
$1()]
)
])],
[AS_VAR_SET([ac_var], [yes])],
[AS_VAR_SET([ac_var], [no])])
])
AS_IF([test yes = AS_VAR_GET([ac_var])],
[AC_DEFINE_UNQUOTED(AS_TR_CPP(HAVE_$1), 1,
[Define to 1 if the system has the `$1' built-in function])], [])
AS_VAR_POPDEF([ac_var])
])
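For context, the snippet below is an illustrative consumer of this macro and is not part of the comparison. The configure.ac hunk further down adds AX_GCC_BUILTIN([__builtin_clz]), which makes configure define HAVE___BUILTIN_CLZ in the generated header; C code can then guard the builtin and fall back to a portable loop when it is missing. The helper name and the fallback loop are hypothetical.

/* Illustrative only: guard the builtin behind the macro produced by
 * AX_GCC_BUILTIN; the fallback loop is a plain portable substitute. */
#include <stdint.h>

static inline int clz32(uint32_t v)
{
	if (!v)
		return 32;              /* __builtin_clz(0) is undefined */
#ifdef HAVE___BUILTIN_CLZ
	return __builtin_clz(v);
#else
	{
		int n = 0;
		while (!(v & 0x80000000u)) {    /* shift until the top bit is set */
			v <<= 1;
			n++;
		}
		return n;
	}
#endif
}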

aclocal.m4

@@ -536,4 +536,5 @@ AC_DEFUN([AM_RUN_LOG],
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
m4_include([acinclude.m4])


@@ -665,7 +665,7 @@ global {
# Configuration option global/units.
# Default value for --units argument.
units = "h"
units = "r"
# Configuration option global/si_unit_consistency.
# Distinguish between powers of 1024 and 1000 bytes.
@@ -1156,7 +1156,8 @@ activation {
# Configuration option activation/missing_stripe_filler.
# Method to fill missing stripes when activating an incomplete LV.
# Using 'error' will make inaccessible parts of the device return I/O
# errors on access. You can instead use a device path, in which case,
# errors on access. Using 'zero' will return success (and zero) on I/O
# You can instead use a device path, in which case,
# that device will be used in place of missing stripes. Using anything
# other than 'error' with mirrored or snapshotted volumes is likely to
# result in data corruption.
@@ -2048,6 +2049,14 @@ dmeventd {
# warning is repeated when 85%, 90% and 95% of the pool is filled.
thin_library = "libdevmapper-event-lvm2thin.so"
# Configuration option dmeventd/thin_command.
# The plugin runs the command with each 5% increment when the thin-pool data
# or metadata volume usage gets above 50%.
# A command that starts with the 'lvm ' prefix is an internal lvm command.
# You can write your own handler to customise the behaviour in more detail.
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.
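As a usage sketch (not part of this changeset), the new dmeventd/thin_command option can be left at its internal default or pointed at an external handler; as the thin plugin changes below show, external commands are executed with DMEVENTD_THIN_POOL_DATA, DMEVENTD_THIN_POOL_METADATA and LVM_RUN_BY_DMEVENTD=1 in their environment. The script path here is hypothetical.

dmeventd {
	# Internal default:
	# thin_command = "lvm lvextend --use-policies"
	# Hypothetical external handler that can inspect the exported
	# DMEVENTD_THIN_POOL_DATA / DMEVENTD_THIN_POOL_METADATA values:
	thin_command = "/usr/local/sbin/thin_pool_hook.sh"
}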

configure

@@ -6321,6 +6321,50 @@ _ACEOF
esac
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for __builtin_clz" >&5
$as_echo_n "checking for __builtin_clz... " >&6; }
if ${ax_cv_have___builtin_clz+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
__builtin_clz(0)
;
return 0;
}
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
ax_cv_have___builtin_clz=yes
else
ax_cv_have___builtin_clz=no
fi
rm -f core conftest.err conftest.$ac_objext \
conftest$ac_exeext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_have___builtin_clz" >&5
$as_echo "$ax_cv_have___builtin_clz" >&6; }
if test yes = $ax_cv_have___builtin_clz; then :
cat >>confdefs.h <<_ACEOF
#define HAVE___BUILTIN_CLZ 1
_ACEOF
fi
################################################################################
for ac_func in ftruncate gethostname getpagesize gettimeofday localtime_r \
memchr memset mkdir mkfifo munmap nl_langinfo realpath rmdir setenv \


@@ -134,6 +134,7 @@ AC_TYPE_UINT8_T
AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_UINT64_T
AX_GCC_BUILTIN([__builtin_clz])
################################################################################
dnl -- Check for functions


@@ -532,6 +532,7 @@ static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
static pthread_mutex_t _mutex = PTHREAD_MUTEX_INITIALIZER;
struct iovec iov[2];
cs_error_t err;
int target_node;
@@ -546,7 +547,10 @@ static int _cluster_send_message(const void *buf, int msglen, const char *csid,
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
pthread_mutex_lock(&_mutex);
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
pthread_mutex_unlock(&_mutex);
return cs_to_errno(err);
}


@@ -121,6 +121,7 @@ int dmeventd_lvm2_run(const char *cmdline)
int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
const char *cmd, const char *device)
{
static char _internal_prefix[] = "_dmeventd_";
char *vg = NULL, *lv = NULL, *layer;
int r;
@@ -135,6 +136,21 @@ int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
(layer = strstr(lv, "_mlog")))
*layer = '\0';
if (!strncmp(cmd, _internal_prefix, sizeof(_internal_prefix) - 1)) {
dmeventd_lvm2_lock();
/* output of internal command passed via env var */
if (!dmeventd_lvm2_run(cmd))
cmd = NULL;
else if ((cmd = getenv(cmd)))
cmd = dm_pool_strdup(mem, cmd); /* copy with lock */
dmeventd_lvm2_unlock();
if (!cmd) {
log_error("Unable to find configured command.");
return 0;
}
}
r = dm_snprintf(buffer, size, "%s %s/%s", cmd, vg, lv);
dm_pool_free(mem, vg);


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2011-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -40,17 +40,26 @@
#define UMOUNT_COMMAND "/bin/umount"
#define MAX_FAILS (10)
#define MAX_FAILS (256) /* ~42 mins between cmd call retry with 10s delay */
#define THIN_DEBUG 0
struct dso_state {
struct dm_pool *mem;
int metadata_percent_check;
int metadata_percent;
int metadata_warn_once;
int data_percent_check;
int data_percent;
int data_warn_once;
uint64_t known_metadata_size;
uint64_t known_data_size;
unsigned fails;
unsigned max_fails;
int restore_sigset;
sigset_t old_sigset;
pid_t pid;
char **argv;
char cmd_str[1024];
};
@@ -58,259 +67,95 @@ DM_EVENT_LOG_FN("thin")
#define UUID_PREFIX "LVM-"
/* Figure out device UUID has LVM- prefix and is OPEN */
static int _has_unmountable_prefix(int major, int minor)
static int _run_command(struct dso_state *state)
{
struct dm_task *dmt;
struct dm_info info;
const char *uuid;
int r = 0;
char val[3][36];
char *env[] = { val[0], val[1], val[2], NULL };
int i;
if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
return_0;
/* Mark a possible lvm2 command as being run from dmeventd;
 * lvm2 will not try to talk back to dmeventd while processing it. */
(void) dm_snprintf(val[0], sizeof(val[0]), "LVM_RUN_BY_DMEVENTD=1");
if (!dm_task_set_major_minor(dmt, major, minor, 1))
goto_out;
if (state->data_percent) {
/* Prepare some known data in env vars for easy use */
(void) dm_snprintf(val[1], sizeof(val[1]), "DMEVENTD_THIN_POOL_DATA=%d",
state->data_percent / DM_PERCENT_1);
(void) dm_snprintf(val[2], sizeof(val[2]), "DMEVENTD_THIN_POOL_METADATA=%d",
state->metadata_percent / DM_PERCENT_1);
} else {
/* For an error event it's up to the user to check the status and decide */
env[1] = NULL;
log_debug("Error event processing");
}
if (!dm_task_no_flush(dmt))
stack;
log_verbose("Executing command: %s", state->cmd_str);
if (!dm_task_run(dmt))
goto out;
if (!dm_task_get_info(dmt, &info))
goto out;
if (!info.exists || !info.open_count)
goto out; /* Not open -> not mounted */
if (!(uuid = dm_task_get_uuid(dmt)))
goto out;
/* Check it's public mountable LV
* has prefix LVM- and UUID size is 68 chars */
if (memcmp(uuid, UUID_PREFIX, sizeof(UUID_PREFIX) - 1) ||
strlen(uuid) != 68)
goto out;
#if THIN_DEBUG
log_debug("Found logical volume %s (%u:%u).", uuid, major, minor);
#endif
r = 1;
out:
dm_task_destroy(dmt);
return r;
}
/* Get dependencies for device, and try to find matching device */
static int _has_deps(const char *name, int tp_major, int tp_minor, int *dev_minor)
{
struct dm_task *dmt;
const struct dm_deps *deps;
struct dm_info info;
int major, minor;
int r = 0;
if (!(dmt = dm_task_create(DM_DEVICE_DEPS)))
/* TODO:
 * Support parallel run of 'task' and its waitpid maintenance.
 * ATM we can't handle signalling of SIGALRM,
 * as signalling is not allowed while 'process_event()' is running.
 */
if (!(state->pid = fork())) {
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execve(state->argv[0], state->argv, env);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
state->fails = 1;
return 0;
if (!dm_task_set_name(dmt, name))
goto out;
if (!dm_task_no_open_count(dmt))
goto out;
if (!dm_task_run(dmt))
goto out;
if (!dm_task_get_info(dmt, &info))
goto out;
if (!(deps = dm_task_get_deps(dmt)))
goto out;
if (!info.exists || deps->count != 1)
goto out;
major = (int) MAJOR(deps->device[0]);
minor = (int) MINOR(deps->device[0]);
if ((major != tp_major) || (minor != tp_minor))
goto out;
*dev_minor = info.minor;
if (!_has_unmountable_prefix(major, info.minor))
goto out;
#if THIN_DEBUG
{
char dev_name[PATH_MAX];
if (dm_device_get_name(major, minor, 0, dev_name, sizeof(dev_name)))
log_debug("Found %s (%u:%u) depends on %s.",
name, major, *dev_minor, dev_name);
}
#endif
r = 1;
out:
dm_task_destroy(dmt);
return r;
}
/* Get all active devices */
static int _find_all_devs(dm_bitset_t bs, int tp_major, int tp_minor)
{
struct dm_task *dmt;
struct dm_names *names;
unsigned next = 0;
int minor, r = 1;
if (!(dmt = dm_task_create(DM_DEVICE_LIST)))
return 0;
if (!dm_task_run(dmt)) {
r = 0;
goto out;
}
if (!(names = dm_task_get_names(dmt))) {
r = 0;
goto out;
}
if (!names->dev)
goto out;
do {
names = (struct dm_names *)((char *) names + next);
if (_has_deps(names->name, tp_major, tp_minor, &minor))
dm_bit_set(bs, minor);
next = names->next;
} while (next);
out:
dm_task_destroy(dmt);
return r;
}
static int _run(const char *cmd, ...)
{
va_list ap;
int argc = 1; /* for argv[0], i.e. cmd */
int i = 0;
const char **argv;
pid_t pid = fork();
int status;
if (pid == 0) { /* child */
va_start(ap, cmd);
while (va_arg(ap, const char *))
++argc;
va_end(ap);
/* + 1 for the terminating NULL */
argv = alloca(sizeof(const char *) * (argc + 1));
argv[0] = cmd;
va_start(ap, cmd);
while ((argv[++i] = va_arg(ap, const char *)));
va_end(ap);
execvp(cmd, (char **)argv);
log_sys_error("exec", cmd);
exit(127);
}
if (pid > 0) { /* parent */
if (waitpid(pid, &status, 0) != pid)
return 0; /* waitpid failed */
if (!WIFEXITED(status) || WEXITSTATUS(status))
return 0; /* the child failed */
}
if (pid < 0)
return 0; /* fork failed */
return 1; /* all good */
}
struct mountinfo_s {
const char *device;
struct dm_info info;
dm_bitset_t minors; /* Bitset for active thin pool minors */
};
static int _umount_device(char *buffer, unsigned major, unsigned minor,
char *target, void *cb_data)
{
struct mountinfo_s *data = cb_data;
char *words[10];
if ((major == data->info.major) && dm_bit(data->minors, minor)) {
if (dm_split_words(buffer, DM_ARRAY_SIZE(words), 0, words) < DM_ARRAY_SIZE(words))
words[9] = NULL; /* just don't show device name */
log_info("Unmounting thin %s (%d:%d) of thin pool %s (%u:%u) from mount point \"%s\".",
words[9] ? : "", major, minor, data->device,
data->info.major, data->info.minor,
target);
if (!_run(UMOUNT_COMMAND, "-fl", target, NULL))
log_error("Failed to lazy umount thin %s (%d:%d) from %s: %s.",
words[9], major, minor, target, strerror(errno));
}
return 1;
}
/*
* Find all thin pool LV users and try to umount them.
* TODO: work with read-only thin pool support
*/
static void _umount(struct dm_task *dmt)
{
/* TODO: Convert to use hash to reduce memory usage */
static const size_t MINORS = (1U << 20); /* 20 bit */
struct mountinfo_s data = { NULL };
if (!dm_task_get_info(dmt, &data.info))
return;
data.device = dm_task_get_name(dmt);
if (!(data.minors = dm_bitset_create(NULL, MINORS))) {
log_error("Failed to allocate bitset. Not unmounting %s.", data.device);
goto out;
}
if (!_find_all_devs(data.minors, data.info.major, data.info.minor)) {
log_error("Failed to detect mounted volumes for %s.", data.device);
goto out;
}
if (!dm_mountinfo_read(_umount_device, &data)) {
log_error("Could not parse mountinfo file.");
goto out;
}
out:
if (data.minors)
dm_bitset_destroy(data.minors);
}
static int _use_policy(struct dm_task *dmt, struct dso_state *state)
{
#if THIN_DEBUG
log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
if (state->argv)
return _run_command(state);
if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
log_error("Failed to extend thin pool %s.",
dm_task_get_name(dmt));
state->fails++;
log_error("Failed command for %s.", dm_task_get_name(dmt));
state->fails = 1;
return 0;
}
state->fails = 0;
return 1;
}
/* Check if the executed command has finished.
 * Only one command may run at a time. */
static int _wait_for_pid(struct dso_state *state)
{
int status = 0;
if (state->pid == -1)
return 1;
if (!waitpid(state->pid, &status, WNOHANG))
return 0;
/* Wait for finish */
if (WIFEXITED(status)) {
log_verbose("Child %d exited with status %d.",
state->pid, WEXITSTATUS(status));
state->fails = WEXITSTATUS(status) ? 1 : 0;
} else {
if (WIFSIGNALED(status))
log_verbose("Child %d was terminated with status %d.",
state->pid, WTERMSIG(status));
state->fails = 1;
}
state->pid = -1;
return 1;
}
@@ -319,7 +164,6 @@ void process_event(struct dm_task *dmt,
void **user)
{
const char *device = dm_task_get_name(dmt);
int percent;
struct dso_state *state = *user;
struct dm_status_thin_pool *tps = NULL;
void *next = NULL;
@@ -327,25 +171,48 @@ void process_event(struct dm_task *dmt,
char *target_type = NULL;
char *params;
int needs_policy = 0;
int needs_umount = 0;
struct dm_task *new_dmt = NULL;
#if THIN_DEBUG
log_debug("Watch for tp-data:%.2f%% tp-metadata:%.2f%%.",
dm_percent_to_float(state->data_percent_check),
dm_percent_to_float(state->metadata_percent_check));
#endif
#if 0
/* No longer monitoring, waiting for remove */
if (!state->meta_percent_check && !state->data_percent_check)
if (!_wait_for_pid(state)) {
log_warn("WARNING: Skipping event, child %d is still running (%s).",
state->pid, state->cmd_str);
return;
#endif
}
if (event & DM_EVENT_DEVICE_ERROR) {
/* Error -> no need to check and do instant resize */
state->data_percent = state->metadata_percent = 0;
if (_use_policy(dmt, state))
goto out;
stack;
/*
* Rather update oldish status
* since after 'command' processing
* percentage info could have changed a lot.
* If we would get above UMOUNT_THRESH
* we would wait for next sigalarm.
*/
if (!(new_dmt = dm_task_create(DM_DEVICE_STATUS)))
goto_out;
if (!dm_task_set_uuid(new_dmt, dm_task_get_uuid(dmt)))
goto_out;
/* Non-blocking status read */
if (!dm_task_no_flush(new_dmt))
log_warn("WARNING: Can't set no_flush for dm status.");
if (!dm_task_run(new_dmt))
goto_out;
dmt = new_dmt;
}
dm_get_next_target(dmt, next, &start, &length, &target_type, &params);
@@ -357,7 +224,6 @@ void process_event(struct dm_task *dmt,
if (!dm_get_status_thin_pool(state->mem, params, &tps)) {
log_error("Failed to parse status.");
needs_umount = 1;
goto out;
}
@@ -372,67 +238,112 @@ void process_event(struct dm_task *dmt,
if (state->known_metadata_size != tps->total_metadata_blocks) {
state->metadata_percent_check = CHECK_MINIMUM;
state->known_metadata_size = tps->total_metadata_blocks;
state->fails = 0;
}
if (state->known_data_size != tps->total_data_blocks) {
state->data_percent_check = CHECK_MINIMUM;
state->known_data_size = tps->total_data_blocks;
state->fails = 0;
}
percent = dm_make_percent(tps->used_metadata_blocks, tps->total_metadata_blocks);
if (percent >= state->metadata_percent_check) {
/*
* Usage has raised more than CHECK_STEP since the last
* time. Run actions.
*/
state->metadata_percent_check = (percent / CHECK_STEP) * CHECK_STEP + CHECK_STEP;
/*
* Trigger action when threshold boundary is exceeded.
* Report the 80% threshold warning when usage is above 80%.
* Only 100% is an exception, as it cannot be surpassed, so the policy
* action is called for: >50%, >55% ... >95%, 100%.
*/
state->metadata_percent = dm_make_percent(tps->used_metadata_blocks, tps->total_metadata_blocks);
if (state->metadata_percent <= WARNING_THRESH)
state->metadata_warn_once = 0; /* Dropped below threshold, reset warn once */
else if (!state->metadata_warn_once++) /* Warn once when raised above threshold */
log_warn("WARNING: Thin pool %s metadata is now %.2f%% full.",
device, dm_percent_to_float(state->metadata_percent));
if (state->metadata_percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->metadata_percent > state->metadata_percent_check)
needs_policy = 1;
state->metadata_percent_check = (state->metadata_percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->metadata_percent_check == DM_PERCENT_100)
state->metadata_percent_check--; /* Can't get bigger than 100% */
} else
state->metadata_percent_check = CHECK_MINIMUM;
/* FIXME: extension of metadata needs to be written! */
if (percent >= WARNING_THRESH) /* Print a warning to syslog. */
log_warn("WARNING: Thin pool %s metadata is now %.2f%% full.",
device, dm_percent_to_float(percent));
needs_policy = 1;
state->data_percent = dm_make_percent(tps->used_data_blocks, tps->total_data_blocks);
if (state->data_percent <= WARNING_THRESH)
state->data_warn_once = 0;
else if (!state->data_warn_once++)
log_warn("WARNING: Thin pool %s data is now %.2f%% full.",
device, dm_percent_to_float(state->data_percent));
if (state->data_percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->data_percent > state->data_percent_check)
needs_policy = 1;
state->data_percent_check = (state->data_percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->data_percent_check == DM_PERCENT_100)
state->data_percent_check--; /* Can't get bigger than 100% */
} else
state->data_percent_check = CHECK_MINIMUM;
if (percent >= UMOUNT_THRESH)
needs_umount = 1;
}
/* Reduce the number of _use_policy() calls by a power-of-2 factor until the MAX_FAILS frequency is reached.
 * This avoids too many error retries, yet still shows some status messages in the log regularly,
 * e.g. a PV could have been pvmoved and the VG/LV locked for a while...
 */
if (state->fails) {
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
return;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;
state->fails = needs_policy = 1; /* Retry failing command */
} else
state->max_fails = 1; /* Reset on success */
percent = dm_make_percent(tps->used_data_blocks, tps->total_data_blocks);
if (percent >= state->data_percent_check) {
/*
* Usage has raised more than CHECK_STEP since
* the last time. Run actions.
*/
state->data_percent_check = (percent / CHECK_STEP) * CHECK_STEP + CHECK_STEP;
if (percent >= WARNING_THRESH) /* Print a warning to syslog. */
log_warn("WARNING: Thin pool %s data is now %.2f%% full.",
device, dm_percent_to_float(percent));
needs_policy = 1;
if (percent >= UMOUNT_THRESH)
needs_umount = 1;
}
if (needs_policy &&
_use_policy(dmt, state))
needs_umount = 0; /* No umount when command was successful */
if (needs_policy)
_use_policy(dmt, state);
out:
if (needs_umount) {
_umount(dmt);
/* Until something changes, do not retry any more actions */
state->data_percent_check = state->metadata_percent_check = (DM_PERCENT_1 * 101);
}
if (tps)
dm_pool_free(state->mem, tps);
if (state->fails >= MAX_FAILS) {
log_warn("WARNING: Dropping monitoring of %s. "
"lvm2 command fails too often (%u times in row).",
device, state->fails);
pthread_kill(pthread_self(), SIGALRM);
}
if (new_dmt)
dm_task_destroy(new_dmt);
}
/* Handle SIGCHLD for a thread */
static void _sig_child(int signum __attribute__((unused)))
{
/* empty SIG_IGN */;
}
/* Set up a handler for SIGCHLD when executing an external command
 * to get a quick 'waitpid()' reaction.
 * It will interrupt the syscall just like SIGALRM and
 * invoke process_event().
 */
static void _init_thread_signals(struct dso_state *state)
{
struct sigaction act = { .sa_handler = _sig_child };
sigset_t my_sigset;
sigemptyset(&my_sigset);
if (sigaction(SIGCHLD, &act, NULL))
log_warn("WARNING: Failed to set SIGCHLD action.");
else if (sigaddset(&my_sigset, SIGCHLD))
log_warn("WARNING: Failed to add SIGCHLD to set.");
else if (pthread_sigmask(SIG_UNBLOCK, &my_sigset, &state->old_sigset))
log_warn("WARNING: Failed to unblock SIGCHLD.");
else
state->restore_sigset = 1;
}
static void _restore_thread_signals(struct dso_state *state)
{
if (state->restore_sigset &&
pthread_sigmask(SIG_SETMASK, &state->old_sigset, NULL))
log_warn("WARNING: Failed to block SIGCHLD.");
}
int register_device(const char *device,
@@ -442,20 +353,36 @@ int register_device(const char *device,
void **user)
{
struct dso_state *state;
int maxcmd;
char *str;
if (!dmeventd_lvm2_init_with_pool("thin_pool_state", state))
goto_bad;
if (!dmeventd_lvm2_command(state->mem, state->cmd_str,
sizeof(state->cmd_str),
"lvextend --use-policies",
device)) {
"_dmeventd_thin_command", device)) {
dmeventd_lvm2_exit_with_pool(state);
goto_bad;
}
state->metadata_percent_check = CHECK_MINIMUM;
state->data_percent_check = CHECK_MINIMUM;
if (strncmp(state->cmd_str, "lvm ", 4)) {
maxcmd = 2; /* space for last NULL element */
for (str = state->cmd_str; *str; str++)
if (*str == ' ')
maxcmd++;
if (!(str = dm_pool_strdup(state->mem, state->cmd_str)) ||
!(state->argv = dm_pool_zalloc(state->mem, maxcmd * sizeof(char *)))) {
log_error("Failed to allocate memory for command.");
goto bad;
}
dm_split_words(str, maxcmd - 1, 0, state->argv);
_init_thread_signals(state);
} else
memmove(state->cmd_str, state->cmd_str + 4, strlen(state->cmd_str + 4) + 1);
state->pid = -1;
*user = state;
log_info("Monitoring thin pool %s.", device);
@@ -474,6 +401,28 @@ int unregister_device(const char *device,
void **user)
{
struct dso_state *state = *user;
int i;
for (i = 0; !_wait_for_pid(state) && (i < 6); ++i) {
if (i == 0)
/* Give it 2 seconds, then try to terminate & kill it */
log_verbose("Child %d still not finished (%s) waiting.",
state->pid, state->cmd_str);
else if (i == 3) {
log_warn("WARNING: Terminating child %d.", state->pid);
kill(state->pid, SIGINT);
kill(state->pid, SIGTERM);
} else if (i == 5) {
log_warn("WARNING: Killing child %d.", state->pid);
kill(state->pid, SIGKILL);
}
sleep(1);
}
if (state->pid != -1)
log_warn("WARNING: Cannot kill child %d!", state->pid);
_restore_thread_signals(state);
dmeventd_lvm2_exit_with_pool(state);
log_info("No longer monitoring thin pool %s.", device);


@@ -295,7 +295,7 @@ def vg_lv_snapshot(vg_name, snapshot_options, name, size_bytes):
return call(cmd)
def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
def _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
@@ -303,20 +303,18 @@ def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
return cmd
def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
cmd = _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool)
cmd.extend(['--name', name, vg_name])
return call(cmd)
def vg_lv_create_striped(vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
if not thin_pool:
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
cmd = _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool)
cmd.extend(['--stripes', str(num_stripes)])
if stripe_size_kb != 0:


@@ -272,6 +272,26 @@ class LvCommon(AutomatedProperties):
self.state = object_state
self._move_pv = self._get_move_pv()
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(lv_uuid, lv_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if not dbo:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
return dbo
@property
def VolumeType(self):
type_map = {'C': 'Cache', 'm': 'mirrored',
@@ -408,24 +428,10 @@ class Lv(LvCommon):
@staticmethod
def _remove(lv_uuid, lv_name, remove_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if dbo:
# Remove the LV, if successful then remove from the model
rc, out, err = cmdhandler.lv_remove(lv_name, remove_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Remove the LV, if successful then remove from the model
rc, out, err = cmdhandler.lv_remove(lv_name, remove_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -443,24 +449,11 @@ class Lv(LvCommon):
@staticmethod
def _rename(lv_uuid, lv_name, new_name, rename_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
if dbo:
# Rename the logical volume
rc, out, err = cmdhandler.lv_rename(lv_name, new_name,
rename_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Rename the logical volume
rc, out, err = cmdhandler.lv_rename(lv_name, new_name,
rename_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -500,32 +493,21 @@ class Lv(LvCommon):
def _snap_shot(lv_uuid, lv_name, name, optional_size,
snapshot_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
# If you specify a size you get a 'thick' snapshot even if
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder
if dbo:
# If you specify a size you get a 'thick' snapshot even if
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder
rc, out, err = cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options, name, optional_size)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
rc, out, err = cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options, name, optional_size)
if rc == 0:
cfg.load()
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -548,38 +530,24 @@ class Lv(LvCommon):
resize_options):
# Make sure we have a dbus object representing it
pv_dests = []
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
if dbo:
# If we have PVs, verify them
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'PV Destination (%s) not found' % pr[0])
# If we have PVs, verify them
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'PV Destination (%s) not found' % pr[0])
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
size_change = new_size_bytes - dbo.SizeBytes
rc, out, err = cmdhandler.lv_resize(dbo.lvm_id, size_change,
pv_dests, resize_options)
if rc == 0:
# Refresh what's changed
cfg.load()
return "/"
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
size_change = new_size_bytes - dbo.SizeBytes
rc, out, err = cmdhandler.lv_resize(dbo.lvm_id, size_change,
pv_dests, resize_options)
LvCommon.handle_execute(rc, out, err)
return "/"
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -612,23 +580,11 @@ class Lv(LvCommon):
def _lv_activate_deactivate(uuid, lv_name, activate, control_flags,
options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, lv_name)
if dbo:
rc, out, err = cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(uuid, lv_name))
LvCommon.validate_dbus_object(uuid, lv_name)
rc, out, err = cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -660,25 +616,11 @@ class Lv(LvCommon):
@staticmethod
def _add_rm_tags(uuid, lv_name, tags_add, tags_del, tag_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, lv_name)
if dbo:
rc, out, err = cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(uuid, lv_name))
LvCommon.validate_dbus_object(uuid, lv_name)
rc, out, err = cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=LV_INTERFACE,
@@ -736,24 +678,13 @@ class LvThinPool(Lv):
@staticmethod
def _lv_create(lv_uuid, lv_name, name, size_bytes, create_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
if dbo:
rc, out, err = cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes)
if rc == 0:
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
cfg.load()
return cfg.om.get_object_path_by_lvm_id(full_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
rc, out, err = cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
@dbus.service.method(
dbus_interface=THIN_POOL_INTERFACE,
@@ -790,14 +721,13 @@ class LvCachePool(Lv):
@staticmethod
def _cache_lv(lv_uuid, lv_name, lv_object_path, cache_options):
# Make sure we have a dbus object representing cache pool
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Make sure we have dbus object representing lv to cache
lv_to_cache = cfg.om.get_object_by_path(lv_object_path)
if dbo and lv_to_cache:
if lv_to_cache:
fcn = lv_to_cache.lv_full_name()
rc, out, err = cmdhandler.lv_cache_lv(
dbo.lv_full_name(), fcn, cache_options)
@@ -809,22 +739,14 @@ class LvCachePool(Lv):
cfg.load()
lv_converted = cfg.om.get_object_path_by_lvm_id(fcn)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
msg = ""
if not dbo:
dbo += 'CachePool LV with uuid %s and name %s not present!' % \
(lv_uuid, lv_name)
if not lv_to_cache:
dbo += 'LV to cache with object path %s not present!' % \
(lv_object_path)
raise dbus.exceptions.DBusException(LV_INTERFACE, msg)
raise dbus.exceptions.DBusException(
LV_INTERFACE, 'LV to cache with object path %s not present!' %
lv_object_path)
return lv_converted
@dbus.service.method(
@@ -855,31 +777,25 @@ class LvCacheLv(Lv):
@staticmethod
def _detach_lv(lv_uuid, lv_name, detach_options, destroy_cache):
# Make sure we have a dbus object representing cache pool
dbo = cfg.om.get_object_by_uuid_lvm_id(lv_uuid, lv_name)
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
if dbo:
# Get current cache name
cache_pool = cfg.om.get_object_by_path(dbo.CachePool)
# Get current cache name
cache_pool = cfg.om.get_object_by_path(dbo.CachePool)
rc, out, err = cmdhandler.lv_detach_cache(
dbo.lv_full_name(), detach_options, destroy_cache)
if rc == 0:
# The cache pool gets removed as hidden and put back to
# visible, so let's delete
mt_remove_dbus_objects((cache_pool, dbo))
cfg.load()
rc, out, err = cmdhandler.lv_detach_cache(
dbo.lv_full_name(), detach_options, destroy_cache)
if rc == 0:
# The cache pool gets removed as hidden and put back to
# visible, so let's delete
mt_remove_dbus_objects((cache_pool, dbo))
cfg.load()
uncached_lv_path = cfg.om.get_object_path_by_lvm_id(lv_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
uncached_lv_path = cfg.om.get_object_path_by_lvm_id(lv_name)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'LV with uuid %s and name %s not present!' %
(lv_uuid, lv_name))
'Exit code %s, stderr = %s' % (str(rc), err))
return uncached_lv_path
@dbus.service.method(


@@ -69,18 +69,7 @@ class DataStore(object):
table[key] = record
@staticmethod
def _parse_pvs(_pvs):
pvs = sorted(_pvs, key=lambda pk: pk['pv_name'])
c_pvs = OrderedDict()
c_lookup = {}
c_pvs_in_vgs = {}
for p in pvs:
DataStore._insert_record(
c_pvs, p['pv_uuid'], p,
['pvseg_start', 'pvseg_size', 'segtype'])
def _pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup):
for p in c_pvs.values():
# Capture which PVs are associated with which VG
if p['vg_uuid'] not in c_pvs_in_vgs:
@@ -93,6 +82,20 @@ class DataStore(object):
# Lookup for translating between /dev/<name> and pv uuid
c_lookup[p['pv_name']] = p['pv_uuid']
@staticmethod
def _parse_pvs(_pvs):
pvs = sorted(_pvs, key=lambda pk: pk['pv_name'])
c_pvs = OrderedDict()
c_lookup = {}
c_pvs_in_vgs = {}
for p in pvs:
DataStore._insert_record(
c_pvs, p['pv_uuid'], p,
['pvseg_start', 'pvseg_size', 'segtype'])
DataStore._pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup)
return c_pvs, c_lookup, c_pvs_in_vgs
@staticmethod
@@ -132,17 +135,7 @@ class DataStore(object):
i['pvseg_size'] = i['pv_pe_count']
i['segtype'] = 'free'
for p in c_pvs.values():
# Capture which PVs are associated with which VG
if p['vg_uuid'] not in c_pvs_in_vgs:
c_pvs_in_vgs[p['vg_uuid']] = []
if p['vg_name']:
c_pvs_in_vgs[p['vg_uuid']].append(
(p['pv_name'], p['pv_uuid']))
# Lookup for translating between /dev/<name> and pv uuid
c_lookup[p['pv_name']] = p['pv_uuid']
DataStore._pvs_parse_common(c_pvs, c_pvs_in_vgs, c_lookup)
return c_pvs, c_lookup, c_pvs_in_vgs


@@ -30,6 +30,16 @@ class Manager(AutomatedProperties):
def Version(self):
return dbus.String('1.0.0')
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def _pv_create(device, create_options):
@@ -41,15 +51,8 @@ class Manager(AutomatedProperties):
MANAGER_INTERFACE, "PV Already exists!")
rc, out, err = cmdhandler.pv_create(create_options, [device])
if rc == 0:
cfg.load()
created_pv = cfg.om.get_object_path_by_lvm_id(device)
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
return created_pv
Manager.handle_execute(rc, out, err)
return cfg.om.get_object_path_by_lvm_id(device)
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
@@ -76,14 +79,8 @@ class Manager(AutomatedProperties):
MANAGER_INTERFACE, 'object path = %s not found' % p)
rc, out, err = cmdhandler.vg_create(create_options, pv_devices, name)
if rc == 0:
cfg.load()
return cfg.om.get_object_path_by_lvm_id(name)
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
Manager.handle_execute(rc, out, err)
return cfg.om.get_object_path_by_lvm_id(name)
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,
@@ -200,15 +197,8 @@ class Manager(AutomatedProperties):
activate, cache, device_path,
major_minor, scan_options)
if rc == 0:
# This could potentially change the state quite a bit, so lets
# update everything to be safe
cfg.load()
return '/'
else:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
Manager.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=MANAGER_INTERFACE,


@@ -135,23 +135,30 @@ class Pv(AutomatedProperties):
def _remove(pv_uuid, pv_name, remove_options):
# Remove the PV, if successful then remove from the model
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
Pv.validate_dbus_object(pv_uuid, pv_name)
rc, out, err = cmdhandler.pv_remove(pv_name, remove_options)
Pv.handle_execute(rc, out, err)
return '/'
if dbo:
rc, out, err = cmdhandler.pv_remove(pv_name, remove_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(pv_uuid, pv_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
if not dbo:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
return '/'
return dbo
@dbus.service.method(
dbus_interface=PV_INTERFACE,
@@ -168,22 +175,11 @@ class Pv(AutomatedProperties):
@staticmethod
def _resize(pv_uuid, pv_name, new_size_bytes, resize_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
Pv.validate_dbus_object(pv_uuid, pv_name)
if dbo:
rc, out, err = cmdhandler.pv_resize(pv_name, new_size_bytes,
rc, out, err = cmdhandler.pv_resize(pv_name, new_size_bytes,
resize_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
Pv.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -201,21 +197,10 @@ class Pv(AutomatedProperties):
@staticmethod
def _allocation_enabled(pv_uuid, pv_name, yes_no, allocation_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(pv_uuid, pv_name)
if dbo:
rc, out, err = cmdhandler.pv_allocatable(
pv_name, yes_no, allocation_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE, 'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'PV with uuid %s and name %s not present!' %
(pv_uuid, pv_name))
Pv.validate_dbus_object(pv_uuid, pv_name)
rc, out, err = cmdhandler.pv_allocatable(
pv_name, yes_no, allocation_options)
Pv.handle_execute(rc, out, err)
return '/'
@dbus.service.method(


@@ -19,7 +19,6 @@ from .utils import log_error, mt_async_result
class RequestEntry(object):
def __init__(self, tmo, method, arguments, cb, cb_error,
return_tuple=True, job_state=None):
self.tmo = tmo
self.method = method
self.arguments = arguments
self.cb = cb
@@ -35,31 +34,38 @@ class RequestEntry(object):
self._return_tuple = return_tuple
self._job_state = job_state
if self.tmo < 0:
if tmo < 0:
# Client is willing to block forever
pass
elif tmo == 0:
self._return_job()
else:
self.timer_id = GLib.timeout_add_seconds(
tmo, RequestEntry._request_timeout, self)
# Note: using 990 instead of 1000 for the second-to-millisecond conversion
# to account for overhead. The goal is to return just before the
# timeout amount has expired; better to be a little early than
# late.
self.timer_id = GLib.timeout_add(
tmo * 990, RequestEntry._request_timeout, self)
@staticmethod
def _request_timeout(r):
"""
Method which gets called when the timer runs out!
:param r: RequestEntry which timed out
:return: Nothing
:return: Result of timer_expired
"""
r.timer_expired()
return r.timer_expired()
def _return_job(self):
# Return job is only called when we create a request object or when
# we pop a timer. In both cases we are running in the correct context
# and do not need to schedule the call back in main context.
self._job = Job(self, self._job_state)
cfg.om.register_object(self._job, True)
if self._return_tuple:
mt_async_result(self.cb, ('/', self._job.dbus_object_path()))
self.cb(('/', self._job.dbus_object_path()))
else:
mt_async_result(self.cb, self._job.dbus_object_path())
self.cb(self._job.dbus_object_path())
def run_cmd(self):
try:


@@ -145,29 +145,35 @@ class Vg(AutomatedProperties):
@staticmethod
def fetch_new_lv(vg_name, lv_name):
cfg.load()
return cfg.om.get_object_path_by_lvm_id("%s/%s" % (vg_name, lv_name))
@staticmethod
def handle_execute(rc, out, err):
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(vg_uuid, vg_name):
dbo = cfg.om.get_object_by_uuid_lvm_id(vg_uuid, vg_name)
if not dbo:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(vg_uuid, vg_name))
return dbo
@staticmethod
def _rename(uuid, vg_name, new_name, rename_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_rename(vg_name, new_name,
rename_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_rename(
vg_name, new_name, rename_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -184,24 +190,10 @@ class Vg(AutomatedProperties):
@staticmethod
def _remove(uuid, vg_name, remove_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
# Remove the VG, if successful then remove from the model
rc, out, err = cmdhandler.vg_remove(vg_name, remove_options)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
Vg.validate_dbus_object(uuid, vg_name)
# Remove the VG, if successful then remove from the model
rc, out, err = cmdhandler.vg_remove(vg_name, remove_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -216,26 +208,9 @@ class Vg(AutomatedProperties):
@staticmethod
def _change(uuid, vg_name, change_options):
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_change(change_options, vg_name)
# To use an example with d-feet (Method input)
# {"activate": __import__('gi.repository.GLib', globals(),
# locals(), ['Variant']).Variant("s", "n")}
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_change(change_options, vg_name)
Vg.handle_execute(rc, out, err)
return '/'
# TODO: This should be broken into a number of different methods
@@ -256,34 +231,24 @@ class Vg(AutomatedProperties):
@staticmethod
def _reduce(uuid, vg_name, missing, pv_object_paths, reduce_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
pv_devices = []
pv_devices = []
# If pv_object_paths is not empty, then get the device paths
if pv_object_paths and len(pv_object_paths) > 0:
for pv_op in pv_object_paths:
pv = cfg.om.get_object_by_path(pv_op)
if pv:
pv_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Object path not found = %s!' % pv_op)
# If pv_object_paths is not empty, then get the device paths
if pv_object_paths and len(pv_object_paths) > 0:
for pv_op in pv_object_paths:
pv = cfg.om.get_object_by_path(pv_op)
if pv:
pv_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Object path not found = %s!' % pv_op)
rc, out, err = cmdhandler.vg_reduce(vg_name, missing, pv_devices,
reduce_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
rc, out, err = cmdhandler.vg_reduce(vg_name, missing, pv_devices,
reduce_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -300,36 +265,26 @@ class Vg(AutomatedProperties):
@staticmethod
def _extend(uuid, vg_name, pv_object_paths, extend_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
extend_devices = []
extend_devices = []
for i in pv_object_paths:
pv = cfg.om.get_object_by_path(i)
if pv:
extend_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV Object path not found = %s!' % i)
if len(extend_devices):
rc, out, err = cmdhandler.vg_extend(vg_name, extend_devices,
extend_options)
if rc == 0:
cfg.load()
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
for i in pv_object_paths:
pv = cfg.om.get_object_by_path(i)
if pv:
extend_devices.append(pv.lvm_id)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'No pv_object_paths provided!')
VG_INTERFACE, 'PV Object path not found = %s!' % i)
if len(extend_devices):
rc, out, err = cmdhandler.vg_extend(vg_name, extend_devices,
extend_options)
Vg.handle_execute(rc, out, err)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
VG_INTERFACE, 'No pv_object_paths provided!')
return '/'
@dbus.service.method(
@@ -366,33 +321,24 @@ class Vg(AutomatedProperties):
create_options):
# Make sure we have a dbus object representing it
pv_dests = []
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Destination (%s) not found' % pr[0])
Vg.validate_dbus_object(uuid, vg_name)
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
if len(pv_dests_and_ranges):
for pr in pv_dests_and_ranges:
pv_dbus_obj = cfg.om.get_object_by_path(pr[0])
if not pv_dbus_obj:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'PV Destination (%s) not found' % pr[0])
rc, out, err = cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests)
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
if rc == 0:
return Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
rc, out, err = cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -428,25 +374,13 @@ class Vg(AutomatedProperties):
def _lv_create_linear(uuid, vg_name, name, size_bytes,
thin_pool, create_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool)
rc, out, err = cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -466,24 +400,12 @@ class Vg(AutomatedProperties):
def _lv_create_striped(uuid, vg_name, name, size_bytes, num_stripes,
stripe_size_kb, thin_pool, create_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_striped(
vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_striped(
vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -506,25 +428,11 @@ class Vg(AutomatedProperties):
def _lv_create_mirror(uuid, vg_name, name, size_bytes,
num_copies, create_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -545,26 +453,12 @@ class Vg(AutomatedProperties):
def _lv_create_raid(uuid, vg_name, name, raid_type, size_bytes,
num_stripes, stripe_size_kb, create_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_lv_create_raid(
vg_name, create_options, name, raid_type, size_bytes,
num_stripes, stripe_size_kb)
if rc == 0:
created_lv = Vg.fetch_new_lv(vg_name, name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
return created_lv
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.vg_lv_create_raid(
vg_name, create_options, name, raid_type, size_bytes,
num_stripes, stripe_size_kb)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -585,33 +479,27 @@ class Vg(AutomatedProperties):
def _create_pool(uuid, vg_name, meta_data_lv, data_lv,
create_options, create_method):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
# Retrieve the full names for the metadata and data lv
md = cfg.om.get_object_by_path(meta_data_lv)
data = cfg.om.get_object_by_path(data_lv)
if dbo and md and data:
if md and data:
new_name = data.Name
rc, out, err = create_method(
md.lv_full_name(), data.lv_full_name(), create_options)
if rc == 0:
mt_remove_dbus_objects((md, data))
cache_pool_lv = Vg.fetch_new_lv(vg_name, new_name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
Vg.handle_execute(rc, out, err)
else:
msg = ""
if not dbo:
msg += 'VG with uuid %s and name %s not present!' % \
(uuid, vg_name)
if not md:
msg += 'Meta data LV with object path %s not present!' % \
(meta_data_lv)
@@ -622,7 +510,7 @@ class Vg(AutomatedProperties):
raise dbus.exceptions.DBusException(VG_INTERFACE, msg)
return cache_pool_lv
return Vg.fetch_new_lv(vg_name, new_name)
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -656,33 +544,21 @@ class Vg(AutomatedProperties):
pv_devices = []
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
# Check for existence of pv object paths
for p in pv_object_paths:
pv = cfg.om.get_object_by_path(p)
if pv:
pv_devices.append(pv.Name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV object path = %s not found' % p)
rc, out, err = cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options)
if rc == 0:
cfg.load()
return '/'
# Check for existence of pv object paths
for p in pv_object_paths:
pv = cfg.om.get_object_by_path(p)
if pv:
pv_devices.append(pv.Name)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
VG_INTERFACE, 'PV object path = %s not found' % p)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
rc, out, err = cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -720,25 +596,12 @@ class Vg(AutomatedProperties):
@staticmethod
def _vg_add_rm_tags(uuid, vg_name, tags_add, tags_del, tag_options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
Vg.validate_dbus_object(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
rc, out, err = cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -775,23 +638,10 @@ class Vg(AutomatedProperties):
@staticmethod
def _vg_change_set(uuid, vg_name, method, value, options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = method(vg_name, value, options)
if rc == 0:
dbo.refresh()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = method(vg_name, value, options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=VG_INTERFACE,
@@ -849,23 +699,11 @@ class Vg(AutomatedProperties):
def _vg_activate_deactivate(uuid, vg_name, activate, control_flags,
options):
# Make sure we have a dbus object representing it
dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
if dbo:
rc, out, err = cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options)
if rc == 0:
cfg.load()
return '/'
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'VG with uuid %s and name %s not present!' %
(uuid, vg_name))
Vg.validate_dbus_object(uuid, vg_name)
rc, out, err = cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
dbus_interface=VG_INTERFACE,

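The repeated pattern removed throughout this class (look up the dbus object, raise if the VG is missing, then either reload state or raise on a non-zero exit code) is consolidated into Vg.validate_dbus_object() and Vg.handle_execute(). A minimal Python sketch of what those helpers presumably do, reconstructed from the removed branches; in the daemon they are static methods on Vg, and the _Cfg stub and VG_INTERFACE value below are placeholders, not lvmdbusd code:

    import dbus.exceptions

    VG_INTERFACE = 'com.redhat.lvmdbus1.Vg'     # assumed value, illustration only

    class _Cfg(object):                          # stand-in for lvmdbusd's cfg module
        class om(object):
            @staticmethod
            def get_object_by_uuid_lvm_id(uuid, vg_name):
                return None                      # pretend the VG is unknown
        @staticmethod
        def load():
            pass

    cfg = _Cfg()

    def validate_dbus_object(uuid, vg_name):
        # Replaces the repeated "if dbo: ... else: raise" blocks shown above.
        dbo = cfg.om.get_object_by_uuid_lvm_id(uuid, vg_name)
        if not dbo:
            raise dbus.exceptions.DBusException(
                VG_INTERFACE,
                'VG with uuid %s and name %s not present!' % (uuid, vg_name))
        return dbo

    def handle_execute(rc, out, err):
        # Replaces the repeated exit-code handling after cmdhandler calls.
        # (The old code refreshed state with cfg.load() or dbo.refresh(),
        # depending on the call site.)
        if rc == 0:
            cfg.load()
        else:
            raise dbus.exceptions.DBusException(
                VG_INTERFACE,
                'Exit code %s, stderr = %s' % (str(rc), err))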
doc/license.txt Normal file
View File

@@ -0,0 +1,14 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/

View File

@@ -341,6 +341,9 @@
/* Define to 1 if the system has the type `ptrdiff_t'. */
#undef HAVE_PTRDIFF_T
/* Define to 1 if the compiler has the `__builtin_clz` builtin. */
#undef HAVE___BUILTIN_CLZ
/* Define to 1 if you have the <readline/history.h> header file. */
#undef HAVE_READLINE_HISTORY_H

View File

@@ -358,6 +358,10 @@ int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv)
{
return 1;
}
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv)
{
return 1;
}
int pv_uses_vg(struct physical_volume *pv,
struct volume_group *vg)
{
@@ -770,7 +774,7 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
/* INFO is not set as cache-pool cannot be active.
* STATUS is collected from cache LV */
lv_seg = get_only_segment_using_this_lv(lv);
(void) _lv_info(cmd, lv_seg->lv, 0, NULL, lv_seg, &status->seg_status, 0, 0);
(void) _lv_info(cmd, lv_seg->lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
}
@@ -784,6 +788,13 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
status->info.exists = 0; /* So pool LV is not active */
}
return 1;
} else if (lv_is_external_origin(lv)) {
if (!_lv_info(cmd, lv, 0, &status->info, NULL, NULL,
with_open_count, with_read_ahead))
return_0;
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
} else if (lv_is_origin(lv)) {
/* Query segment status for 'layered' (-real) device most of the time,
* only for merging snapshot, query its progress.
@@ -1164,7 +1175,7 @@ int lv_cache_status(const struct logical_volume *cache_lv,
return 0;
}
if (!lv_info(cache_lv->vg->cmd, cache_lv, 0, NULL, 0, 0)) {
if (!lv_info(cache_lv->vg->cmd, cache_lv, 1, NULL, 0, 0)) {
log_error("Cannot check status for locally inactive cache volume %s.",
display_lvname(cache_lv));
return 0;
@@ -1929,7 +1940,7 @@ int monitor_dev_for_events(struct cmd_context *cmd, const struct logical_volume
/* FIXME specify events */
if (!monitor_fn(seg, 0)) {
log_error("%s: %s segment monitoring function failed.",
display_lvname(lv), seg->segtype->name);
display_lvname(lv), lvseg_name(seg));
return 0;
}
} else
@@ -2566,6 +2577,77 @@ int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv)
return r;
}
/* Remove any existing, closed mapped device by @name */
static int _remove_dm_dev_by_name(const char *name)
{
int r = 0;
struct dm_task *dmt;
struct dm_info info;
if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
return_0;
/* Check if the device exists. */
if (dm_task_set_name(dmt, name) && dm_task_run(dmt) && dm_task_get_info(dmt, &info)) {
dm_task_destroy(dmt);
/* Ignore non-existing or open dm devices */
if (!info.exists || info.open_count)
return 1;
if (!(dmt = dm_task_create(DM_DEVICE_REMOVE)))
return_0;
if (dm_task_set_name(dmt, name))
r = dm_task_run(dmt);
}
dm_task_destroy(dmt);
return r;
}
/* Work all segments of @lv removing any existing, closed "*-missing_N_0" sub devices. */
static int _lv_remove_any_missing_subdevs(struct logical_volume *lv)
{
if (lv) {
uint32_t seg_no = 0;
char name[257];
struct lv_segment *seg;
dm_list_iterate_items(seg, &lv->segments) {
if (seg->area_count != 1)
return_0;
if (dm_snprintf(name, sizeof(name), "%s-%s-missing_%u_0", seg->lv->vg->name, seg->lv->name, seg_no) < 0)
return 0;
if (!_remove_dm_dev_by_name(name))
return 0;
seg_no++;
}
}
return 1;
}
/* Remove any "*-missing_*" sub devices added by the activation layer for a rmeta/rimage missing PV mapping */
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv)
{
uint32_t s;
struct lv_segment *seg = first_seg(lv);
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_LV &&
!_lv_remove_any_missing_subdevs(seg_lv(seg, s)))
return 0;
if (seg->meta_areas && seg_metatype(seg, s) == AREA_LV &&
!_lv_remove_any_missing_subdevs(seg_metalv(seg, s)))
return 0;
}
return 1;
}
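For illustration only, the dm device names this cleanup looks for follow the "<vg>-<lv>-missing_<seg>_0" pattern built by the dm_snprintf() call above; the VG and sub-LV names in this small sketch are made up:

    def missing_subdev_names(vg_name, sub_lv_name, seg_count):
        # Mirrors the "%s-%s-missing_%u_0" format used above (hypothetical names).
        return ['%s-%s-missing_%d_0' % (vg_name, sub_lv_name, n)
                for n in range(seg_count)]

    print(missing_subdev_names('vg00', 'lv0_rimage_1', 2))
    # ['vg00-lv0_rimage_1-missing_0_0', 'vg00-lv0_rimage_1-missing_1_0']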
/*
* Does PV use VG somewhere in its construction?
* Returns 1 on failure.

View File

@@ -124,6 +124,8 @@ int lv_deactivate(struct cmd_context *cmd, const char *lvid_s, const struct logi
int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv);
int lv_deactivate_any_missing_subdevs(const struct logical_volume *lv);
/*
* Returns 1 if info structure has been populated, else 0 on failure.
* When lvinfo* is NULL, it returns 1 if the device is locally active, 0 otherwise.

View File

@@ -61,7 +61,7 @@ struct dev_manager {
int flush_required;
int activation; /* building activation tree */
int suspend; /* building suspend tree */
int skip_external_lv;
unsigned track_external_lv_deps;
struct dm_list pending_delete; /* str_list of dlid(s) with pending delete */
unsigned track_pending_delete;
unsigned track_pvmove_deps;
@@ -214,8 +214,8 @@ typedef enum {
STATUS, /* DM_DEVICE_STATUS ioctl */
} info_type_t;
static int _info_run(info_type_t type, const char *dlid,
struct dm_info *dminfo, uint32_t *read_ahead,
static int _info_run(const char *dlid, struct dm_info *dminfo,
uint32_t *read_ahead,
struct lv_seg_status *seg_status,
int with_open_count, int with_read_ahead,
uint32_t major, uint32_t minor)
@@ -223,26 +223,21 @@ static int _info_run(info_type_t type, const char *dlid,
int r = 0;
struct dm_task *dmt;
int dmtask;
int with_flush; /* TODO: arg for _info_run */
void *target = NULL;
uint64_t target_start, target_length, start, length;
char *target_name, *target_params;
int with_flush = 1; /* TODO: arg for _info_run */
switch (type) {
case INFO:
dmtask = DM_DEVICE_INFO;
break;
case STATUS:
dmtask = DM_DEVICE_STATUS;
with_flush = 0;
break;
default:
log_error(INTERNAL_ERROR "_info_run: unhandled info type.");
return 0;
if (seg_status) {
dmtask = DM_DEVICE_STATUS;
with_flush = 0;
} else {
dmtask = DM_DEVICE_INFO;
with_flush = 1; /* doesn't really matter */
}
if (!(dmt = _setup_task_run(dmtask, dminfo,
NULL, dlid, 0, major, minor, with_open_count, with_flush, 0)))
if (!(dmt = _setup_task_run(dmtask, dminfo, NULL, dlid, 0, major, minor,
with_open_count, with_flush, 0)))
return_0;
if (with_read_ahead && dminfo->exists) {
@@ -252,7 +247,7 @@ static int _info_run(info_type_t type, const char *dlid,
*read_ahead = DM_READ_AHEAD_NONE;
/* Query status only for active device */
if ((type == STATUS) && dminfo->exists) {
if (seg_status && dminfo->exists) {
start = length = seg_status->seg->lv->vg->extent_size;
start *= seg_status->seg->le;
length *= seg_status->seg->len;
@@ -674,18 +669,24 @@ static int _original_uuid_format_check_required(struct cmd_context *cmd)
return (_kernel_major == -1);
}
static int _info(struct cmd_context *cmd, const char *dlid, int with_open_count, int with_read_ahead,
static int _info(struct cmd_context *cmd,
const char *name, const char *dlid,
int with_open_count, int with_read_ahead,
struct dm_info *dminfo, uint32_t *read_ahead,
struct lv_seg_status *seg_status)
{
int r = 0;
char old_style_dlid[sizeof(UUID_PREFIX) + 2 * ID_LEN];
const char *suffix, *suffix_position;
unsigned i = 0;
log_debug_activation("Getting device info for %s [%s].", name, dlid);
/* Check for dlid */
if ((r = _info_run(seg_status ? STATUS : INFO, dlid, dminfo, read_ahead,
seg_status, with_open_count, with_read_ahead, 0, 0)) && dminfo->exists)
if (!_info_run(dlid, dminfo, read_ahead, seg_status,
with_open_count, with_read_ahead, 0, 0))
return_0;
if (dminfo->exists)
return 1;
/* Check for original version of dlid before the suffixes got added in 2.02.106 */
@@ -696,33 +697,33 @@ static int _info(struct cmd_context *cmd, const char *dlid, int with_open_count,
(void) strncpy(old_style_dlid, dlid, sizeof(old_style_dlid));
old_style_dlid[sizeof(old_style_dlid) - 1] = '\0';
if ((r = _info_run(seg_status ? STATUS : INFO, old_style_dlid, dminfo,
read_ahead, seg_status, with_open_count,
with_read_ahead, 0, 0)) && dminfo->exists)
if (!_info_run(old_style_dlid, dminfo, read_ahead, seg_status,
with_open_count, with_read_ahead, 0, 0))
return_0;
if (dminfo->exists)
return 1;
}
}
/* Must we still check for the pre-2.02.00 dm uuid format? */
if (!_original_uuid_format_check_required(cmd))
return r;
/* Check for dlid before UUID_PREFIX was added */
if ((r = _info_run(seg_status ? STATUS : INFO, dlid + sizeof(UUID_PREFIX) - 1,
dminfo, read_ahead, seg_status, with_open_count,
with_read_ahead, 0, 0)) && dminfo->exists)
return 1;
return r;
/* Check for dlid before UUID_PREFIX was added */
if (!_info_run(dlid + sizeof(UUID_PREFIX) - 1, dminfo, read_ahead, seg_status,
with_open_count, with_read_ahead, 0, 0))
return_0;
return 1;
}
static int _info_by_dev(uint32_t major, uint32_t minor, struct dm_info *info)
{
return _info_run(INFO, NULL, info, NULL, 0, 0, 0, major, minor);
return _info_run(NULL, info, NULL, 0, 0, 0, major, minor);
}
int dev_manager_info(struct cmd_context *cmd, const struct logical_volume *lv,
const char *layer,
int dev_manager_info(struct cmd_context *cmd,
const struct logical_volume *lv, const char *layer,
int with_open_count, int with_read_ahead,
struct dm_info *dminfo, uint32_t *read_ahead,
struct lv_seg_status *seg_status)
@@ -736,9 +737,9 @@ int dev_manager_info(struct cmd_context *cmd, const struct logical_volume *lv,
if (!(dlid = build_dm_uuid(cmd->mem, lv, layer)))
goto_out;
log_debug_activation("Getting device info for %s [%s].", name, dlid);
r = _info(cmd, dlid, with_open_count, with_read_ahead,
dminfo, read_ahead, seg_status);
if (!(r = _info(cmd, name, dlid, with_open_count, with_read_ahead,
dminfo, read_ahead, seg_status)))
stack;
out:
dm_pool_free(cmd->mem, name);
@@ -1713,12 +1714,8 @@ static int _add_dev_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
if (!(dlid = build_dm_uuid(dm->mem, lv, layer)))
return_0;
log_debug_activation("Getting device info for %s [%s].", name, dlid);
if (!_info(dm->cmd, dlid, 1, 0, &info, NULL, NULL)) {
log_error("Failed to get info for %s [%s].", name, dlid);
return 0;
}
if (!_info(dm->cmd, name, dlid, 1, 0, &info, NULL, NULL))
return_0;
/*
* For top level volumes verify that existing device match
@@ -2042,16 +2039,16 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
#endif
}
if (origin_only && dm->activation && !dm->skip_external_lv &&
if (origin_only && dm->activation && dm->track_external_lv_deps &&
lv_is_external_origin(lv)) {
/* Find possible users of external origin lv */
dm->skip_external_lv = 1; /* avoid recursion */
dm->track_external_lv_deps = 0; /* avoid recursion */
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
/* Match only external_lv users */
if ((sl->seg->external_lv == lv) &&
!_add_lv_to_dtree(dm, dtree, sl->seg->lv, 1))
return_0;
dm->skip_external_lv = 0;
dm->track_external_lv_deps = 1;
}
if (lv_is_thin_pool(lv)) {
@@ -2151,7 +2148,7 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
/* Add any LVs used by segments in this LV */
dm_list_iterate_items(seg, &lv->segments) {
if (seg->external_lv && !dm->skip_external_lv &&
if (seg->external_lv && dm->track_external_lv_deps &&
!_add_lv_to_dtree(dm, dtree, seg->external_lv, 1)) /* stack */
return_0;
if (seg->log_lv &&
@@ -2161,7 +2158,7 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
!_add_lv_to_dtree(dm, dtree, seg->metadata_lv, 0))
return_0;
if (seg->pool_lv &&
(lv_is_cache_pool(seg->pool_lv) || !dm->skip_external_lv) &&
(lv_is_cache_pool(seg->pool_lv) || dm->track_external_lv_deps) &&
/* When activating and not origin_only detect linear 'overlay' over pool */
!_add_lv_to_dtree(dm, dtree, seg->pool_lv, dm->activation ? origin_only : 1))
return_0;
@@ -2241,11 +2238,8 @@ static char *_add_error_or_zero_device(struct dev_manager *dm, struct dm_tree *d
seg->lv->name, errid)))
return_NULL;
log_debug_activation("Getting device info for %s [%s].", name, dlid);
if (!_info(dm->cmd, dlid, 1, 0, &info, NULL, NULL)) {
log_error("Failed to get info for %s [%s].", name, dlid);
return 0;
}
if (!_info(dm->cmd, name, dlid, 1, 0, &info, NULL, NULL))
return_NULL;
if (!info.exists) {
/* Create new node */
@@ -2499,7 +2493,7 @@ static int _add_target_to_dtree(struct dev_manager *dm,
if (!seg->segtype->ops->add_target_line) {
log_error(INTERNAL_ERROR "_emit_target cannot handle "
"segment type %s.", seg->segtype->name);
"segment type %s.", lvseg_name(seg));
return 0;
}
@@ -2581,7 +2575,7 @@ static int _add_new_external_lv_to_dtree(struct dev_manager *dm,
struct seg_list *sl;
/* Do not want to recursively add externals again */
if (dm->skip_external_lv)
if (!dm->track_external_lv_deps)
return 1;
/*
@@ -2589,7 +2583,7 @@ static int _add_new_external_lv_to_dtree(struct dev_manager *dm,
* process all LVs related to this LV, and we want to
* skip repeated invocation of external lv processing
*/
dm->skip_external_lv = 1;
dm->track_external_lv_deps = 0;
log_debug_activation("Adding external origin LV %s and all active users.",
display_lvname(external_lv));
@@ -2615,7 +2609,7 @@ static int _add_new_external_lv_to_dtree(struct dev_manager *dm,
log_debug_activation("Finished adding external origin LV %s and all active users.",
display_lvname(external_lv));
dm->skip_external_lv = 0;
dm->track_external_lv_deps = 1;
return 1;
}
@@ -2681,12 +2675,15 @@ static int _add_segment_to_dtree(struct dev_manager *dm,
/* Add any LVs used by this segment */
for (s = 0; s < seg->area_count; ++s) {
if ((seg_type(seg, s) == AREA_LV) &&
/* do not bring up tracked image */
!lv_is_raid_image_with_tracking(seg_lv(seg, s)) &&
/* origin only for cache without pending delete */
(!dm->track_pending_delete || !seg_is_cache(seg)) &&
!_add_new_lv_to_dtree(dm, dtree, seg_lv(seg, s),
laopts, NULL))
return_0;
if (seg_is_raid_with_meta(seg) && seg->meta_areas && seg_metalv(seg, s) &&
!lv_is_raid_image_with_tracking(seg_lv(seg, s)) &&
!_add_new_lv_to_dtree(dm, dtree, seg_metalv(seg, s),
laopts, NULL))
return_0;
@@ -2721,6 +2718,8 @@ static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
int save_pending_delete = dm->track_pending_delete;
int merge_in_progress = 0;
log_debug_activation("Adding new LV %s%s%s to dtree", display_lvname(lv),
layer ? "-" : "", layer ? : "");
/* LV with pending delete is never put new into a table */
if (lv_is_pending_delete(lv) && !_cached_dm_info(dm->mem, dtree, lv, NULL))
return 1; /* Replace with error only when already exists */
@@ -2898,6 +2897,10 @@ static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
return_0;
if (lv_is_cache(lv) &&
/* Register callback only for layer activation or non-layered cache LV */
(layer || !lv_layer(lv)) &&
/* Register callback when metadata LV is NOT already active */
!_cached_dm_info(dm->mem, dtree, first_seg(first_seg(lv)->pool_lv)->metadata_lv, NULL) &&
!_pool_register_callback(dm, dnode, lv))
return_0;
@@ -3072,7 +3075,7 @@ static int _tree_action(struct dev_manager *dm, const struct logical_volume *lv,
(laopts->origin_only) ? " origin-only" : "",
display_lvname(lv));
/* Some LV can be used for top level tree */
/* Some LV cannot be used for top level tree */
/* TODO: add more.... */
if (lv_is_cache_pool(lv) && !dm_list_empty(&lv->segs_using_this_lv)) {
log_error(INTERNAL_ERROR "Cannot create tree for %s.",
@@ -3082,6 +3085,7 @@ static int _tree_action(struct dev_manager *dm, const struct logical_volume *lv,
/* Some targets may build bigger tree for activation */
dm->activation = ((action == PRELOAD) || (action == ACTIVATE));
dm->suspend = (action == SUSPEND_WITH_LOCKFS) || (action == SUSPEND);
dm->track_external_lv_deps = 1;
if (!(dtree = _create_partial_dtree(dm, lv, laopts->origin_only)))
return_0;

View File

@@ -192,6 +192,9 @@ static int _get_env_vars(struct cmd_context *cmd)
}
}
if (strcmp((getenv("LVM_RUN_BY_DMEVENTD") ? : "0"), "1") == 0)
init_run_by_dmeventd(cmd);
return 1;
}
@@ -1755,6 +1758,15 @@ bad:
return 0;
}
int init_run_by_dmeventd(struct cmd_context *cmd)
{
init_dmeventd_monitor(DMEVENTD_MONITOR_IGNORE);
init_ignore_suspended_devices(1);
init_disable_dmeventd_monitoring(1); /* Lock settings */
return 0;
}
void destroy_config_context(struct cmd_context *cmd)
{
_destroy_config(cmd);

View File

@@ -89,6 +89,7 @@ struct cmd_context {
*/
const char *cmd_line;
const char *name; /* needed before cmd->command is set */
struct command_name *cname;
struct command *command;
char **argv;
struct arg_values *opt_arg_values;
@@ -241,6 +242,7 @@ int config_files_changed(struct cmd_context *cmd);
int init_lvmcache_orphans(struct cmd_context *cmd);
int init_filters(struct cmd_context *cmd, unsigned load_persistent_cache);
int init_connections(struct cmd_context *cmd);
int init_run_by_dmeventd(struct cmd_context *cmd);
/*
* A config context is a very light weight cmd struct that

View File

@@ -1221,14 +1221,15 @@ cfg_array(activation_read_only_volume_list_CFG, "read_only_volume_list", activat
"read_only_volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ]\n"
"#\n")
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
"This has been replaced by the activation/raid_region_size setting.\n",
"Size in KiB of each copy operation when mirroring.\n")
"Size in KiB of each raid or mirror synchronization region.\n")
cfg(activation_raid_region_size_CFG, "raid_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(2, 2, 99), NULL, 0, NULL,
"Size in KiB of each raid or mirror synchronization region.\n"
"For raid or mirror segment types, this is the amount of data that is\n"
"copied at once when initializing, or moved at once by pvmove.\n")
"The clean/dirty state of data is tracked for each region.\n"
"The value is rounded down to a power of two if necessary, and\n"
"is ignored if it is not a multiple of the machine memory page size.\n")
cfg(activation_error_when_full_CFG, "error_when_full", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ERROR_WHEN_FULL, vsn(2, 2, 115), NULL, 0, NULL,
"Return errors if a thin pool runs out of space.\n"
@@ -1859,6 +1860,12 @@ cfg(dmeventd_thin_library_CFG, "thin_library", dmeventd_CFG_SECTION, 0, CFG_TYPE
"and emits a warning through syslog when the usage exceeds 80%. The\n"
"warning is repeated when 85%, 90% and 95% of the pool is filled.\n")
cfg(dmeventd_thin_command_CFG, "thin_command", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_THIN_COMMAND, vsn(2, 2, 169), NULL, 0, NULL,
"The plugin runs command with each 5% increment when thin-pool data volume\n"
"or metadata volume gets above 50%.\n"
"Command which starts with 'lvm ' prefix is internal lvm command.\n"
"You can write your own handler to customise behaviour in more details.\n")
cfg(dmeventd_executable_CFG, "executable", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_PATH, vsn(2, 2, 73), "@DMEVENTD_PATH@", 0, NULL,
"The full path to the dmeventd binary.\n")

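A hedged example of how the new dmeventd setting would look in lvm.conf, using the shipped default (DEFAULT_DMEVENTD_THIN_COMMAND) as the value; section and key names are taken from the definition above:

    dmeventd {
        # Run with each 5% increment once thin data/metadata usage passes 50%.
        thin_command = "lvm lvextend --use-policies"
    }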
View File

@@ -81,6 +81,7 @@
#define DEFAULT_DMEVENTD_MIRROR_LIB "libdevmapper-event-lvm2mirror.so"
#define DEFAULT_DMEVENTD_SNAPSHOT_LIB "libdevmapper-event-lvm2snapshot.so"
#define DEFAULT_DMEVENTD_THIN_LIB "libdevmapper-event-lvm2thin.so"
#define DEFAULT_DMEVENTD_THIN_COMMAND "lvm lvextend --use-policies"
#define DEFAULT_DMEVENTD_MONITOR 1
#define DEFAULT_BACKGROUND_POLLING 1
@@ -176,7 +177,7 @@
#define DEFAULT_INDENT 1
#define DEFAULT_ABORT_ON_INTERNAL_ERRORS 0
#define DEFAULT_DETECT_INTERNAL_VG_CACHE_CORRUPTION 0
#define DEFAULT_UNITS "h"
#define DEFAULT_UNITS "r"
#define DEFAULT_SUFFIX 1
#define DEFAULT_HOSTTAGS 0

View File

@@ -125,6 +125,10 @@ struct dev_types *create_dev_types(const char *proc_dir,
if (!strncmp("emcpower", line + i, 8) && isspace(*(line + i + 8)))
dt->emcpower_major = line_maj;
/* Look for Veritas Dynamic Multipathing */
if (!strncmp("VxDMP", line + i, 5) && isspace(*(line + i + 5)))
dt->vxdmp_major = line_maj;
if (!strncmp("loop", line + i, 4) && isspace(*(line + i + 4)))
dt->loop_major = line_maj;
@@ -218,6 +222,9 @@ int dev_subsystem_part_major(struct dev_types *dt, struct device *dev)
if (MAJOR(dev->dev) == dt->power2_major)
return 1;
if (MAJOR(dev->dev) == dt->vxdmp_major)
return 1;
if ((MAJOR(dev->dev) == dt->blkext_major) &&
dev_get_primary_dev(dt, dev, &primary_dev) &&
(MAJOR(primary_dev) == dt->md_major))
@@ -246,6 +253,9 @@ const char *dev_subsystem_name(struct dev_types *dt, struct device *dev)
if (MAJOR(dev->dev) == dt->power2_major)
return "POWER2";
if (MAJOR(dev->dev) == dt->vxdmp_major)
return "VXDMP";
if (MAJOR(dev->dev) == dt->blkext_major)
return "BLKEXT";

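A rough, non-lvm2 sketch of the /proc/devices scan that create_dev_types() performs when it picks up block-driver majors such as the new "VxDMP" entry; purely illustrative (the real code does prefix matching on the raw line buffer):

    def find_block_major(driver_name, devices_path='/proc/devices'):
        # Scan the "Block devices:" section for an exact driver-name match.
        in_block = False
        with open(devices_path) as f:
            for line in f:
                line = line.strip()
                if line.startswith('Block devices'):
                    in_block = True
                    continue
                if not in_block or not line:
                    continue
                major, _, name = line.partition(' ')
                if name.strip() == driver_name:
                    return int(major)
        return -1

    # e.g. find_block_major('VxDMP') returns the major number when the Veritas
    # DMP driver is loaded, otherwise -1.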
View File

@@ -41,6 +41,7 @@ struct dev_types {
int drbd_major;
int device_mapper_major;
int emcpower_major;
int vxdmp_major;
int power2_major;
int dasd_major;
int loop_major;

View File

@@ -63,5 +63,6 @@ static const dev_known_type_t _dev_known_types[] = {
{"bcache", 1, "bcache block device cache"},
{"nvme", 64, "NVM Express"},
{"zvol", 16, "ZFS Zvols"},
{"VxDMP", 16, "Veritas Dynamic Multipathing"},
{"", 0, ""}
};

View File

@@ -398,7 +398,7 @@ int export_extents(struct disk_list *dl, uint32_t lv_num,
if (!(seg->segtype->flags & SEG_FORMAT1_SUPPORT)) {
log_error("Segment type %s in LV %s: "
"unsupported by format1",
seg->segtype->name, lv->name);
lvseg_name(seg), lv->name);
return 0;
}
if (seg_type(seg, s) != AREA_PV) {
@@ -510,7 +510,7 @@ int export_lvs(struct disk_list *dl, struct volume_group *vg,
goto_out;
dm_list_iterate_items(ll, &vg->lvs) {
if (ll->lv->status & SNAPSHOT)
if (lv_is_snapshot(ll->lv))
continue;
if (!(lvdl = dm_pool_alloc(dl->mem, sizeof(*lvdl))))

View File

@@ -56,7 +56,7 @@ static struct dm_hash_table *_create_lv_maps(struct dm_pool *mem,
}
dm_list_iterate_items(ll, &vg->lvs) {
if (ll->lv->status & SNAPSHOT)
if (lv_is_snapshot(ll->lv))
continue;
if (!(lvm = dm_pool_alloc(mem, sizeof(*lvm))))

View File

@@ -267,7 +267,7 @@ int import_pool_segments(struct dm_list *lvs, struct dm_pool *mem,
dm_list_iterate_items(lvl, lvs) {
lv = lvl->lv;
if (lv->status & SNAPSHOT)
if (lv_is_snapshot(lv))
continue;
for (i = 0; i < subpools; i++) {

View File

@@ -35,6 +35,7 @@ struct archive_params {
struct backup_params {
int enabled;
char *dir;
int suppress;
};
int archive_init(struct cmd_context *cmd, const char *dir,
@@ -235,7 +236,8 @@ static int _backup(struct volume_group *vg)
int backup_locally(struct volume_group *vg)
{
if (!vg->cmd->backup_params->enabled || !vg->cmd->backup_params->dir) {
log_warn("WARNING: This metadata update is NOT backed up");
log_warn_suppress(vg->cmd->backup_params->suppress++,
"WARNING: This metadata update is NOT backed up.");
return 1;
}

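The new suppress counter is passed post-incremented, so only the first call sees 0; presumably log_warn_suppress() downgrades the message once the counter is non-zero. That is an assumption about its semantics, sketched below with invented names:

    def warn_suppress(suppress_count, msg):
        # Assumed behaviour: warn the first time, log quietly afterwards.
        level = 'debug' if suppress_count else 'WARNING'
        print('%s: %s' % (level, msg))

    suppress = 0
    for _ in range(3):
        warn_suppress(suppress, 'This metadata update is NOT backed up.')
        suppress += 1
    # WARNING once, then debug twice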
View File

@@ -623,7 +623,7 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
break;
case AREA_LV:
/* FIXME This helper code should be target-independent! Check for metadata LV property. */
if (!(seg->status & RAID)) {
if (!seg_is_raid(seg)) {
outf(f, "\"%s\", %u%s",
seg_lv(seg, s)->name,
seg_le(seg, s),

View File

@@ -512,7 +512,7 @@ static int _read_segments(struct logical_volume *lv, const struct dm_config_node
count++;
}
/* FIXME Remove this restriction */
if ((lv->status & SNAPSHOT) && count > 1) {
if (lv_is_snapshot(lv) && count > 1) {
log_error("Only one segment permitted for snapshot");
return 0;
}

View File

@@ -2651,7 +2651,7 @@ int lockd_lv_uses_lock(struct logical_volume *lv)
if (lv_is_cow(lv))
return 0;
if (lv->status & SNAPSHOT)
if (lv_is_snapshot(lv))
return 0;
/* FIXME: lv_is_virtual_origin ? */

View File

@@ -322,12 +322,11 @@ int validate_lv_cache_create_origin(const struct logical_volume *origin_lv)
if (lv_is_cache_type(origin_lv) ||
lv_is_mirror_type(origin_lv) ||
lv_is_thin_volume(origin_lv) || lv_is_thin_pool_metadata(origin_lv) ||
lv_is_origin(origin_lv) || lv_is_merging_origin(origin_lv) ||
lv_is_merging_origin(origin_lv) ||
lv_is_cow(origin_lv) || lv_is_merging_cow(origin_lv) ||
lv_is_external_origin(origin_lv) ||
lv_is_virtual(origin_lv)) {
log_error("Cache is not supported with %s segment type of the original logical volume %s.",
first_seg(origin_lv)->segtype->name, display_lvname(origin_lv));
lvseg_name(first_seg(origin_lv)), display_lvname(origin_lv));
return 0;
}
@@ -383,7 +382,7 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
const struct logical_volume *lock_lv = lv_lock_holder(cache_lv);
struct lv_segment *cache_seg = first_seg(cache_lv);
struct lv_status_cache *status;
int cleaner_policy;
int cleaner_policy, writeback;
uint64_t dirty_blocks;
*is_clean = 0;
@@ -402,14 +401,11 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
cleaner_policy = !strcmp(status->cache->policy_name, "cleaner");
dirty_blocks = status->cache->dirty_blocks;
/* No clear policy and writeback mode means dirty */
if (!cleaner_policy &&
(status->cache->feature_flags & DM_CACHE_FEATURE_WRITEBACK))
dirty_blocks++;
writeback = (status->cache->feature_flags & DM_CACHE_FEATURE_WRITEBACK);
dm_pool_destroy(status->mem);
if (!dirty_blocks)
/* Only clear when policy is Clear or mode != writeback */
if (!dirty_blocks && (cleaner_policy || !writeback))
break;
log_print_unless_silent("Flushing " FMTu64 " blocks for cache %s.",
@@ -420,11 +416,23 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
continue;
}
if (!(cache_lv->status & LVM_WRITE)) {
log_warn("WARNING: Dirty blocks found on read-only cache volume %s.",
display_lvname(cache_lv));
/* TODO: can we actually clean something? */
}
/* Switch to cleaner policy to flush the cache */
cache_seg->cleaner_policy = 1;
/* Reaload kernel with "cleaner" policy */
/* Reload cache volume with "cleaner" policy */
if (!lv_update_and_reload_origin(cache_lv))
return_0;
if (!sync_local_dev_names(cache_lv->vg->cmd)) {
log_error("Failed to sync local devices when clearing cache volume %s.",
display_lvname(cache_lv));
return 0;
}
}
/*
@@ -434,6 +442,12 @@ int lv_cache_wait_for_clean(struct logical_volume *cache_lv, int *is_clean)
if (1) {
if (!lv_refresh_suspend_resume(lock_lv))
return_0;
if (!sync_local_dev_names(cache_lv->vg->cmd)) {
log_error("Failed to sync local devices after final clearing of cache %s.",
display_lvname(cache_lv));
return 0;
}
}
cache_seg->cleaner_policy = 0;
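The reworked loop only treats the cache as flushed when there are no dirty blocks and either the cleaner policy is already active or the cache is not in writeback mode. A small sketch of that exit test, with invented names, purely for illustration:

    def flush_finished(dirty_blocks, cleaner_policy, writeback):
        # Mirrors: if (!dirty_blocks && (cleaner_policy || !writeback)) break;
        return dirty_blocks == 0 and (cleaner_policy or not writeback)

    assert flush_finished(0, False, False)      # writethrough and clean
    assert not flush_finished(0, False, True)   # writeback: wait for cleaner policy
    assert flush_finished(0, True, True)        # cleaner policy active and clean
    assert not flush_finished(5, True, True)    # still dirty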
@@ -472,7 +486,7 @@ int lv_cache_remove(struct logical_volume *cache_lv)
}
/* Locally active volume is needed for writeback */
if (!lv_is_active_locally(cache_lv)) {
if (!lv_info(cache_lv->vg->cmd, cache_lv, 1, NULL, 0, 0)) {
/* Give up any remote locks */
if (!deactivate_lv(cache_lv->vg->cmd, cache_lv)) {
log_error("Cannot deactivate remotely active cache volume %s.",

View File

@@ -971,7 +971,7 @@ int lv_mirror_image_in_sync(const struct logical_volume *lv)
struct lv_segment *seg = first_seg(lv);
struct lv_segment *mirror_seg;
if (!(lv->status & MIRROR_IMAGE) || !seg ||
if (!lv_is_mirror_image(lv) || !seg ||
!(mirror_seg = find_mirror_seg(seg))) {
log_error(INTERNAL_ERROR "Cannot find mirror segment.");
return 0;
@@ -1551,14 +1551,19 @@ const struct logical_volume *lv_lock_holder(const struct logical_volume *lv)
if (lv_is_cow(lv))
return lv_lock_holder(origin_from_cow(lv));
if (lv_is_thin_pool(lv)) {
/* Find any active LV from the pool */
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (lv_is_active(sl->seg->lv)) {
log_debug_activation("Thin volume %s is active.",
display_lvname(lv));
return sl->seg->lv;
}
if (lv_is_thin_pool(lv) ||
lv_is_external_origin(lv)) {
/* FIXME: Ensure cluster keeps thin-pool active exclusively.
* External origin can be activated on more nodes (depends on type).
*/
if (!lv_is_active(lv))
/* Find any active LV from the pool or external origin */
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (lv_is_active(sl->seg->lv)) {
log_debug_activation("Thin volume %s is active.",
display_lvname(lv));
return sl->seg->lv;
}
return lv;
}
@@ -1573,9 +1578,6 @@ const struct logical_volume *lv_lock_holder(const struct logical_volume *lv)
lv_is_thin_volume(sl->seg->lv) &&
first_seg(lv)->pool_lv == sl->seg->pool_lv)
continue; /* Skip thin snapshot */
if (lv_is_external_origin(lv) &&
lv_is_thin_volume(sl->seg->lv))
continue; /* Skip external origin */
if (lv_is_pending_delete(sl->seg->lv))
continue; /* Skip deleted LVs */
return lv_lock_holder(sl->seg->lv);

View File

@@ -712,6 +712,7 @@ static int _round_down_pow2(int r)
int get_default_region_size(struct cmd_context *cmd)
{
int pagesize = lvm_getpagesize();
int region_size = _get_default_region_size(cmd);
if (!is_power_of_2(region_size)) {
@@ -720,6 +721,12 @@ int get_default_region_size(struct cmd_context *cmd)
region_size / 2);
}
if (region_size % (pagesize >> SECTOR_SHIFT)) {
region_size = DEFAULT_RAID_REGION_SIZE * 2;
log_verbose("Using default region size %u kiB (multiple of page size).",
region_size / 2);
}
return region_size;
}
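For reference, a small sketch of the two checks in get_default_region_size() above, with region sizes in 512-byte sectors (so the KiB values in the log messages are region_size / 2) and the page size converted to sectors via SECTOR_SHIFT; the values are illustrative only:

    SECTOR_SHIFT = 9                      # 512-byte sectors, hence KiB = size / 2

    def region_size_ok(region_size_sectors, pagesize_bytes=4096):
        power_of_two = (region_size_sectors & (region_size_sectors - 1)) == 0
        page_sectors = pagesize_bytes >> SECTOR_SHIFT
        return power_of_two and region_size_sectors % page_sectors == 0

    print(region_size_ok(1024))           # True: 512 KiB region, 4 KiB pages
    print(region_size_ok(8, 65536))       # False: 4 KiB region on a 64 KiB page system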
@@ -1138,7 +1145,8 @@ int set_lv_segment_area_lv(struct lv_segment *seg, uint32_t area_num,
display_lvname(seg->lv), seg->le, area_num,
display_lvname(lv), le);
if (status & RAID_META) {
lv->status |= status;
if (lv_is_raid_metadata(lv)) {
seg->meta_areas[area_num].type = AREA_LV;
seg_metalv(seg, area_num) = lv;
if (le) {
@@ -1151,7 +1159,6 @@ int set_lv_segment_area_lv(struct lv_segment *seg, uint32_t area_num,
seg_lv(seg, area_num) = lv;
seg_le(seg, area_num) = le;
}
lv->status |= status;
if (!add_seg_to_segs_using_this_lv(lv, seg))
return_0;
@@ -1263,6 +1270,7 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
uint32_t count = extents;
uint32_t reduction;
struct logical_volume *pool_lv;
struct logical_volume *external_lv = NULL;
if (lv_is_merging_origin(lv)) {
log_debug_metadata("Dropping snapshot merge of %s to removed origin %s.",
@@ -1274,6 +1282,9 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
if (!count)
break;
if (seg->external_lv)
external_lv = seg->external_lv;
if (seg->len <= count) {
if (seg->merge_lv) {
log_debug_metadata("Dropping snapshot merge of removed %s to origin %s.",
@@ -1340,6 +1351,12 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
!lv->vg->fid->fmt->ops->lv_setup(lv->vg->fid, lv))
return_0;
/* Removal of last user enforces refresh */
if (external_lv && !lv_is_external_origin(external_lv) &&
lv_is_active(external_lv) &&
!lv_update_and_reload(external_lv))
return_0;
return 1;
}
@@ -1408,35 +1425,19 @@ static int _lv_refresh_suspend_resume(const struct logical_volume *lv)
int lv_refresh_suspend_resume(const struct logical_volume *lv)
{
/*
* FIXME:
*
* in case of RAID, refresh the SubLVs before
* refreshing the top-level one in order to cope
* with transient failures of SubLVs.
*/
if (lv_is_raid(lv)) {
if (vg_is_clustered(lv->vg) &&
lv_is_active_remotely(lv)) {
if (!_lv_refresh_suspend_resume(lv))
return 0;
} else {
uint32_t s;
struct lv_segment *seg = first_seg(lv);
if (!_lv_refresh_suspend_resume(lv))
return 0;
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_LV &&
!_lv_refresh_suspend_resume(seg_lv(seg, s)))
return 0;
if (seg->meta_areas &&
seg_metatype(seg, s) == AREA_LV &&
!_lv_refresh_suspend_resume(seg_metalv(seg, s)))
return 0;
}
}
/*
* Remove any transiently activated error
* devices which aren't used any more.
*/
if (lv_is_raid(lv) && !lv_deactivate_any_missing_subdevs(lv)) {
log_error("Failed to remove temporary SubLVs from %s", display_lvname(lv));
return 0;
}
return _lv_refresh_suspend_resume(lv);
return 1;
}
/*
@@ -3472,7 +3473,7 @@ int lv_add_segment(struct alloc_handle *ah,
region_size))
return_0;
if ((segtype->flags & SEG_CAN_SPLIT) && !lv_merge_segments(lv)) {
if (segtype_can_split(segtype) && !lv_merge_segments(lv)) {
log_error("Couldn't merge segments after extending "
"logical volume.");
return 0;
@@ -4601,7 +4602,7 @@ static int _lvresize_adjust_policy(const struct logical_volume *lv,
if (!policy_amount) {
log_error("Can't extend %s with %s autoextend percent set to 0%%.",
display_lvname(lv), first_seg(lv)->segtype->name);
display_lvname(lv), lvseg_name(first_seg(lv)));
return 0;
}
@@ -4677,8 +4678,7 @@ static int _lvresize_check(struct logical_volume *lv,
if (lv_is_raid_image(lv) || lv_is_raid_metadata(lv)) {
log_error("Cannot resize a RAID %s directly",
(lv->status & RAID_IMAGE) ? "image" :
"metadata area");
lv_is_raid_image(lv) ? "image" : "metadata area");
return 0;
}
@@ -4693,6 +4693,11 @@ static int _lvresize_check(struct logical_volume *lv,
return 0;
}
if (lv_is_cache_type(lv)) {
log_error("Unable to resize logical volumes of cache type.");
return 0;
}
if (!lv_is_visible(lv) &&
!lv_is_thin_pool_metadata(lv) &&
!lv_is_lockd_sanlock_lv(lv)) {
@@ -6258,12 +6263,12 @@ static int _lv_update_and_reload(struct logical_volume *lv, int origin_only)
int lv_update_and_reload(struct logical_volume *lv)
{
return _lv_update_and_reload(lv, 0);
return _lv_update_and_reload(lv, 0);
}
int lv_update_and_reload_origin(struct logical_volume *lv)
{
return _lv_update_and_reload(lv, 1);
return _lv_update_and_reload(lv, 1);
}
/*
@@ -6534,12 +6539,43 @@ int remove_layer_from_lv(struct logical_volume *lv,
* Before removal, the layer should be cleaned up,
* i.e. additional segments and areas should have been removed.
*/
if (dm_list_size(&parent_lv->segments) != 1 ||
parent_seg->area_count != 1 ||
seg_type(parent_seg, 0) != AREA_LV ||
layer_lv != seg_lv(parent_seg, 0) ||
parent_lv->le_count != layer_lv->le_count)
return_0;
/* FIXME:
* These are all INTERNAL_ERROR, but ATM there is
* some internal API problem and this code is wrongly
* executed with certain mirror manipulations.
* So we need to fix mirror code first, then switch...
*/
if (dm_list_size(&parent_lv->segments) != 1) {
log_error("Invalid %d segments in %s, expected only 1.",
dm_list_size(&parent_lv->segments),
display_lvname(parent_lv));
return 0;
}
if (parent_seg->area_count != 1) {
log_error("Invalid %d area count(s) in %s, expected only 1.",
parent_seg->area_count, display_lvname(parent_lv));
return 0;
}
if (seg_type(parent_seg, 0) != AREA_LV) {
log_error("Invalid seg_type %d in %s, expected LV.",
seg_type(parent_seg, 0), display_lvname(parent_lv));
return 0;
}
if (layer_lv != seg_lv(parent_seg, 0)) {
log_error("Layer doesn't match segment in %s.",
display_lvname(parent_lv));
return 0;
}
if (parent_lv->le_count != layer_lv->le_count) {
log_error("Inconsistent extent count (%u != %u) of layer %s.",
parent_lv->le_count, layer_lv->le_count,
display_lvname(parent_lv));
return 0;
}
if (!lv_empty(parent_lv))
return_0;
@@ -7054,7 +7090,7 @@ static int _should_wipe_lv(struct lvcreate_params *lp,
struct logical_volume *lv, int warn)
{
/* Unzeroable segment */
if (first_seg(lv)->segtype->flags & SEG_CANNOT_BE_ZEROED)
if (seg_cannot_be_zeroed(first_seg(lv)))
return 0;
/* Thin snapshot need not to be zeroed */
@@ -7418,7 +7454,7 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
return NULL;
}
if (lv_is_cache_type(origin_lv)) {
if (lv_is_cache_type(origin_lv) && !lv_is_cache(origin_lv)) {
log_error("Snapshots of cache type volume %s "
"is not supported.", display_lvname(origin_lv));
return NULL;

View File

@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -71,6 +71,13 @@ int lv_merge_segments(struct logical_volume *lv)
if (error_count++ > ERROR_MAX) \
goto out
#define seg_error(msg) { \
log_error("LV %s, segment %u invalid: %s for %s segment.", \
seg->lv->name, seg_count, (msg), lvseg_name(seg)); \
if ((*error_count)++ > ERROR_MAX) \
return; \
}
/*
* RAID segment property checks.
*
@@ -141,7 +148,15 @@ static void _check_raid1_seg(struct lv_segment *seg, int *error_count)
static void _check_raid45610_seg(struct lv_segment *seg, int *error_count)
{
/* Checks applying to any raid4/5/6/10 */
if (!seg->meta_areas)
/*
* Allow raid4 + raid5_n to get activated w/o metadata.
*
* This is mandatory during conversion between them,
* because switching the dedicated parity SubLVs
* beginning <-> end changes the roles of all SubLVs
* which the kernel would reject.
*/
if (!(seg_is_raid4(seg) || seg_is_raid5_n(seg)) && !seg->meta_areas)
raid_seg_error("no meta areas");
if (!seg->stripe_size)
raid_seg_error("zero stripe size");
@@ -188,44 +203,10 @@ static void _check_non_raid_seg_members(struct lv_segment *seg, int *error_count
{
if (seg->origin) /* snap and thin */
raid_seg_error("non-zero origin LV");
if (seg->indirect_origin) /* thin */
raid_seg_error("non-zero indirect_origin LV");
if (seg->merge_lv) /* thin */
raid_seg_error("non-zero merge LV");
if (seg->cow) /* snap */
raid_seg_error("non-zero cow LV");
if (!dm_list_empty(&seg->origin_list)) /* snap */
raid_seg_error("non-zero origin_list");
if (seg->log_lv)
raid_seg_error("non-zero log LV");
if (seg->segtype_private)
raid_seg_error("non-zero segtype_private");
/* thin members */
if (seg->metadata_lv)
raid_seg_error("non-zero metadata LV");
if (seg->transaction_id)
raid_seg_error("non-zero transaction_id");
if (seg->zero_new_blocks)
raid_seg_error("non-zero zero_new_blocks");
if (seg->discards)
raid_seg_error("non-zero discards");
if (!dm_list_empty(&seg->thin_messages))
raid_seg_error("non-zero thin_messages list");
if (seg->external_lv)
raid_seg_error("non-zero external LV");
if (seg->pool_lv)
raid_seg_error("non-zero pool LV");
if (seg->device_id)
raid_seg_error("non-zero device_id");
/* cache members */
if (seg->cache_mode)
raid_seg_error("non-zero cache_mode");
if (seg->policy_name)
raid_seg_error("non-zero policy_name");
if (seg->policy_settings)
raid_seg_error("non-zero policy_settings");
if (seg->cleaner_policy)
raid_seg_error("non-zero cleaner_policy");
/* replicator members (deprecated) */
if (seg->replicator)
raid_seg_error("non-zero replicator");
@@ -249,9 +230,6 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
uint32_t area_len, s;
/* General checks applying to all RAIDs */
if (!seg_is_raid(seg))
raid_seg_error("erroneous RAID check");
if (!seg->area_count)
raid_seg_error("zero area count");
@@ -276,9 +254,6 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
return;
}
if (seg->chunk_size)
raid_seg_error_val("non-zero chunk_size", seg->chunk_size);
/* FIXME: should we check any non-RAID segment struct members at all? */
_check_non_raid_seg_members(seg, error_count);
@@ -329,6 +304,169 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
}
/* END: RAID segment property checks. */
static void _check_lv_segment(struct logical_volume *lv, struct lv_segment *seg,
unsigned seg_count, int *error_count)
{
struct lv_segment *seg2;
if (lv_is_mirror_image(lv) &&
(!(seg2 = find_mirror_seg(seg)) || !seg_is_mirrored(seg2)))
seg_error("mirror image is not mirrored");
if (seg_is_cache(seg)) {
if (!lv_is_cache(lv))
seg_error("is not flagged as cache LV");
if (!seg->pool_lv) {
seg_error("is missing cache pool LV");
} else if (!lv_is_cache_pool(seg->pool_lv))
seg_error("is not referencing cache pool LV");
} else { /* !cache */
if (seg->cleaner_policy)
seg_error("sets cleaner_policy");
}
if (seg_is_cache_pool(seg)) {
if (!dm_list_empty(&seg->lv->segs_using_this_lv)) {
switch (seg->cache_mode) {
case CACHE_MODE_WRITETHROUGH:
case CACHE_MODE_WRITEBACK:
case CACHE_MODE_PASSTHROUGH:
break;
default:
seg_error("has invalid cache's feature flag")
}
if (!seg->policy_name)
seg_error("is missing cache policy name");
}
} else { /* !cache_pool */
if (seg->cache_mode)
seg_error("sets cache mode");
if (seg->policy_name)
seg_error("sets policy name");
if (seg->policy_settings)
seg_error("sets policy settings");
}
if (!seg_can_error_when_full(seg) && lv_is_error_when_full(lv))
seg_error("does not support flag ERROR_WHEN_FULL.");
if (seg_is_mirrored(seg)) {
/* Check mirror log - which is attached to the mirrored seg */
if (seg->log_lv) {
if (!lv_is_mirror_log(seg->log_lv))
seg_error("log LV is not a mirror log");
if (!(seg2 = first_seg(seg->log_lv)) || (find_mirror_seg(seg2) != seg))
seg_error("log LV does not point back to mirror segment");
}
} else { /* !mirrored */
if (seg->log_lv) {
if (lv_is_raid_image(lv))
seg_error("log LV is not a mirror log or a RAID image");
}
}
if (seg_is_raid(seg))
_check_raid_seg(seg, error_count);
if (seg_is_pool(seg)) {
if ((seg->area_count != 1) || (seg_type(seg, 0) != AREA_LV)) {
seg_error("is missing a pool data LV");
} else if (!(seg2 = first_seg(seg_lv(seg, 0))) || (find_pool_seg(seg2) != seg))
seg_error("data LV does not refer back to pool LV");
if (!seg->metadata_lv) {
seg_error("is missing a pool metadata LV");
} else if (!(seg2 = first_seg(seg->metadata_lv)) || (find_pool_seg(seg2) != seg))
seg_error("metadata LV does not refer back to pool LV");
if (!validate_pool_chunk_size(lv->vg->cmd, seg->segtype, seg->chunk_size))
seg_error("has invalid chunk size.");
} else { /* !thin_pool && !cache_pool */
if (seg->metadata_lv)
seg_error("must not have pool metadata LV set");
}
if (seg_is_thin_pool(seg)) {
if (!lv_is_thin_pool(lv))
seg_error("is not flagged as thin pool LV");
if (lv_is_thin_volume(lv))
seg_error("is a thin volume that must not contain thin pool segment");
} else { /* !thin_pool */
if (seg->zero_new_blocks)
seg_error("sets zero_new_blocks");
if (seg->discards)
seg_error("sets discards");
if (!dm_list_empty(&seg->thin_messages))
seg_error("sets thin_messages list");
}
if (seg_is_thin_volume(seg)) {
if (!lv_is_thin_volume(lv))
seg_error("is not flagged as thin volume LV");
if (lv_is_thin_pool(lv))
seg_error("is a thin pool that must not contain thin volume segment");
if (!seg->pool_lv) {
seg_error("is missing thin pool LV");
} else if (!lv_is_thin_pool(seg->pool_lv))
seg_error("is not referencing thin pool LV");
if (seg->device_id > DM_THIN_MAX_DEVICE_ID)
seg_error("has too large device id");
if (seg->external_lv &&
!lv_is_external_origin(seg->external_lv))
seg_error("external LV is not flagged as an external origin LV");
if (seg->merge_lv) {
if (!lv_is_thin_volume(seg->merge_lv))
seg_error("merge LV is not flagged as a thin LV");
if (!lv_is_merging_origin(seg->merge_lv))
seg_error("merge LV is not flagged as merging");
}
} else { /* !thin */
if (seg->device_id)
seg_error("sets device_id");
if (seg->external_lv)
seg_error("sets external LV");
if (seg->merge_lv)
seg_error("sets merge LV");
if (seg->indirect_origin)
seg_error("sets indirect_origin LV");
}
/* Some multi-seg vars excluded here */
if (!seg_is_cache(seg) &&
!seg_is_thin_volume(seg)) {
if (seg->pool_lv)
seg_error("sets pool LV");
}
if (!seg_is_pool(seg) &&
/* FIXME: format_pool/import_export.c _add_linear_seg() sets chunk_size */
!seg_is_linear(seg) &&
!seg_is_snapshot(seg)) {
if (seg->chunk_size)
seg_error("sets chunk_size");
}
if (!seg_is_thin_pool(seg) &&
!seg_is_thin_volume(seg)) {
if (seg->transaction_id)
seg_error("sets transaction_id");
}
if (!seg_unknown(seg)) {
if (seg->segtype_private)
seg_error("set segtype_private");
}
}
/*
* Verify that an LV's segments are consecutive, complete and don't overlap.
*/
@@ -336,7 +474,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
{
struct lv_segment *seg, *seg2;
uint32_t le = 0;
unsigned seg_count = 0, seg_found;
unsigned seg_count = 0, seg_found, external_lv_found = 0;
uint32_t area_multiplier, s;
struct seg_list *sl;
struct glv_list *glvl;
@@ -344,68 +482,15 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
struct replicator_site *rsite;
struct replicator_device *rdev;
/* Check LV flags match first segment type */
if (complete_vg) {
if (lv_is_thin_volume(lv)) {
if (dm_list_size(&lv->segments) != 1) {
log_error("LV %s is thin volume without exactly one segment.",
lv->name);
inc_error_count;
} else if (!seg_is_thin_volume(first_seg(lv))) {
log_error("LV %s is thin volume without first thin volume segment.",
lv->name);
inc_error_count;
}
}
if (lv_is_thin_pool(lv)) {
if (dm_list_size(&lv->segments) != 1) {
log_error("LV %s is thin pool volume without exactly one segment.",
lv->name);
inc_error_count;
} else if (!seg_is_thin_pool(first_seg(lv))) {
log_error("LV %s is thin pool without first thin pool segment.",
lv->name);
inc_error_count;
}
}
if (lv_is_pool_data(lv) &&
(!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->area_count != 1 || seg_type(seg2, 0) != AREA_LV ||
seg_lv(seg2, 0) != lv)) {
log_error("LV %s: segment 1 pool data LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_pool_metadata(lv)) {
if (!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->metadata_lv != lv) {
log_error("LV %s: segment 1 pool metadata LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_thin_pool_metadata(lv) &&
!strstr(lv->name, "_tmeta")) {
log_error("LV %s: thin pool metadata LV does not use _tmeta",
lv->name);
inc_error_count;
} else if (lv_is_cache_pool_metadata(lv) &&
!strstr(lv->name, "_cmeta")) {
log_error("LV %s: cache pool metadata LV does not use _cmeta",
lv->name);
inc_error_count;
}
}
}
dm_list_iterate_items(seg, &lv->segments) {
seg_count++;
if (complete_vg && seg_is_raid(seg))
_check_raid_seg(seg, &error_count);
if (seg->lv != lv) {
log_error("LV %s invalid: segment %u is referencing different LV.",
lv->name, seg_count);
inc_error_count;
}
if (seg->le != le) {
log_error("LV %s invalid: segment %u should begin at "
"LE %" PRIu32 " (found %" PRIu32 ").",
@@ -423,186 +508,6 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
inc_error_count;
}
if (lv_is_error_when_full(lv) &&
!seg_can_error_when_full(seg)) {
log_error("LV %s: segment %u (%s) does not support flag "
"ERROR_WHEN_FULL.", lv->name, seg_count, seg->segtype->name);
inc_error_count;
}
if (complete_vg && seg->log_lv &&
!seg_is_mirrored(seg) && !(seg->status & RAID_IMAGE)) {
log_error("LV %s: segment %u log LV %s is not a "
"mirror log or a RAID image",
lv->name, seg_count, seg->log_lv->name);
inc_error_count;
}
/*
* Check mirror log - which is attached to the mirrored seg
*/
if (complete_vg && seg->log_lv && seg_is_mirrored(seg)) {
if (!lv_is_mirror_log(seg->log_lv)) {
log_error("LV %s: segment %u log LV %s is not "
"a mirror log",
lv->name, seg_count, seg->log_lv->name);
inc_error_count;
}
if (!(seg2 = first_seg(seg->log_lv)) ||
find_mirror_seg(seg2) != seg) {
log_error("LV %s: segment %u log LV does not "
"point back to mirror segment",
lv->name, seg_count);
inc_error_count;
}
}
if (complete_vg && seg->status & MIRROR_IMAGE) {
if (!find_mirror_seg(seg) ||
!seg_is_mirrored(find_mirror_seg(seg))) {
log_error("LV %s: segment %u mirror image "
"is not mirrored",
lv->name, seg_count);
inc_error_count;
}
}
/* Check the various thin segment types */
if (complete_vg) {
if (seg_is_thin_pool(seg)) {
if (!lv_is_thin_pool(lv)) {
log_error("LV %s is missing thin pool flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (lv_is_thin_volume(lv)) {
log_error("LV %s is a thin volume that must not contain thin pool segment %u",
lv->name, seg_count);
inc_error_count;
}
}
if (seg_is_cache_pool(seg) &&
!dm_list_empty(&seg->lv->segs_using_this_lv)) {
switch (seg->cache_mode) {
case CACHE_MODE_WRITETHROUGH:
case CACHE_MODE_WRITEBACK:
case CACHE_MODE_PASSTHROUGH:
break;
default:
log_error("LV %s has invalid cache's feature flag.",
lv->name);
inc_error_count;
}
if (!seg->policy_name) {
log_error("LV %s is missing cache policy name.", lv->name);
inc_error_count;
}
}
if (seg_is_pool(seg)) {
if (seg->area_count != 1 ||
seg_type(seg, 0) != AREA_LV) {
log_error("LV %s: %s segment %u is missing a pool data LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg_lv(seg, 0))) || find_pool_seg(seg2) != seg) {
log_error("LV %s: %s segment %u data LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
if (!seg->metadata_lv) {
log_error("LV %s: %s segment %u is missing a pool metadata LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg->metadata_lv)) ||
find_pool_seg(seg2) != seg) {
log_error("LV %s: %s segment %u metadata LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
if (!validate_pool_chunk_size(lv->vg->cmd, seg->segtype, seg->chunk_size)) {
log_error("LV %s: %s segment %u has invalid chunk size %u.",
lv->name, seg->segtype->name, seg_count, seg->chunk_size);
inc_error_count;
}
} else {
if (seg->metadata_lv) {
log_error("LV %s: segment %u must not have pool metadata LV set",
lv->name, seg_count);
inc_error_count;
}
}
if (seg_is_thin_volume(seg)) {
if (!lv_is_thin_volume(lv)) {
log_error("LV %s is missing thin volume flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (lv_is_thin_pool(lv)) {
log_error("LV %s is a thin pool that must not contain thin volume segment %u",
lv->name, seg_count);
inc_error_count;
}
if (!seg->pool_lv) {
log_error("LV %s: segment %u is missing thin pool LV",
lv->name, seg_count);
inc_error_count;
} else if (!lv_is_thin_pool(seg->pool_lv)) {
log_error("LV %s: thin volume segment %u pool LV is not flagged as a pool LV",
lv->name, seg_count);
inc_error_count;
}
if (seg->device_id > DM_THIN_MAX_DEVICE_ID) {
log_error("LV %s: thin volume segment %u has too large device id %u",
lv->name, seg_count, seg->device_id);
inc_error_count;
}
if (seg->external_lv && (seg->external_lv->status & LVM_WRITE)) {
log_error("LV %s: external origin %s is writable.",
lv->name, seg->external_lv->name);
inc_error_count;
}
if (seg->merge_lv) {
if (!lv_is_thin_volume(seg->merge_lv)) {
log_error("LV %s: thin volume segment %u merging LV %s is not flagged as a thin LV",
lv->name, seg_count, seg->merge_lv->name);
inc_error_count;
}
if (!lv_is_merging_origin(seg->merge_lv)) {
log_error("LV %s: merging LV %s is not flagged as merging.",
lv->name, seg->merge_lv->name);
inc_error_count;
}
}
} else if (seg_is_cache(seg)) {
if (!lv_is_cache(lv)) {
log_error("LV %s is missing cache flag for segment %u",
lv->name, seg_count);
inc_error_count;
}
if (!seg->pool_lv) {
log_error("LV %s: segment %u is missing cache_pool LV",
lv->name, seg_count);
inc_error_count;
}
} else {
if (seg->pool_lv) {
log_error("LV %s: segment %u must not have pool LV set",
lv->name, seg_count);
inc_error_count;
}
}
}
if (seg_is_snapshot(seg)) {
if (seg->cow && seg->cow == seg->origin) {
log_error("LV %s: segment %u has same LV %s for "
@@ -615,6 +520,9 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
if (seg_is_replicator(seg) && !check_replicator_segment(seg))
inc_error_count;
if (complete_vg)
_check_lv_segment(lv, seg, seg_count, &error_count);
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_UNASSIGNED) {
log_error("LV %s: segment %u has unassigned "
@@ -696,6 +604,12 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
le += seg->len;
}
if (le != lv->le_count) {
log_error("LV %s: inconsistent LE count %u != %u",
lv->name, le, lv->le_count);
inc_error_count;
}
dm_list_iterate_items(sl, &lv->segs_using_this_lv) {
seg = sl->seg;
seg_found = 0;
@@ -704,7 +618,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
continue;
if (lv == seg_lv(seg, s))
seg_found++;
if (seg_is_raid_with_meta(seg) && (lv == seg_metalv(seg, s)))
if (seg->meta_areas && seg_is_raid_with_meta(seg) && (lv == seg_metalv(seg, s)))
seg_found++;
}
if (seg_is_replicator_dev(seg)) {
@@ -756,6 +670,10 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
lv->name);
inc_error_count;
}
/* Validation of external origin counter */
if (seg->external_lv == lv)
external_lv_found++;
}
dm_list_iterate_items(glvl, &lv->indirect_glvs) {
@@ -777,10 +695,51 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
}
}
if (le != lv->le_count) {
log_error("LV %s: inconsistent LE count %u != %u",
lv->name, le, lv->le_count);
inc_error_count;
/* Check LV flags match first segment type */
if (complete_vg) {
if ((seg_count != 1) &&
(lv_is_cache(lv) ||
lv_is_cache_pool(lv) ||
lv_is_raid(lv) ||
lv_is_snapshot(lv) ||
lv_is_thin_pool(lv) ||
lv_is_thin_volume(lv))) {
log_error("LV %s must have exactly one segment.",
lv->name);
inc_error_count;
}
if (lv_is_pool_data(lv) &&
(!(seg2 = first_seg(lv)) || !(seg2 = find_pool_seg(seg2)) ||
seg2->area_count != 1 || seg_type(seg2, 0) != AREA_LV ||
seg_lv(seg2, 0) != lv)) {
log_error("LV %s: segment 1 pool data LV does not point back to same LV",
lv->name);
inc_error_count;
}
if (lv_is_thin_pool_metadata(lv) && !strstr(lv->name, "_tmeta")) {
log_error("LV %s: thin pool metadata LV does not use _tmeta.",
lv->name);
inc_error_count;
} else if (lv_is_cache_pool_metadata(lv) && !strstr(lv->name, "_cmeta")) {
log_error("LV %s: cache pool metadata LV does not use _cmeta.",
lv->name);
inc_error_count;
}
if (lv_is_external_origin(lv)) {
if (lv->external_count != external_lv_found) {
log_error("LV %s: external origin count does not match.",
lv->name);
inc_error_count;
}
if (lv->status & LVM_WRITE) {
log_error("LV %s: external origin can't be writable.",
lv->name);
inc_error_count;
}
}
}
out:

View File

@@ -198,7 +198,7 @@
#define lv_is_partial(lv) (((lv)->status & PARTIAL_LV) ? 1 : 0)
#define lv_is_virtual(lv) (((lv)->status & VIRTUAL) ? 1 : 0)
#define lv_is_merging(lv) (((lv)->status & MERGING) ? 1 : 0)
#define lv_is_merging_origin(lv) (lv_is_merging(lv))
#define lv_is_merging_origin(lv) (lv_is_merging(lv) && (lv)->snapshot)
#define lv_is_snapshot(lv) (((lv)->status & SNAPSHOT) ? 1 : 0)
#define lv_is_converting(lv) (((lv)->status & CONVERTING) ? 1 : 0)
#define lv_is_external_origin(lv) (((lv)->external_count > 0) ? 1 : 0)
@@ -226,6 +226,7 @@
#define lv_is_raid(lv) (((lv)->status & RAID) ? 1 : 0)
#define lv_is_raid_image(lv) (((lv)->status & RAID_IMAGE) ? 1 : 0)
#define lv_is_raid_image_with_tracking(lv) ((lv_is_raid_image(lv) && !((lv)->status & LVM_WRITE)) ? 1 : 0)
#define lv_is_raid_metadata(lv) (((lv)->status & RAID_META) ? 1 : 0)
#define lv_is_raid_type(lv) (((lv)->status & (RAID | RAID_IMAGE | RAID_META)) ? 1 : 0)
@@ -1223,6 +1224,8 @@ int lv_raid_replace(struct logical_volume *lv, int force,
struct dm_list *remove_pvs, struct dm_list *allocate_pvs);
int lv_raid_remove_missing(struct logical_volume *lv);
int partial_raid_lv_supports_degraded_activation(const struct logical_volume *lv);
int lv_raid_change_region_size(struct logical_volume *lv,
int yes, int force, uint32_t new_region_size);
/* -- metadata/raid_manip.c */
/* ++ metadata/cache_manip.c */

View File

@@ -5447,6 +5447,19 @@ int vg_flag_write_locked(struct volume_group *vg)
return 0;
}
static int _access_vg_clustered(struct cmd_context *cmd, const struct volume_group *vg)
{
if (vg_is_clustered(vg) && !locking_is_clustered()) {
if (!cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
return 0;
}
return 1;
}
/*
* Performs a set of checks against a VG according to bits set in status
* and returns FAILED_* bits for those that aren't acceptable.
@@ -5458,15 +5471,9 @@ static uint32_t _vg_bad_status_bits(const struct volume_group *vg,
{
uint32_t failure = 0;
if ((status & CLUSTERED) &&
(vg_is_clustered(vg)) && !locking_is_clustered()) {
if (!vg->cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
if ((status & CLUSTERED) && !_access_vg_clustered(vg->cmd, vg))
/* Return because other flags are considered undefined. */
return FAILED_CLUSTERED;
}
if ((status & EXPORTED_VG) &&
vg_is_exported(vg)) {
@@ -5555,19 +5562,6 @@ static int _allow_extra_system_id(struct cmd_context *cmd, const char *system_id
return 0;
}
static int _access_vg_clustered(struct cmd_context *cmd, struct volume_group *vg)
{
if (vg_is_clustered(vg) && !locking_is_clustered()) {
if (!cmd->ignore_clustered_vgs)
log_error("Skipping clustered volume group %s", vg->name);
else
log_verbose("Skipping clustered volume group %s", vg->name);
return 0;
}
return 1;
}
static int _access_vg_lock_type(struct cmd_context *cmd, struct volume_group *vg,
uint32_t lockd_state, uint32_t *failure)
{
@@ -5635,6 +5629,11 @@ static int _access_vg_lock_type(struct cmd_context *cmd, struct volume_group *vg
}
}
if (test_mode()) {
log_error("Test mode is not yet supported with lock type %s.", vg->lock_type);
return 0;
}
return 1;
}

File diff suppressed because it is too large


@@ -132,6 +132,10 @@ struct dev_manager;
#define segtype_is_raid6_nr(segtype) ((segtype)->flags & SEG_RAID6_NR ? 1 : 0)
#define segtype_is_raid6_n_6(segtype) ((segtype)->flags & SEG_RAID6_N_6 ? 1 : 0)
#define segtype_is_raid6_zr(segtype) ((segtype)->flags & SEG_RAID6_ZR ? 1 : 0)
#define segtype_is_raid6_ls_6(segtype) ((segtype)->flags & SEG_RAID6_LS_6 ? 1 : 0)
#define segtype_is_raid6_rs_6(segtype) ((segtype)->flags & SEG_RAID6_RS_6 ? 1 : 0)
#define segtype_is_raid6_la_6(segtype) ((segtype)->flags & SEG_RAID6_LA_6 ? 1 : 0)
#define segtype_is_raid6_ra_6(segtype) ((segtype)->flags & SEG_RAID6_RA_6 ? 1 : 0)
#define segtype_is_any_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10_near(segtype) segtype_is_raid10(segtype)
@@ -144,6 +148,12 @@ struct dev_manager;
#define segtype_is_virtual(segtype) ((segtype)->flags & SEG_VIRTUAL ? 1 : 0)
#define segtype_is_unknown(segtype) ((segtype)->flags & SEG_UNKNOWN ? 1 : 0)
#define segtype_can_split(segtype) ((segtype)->flags & SEG_CAN_SPLIT ? 1 : 0)
#define segtype_cannot_be_zeroed(segtype) ((segtype)->flags & SEG_CANNOT_BE_ZEROED ? 1 : 0)
#define segtype_monitored(segtype) ((segtype)->flags & SEG_MONITORED ? 1 : 0)
#define segtype_only_exclusive(segtype) ((segtype)->flags & SEG_ONLY_EXCLUSIVE ? 1 : 0)
#define segtype_can_error_when_full(segtype) ((segtype)->flags & SEG_CAN_ERROR_WHEN_FULL ? 1 : 0)
#define segtype_supports_stripe_size(segtype) \
((segtype_is_striped(segtype) || segtype_is_mirror(segtype) || \
segtype_is_cache(segtype) || segtype_is_cache_pool(segtype) || \
@@ -188,11 +198,11 @@ struct dev_manager;
#define seg_is_thin_volume(seg) segtype_is_thin_volume((seg)->segtype)
#define seg_is_virtual(seg) segtype_is_virtual((seg)->segtype)
#define seg_unknown(seg) segtype_is_unknown((seg)->segtype)
#define seg_can_split(seg) ((seg)->segtype->flags & SEG_CAN_SPLIT ? 1 : 0)
#define seg_cannot_be_zeroed(seg) ((seg)->segtype->flags & SEG_CANNOT_BE_ZEROED ? 1 : 0)
#define seg_monitored(seg) ((seg)->segtype->flags & SEG_MONITORED ? 1 : 0)
#define seg_only_exclusive(seg) ((seg)->segtype->flags & SEG_ONLY_EXCLUSIVE ? 1 : 0)
#define seg_can_error_when_full(seg) ((seg)->segtype->flags & SEG_CAN_ERROR_WHEN_FULL ? 1 : 0)
#define seg_can_split(seg) segtype_can_split((seg)->segtype)
#define seg_cannot_be_zeroed(seg) segtype_cannot_be_zeroed((seg)->segtype)
#define seg_monitored(seg) segtype_monitored((seg)->segtype)
#define seg_only_exclusive(seg) segtype_only_exclusive((seg)->segtype)
#define seg_can_error_when_full(seg) segtype_can_error_when_full((seg)->segtype)
struct segment_type {
struct dm_list list; /* Internal */


@@ -121,7 +121,7 @@ int lv_is_visible(const struct logical_volume *lv)
if (lv_is_historical(lv))
return 1;
if (lv->status & SNAPSHOT)
if (lv_is_snapshot(lv))
return 0;
if (lv_is_cow(lv)) {


@@ -99,6 +99,10 @@ int attach_thin_external_origin(struct lv_segment *seg,
external_lv->name);
external_lv->status &= ~LVM_WRITE;
}
// TODO: should we mark even origin read-only ?
//if (lv_is_cache(external_lv)) /* read-only corigin of cache LV */
// seg_lv(first_seg(external_lv), 0)->status &= ~LVM_WRITE;
}
return 1;


@@ -35,6 +35,47 @@
#define uninitialized_var(x) x = x
#endif
/*
* GCC 3.4 adds a __builtin_clz, which uses the count leading zeros (clz)
* instruction on arches that have one. Provide a fallback using shifts
* and comparisons for older compilers.
*/
#ifdef HAVE___BUILTIN_CLZ
#define clz(x) __builtin_clz((x))
#else /* ifdef HAVE___BUILTIN_CLZ */
unsigned _dm_clz(unsigned x)
{
int n;
if ((int)x <= 0) return (~x >> 26) & 32;
n = 1;
if ((x >> 16) == 0) {
n = n + 16;
x = x << 16;
}
if ((x >> 24) == 0) {
n = n + 8;
x = x << 8;
}
if ((x >> 28) == 0) {
n = n + 4;
x = x << 4;
}
if ((x >> 30) == 0) {
n = n + 2;
x = x << 2;
}
n = n - (x >> 31);
return n;
}
#define clz(x) _dm_clz((x))
#endif /* ifdef HAVE___BUILTIN_CLZ */
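/*
 * Editorial sketch, not part of the patch: quick sanity checks for the
 * clz() fallback above (assumes <assert.h> is included and a 32-bit
 * unsigned int, as the fallback itself does).
 */
static inline void _dm_clz_selftest(void)
{
	assert(clz(1U) == 31);          /* only bit 0 set: 31 leading zeros */
	assert(clz(0x80000000U) == 0);  /* top bit set: no leading zeros */
	assert(clz(0x10000U) == 15);    /* bit 16 set: 15 leading zeros */
}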
#define KERNEL_VERSION(major, minor, release) (((major) << 16) + ((minor) << 8) + (release))
/* Define some portable printing types */


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2011-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2011-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -87,13 +87,13 @@ static int _raid_text_import_areas(struct lv_segment *seg,
}
/* Metadata device comes first. */
if (!seg_is_raid0(seg)) {
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
}
if (!(lv = find_lv(seg->lv->vg, cv->v.str))) {
log_error("Couldn't find volume '%s' for segment '%s'.",
cv->v.str ? : "NULL", seg_name);
return 0;
}
if (strstr(lv->name, "_rmeta_")) {
if (!set_lv_segment_area_lv(seg, s, lv, 0, RAID_META))
return_0;
cv = cv->next;
@@ -167,7 +167,6 @@ static int _raid_text_import(struct lv_segment *seg,
if (seg_is_any_raid0(seg))
seg->area_len /= seg->area_count;
seg->status |= RAID;
return 1;
}
@@ -538,14 +537,20 @@ static const struct raid_type {
{ SEG_TYPE_NAME_RAID10, 0, SEG_RAID10 | SEG_AREAS_MIRRORED },
{ SEG_TYPE_NAME_RAID4, 1, SEG_RAID4 },
{ SEG_TYPE_NAME_RAID5, 1, SEG_RAID5 },
{ SEG_TYPE_NAME_RAID5_N, 1, SEG_RAID5_N },
{ SEG_TYPE_NAME_RAID5_LA, 1, SEG_RAID5_LA },
{ SEG_TYPE_NAME_RAID5_LS, 1, SEG_RAID5_LS },
{ SEG_TYPE_NAME_RAID5_RA, 1, SEG_RAID5_RA },
{ SEG_TYPE_NAME_RAID5_RS, 1, SEG_RAID5_RS },
{ SEG_TYPE_NAME_RAID6, 2, SEG_RAID6 },
{ SEG_TYPE_NAME_RAID6_N_6, 2, SEG_RAID6_N_6 },
{ SEG_TYPE_NAME_RAID6_NC, 2, SEG_RAID6_NC },
{ SEG_TYPE_NAME_RAID6_NR, 2, SEG_RAID6_NR },
{ SEG_TYPE_NAME_RAID6_ZR, 2, SEG_RAID6_ZR }
{ SEG_TYPE_NAME_RAID6_ZR, 2, SEG_RAID6_ZR },
{ SEG_TYPE_NAME_RAID6_LS_6, 2, SEG_RAID6_LS_6 },
{ SEG_TYPE_NAME_RAID6_RS_6, 2, SEG_RAID6_RS_6 },
{ SEG_TYPE_NAME_RAID6_LA_6, 2, SEG_RAID6_LA_6 },
{ SEG_TYPE_NAME_RAID6_RA_6, 2, SEG_RAID6_RA_6 }
};
static struct segment_type *_init_raid_segtype(struct cmd_context *cmd,


@@ -0,0 +1,5 @@
dm_bit_get_last
dm_bit_get_prev
dm_stats_update_regions_from_fd
dm_bitset_parse_list
dm_stats_bind_from_fd


@@ -76,6 +76,13 @@ static int _test_word(uint32_t test, int bit)
return (tb ? ffs(tb) + bit - 1 : -1);
}
static int _test_word_rev(uint32_t test, int bit)
{
uint32_t tb = test << (DM_BITS_PER_INT - 1 - bit);
return (tb ? bit - clz(tb) : -1);
}
int dm_bit_get_next(dm_bitset_t bs, int last_bit)
{
int bit, word;
@@ -101,15 +108,45 @@ int dm_bit_get_next(dm_bitset_t bs, int last_bit)
return -1;
}
int dm_bit_get_prev(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit--; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit >= 0) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word_rev(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = (last_bit & ~(DM_BITS_PER_INT - 1)) - 1;
}
return -1;
}
int dm_bit_get_first(dm_bitset_t bs)
{
return dm_bit_get_next(bs, -1);
}
int dm_bit_get_last(dm_bitset_t bs)
{
return dm_bit_get_prev(bs, bs[0] + 1);
}
/*
* Based on the Linux kernel __bitmap_parselist from lib/bitmap.c
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem)
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits)
{
unsigned a, b;
int c, old_c, totaldigits, ndigits, nmaskbits;
@@ -185,6 +222,9 @@ scan:
} while (len && c == ',');
if (!mask) {
if (min_num_bits && (nmaskbits < min_num_bits))
nmaskbits = min_num_bits;
if (!(mask = dm_bitset_create(mem, nmaskbits)))
goto_bad;
str = start;
@@ -201,3 +241,19 @@ bad:
}
return NULL;
}
#if defined(__GNUC__)
/*
* Maintain backward compatibility with older versions that did not
* accept a 'min_num_bits' argument to dm_bitset_parse_list().
*/
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem);
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem)
{
return dm_bitset_parse_list(str, mem, 0);
}
DM_EXPORT_SYMBOL(dm_bitset_parse_list, 1_02_129);
#else /* if defined(__GNUC__) */
#endif
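/*
 * Editorial sketch, not part of the patch: how the new reverse-walk calls
 * and the extended dm_bitset_parse_list() signature might be used together.
 * Assumes "libdevmapper.h" and <stdio.h> are included; the bit list "0-2,5"
 * and the minimum size of 8 bits are arbitrary.
 */
static void _example_reverse_walk(void)
{
	dm_bitset_t bs;
	int i;

	/* Parse "0-2,5", allocating at least 8 bits (min_num_bits = 8). */
	if (!(bs = dm_bitset_parse_list("0-2,5", NULL, 8)))
		return;

	/* Visits bits 5, 2, 1 and 0, in that order. */
	for (i = dm_bit_get_last(bs); i >= 0; i = dm_bit_get_prev(bs, i))
		printf("bit %d is set\n", i);

	dm_bitset_destroy(bs);
}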


@@ -1851,10 +1851,10 @@ static struct dm_ioctl *_do_dm_ioctl(struct dm_task *dmt, unsigned command,
dmi->flags &= ~DM_EXISTS_FLAG; /* FIXME */
else {
if (_log_suppress || dmt->ioctl_errno == EINTR)
log_verbose("device-mapper: %s ioctl on %s%s%s%.0d%s%.0d%s%s "
log_verbose("device-mapper: %s ioctl on %s %s%s%.0d%s%.0d%s%s "
"failed: %s",
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
dmt->major > 0 ? "(" : "",
dmt->major > 0 ? dmt->major : 0,
dmt->major > 0 ? ":" : "",
@@ -1863,10 +1863,10 @@ static struct dm_ioctl *_do_dm_ioctl(struct dm_task *dmt, unsigned command,
dmt->major > 0 ? ")" : "",
strerror(dmt->ioctl_errno));
else
log_error("device-mapper: %s ioctl on %s%s%s%.0d%s%.0d%s%s "
log_error("device-mapper: %s ioctl on %s %s%s%.0d%s%.0d%s%s "
"failed: %s",
_cmd_data_v4[dmt->type].name,
dmi->name, dmi->uuid,
dmi->name, dmi->uuid,
dmt->major > 0 ? "(" : "",
dmt->major > 0 ? dmt->major : 0,
dmt->major > 0 ? ":" : "",


@@ -517,6 +517,16 @@ int dm_stats_bind_name(struct dm_stats *dms, const char *name);
*/
int dm_stats_bind_uuid(struct dm_stats *dms, const char *uuid);
/*
* Bind a dm_stats handle to the device backing the file referenced
* by the specified file descriptor.
*
* File descriptor fd must reference a regular file, open for reading,
* in a local file system, backed by a device-mapper device, that
* supports the FIEMAP ioctl, and that returns data describing the
* physical location of extents.
*/
int dm_stats_bind_from_fd(struct dm_stats *dms, int fd);
/*
* Test whether the running kernel supports the precise_timestamps
* feature. Presence of this feature also implies histogram support.
@@ -1138,9 +1148,9 @@ for (dm_stats_walk_init((dms), DM_STATS_WALK_AREA), \
*/
#define dm_stats_foreach_group(dms) \
for (dm_stats_walk_init((dms), DM_STATS_WALK_GROUP), \
dm_stats_group_walk_start(dms); \
!dm_stats_group_walk_end(dms); \
dm_stats_group_walk_next(dms))
dm_stats_walk_start(dms); \
!dm_stats_walk_end(dms); \
dm_stats_walk_next(dms))
/*
* Start a walk iterating over the regions contained in dm_stats handle
@@ -1310,8 +1320,9 @@ int dm_stats_get_group_descriptor(const struct dm_stats *dms,
* On success the function returns a pointer to an array of uint64_t
* containing the IDs of the newly created regions. The region_id
* array is terminated by the value DM_STATS_REGION_NOT_PRESENT and
* should be freed using dm_free() when no longer required. On error
* NULL is returned.
* should be freed using dm_free() when no longer required.
*
* On error NULL is returned.
*
* Following a call to dm_stats_create_regions_from_fd() the handle
* is guaranteed to be in a listed state, and to contain any region
@@ -1319,12 +1330,43 @@ int dm_stats_get_group_descriptor(const struct dm_stats *dms,
*
* The group_id for the new group is equal to the region_id value in
* the first array element.
*
*/
uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
int group, int precise,
struct dm_histogram *bounds,
const char *alias);
/*
* Update a group of regions that correspond to the extents of a file
* in the filesystem, adding and removing regions to account for
* allocation changes in the underlying file.
*
* File descriptor fd must reference a regular file, open for reading,
* in a local file system that supports the FIEMAP ioctl, and that
* returns data describing the physical location of extents.
*
* The file descriptor can be closed by the caller following the call
* to dm_stats_update_regions_from_fd().
*
* On success the function returns a pointer to an array of uint64_t
* containing the IDs of the updated regions (including any existing
* regions that were not modified by the call).
*
* The region_id array is terminated by the special value
* DM_STATS_REGION_NOT_PRESENT and should be freed using dm_free()
* when no longer required.
*
* On error NULL is returned.
*
* Following a call to dm_stats_update_regions_from_fd() the handle
* is guaranteed to be in a listed state, and to contain any region
* and group identifiers created by the operation.
*
* This function cannot be used with file mapped regions that are
* not members of a group: either group the regions, or remove them
* and re-map them with dm_stats_create_regions_from_fd().
*/
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t group_id);
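/*
 * Editorial sketch, not part of the patch: one possible calling sequence
 * for the file-mapping calls documented above.  Assumes <fcntl.h>,
 * <unistd.h> and "libdevmapper.h" are included; error handling is
 * abbreviated and the program_id "example" is arbitrary.
 */
static void _example_filemap(const char *path)
{
	struct dm_stats *dms = NULL;
	uint64_t *regions, group_id;
	int fd;

	if ((fd = open(path, O_RDONLY)) < 0)
		return;

	if (!(dms = dm_stats_create("example")))
		goto out;

	/* Bind the handle to the device backing the file. */
	if (!dm_stats_bind_from_fd(dms, fd))
		goto out;

	/* Create one region per file extent and group them. */
	if (!(regions = dm_stats_create_regions_from_fd(dms, fd, 1 /* group */,
							0 /* precise */,
							NULL, path)))
		goto out;

	group_id = regions[0];	/* group_id equals the first region_id */
	dm_free(regions);

	/* Later, after the file has grown or shrunk, refresh the mapping. */
	if ((regions = dm_stats_update_regions_from_fd(dms, fd, group_id)))
		dm_free(regions);
out:
	if (dms)
		dm_stats_destroy(dms);
	(void) close(fd);
}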
/*
* Call this to actually run the ioctl.
@@ -2069,6 +2111,8 @@ void dm_bit_and(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2);
void dm_bit_union(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2);
int dm_bit_get_first(dm_bitset_t bs);
int dm_bit_get_next(dm_bitset_t bs, int last_bit);
int dm_bit_get_last(dm_bitset_t bs);
int dm_bit_get_prev(dm_bitset_t bs, int last_bit);
#define DM_BITS_PER_INT (sizeof(int) * CHAR_BIT)
@@ -2088,7 +2132,7 @@ int dm_bit_get_next(dm_bitset_t bs, int last_bit);
memset((bs) + 1, 0, ((*(bs) / DM_BITS_PER_INT) + 1) * sizeof(int))
#define dm_bit_copy(bs1, bs2) \
memcpy((bs1) + 1, (bs2) + 1, ((*(bs1) / DM_BITS_PER_INT) + 1) * sizeof(int))
memcpy((bs1) + 1, (bs2) + 1, ((*(bs2) / DM_BITS_PER_INT) + 1) * sizeof(int))
/*
* Parse a string representation of a bitset into a dm_bitset_t. The
@@ -2098,7 +2142,8 @@ int dm_bit_get_next(dm_bitset_t bs, int last_bit);
* dm_malloc(). Otherwise the bitset will be allocated using the supplied
* dm_pool.
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem);
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits);
/* Returns number of set bits */
static inline unsigned hweight32(uint32_t i)


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2005-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
@@ -47,13 +47,19 @@ enum {
SEG_RAID1,
SEG_RAID10,
SEG_RAID4,
SEG_RAID5_N,
SEG_RAID5_LA,
SEG_RAID5_RA,
SEG_RAID5_LS,
SEG_RAID5_RS,
SEG_RAID6_N_6,
SEG_RAID6_ZR,
SEG_RAID6_NR,
SEG_RAID6_NC,
SEG_RAID6_LS_6,
SEG_RAID6_RS_6,
SEG_RAID6_LA_6,
SEG_RAID6_RA_6,
};
/* FIXME Add crypt and multipath support */
@@ -81,13 +87,20 @@ static const struct {
{ SEG_RAID1, "raid1"},
{ SEG_RAID10, "raid10"},
{ SEG_RAID4, "raid4"},
{ SEG_RAID5_N, "raid5_n"},
{ SEG_RAID5_LA, "raid5_la"},
{ SEG_RAID5_RA, "raid5_ra"},
{ SEG_RAID5_LS, "raid5_ls"},
{ SEG_RAID5_RS, "raid5_rs"},
{ SEG_RAID6_N_6,"raid6_n_6"},
{ SEG_RAID6_ZR, "raid6_zr"},
{ SEG_RAID6_NR, "raid6_nr"},
{ SEG_RAID6_NC, "raid6_nc"},
{ SEG_RAID6_LS_6, "raid6_ls_6"},
{ SEG_RAID6_RS_6, "raid6_rs_6"},
{ SEG_RAID6_LA_6, "raid6_la_6"},
{ SEG_RAID6_RA_6, "raid6_ra_6"},
/*
* WARNING: Since 'raid' target overloads this 1:1 mapping table
@@ -2140,13 +2153,19 @@ static int _emit_areas_line(struct dm_task *dmt __attribute__((unused)),
case SEG_RAID1:
case SEG_RAID10:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
if (!area->dev_node) {
EMIT_PARAMS(*pos, " -");
break;
@@ -2588,13 +2607,19 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
case SEG_RAID1:
case SEG_RAID10:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
target_type_is_raid = 1;
r = _raid_emit_segment_line(dmt, major, minor, seg, seg_start,
params, paramsize);
@@ -2778,6 +2803,10 @@ static int _dm_tree_revert_activated(struct dm_tree_node *parent)
dm_list_iterate_items_gen(child, &parent->activated, activated_list) {
log_debug_activation("Reverting %s.", child->name);
if (child->callback) {
log_debug_activation("Dropping callback for %s.", child->name);
child->callback = NULL;
}
if (!_deactivate_node(child->name, child->info.major, child->info.minor,
&child->dtree->cookie, child->udev_flags, 0)) {
log_error("Unable to deactivate %s (%" PRIu32
@@ -3865,13 +3894,19 @@ int dm_tree_node_add_null_area(struct dm_tree_node *node, uint64_t offset)
case SEG_RAID0_META:
case SEG_RAID1:
case SEG_RAID4:
case SEG_RAID5_N:
case SEG_RAID5_LA:
case SEG_RAID5_RA:
case SEG_RAID5_LS:
case SEG_RAID5_RS:
case SEG_RAID6_N_6:
case SEG_RAID6_ZR:
case SEG_RAID6_NR:
case SEG_RAID6_NC:
case SEG_RAID6_LS_6:
case SEG_RAID6_RS_6:
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
break;
default:
log_error("dm_tree_node_add_null_area() called on an unsupported segment type");


@@ -16,6 +16,7 @@
*/
#include "dmlib.h"
#include "kdev_t.h"
#include "math.h" /* log10() */
@@ -260,6 +261,9 @@ static int _stats_group_id_present(const struct dm_stats *dms, uint64_t id)
{
struct dm_stats_group *group = NULL;
if (id == DM_STATS_GROUP_NOT_PRESENT)
return 0;
if (!dms || !dms->regions)
return_0;
@@ -452,6 +456,24 @@ int dm_stats_bind_uuid(struct dm_stats *dms, const char *uuid)
return 1;
}
int dm_stats_bind_from_fd(struct dm_stats *dms, int fd)
{
int major, minor;
struct stat buf;
if (fstat(fd, &buf)) {
log_error("fstat failed for fd %d.", fd);
return 0;
}
major = (int) MAJOR(buf.st_dev);
minor = (int) MINOR(buf.st_dev);
if (!dm_stats_bind_devno(dms, major, minor))
return_0;
return 1;
}
static int _stats_check_precise_timestamps(const struct dm_stats *dms)
{
/* Already checked? */
@@ -645,6 +667,23 @@ static void _stats_update_groups(struct dm_stats *dms)
}
}
static void _check_group_regions_present(struct dm_stats *dms,
struct dm_stats_group *group)
{
dm_bitset_t regions = group->regions;
int64_t i, group_id;
group_id = i = dm_bit_get_first(regions);
for (; i > 0; dm_bit_get_next(regions, i))
if (!_stats_region_present(&dms->regions[i])) {
log_warn("Group descriptor " FMTi64 " contains "
"non-existent region_id " FMTi64 ".",
group_id, i);
dm_bit_clear(regions, i);
}
}
/*
* Parse a DMS_GROUP group descriptor embedded in a region's aux_data.
*
@@ -658,6 +697,7 @@ static void _stats_update_groups(struct dm_stats *dms)
#define DMS_GROUP_TAG_LEN (sizeof(DMS_GROUP_TAG) - 1)
#define DMS_GROUP_SEP ':'
#define DMS_AUX_SEP "#"
static int _parse_aux_data_group(struct dm_stats *dms,
struct dm_stats_region *region,
struct dm_stats_group *group)
@@ -701,15 +741,21 @@ static int _parse_aux_data_group(struct dm_stats *dms,
end = c + strlen(c);
*(end++) = '\0';
if (!(regions = dm_bitset_parse_list(c, NULL))) {
if (!(regions = dm_bitset_parse_list(c, NULL, 0))) {
log_error("Could not parse member list while "
"reading group aux_data");
return 0;
}
group->group_id = dm_bit_get_first(regions);
group->regions = regions;
if (group->group_id != region->region_id) {
log_error("Found invalid group descriptor in region " FMTu64
" aux_data.", region->region_id);
group->group_id = DM_STATS_GROUP_NOT_PRESENT;
goto bad;
}
group->regions = regions;
group->alias = NULL;
if (strlen(alias)) {
group->alias = dm_strdup(alias);
@@ -1024,6 +1070,9 @@ static int _stats_parse_list(struct dm_stats *dms, const char *resp)
dms->regions = dm_pool_end_object(mem);
dms->groups = dm_pool_end_object(group_mem);
dm_stats_foreach_group(dms)
_check_group_regions_present(dms, &dms->groups[dms->cur_group]);
_stats_update_groups(dms);
if (fclose(list_rows))
@@ -1056,6 +1105,9 @@ int dm_stats_list(struct dm_stats *dms, const char *program_id)
if (!_stats_set_name_cache(dms))
return_0;
if (dms->regions)
_stats_regions_destroy(dms);
r = dm_snprintf(msg, sizeof(msg), "@stats_list %s", program_id);
if (r < 0) {
@@ -1690,9 +1742,7 @@ static size_t _stats_group_tag_fill(const struct dm_stats *dms,
int i, j, r, next, last = 0;
size_t used = 0;
i = dm_bit_get_first(regions);
for (; i >= 0; i = dm_bit_get_next(regions, i))
last = i;
last = dm_bit_get_last(regions);
i = dm_bit_get_first(regions);
for(; i >= 0; i = dm_bit_get_next(regions, i)) {
@@ -1736,17 +1786,12 @@ bad:
static size_t _stats_group_tag_len(const struct dm_stats *dms,
dm_bitset_t regions)
{
int i, j, next, nr_regions = 0;
int64_t i, j, next, nr_regions = 0;
size_t buflen = 0, id_len = 0;
/* check region ids and find last set bit */
i = dm_bit_get_first(regions);
for (; i >= 0; i = dm_bit_get_next(regions, i)) {
if (!dm_stats_region_present(dms, i)) {
log_error("Region identifier %d not found", i);
return 0;
}
/* length of region_id or range start in characters */
id_len = (i) ? 1 + (size_t) log10(i) : 1;
buflen += id_len;
@@ -3823,6 +3868,37 @@ static int _extent_start_compare(const void *p1, const void *p2)
return 1;
}
/*
* Resize the group bitmap corresponding to group_id so that it can
* contain at least num_regions members.
*/
static int _stats_resize_group(struct dm_stats_group *group, int num_regions)
{
int last_bit = dm_bit_get_last(group->regions);
dm_bitset_t new, old;
if (last_bit >= num_regions) {
log_error("Cannot resize group bitmap to %d with bit %d set.",
num_regions, last_bit);
return 0;
}
log_very_verbose("Resizing group bitmap from %d to %d (last_bit: %d).",
group->regions[0], num_regions, last_bit);
new = dm_bitset_create(NULL, num_regions);
if (!new) {
log_error("Could not allocate memory for new group bitmap.");
return 0;
}
old = group->regions;
dm_bit_copy(new, old);
group->regions = new;
dm_bitset_destroy(old);
return 1;
}
static int _stats_create_group(struct dm_stats *dms, dm_bitset_t regions,
const char *alias, uint64_t *group_id)
{
@@ -3973,7 +4049,7 @@ int dm_stats_create_group(struct dm_stats *dms, const char *members,
return 0;
};
if (!(regions = dm_bitset_parse_list(members, NULL))) {
if (!(regions = dm_bitset_parse_list(members, NULL, 0))) {
log_error("Could not parse list: '%s'", members);
return 0;
}
@@ -4176,8 +4252,8 @@ bad:
return 0;
}
static int _stats_add_extent(struct dm_pool *mem, struct fiemap_extent *fm_ext,
uint64_t id)
static int _stats_add_file_extent(int fd, struct dm_pool *mem, uint64_t id,
struct fiemap_extent *fm_ext)
{
struct _extent extent;
@@ -4187,16 +4263,17 @@ static int _stats_add_extent(struct dm_pool *mem, struct fiemap_extent *fm_ext,
/* convert bytes to dm (512b) sectors */
extent.start = fm_ext->fe_physical >> SECTOR_SHIFT;
extent.len = fm_ext->fe_length >> SECTOR_SHIFT;
extent.id = id;
log_very_verbose("Extent " FMTu64 " on fd %d at " FMTu64 "+"
FMTu64, extent.id, fd, extent.start, extent.len);
if (!dm_pool_grow_object(mem, &extent,
sizeof(extent))) {
log_error("Cannot map file: failed to grow extent map.");
return 0;
}
return 1;
}
/* test for the boundary of an extent */
@@ -4204,6 +4281,102 @@ static int _stats_add_extent(struct dm_pool *mem, struct fiemap_extent *fm_ext,
((ext).fe_logical != 0) && \
((ext).fe_physical != (exp))
/*
* Copy fields from fiemap_extent 'from' to the fiemap_extent
* pointed to by 'to'.
*/
#define ext_copy(to, from) \
do { \
*(to) = *(from); \
} while (0)
static uint64_t _stats_map_extents(int fd, struct dm_pool *mem,
struct fiemap *fiemap,
struct fiemap_extent *fm_ext,
struct fiemap_extent *fm_last,
struct fiemap_extent *fm_pending,
uint64_t next_extent,
int *eof)
{
uint64_t expected = 0, nr_extents = next_extent;
unsigned int i;
/*
* Loop over the returned extents adding the fm_pending extent
* to the table of extents each time a discontinuity (or eof)
* is detected.
*
* We use a pointer to fm_pending in the caller since it is
* possible that logical extents comprising a single physical
* extent are returned by successive FIEMAP calls.
*/
for (i = 0; i < fiemap->fm_mapped_extents; i++) {
expected = fm_last->fe_physical + fm_last->fe_length;
if (fm_ext[i].fe_flags & FIEMAP_EXTENT_LAST)
*eof = 1;
/* cannot map extents that are not yet allocated. */
if (fm_ext[i].fe_flags
& (FIEMAP_EXTENT_UNKNOWN | FIEMAP_EXTENT_DELALLOC))
continue;
/*
* Begin a new extent if the current physical address differs
* from the expected address yielded by fm_last.fe_physical +
* fm_last.fe_length.
*
* A logical discontinuity is seen at the start of the file if
* unwritten space exists before the first extent: do not add
* any extent record until we have accumulated a non-zero length
* in fm_pending.
*/
if (fm_pending->fe_length &&
ext_boundary(fm_ext[i], expected)) {
if (!_stats_add_file_extent(fd, mem, nr_extents,
fm_pending))
goto_bad;
nr_extents++;
/* Begin a new pending extent. */
ext_copy(fm_pending, fm_ext + i);
} else {
expected = 0;
/* Begin a new pending extent for extent 0. If there is
* a hole at the start of the file, the first allocated
* extent will have a non-zero fe_logical. Detect this
* case by testing fm_pending->fe_length: if no length
* has been accumulated we are handling the first
* physical extent of the file.
*/
if (!fm_pending->fe_length || fm_ext[i].fe_logical == 0)
ext_copy(fm_pending, fm_ext + i);
else
/* accumulate this logical extent's length */
fm_pending->fe_length += fm_ext[i].fe_length;
}
*fm_last = fm_ext[i];
}
/*
* If the file only has a single extent, no boundary is ever
* detected to trigger addition of the first extent.
*/
if (*eof || (fm_ext[i - 1].fe_logical == 0)) {
_stats_add_file_extent(fd, mem, nr_extents, fm_pending);
nr_extents++;
}
fiemap->fm_start = (fm_ext[i - 1].fe_logical +
fm_ext[i - 1].fe_length);
/* return the number of extents found in this call. */
return nr_extents - next_extent;
bad:
/* signal mapping error to caller */
*eof = -1;
return 0;
}
/*
* Read the extents of an open file descriptor into a table of struct _extent.
*
@@ -4215,17 +4388,12 @@ static int _stats_add_extent(struct dm_pool *mem, struct fiemap_extent *fm_ext,
static struct _extent *_stats_get_extents_for_file(struct dm_pool *mem, int fd,
uint64_t *count)
{
uint64_t *buf;
struct fiemap_extent fm_last = {0}, fm_pending = {0}, *fm_ext = NULL;
struct fiemap *fiemap = NULL;
struct fiemap_extent *fm_ext = NULL;
struct fiemap_extent fm_last = {0}, fm_pending = {0};
int eof = 0, nr_extents = 0;
struct _extent *extents;
unsigned long long expected = 0;
unsigned long flags = 0;
unsigned int i, num = 0;
int tot_extents = 0, n = 0;
int last = 0;
int rc;
uint64_t *buf;
buf = dm_zalloc(STATS_FIE_BUF_LEN);
if (!buf) {
@@ -4245,7 +4413,7 @@ static struct _extent *_stats_get_extents_for_file(struct dm_pool *mem, int fd,
if (!dm_pool_begin_object(mem, sizeof(*extents)))
return NULL;
flags |= FIEMAP_FLAG_SYNC;
flags = FIEMAP_FLAG_SYNC;
do {
/* start of ioctl loop - zero size and set count to bufsize */
@@ -4254,76 +4422,34 @@ static struct _extent *_stats_get_extents_for_file(struct dm_pool *mem, int fd,
fiemap->fm_extent_count = *count;
/* get count-sized chunk of extents */
rc = ioctl(fd, FS_IOC_FIEMAP, (unsigned long) fiemap);
if (rc < 0) {
rc = -errno;
if (rc == -EBADR)
if (ioctl(fd, FS_IOC_FIEMAP, (unsigned long) fiemap) < 0) {
if (errno == EBADR)
log_err_once("FIEMAP failed with unknown "
"flags %x.", fiemap->fm_flags);
goto bad;
}
/* If 0 extents are returned, then more ioctls are not needed */
/* If 0 extents are returned, more ioctls are not needed */
if (fiemap->fm_mapped_extents == 0)
break;
for (i = 0; i < fiemap->fm_mapped_extents; i++) {
expected = fm_last.fe_physical + fm_last.fe_length;
nr_extents += _stats_map_extents(fd, mem, fiemap, fm_ext,
&fm_last, &fm_pending,
nr_extents, &eof);
if (fm_ext[i].fe_flags & FIEMAP_EXTENT_LAST)
last = 1;
/* check for extent mapping error */
if (eof < 0)
goto bad;
/* cannot map extents that are not yet allocated. */
if (fm_ext[i].fe_flags
& (FIEMAP_EXTENT_UNKNOWN | FIEMAP_EXTENT_DELALLOC))
continue;
} while (eof == 0);
if (ext_boundary(fm_ext[i], expected)) {
tot_extents++;
if (!_stats_add_extent(mem, &fm_pending,
fm_pending.fe_flags))
goto_bad;
/* Begin a new pending extent. */
fm_pending.fe_flags = tot_extents - 1;
fm_pending.fe_physical = fm_ext[i].fe_physical;
fm_pending.fe_logical = fm_ext[i].fe_logical;
fm_pending.fe_length = fm_ext[i].fe_length;
} else {
expected = 0;
if (!tot_extents)
tot_extents = 1;
/* Begin a new pending extent for extent 0. */
if (fm_ext[i].fe_logical == 0) {
fm_pending.fe_flags = tot_extents - 1;
fm_pending.fe_physical = fm_ext[i].fe_physical;
fm_pending.fe_logical = fm_ext[i].fe_logical;
fm_pending.fe_length = fm_ext[i].fe_length;
} else
fm_pending.fe_length += fm_ext[i].fe_length;
}
num += tot_extents;
fm_last = fm_ext[i];
n++;
}
/*
* If the file only has a single extent, no boundary is ever
* detected to trigger addition of the first extent.
*/
if (fm_ext[i].fe_logical == 0)
_stats_add_extent(mem, &fm_pending, fm_pending.fe_flags);
fiemap->fm_start = (fm_ext[i - 1].fe_logical +
fm_ext[i - 1].fe_length);
} while (last == 0);
if (!tot_extents) {
if (!nr_extents) {
log_error("Cannot map file: no allocated extents.");
goto bad;
}
/* return total number of extents */
*count = tot_extents;
*count = nr_extents;
extents = dm_pool_end_object(mem);
/* free FIEMAP buffer. */
@@ -4337,23 +4463,140 @@ bad:
return NULL;
}
#define MATCH_EXTENT(e, s, l) \
(((e).start == (s)) && ((e).len == (l)))
static struct _extent *_find_extent(size_t nr_extents, struct _extent *extents,
uint64_t start, uint64_t len)
{
size_t i;
for (i = 0; i < nr_extents; i++)
if (MATCH_EXTENT(extents[i], start, len))
return extents + i;
return NULL;
}
/*
* Create a set of regions representing the extents of a file and
* return a table of uint64_t region_id values. The number of regions
* Clean up a table of region_id values that were created during a
* failed dm_stats_create_regions_from_fd, or dm_stats_update_regions_from_fd
* operation.
*/
static void _stats_cleanup_region_ids(struct dm_stats *dms, uint64_t *regions,
uint64_t nr_regions)
{
uint64_t i;
for (i = 0; i < nr_regions; i++)
if (!_stats_delete_region(dms, regions[i]))
log_error("Could not delete region " FMTu64 ".", i);
}
/*
* First update pass: prune no-longer-allocated extents from the group
* and build a table of the remaining extents so that their creation
* can be skipped in the second pass.
*/
static int _stats_unmap_regions(struct dm_stats *dms, uint64_t group_id,
struct dm_pool *mem, struct _extent *extents,
struct _extent **old_extents, uint64_t *count,
int *regroup)
{
struct dm_stats_region *region = NULL;
struct dm_stats_group *group = NULL;
int64_t nr_kept, nr_old, i;
struct _extent ext;
group = &dms->groups[group_id];
log_very_verbose("Checking for changed file extents in group ID "
FMTu64, group_id);
if (!dm_pool_begin_object(mem, sizeof(**old_extents))) {
log_error("Could not allocate extent table.");
return 0;
}
nr_kept = nr_old = 0; /* counts of old and retained extents */
/*
* First pass: delete de-allocated extents and set regroup=1 if
* deleting the current group leader.
*/
i = dm_bit_get_last(group->regions);
for (; i >= 0; i = dm_bit_get_prev(group->regions, i)) {
region = &dms->regions[i];
nr_old++;
if (_find_extent(*count, extents,
region->start, region->len)) {
ext.start = region->start;
ext.len = region->len;
ext.id = i;
nr_kept++;
dm_pool_grow_object(mem, &ext,
sizeof(ext));
log_very_verbose("Kept region " FMTu64, i);
} else {
if (i == group_id)
*regroup = 1;
if (!_stats_delete_region(dms, i)) {
log_error("Could not remove region ID " FMTu64,
i);
goto out;
}
log_very_verbose("Deleted region " FMTu64, i);
}
}
*old_extents = dm_pool_end_object(mem);
if (!*old_extents) {
log_error("Could not finalize region extent table.");
goto out;
}
log_very_verbose("Kept %ld of %ld old extents",
nr_kept, nr_old);
log_very_verbose("Found " FMTu64 " new extents",
*count - nr_kept);
return nr_kept;
out:
dm_pool_abandon_object(mem);
return -1;
}
/*
* Create or update a set of regions representing the extents of a file
* and return a table of uint64_t region_id values. The number of regions
* created is returned in the memory pointed to by count (which must be
* non-NULL).
*
* If group_id is not equal to DM_STATS_GROUP_NOT_PRESENT, it is assumed
* that group_id corresponds to a group containing existing regions that
* were mapped to this file at an earlier time: regions will be added or
* removed to reflect the current status of the file.
*/
static uint64_t *_stats_create_file_regions(struct dm_stats *dms, int fd,
struct dm_histogram *bounds,
int precise, uint64_t *count)
static uint64_t *_stats_map_file_regions(struct dm_stats *dms, int fd,
struct dm_histogram *bounds,
int precise, uint64_t group_id,
uint64_t *count, int *regroup)
{
uint64_t *regions = NULL, i, fail_region;
struct _extent *extents = NULL, *old_extents = NULL;
uint64_t *regions = NULL, fail_region;
struct dm_stats_group *group = NULL;
struct dm_pool *extent_mem = NULL;
struct _extent *extents = NULL;
struct _extent *old_ext;
char *hist_arg = NULL;
int update, num_bits;
struct statfs fsbuf;
int64_t nr_kept = 0, i;
struct stat buf;
update = _stats_group_id_present(dms, group_id);
#ifdef BTRFS_SUPER_MAGIC
if (fstatfs(fd, &fsbuf)) {
log_error("fstatfs failed for fd %d", fd);
@@ -4382,6 +4625,18 @@ static uint64_t *_stats_create_file_regions(struct dm_stats *dms, int fd,
return 0;
}
/*
* If regroup is set here, we are creating a new filemap: otherwise
* we are updating a group with a valid group identifier in group_id.
*/
if (update)
log_very_verbose("Updating extents from fd %d with group ID "
FMTu64 " on (%d:%d)", fd, group_id,
major(buf.st_dev), minor(buf.st_dev));
else
log_very_verbose("Mapping extents from fd %d on (%d:%d)",
fd, major(buf.st_dev), minor(buf.st_dev));
/* Use a temporary, private pool for the extent table. This avoids
* hijacking the dms->mem (region table) pool which would lead to
* interleaving temporary allocations with dm_stats_list() data,
@@ -4395,19 +4650,41 @@ static uint64_t *_stats_create_file_regions(struct dm_stats *dms, int fd,
return_0;
}
if (bounds) {
/* _build_histogram_arg enables precise if vals < 1ms. */
if (update) {
group = &dms->groups[group_id];
if ((nr_kept = _stats_unmap_regions(dms, group_id, extent_mem,
extents, &old_extents,
count, regroup)) < 0)
goto_out;
}
if (bounds)
if (!(hist_arg = _build_histogram_arg(bounds, &precise)))
goto_out;
}
/* make space for end-of-table marker */
if (!(regions = dm_malloc((1 + *count) * sizeof(*regions)))) {
log_error("Could not allocate memory for region IDs.");
goto out;
goto_out;
}
/*
* Second pass (first for non-update case): create regions for
* all extents not retained from the prior mapping, and insert
* retained regions into the table of region_id values.
*
* If a regroup is not scheduled, set group bits for newly
* created regions in the group leader bitmap.
*/
for (i = 0; i < *count; i++) {
if (update) {
if ((old_ext = _find_extent(nr_kept, old_extents,
extents[i].start,
extents[i].len))) {
regions[i] = old_ext->id;
continue;
}
}
if (!_stats_create_region(dms, regions + i, extents[i].start,
extents[i].len, -1, precise, hist_arg,
dms->program_id, "")) {
@@ -4416,13 +4693,39 @@ static uint64_t *_stats_create_file_regions(struct dm_stats *dms, int fd,
extents[i].start);
goto out_remove;
}
log_very_verbose("Created new region mapping " FMTu64 "+" FMTu64
" with region ID " FMTu64, extents[i].start,
extents[i].len, regions[i]);
if (!*regroup && update) {
/* expand group bitmap */
if (regions[i] > (group->regions[0] - 1)) {
num_bits = regions[i] + *count;
if (!_stats_resize_group(group, num_bits)) {
log_error("Failed to resize group "
"bitmap.");
goto out_remove;
}
}
dm_bit_set(group->regions, regions[i]);
}
}
regions[*count] = DM_STATS_REGION_NOT_PRESENT;
/* Update group leader aux_data for new group members. */
if (!*regroup && update)
if (!_stats_set_aux(dms, group_id,
dms->regions[group_id].aux_data))
log_error("Failed to update group aux_data.");
if (bounds)
dm_free(hist_arg);
dm_pool_free(extent_mem, extents);
dm_pool_destroy(extent_mem);
dm_free(hist_arg);
return regions;
out_remove:
@@ -4436,13 +4739,15 @@ out_remove:
dm_stats_list(dms, NULL);
fail_region = i;
for (i = 0; i < fail_region; i++)
if (!_stats_delete_region(dms, regions[i]))
log_error("Could not delete region " FMTu64 ".", i);
_stats_cleanup_region_ids(dms, regions, fail_region);
*count = 0;
out:
/*
* The table of file extents in 'extents' is always built, so free
* it explicitly: this will also free any 'old_extents' table that
* was later allocated from the 'extent_mem' pool by this function.
*/
dm_pool_free(extent_mem, extents);
dm_pool_destroy(extent_mem);
dm_free(hist_arg);
@@ -4456,15 +4761,16 @@ uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
const char *alias)
{
uint64_t *regions, count = 0;
int regroup = 1;
if (alias && !group) {
log_error("Cannot set alias without grouping regions.");
return NULL;
}
regions = _stats_create_file_regions(dms, fd, bounds, precise, &count);
if (!regions)
return_0;
if (!(regions = _stats_map_file_regions(dms, fd, bounds, precise,
-1, &count, &regroup)))
return NULL;
if (!group)
return regions;
@@ -4478,11 +4784,97 @@ uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
return regions;
out:
_stats_cleanup_region_ids(dms, regions, count);
dm_free(regions);
return NULL;
}
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t group_id)
{
struct dm_histogram *bounds = NULL;
int nr_bins, precise, regroup;
uint64_t *regions, count = 0;
const char *alias = NULL;
if (!dms->regions || !dm_stats_group_present(dms, group_id)) {
if (!dm_stats_list(dms, dms->program_id)) {
log_error("Could not obtain region list while "
"updating group " FMTu64 ".", group_id);
return NULL;
}
}
if (!dm_stats_group_present(dms, group_id)) {
log_error("Group ID " FMTu64 " does not exist.", group_id);
return NULL;
}
/*
* If the extent corresponding to the group leader's region has been
* deallocated, _stats_map_file_regions() will remove the region and
* the group. In this case, regroup will be set by the call and the
* group will be re-created using saved values.
*/
regroup = 0;
/*
* A copy of the alias is needed to re-create the group when regroup=1.
*/
if (dms->groups[group_id].alias) {
alias = dm_strdup(dms->groups[group_id].alias);
if (!alias) {
log_error("Failed to allocate group alias string.");
return NULL;
}
}
if (dms->regions[group_id].bounds) {
/*
* A copy of the histogram bounds must be passed to
* _stats_map_file_regions() to be used when creating new
* regions: it is not safe to use the copy in the current group
* leader since it may be destroyed during the first group
* update pass.
*/
nr_bins = dms->regions[group_id].bounds->nr_bins;
bounds = _alloc_dm_histogram(nr_bins);
if (!bounds) {
log_error("Could not allocate memory for group "
"histogram bounds.");
return NULL;
}
_stats_copy_histogram_bounds(bounds,
dms->regions[group_id].bounds);
}
precise = (dms->regions[group_id].timescale == 1);
regions = _stats_map_file_regions(dms, fd, bounds, precise,
group_id, &count, &regroup);
if (!regions)
goto bad;
if (!dm_stats_list(dms, NULL))
goto bad;
if (regroup)
if (!_stats_group_file_regions(dms, regions, count, alias))
goto bad;
dm_free(bounds);
dm_free((char *) alias);
return regions;
bad:
_stats_cleanup_region_ids(dms, regions, count);
dm_free(bounds);
dm_free((char *) alias);
return NULL;
}
#else /* HAVE_LINUX_FIEMAP */
uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
int group, int precise,
struct dm_histogram *bounds,
@@ -4491,6 +4883,13 @@ uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
log_error("File mapping requires FIEMAP ioctl support.");
return 0;
}
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t group_id)
{
log_error("File mapping requires FIEMAP ioctl support.");
return 0;
}
#endif /* HAVE_LINUX_FIEMAP */
/*


@@ -468,10 +468,12 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
unsigned base = BASE_UNKNOWN;
unsigned s;
int precision;
double d;
uint64_t byte = UINT64_C(0);
uint64_t units = UINT64_C(1024);
char *size_buf = NULL;
char new_unit_type = '\0', unit_type_buf[2];
const char *prefix = "";
const char * const size_str[][3] = {
/* BASE_UNKNOWN */
{" ", " ", " "}, /* [0] */
@@ -570,7 +572,7 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
byte = unit_factor;
} else {
/* Human-readable style */
if (unit_type == 'H') {
if (unit_type == 'H' || unit_type == 'R') {
units = UINT64_C(1000);
base = BASE_1000;
} else {
@@ -586,6 +588,16 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
for (s = 0; s < NUM_UNIT_PREFIXES && size < byte; s++)
byte /= units;
if ((s < NUM_UNIT_PREFIXES) &&
((unit_type == 'R') || (unit_type == 'r'))) {
/* When the rounding would cause difference, add '<' prefix
* i.e. 2043M is more than 1.9949G and prints <2.00G
* This version is for 2 digits fixed precision */
d = 100. * (double) size / byte;
if (!_close_enough(floorl(d), nearbyintl(d)))
prefix = "<";
}
include_suffix = 1;
}
@@ -599,7 +611,7 @@ const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
precision = 2;
}
snprintf(size_buf, SIZE_BUF - 1, "%.*f%s", precision,
snprintf(size_buf, SIZE_BUF - 1, "%s%.*f%s", prefix, precision,
(double) size / byte, include_suffix ? size_str[base + s][suffix_type] : "");
return size_buf;
@@ -639,6 +651,8 @@ uint64_t dm_units_to_factor(const char *units, char *unit_type,
switch (*units) {
case 'h':
case 'H':
case 'r':
case 'R':
multiplier = v = UINT64_C(1);
*unit_type = *units;
break;


@@ -31,18 +31,20 @@ LVMRAIDMAN = lvmraid.7
MAN5=lvm.conf.5
MAN7=lvmsystemid.7 lvmreport.7
MAN8=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
lvchange.8 lvmconfig.8 lvconvert.8 lvcreate.8 lvdisplay.8 lvextend.8 \
lvm.8 lvmchange.8 lvmconf.8 lvmdiskscan.8 lvmdump.8 lvmsadc.8 lvmsar.8 \
MAN8=lvm.8 lvmconf.8 lvmdump.8
MAN8DM=dmsetup.8 dmstats.8
MAN8CLUSTER=
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
MAN8GEN=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
lvcreate.8 lvchange.8 lvmconfig.8 lvconvert.8 lvdisplay.8 lvextend.8 \
lvreduce.8 lvremove.8 lvrename.8 lvresize.8 lvs.8 \
lvscan.8 pvchange.8 pvck.8 pvcreate.8 pvdisplay.8 pvmove.8 pvremove.8 \
pvresize.8 pvs.8 pvscan.8 vgcfgbackup.8 vgcfgrestore.8 vgchange.8 \
vgck.8 vgcreate.8 vgconvert.8 vgdisplay.8 vgexport.8 vgextend.8 \
vgimport.8 vgimportclone.8 vgmerge.8 vgmknodes.8 vgreduce.8 vgremove.8 \
vgrename.8 vgs.8 vgscan.8 vgsplit.8
MAN8DM=dmsetup.8 dmstats.8
MAN8CLUSTER=
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
vgrename.8 vgs.8 vgscan.8 vgsplit.8 \
lvmsar.8 lvmsadc.8 lvmdiskscan.8 lvmchange.8
ifeq ($(MAKECMDGOALS),all_man)
MAN_ALL="yes"
@@ -113,8 +115,8 @@ MAN8DIR=$(mandir)/man8
include $(top_builddir)/make.tmpl
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM)
CLEAN_TARGETS+=$(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8CLUSTER) \
$(MAN8SYSTEMD_GENERATORS) $(MAN8DM) *.gen man-generator
DISTCLEAN_TARGETS+=$(FSADMMAN) $(BLKDEACTIVATEMAN) $(DMEVENTDMAN) \
$(LVMETADMAN) $(LVMPOLLDMAN) $(LVMLOCKDMAN) $(CLVMDMAN) $(CMIRRORDMAN) \
$(LVMCACHEMAN) $(LVMTHINMAN) $(LVMDBUSDMAN) $(LVMRAIDMAN)
@@ -125,11 +127,11 @@ all: man device-mapper
device-mapper: $(MAN8DM)
man: $(MAN5) $(MAN7) $(MAN8) $(MAN8CLUSTER) $(MAN8SYSTEMD_GENERATORS)
man: $(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8CLUSTER) $(MAN8SYSTEMD_GENERATORS)
all_man: man
$(MAN5) $(MAN7) $(MAN8) $(MAN8DM) $(MAN8CLUSTER): Makefile
$(MAN5) $(MAN7) $(MAN8) $(MAN8GEN) $(MAN8DM) $(MAN8CLUSTER): Makefile
Makefile: Makefile.in
@:
@@ -140,12 +142,17 @@ Makefile: Makefile.in
*) echo "Creating $@" ; $(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $< > $@ ;; \
esac
ccmd: ../tools/create-commands.c
$(CC) ../tools/create-commands.c -o ccmd
man-generator:
$(CC) -DMAN_PAGE_GENERATOR -I$(top_builddir)/tools $(CFLAGS) $(top_srcdir)/tools/command.c -o $@
- ./man-generator lvmconfig > test.gen
if [ ! -s test.gen ] ; then cp genfiles/*.gen $(top_builddir)/man; fi;
generate: ccmd
./ccmd --output man -s 0 -p 1 -c lvcreate ../tools/command-lines.in > lvcreate.8.a
cat lvcreate.8.a lvcreate.8.b > lvcreate.8.in
$(MAN8GEN): man-generator
echo "Generating $@" ;
if [ ! -e $@.gen ]; then ./man-generator $(basename $@) $(top_srcdir)/man/$@.des > $@.gen; fi
if [ -f $(top_srcdir)/man/$@.end ]; then cat $(top_srcdir)/man/$@.end >> $@.gen; fi;
cat $(top_srcdir)/man/see_also.end >> $@.gen
$(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $@.gen > $@
install_man5: $(MAN5)
$(INSTALL) -d $(MAN5DIR)
@@ -155,9 +162,10 @@ install_man7: $(MAN7)
$(INSTALL) -d $(MAN7DIR)
$(INSTALL_DATA) $(MAN7) $(MAN7DIR)/
install_man8: $(MAN8)
install_man8: $(MAN8) $(MAN8GEN)
$(INSTALL) -d $(MAN8DIR)
$(INSTALL_DATA) $(MAN8) $(MAN8DIR)/
$(INSTALL_DATA) $(MAN8GEN) $(MAN8DIR)/
install_lvm2: install_man5 install_man7 install_man8


@@ -154,6 +154,32 @@ This timeout will be ignored if you start \fBclvmd\fP with the \fB\-d\fP.
.br
Display the version of the cluster LVM daemon.
.
.SH NOTES
.
.SS Activation
.
In a clustered VG, clvmd is used for activation, and the following values are
possible with \fBlvchange/vgchange -a\fP:
.IP \fBy\fP|\fBsy\fP
clvmd activates the LV in shared mode (with a shared lock),
allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
an exclusive lock is automatically used instead.
clvmd attempts to activate the LV concurrently on all nodes.
.IP \fBey\fP
clvmd activates the LV in exclusive mode (with an exclusive lock),
allowing a single node to activate the LV.
clvmd attempts to activate the LV concurrently on all nodes, but only
one will succeed.
.IP \fBly\fP
clvmd attempts to activate the LV only on the local node.
If the LV type allows concurrent access, then shared mode is used,
otherwise exclusive.
.IP \fBn\fP
clvmd deactivates the LV on all nodes.
.IP \fBln\fP
clvmd deactivates the LV on the local node.
.
.SH ENVIRONMENT VARIABLES
.TP
.B LVM_CLVMD_BINARY


@@ -53,12 +53,14 @@ dmstats \(em device-mapper statistics management
.de CMD_COMMAND
. ad l
. IR command
. RI [ device_name
. RB | \-\-uuid
. IR uuid | \fB\-\-major
. RI [ device_name |
. RB [ \-u | \-\-uuid
. IR uuid ]
. RB | [ \-\-major
. IR major
. BR \-\-minor
. IR minor ]
. RB \%[ \-v | \-\-verbose]
. ad b
..
.CMD_COMMAND
@@ -81,8 +83,8 @@ dmstats \(em device-mapper statistics management
. ad l
. BR create
. RB [ device_name...
. RB | file_path... ]
. RB [ \-\-alldevices ]
. RB | file_path...
. RB | [ \-\-alldevices ]]
. RB [ \-\-areas
. IR nr_areas | \fB\-\-areasize
. IR area_size ]
@@ -153,7 +155,7 @@ dmstats \(em device-mapper statistics management
. OPT_OBJECTS
. RB \%[ \-\-nosuffix ]
. RB [ \-\-notimesuffix ]
. RB \%[ \-v | \-\-verbose [ \-v | \-\-verbose ]]
. RB \%[ \-v | \-\-verbose]
. ad b
..
.CMD_LIST
@@ -210,6 +212,17 @@ dmstats \(em device-mapper statistics management
. ad b
..
.CMD_UNGROUP
.HP
.B dmstats
.de CMD_UPDATE_FILEMAP
. ad l
. BR update_filemap
. RI file_path
. RB [ \-\-groupid
. IR id ]
. ad b
..
.CMD_UPDATE_FILEMAP
.
.PD
.ad b
@@ -666,6 +679,21 @@ Remove an existing group and return all the group's regions to their
original state.
The group to be removed is specified using \fB\-\-groupid\fP.
.HP
.CMD_UPDATE_FILEMAP
.br
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP
that were previously created with \fB\-\-filemap\fP. This will add
and remove regions to reflect changes in the allocated extents of
the file on disk since the time that it was created or last updated.
Use of this command is not normally needed since the \fBdmfilemapd\fP
daemon will automatically monitor filemap groups and perform these
updates when required.
If a filemapped group was created with \fB\-\-nomonitor\fP, or the
daemon has been killed, the \fBupdate_filemap\fP command can be used
to manually force an update.
.
.SH REGIONS, AREAS, AND GROUPS
.
@@ -876,7 +904,7 @@ Count of writes merged this interval.
.B write_sector_count
Count of 512 byte sectors written this interval.
.TP
.B write_nsecs
.B write_time
Accumulated duration of all write requests (ns).
.TP
.B in_progress_count

man/lvchange.8.des (new file)

@@ -0,0 +1,2 @@
lvchange changes LV attributes in the VG, changes LV activation in the
kernel, and includes other utilities for LV maintenance.

man/lvchange.8.end (new file)

@@ -0,0 +1,6 @@
.SH EXAMPLES
Change LV permission to read-only:
.sp
.B lvchange \-pr vg00/lvol1


@@ -1,491 +0,0 @@
.TH LVCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.de UNITS
..
.
.SH NAME
.
lvchange \(em change attributes of a logical volume
.
.SH SYNOPSIS
.
.ad l
.B lvchange
.RB [ \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }]
.RB [ \-\-activationmode
.RB { complete | degraded | partial }]
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-k | \-\-setactivationskip
.RB { y | n }]
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-A | \-\-autobackup
.RB { y | n }]
.RB [ \-\-rebuild
.IR PhysicalVolume ]
.RB [ \-\-cachemode
.RB { passthrough | writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR Policy ]
.RB [ \-\-cachesettings
.IR Key \fB= Value ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-C | \-\-contiguous
.RB { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-deltag
.IR Tag ]
.RB [ \-\-detachprofile ]
.RB [ \-\-discards
.RB { ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RB { y | n }]
.RB [ \-h | \-? | \-\-help ]
.RB \%[ \-\-ignorelockingfailure ]
.RB \%[ \-\-ignoremonitoring ]
.RB \%[ \-\-ignoreskippedcluster ]
.RB \%[ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-\-monitor
.RB { y | n }]
.RB [ \-\-noudevsync ]
.RB [ \-P | \-\-partial ]
.RB [ \-p | \-\-permission
.RB { r | rw }]
.RB [ \-M | \-\-persistent
.RB { y | n }
.RB [ \-\-major
.IR Major ]
.RB [ \-\-minor
.IR Minor ]]
.RB [ \-\-poll
.RB { y | n }]
.RB [ \-\- [ raid ] maxrecoveryrate
.IR Rate ]
.RB [ \-\- [ raid ] minrecoveryrate
.IR Rate ]
.RB [ \-\- [ raid ] syncaction
.RB { check | repair }]
.RB [ \-\- [ raid ] writebehind
.IR IOCount ]
.RB [ \-\- [ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]]
.RB [ \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }]
.RB [ \-\-refresh ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-\-resync ]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-\-sysinit ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-Z | \-\-zero
.RB { y | n }]
.RI [ LogicalVolumePath ...]
.ad b
.
.SH DESCRIPTION
.
lvchange allows you to change the attributes of a logical volume
including making them known to the kernel ready for use.
.
.SH OPTIONS
.
See \fBlvm\fP(8) for common options.
.
.HP
.BR \-a | \-\-activate
.RB [ a ][ e | s | l ]{ y | n }
.br
Controls the availability of the logical volumes for use.
Communicates with the kernel device-mapper driver via
libdevmapper to activate (\fB\-ay\fP) or deactivate (\fB\-an\fP) the
logical volumes.
.br
Activation of a logical volume creates a symbolic link
\fI/dev/VolumeGroupName/LogicalVolumeName\fP pointing to the device node.
This link is removed on deactivation.
All software and scripts should access the device through
this symbolic link and present this as the name of the device.
The location and name of the underlying device node may depend on
the distribution and configuration (e.g. udev) and might change
from release to release.
.br
If autoactivation option is used (\fB\-aay\fP),
the logical volume is activated only if it matches an item in
the \fBactivation/auto_activation_volume_list\fP
set in \fBlvm.conf\fP(5).
If this list is not set, then all volumes are considered for
activation. The \fB\-aay\fP option should be also used during system
boot so it's possible to select which volumes to activate using
the \fBactivation/auto_activation_volume_list\fP setting.
.br
In a clustered VG, clvmd is used for activation, and the
following options are possible:
With \fB\-aey\fP, clvmd activates the LV in exclusive mode
(with an exclusive lock), allowing a single node to activate the LV.
With \fB\-asy\fP, clvmd activates the LV in shared mode
(with a shared lock), allowing multiple nodes to activate the LV concurrently.
If the LV type prohibits shared access, such as an LV with a snapshot,
the '\fBs\fP' option is ignored and an exclusive lock is used.
With \fB\-ay\fP (no mode specified), clvmd activates the LV in shared mode
if the LV type allows concurrent access, such as a linear LV.
Otherwise, clvmd activates the LV in exclusive mode.
With \fB\-aey\fP, \fB\-asy\fP, and \fB\-ay\fP, clvmd attempts to activate the LV
on all nodes. If exclusive mode is used, then only one of the
nodes will be successful.
With \fB\-an\fP, clvmd attempts to deactivate the LV on all nodes.
With \fB\-aly\fP, clvmd activates the LV only on the local node, and \fB\-aln\fP
deactivates only on the local node. If the LV type allows concurrent
access, then shared mode is used, otherwise exclusive.
LVs with snapshots are always activated exclusively because they can only
be used on one node at once.
For local VGs \fB\-ay\fP, \fB\-aey\fP, and \fB\-asy\fP are all equivalent.
.
.HP
.BR \-\-activationmode
.RB { complete | degraded | partial }
.br
The activation mode determines whether logical volumes are allowed to
activate when there are physical volumes missing (e.g. due to a device
failure). \fBcomplete\fP is the most restrictive; allowing only those
logical volumes to be activated that are not affected by the missing
PVs. \fBdegraded\fP allows RAID logical volumes to be activated even if
they have PVs missing. (Note that the "\fImirror\fP" segment type is not
considered a RAID logical volume. The "\fIraid1\fP" segment type should
be used instead.) Finally, \fBpartial\fP allows any logical volume to
be activated even if portions are missing due to a missing or failed
PV. This last option should only be used when performing recovery or
repair operations. \fBdegraded\fP is the default mode. To change it,
modify \fBactivation_mode\fP in \fBlvm.conf\fP(5).
.
.HP
.BR \-K | \-\-ignoreactivationskip
.br
Ignore the flag to skip Logical Volumes during activation.
.
.HP
.BR \-k | \-\-setactivationskip
.RB { y | n }
.br
Controls whether Logical Volumes are persistently flagged to be
skipped during activation. By default, thin snapshot volumes are
flagged for activation skip. To activate such volumes,
an extra \fB\-\-ignoreactivationskip\fP option must be used.
The flag is not applied during deactivation. To see whether
the flag is attached, use \fBlvs\fP(8) command where the state
of the flag is reported within \fBlv_attr\fP bits.
.
.HP
.BR \-\-cachemode
.RB { passthrough | writeback | writethrough }
.br
Specifying a cache mode determines when the writes to a cache LV
are considered complete. When \fBwriteback\fP is specified, a write is
considered complete as soon as it is stored in the cache pool LV.
If \fBwritethrough\fP is specified, a write is considered complete only
when it has been stored in the cache pool LV and on the origin LV.
While \fBwritethrough\fP may be slower for writes, it is more
resilient if something should happen to a device associated with the
cache pool LV. With \fBpassthrough\fP mode, all reads are served
from origin LV (all reads miss the cache) and all writes are
forwarded to the origin LV; additionally, write hits cause cache
block invalidates. See \fBlvmcache(7)\fP for more details.
.
.HP
.BR \-\-cachepolicy
.IR Policy ,
.BR \-\-cachesettings
.IR Key \fB= Value
.br
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy and its associated tunable settings. In most use-cases,
default values should be adequate.
.
.HP
.BR \-C | \-\-contiguous
.RB { y | n }
.br
Tries to set or reset the contiguous allocation policy for
logical volumes. It's only possible to change a non-contiguous
logical volume's allocation policy to contiguous, if all of the
allocated physical extents are already contiguous.
.
.HP
.BR \-\-detachprofile
.br
Detach any metadata configuration profiles attached to given
Logical Volumes. See \fBlvm.conf\fP(5) for more information
about metadata profiles.
.
.HP
.BR \-\-discards
.RB { ignore | nopassdown | passdown }
.br
Set this to \fBignore\fP to ignore any discards received by a
thin pool Logical Volume. Set to \fBnopassdown\fP to process such
discards within the thin pool itself and allow the no-longer-needed
extents to be overwritten by new data. Set to \fBpassdown\fP (the
default) to process them both within the thin pool itself and to
pass them down to the underlying device.
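For example, an illustrative command (the pool name is a placeholder):
.br
\fBlvchange \-\-discards nopassdown vg00/mythinpool\fP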
.
.HP
.BR \-\-errorwhenfull
.RB { y | n }
.br
Sets thin pool behavior when data space is exhausted. See
.BR lvcreate (8)
for information.
.
.HP
.BR \-\-ignoremonitoring
.br
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
Do not use this if dmeventd is already monitoring a device.
.
.HP
.BR \-\-major
.IR Major
.br
Sets the major number. This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.
.HP
.BR \-\-minor
.IR Minor
.br
Set the minor number.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Uses and attaches the \fIProfileName\fP configuration profile to the logical
volume metadata. Whenever the logical volume is processed, the
profile is applied automatically. If the volume group has another
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about metadata profiles.
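For example, an illustrative command (the profile and LV names are placeholders):
.br
\fBlvchange \-\-metadataprofile myprofile vg00/lvol1\fP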
.
.HP
.BR \-\-monitor
.RB { y | n }
.br
Start or stop monitoring a mirrored or snapshot logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\%\fBmirror_image_fault_policy\fP and \fBmirror_log_fault_policy\fP
set in \fBlvm.conf\fP(5).
.
.HP
.BR \-\-noudevsync
.br
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.
.HP
.BR \-p | \-\-permission
.RB { r | rw }
.br
Change access permission to read-only or read/write.
.
.HP
.BR \-M | \-\-persistent
.RB { y | n }
.br
Set to \fBy\fP to make the minor number specified persistent.
Change of persistent numbers is not supported for pool volumes.
.
.HP
.BR \-\-poll
.RB { y | n }
.br
Without polling, a logical volume's backgrounded transformation process
will never complete. If there is an incomplete pvmove or lvconvert (for
example, on rebooting after a crash), use \fB\-\-poll y\fP to restart the
process from its last checkpoint. However, it may not be appropriate to
poll a logical volume immediately when it is activated; use
\fB\-\-poll n\fP to defer and then \fB\-\-poll y\fP to restart the process.
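For example, an illustrative command (the VG/LV names are placeholders)
that resumes a conversion interrupted by a reboot:
.br
\fBlvchange \-\-poll y vg00/lvol1\fP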
.
.HP
.BR \-\- [ raid ] rebuild
.BR \fIPhysicalVolume
.br
Option can be repeated multiple times.
Selects PhysicalVolume(s) to be rebuilt in a RaidLV.
Use this option instead of
.BR \-\-resync
or
.BR \-\- [ raid ] syncaction
\fBrepair\fP in case the PVs with corrupted data are known and their data
should be reconstructed rather than reconstructing default (rotating) data.
.br
E.g. in a raid1 mirror, the master leg on /dev/sda may hold corrupt data due
to a known transient disk error, thus
.br
\fBlvchange --rebuild /dev/sda LV\fP
.br
will request the master leg to be rebuilt rather than rebuilding
all other legs from the master.
On a raid5 with rotating data and parity
.br
\fBlvchange --rebuild /dev/sda LV\fP
.br
will rebuild all data and parity blocks in the stripe on /dev/sda.
.
.HP
.BR \-\- [ raid ] maxrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
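For example, an illustrative command (the rate and names are placeholders)
that caps recovery at 100MiB/sec per device:
.br
\fBlvchange \-\-maxrecoveryrate 100M vg00/myraidlv\fP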
.
.HP
.BR \-\- [ raid ] minrecoveryrate
.BR \fIRate [ b | B | s | S | k | K | m | M | g | G ]
.br
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to \fB0\fP means it will be unbounded.
.
.HP
.BR \-\- [ raid ] syncaction
.RB { check | repair }
.br
This argument is used to initiate various RAID synchronization operations.
The \fBcheck\fP and \fBrepair\fP options provide a way to check the
integrity of a RAID logical volume (often referred to as "scrubbing").
These options cause the RAID logical volume to
read all of the data and parity blocks in the array and check for any
discrepancies (e.g. mismatches between mirrors or incorrect parity values).
If \fBcheck\fP is used, the discrepancies will be counted but not repaired.
If \fBrepair\fP is used, the discrepancies will be corrected as they are
encountered. The \fBlvs\fP(8) command can be used to show the number of
discrepancies found or repaired.
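For example, an illustrative command (the names are placeholders)
that scrubs a RAID LV without repairing it:
.br
\fBlvchange \-\-syncaction check vg00/myraidlv\fP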
.
.HP
.BR \-\- [ raid ] writebehind
.IR IOCount
.br
Specify the maximum number of outstanding writes that are allowed to
devices in a RAID1 logical volume that are marked as write-mostly.
Once this value is exceeded, writes become synchronous (i.e. all writes
to the constituent devices must complete before the array signals the
write has completed). Setting the value to zero clears the preference
and allows the system to choose the value arbitrarily.
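For example, an illustrative command (the count and names are placeholders):
.br
\fBlvchange \-\-writebehind 512 vg00/myraidlv\fP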
.
.HP
.BR \-\- [ raid ] writemostly
.BR \fIPhysicalVolume [ : { y | n | t }]
.br
Mark a device in a RAID1 logical volume as write-mostly. All reads
to these drives will be avoided unless absolutely necessary. This keeps
the number of I/Os to the drive to a minimum. The default behavior is to
set the write-mostly attribute for the specified physical volume in the
logical volume. It is possible to also remove the write-mostly flag by
appending a "\fB:n\fP" to the physical volume or to toggle the value by specifying
"\fB:t\fP". The \fB\-\-writemostly\fP argument can be specified more than one time
in a single command; making it possible to toggle the write-mostly attributes
for all the physical volumes in a logical volume at once.
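For example, illustrative commands (the device and LV names are placeholders)
that mark one leg write-mostly and then clear the flag again:
.br
\fBlvchange \-\-writemostly /dev/sdb1 vg00/myraidlv\fP
.br
\fBlvchange \-\-writemostly /dev/sdb1:n vg00/myraidlv\fP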
.
.HP
.BR \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }
.br
Set read ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120 sectors.
The default value is "\fBauto\fP" which allows the kernel to choose
a suitable value automatically.
"\fBnone\fP" is equivalent to specifying zero.
.
.HP
.BR \-\-refresh
.br
If the logical volume is active, reload its metadata.
This is not necessary in normal operation, but may be useful
if something has gone wrong or if you're doing clustering
manually without a clustered lock manager.
.
.HP
.BR \-\-resync
.br
Forces the complete resynchronization of a mirror. In normal
circumstances you should not need this option because synchronization
happens automatically. Data is read from the primary mirror device
and copied to the others, so this can take a considerable amount of
time - and during this time you are without a complete redundant copy
of your data.
.
.HP
.BR \-\-sysinit
.br
Indicates that \fBlvchange\fP(8) is being invoked from early system
initialisation scripts (e.g. rc.sysinit or an initrd),
before writeable filesystems are available. As such,
some functionality needs to be disabled and this option
acts as a shortcut which selects an appropriate set of options. Currently
this is equivalent to using \fB\-\-ignorelockingfailure\fP,
\fB\-\-ignoremonitoring\fP, \fB\-\-poll n\fP, and setting the
\fBLVM_SUPPRESS_LOCKING_FAILURE_MESSAGES\fP
environment variable.
If \fB\-\-sysinit\fP is used while
\fBlvmetad\fP(8) is enabled and running,
autoactivation is preferred over manual activation via a direct lvchange call.
Logical volumes are autoactivated according to
\fBauto_activation_volume_list\fP set in \fBlvm.conf\fP(5).
.
.HP
.BR \-Z | \-\-zero
.RB { y | n }
.br
Set zeroing mode for the thin pool. Note: blocks already provisioned from a pool
in non-zero mode are not cleared in their unwritten parts when zeroing is later
set to \fBy\fP.
.
.SH ENVIRONMENT VARIABLES
.
.TP
.B LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES
Suppress locking failure messages.
.
.SH Examples
.
Changes the permission on volume lvol1 in volume group vg00 to be read-only:
.sp
.B lvchange \-pr vg00/lvol1
.
.SH SEE ALSO
.
.nh
.BR lvm (8),
.BR lvmetad (8),
.BR lvs (8),
.BR lvcreate (8),
.BR vgchange (8),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvm.conf (5)

man/lvconvert.8.des Normal file

@@ -0,0 +1,38 @@
lvconvert changes the LV type and includes utilities for LV data
maintenance. The LV type controls data layout and redundancy.
The LV type is also called the segment type or segtype.
To display the current LV type, run the command:
.B lvs \-o name,segtype
.I LV
The
.B linear
type is equivalent to the
.B striped
type when one stripe exists.
In that case, the types can sometimes be used interchangeably.
In most cases, the
.B mirror
type is deprecated and the
.B raid1
type should be used. They are both implementations of mirroring.
The
.B raid*
type refers to one of many raid levels, e.g.
.B raid1,
.B raid5.
In some cases, an LV is a single device mapper (dm) layer above physical
devices. In other cases, hidden LVs (dm devices) are layered between the
visible LV and physical devices. LVs in the middle layers are called sub LVs.
A command run on a visible LV sometimes operates on a sub LV rather than
the specified LV. In other cases, a sub LV must be specified directly on
the command line.
Sub LVs can be displayed with the command
.B lvs -a

man/lvconvert.8.end Normal file

@@ -0,0 +1,95 @@
.SH EXAMPLES
Convert a linear LV to a two-way mirror LV.
.br
.B lvconvert \-\-type mirror \-\-mirrors 1 vg/lvol1
Convert a linear LV to a two-way RAID1 LV.
.br
.B lvconvert \-\-type raid1 \-\-mirrors 1 vg/lvol1
Convert a mirror LV to use an in\-memory log.
.br
.B lvconvert \-\-mirrorlog core vg/lvol1
Convert a mirror LV to use a disk log.
.br
.B lvconvert \-\-mirrorlog disk vg/lvol1
Convert a mirror or raid1 LV to a linear LV.
.br
.B lvconvert \-\-type linear vg/lvol1
Convert a mirror LV to a raid1 LV with the same number of images.
.br
.B lvconvert \-\-type raid1 vg/lvol1
Convert a linear LV to a two-way mirror LV, allocating new extents from specific
PV ranges.
.br
.B lvconvert \-\-mirrors 1 vg/lvol1 /dev/sda:0\-15 /dev/sdb:0\-15
Convert a mirror LV to a linear LV, freeing physical extents from a specific PV.
.br
.B lvconvert \-\-type linear vg/lvol1 /dev/sda
Split one image from a mirror or raid1 LV, making it a new LV.
.br
.B lvconvert \-\-splitmirrors 1 \-\-name lv_split vg/lvol1
Split one image from a raid1 LV, and track changes made to the raid1 LV
while the split image remains detached.
.br
.B lvconvert \-\-splitmirrors 1 \-\-trackchanges vg/lvol1
Merge an image (that was previously created with \-\-splitmirrors and
\-\-trackchanges) back into the original raid1 LV.
.br
.B lvconvert \-\-mergemirrors vg/lvol1_rimage_1
Replace PV /dev/sdb1 with PV /dev/sdf1 in a raid1/4/5/6/10 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 vg/lvol1 /dev/sdf1
Replace 3 PVs /dev/sd[b-d]1 with PVs /dev/sd[f-h]1 in a raid1 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 \-\-replace /dev/sdd1
.RS
.B vg/lvol1 /dev/sd[fgh]1
.RE
Replace the maximum of 2 PVs /dev/sd[bc]1 with PVs /dev/sd[gh]1 in a raid6 LV.
.br
.B lvconvert \-\-replace /dev/sdb1 \-\-replace /dev/sdc1 vg/lvol1 /dev/sd[gh]1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV.
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1 vg/lvol1
Convert an LV into a thin LV in the specified thin pool. The existing LV
is used as an external read\-only origin for the new thin LV, and is
renamed "external".
.br
.B lvconvert \-\-type thin \-\-thinpool vg/tpool1
.RS
.B \-\-originname external vg/lvol1
.RE
Convert an LV to a cache pool LV using another specified LV for cache pool
metadata.
.br
.B lvconvert \-\-type cache-pool \-\-poolmetadata vg/poolmeta1 vg/lvol1
Convert an LV to a cache LV using the specified cache pool and chunk size.
.br
.B lvconvert \-\-type cache \-\-cachepool vg/cpool1 \-c 128 vg/lvol1
Detach and keep the cache pool from a cache LV.
.br
.B lvconvert \-\-splitcache vg/lvol1
Detach and remove the cache pool from a cache LV.
.br
.B lvconvert \-\-uncache vg/lvol1

File diff suppressed because it is too large

man/lvcreate.8.des Normal file

@@ -0,0 +1,28 @@
lvcreate creates a new LV in a VG. For standard LVs, this requires
allocating logical extents from the VG's free physical extents. If there
is not enough free space, then the VG can be extended (see
\fBvgextend\fP(8)) with other PVs, or existing LVs can be reduced or
removed (see \fBlvreduce\fP(8), \fBlvremove\fP(8)).
To control which PVs a new LV will use, specify one or more PVs as
position args at the end of the command line. lvcreate will allocate
physical extents only from the specified PVs.
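For example, an illustrative command (the sizes and names are placeholders) such as
\fBlvcreate \-L 10g \-n mylv vg00 /dev/sdb1 /dev/sdc1\fP
would allocate extents only from /dev/sdb1 and /dev/sdc1.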
lvcreate can also create snapshots of existing LVs, e.g. for backup
purposes. The data in a new snapshot LV represents the content of the
original LV from the time the snapshot was created.
RAID LVs can be created by specifying an LV type when creating the LV (see
\fBlvmraid\fP(7)). Different RAID levels require different numbers of
unique PVs to be available in the VG for allocation.
Thin pools (for thin provisioning) and cache pools (for caching) are
represented by special LVs with types thin-pool and cache-pool (see
\fBlvmthin\fP(7) and \fBlvmcache\fP(7)). The pool LVs are not usable as
standard block devices, but the LV names act as references to the pools.
Thin LVs are thinly provisioned from a thin pool, and are created with a
virtual size rather than a physical size. A cache LV is the combination of
a standard LV with a cache pool, used to cache active portions of the LV
to improve performance.

man/lvcreate.8.end Normal file

@@ -0,0 +1,98 @@
.SH EXAMPLES
Create a striped LV with 3 stripes, a stripe size of 8KiB and a size of 100MiB.
The LV name is chosen by lvcreate.
.br
.B lvcreate \-i 3 \-I 8 \-L 100m vg00
Create a raid1 LV with two images, and a usable size of 500 MiB. This
operation requires two devices, one for each mirror image. RAID metadata
(superblock and bitmap) is also included on the two devices.
.br
.B lvcreate \-\-type raid1 \-m1 \-L 500m \-n mylv vg00
Create a mirror LV with two images, and a usable size of 500 MiB.
This operation requires three devices: two for mirror images and
one for a disk log.
.br
.B lvcreate \-\-type mirror \-m1 \-L 500m \-n mylv vg00
Create a mirror LV with 2 images, and a usable size of 500 MiB.
This operation requires 2 devices because the log is in memory.
.br
.B lvcreate \-\-type mirror \-m1 \-\-mirrorlog core \-L 500m \-n mylv vg00
Create a copy\-on\-write snapshot of an LV:
.br
.B lvcreate \-\-snapshot \-\-size 100m \-\-name mysnap vg00/mylv
Create a copy\-on\-write snapshot with a size sufficient
for overwriting 20% of the size of the original LV.
.br
.B lvcreate \-s \-l 20%ORIGIN \-n mysnap vg00/mylv
Create a sparse LV with 1TiB of virtual space, and actual space just under
100MiB.
.br
.B lvcreate \-\-snapshot \-\-virtualsize 1t \-\-size 100m \-\-name mylv vg00
Create a linear LV with a usable size of 64MiB on specific physical extents.
.br
.B lvcreate \-L 64m \-n mylv vg00 /dev/sda:0\-7 /dev/sdb:0\-7
Create a RAID5 LV with a usable size of 5GiB, 3 stripes, a stripe size of
64KiB, using a total of 4 devices (including one for parity).
.br
.B lvcreate \-\-type raid5 \-L 5G \-i 3 \-I 64 \-n mylv vg00
Create a RAID5 LV using all of the free space in the VG and spanning all the
PVs in the VG (note that the command will fail if there are more than 8 PVs in
the VG, in which case \fB\-i 7\fP must be used to get to the current maximum of
8 devices including parity for RaidLVs).
.br
.B lvcreate \-\-config allocation/raid_stripe_all_devices=1
.RS
.B \-\-type raid5 \-l 100%FREE \-n mylv vg00
.RE
Create RAID10 LV with a usable size of 5GiB, using 2 stripes, each on
a two-image mirror. (Note that the \fB-i\fP and \fB-m\fP arguments behave
differently:
\fB-i\fP specifies the total number of stripes,
but \fB-m\fP specifies the number of images in addition
to the first image).
.br
.B lvcreate \-\-type raid10 \-L 5G \-i 2 \-m 1 \-n mylv vg00
Create a 1TiB thin LV, first creating a new thin pool for it, where
the thin pool has 100MiB of space, uses 2 stripes, has a 64KiB stripe
size, and 256KiB chunk size.
.br
.B lvcreate \-\-type thin \-\-name mylv \-\-thinpool mypool
.RS
.B \-V 1t \-L 100m \-i 2 \-I 64 \-c 256 vg00
.RE
Create a thin snapshot of a thin LV (the size option must not be
used, otherwise a copy-on-write snapshot would be created).
.br
.B lvcreate \-\-snapshot \-\-name mysnap vg00/thinvol
Create a thin snapshot of the read-only inactive LV named "origin"
which becomes an external origin for the thin snapshot LV.
.br
.B lvcreate \-\-snapshot \-\-name mysnap \-\-thinpool mypool vg00/origin
Create a cache pool from a fast physical device. The cache pool can
then be used to cache an LV.
.br
.B lvcreate \-\-type cache-pool \-L 1G \-n my_cpool vg00 /dev/fast1
Create a cache LV, first creating a new origin LV on a slow physical device,
then combining the new origin LV with an existing cache pool.
.br
.B lvcreate \-\-type cache \-\-cachepool my_cpool
.RS
.B \-L 100G \-n mylv vg00 /dev/slow1
.RE


@@ -1,914 +0,0 @@
.TH LVCREATE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.
.\" Use 1st. parameter with \% to fix 'man2html' rendeing on same line!
.de SIZE_G
. IR \\$1 \c
. RB [ b | B | s | S | k | K | m | M | g | G ]
..
.de SIZE_E
. IR \\$1 \c
. RB [ b | B | s | S | k | K | m | M | \c
. BR g | G | t | T | p | P | e | E ]
..
.
.SH NAME
.
lvcreate \- create a logical volume in an existing volume group
.
.SH SYNOPSIS
.
.ad l
.B lvcreate
.RB [ \-a | \-\-activate
.RB [ a ][ e | l | s ]{ y | n }]
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-\-alloc
.IR Allocation\%Policy ]
.RB [ \-A | \-\-autobackup
.RB { y | n }]
.RB [ \-H | \-\-cache ]
.RB [ \-\-cachemode
.RB { passthrough | writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR Policy ]
.RB \%[ \-\-cachepool
.IR CachePoolLogicalVolume ]
.RB [ \-\-cachesettings
.IR Key \fB= Value ]
.RB [ \-c | \-\-chunksize
.IR ChunkSize ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB \%[ \-C | \-\-contiguous
.RB { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-discards
.RB \%{ ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RB { y | n }]
.RB [{ \-l | \-\-extents
.BR \fILogicalExtents\%Number [ % { FREE | PVS | VG }]
.RB |
.BR \-L | \-\-size
.BR \fILogicalVolumeSize }
.RB [ \-i | \-\-stripes
.IR Stripes
.RB [ \-I | \-\-stripesize
.IR StripeSize ]]]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-minor
.IR Minor
.RB [ \-j | \-\-major
.IR Major ]]
.RB [ \-\-metadataprofile
.IR Profile\%Name ]
.RB [ \-m | \-\-mirrors
.IR Mirrors
.RB [ \-\-corelog | \-\-mirrorlog
.RB { disk | core | mirrored }]
.RB [ \-\-nosync ]
.RB [ \-R | \-\-regionsize
.BR \fIMirrorLogRegionSize ]]
.RB [ \-\-monitor
.RB { y | n }]
.RB [ \-n | \-\-name
.IR Logical\%Volume ]
.RB [ \-\-noudevsync ]
.RB [ \-p | \-\-permission
.RB { r | rw }]
.RB [ \-M | \-\-persistent
.RB { y | n }]
.\" .RB [ \-\-pooldatasize
.\" .I DataVolumeSize
.RB \%[ \-\-poolmetadatasize
.IR MetadataVolumeSize ]
.RB [ \-\-poolmetadataspare
.RB { y | n }]
.RB [ \-\- [ raid ] maxrecoveryrate
.IR Rate ]
.RB [ \-\- [ raid ] minrecoveryrate
.IR Rate ]
.RB [ \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }]
.RB [ \-\-reportformat
.RB {basic | json}]
.RB \%[ \-k | \-\-setactivationskip
.RB { y | n }]
.RB [ \-s | \-\-snapshot ]
.RB [ \-V | \-\-virtualsize
.IR VirtualSize ]
.RB [ \-t | \-\-test ]
.RB [ \-T | \-\-thin ]
.RB [ \-\-thinpool
.IR ThinPoolLogicalVolume ]
.RB [ \-\-type
.IR SegmentType ]
.RB [ \-v | \-\-verbose ]
.RB [ \-W | \-\-wipesignatures
.RB { y | n }]
.RB [ \-Z | \-\-zero
.RB { y | n }]
.RI [ VolumeGroup
.RI |
.RI \%{ ExternalOrigin | Origin | Pool } LogicalVolume
.RI \%[ PhysicalVolumePath [ \fB: \fIPE \fR[ \fB\- PE ]]...]]
.LP
.B lvcreate
.RB [ \-l | \-\-extents
.BR \fILogicalExtentsNumber [ % { FREE | ORIGIN | PVS | VG }]
|
.BR \-L | \-\-size
.\" | \-\-pooldatasize
.IR LogicalVolumeSize ]
.RB [ \-c | \-\-chunksize
.IR ChunkSize ]
.RB \%[ \-\-commandprofile
.IR Profile\%Name ]
.RB [ \-\-noudevsync ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-metadataprofile
.IR Profile\%Name ]
.RB \%[ \-\-monitor
.RB { y | n }]
.RB [ \-n | \-\-name
.IR SnapshotLogicalVolumeName ]
.RB [ \-\-reportformat
.RB {basic | json}]
.BR \-s | \-\-snapshot | \-H | \-\-cache
.RI \%{[ VolumeGroup \fB/\fP] OriginalLogicalVolume
.RB \%[ \-V | \-\-virtualsize
.IR VirtualSize ]}
.ad b
.
.SH DESCRIPTION
.
lvcreate creates a new logical volume in a volume group (see
.BR vgcreate "(8), " vgchange (8))
by allocating logical extents from the free physical extent pool
of that volume group. If there are not enough free physical extents then
the volume group can be extended (see
.BR vgextend (8))
with other physical volumes or by reducing existing logical volumes
of this volume group in size (see
.BR lvreduce (8)).
If you specify one or more PhysicalVolumes, allocation of physical
extents will be restricted to these volumes.
.br
.br
The second form supports the creation of snapshot logical volumes which
keep the contents of the original logical volume for backup purposes.
.
.SH OPTIONS
.
See
.BR lvm (8)
for common options.
.
.HP
.BR \-a | \-\-activate
.RB [ a ][ l | e | s ]{ y | n }
.br
Controls the availability of the Logical Volumes for immediate use after
the command finishes running.
By default, new Logical Volumes are activated (\fB\-ay\fP).
If it is technically possible, \fB\-an\fP will leave the new Logical
Volume inactive. But, for example, snapshots of an active origin can only be
created in the active state, so \fB\-an\fP cannot be used with
\fB\-\-type snapshot\fP. This does not apply to thin volume snapshots,
which are by default created with a flag to skip their activation
(\fB\-ky\fP).
Normally the \fB\-\-zero n\fP argument has to be supplied too because
zeroing (the default behaviour) also requires activation.
If autoactivation option is used (\fB\-aay\fP), the logical volume is
activated only if it matches an item in the
\fBactivation/auto_activation_volume_list\fP
set in \fBlvm.conf\fP(5).
For autoactivated logical volumes, \fB\-\-zero n\fP and
\fB\-\-wipesignatures n\fP are always assumed and cannot
be overridden. If clustered locking is enabled,
\fB\-aey\fP will activate exclusively on one node and
.BR \-a { a | l } y
will activate only on the local node.
.
.HP
.BR \-H | \-\-cache
.br
Creates a cache or cache pool logical volume.
.\" or both.
Specifying the optional argument \fB\-\-extents\fP or \fB\-\-size\fP
will cause the creation of the cache logical volume.
.\" Specifying the optional argument \fB\-\-pooldatasize\fP will cause
.\" the creation of the cache pool logical volume.
.\" Specifying both arguments will cause the creation of cache with its
.\" cache pool volume.
When the volume group name is specified together with an existing logical volume
name which is NOT a cache pool name, that volume is treated
as the cache origin volume and a cache pool is created. In this case
\fB\-\-extents\fP or \fB\-\-size\fP is used to specify the size of the cache pool volume.
See \fBlvmcache\fP(7) for more info about caching support.
Note that the cache segment type requires a dm-cache kernel module version
1.3.0 or greater.
.
.HP
.BR \-\-cachemode
.RB { passthrough | writeback | writethrough }
.br
Specifying a cache mode determines when the writes to a cache LV
are considered complete. When \fBwriteback\fP is specified, a write is
considered complete as soon as it is stored in the cache pool LV.
If \fBwritethrough\fP is specified, a write is considered complete only
when it has been stored in the cache pool LV and on the origin LV.
While \fBwritethrough\fP may be slower for writes, it is more
resilient if something should happen to a device associated with the
cache pool LV. With \fBpassthrough\fP mode, all reads are served
from origin LV (all reads miss the cache) and all writes are
forwarded to the origin LV; additionally, write hits cause cache
block invalidates. See \fBlvmcache(7)\fP for more details.
.
.HP
.BR \-\-cachepolicy
.IR Policy
.br
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache policy. \fBmq\fP is the basic policy name. \fBsmq\fP is a more advanced
version available in newer kernels.
.
.HP
.BR \-\-cachepool
.IR CachePoolLogicalVolume { Name | Path }
.br
Specifies the name of the cache pool volume. Alternatively, the pool name
can be appended to the volume group name argument.
.
.HP
.BR \-\-cachesettings
.IB Key = Value
.br
Only applicable to cached LVs; see also \fBlvmcache(7)\fP. Sets
the cache tunable settings. In most use-cases, default values should be adequate.
The special string value \fBdefault\fP switches a setting back to its default kernel value
and removes it from the list of settings stored in lvm2 metadata.
.
.HP
.BR \-c | \-\-chunksize
.SIZE_G \%ChunkSize
.br
Gives the size of chunk for snapshot, cache pool and thin pool logical volumes.
Default unit is kilobytes.
.br
For snapshots the value must be a power of 2 between 4KiB and 512KiB
and the default value is 4KiB.
.br
For cache pools the value must be a multiple of 32KiB
between 32KiB and 1GiB. The default is 64KiB.
When caching a volume, the specified chunk size may not be smaller
than the chunk size used when the cache pool was created.
.br
For thin pools the value must be a multiple of 64KiB
between 64KiB and 1GiB.
Default value starts with 64KiB and grows up to
fit the pool metadata size within 128MiB,
if the pool metadata size is not specified.
See
.BR lvm.conf (5)
setting \fBallocation/thin_pool_chunk_size_policy\fP
to select different calculation policy.
Thin pool target version <1.4 requires this value to be a power of 2.
For target version <1.5 discard is not supported for non power of 2 values.
.
.HP
.BR \-C | \-\-contiguous
.RB { y | n }
.br
Sets or resets the contiguous allocation policy for
logical volumes. The default is non-contiguous allocation based
on a next-free principle.
.
.HP
.BR \-\-corelog
.br
This is a shortcut for the option \fB\-\-mirrorlog core\fP.
.
.HP
.BR \-\-discards
.RB { ignore | nopassdown | passdown }
.br
Sets discards behavior for thin pool.
Default is \fBpassdown\fP.
.
.HP
.BR \-\-errorwhenfull
.RB { y | n }
.br
Configures thin pool behaviour when data space is exhausted.
Default is \fBn\fPo.
The device will queue I/O operations until the target timeout
(see the dm-thin-pool kernel module option \fBno_space_timeout\fP)
expires, giving the system time to, for example, extend
the size of the thin pool data device.
When set to \fBy\fPes, the I/O operation is immediately errored.
.
.HP
.BR \-K | \-\-ignoreactivationskip
.br
Ignore the flag to skip Logical Volumes during activation.
Use the \fB\-\-setactivationskip\fP option to set or reset the
activation skip flag persistently for a logical volume.
.
.HP
.BR \-\-ignoremonitoring
.br
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
.
.HP
.BR -l | \-\-extents
.IR LogicalExtentsNumber \c
.RB [ % { VG | PVS | FREE | ORIGIN }]
.br
Specifies the size of the new LV in logical extents. The number of
physical extents allocated may be different, and depends on the LV type.
Certain LV types require more physical extents for data redundancy or
metadata. An alternate syntax allows the size to be determined indirectly
as a percentage of the size of a related VG, LV, or set of PVs. The
suffix \fB%VG\fP denotes the total size of the VG, the suffix \fB%FREE\fP
the remaining free space in the VG, and the suffix \fB%PVS\fP the free
space in the specified Physical Volumes. For a snapshot, the size
can be expressed as a percentage of the total size of the Origin Logical
Volume with the suffix \fB%ORIGIN\fP (\fB100%ORIGIN\fP provides space for
the whole origin).
When expressed as a percentage, the size defines an upper limit for the
number of logical extents in the new LV. The precise number of logical
extents in the new LV is not determined until the command has completed.
.
.HP
.BR \-j | \-\-major
.IR Major
.br
Sets the major number.
Major numbers are not supported with pool volumes.
This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Uses and attaches the \fIProfileName\fP configuration profile to the logical
volume metadata. Whenever the logical volume is processed next time,
the profile is automatically applied. If the volume group has another
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about \fBmetadata profiles\fP.
.
.HP
.BR \-\-minor
.IR Minor
.br
Sets the minor number.
Minor numbers are not supported with pool volumes.
.
.HP
.BR \-m | \-\-mirrors
.IR mirrors
.br
Creates a mirrored logical volume with \fImirrors\fP copies.
For example, specifying \fB\-m 1\fP
would result in a mirror with two-sides; that is,
a linear volume plus one copy.
Specifying the optional argument \fB\-\-nosync\fP will cause the creation
of the mirror LV to skip the initial resynchronization. Any data written
afterwards will be mirrored, but the original contents will not be copied.
This is useful for skipping a potentially long and resource intensive initial
sync of an empty mirrored RaidLV.
There are two implementations of mirroring which can be used and correspond
to the "\fIraid1\fP" and "\fImirror\fP" segment types.
The default is "\fIraid1\fP". See the
\fB\-\-type\fP option for more information if you would like to use the
legacy "\fImirror\fP" segment type. See
.BR lvm.conf (5)
settings \fBglobal/mirror_segtype_default\fP
and \fBglobal/raid10_segtype_default\fP
to configure default mirror segment type.
The options
\fB\-\-mirrorlog\fP and \fB\-\-corelog\fP apply
to the legacy "\fImirror\fP" segment type only.
Note the current maxima for mirrors are 7 for "mirror" providing
8 mirror legs and 9 for "raid1" providing 10 legs.
.
.HP
.BR \-\-mirrorlog
.RB { disk | core | mirrored }
.br
Specifies the type of log to be used for logical volumes utilizing
the legacy "\fImirror\fP" segment type.
.br
The default is \fBdisk\fP, which is persistent and requires
a small amount of storage space, usually on a separate device from the
data being mirrored.
.br
Using \fBcore\fP means the mirror is regenerated by copying the data
from the first device each time the logical volume is activated,
like after every reboot.
.br
Using \fBmirrored\fP will create a persistent log that is itself mirrored.
.
.HP
.BR \-\-monitor
.RB { y | n }
.br
Starts or avoids monitoring a mirrored, snapshot or thin pool logical volume with
dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\fBactivation/mirror_image_fault_policy\fP
and \fBactivation/mirror_log_fault_policy\fP
set in \fBlvm.conf\fP(5).
.
.HP
.BR \-n | \-\-name
.IR LogicalVolume { Name | Path }
.br
Sets the name for the new logical volume.
.br
Without this option a default name of "lvol#" will be generated where
# is the LVM internal number of the logical volume.
.
.HP
.BR \-\-nosync
.br
Causes the creation of mirror, raid1, raid4, raid5 and raid10 to skip the
initial resynchronization. In case of mirror, raid1 and raid10, any data
written afterwards will be mirrored, but the original contents will not be
copied. In case of raid4 and raid5, no parity blocks will be written,
though any data written afterwards will cause parity blocks to be stored.
.br
This is useful for skipping a potentially long and resource intensive initial
sync of an empty mirror/raid1/raid4/raid5 and raid10 LV.
.br
This option is not valid for raid6, because raid6 relies on proper parity
(P and Q Syndromes) being created during initial synchronization in order
to reconstruct proper user data in case of device failures.
raid0 and raid0_meta don't provide any data copies or parity support
and thus don't support initial resynchronization.
.
.HP
.BR \-\-noudevsync
.br
Disables udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.
.HP
.BR \-p | \-\-permission
.RB { r | rw }
.br
Sets access permissions to read only (\fBr\fP) or read and write (\fBrw\fP).
.br
Default is read and write.
.
.HP
.BR \-M | \-\-persistent
.RB { y | n }
.br
Set to \fBy\fP to make the minor number specified persistent.
Pool volumes cannot have persistent major and minor numbers.
Defaults to \fBy\fPes only when major or minor number is specified.
Otherwise it is \fBn\fPo.
.\" .HP
.\" .IR \fB\-\-pooldatasize " " PoolDataVolumeSize [ bBsSkKmMgGtTpPeE ]
.\" Sets the size of pool's data logical volume.
.\" For thin pools you may also specify the size
.\" with the option \fB\-\-size\fP.
.\"
.
.HP
.BR \-\-poolmetadatasize
.SIZE_G \%MetadataVolumeSize
.br
Sets the size of pool's metadata logical volume.
Supported values are in the range between 2MiB and 16GiB for a thin pool,
and up to 16GiB for a cache pool. The minimum value is computed from the pool's
data size.
The default value for a thin pool is (Pool_LV_size / Pool_LV_chunk_size * 64b).
To work with a thin pool, there should be at least 25% of free space
when the size of the metadata is smaller than 16MiB,
or at least 4MiB of free space otherwise.
Default unit is megabytes.
.
.HP
.BR \-\-poolmetadataspare
.RB { y | n }
.br
Controls creation and maintenance of the pool metadata spare logical volume
that will be used for automated pool recovery.
Only one such volume is maintained within a volume group
with the size of the biggest pool metadata volume.
Default is \fBy\fPes.
.
.HP
.BR \-\- [ raid ] maxrecoveryrate
.SIZE_G \%Rate
.br
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
.
.HP
.BR \-\- [ raid ] minrecoveryrate
.SIZE_G \%Rate
.br
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
.
.HP
.BR \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }
.br
Sets read ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120.
The default value is \fBauto\fP which allows the kernel to choose
a suitable value automatically.
\fBnone\fP is equivalent to specifying zero.
.
.HP
.BR \-R | \-\-regionsize
.SIZE_G \%MirrorLogRegionSize
.br
A mirror is divided into regions of this size (in MiB), and the mirror log
uses this granularity to track which regions are in sync.
.
.HP
.BR \-k | \-\-setactivationskip
.RB { y | n }
.br
Controls whether Logical Volumes are persistently flagged to be skipped during
activation. By default, thin snapshot volumes are flagged for activation skip.
See
.BR lvm.conf (5)
\fBactivation/auto_set_activation_skip\fP
how to change its default behaviour.
To activate such volumes, an extra \fB\-\-ignoreactivationskip\fP
option must be used. The flag is not applied during deactivation. Use
\fBlvchange \-\-setactivationskip\fP
command to change the skip flag for existing volumes.
To see whether the flag is attached, use \fBlvs\fP command
where the state of the flag is reported within \fBlv_attr\fP bits.
.
.HP
.BR \-L | \-\-size
.SIZE_E \%LogicalVolumeSize
.br
Gives the size to allocate for the new logical volume.
A size suffix of \fBB\fP for bytes, \fBS\fP for sectors as 512 bytes,
\fBK\fP for kilobytes, \fBM\fP for megabytes,
\fBG\fP for gigabytes, \fBT\fP for terabytes, \fBP\fP for petabytes
or \fBE\fP for exabytes is optional.
.br
Default unit is megabytes.
.
.HP
.BR \-s | \fB\-\-snapshot
.IR OriginalLogicalVolume { Name | Path }
.br
Creates a snapshot logical volume (or snapshot) for an existing, so-called
original logical volume (or origin).
Snapshots provide a 'frozen image' of the contents of the origin
while the origin can still be updated. They enable consistent
backups and online recovery of removed/overwritten data/files.
.br
A thin snapshot is created when the origin is a thin volume and
the size IS NOT specified. A thin snapshot shares the same blocks within
the thin pool volume.
A non-thin snapshot with a specified size does not need
the same amount of storage the origin has. In a typical scenario,
15-20% might be enough. In case the snapshot runs out of storage, use
.BR lvextend (8)
to grow it. Shrinking a snapshot is supported by
.BR lvreduce (8)
as well. Run
.BR lvs (8)
on the snapshot in order to check how much data is allocated to it.
Note: a small amount of the space you allocate to the snapshot is
used to track the locations of the chunks of data, so you should
allocate slightly more space than you actually need and monitor
(\fB\-\-monitor\fP) the rate at which the snapshot data is growing
so you can \fBavoid\fP running out of space.
If \fB\-\-thinpool\fP is specified, a thin volume is created that will
use the given original logical volume as an external origin that
serves unprovisioned blocks.
Only read-only volumes can be used as external origins.
To make the volume an external origin, lvm expects the volume to be inactive.
An external origin volume can be used/shared by many thin volumes
even from different thin pools. See
.BR lvconvert (8)
for online conversion to thin volumes with external origin.
.
.HP
.BR \-i | \-\-stripes
.IR Stripes
.br
Gives the number of stripes.
This is equal to the number of physical volumes over which to scatter
the logical volume data. When creating a RAID 4/5/6 logical volume,
the extra devices which are necessary for parity are
internally accounted for. Specifying \fB\-i 3\fP
would cause 3 devices for striped and RAID 0 logical volumes,
4 devices for RAID 4/5, 5 devices for RAID 6 and 6 devices for RAID 10.
Alternatively, RAID 0 will stripe across 2 devices,
RAID 4/5 across 3 PVs, RAID 6 across 5 PVs and RAID 10 across
4 PVs in the volume group if the \fB\-i\fP argument is omitted.
In order to stripe across all PVs of the VG if the \fB\-i\fP argument is
omitted, set raid_stripe_all_devices=1 in the allocation
section of \fBlvm.conf (5)\fP or add
.br
\fB\-\-config allocation/raid_stripe_all_devices=1\fP
.br
to the command.
Note the current maxima for stripes depend on the created RAID type.
For raid10, the maximum number of stripes is 32,
for raid0 it is 64,
for raid4/5 it is 63,
and for raid6 it is 62.
See the \fB\-\-nosync\fP option to optionally avoid initial synchronization of RaidLVs.
Two implementations of basic striping are available in the kernel.
The original device-mapper implementation is the default and should
normally be used. The alternative implementation using MD, available
since version 1.7 of the RAID device-mapper kernel target (kernel
version 4.2) is provided to facilitate the development of new RAID
features. It may be accessed with \fB--type raid0[_meta]\fP, but is best
avoided at present because of assorted restrictions on resizing and converting
such devices.
.HP
.BR \-I | \-\-stripesize
.IR StripeSize
.br
Gives the number of kilobytes for the granularity of the stripes.
.br
StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1 format.
For metadata in LVM2 format, the stripe size may be a larger
power of 2 but must not exceed the physical extent size.
.
.HP
.BR \-T | \-\-thin
.br
Creates a thin pool or a thin logical volume, or both.
Specifying the optional argument \fB\-\-size\fP or \fB\-\-extents\fP
will cause the creation of the thin pool logical volume.
Specifying the optional argument \fB\-\-virtualsize\fP will cause
the creation of the thin logical volume from given thin pool volume.
Specifying both arguments will cause the creation of both
thin pool and thin volume using this pool.
See \fBlvmthin\fP(7) for more info about thin provisioning support.
Thin provisioning requires device mapper kernel driver
from kernel 3.2 or greater.
.
.HP
.BR \-\-thinpool
.IR ThinPoolLogicalVolume { Name | Path }
.br
Specifies the name of the thin pool volume. Alternatively, the pool name
can be appended to the volume group name argument.
.
.HP
.BR \-\-type
.IR SegmentType
.br
Creates a logical volume with the specified segment type.
Supported types are:
.BR cache ,
.BR cache-pool ,
.BR error ,
.BR linear ,
.BR mirror ,
.BR raid0 ,
.BR raid1 ,
.BR raid4 ,
.BR raid5_la ,
.BR raid5_ls
.RB (=
.BR raid5 ),
.BR raid5_ra ,
.BR raid5_rs ,
.BR raid6_nc ,
.BR raid6_nr ,
.BR raid6_zr
.RB (=
.BR raid6 ),
.BR raid10 ,
.BR snapshot ,
.BR striped ,
.BR thin ,
.BR thin-pool
or
.BR zero .
Segment type may have a commandline switch alias that will
enable its use.
When the type is not explicitly specified an implicit type
is selected from combination of options:
.BR \-H | \-\-cache | \-\-cachepool
(cache or cachepool),
.BR \-T | \-\-thin | \-\-thinpool
(thin or thinpool),
.BR \-m | \-\-mirrors
(raid1 or mirror),
.BR \-s | \-\-snapshot | \-V | \-\-virtualsize
(snapshot or thin),
.BR \-i | \-\-stripes
(striped).
The default segment type is \fBlinear\fP.
.
.HP
.BR \-V | \-\-virtualsize
.SIZE_E \%VirtualSize
.br
Creates a thinly provisioned device or a sparse device of the given size (in MiB by default).
See
.BR lvm.conf (5)
settings \fBglobal/sparse_segtype_default\fP
to configure default sparse segment type.
See \fBlvmthin\fP(7) for more info about thin provisioning support.
Anything written to a sparse snapshot will be returned when reading from it.
Reading from other areas of the device will return blocks of zeros.
Virtual snapshot (sparse snapshot) is implemented by creating
a hidden virtual device of the requested size using the zero target.
A suffix of _vorigin is used for this device.
Note: using sparse snapshots is not efficient for larger
device sizes (GiB); thin provisioning should be used in this case.
.
.HP
.BR \-W | \-\-wipesignatures
.RB { y | n }
.br
Controls detection and subsequent wiping of signatures on newly created
Logical Volume. There's a prompt for each signature detected to confirm
its wiping (unless \fB--yes\fP is used where LVM assumes 'yes' answer
for each prompt automatically). If this option is not specified, then by
default \fB-W\fP | \fB--wipesignatures y\fP is assumed each time the
zeroing is done (\fB\-Z\fP | \fB\-\-zero y\fP). This default behaviour
can be controlled by \fB\%allocation/wipe_signatures_when_zeroing_new_lvs\fP
setting found in
.BR lvm.conf (5).
.br
If blkid wiping is used (\fBallocation/use_blkid_wiping\fP setting in
.BR lvm.conf (5))
and LVM2 is compiled with blkid wiping support, then \fBblkid\fP(8) library is used
to detect the signatures (use \fBblkid \-k\fP command to list the signatures that are recognized).
Otherwise, native LVM2 code is used to detect signatures (MD RAID, swap and LUKS
signatures are detected only in this case).
.br
Logical volume is not wiped if the read only flag is set.
.
.HP
.BR \-Z | \-\-zero
.RB { y | n }
.br
Controls zeroing of the first 4KiB of data in the new logical volume.
Default is \fBy\fPes.
Snapshot COW volumes are always zeroed.
Logical volume is not zeroed if the read only flag is set.
.br
Warning: trying to mount an unzeroed logical volume can cause the system to
hang.
.
.SH Examples
.
Creates a striped logical volume with 3 stripes, a stripe size of 8KiB
and a size of 100MiB in the volume group named vg00.
The logical volume name will be chosen by lvcreate:
.sp
.B lvcreate \-i 3 \-I 8 \-L 100M vg00
Creates a mirror logical volume with 2 sides with a usable size of 500 MiB.
This operation would require 3 devices (or option
\fB\-\-alloc \%anywhere\fP) - two for the mirror
devices and one for the disk log:
.sp
.B lvcreate \-m1 \-L 500M vg00
Creates a mirror logical volume with 2 sides with a usable size of 500 MiB.
This operation would require 2 devices - the log is "in-memory":
.sp
.B lvcreate \-m1 \-\-mirrorlog core \-L 500M vg00
Creates a snapshot logical volume named "vg00/snap" which has access to the
contents of the original logical volume named "vg00/lvol1"
at snapshot logical volume creation time. If the original logical volume
contains a file system, you can mount the snapshot logical volume on an
arbitrary directory in order to access the contents of the filesystem to run
a backup while the original filesystem continues to get updated:
.sp
.B lvcreate \-\-size 100m \-\-snapshot \-\-name snap /dev/vg00/lvol1
Creates a snapshot logical volume named "vg00/snap" with size
for overwriting 20% of the original logical volume named "vg00/lvol1":
.sp
.B lvcreate \-s \-l 20%ORIGIN \-\-name snap vg00/lvol1
Creates a sparse device named /dev/vg1/sparse of size 1TiB with space for just
under 100MiB of actual data on it:
.sp
.B lvcreate \-\-virtualsize 1T \-\-size 100M \-\-snapshot \-\-name sparse vg1
Creates a linear logical volume "vg00/lvol1" using physical extents
/dev/sda:0\-7 and /dev/sdb:0\-7 for allocation of extents:
.sp
.B lvcreate \-L 64M \-n lvol1 vg00 /dev/sda:0\-7 /dev/sdb:0\-7
Creates a 5GiB RAID5 logical volume "vg00/my_lv", with 3 stripes (plus
a parity drive for a total of 4 devices) and a stripesize of 64KiB:
.sp
.B lvcreate \-\-type raid5 \-L 5G \-i 3 \-I 64 \-n my_lv vg00
Creates a RAID5 logical volume "vg00/my_lv", using all of the free
space in the VG and spanning all the PVs in the VG (note that the command
will fail if there are more than 8 PVs in the VG, in which case \fB\-i 7\fP
has to be used to get to the currently possible maximum of
8 devices including parity for RaidLVs):
.sp
.B lvcreate \-\-config allocation/raid_stripe_all_devices=1 \-\-type raid5 \-l 100%FREE \-n my_lv vg00
Creates a 5GiB RAID10 logical volume "vg00/my_lv", with 2 stripes on
2 2-way mirrors. Note that the \fB-i\fP and \fB-m\fP arguments behave
differently.
The \fB-i\fP specifies the number of stripes.
The \fB-m\fP specifies the number of
.B additional
copies:
.sp
.B lvcreate \-\-type raid10 \-L 5G \-i 2 \-m 1 \-n my_lv vg00
Creates a 100MiB pool logical volume for thin provisioning,
built with 2 stripes, a 64KiB stripe size and a 256KiB chunk size, together with
a 1TiB thin provisioned logical volume "vg00/thin_lv":
.sp
.B lvcreate \-i 2 \-I 64 \-c 256 \-L100M \-T vg00/pool \-V 1T \-\-name thin_lv
Creates a thin snapshot volume "thinsnap" of thin volume "thinvol" that
will share the same blocks within the thin pool.
Note: the size MUST NOT be specified, otherwise the non-thin snapshot
is created instead:
.sp
.B lvcreate \-s vg00/thinvol \-\-name thinsnap
Creates a thin snapshot volume of read-only inactive volume "origin"
which then becomes the thin external origin for the thin snapshot volume
in vg00 that will use an existing thin pool "vg00/pool":
.sp
.B lvcreate \-s \-\-thinpool vg00/pool origin
Create a cache pool LV that can later be used to cache one
logical volume.
.sp
.B lvcreate \-\-type cache-pool \-L 1G \-n my_lv_cachepool vg /dev/fast1
If there is an existing cache pool LV, create the large slow
device (i.e. the origin LV) and link it to the supplied cache pool LV,
creating a cache LV.
.sp
.B lvcreate \-\-cache \-L 100G \-n my_lv vg/my_lv_cachepool /dev/slow1
If there is an existing logical volume, create the small and fast
cache pool LV and link it to the supplied existing logical
volume (i.e. the origin LV), creating a cache LV.
.sp
.B lvcreate \-\-type cache \-L 1G \-n my_lv_cachepool vg/my_lv /dev/fast1
.\" Create a 1G cached LV "lvol1" with 10M cache pool "vg00/pool".
.\" .sp
.\" .B lvcreate \-\-cache \-L 1G \-n lv \-\-pooldatasize 10M vg00/pool
.
.SH SEE ALSO
.
.nh
.BR lvm (8),
.BR lvm.conf (5),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvconvert (8),
.BR lvchange (8),
.BR lvextend (8),
.BR lvreduce (8),
.BR lvremove (8),
.BR lvrename (8)
.BR lvs (8),
.BR lvscan (8),
.BR vgcreate (8),
.BR blkid (8)

man/lvdisplay.8.des Normal file

@@ -0,0 +1,5 @@
lvdisplay shows the attributes of LVs, like size, read/write status,
snapshot information, etc.
\fBlvs\fP(8) is a preferred alternative that shows the same information
and more, using a more compact and configurable output format.


@@ -1,134 +0,0 @@
.TH LVDISPLAY 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvdisplay \(em display attributes of a logical volume
.SH SYNOPSIS
.B lvdisplay
.RB [ \-a | \-\-all ]
.RB [ \-c | \-\-colon ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-\-ignorelockingfailure ]
.RB [ \-\-ignoreskippedcluster ]
.RB [ \-\-maps ]
.RB [ \-\-nosuffix ]
.RB [ \-P | \-\-partial ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-\-units
.IR hHbBsSkKmMgGtTpPeE ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RI [ VolumeGroupName | LogicalVolume { Name | Path }\ ...]
.br
.B lvdisplay
.BR \-C | \-\-columns
.RB [ \-\-aligned ]
.RB [ \-\-binary ]
.RB [ \-a | \-\-all ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [[ \-\-configreport
.IR ReportName ]
.RB [ \-o | \-\-options
.RI [ + | \- | # ] Field1 [, Field2 ...]
.RB [ \-O | \-\-sort
.RI [ + | \- ] Key1 [, Key2 ...]]
.RB [ \-S | \-\-select
.IR Selection ]
.RB ...]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-\-ignorelockingfailure ]
.RB [ \-\-ignoreskippedcluster ]
.RB [ \-\-logonly ]
.RB [ \-\-noheadings ]
.RB [ \-\-nosuffix ]
.RB [ \-P | \-\-partial ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-\-segments ]
.RB [ \-\-separator
.IR Separator ]
.RB [ \-\-unbuffered ]
.RB [ \-\-units
.IR hHbBsSkKmMgGtTpPeE ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RI [ VolumeGroupName | LogicalVolume { Name | Path }\ ...]
.SH DESCRIPTION
lvdisplay allows you to see the attributes of a logical volume
like size, read/write status, snapshot information etc.
.P
\fBlvs\fP(8) is an alternative that provides the same information
in the style of \fBps\fP(1).
\fBlvs\fP(8) is recommended over \fBlvdisplay\fP.
.SH OPTIONS
See \fBlvm\fP(8) for common options and \fBlvs\fP for options given with
\fB\-\-columns\fP.
.TP
.B \-\-all
Include information in the output about internal Logical Volumes that
are components of normally-accessible Logical Volumes, such as mirrors,
but which are not independently accessible (e.g. not mountable).
For example, after creating a mirror using
\fBlvcreate \-m1 \-\-mirrorlog disk\fP,
this option will reveal three internal Logical Volumes, with suffixes
mimage_0, mimage_1, and mlog.
.TP
.BR \-C ", " \-\-columns
Display output in columns, the equivalent of \fBlvs\fP(8).
Options listed are the same as options given in \fBlvs\fP(8).
.TP
.BR \-c ", " \-\-colon
Generate colon separated output for easier parsing in scripts or programs.
N.B. \fBlvs\fP(8) provides considerably more control over the output.
.nf
The values are:
\(bu logical volume name
\(bu volume group name
\(bu logical volume access
\(bu logical volume status
\(bu internal logical volume number
\(bu open count of logical volume
\(bu logical volume size in sectors
\(bu current logical extents associated to logical volume
\(bu allocated logical extents of logical volume
\(bu allocation policy of logical volume
\(bu read ahead sectors of logical volume
\(bu major device number of logical volume
\(bu minor device number of logical volume
.fi
.TP
.BR \-m ", " \-\-maps
Display the mapping of logical extents to physical volumes and
physical extents. To map physical extents
to logical extents use:
.B pvs \-\-segments \-o+lv_name,seg_start_pe,segtype
.SH Examples
Shows attributes of that logical volume. If snapshot
logical volumes have been created for this original logical volume,
this command shows a list of all snapshot logical volumes and their
status (active or inactive) as well:
.sp
.B lvdisplay \-v vg00/lvol2
Shows the attributes of this snapshot logical volume and also which
original logical volume it is associated with:
.sp
.B lvdisplay vg00/snapshot
.SH SEE ALSO
.BR lvm (8),
.BR lvcreate (8),
.BR lvs (8),
.BR lvscan (8),
.BR pvs (8)

man/lvextend.8.des Normal file

@@ -0,0 +1,5 @@
lvextend extends the size of an LV. This requires allocating logical
extents from the VG's free physical extents. A copy\-on\-write snapshot LV
can also be extended to provide more space to hold COW blocks. Use
\fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.

man/lvextend.8.end Normal file

@@ -0,0 +1,16 @@
.SH EXAMPLES
Extend the size of an LV by 54MiB, using a specific PV.
.br
.B lvextend \-L +54 vg01/lvol10 /dev/sdk3
Extend the size of an LV by the amount of free
space on PV /dev/sdk3. This is equivalent to specifying
"\-l +100%PVS" on the command line.
.br
.B lvextend vg01/lvol01 /dev/sdk3
Extend an LV by 16MiB using specific physical extents.
.br
.B lvextend \-L+16m vg01/lvol01 /dev/sda:8\-9 /dev/sdb:8\-9


@@ -1,134 +0,0 @@
.TH LVEXTEND 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvextend \(em extend the size of a logical volume
.SH SYNOPSIS
.B lvextend
.RB [ \-\-alloc
.IR AllocationPolicy ]
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-f | \-\-force ]
.RB [ \-i | \-\-stripes
.I Stripes
.RB [ \-I | \-\-stripesize
.IR StripeSize ]]
.RB { \-l | \-\-extents
.RI [ + ] LogicalExtentsNumber [ % { VG | LV | PVS | FREE | ORIGIN }]
|
.BR \-L | \-\-size
.RI [ + ] LogicalVolumeSize [ bBsSkKmMgGtTpPeE ]}
.RB [ \-n | \-\-nofsck ]
.RB [ \-\-noudevsync]
.RB [ \-r | \-\-resizefs ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-\-use\-policies ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.I LogicalVolumePath
.RI [ PhysicalVolumePath [ :PE [ \-PE ]]...]
.SH DESCRIPTION
lvextend allows you to extend the size of a logical volume.
Extension of snapshot logical volumes (see
.BR lvcreate (8)
for information to create snapshots) is supported as well.
But to change the number of copies in a mirrored logical
volume use
.BR lvconvert (8).
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-f ", " \-\-force
Proceed with size extension without prompting.
.TP
.IR \fB\-l ", " \fB\-\-extents " [" + ] LogicalExtentsNumber [ % { VG | LV | PVS | FREE | ORIGIN }]
Extend or set the logical volume size in units of logical extents.
With the '\fI+\fP' sign the value is added to the actual size
of the logical volume and without it, the value is taken as an absolute one.
The total number of physical extents allocated will be
greater than this, for example, if the volume is mirrored.
The number can also be expressed as a percentage of the total space
in the Volume Group with the suffix \fI%VG\fP, relative to the existing
size of the Logical Volume with the suffix \fI%LV\fP, of the remaining
free space for the specified PhysicalVolume(s) with the suffix \fI%PVS\fP,
as a percentage of the remaining free space in the Volume Group
with the suffix \fI%FREE\fP, or (for a snapshot) as a percentage of the total
space in the Origin Logical Volume with the suffix \fI%ORIGIN\fP.
The resulting value is rounded upward.
N.B. In a future release, when expressed as a percentage with PVS, VG or FREE,
the number will be treated as an approximate upper limit for the total number
of physical extents to be allocated (including extents used by any mirrors, for
example). The code may currently allocate more space than you might otherwise
expect.
.TP
.IR \fB\-L ", " \fB\-\-size " [" + ] LogicalVolumeSize [ bBsSkKmMgGtTpPeE ]
Extend or set the logical volume size in units of megabytes.
A size suffix of M for megabytes,
G for gigabytes, T for terabytes, P for petabytes
or E for exabytes is optional.
With the + sign the value is added to the actual size
of the logical volume and without it, the value is taken as an absolute one.
.TP
.BR \-i ", " \-\-stripes " " \fIStripes
Gives the number of stripes for the extension.
Not applicable to LVs using the original metadata LVM format, which must
use a single value throughout.
.TP
.BR \-I ", " \-\-stripesize " " \fIStripeSize
Gives the number of kilobytes for the granularity of the stripes.
Not applicable to LVs using the original metadata LVM format, which must
use a single value throughout.
.br
StripeSize must be 2^n (n = 2 to 9)
.TP
.BR \-n ", " \-\-nofsck
Do not perform fsck before extending filesystem when filesystem
requires it. You may need to use \fB\-\-force\fR to proceed with
this option.
.TP
.B \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.TP
.BR \-r ", " \-\-resizefs
Resize underlying filesystem together with the logical volume using
\fBfsadm\fR(8).
.TP
.B \-\-use\-policies
Resizes the logical volume according to configured policy. See
\fBlvm.conf\fR(5) for some details.
.SH Examples
Extends the size of the logical volume "vg01/lvol10" by 54MiB on physical
volume /dev/sdk3. This is only possible if /dev/sdk3 is a member of
volume group vg01 and there are enough free physical extents in it:
.sp
.B lvextend \-L +54 /dev/vg01/lvol10 /dev/sdk3
.P
Extends the size of logical volume "vg01/lvol01" by the amount of free
space on physical volume /dev/sdk3. This is equivalent to specifying
"\-l +100%PVS" on the command line:
.sp
.B lvextend /dev/vg01/lvol01 /dev/sdk3
.P
Extends a logical volume "vg01/lvol01" by 16MiB using physical extents
/dev/sda:8\-9 and /dev/sdb:8\-9 for allocation of extents:
.sp
.B lvextend -L+16M vg01/lvol01 /dev/sda:8\-9 /dev/sdb:8\-9
.SH SEE ALSO
.BR fsadm (8),
.BR lvm (8),
.BR lvm.conf (5),
.BR lvcreate (8),
.BR lvconvert (8),
.BR lvreduce (8),
.BR lvresize (8),
.BR lvchange (8)


@@ -1 +0,0 @@
.so man8/lvmconfig.8


@@ -1 +0,0 @@
.so man8/lvmconfig.8

man/lvm-fullreport.8.des Normal file

@@ -0,0 +1,6 @@
lvm fullreport produces formatted output about PVs, PV segments, VGs, LVs
and LV segments. The information is all gathered together for each VG
(under a per-VG lock) so it is consistent. Information gathered from
separate calls to \fBvgs\fP, \fBpvs\fP, and \fBlvs\fP can be inconsistent
if information changes between commands.
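.P
One possible invocation (the VG name and field list are examples only):
report a single VG and limit the LV subreport to two fields:
.br
.B lvm fullreport \-\-configreport lv \-o lv_name,lv_size vg00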


@@ -1,145 +0,0 @@
.TH LVM-FULLREPORT 8 "LVM TOOLS #VERSION#" "Red Hat, Inc" \" -*- nroff -*-
.SH NAME
lvm fullreport \(em Report information about PVs, PV segments, VGs, LVs and LV segments, all at once for each VG.
.SH SYNOPSIS
.B lvm fullreport
.RB [ \-a | \-\-all ]
.RB [ \-\-aligned ]
.RB [ \-\-binary ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [[ \-\-configreport
.IR ReportName ]
.RB [ \-o | \-\-options
.RI [ + | \- | # ] Field1 [, Field2 ...]
.RB [ \-O | \-\-sort
.RI [ + | \- ] Key1 [, Key2 ...]]
.RB [ \-S | \-\-select
.IR Selection ]
.RB ...]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-\-ignorelockingfailure ]
.RB [ \-\-ignoreskippedcluster ]
.RB [ \-\-logonly ]
.RB [ \-\-nameprefixes ]
.RB [ \-\-noheadings ]
.RB [ \-\-nosuffix ]
.RB [ \-P | \-\-partial ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-\-rows ]
.RB [ \-\-separator
.IR Separator ]
.RB [ \-\-unbuffered ]
.RB [ \-\-units
.IR hHbBsSkKmMgGtTpPeE ]
.RB [ \-\-unquoted ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RI [ VolumeGroupName
.RI [ VolumeGroupName ...]]
.SH DESCRIPTION
lvm fullreport produces formatted output about PVs, PV segments, VGs, LVs
and LV segments, all at once for each VG and guarded by per-VG lock
for consistency.
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.B \-\-all
Include information in the output about internal Logical Volumes that
are components of normally-accessible Logical Volumes, such as mirrors,
but which are not independently accessible (e.g. not mountable).
The names of such Logical Volumes are enclosed within square brackets
in the output. For example, after creating a mirror using
.B lvcreate -m1 \-\-mirrorlog disk
, this option will reveal three internal Logical
Volumes, with suffixes mimage_0, mimage_1, and mlog.
.TP
.B \-\-aligned
Use with \fB\-\-separator\fP to align the output columns.
.TP
.B \-\-binary
Use binary values "0" or "1" instead of descriptive literal values
for columns that have exactly two valid values to report (not counting
the "unknown" value which denotes that the value could not be determined).
.TP
.B \-\-configreport \fI ReportName
Make any subsequent \fB\-o, \-\-options\fP, \fB\-O, \-\-sort\fP or
\fB\-S, \-\-select\fP option apply to \fIReportName\fP, where \fIReportName\fP
is 'pv' for the PV subreport, 'pvseg' for the PV segment subreport, 'vg' for
the VG subreport, 'lv' for the LV subreport, 'seg' for the LV segment
subreport or 'log' for the log report. If the \fB\-\-configreport\fP option
is not used to identify a report, then all of the command's subreports are
assumed, except the log report. The log report is available only if enabled
by the \fBlog/report_command_log\fP setting in \fBlvm.conf\fP(5) or if the
\fB\-\-logonly\fP option is used.
.TP
.B \-\-logonly
Suppress the main report itself and display only the log report in the output.
.TP
.B \-\-nameprefixes
Add an "LVM2_" prefix plus the field name to the output. Useful
with \fB\-\-noheadings\fP to produce a list of field=value pairs that can
be used to set environment variables (for example, in \fBudev\fP(7) rules).
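.sp
For example (the field list is only illustrative), the related \fBvgs\fP(8)
reporting command can produce field=value pairs this way:
.sp
.B vgs \-\-noheadings \-\-nameprefixes \-o vg_name,vg_size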
.TP
.B \-\-noheadings
Suppress the headings line that is normally the first line of output.
Useful if grepping the output.
.TP
.B \-\-nosuffix
Suppress the suffix on output sizes. Use with \fB\-\-units\fP
(except h and H) if processing the output.
.TP
.BR \-o ", " \-\-options
Comma-separated ordered list of columns.
.IP
Precede the list with '\fI+\fP' to append to the current list
of columns, '\fI-\fP' to remove from the current list of columns
or '\fI#\fP' to compact given columns. The \fI\-o\fP option can
be repeated, providing several lists. These lists are evaluated
from left to right.
.IP
For the list of columns, see \fBpvs\fP(8), \fBvgs\fP(8),
\fBlvs\fP(8) man page or check \fBpvs\fP, \fBvgs\fP, \fBlvs -o help\fP
output.
.TP
.BR \-O ", " \-\-sort
Comma-separated ordered list of columns to sort by. Replaces the default
selection. Precede any column with '\fI\-\fP' for a reverse sort on that
column.
.TP
.B \-\-rows
Output columns as rows.
.TP
.BR \-S ", " \-\-select " " \fISelection
Display only rows that match Selection criteria. All rows are displayed with
the additional "selected" column (\fB-o selected\fP) showing 1 if the row
matches the Selection and 0 otherwise. The Selection criteria are defined
by specifying column names and their valid values (that can include reserved
values) while making use of supported comparison operators. See \fBlvm\fP(8)
and \fB\-S\fP, \fB\-\-select\fP description for more detailed information
about constructing the Selection criteria. As a quick help and to see full
list of column names that can be used in Selection including the list of
reserved values and the set of supported selection operators, check the
output of \fBpvs\fP, \fBvgs\fP, \fBlvs -S help\fP command.
.TP
.B \-\-separator \fISeparator
String to use to separate each column. Useful if grepping the output.
.TP
.B \-\-unbuffered
Produce output immediately without sorting or aligning the columns properly.
.TP
.B \-\-units \fIhHbBsSkKmMgGtTpPeE
All sizes are output in these units: (h)uman-readable, (b)ytes, (s)ectors,
(k)ilobytes, (m)egabytes, (g)igabytes, (t)erabytes, (p)etabytes, (e)xabytes.
Capitalise to use multiples of 1000 (S.I.) instead of 1024. Custom units
can also be specified, e.g. \fB\-\-units 3M\fP.
.TP
.B \-\-unquoted
When used with \fB\-\-nameprefixes\fP, output values in the field=value
pairs are not quoted.
.SH SEE ALSO
.BR lvm (8),
.BR pvs (8),
.BR vgs (8),
.BR lvs (8)

man/lvm-lvpoll.8.des Normal file

@@ -0,0 +1,4 @@
lvm lvpoll is an internal command used by \fBlvmpolld\fP(8) to monitor and
complete \fBlvconvert\fP(8) and \fBpvmove\fP(8) operations. lvpoll itself
does not initiate these operations and should not normally need to be run
directly.

man/lvm-lvpoll.8.end Normal file

@@ -0,0 +1,33 @@
.SH NOTES
To find the name of the pvmove LV that was created by an original
\fBpvmove /dev/name\fP command, use the command:
.br
\fBlvs -a -S move_pv=/dev/name\fP.
.SH EXAMPLES
Continue polling a pvmove operation.
.br
.B lvm lvpoll --polloperation pvmove vg00/pvmove0
.P
Abort a pvmove operation.
.br
.B lvm lvpoll --polloperation pvmove --abort vg00/pvmove0
.P
Continue polling a mirror conversion.
.br
.B lvm lvpoll --polloperation convert vg00/lvmirror
.P
Continue mirror repair.
.br
.B lvm lvpoll --polloperation convert vg/damaged_mirror --handlemissingpvs
.P
Continue snapshot merge.
.br
.B lvm lvpoll --polloperation merge vg/snapshot_old
.P
Continue thin snapshot merge.
.br
.B lvm lvpoll --polloperation merge_thin vg/thin_snapshot


@@ -1,89 +0,0 @@
.TH "LVPOLL" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" \" -*- nroff -*-
.SH NAME
lvpoll \(em Internal command used by lvmpolld to complete some Logical Volume operations.
.SH SYNOPSIS
.B lvm lvpoll
.BR \-\-polloperation
.RI { pvmove | convert | merge | merge_thin }
.RB [ \-\-abort ]
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-\-handlemissingpvs ]
.RB [ \-i | \-\-interval
.IR Seconds ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.IR LogicalVolume [ Path ]
.SH DESCRIPTION
\fBlvpoll\fP is an internal command used by \fBlvmpolld\fP(8) to monitor and
complete \fBlvconvert\fP(8) and \fBpvmove\fP(8) operations.
\fBlvpoll\fP itself does not initiate these operations and
you should not normally need to invoke it directly.
.P
.I LogicalVolume
The Logical Volume undergoing conversion or, in the case of pvmove, the name of
the internal pvmove Logical Volume (see \fBEXAMPLES\fP).
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-\-polloperation " {" \fIconvert | \fImerge | \fImerge_thin | \fIpvmove }
Mandatory option.
\fIpvmove\fP refers to a pvmove operation that is moving data.
\fIconvert\fP refers to an operation that is increasing the number of redundant copies of data maintained by a mirror.
\fImerge\fP indicates a merge operation that doesn't involve thin volumes.
\fImerge_thin\fP indicates a merge operation involving thin snapshots.
\fBpvmove\fP(8) and \fBlvconvert\fP(8) describe how to initiate these operations.
.TP
.B \-\-abort
Abort pvmove in progress. See \fBpvmove\fP(8).
.TP
.B \-\-handlemissingpvs
Used when the polling operation needs to handle missing PVs to be able to
continue. This can happen when \fBlvconvert\fP(8) is repairing a mirror
with one or more faulty devices.
.TP
.BR \-i ", " \-\-interval " "\fISeconds
Report progress at regular intervals.
.SH EXAMPLES
Resume polling of a pvmove operation identified by the Logical Volume vg00/pvmove0:
.sp
.B lvm lvpoll --polloperation pvmove vg00/pvmove0
.P
Abort the same pvmove operation:
.sp
.B lvm lvpoll --polloperation pvmove --abort vg00/pvmove0
.P
To find out the name of the pvmove Logical Volume resulting from an original
\fBpvmove /dev/sda1\fP command, you may use the following \fBlvs\fP command.
(Remove the square brackets from the LV name.)
.sp
.B lvs -a -S move_pv=/dev/sda1
.P
Resume polling of mirror conversion vg00/lvmirror:
.sp
.B lvm lvpoll --polloperation convert vg00/lvmirror
.P
Complete mirror repair:
.sp
.B lvm lvpoll --polloperation convert vg/damaged_mirror --handlemissingpvs
.P
Process snapshot merge:
.sp
.B lvm lvpoll --polloperation merge vg/snapshot_old
.P
Finish thin snapshot merge:
.sp
.B lvm lvpoll --polloperation merge_thin vg/thin_snapshot
.SH SEE ALSO
.BR lvconvert (8),
.BR lvm (8),
.BR lvmpolld (8),
.BR lvs (8),
.BR pvmove (8)


@@ -45,6 +45,9 @@ A file containing a simple script with one command per line
can also be given on the command line. The script can also be
executed directly if the first line is #! followed by the absolute
path of \fBlvm\fP.
.P
Additional hyphens within option names are ignored. For example,
\fB\-\-readonly\fP and \fB\-\-read\-only\fP are both accepted.
.
.SH BUILT-IN COMMANDS
.
@@ -238,261 +241,6 @@ The following commands are not implemented in LVM2 but might be
in the future:
.BR lvmsadc ", " lvmsar ", " pvdata .
.
.SH OPTIONS
.
The following options are available for many of the commands.
They are implemented generically and documented here rather
than repeated on individual manual pages.
.P
Additional hyphens within option names are ignored. For example,
\fB\-\-readonly\fP and \fB\-\-read\-only\fP are both accepted.
.
.HP
.BR \-h | \-? | \-\-help
.br
Display the help text.
.
.HP
.BR \-\-version
.br
Display version information.
.
.HP
.BR \-v | \-\-verbose
.br
Set verbose level. Repeat from 1 to 3 times to increase the detail
of messages sent to stdout and stderr. Overrides config file setting.
.
.HP
.BR \-d | \-\-debug
.br
Set debug level. Repeat from 1 to 6 times to increase the detail of
messages sent to the log file and/or syslog (if configured).
Overrides config file setting.
.
.HP
.BR \-q | \-\-quiet
.br
Suppress output and log messages.
Overrides \fB\-d\fP and \fB\-v\fP.
Repeat once to also suppress any prompts with answer 'no'.
.
.HP
.BR \-\-yes
.br
Don't prompt for confirmation interactively but instead always assume the
answer is 'yes'. Take great care if you use this!
.
.HP
.BR \-t | \-\-test
.br
Run in test mode. Commands will not update metadata.
This is implemented by disabling all metadata writing but nevertheless
returning success to the calling function. This may lead to unusual
error messages in multi-stage operations if a tool relies on reading
back metadata it believes has changed but hasn't.
.
.HP
.BR \-\-driverloaded
.RB { y | n }
.br
Whether or not the device-mapper kernel driver is loaded.
If you set this to \fBn\fP, no attempt will be made to contact the driver.
.
.HP
.BR \-A | \-\-autobackup
.RB { y | n }
.br
Whether or not metadata should be backed up automatically after a change.
You are strongly advised not to disable this!
See \fBvgcfgbackup\fP(8).
.
.HP
.BR \-P | \-\-partial
.br
When set, the tools will do their best to provide access to Volume Groups
that are only partially available (one or more Physical Volumes belonging
to the Volume Group are missing from the system). Where part of a logical
volume is missing, \fI\%/dev/ioerror\fP will be substituted, and you could use
\fBdmsetup\fP(8) to set this up to return I/O errors when accessed,
or create it as a large block device of nulls. Metadata may not be
changed with this option. To insert a replacement Physical Volume
of the same or larger size, use \fBpvcreate \-u\fP to set the uuid to
match the original followed by \fBvgcfgrestore\fP(8).
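.sp
One possible recovery sequence (the device, VG name, uuid placeholder and
backup file path are examples only; adjust them to your system):
.sp
.nf
# pvcreate \-\-uuid <uuid-of-missing-PV> \-\-restorefile /etc/lvm/backup/vg00 /dev/sdb1
# vgcfgrestore vg00
.fi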
.
.HP
.BR \-S | \-\-select
.IR Selection
.br
For reporting commands, display only rows that match \fISelection\fP criteria.
All rows are displayed with the additional "selected" column (\fB-o selected\fP)
showing 1 if the row matches the \fISelection\fP and 0 otherwise. For non-reporting
commands which process LVM entities, the selection can be used to match items
to process. See \fBSelection\fP section in \fBlvmreport\fP(7) man page for more
information about the way the selection criteria are constructed.
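.sp
For example (the field names and threshold are only illustrative), a report
can be limited to larger LVs with:
.sp
.B lvs \-o lv_name,lv_size \-S 'lv_size > 500m'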
.
.HP
.BR \-M | \-\-metadatatype
.IR Type
.br
Specifies which \fItype\fP of on-disk metadata to use, such as \fBlvm1\fP
or \fBlvm2\fP, which can be abbreviated to \fB1\fP or \fB2\fP respectively.
The default (\fBlvm2\fP) can be changed by setting \fBformat\fP
in the \fBglobal\fP section of the config file \fBlvm.conf\fP(5).
.
.HP
.BR \-\-ignorelockingfailure
.br
This lets you proceed with read-only metadata operations such as
\fBlvchange \-ay\fP and \fBvgchange \-ay\fP even if the locking module fails.
One use for this is in a system init script if the lock directory
is mounted read-only when the script runs.
.
.HP
.BR \-\-ignoreskippedcluster
.br
Use to avoid exiting with a non-zero status code if the command is run
without clustered locking and some clustered Volume Groups have to be
skipped over.
.
.HP
.BR \-\-readonly
.br
Run the command in a special read-only mode which will read on-disk
metadata without needing to take any locks. This can be used to peek
inside metadata used by a virtual machine image while the virtual
machine is running.
It can also be used to peek inside the metadata of clustered Volume
Groups when clustered locking is not configured or running. No attempt
will be made to communicate with the device-mapper kernel driver, so
this option is unable to report whether or not Logical Volumes are
actually in use.
.
.HP
.BR \-\-foreign
.br
Cause the command to access foreign VGs that would otherwise be skipped.
It can be used to report or display a VG that is owned by another host.
This option can cause a command to perform poorly because lvmetad caching
is not used and metadata is read from disks.
.
.HP
.BR \-\-shared
.br
Cause the command to access shared VGs that would otherwise be skipped
when lvmlockd is not being used. It can be used to report or display a
lockd VG without locking. Applicable only if LVM is compiled with lockd
support.
.
.HP
.BR \-\-addtag
.IR Tag
.br
Add the tag \fITag\fP to a PV, VG or LV.
Supply this argument multiple times to add more than one tag at once.
A tag is a word that can be used to group LVM2 objects of the same type
together.
Tags can be given on the command line in place of PV, VG or LV
arguments. Tags should be prefixed with @ to avoid ambiguity.
Each tag is expanded by replacing it with all objects possessing
that tag which are of the type expected by its position on the command line.
PVs can only possess tags while they are part of a Volume Group:
PV tags are discarded if the PV is removed from the VG.
As an example, you could tag some LVs as \fBdatabase\fP and others
as \fBuserdata\fP and then activate the database ones
with \fBlvchange \-ay @database\fP.
Objects can possess multiple tags simultaneously.
Only the new LVM2 metadata format supports tagging: objects using the
LVM1 metadata format cannot be tagged because the on-disk format does not
support it.
Characters allowed in tags are:
.BR A - Z
.BR a - z
.BR 0 - 9
.BR "_ + . -"
and as of version 2.02.78 the following characters are also accepted:
.BR "/ = ! : # &" .
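.sp
A small illustration (the tag, VG and LV names are only examples):
.sp
.nf
# lvchange \-\-addtag database vg00/lvol1
# lvchange \-\-addtag database vg00/lvol2
# lvchange \-ay @database
.fi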
.
.HP
.BR \-\-deltag
.IR Tag
.br
Delete the tag \fITag\fP from a PV, VG or LV, if it's present.
Supply this argument multiple times to remove more than one tag at once.
.
.HP
.BR \-\-alloc
.RB { anywhere | contiguous | cling | inherit | normal }
.br
Selects the allocation policy when a command needs to allocate
Physical Extents from the Volume Group.
Each Volume Group and Logical Volume has an allocation policy defined.
The default for a Volume Group is \fBnormal\fP which applies
common-sense rules such as not placing parallel stripes on the same
Physical Volume. The default for a Logical Volume is \fBinherit\fP
which applies the same policy as for the Volume Group. These policies can
be changed using \fBlvchange\fP(8) and \fBvgchange\fP(8) or overridden
on the command line of any command that performs allocation.
The \fBcontiguous\fP policy requires that new Physical Extents be placed adjacent
to existing Physical Extents.
The \fBcling\fP policy places new Physical Extents on the same Physical
Volume as existing Physical Extents in the same stripe of the Logical Volume.
If there are sufficient free Physical Extents to satisfy
an allocation request but \fBnormal\fP doesn't use them,
\fBanywhere\fP will - even if that reduces performance by
placing two stripes on the same Physical Volume.
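.sp
For example (the VG and LV names are placeholders), the default policy of a
VG can be changed, or a policy can be overridden for a single allocation:
.sp
.nf
# vgchange \-\-alloc cling vg00
# lvextend \-\-alloc anywhere \-L +1G vg00/lvol1
.fi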
.
.HP
.BR \-\-commandprofile
.IR ProfileName
.br
Selects the command configuration profile to use when processing an LVM command.
See also \fBlvm.conf\fP(5) for more information about \fBcommand profile config\fP and
the way it fits with other LVM configuration methods. Using \fB\-\-commandprofile\fP
option overrides any command profile specified via \fBLVM_COMMAND_PROFILE\fP
environment variable.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Selects the metadata configuration profile to use when processing an LVM command.
When using metadata profile during Volume Group or Logical Volume creation,
the metadata profile name is saved in metadata. When such Volume Group or Logical
Volume is processed next time, the metadata profile is automatically applied
and the use of \fB\-\-metadataprofile\fP option is not necessary. See also
\fBlvm.conf\fP(5) for more information about \fBmetadata profile config\fP and the
way it fits with other LVM configuration methods.
.
.HP
.BR \-\-profile
.IR ProfileName
.br
A short form of \fB\-\-metadataprofile\fP for \fBvgcreate\fP, \fBlvcreate\fP,
\fBvgchange\fP and \fBlvchange\fP command and a short form of \fB\-\-commandprofile\fP
for any other command (with the exception of \fBlvmconfig\fP command where the
\fB\-\-profile\fP has special meaning, see \fBlvmconfig\fP(8) for more information).
.
.HP
.BR \-\-reportformat
.RB { basic | json }
.br
Overrides current output format for reports which is defined globally by
\fBreport/output_format\fP configuration setting in \fBlvm.conf\fP(5).
The \fBbasic\fP format is the original format with columns and rows and,
if there is more than one report per command, each report is prefixed
with the report's name for identification. The \fBjson\fP format produces
report output in JSON.
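.sp
For example, JSON output can be requested for a single report with:
.sp
.B lvs \-\-reportformat json
.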
.HP
.BR \-\-config
.IR ConfigurationString
.br
Uses the ConfigurationString as direct string representation of the configuration
to override the existing configuration. The ConfigurationString is of exactly
the same format as used in any LVM configuration file. See \fBlvm.conf\fP(5)
for more information about \fBdirect config override on command line\fP and the
way it fits with other LVM configuration methods.
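.sp
An illustration (the device pattern is a placeholder): restrict a single
command to one disk by overriding the device filter on the command line:
.sp
.nf
# lvs \-\-config 'devices { filter = [ "a|/dev/sdb|", "r|.*|" ] }'
.fi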
.
.SH VALID NAMES
.
The valid characters for VG and LV names are:
@@ -668,6 +416,11 @@ File descriptor to use for report output from LVM commands.
Name of default command profile to use for LVM commands. This profile
is overridden by direct use of the \fB\-\-commandprofile\fP command line option.
.TP
.B LVM_RUN_BY_DMEVENTD
This variable is normally set by a dmeventd plugin to inform the lvm2
command that it is being run from dmeventd, so lvm2 takes some extra
action to avoid communication and deadlocks with dmeventd.
.TP
.B LVM_SYSTEM_DIR
Directory containing \fBlvm.conf\fP(5) and other LVM system files.
Defaults to "\fI#DEFAULT_SYS_DIR#\fP".


@@ -1,10 +0,0 @@
.TH LVMCHANGE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvmchange \(em change attributes of the logical volume manager
.SH SYNOPSIS
.B lvmchange
.SH DESCRIPTION
lvmchange is not currently supported under LVM2, although
\fBdmsetup\fP(8) has a \fBremove_all\fP command.
.SH SEE ALSO
.BR dmsetup (8)

man/lvmconfig.8.des Normal file

@@ -0,0 +1,3 @@
lvmconfig produces formatted output from the LVM configuration tree. The
sources of the configuration data include \fBlvm.conf\fP(5) and command
line settings from \-\-config.


@@ -1,225 +0,0 @@
.TH "LVMCONFIG" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH "NAME"
lvmconfig, lvm dumpconfig, lvm config \(em Display LVM configuration
.SH SYNOPSIS
.
.ad l
.B lvmconfig
.RB [ \-f | \-\-file
.IR Filename ]
.RB [ \-\-type
.RB { current | default | diff | full |\: list | missing | new \c
.RB | profilable | profilable-command | profilable-metadata }]
.RB [ \-\-atversion
.IR Version ]
.RB [ \-\-sinceversion
.IR Version ]
.RB [ \-\-ignoreadvanced ]
.RB [ \-\-ignoreunsupported ]
.RB [ \-\-ignorelocal ]
.RB [ \-l | \-\-list ]
.RB [ \-\-config
.IR ConfigurationString ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-\-profile
.IR ProfileName ]
.RB [ \-\-metadataprofile
.IR ProfileName ]
.RB [ \-\-mergedconfig ]
.RB [ \-\-showdeprecated ]
.RB [ \-\-showunsupported ]
.RB [ \-\-validate ]
.RB [ \-\-withsummary ]
.RB [ \-\-withcomments ]
.RB [ \-\-withspaces ]
.RB [ \-\-withversions ]
.RB [ ConfigurationNode... ]
.ad b
.
.SH DESCRIPTION
lvmconfig produces formatted output from the LVM configuration tree.
The command was added in release 2.02.119 and has an identical longer form
\fBlvm dumpconfig\fP.
.SH OPTIONS
.TP
.BR \-f ", " \-\-file " \fIFilename"
Send output to the file named \fIFilename\fP.
.TP
.BR \-l ", " \-\-list
List configuration settings with summarizing comment. This is the same as using
\fBlvmconfig --type list --withsummary\fP.
.TP
.BR \-\-type " {" current | default | diff | full | missing | new | profilable |\: profilable-command | profilable-metadata }
Select the type of configuration to display. The configuration settings
displayed have either default values or currently-used values assigned based on
the type selected. If no type is selected, \fB\-\-type current\fP is used
by default. Whenever a configuration setting with a default value is
commented out, it means the setting does not have any concrete default
value defined. Output can be saved and used as a proper \fBlvm.conf\fP(5)
file.
.RS
.IP \fBcurrent\fP 3
Display the current \fBlvm.conf\fP configuration merged with any \fBtag
config\fP if used. See also \fBlvm.conf\fP(5) for more info about LVM
configuration methods.
.IP \fBdefault\fP 3
Display all possible configuration settings with default values assigned.
.IP \fBdiff\fP 3
Display all configuration settings for which the values used differ from defaults.
The value assigned for each configuration setting is the value currently used.
Using this type also implies the use of \fB\-\-mergedconfig\fP option.
This is effectively the minimal LVM configuration which can be used without
changing the currently configured behaviour.
.IP \fBfull\fP 3
Display full configuration tree - a combination of current configuration tree
(\fB\-\-type current\fP) and tree of settings for which default values are
used (\fB\-\-type missing\fP). This is exactly the configuration tree that
LVM2 uses during command execution. Using this type also implies
the use of \fB\-\-mergedconfig\fP option. If comments are displayed
(see \fB\-\-withcomments\fP and \fB\-\-withsummary\fP options), then
for each setting found in existing configuration and for which defaults
are not used, there's an extra comment line printed to denote this.
.IP \fBlist\fP 3
Display plain list of configuration settings.
.IP \fBmissing\fP 3
Display all configuration settings with default values assigned which are
missing in the configuration currently used and for which LVM automatically
falls back to using these default values.
.IP \fBnew\fP 3
Display all new configuration settings introduced in current LVM version
or specific version as defined by \fB\-\-atversion\fP option.
.IP \fBprofilable\fP 3
Display all profilable configuration settings with default values assigned.
See \fBlvm.conf\fP(5) for more info about \fBprofile config\fP method.
.IP \fBprofilable-command\fP 3
Display all profilable configuration settings with default values assigned
that can be used in command profile. This is a subset of settings displayed
by \fB\-\-type profilable\fP.
.IP \fBprofilable-metadata\fP 3
Display all profilable configuration settings with default values assigned
that can be used in metadata profile. This is a subset of settings displayed
by \fB\-\-type profilable\fP.
.RE
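.sp
For example (the \fBglobal\fP node is only an example), show the settings
that differ from the defaults, or the full tree for one section with
one-line summaries:
.sp
.nf
# lvmconfig \-\-type diff
# lvmconfig \-\-type full \-\-withsummary global
.fi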
.TP
.BI \-\-atversion " Version"
Specify an LVM version in x.y.z format where x is the major version,
y is the minor version and z is the patchlevel (e.g. 2.2.106).
When configuration is displayed, the configuration settings recognized
at this LVM version will be considered only. This can be used
to display a configuration that a certain LVM version understands and
which does not contain any newer settings for which LVM would
issue a warning message when checking the configuration.
.TP
.BI \-\-sinceversion " Version"
Specify an LVM version in x.y.z format where x is the major version,
y is the minor version and z is the patchlevel (e.g. 2.2.106).
This option is currently applicable only with \fB\-\-type new\fP
to display all configuration settings introduced since given version.
.TP
.B \-\-ignoreadvanced
Exclude advanced configuration settings from the output.
.TP
.B \-\-ignoreunsupported
Exclude unsupported configuration settings from the output. These settings are
either used for debugging and development purposes only or their support is not
yet complete and they are not meant to be used in production. The \fBcurrent\fP
and \fBdiff\fP types include unsupported settings in their output by default,
all the other types ignore unsupported settings.
.TP
.B \-\-ignorelocal
Ignore local section.
.TP
.BI \-\-config " ConfigurationString"
Use \fIConfigurationString\fP to override existing configuration.
This configuration is then applied for the lvmconfig command itself.
See also \fBlvm.conf\fP(5) for more info about \fBconfig cascade\fP.
.TP
.BI \-\-commandprofile " ProfileName"
Use \fIProfileName\fP to override existing configuration.
This configuration is then applied for the lvmconfig command itself.
See also \fB\-\-mergedconfig\fP option and \fBlvm.conf\fP(5) for
more info about \fBconfig cascade\fP.
.TP
.BI \-\-profile " ProfileName"
The same as using \fB\-\-commandprofile\fP but the configuration is not
applied for the lvmconfig command itself.
.TP
.BI \-\-metadataprofile " ProfileName"
Use \fIProfileName\fP to override existing configuration.
The configuration defined in metadata profile has no effect for
the lvmconfig command itself. lvmconfig displays the configuration only.
See also \fB\-\-mergedconfig\fP option and \fBlvm.conf\fP(5) for more
info about \fBconfig cascade\fP.
.TP
.B \-\-mergedconfig
When the lvmconfig command is run with the \fB\-\-config\fP option
and/or the \fB\-\-commandprofile\fP (or \fBLVM_COMMAND_PROFILE\fP
environment variable), \fB\-\-profile\fP, or \fB\-\-metadataprofile\fP
option, merge all the contents of the \fBconfig cascade\fP before displaying it.
Without the \fB\-\-mergedconfig\fP option, only the configuration at
the front of the cascade is displayed. See also \fBlvm.conf\fP(5) for more
info about \fBconfig cascade\fP.
.TP
.B \-\-showdeprecated
Include deprecated configuration settings in the output. These settings
have been deprecated since a certain version. If a concrete version is
specified with the \fB\-\-atversion\fP option, deprecated settings are
automatically included if the specified version is lower than the version
in which the settings were deprecated. The \fBcurrent\fP and \fBdiff\fP
types include deprecated settings in their output by default; all the
other types ignore deprecated settings.
.TP
.B \-\-showunsupported
Include unsupported configuration settings in the output. These settings
are either used for debugging or development purposes only or their support
is not yet complete and they are not meant to be used in production. The
\fBcurrent\fP and \fBdiff\fP types include unsupported settings in their
output by default, all the other types ignore unsupported settings.
.TP
.B \-\-validate
Validate current configuration used and exit with appropriate
return code. The validation is done only for the configuration
at the front of the \fBconfig cascade\fP. To validate the whole
merged configuration tree, use also the \fB\-\-mergedconfig\fP option.
The validation is done even if \fBconfig/checks\fP \fBlvm.conf\fP(5)
option is disabled.
.TP
.B \-\-withsummary
Display a one line comment for each configuration node.
.TP
.B \-\-withcomments
Display a full comment for each configuration node. For deprecated
settings, also display comments about deprecation in addition.
.TP
.B \-\-withspaces
Where appropriate, add more spaces in output for better readability.
.TP
.B \-\-withversions
Also display a comment containing the version of introduction for
each configuration node. If the setting is deprecated, also display
the version since which it is deprecated.
.SH SEE ALSO
.BR lvm (8),
.BR lvmconf (8),
.BR lvm.conf (5)

man/lvmdiskscan.8.des Normal file

@@ -0,0 +1,7 @@
lvmdiskscan scans all SCSI, (E)IDE disks, multiple devices and a bunch of
other block devices in the system looking for LVM PVs. The size reported
is the real device size. Define a filter in \fBlvm.conf\fP(5) to restrict
the scan to avoid a CD ROM, for example.
This command is deprecated; use \fBpvs\fP(8) instead.


@@ -1,27 +0,0 @@
.TH LVMDISKSCAN 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvmdiskscan \(em scan for all devices visible to LVM2
.SH SYNOPSIS
.B lvmdiskscan
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-l | \-\-lvmpartition ]
.RB [ \-v | \-\-verbose ]
.SH DESCRIPTION
lvmdiskscan scans all SCSI, (E)IDE disks, multiple devices and a bunch
of other block devices in the system looking for LVM physical volumes.
The size reported is the real device size.
Define a filter in \fBlvm.conf\fP(5) to restrict
the scan to avoid a CD ROM, for example.
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-l ", " \-\-lvmpartition
Only reports Physical Volumes.
.SH SEE ALSO
.BR lvm (8),
.BR lvm.conf (5),
.BR pvscan (8),
.BR vgscan (8)


@@ -573,25 +573,37 @@ To place the lvmlock LV on a specific device, create the VG with only that
device, then use vgextend to add other devices.
.SS shared LVs
.SS LV activation
When an LV is used concurrently from multiple hosts (e.g. by a
multi\-host/cluster application or file system), the LV can be activated
on multiple hosts concurrently using a shared lock.
In a shared VG, activation changes involve locking through lvmlockd, and
the following values are possible with lvchange/vgchange -a:
To activate the LV with a shared lock: lvchange \-asy vg/lv.
.IP \fBy\fP|\fBey\fP
The command activates the LV in exclusive mode, allowing a single host
to activate the LV. Before activating the LV, the command uses lvmlockd
to acquire an exclusive lock on the LV. If the lock cannot be acquired,
the LV is not activated and an error is reported. This would happen if
the LV is active on another host.
With lvmlockd, an unspecified activation mode is always exclusive, i.e.
\-ay defaults to \-aey.
If the LV type does not allow the LV to be used concurrently from multiple
hosts, then a shared activation lock is not allowed and the lvchange
command will report an error. LV types that cannot be used concurrently
.IP \fBsy\fP
The command activates the LV in shared mode, allowing multiple hosts to
activate the LV concurrently. Before activating the LV, the
command uses lvmlockd to acquire a shared lock on the LV. If the lock
cannot be acquired, the LV is not activated and an error is reported.
This would happen if the LV is active exclusively on another host. If the
LV type prohibits shared access, such as a snapshot, the command will
report an error and fail.
The shared mode is intended for a multi\-host/cluster application or
file system.
LV types that cannot be used concurrently
from multiple hosts include thin, cache, raid, mirror, and snapshot.
lvextend on LV with shared locks is not yet allowed. The LV must be
deactivated, or activated exclusively to run lvextend.
.IP \fBn\fP
The command deactivates the LV. After deactivating the LV, the command
uses lvmlockd to release the current lock on the LV.
.SS recover from lost PV holding sanlock locks


@@ -1,13 +0,0 @@
.TH "LVMSADC" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH "NAME"
lvmsadc \(em LVM system activity data collector
.SH "SYNOPSIS"
.B lvmsadc
.SH "DESCRIPTION"
lvmsadc is not currently supported under LVM2.
.SH "SEE ALSO"
.BR lvm (8)


@@ -1,13 +0,0 @@
.TH "LVMSAR" "8" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH "NAME"
lvmsar \(em LVM system activity reporter
.SH "SYNOPSIS"
.B lvmsar
.SH "DESCRIPTION"
lvmsar is not currently supported under LVM2.
.SH "SEE ALSO"
.BR lvm (8)


@@ -157,17 +157,17 @@ The --thinpool argument specifies which thin pool will
contain the ThinLV.
.fi
.B lvcreate \-n ThinLV \-V VirtualSize \-\-thinpool VG/ThinPoolLV
.B lvcreate \-n ThinLV \-V VirtualSize \-\-thinpool ThinPoolLV VG
.I Example
.br
Create a thin LV in a thin pool:
.br
# lvcreate \-n thin1 \-V 1T \-\-thinpool vg/pool0
# lvcreate \-n thin1 \-V 1T \-\-thinpool pool0 vg
Create another thin LV in the same thin pool:
.br
# lvcreate \-n thin2 \-V 1T \-\-thinpool vg/pool0
# lvcreate \-n thin2 \-V 1T \-\-thinpool pool0 vg
# lvs vg/thin1 vg/thin2
LV VG Attr LSize Pool Origin Data%
@@ -184,9 +184,9 @@ when creating a thin snapshot.
.br
A size argument will cause an old COW snapshot to be created.
.B lvcreate \-n SnapLV \-s VG/ThinLV
.B lvcreate \-n SnapLV \-\-snapshot VG/ThinLV
.br
.B lvcreate \-n SnapLV \-s VG/PrevSnapLV
.B lvcreate \-n SnapLV \-\-snapshot VG/PrevSnapLV
.I Example
.br
@@ -286,15 +286,12 @@ The fully specified syntax for creating a thin pool LV shown above is:
.B lvconvert \-\-type thin-pool \-\-poolmetadata VG/ThinMetaLV VG/ThinDataLV
An existing LV is converted to a thin pool by changing its type to
thin-pool. An alternate syntax may be used for the same operation:
An alternate syntax may be used for the same operation:
.B lvconvert \-\-thinpool VG/ThinDataLV \-\-poolmetadata VG/ThinMetaLV
The thin-pool type is inferred by lvm; the --thinpool option is not an
alias for --type thin-pool. The use of the --thinpool option here is
different from the use of the --thinpool option when creating a thin LV,
where it specifies the pool in which the thin LV is created.
The thin-pool type is inferred by lvm; the \-\-thinpool option is not an
alias for \-\-type thin\-pool.
.SS Automatic pool metadata LV
@@ -1234,7 +1231,7 @@ and creates a thin LV in the new pool.
.br
\-V VirtualSize specifies the virtual size of the thin LV.
.B lvcreate \-V VirtualSize \-L LargeSize
.B lvcreate \-\-type thin \-V VirtualSize \-L LargeSize
.RS
.B \-n ThinLV \-\-thinpool VG/ThinPoolLV
.RE

man/lvreduce.8.des Normal file

@@ -0,0 +1,14 @@
lvreduce reduces the size of an LV. The freed logical extents are returned
to the VG to be used by other LVs. A copy\-on\-write snapshot LV can also
be reduced if less space is needed to hold COW blocks. Use
\fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.
Be careful when reducing an LV's size, because data in the reduced area is
lost. Ensure that any file system on the LV is resized \fBbefore\fP
running lvreduce so that the removed extents are not in use by the file
system.
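.P
As an illustration (the VG/LV names are examples), \fBlvreduce \-r\fP can
resize a supported file system together with the LV via \fBfsadm\fP(8):
.br
.B lvreduce \-r \-L \-2G vg00/lvol1
.P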
Sizes will be rounded if necessary. For example, the LV size must be an
exact number of extents, and the size of a striped segment must be a
multiple of the number of stripes.

man/lvreduce.8.end Normal file

@@ -0,0 +1,5 @@
.SH EXAMPLES
Reduce the size of an LV by 3 logical extents:
.br
.B lvreduce \-l \-3 vg00/lvol1


@@ -1,110 +0,0 @@
.TH LVREDUCE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvreduce \(em reduce the size of a logical volume
.SH SYNOPSIS
.B lvreduce
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-\-help ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RB [ \-f | \-\-force ]
.RB [ \-\-noudevsync ]
.RB { \-l | \-\-extents
.RI [ \- ] LogicalExtentsNumber [ % { VG | LV | FREE | ORIGIN }]
.RB |
.BR \-L | \-\-size
.RI [ \- ] LogicalVolumeSize [ bBsSkKmMgGtTpPeE ]}
.RB [ \-n | \-\-nofsck ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-r | \-\-resizefs ]
.IR LogicalVolume { Name | Path }
.SH DESCRIPTION
lvreduce allows you to reduce the size of a logical volume.
Be careful when reducing a logical volume's size, because data in the
reduced part is lost!!!
.br
You should therefore ensure that any filesystem on the volume is
resized
.I before
running lvreduce so that the extents that are to be removed are not in use.
.br
Shrinking snapshot logical volumes (see
.BR lvcreate (8)
for information to create snapshots) is supported as well.
But to change the number of copies in a mirrored logical
volume use
.BR lvconvert (8).
.br
Sizes will be rounded if necessary - for example, the volume size must
be an exact number of extents and the size of a striped segment must
be a multiple of the number of stripes.
.br
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-f ", " \-\-force
Force size reduction without prompting even when it may cause data loss.
.TP
.IR \fB\-l ", " \fB\-\-extents " [" \- ] LogicalExtentsNumber [ % { VG | LV | FREE | ORIGIN }]
Reduce or set the logical volume size in units of logical extents.
With the \fI-\fP sign the value will be subtracted from
the logical volume's actual size and without it the value will be taken
as an absolute size.
The total number of physical extents freed will be greater than this logical
value if, for example, the volume is mirrored.
The number can also be expressed as a percentage of the total space
in the Volume Group with the suffix \fI%VG\fP, relative to the existing
size of the Logical Volume with the suffix \fI%LV\fP, as a percentage of the
remaining free space in the Volume Group with the suffix \fI%FREE\fP, or (for
a snapshot) as a percentage of the total space in the Origin Logical
Volume with the suffix \fI%ORIGIN\fP.
The resulting value for the subtraction is rounded downward, for the absolute
size it is rounded upward.
N.B. In a future release, when expressed as a percentage with VG or FREE, the
number will be treated as an approximate total number of physical extents to be
freed (including extents used by any mirrors, for example). The code may
currently release more space than you might otherwise expect.
.TP
.IR \fB\-L ", " \fB\-\-size " [" \- ] LogicalVolumeSize [ bBsSkKmMgGtTpPeE ]
Reduce or set the logical volume size in units of megabytes.
A size suffix of \fIk\fP for kilobytes, \fIm\fP for megabytes,
\fIg\fP for gigabytes, \fIt\fP for terabytes, \fIp\fP for petabytes
or \fIe\fP for exabytes is optional.
With the \fI\-\fP sign the value will be subtracted from
the logical volume's actual size and without it, the value will be taken as
an absolute size.
.TP
.BR \-n ", " \-\-nofsck
Do not perform fsck before resizing filesystem when filesystem
requires it. You may need to use \fB\-\-force\fR to proceed with
this option.
.TP
.BR \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.TP
.BR \-r ", " \-\-resizefs
Resize underlying filesystem together with the logical volume using
.BR fsadm (8).
.SH Examples
Reduce the size of logical volume lvol1 in volume group vg00 by 3 logical extents:
.sp
.B lvreduce \-l \-3 vg00/lvol1
.SH SEE ALSO
.BR fsadm (8),
.BR lvchange (8),
.BR lvconvert (8),
.BR lvcreate (8),
.BR lvextend (8),
.BR lvm (8),
.BR lvresize (8),
.BR vgreduce (8)

man/lvremove.8.des Normal file

@@ -0,0 +1,22 @@
lvremove removes one or more LVs. For standard LVs, this returns the
logical extents that were used by the LV to the VG for use by other LVs.
Confirmation will be requested before deactivating any active LV prior to
removal. LVs cannot be deactivated or removed while they are open (e.g.
if they contain a mounted filesystem). Removing an origin LV will also
remove all dependent snapshots.
\fBHistorical LVs\fP
If the configuration setting \fBmetadata/record_lvs_history\fP is enabled
and the LV being removed forms part of the history of at least one LV that
is still present, then a simplified representation of the LV will be
retained. This includes the time of removal (\fBlv_time_removed\fP
reporting field), creation time (\fBlv_time\fP), name (\fBlv_name\fP), LV
uuid (\fBlv_uuid\fP) and VG name (\fBvg_name\fP). This allows later
reporting to see the ancestry chain of thin snapshot volumes, even after
some intermediate LVs have been removed. The names of such historical LVs
acquire a hyphen as a prefix (e.g. '-lvol1') and cannot be reactivated.
Use lvremove a second time, with the hyphen, to remove the record of the
former LV completely.
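.P
An illustration (the names are examples): after removing an LV whose history
is retained, the historical record itself can be removed with a second
lvremove using the hyphen-prefixed name:
.br
.B lvremove vg00/lvol1
.br
.B lvremove vg00/\-lvol1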

man/lvremove.8.end Normal file

@@ -0,0 +1,11 @@
.SH EXAMPLES
Remove an active LV without asking for confirmation.
.br
.B lvremove \-f vg00/lvol1
.P
Remove all LVs in the specified VG.
.br
.B lvremove vg00


@@ -1,80 +0,0 @@
.TH LVREMOVE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvremove \(em remove a logical volume
.SH SYNOPSIS
.B lvremove
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-\-help ]
.RB [ \-\-nohistory ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB [ \-S | \-\-select
.IR Selection ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RB [ \-f | \-\-force ]
.RB [ \-\-noudevsync ]
.RI [ LogicalVolume { Name | Path }...]
.SH DESCRIPTION
lvremove removes one or more logical volumes.
Confirmation will be requested before deactivating any active logical
volume prior to removal. Logical volumes cannot be deactivated
or removed while they are open (e.g. if they contain a mounted filesystem).
Removing an origin logical volume will also remove all dependent snapshots.
.sp
If the logical volume is clustered then it must be deactivated on all
nodes in the cluster before it can be removed. A single lvchange command
issued from one node can do this.
.sp
If the configuration setting \fBmetadata/record_lvs_history\fP is enabled
and the logical volume being removed forms part of the history of at least
one logical volume that is still present then a simplified representation of
the logical volume will be retained. This includes the time of removal
(\fBlv_time_removed\fP reporting field), creation time (\fBlv_time\fP), name
(\fBlv_name\fP), LV uuid (\fBlv_uuid\fP) and VG name (\fBvg_name\fP) and
allows you to see the ancestry chain of thin snapshot volumes even after
some intermediate logical volumes have been removed.
The names of such historical logical volumes acquire a hyphen as a prefix
(e.g. '-lvol1') and cannot be reactivated. Use lvremove a second time,
with the hyphen, to remove the record of the former logical volume completely.
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-f ", " \-\-force
Remove active logical volumes without confirmation.
The tool will try to deactivate an \fIunused\fP volume.
To proceed with damaged pools, use \fB\-ff\fP.
.TP
.B \-\-nohistory
Disable the recording of history of logical volumes which are being removed.
(This has no effect unless the configuration setting
\fBmetadata/record_lvs_history\fP is enabled.)
.TP
.B \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.SH Examples
Remove the active logical volume lvol1 in volume group vg00
without asking for confirmation:
.sp
.B lvremove \-f vg00/lvol1
.sp
Remove all logical volumes in volume group vg00:
.sp
.B lvremove vg00
.SH SEE ALSO
.BR lvcreate (8),
.BR lvdisplay (8),
.BR lvchange (8),
.BR lvm (8),
.BR lvs (8),
.BR lvscan (8),
.BR vgremove (8)

man/lvrename.8.des Normal file

@@ -0,0 +1,2 @@
lvrename renames an existing LV or a historical LV (see \fBlvremove\fP(8) for
information about historical LVs).

man/lvrename.8.end Normal file

@@ -0,0 +1,10 @@
.SH EXAMPLES
Rename "lvold" to "lvnew":
.br
.B lvrename /dev/vg02/lvold vg02/lvnew
.P
An alternate syntax to rename "lvold" to "lvnew":
.br
.B lvrename vg02 lvold lvnew


@@ -1,51 +0,0 @@
.TH LVRENAME 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.SH NAME
lvrename \(em rename a logical volume
.SH SYNOPSIS
.B lvrename
.RB [ \-A | \-\-autobackup
.RI { y | n }]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB [ \-d | \-\-debug ]
.RB [ \-h | \-\-help ]
.RB [ \-t | \-\-test ]
.RB [ \-v | \-\-verbose ]
.RB [ \-\-version ]
.RB [ \-f | \-\-force ]
.RB [ \-\-noudevsync ]
.RB [ \-\-reportformat
.RB { basic | json }]
.RI { OldLogicalVolume { Name | Path }
.IR NewLogicalVolume { Name | Path }
|
.I VolumeGroupName OldLogicalVolumeName NewLogicalVolumeName\fR}
.SH DESCRIPTION
lvrename renames an existing logical volume or an existing
historical logical volume from
.IR OldLogicalVolume { Name | Path }
to
.IR NewLogicalVolume { Name | Path }.
.SH OPTIONS
See \fBlvm\fP(8) for common options.
.TP
.BR \-\-noudevsync
Disable udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.SH EXAMPLE
To rename lvold in volume group vg02 to lvnew:
.sp
.B lvrename /dev/vg02/lvold vg02/lvnew
.sp
An alternate syntax to rename this logical volume is:
.sp
.B lvrename vg02 lvold lvnew
.sp
.SH SEE ALSO
.BR lvm (8),
.BR lvchange (8),
.BR vgcreate (8),
.BR vgrename (8)

man/lvresize.8.des Normal file

@@ -0,0 +1,2 @@
lvresize resizes an LV in the same way as lvextend and lvreduce. See
\fBlvextend\fP(8) and \fBlvreduce\fP(8) for more information.

man/lvresize.8.end Normal file

@@ -0,0 +1,6 @@
.SH EXAMPLES
Extend an LV by 16MB using specific physical extents:
.br
.B lvresize \-L+16M vg1/lv1 /dev/sda:0\-1 /dev/sdb:0\-1

Some files were not shown because too many files have changed in this diff.