
Compare commits


97 Commits

Author SHA1 Message Date
Bryn M. Reeves
2d161892f6 man: add FILE MAPPING section to dmstats.8.in
Add a section to explain file mapping, outside of the individual
command descriptions, and to describe the limitations of the
current update strategy.
2017-03-08 17:44:04 +00:00
Bryn M. Reeves
c51203760c man: add dmfilemapd options to dmstats.8.in
Add descriptions of --follow and --nomonitor, and the behaviour
of create and update_filemap when starting dmfilemapd.
2017-03-08 17:44:04 +00:00
Bryn M. Reeves
9edf67759c dmstats: start dmfilemapd when creating or updating file maps
Launch an instance of the filemap monitoring daemon when creating,
or updating, a file mapped group, unless the --nomonitor switch is
given.

Unless --foreground is given the daemon will detach from the
terminal and run in the background until it is signaled or the
daemon termination conditions are met.

The --follow={inode|path} switch is added to control the daemon
behaviour when files are moved, unlinked, or renamed while they
are being monitored.

The daemon runs with the same verbosity as the dmstats command
that starts it.
2017-03-08 17:44:04 +00:00
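
For illustration only (the file name is an example, not from the commits): with these changes, creating a file-mapped group with dmstats can start, steer, or suppress the monitoring daemon, e.g.

  # dmstats create --filemap vm.img
  # dmstats create --filemap --follow path vm.img
  # dmstats create --filemap --nomonitor vm.img

The first form starts dmfilemapd with its default follow mode, the second follows the path instead, and the third creates the group without starting the daemon at all.
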
Bryn M. Reeves
613a4c1652 man: add dmfilemapd.8 2017-03-08 17:32:25 +00:00
Bryn M. Reeves
8ed8ae8abe daemons: add dmfilemapd
Add a daemon that can be launched to monitor a group of regions
corresponding to the extents of a file, and to update the regions as the
file's allocation changes.

The daemon is intended to be started from a library interface, but can
also be run from the command line:

  dmfilemapd <fd> <group_id> <path> <mode> [<foreground>[<log_level>]]

Where fd is a file descriptor open on the mapped file, group_id is the
group identifier of the mapped group and mode is either "inode" or
"path". E.g.:

  # dmfilemapd 3 0 vm.img inode 1 3 3<vm.img
  ...

If foreground is non-zero, the daemon will not fork to run in the
background. If verbose is non-zero, libdm and daemon log messages will
be printed.

It is possible for the group identifier to change when regions are
re-mapped: this occurs when the group leader is deleted (regroup=1 in
dm_stats_update_regions_from_fd()), and another region is created before
the daemon has a chance to recreate the leader region.

The operation is inherently racy since there is currently no way to
atomically move or resize a dm_stats region while retaining its
region_id.

Detect this condition and update the group_id value stored in the
filemap monitor.

A function is also provided in the stats API to launch the filemap
monitoring daemon:

  int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
                              dm_filemapd_mode_t mode, unsigned foreground,
                              unsigned verbose);

This carries out the first fork and execs dmfilemapd with the arguments
specified.

A dm_filemapd_mode_t value is specified by the mode argument: either
DM_FILEMAPD_FOLLOW_INODE, or DM_FILEMAPD_FOLLOW_PATH. A helper function,
dm_filemapd_mode_from_string(), is provided to parse a string containing
a valid mode name into the appropriate dm_filemapd_mode_t value.
2017-03-08 17:30:37 +00:00
David Teigland
11f1556d5d commands: combine duplicate arrays for lv types and props
Like opt and val arrays in previous commit, combine duplicate
arrays for lv types and props in command.c and lvmcmdline.c.
Also move the command_names array to be defined in command.c
so it's consistent with the others.
2017-03-08 11:03:02 -06:00
David Teigland
690f604733 commands: combine duplicate arrays for opt and val
command.c and lvmcmdline.c each had a full array defining
all options and values.  This duplication was not removed
when the command.c code was merged into the run time.
2017-03-08 11:03:02 -06:00
David Teigland
f48e6b2690 help: avoid end notes repetition in lvm help all 2017-03-08 11:03:02 -06:00
Heinz Mauelshagen
ed58672029 metadata: comments
log_count, nosync, stripes, stripe_size, ... are also used for raid.
2017-03-08 15:13:59 +01:00
Heinz Mauelshagen
3a5561e5ab raid: define seg->extents_copied
seg->extents_copied has to be defined properly on reducing
the size of a raid LV or conversion from raid5 with 1 stripe
to raid1 will fail.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-07 23:28:09 +01:00
Heinz Mauelshagen
aedac100f9 report: correct lv_size for 2-legged raid5
Reshaping a raid5 LV to one stripe aiming to convert it to
raid1 (and optionally to linear) reports the wrong LV size
when still having reshape space allocated.
2017-03-07 22:36:50 +01:00
Heinz Mauelshagen
18bbeec825 raid: fix raid LV resizing
The lv_extend/_lv_reduce API doesn't cope with resizing RaidLVs
with allocated reshape space and ongoing conversions.  Prohibit
resizing during conversions and remove the reshape space before
processing resize.  Add missing seg->data_copies initialisation.

Fix typo/comment.
2017-03-07 22:05:23 +01:00
Heinz Mauelshagen
9ed11e9191 raid: cleanup _lv_set_image_lvs_start_les()
Avoid second loop.
2017-03-07 21:55:19 +01:00
Heinz Mauelshagen
05aceaffbd lvconvert: adjust --stripes on raid10 convert
For the time being raid10 is limited to an even total number of stripes
and 2 data copies.  The number of stripes provided on creation
of a raid10(_near) LV with -i/--stripes gets doubled to define
that even total number of stripes (i.e. images).

Apply the same to disk-adding conversions (reshapes) with
"lvconvert --stripes RaidLV" (e.g. 2 stripes = 4 images
total converted to 3 stripes = 6 images total).

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-07 21:36:03 +01:00
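
A hypothetical example of such a conversion (VG/LV names made up): on a raid10 LV created with 2 stripes (4 images total),

  # lvconvert --stripes 3 vg/raid10_lv

would reshape it to 3 stripes, i.e. 6 images total.
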
Heinz Mauelshagen
f4b30b0dae report: display proper LV size for reshapable RaidLVs
Subtract reshape space when reporting visible lv_size on RaidLV.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-07 18:47:20 +01:00
Heinz Mauelshagen
43fb4aa69b test: add delay to lvchange-raid1-writemostly.sh
Commit 8ab0725077 introduced this new test.

Add a read delay to the PVs to avoid a race
in the script causing the test to fail.

Correct comment.
2017-03-07 15:18:13 +01:00
Heinz Mauelshagen
872932a0fb man lvs: describe new 'R' volume health character 2017-03-06 19:33:10 +01:00
David Teigland
0b019c5406 man/help: improve stripes option wording 2017-03-06 12:20:33 -06:00
David Teigland
ef97360866 man lvextend: mention segment type 2017-03-06 11:27:56 -06:00
Heinz Mauelshagen
17838e6439 test: fix typo 2017-03-03 23:22:29 +01:00
David Teigland
11589891d7 man: move the full UNIT description
Part of the UNIT description was still living in the
--size description.  Move it to the Size[UNIT] description
since it is used by other options, not just --size.
2017-03-03 16:12:02 -06:00
David Teigland
b6c4b7cfb0 man/help: poolmetadatasize option can take relative value
In lvcreate/lvconvert, --poolmetadatasize can only be an
absolute value, but in lvresize/lvextend, --poolmetadatasize
can be given a + relative value.

The val types currently only support relative values that
go in both directions (+|-).  Further work is needed to add
val types that can be relative in only one direction, and
to switch various option values to one of those depending on
the command name.  Then poolmetadatasize will not appear
with a +|- prefix in lvcreate/lvconvert, and will
appear with only the + prefix in lvresize/lvextend.
2017-03-03 16:12:02 -06:00
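
For illustration (hypothetical names), the asymmetry described above looks like:

  # lvcreate --type thin-pool -L 10G --poolmetadatasize 128M -n pool vg
  # lvextend --poolmetadatasize +128M vg/pool

lvcreate only takes an absolute metadata size, while lvextend/lvresize also accept a + relative value.
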
Heinz Mauelshagen
c5b6c9ad44 report: raid enhancements for --select
Enhance the raid report functions for the recently added LV fields
reshape_len, reshape_len_le, data_offset, new_data_offset, data_copies,
data_stripes and parity_chunks to cope with "lvs --select".

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-03 22:29:50 +01:00
Heinz Mauelshagen
6dea1ed5ae man: fix term in lvmraid(7)
Adjust commit af7c8e7106 to use "image/leg".

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-03 22:24:53 +01:00
David Teigland
e4ef3d04ad help: show short opt with long opt
e.g. show -n|--name instead of just --name
2017-03-03 15:14:18 -06:00
Heinz Mauelshagen
547bdb63e1 dev_manager: remove reshape message
The dm-raid target doesn't support a "reshape" message.
2017-03-03 22:10:21 +01:00
David Teigland
9a50df291a man/help: rework extents and size output
Clean up and correct the details around --extents and --size.

lvcreate/lvresize/lvreduce/lvextend all now display the
extents option in usages.

The Size and Number value variables for --size and --extents
are now displayed without the [+|-] prefix for lvcreate.
2017-03-03 14:23:50 -06:00
David Teigland
e7ee89d80b man/help: improve the PV range description 2017-03-03 11:15:27 -06:00
David Teigland
2a5e24580a args: in cachemode option fix passthrough value 2017-03-03 10:53:18 -06:00
David Teigland
191a2517be args: update select description
mention --select help and --options help.
2017-03-03 09:53:11 -06:00
David Teigland
1a0d57f895 man lvcreate: show extents option
Display --extents as an option in each cmd def,
in addition to the special notes about how
--size and --extents are alternatives.
2017-03-02 16:58:19 -06:00
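
For example (hypothetical names), the two alternative forms are:

  # lvcreate -L 1G -n lv1 vg
  # lvcreate -l 256 -n lv1 vg

--size/-L takes a size with an optional unit, while --extents/-l takes a number of logical extents (256 extents of the default 4MiB extent size equal 1G).
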
David Teigland
9a62767f2d man lvcreate: remove the extents prefix
This applies the same hack to --extents (dropping
the [+|-] prefix), as commit b7831fc14a did for --size.
2017-03-02 16:58:19 -06:00
David Teigland
5d39927f22 help: show extents option for lvcreate
A special case is needed to display
--extents for lvcreate since the cmd defs
treat --extents as an automatic alternative
to --size (to avoid duplicating every cmd def).
2017-03-02 16:58:19 -06:00
David Teigland
9b23d9bfe4 help: print info about special options and variables
when --longhelp is used
2017-03-02 16:58:19 -06:00
David Teigland
f350283398 lvcreate: munge size value in help output
Add hack to omit the [+|-] from the --size
value.  Same hack exists in man generator.
2017-03-02 16:58:19 -06:00
Heinz Mauelshagen
af7c8e7106 man: cover reshaping
Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-02 22:24:19 +01:00
Heinz Mauelshagen
ca859b5149 lvs: enhance stripes and region size field descriptions
Now that we have the "data_stripes" field key, adjust the "stripes" field description.
Enhance the "regionsize" field description to cover raids as well.
2017-03-02 22:22:16 +01:00
David Teigland
d3bcec5993 man: change the synopsis option style
Remove the required/optional words because it
should already be evident from the use of [ ].
2017-03-02 14:08:59 -06:00
David Teigland
910918d1c2 lvcreate: allow chunksize option when creating cache 2017-03-02 13:58:27 -06:00
David Teigland
6360ba3d2d commands: handle including an optional opt multiple times
When a cmd def includes multiple sets of options (OO_FOO),
allow multiple OO_FOO sets to contain the same option and
avoid repeating it in the cmd def.
2017-03-02 13:52:06 -06:00
David Teigland
b7831fc14a lvcreate/lvresize: the --size option accepts signed values
There was confusion in the code about whether or not the
--size option accepted a sign.  Make it consistent and clear
that it does.

This exposes a new problem in that an option can only
accept one value type, e.g. --size can only accept a
signed number, it cannot accept a positive or negative
number for some commands and reject negative numbers for
others.

In practice, lvcreate accepts only positive --size
values and lvresize accepts positive or negative --size
values.  There is currently no way to encode this
difference.  Until that is fixed, the man page output
is hacked to avoid printing the [+|-] prefix for sizes
in lvcreate.
2017-03-02 12:53:01 -06:00
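
A sketch of the practical difference (hypothetical names):

  # lvcreate -L 10G -n lv1 vg      (plain, positive size only)
  # lvresize -L +1G vg/lv1         (grow by 1G)
  # lvresize -L -1G vg/lv1         (shrink by 1G)
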
Alasdair G Kergon
70c1fa3764 tools: Fix overriding of current_settings.
Settings specified in other command line args take precedence over
profiles and --config, which takes precedence over settings in actual
config files.

Since commit 1e2420bca8 ('commands: new
method for defining commands') commands like this:
  lvchange --config 'global/test=1' -ay vg
have been printing the 'TEST MODE' message, but nevertheless making
real changes.
2017-03-02 16:41:41 +00:00
David Teigland
8df3f300ba commands: adjust syntax error message 2017-03-02 09:46:41 -06:00
David Teigland
b76852bf35 man lvchange: mention special activation vals
used by lvmlockd and clvmd.
2017-03-02 09:31:06 -06:00
Tony Asleson
26ca308ba9 lvmdbustest.py: Remove unused variable
Not needed with the new validation function.
2017-03-01 17:47:04 -06:00
Tony Asleson
7b0371e74e lvmdbustest.py: Validate LV lookups
Ensure that the LV lookups work as expected for newly created LVs.
2017-03-01 17:47:04 -06:00
Tony Asleson
83249f3327 lvmdbustest.py: Validate PV device
Validate device lookup after PV creation.
2017-03-01 17:47:04 -06:00
Tony Asleson
4c89d3794c lvmdbustest.py: Re-name validation function
Make this name generic as we can use it for different types.
2017-03-01 17:47:04 -06:00
Tony Asleson
10c3d94159 lvmdbustest.py: Verify lookups work immediately after vg create 2017-03-01 17:47:04 -06:00
Tony Asleson
157948b5a5 lvmdbustest.py: Use _lookup function
Be consistent in using this helper function for dbus object lookups.
2017-03-01 17:47:04 -06:00
David Teigland
c25b95e2ef man: fix typo 2017-03-01 16:59:51 -06:00
David Teigland
51dfbf1fb3 commands: tweak some descriptions 2017-03-01 16:59:51 -06:00
Heinz Mauelshagen
daf1d4cadc lvconvert: add new reporting fields for reshaping
Commit 48778bc503 introduced new RAID reshaping related report fields.

The inclusion of segtype.h in properties.c isn't mandatory; remove it.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-01 19:30:52 +01:00
Heinz Mauelshagen
fb42874a4f lvconvert: libdm RAID API compatibility versioning; remove new function
Commit 80a6de616a versioned the dm_tree_node_add_raid_target_with_params()
and dm_tree_node_add_raid_target() APIs for compatibility reasons.

There's no user of the latter function, remove it.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-01 18:58:48 +01:00
Heinz Mauelshagen
48778bc503 lvconvert: add new reporting fields for reshaping
During an ongoing reshape, the MD kernel runtime reads stripes relative
to data_offset and starts storing the reshaped stripes (with new raid
layout and/or new stripesize and/or new number of stripes) relative
to new_data_offset.  This avoids writing over any data in place,
which would be non-atomic by nature, and thus keeps the transition
recoverable without data loss.  MD uses the term out-of-place
reshaping for it.

There are 2 other areas we don't have report capability for:
- number of data stripes vs. total stripes
  (e.g. raid6 with 7 stripes total has 5 data stripes)
- number of (rotating) parity/syndrome chunks
  (e.g. raid6 with 7 stripes total has 2 parity chunks; one
   per stripe for the P-Syndrome and another one for the Q-Syndrome)

Thus, add the following reportable keys:

- reshape_len      (in current units)
- reshape_len_le   (in logical extents)
- data_offset      (in sectors)
- new_data_offset  (     "    )
- data_stripes
- parity_chunks

Enhance lvchange-raid.sh, lvconvert-raid-reshape-linear_to_striped.sh,
lvconvert-raid-reshape-striped_to_linear.sh, lvconvert-raid-reshape.sh
and lvconvert-raid-takeover.sh to make use of new keys.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-01 18:50:35 +01:00
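
The new keys can be selected with -o like any other report field; a hypothetical report command (VG name made up):

  # lvs -a -o name,segtype,reshape_len,reshape_len_le,data_offset,new_data_offset,data_stripes,parity_chunks vg
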
David Teigland
62abae1525 man: put some commands into advanced usage section
and separate commands with --
2017-03-01 10:22:48 -06:00
David Teigland
eb9586bd3b commands: SECONDARY flag changes in cmd defs
Add/remove the SECONDARY_SYNTAX flag to cmd defs.
cmd defs with this flag will be listed under the
ADVANCED USAGE man page section, so that the main
USAGE section contains the most common commands
without distraction.

- When multiple cmd defs do the same thing, one variant
  can be displayed in the first list.
- Very advanced, unusual or uncommon commands should be
  in the second list.
2017-03-01 10:22:48 -06:00
Heinz Mauelshagen
d6dd700bf7 raid: rework _raid_target_present()
The recently added check for reshaping in this function called for
a cleanup to avoid proliferating it with more explicit conditionals.

Base the reshaping check on the given _features array.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-03-01 14:52:23 +01:00
Heinz Mauelshagen
7a064303fe lvconvert: add missing reshape_len initialization
An initialization was missing when converting striped to raid0(_meta),
causing an uninitialized reshape_len in the new component LVs' first segment.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-28 23:29:03 +01:00
Heinz Mauelshagen
964114950c lvconvert: adjust minimum region size check
The imposed minimum region size can cause rejection on
disk removing reshapes.  Lower it to avoid that.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-28 23:10:37 +01:00
David Teigland
1828822bd8 help: print full usage for lvm help all 2017-02-28 15:58:14 -06:00
Heinz Mauelshagen
ce1e5b9991 lvconvert: adjust reshaping check to target version
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.11&id=b08c6076782
sets the dm-raid target version to 1.10.1.

Adjust the condition to set RAID_RESHAPE_FEATURE to it.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-28 22:46:25 +01:00
Heinz Mauelshagen
80a6de616a lvconvert: libdm RAID API compatibility versioning
Commit 27384c52cf lowered the maximum number of devices
back to 64 for compatibility.

Because more members have been added to the API in
'struct dm_tree_node_raid_params *', we have to version
the public libdm RAID API to not break any existing users.

Changes:

- keep the previous 'struct dm_tree_node_raid_params' and
  dm_tree_node_add_raid_target_with_params()/dm_tree_node_add_raid_target()
  in order to expose the already released public RAID API

- introduce 'struct dm_tree_node_raid_params_v2' and additional functions
  dm_tree_node_add_raid_target_with_params_v2()/dm_tree_node_add_raid_target_v2()
  to be used by the new lvm2 lib reshape extensions

With this new API, the bitfields for rebuild/writemostly legs in
'struct dm_tree_node_raid_params_v2' can be raised to 256 bits
again (253 legs maximum supported in MD kernel).

Mind that we can limit the maximum usable number via the
DEFAULT_RAID{1}_MAX_IMAGES definition in defaults.h.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-28 22:34:00 +01:00
David Teigland
21456dcf7f commands: include lvconvert cache options as group 2017-02-28 13:47:46 -06:00
David Teigland
89661981e8 man: fix syntax for PV ranges 2017-02-28 12:22:12 -06:00
David Teigland
4a14617dc4 commands: remove lvconvert raid rule
A raid0 LV also needs to be converted to other
raid levels, so this rule should be removed entirely.
2017-02-27 17:06:08 -06:00
David Teigland
f9d28f1aec man: mention regionsize default is in lvm.conf 2017-02-27 17:05:20 -06:00
David Teigland
998151e83e commands: fix lvconvert raid rule
Recent rule change was incorrect.
We want to allow 'lvconvert --type raid' on raid1 LVs.
2017-02-27 16:33:38 -06:00
David Teigland
8d0df0c011 commands: fixes for recent raid changes
- Combine the equivalent lvconvert --type raid defs.
  (Two cmd defs must be different without relying
  on the LV type, which is not known at the time the
  cmd def is matched.)

- Remove unused optional options from lvconvert --stripes,
  and lvconvert --stripesize.

- Use Number for --stripes_long val type.

- Combine the cmd def for raid reshape cleanup into the
  existing start_poll cmd def (they were duplicate defs).
  Calls into the raid code from a poll operation will be
  added.
2017-02-27 14:44:00 -06:00
Heinz Mauelshagen
27384c52cf lvconvert: limit libdm to maximum of 64 RAID devices
Commit 64a2fad5d6 raised the maximum number of RAID devices to 64.

Commit e2354ea344 introduced RAID_BITMAP_SIZE as 4 to have
256 bits (4 * 64 bit array members), thus changing the libdm API
unnecessarily for the time being.

To not change the API, reduce RAID_BITMAP_SIZE to 1.
Remove an unneeded definition of it from libdm-common.h.

If we ever decide to raise past 64, we'll version the API.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-27 21:42:15 +01:00
Alasdair G Kergon
c41e999488 git: Upstream repository moved to sourceware.org
The fedorahosted git repository shuts down tomorrow:
  https://communityblog.fedoraproject.org/fedorahosted-sunset-2017-02-28/

Our upstream git repository has moved back to sourceware.org.
Mailing list hosting is not changing.

Gitweb:
  https://www.sourceware.org/git/?p=lvm2

Git:
  git://sourceware.org/git/lvm2.git
  ssh://sourceware.org/git/lvm2.git
  http://sourceware.org/git/lvm2.git

Example command to change the origin of a repository clone:
  Public:
    git remote set-url origin git://sourceware.org/git/lvm2.git
  Committers:
    git remote set-url origin git+ssh://sourceware.org/git/lvm2.git
2017-02-27 14:05:50 +00:00
David Teigland
4f7631b4ad man: change option sorting in synopsis
The options list was sorted as:
- options with both long and short forms, alphabetically
- options with only long form, alphabetically

This was done only for the visual effect.  Change to
sort alphabetically by long opt, without regard to
short forms.
2017-02-24 15:11:18 -06:00
David Teigland
5f6bdf707d man: add ENVIRONMENT VARIABLES section 2017-02-24 15:05:17 -06:00
David Teigland
84cceaf9b9 lvconvert: fix handling args in combining snapshots
Fixes commit 286d39ee3c, which was correct except
for a reversed strstr.  Now uses strchr, and modifies
a copy of the name so the original argv is preserved.
2017-02-24 14:17:58 -06:00
David Teigland
74ba326007 man: use Size variable for a number with unit
Define a separate variable type Size to represent
a number that takes an optional UNIT.
2017-02-24 13:44:05 -06:00
Heinz Mauelshagen
189fa64793 lvconvert: impose region size constraints
When requesting a regionsize change during conversions, check
for constraints or the command may fail in the kernel in case
the region size is too small or too large, thus leaving any
new SubLVs behind.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 07:27:43 +01:00
Heinz Mauelshagen
3bdc4045c2 lvconvert: fix 2 issues identified in testing
Allow regionsize on upconvert from linear:
fix related commit 2574d3257a to actually work

Related: rhbz1394427

Remove setting raid5_n on conversions from raid1
as of commit 932db3db53 because any raid5 mapping
may be requested.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:58:45 +01:00
Heinz Mauelshagen
d768fbe010 WHATS_NEW: add entry 2017-02-24 05:24:59 +01:00
Heinz Mauelshagen
76f60cc430 lvconvert: add missed new test scripts for reshaping
Add aforementioned but forgotten new test scripts
lvconvert-raid-reshape-linear_to_striped.sh,
lvconvert-raid-reshape-striped_to_linear.sh and
lvconvert-raid-reshape.sh

Those presume dm-raid target version 1.10.2
provided by a following kernel patch.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:59 +01:00
Heinz Mauelshagen
2574d3257a lvconvert: allow regionsize on upconvert from linear
Allow providing the regionsize with "lvconvert -m1 -R N" on
upconverts from linear and on N -> M raid1 leg conversions.

Resolves: rhbz1394427
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
64a2fad5d6 lvconvert/lvcreate: raise maximum number of raid images
Because of constraints in renaming shifted rimage/rmeta LV names,
the current RaidLV limit is a maximum of 10 SubLV pairs.

With the previous introduction of the reshaping infrastructure that
constraint got removed.

The kernel supports 253 since dm-raid target 1.9.0, older kernels 64.

Raise the maximum number of RaidLV rimage/rmeta pairs to 64.
If we want to raise past 64, we have to introduce a check for
the kernel supporting it in lvcreate/lvconvert.

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
34caf83172 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces the changes to call the reshaping infrastructure
from lv_raid_convert().

Changes:
- add reshaping calls from lv_raid_convert()
- add command definitions for reshaping to tools/command-lines.in
- fix raid_rimage_extents()
- add 2 new test scripts lvconvert-raid-reshape-linear_to_striped.sh
  and lvconvert-raid-reshape-striped_to_linear.sh to test
  the linear <-> striped multi-step conversions
- add lvconvert-raid-reshape.sh reshaping tests
- enhance lvconvert-raid-takeover.sh with new raid10 tests

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
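
Hypothetical reshape requests enabled by this infrastructure (LV names made up; the exact accepted forms are defined in tools/command-lines.in):

  # lvconvert --stripes 5 vg/raid5_lv         (add stripes)
  # lvconvert --stripesize 256k vg/raid5_lv   (change stripe size)
  # lvconvert --type raid6 vg/raid5_lv        (change raid level)
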
Heinz Mauelshagen
f79bd30a8b lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Change:
- allow raid_rimage_extents() to calculate raid10
- remove an __unused__ attribute

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
1784cc990e lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Change:
- add missing raid1 <-> raid5 conversions to support
  linear <-> raid5 <-> raid0(_meta)/striped conversions
- rename related new takeover functions to
  _takeover_from_raid1_to_raid5 and _takeover_from_raid5_to_raid1,
  because a reshape to > 2 legs is only possible with
  raid5 layout

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
2d74de3f05 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Change:
- enhance _clear_meta_lvs() to support raid0 allowing
  raid0_meta -> raid10 conversions to succeed by clearing
  the raid0 rmeta images or the kernel will fail because
  of discovering reordered raid devices

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
34a8d3c2fd lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- enhance _raid45610_to_raid0_or_striped_wrapper() to support
  raid5_n with 2 areas to raid1 conversion to allow for
  striped/raid0(_meta)/raid4/5/6 -> raid1/linear conversions;
  rename it to _takeover_downconvert_wrapper to discontinue the
  illegible function name
- enhance _striped_or_raid0_to_raid45610_wrapper()  to support
  raid1 with 2 areas to raid5* conversions to allow for
  linear/raid1 -> striped/raid0(_meta)/raid4/5/6 conversions;
  rename it to _takeover_upconvert_wrapper for the same reason

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
932db3db53 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add missing possible reshape conversions and conversion options
  to allow/prohibit changing stripesize or number of stripes
- enhance setting convenient raid types in reshape conversions
  (e.g. raid1 with 2 legs -> raid5_n)

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
fe18e5e77a lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add _raid_reshape() using the pre/post callbacks and
  the stripes add/remove reshape functions introduced before
- add a _reshape_requested() function checking whether a reshape
  was requested

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
929cf4b73c lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add vg metadata update functions
- add pre and post activation callback functions for
  proper sequencing of sub lv activations during reshaping
- move and enhance _lv_update_reload_fns_reset_eliminate_lvs()
  to support pre and post activation callbacks
- add _reset_flags_passed_to_kernel() which resets any
  rebuild/reshape flags after they have been passed into the kernel
  and sets the SubLV remove-after-reshape flags on legs to be removed

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
4de0e692db lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add function to support disk adding reshapes
- add function to support disk removing reshapes
- add function to support layout (e.g. raid5ls -> raid5_rs)
  or stripesize reshaping

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
7d39b4d5e7 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add function providing the state of a reshaped RaidLV
- add function to adjust the size of a RaidLV that was
  reshaped to add/remove stripes

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
92691e345d lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces more local infrastructure to raid_manip.c
used by followup patches.

Changes:
- add lv_raid_data_copies returning raid type specific number;
  needed for raid10 with more than 2 data copies
- remove _shift_and_rename_image_components() constraint
  to support more than 10 raid legs
- add function to calculate total rimage length used by out-of-place
  reshape space allocation
- add out-of-place reshape space alloc/relocate/free functions
- move _data_rimages_count() used by reshape space alloc/relocate functions

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
c1865b0a86 raid: typo 2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
b499d96215 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces local infrastructure to raid_manip.c
used by followup patches.

Add functions:
- to check that reshaping is supported in the target attributes
- to return the device health string needed to check that
  the raid device is ready to reshape

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
Heinz Mauelshagen
e2354ea344 lvconvert: add infrastructure for RaidLV reshaping support
In order to support striped raid5/6/10 LV reshaping (change
of LV type, stripesize or number of legs), this patch
introduces infrastructure prerequisites to be used
by raid_manip.c extensions in followup patches.

This base is needed for allocation of out-of-place
reshape space required by the MD raid personalities to
avoid writing over data in-place when reading off the
current RAID layout or number of legs and writing out
the new layout or to a different number of legs
(i.e. restripe).

Changes:
- add members reshape_len to 'struct lv_segment' to store
  out-of-place reshape length per component rimage
- add member data_copies to struct lv_segment
  to support more than 2 raid10 data copies
- make alloc_lv_segment() aware of both reshape_len and data_copies
- adjust all alloc_lv_segment() callers to the new API
- add functions to retrieve the current data offset (needed for
  out-of-place reshaping space allocation) and the devices count
  from the kernel
- make libdm deptree code aware of reshape_len
- add LV flags for disk add/remove reshaping
- support import/export of the new 'struct lv_segment' members
- enhance lv_extend/_lv_reduce to cope with reshape_len
- add seg_is_*/segtype_is_* macros related to reshaping
- add target version check for reshaping
- grow rebuilds/writemostly bitmaps to 256 bits to support the kernel maximum
- enhance libdm deptree code to support data_offset (out-of-place reshaping)
  and delta_disk (legs add/remove reshaping) target arguments

Related: rhbz834579
Related: rhbz1191935
Related: rhbz1191978
2017-02-24 05:20:58 +01:00
David Teigland
ffe3ca26e0 man: improve line breaks
Borrow tricks from dmsetup man page to improve
the line break and indentation using .ad l, .ad b,
and soft break \%.
2017-02-23 17:06:42 -06:00
David Teigland
3fd3c9430d man/help: change syntax to UNIT
(Change to recent commit 3f4ecaf8c2.)

Use --foo Number[k|UNIT] to indicate that
the default units of the number is k, but other
units listed below are also accepted.

Previously, underlined/italic Unit was used,
like other variables, but this UNIT is more
like a shortcut than an actual variable.
2017-02-23 14:24:28 -06:00
69 changed files with 6012 additions and 1224 deletions

README

@@ -6,11 +6,12 @@ Installation instructions are in INSTALL.
There is no warranty - see COPYING and COPYING.LIB.
Tarballs are available from:
ftp://sourceware.org/pub/lvm2/
ftp://sources.redhat.com/pub/lvm2/
The source code is stored in git:
http://git.fedorahosted.org/git/lvm2.git
git clone git://git.fedorahosted.org/git/lvm2.git
https://sourceware.org/git/?p=lvm2.git
git clone git://sourceware.org/git/lvm2.git
Mailing list for general discussion related to LVM2:
linux-lvm@redhat.com

WHATS_NEW

@@ -1,5 +1,7 @@
Version 2.02.169 -
=====================================
Upstream git moved to https://sourceware.org/git/?p=lvm2
Support conversion of raid type, stripesize and number of disks
Reject writemostly/writebehind in lvchange during resynchronization.
Deactivate active origin first before removal for improved workflow.
Fix regression of accepting options --type and -m with lvresize (2.02.158).

configure

@@ -702,6 +702,7 @@ BLKDEACTIVATE
FSADM
ELDFLAGS
DM_LIB_PATCHLEVEL
DMFILEMAPD
DMEVENTD_PATH
DMEVENTD
DL_LIBS
@@ -737,6 +738,7 @@ CLDNOWHOLEARCHIVE
CLDFLAGS
CACHE
BUILD_NOTIFYDBUS
BUILD_DMFILEMAPD
BUILD_LOCKDDLM
BUILD_LOCKDSANLOCK
BUILD_LVMLOCKD
@@ -960,6 +962,7 @@ enable_use_lvmetad
with_lvmetad_pidfile
enable_use_lvmpolld
with_lvmpolld_pidfile
enable_dmfilemapd
enable_notify_dbus
enable_blkid_wiping
enable_udev_systemd_background_jobs
@@ -1694,6 +1697,7 @@ Optional Features:
--disable-use-lvmlockd disable usage of LVM lock daemon
--disable-use-lvmetad disable usage of LVM Metadata Daemon
--disable-use-lvmpolld disable usage of LVM Poll Daemon
--enable-dmfilemapd enable the dmstats filemap daemon
--enable-notify-dbus enable LVM notification using dbus
--disable-blkid_wiping disable libblkid detection of signatures when wiping
and use native code instead
@@ -12074,6 +12078,21 @@ cat >>confdefs.h <<_ACEOF
_ACEOF
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build dmfilemapd" >&5
$as_echo_n "checking whether to build dmfilemapd... " >&6; }
# Check whether --enable-dmfilemapd was given.
if test "${enable_dmfilemapd+set}" = set; then :
enableval=$enable_dmfilemapd; DMFILEMAPD=$enableval
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $DMFILEMAPD" >&5
$as_echo "$DMFILEMAPD" >&6; }
BUILD_DMFILEMAPD=$DMFILEMAPD
$as_echo "#define DMFILEMAPD 1" >>confdefs.h
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build notifydbus" >&5
$as_echo_n "checking whether to build notifydbus... " >&6; }
@@ -15123,6 +15142,24 @@ done
fi
if test "$DMFILEMAPD" = yes; then
for ac_header in sys/inotify.h
do :
ac_fn_c_check_header_mongrel "$LINENO" "sys/inotify.h" "ac_cv_header_sys_inotify_h" "$ac_includes_default"
if test "x$ac_cv_header_sys_inotify_h" = xyes; then :
cat >>confdefs.h <<_ACEOF
#define HAVE_SYS_INOTIFY_H 1
_ACEOF
else
hard_bailout
fi
done
fi
################################################################################
if test -n "$ac_tool_prefix"; then
# Extract the first word of "${ac_tool_prefix}modprobe", so it can be a program name with args.
@@ -15582,11 +15619,13 @@ LVM_LIBAPI=`echo "$VER" | $AWK -F '[()]' '{print $2}'`
################################################################################
ac_config_files="$ac_config_files Makefile make.tmpl daemons/Makefile daemons/clvmd/Makefile daemons/cmirrord/Makefile daemons/dmeventd/Makefile daemons/dmeventd/libdevmapper-event.pc daemons/dmeventd/plugins/Makefile daemons/dmeventd/plugins/lvm2/Makefile daemons/dmeventd/plugins/raid/Makefile daemons/dmeventd/plugins/mirror/Makefile daemons/dmeventd/plugins/snapshot/Makefile daemons/dmeventd/plugins/thin/Makefile daemons/lvmdbusd/Makefile daemons/lvmdbusd/path.py daemons/lvmetad/Makefile daemons/lvmpolld/Makefile daemons/lvmlockd/Makefile conf/Makefile conf/example.conf conf/lvmlocal.conf conf/command_profile_template.profile conf/metadata_profile_template.profile include/.symlinks include/Makefile lib/Makefile lib/format1/Makefile lib/format_pool/Makefile lib/locking/Makefile lib/mirror/Makefile lib/replicator/Makefile include/lvm-version.h lib/raid/Makefile lib/snapshot/Makefile lib/thin/Makefile lib/cache_segtype/Makefile libdaemon/Makefile libdaemon/client/Makefile libdaemon/server/Makefile libdm/Makefile libdm/libdevmapper.pc liblvm/Makefile liblvm/liblvm2app.pc man/Makefile po/Makefile python/Makefile python/setup.py scripts/blkdeactivate.sh scripts/blk_availability_init_red_hat scripts/blk_availability_systemd_red_hat.service scripts/clvmd_init_red_hat scripts/cmirrord_init_red_hat scripts/com.redhat.lvmdbus1.service scripts/dm_event_systemd_red_hat.service scripts/dm_event_systemd_red_hat.socket scripts/lvm2_cluster_activation_red_hat.sh scripts/lvm2_cluster_activation_systemd_red_hat.service scripts/lvm2_clvmd_systemd_red_hat.service scripts/lvm2_cmirrord_systemd_red_hat.service scripts/lvm2_lvmdbusd_systemd_red_hat.service scripts/lvm2_lvmetad_init_red_hat scripts/lvm2_lvmetad_systemd_red_hat.service scripts/lvm2_lvmetad_systemd_red_hat.socket scripts/lvm2_lvmpolld_init_red_hat scripts/lvm2_lvmpolld_systemd_red_hat.service scripts/lvm2_lvmpolld_systemd_red_hat.socket scripts/lvm2_lvmlockd_systemd_red_hat.service scripts/lvm2_lvmlocking_systemd_red_hat.service scripts/lvm2_monitoring_init_red_hat scripts/lvm2_monitoring_systemd_red_hat.service scripts/lvm2_pvscan_systemd_red_hat@.service scripts/lvm2_tmpfiles_red_hat.conf scripts/lvmdump.sh scripts/Makefile test/Makefile test/api/Makefile test/unit/Makefile tools/Makefile udev/Makefile unit-tests/datastruct/Makefile unit-tests/regex/Makefile unit-tests/mm/Makefile"
ac_config_files="$ac_config_files Makefile make.tmpl daemons/Makefile daemons/clvmd/Makefile daemons/cmirrord/Makefile daemons/dmeventd/Makefile daemons/dmeventd/libdevmapper-event.pc daemons/dmeventd/plugins/Makefile daemons/dmeventd/plugins/lvm2/Makefile daemons/dmeventd/plugins/raid/Makefile daemons/dmeventd/plugins/mirror/Makefile daemons/dmeventd/plugins/snapshot/Makefile daemons/dmeventd/plugins/thin/Makefile daemons/dmfilemapd/Makefile daemons/lvmdbusd/Makefile daemons/lvmdbusd/path.py daemons/lvmetad/Makefile daemons/lvmpolld/Makefile daemons/lvmlockd/Makefile conf/Makefile conf/example.conf conf/lvmlocal.conf conf/command_profile_template.profile conf/metadata_profile_template.profile include/.symlinks include/Makefile lib/Makefile lib/format1/Makefile lib/format_pool/Makefile lib/locking/Makefile lib/mirror/Makefile lib/replicator/Makefile include/lvm-version.h lib/raid/Makefile lib/snapshot/Makefile lib/thin/Makefile lib/cache_segtype/Makefile libdaemon/Makefile libdaemon/client/Makefile libdaemon/server/Makefile libdm/Makefile libdm/libdevmapper.pc liblvm/Makefile liblvm/liblvm2app.pc man/Makefile po/Makefile python/Makefile python/setup.py scripts/blkdeactivate.sh scripts/blk_availability_init_red_hat scripts/blk_availability_systemd_red_hat.service scripts/clvmd_init_red_hat scripts/cmirrord_init_red_hat scripts/com.redhat.lvmdbus1.service scripts/dm_event_systemd_red_hat.service scripts/dm_event_systemd_red_hat.socket scripts/lvm2_cluster_activation_red_hat.sh scripts/lvm2_cluster_activation_systemd_red_hat.service scripts/lvm2_clvmd_systemd_red_hat.service scripts/lvm2_cmirrord_systemd_red_hat.service scripts/lvm2_lvmdbusd_systemd_red_hat.service scripts/lvm2_lvmetad_init_red_hat scripts/lvm2_lvmetad_systemd_red_hat.service scripts/lvm2_lvmetad_systemd_red_hat.socket scripts/lvm2_lvmpolld_init_red_hat scripts/lvm2_lvmpolld_systemd_red_hat.service scripts/lvm2_lvmpolld_systemd_red_hat.socket scripts/lvm2_lvmlockd_systemd_red_hat.service scripts/lvm2_lvmlocking_systemd_red_hat.service scripts/lvm2_monitoring_init_red_hat scripts/lvm2_monitoring_systemd_red_hat.service scripts/lvm2_pvscan_systemd_red_hat@.service scripts/lvm2_tmpfiles_red_hat.conf scripts/lvmdump.sh scripts/Makefile test/Makefile test/api/Makefile test/unit/Makefile tools/Makefile udev/Makefile unit-tests/datastruct/Makefile unit-tests/regex/Makefile unit-tests/mm/Makefile"
cat >confcache <<\_ACEOF
# This file is a shell script that caches the results of configure
@@ -16294,6 +16333,7 @@ do
"daemons/dmeventd/plugins/mirror/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/mirror/Makefile" ;;
"daemons/dmeventd/plugins/snapshot/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/snapshot/Makefile" ;;
"daemons/dmeventd/plugins/thin/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmeventd/plugins/thin/Makefile" ;;
"daemons/dmfilemapd/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/dmfilemapd/Makefile" ;;
"daemons/lvmdbusd/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/lvmdbusd/Makefile" ;;
"daemons/lvmdbusd/path.py") CONFIG_FILES="$CONFIG_FILES daemons/lvmdbusd/path.py" ;;
"daemons/lvmetad/Makefile") CONFIG_FILES="$CONFIG_FILES daemons/lvmetad/Makefile" ;;

configure.ac

@@ -1271,6 +1271,16 @@ fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMPOLLD, [$DEFAULT_USE_LVMPOLLD],
[Use lvmpolld by default.])
################################################################################
dnl -- Check dmfilemapd
AC_MSG_CHECKING(whether to build dmfilemapd)
AC_ARG_ENABLE(dmfilemapd, AC_HELP_STRING([--enable-dmfilemapd],
[enable the dmstats filemap daemon]),
DMFILEMAPD=$enableval)
AC_MSG_RESULT($DMFILEMAPD)
BUILD_DMFILEMAPD=$DMFILEMAPD
AC_DEFINE([DMFILEMAPD], 1, [Define to 1 to enable the device-mapper filemap daemon.])
################################################################################
dnl -- Build notifydbus
AC_MSG_CHECKING(whether to build notifydbus)
@@ -1855,6 +1865,10 @@ if test "$UDEV_SYNC" = yes; then
AC_CHECK_HEADERS(sys/ipc.h sys/sem.h,,hard_bailout)
fi
if test "$DMFILEMAPD" = yes; then
AC_CHECK_HEADERS([sys/inotify.h],,hard_bailout)
fi
################################################################################
AC_PATH_TOOL(MODPROBE_CMD, modprobe)
@@ -1994,6 +2008,7 @@ AC_SUBST(BUILD_LVMPOLLD)
AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
AC_SUBST(BUILD_LOCKDDLM)
AC_SUBST(BUILD_DMFILEMAPD)
AC_SUBST(BUILD_NOTIFYDBUS)
AC_SUBST(CACHE)
AC_SUBST(CFLAGS)
@@ -2043,6 +2058,7 @@ AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(DMEVENTD)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DMFILEMAPD)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
AC_SUBST(FSADM)
@@ -2158,6 +2174,7 @@ daemons/dmeventd/plugins/raid/Makefile
daemons/dmeventd/plugins/mirror/Makefile
daemons/dmeventd/plugins/snapshot/Makefile
daemons/dmeventd/plugins/thin/Makefile
daemons/dmfilemapd/Makefile
daemons/lvmdbusd/Makefile
daemons/lvmdbusd/path.py
daemons/lvmetad/Makefile

daemons/Makefile.in

@@ -48,8 +48,12 @@ ifeq ("@BUILD_LVMDBUSD@", "yes")
SUBDIRS += lvmdbusd
endif
ifeq ("@BUILD_DMFILEMAPD@", "yes")
SUBDIRS += dmfilemapd
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd
SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd dmfilemapd
endif
include $(top_builddir)/make.tmpl

daemons/dmfilemapd/.gitignore (new file)

@@ -0,0 +1 @@
dmfilemapd

daemons/dmfilemapd/Makefile.in (new file)

@@ -0,0 +1,69 @@
#
# Copyright (C) 2016 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES = dmfilemapd.c
TARGETS = dmfilemapd
.PHONY: install_dmeventd install_dmeventd_static
INSTALL_DMFILEMAPD_TARGETS = install_dmfilemapd_dynamic
CLEAN_TARGETS = dmfilemapd.static
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
CFLOW_TARGET = dmfilemapd
include $(top_builddir)/make.tmpl
all: device-mapper
device-mapper: $(TARGETS)
LIBS += -ldevmapper
LVMLIBS += -ldevmapper-event $(PTHREAD_LIBS)
CFLAGS_dmeventd.o += $(EXTRA_EXEC_CFLAGS)
dmfilemapd: $(LIB_SHARED) dmfilemapd.o
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) -L. -o $@ dmfilemapd.o \
$(DL_LIBS) $(LVMLIBS) $(LIBS) -rdynamic
dmfilemapd.static: $(LIB_STATIC) dmfilemapd.o $(interfacebuilddir)/libdevmapper.a
$(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -static -L. -L$(interfacebuilddir) -o $@ \
dmfilemapd.o $(DL_LIBS) $(LVMLIBS) $(LIBS) $(STATIC_LIBS)
ifneq ("$(CFLOW_CMD)", "")
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
-include $(top_builddir)/libdm/libdevmapper.cflow
-include $(top_builddir)/lib/liblvm-internal.cflow
-include $(top_builddir)/lib/liblvm2cmd.cflow
-include $(top_builddir)/daemons/dmfilemapd/$(LIB_NAME).cflow
endif
install_dmfilemapd_dynamic: dmfilemapd
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
install_dmfilemapd_static: dmfilemapd.static
$(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F)
install_dmfilemapd: $(INSTALL_DMEVENTD_TARGETS)
install: install_dmfilemapd
install_device-mapper: install_dmfilemapd

daemons/dmfilemapd/dmfilemapd.c (new file)

@@ -0,0 +1,764 @@
/*
* Copyright (C) 2016 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* It includes tree drawing code based on pstree: http://psmisc.sourceforge.net/
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "tool.h"
#include "dm-logging.h"
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/inotify.h>
#include <dirent.h>
#include <ctype.h>
#ifdef __linux__
# include "kdev_t.h"
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
# define MKDEV(x,y) makedev((x),(y))
#endif
/* limit to two updates/sec */
#define FILEMAPD_WAIT_USECS 500000
/* how long to wait for unlinked files */
#define FILEMAPD_NOFILE_WAIT_USECS 100000
#define FILEMAPD_NOFILE_WAIT_TRIES 10
struct filemap_monitor {
dm_filemapd_mode_t mode;
/* group_id to update */
uint64_t group_id;
char *path;
int inotify_fd;
int inotify_watch_fd;
/* file to monitor */
int fd;
/* monitoring heuristics */
int64_t blocks; /* allocated blocks, from stat.st_blocks */
int64_t nr_regions;
int deleted;
};
static int _foreground;
static int _verbose;
const char *const _usage = "dmfilemapd <fd> <group_id> <path> <mode> "
"[<foreground>[<log_level>]]";
/*
* Daemon logging. By default, all messages are thrown away: messages
* are only written to the terminal if the daemon is run in the foreground.
*/
__attribute__((format(printf, 5, 0)))
static void _dmfilemapd_log_line(int level,
const char *file __attribute__((unused)),
int line __attribute__((unused)),
int dm_errno_or_class,
const char *f, va_list ap)
{
static int _abort_on_internal_errors = -1;
FILE *out = log_stderr(level) ? stderr : stdout;
level = log_level(level);
if (level <= _LOG_WARN || _verbose) {
if (level < _LOG_WARN)
out = stderr;
vfprintf(out, f, ap);
fputc('\n', out);
}
if (_abort_on_internal_errors < 0)
/* Set when env DM_ABORT_ON_INTERNAL_ERRORS is not "0" */
_abort_on_internal_errors =
strcmp(getenv("DM_ABORT_ON_INTERNAL_ERRORS") ? : "0", "0");
if (_abort_on_internal_errors &&
!strncmp(f, INTERNAL_ERROR, sizeof(INTERNAL_ERROR) - 1))
abort();
}
__attribute__((format(printf, 5, 6)))
static void _dmfilemapd_log_with_errno(int level,
const char *file, int line,
int dm_errno_or_class,
const char *f, ...)
{
va_list ap;
va_start(ap, f);
_dmfilemapd_log_line(level, file, line, dm_errno_or_class, f, ap);
va_end(ap);
}
/*
* Only used for reporting errors before daemonise().
*/
__attribute__((format(printf, 1, 2)))
static void _early_log(const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
fputc('\n', stderr);
va_end(ap);
}
static void _setup_logging(void)
{
dm_log_init_verbose(_verbose - 1);
dm_log_with_errno_init(_dmfilemapd_log_with_errno);
}
#define PROC_FD_DELETED_STR " (deleted)"
/*
* Scan the /proc/<pid>/fd directory for pid and check for an fd
* symlink whose contents match path.
*/
static int _is_open_in_pid(pid_t pid, const char *path)
{
char deleted_path[PATH_MAX + sizeof(PROC_FD_DELETED_STR)];
struct dirent *pid_dp = NULL;
char path_buf[PATH_MAX];
char link_buf[PATH_MAX];
DIR *pid_d = NULL;
ssize_t len;
if (pid == getpid())
return 0;
if (dm_snprintf(path_buf, sizeof(path_buf), "/proc/%d/fd", pid) < 0) {
log_error("Could not format pid path.");
goto bad;
}
/*
* Test for the kernel 'file (deleted)' form when scanning.
*/
if (dm_snprintf(deleted_path, sizeof(deleted_path), "%s%s",
path, PROC_FD_DELETED_STR) < 0) {
log_error("Could not format check path.");
}
pid_d = opendir(path_buf);
if (!pid_d) {
log_error("Could not open proc path: %s.", path_buf);
goto bad;
}
while ((pid_dp = readdir(pid_d)) != NULL) {
if (pid_dp->d_name[0] == '.')
continue;
if ((len = readlinkat(dirfd(pid_d), pid_dp->d_name, link_buf,
sizeof(link_buf))) < 0) {
log_error("readlink failed for /proc/%d/fd/.", pid);
goto bad;
}
link_buf[len] = '\0';
if (!strcmp(deleted_path, link_buf)) {
closedir(pid_d);
return 1;
}
}
bad:
closedir(pid_d);
return 0;
}
/*
* Attempt to detect whether a file is open by any process by
* scanning symbolic links in /proc/<pid>/fd.
*
* This is a heuristic since it cannot guarantee to detect brief
* access in all cases: a process that opens and then closes the
* file rapidly may never be seen by the scan.
*
* The method will also give false-positives if a process exists
* that has a deleted file open that had the same path, but a
* different inode number, to the file being monitored.
*
* For this reason the daemon only uses _is_open() for unlinked
* files when the mode is DM_FILEMAPD_FOLLOW_INODE, since these
* files can no longer be newly opened by processes.
*
* In this situation !is_open(path) provides an indication that
* the daemon should shut down: the file has been unlinked from
* the file system and we appear to hold the final reference.
*/
static int _is_open(const char *path)
{
struct dirent *proc_dp = NULL;
DIR *proc_d = NULL;
pid_t pid;
proc_d = opendir("/proc");
if (!proc_d)
return 0;
while ((proc_dp = readdir(proc_d)) != NULL) {
if (!isdigit(proc_dp->d_name[0]))
continue;
pid = strtol(proc_dp->d_name, NULL, 10);
if (!pid)
continue;
if (_is_open_in_pid(pid, path)) {
closedir(proc_d);
return 1;
}
}
closedir(proc_d);
return 0;
}
static void _filemap_monitor_wait(uint64_t usecs)
{
if (_verbose) {
if (usecs == FILEMAPD_WAIT_USECS)
log_very_verbose("waiting for FILEMAPD_WAIT");
if (usecs == FILEMAPD_NOFILE_WAIT_USECS)
log_very_verbose("waiting for FILEMAPD_NOFILE_WAIT");
}
usleep((useconds_t) usecs);
}
static int _parse_args(int argc, char **argv, struct filemap_monitor *fm)
{
char *endptr;
/* we don't care what is in argv[0]. */
argc--;
argv++;
if (argc < 5) {
_early_log("Wrong number of arguments.");
_early_log("usage: %s", _usage);
return 1;
}
memset(fm, 0, sizeof(*fm));
/*
* We don't know the true nr_regions at daemon start time,
* and it is not worth a dm_stats_list()/group walk to count:
* we can assume that there is at least one region or the
* daemon would not have been started.
*
* A correct value will be obtained following the first update
* of the group's regions.
*/
fm->nr_regions = 1;
/* parse <fd> */
fm->fd = strtol(argv[0], &endptr, 10);
if (*endptr) {
_early_log("Could not parse file descriptor: %s", argv[0]);
return 0;
}
argc--;
argv++;
/* parse <group_id> */
fm->group_id = strtoull(argv[0], &endptr, 10);
if (*endptr) {
_early_log("Could not parse group identifier: %s", argv[0]);
return 0;
}
argc--;
argv++;
/* parse <path> */
if (!argv[0] || !strlen(argv[0])) {
_early_log("Path argument is required.");
return 0;
}
fm->path = dm_strdup(argv[0]);
if (!fm->path) {
_early_log("Could not allocate memory for path argument.");
return 0;
}
argc--;
argv++;
/* parse <mode> */
if (!argv[0] || !strlen(argv[0])) {
_early_log("Mode argument is required.");
return 0;
}
fm->mode = dm_filemapd_mode_from_string(argv[0]);
if (fm->mode == DM_FILEMAPD_FOLLOW_NONE)
return 0;
argc--;
argv++;
/* parse [<foreground>[<verbose>]] */
if (argc) {
_foreground = strtol(argv[0], &endptr, 10);
if (*endptr) {
_early_log("Could not parse debug argument: %s.",
argv[0]);
return 0;
}
argc--;
argv++;
if (argc) {
_verbose = strtol(argv[0], &endptr, 10);
if (*endptr) {
_early_log("Could not parse verbose "
"argument: %s", argv[0]);
return 0;
}
if (_verbose < 0 || _verbose > 3) {
_early_log("Verbose argument out of range: %d.",
_verbose);
return 0;
}
}
}
return 1;
}
static int _filemap_fd_check_changed(struct filemap_monitor *fm)
{
int64_t blocks, old_blocks;
struct stat buf;
if (fm->fd < 0) {
log_error("Filemap fd is not open.");
return -1;
}
if (fstat(fm->fd, &buf)) {
log_error("Failed to fstat filemap file descriptor.");
return -1;
}
blocks = buf.st_blocks;
/* first check? */
if (fm->blocks < 0)
old_blocks = buf.st_blocks;
else
old_blocks = fm->blocks;
fm->blocks = blocks;
return (fm->blocks != old_blocks);
}
static void _filemap_monitor_end_notify(struct filemap_monitor *fm)
{
inotify_rm_watch(fm->inotify_fd, fm->inotify_watch_fd);
if (close(fm->inotify_fd))
log_error("Error closing inotify fd.");
}
static int _filemap_monitor_set_notify(struct filemap_monitor *fm)
{
int inotify_fd, watch_fd;
/*
* Set IN_NONBLOCK since we do not want to block in event read()
* calls. Do not set IN_CLOEXEC as dmfilemapd is single-threaded
* and does not fork or exec.
*/
if ((inotify_fd = inotify_init1(IN_NONBLOCK)) < 0) {
_early_log("Failed to initialise inotify.");
return 0;
}
if ((watch_fd = inotify_add_watch(inotify_fd, fm->path,
IN_MODIFY | IN_DELETE_SELF)) < 0) {
_early_log("Failed to add inotify watch.");
return 0;
}
fm->inotify_fd = inotify_fd;
fm->inotify_watch_fd = watch_fd;
return 1;
}
static void _filemap_monitor_close_fd(struct filemap_monitor *fm)
{
if (close(fm->fd))
log_error("Error closing file descriptor.");
fm->fd = -1;
}
static int _filemap_monitor_reopen_fd(struct filemap_monitor *fm)
{
int tries = FILEMAPD_NOFILE_WAIT_TRIES;
/*
* In DM_FILEMAPD_FOLLOW_PATH mode, inotify watches must be
* re-established whenever the file at the watched path is
* changed.
*
* FIXME: stat file and skip if inode is unchanged.
*/
_filemap_monitor_end_notify(fm);
if (fm->fd > 0)
log_error("Filemap file descriptor already open.");
while ((fm->fd < 0) && --tries)
if (((fm->fd = open(fm->path, O_RDONLY)) < 0) && tries)
_filemap_monitor_wait(FILEMAPD_NOFILE_WAIT_USECS);
if (!tries && (fm->fd < 0)) {
log_error("Could not re-open file descriptor.");
return 0;
}
return _filemap_monitor_set_notify(fm);
}
static int _filemap_monitor_get_events(struct filemap_monitor *fm)
{
/* alignment as per inotify(7) */
char buf[sizeof(struct inotify_event) + NAME_MAX + 1]
__attribute__ ((aligned(__alignof__(struct inotify_event))));
struct inotify_event *event;
int check = 0;
ssize_t len;
char *ptr;
if (fm->mode == DM_FILEMAPD_FOLLOW_PATH)
_filemap_monitor_close_fd(fm);
len = read(fm->inotify_fd, (void *) &buf, sizeof(buf));
/* no events to read? */
if (len < 0 && (errno == EAGAIN))
goto out;
/* interrupted by signal? */
if (len < 0 && (errno == EINTR))
goto out;
if (len < 0)
return -1;
if (!len)
goto out;
for (ptr = buf; ptr < buf + len; ptr += sizeof(*event) + event->len) {
event = (struct inotify_event *) ptr;
if (event->mask & IN_DELETE_SELF)
fm->deleted = 1;
if (event->mask & IN_MODIFY)
check = 1;
/*
* Event IN_IGNORED is generated when a file has been deleted
* and IN_DELETE_SELF generated, and indicates that the file
* watch has been automatically removed.
*
* This can only happen for the DM_FILEMAPD_FOLLOW_PATH mode,
* since inotify IN_DELETE events are generated at the time
* the inode is destroyed: DM_FILEMAPD_FOLLOW_INODE will hold
* the file descriptor open, meaning that the event will not
* be generated until after the daemon closes the file.
*
* The event is ignored here since inotify monitoring will
* be reestablished (or the daemon will terminate) following
* deletion of a DM_FILEMAPD_FOLLOW_PATH monitored file.
*/
if (event->mask & IN_IGNORED)
log_very_verbose("Inotify watch removed: IN_IGNORED "
"in event->mask");
}
out:
/*
* Re-open file descriptor if required and log disposition.
*/
if (fm->mode == DM_FILEMAPD_FOLLOW_PATH)
if (!_filemap_monitor_reopen_fd(fm))
return -1;
log_very_verbose("exiting _filemap_monitor_get_events() with "
"deleted=%d, check=%d", fm->deleted, check);
return check;
}
static void _filemap_monitor_destroy(struct filemap_monitor *fm)
{
if (fm->fd > 0) {
_filemap_monitor_end_notify(fm);
if (close(fm->fd))
log_error("Error closing fd %d.", fm->fd);
}
}
static int _filemap_monitor_check_same_file(int fd1, int fd2)
{
struct stat buf1, buf2;
if ((fd1 < 0) || (fd2 < 0))
return 0;
if (fstat(fd1, &buf1)) {
log_error("Failed to fstat file descriptor %d", fd1);
return -1;
}
if (fstat(fd2, &buf2)) {
log_error("Failed to fstat file descriptor %d", fd2);
return -1;
}
return ((buf1.st_dev == buf2.st_dev) && (buf1.st_ino == buf2.st_ino));
}
static int _filemap_monitor_check_file_unlinked(struct filemap_monitor *fm)
{
char path_buf[PATH_MAX];
char link_buf[PATH_MAX];
int same, fd, len;
fm->deleted = 0;
if ((fd = open(fm->path, O_RDONLY)) < 0)
goto check_unlinked;
if ((same = _filemap_monitor_check_same_file(fm->fd, fd)) < 0)
return 0;
if (close(fd))
log_error("Error closing fd %d", fd);
if (same)
return 1;
check_unlinked:
/*
* The file has been unlinked from its original location: test
* whether it is still reachable in the filesystem, or if it is
* unlinked and anonymous.
*/
if (dm_snprintf(path_buf, sizeof(path_buf),
"/proc/%d/fd/%d", getpid(), fm->fd) < 0) {
log_error("Could not format pid path.");
return 0;
}
if ((len = readlink(path_buf, link_buf, sizeof(link_buf))) < 0) {
log_error("readlink failed for /proc/%d/fd/%d.",
getpid(), fm->fd);
return 0;
}
/*
* Try to re-open the file, from the path now reported in /proc/pid/fd.
*/
if ((fd = open(link_buf, O_RDONLY)) < 0)
fm->deleted = 1;
if ((same = _filemap_monitor_check_same_file(fm->fd, fd)) < 0)
return 0;
if ((fd > 0) && close(fd))
log_error("Error closing fd %d", fd);
/* Should not happen with normal /proc. */
if ((fd > 0) && !same) {
log_error("File descriptor mismatch: %d and %s (read from %s) "
"are not the same file!", fm->fd, link_buf, path_buf);
return 0;
}
return 1;
}
static int _daemonise(struct filemap_monitor *fm)
{
pid_t pid = 0, sid;
int fd;
if (!(sid = setsid())) {
_early_log("setsid failed.");
return 0;
}
if ((pid = fork()) < 0) {
_early_log("Failed to fork daemon process.");
return 0;
}
if (pid > 0) {
if (_verbose)
_early_log("Started dmfilemapd with pid=%d", pid);
exit(0);
}
if (chdir("/")) {
_early_log("Failed to change directory.");
return 0;
}
if (!_verbose) {
if (close(STDIN_FILENO))
_early_log("Error closing stdin");
if (close(STDOUT_FILENO))
_early_log("Error closing stdout");
if (close(STDERR_FILENO))
_early_log("Error closing stderr");
if ((open("/dev/null", O_RDONLY) < 0) ||
(open("/dev/null", O_WRONLY) < 0) ||
(open("/dev/null", O_WRONLY) < 0)) {
_early_log("Error opening stdio streams.");
return 0;
}
}
for (fd = sysconf(_SC_OPEN_MAX) - 1; fd > STDERR_FILENO; fd--) {
if (fd == fm->fd)
continue;
close(fd);
}
return 1;
}
static int _update_regions(struct dm_stats *dms, struct filemap_monitor *fm)
{
uint64_t *regions = NULL, *region, nr_regions = 0;
regions = dm_stats_update_regions_from_fd(dms, fm->fd, fm->group_id);
if (!regions) {
log_error("Failed to update filemap regions for group_id="
FMTu64 ".", fm->group_id);
return 0;
}
for (region = regions; *region != DM_STATS_REGIONS_ALL; region++)
nr_regions++;
if (regions[0] != fm->group_id) {
log_warn("group_id changed from " FMTu64 " to " FMTu64,
fm->group_id, regions[0]);
fm->group_id = regions[0];
}
fm->nr_regions = nr_regions;
return 1;
}
static int _dmfilemapd(struct filemap_monitor *fm)
{
int running = 1, check = 0, open = 0;
struct dm_stats *dms;
dms = dm_stats_create("dmstats"); /* FIXME */
if (!dm_stats_bind_from_fd(dms, fm->fd)) {
log_error("Could not bind dm_stats handle to file descriptor "
"%d", fm->fd);
goto bad;
}
if (!_filemap_monitor_set_notify(fm))
goto bad;
do {
if (!dm_stats_list(dms, NULL)) {
log_error("Failed to list stats handle.");
goto bad;
}
if (!dm_stats_group_present(dms, fm->group_id)) {
log_info("Filemap group removed: exiting.");
running = 0;
continue;
}
if ((check = _filemap_monitor_get_events(fm)) < 0)
goto bad;
if (!check)
goto wait;
if ((check = _filemap_fd_check_changed(fm)) < 0)
goto bad;
if (!check)
goto wait;
if (!_update_regions(dms, fm))
goto bad;
wait:
_filemap_monitor_wait(FILEMAPD_WAIT_USECS);
running = !!fm->nr_regions;
/* mode=inode termination conditions */
if (fm->mode == DM_FILEMAPD_FOLLOW_INODE) {
if (!_filemap_monitor_check_file_unlinked(fm))
goto bad;
if (fm->deleted && !(open = _is_open(fm->path))) {
log_info("File unlinked and closed: exiting.");
running = 0;
} else if (fm->deleted && open)
log_verbose("File unlinked and open: "
"continuing.");
}
} while (running);
_filemap_monitor_destroy(fm);
dm_stats_destroy(dms);
return 0;
bad:
_filemap_monitor_destroy(fm);
dm_stats_destroy(dms);
log_error("Exiting");
return 1;
}
static const char * _mode_names[] = {
"inode",
"path"
};
/*
* dmfilemapd <fd> <group_id> <path> <mode> [<foreground>[<log_level>]]
*/
int main(int argc, char **argv)
{
struct filemap_monitor fm;
if (!_parse_args(argc, argv, &fm))
return 1;
_setup_logging();
log_info("Starting dmfilemapd with fd=%d, group_id=" FMTu64 " "
"mode=%s, path=%s", fm.fd, fm.group_id,
_mode_names[fm.mode], fm.path);
if (!_foreground && !_daemonise(&fm))
return 1;
return _dmfilemapd(&fm);
}
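
A minimal sketch of driving that argument contract from another program
follows. It is not part of the tree: the helper name spawn_dmfilemapd()
and the use of fork()/execlp() are illustrative assumptions; only the
argument order and encoding come from _parse_args() and the usage
comment above.

  #include <stdio.h>
  #include <inttypes.h>
  #include <sys/types.h>
  #include <unistd.h>

  /*
   * Hypothetical launcher (illustration only): start dmfilemapd with the
   * positional arguments that _parse_args() expects:
   *   <fd> <group_id> <path> <mode> [<foreground> [<verbose>]]
   */
  static int spawn_dmfilemapd(int fd, uint64_t group_id, const char *path,
                              const char *mode)
  {
          char fd_str[16], group_str[32];
          pid_t pid;

          /* All numeric arguments are passed as plain decimal strings. */
          snprintf(fd_str, sizeof(fd_str), "%d", fd);
          snprintf(group_str, sizeof(group_str), "%" PRIu64, group_id);

          if ((pid = fork()) < 0)
                  return 0;

          if (!pid) {
                  /* Child: fd stays open across exec and is named by number. */
                  execlp("dmfilemapd", "dmfilemapd", fd_str, group_str, path,
                         mode, "0" /* foreground: 0 => daemonise */,
                         "0" /* verbose */, (char *) NULL);
                  _exit(127);     /* exec failed */
          }

          return 1;
  }

The daemon validates each value with the strtol()/strtoull() endptr checks
shown above, so anything with trailing characters is rejected, and <mode>
must be one of the _mode_names strings ("inode" or "path").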


@@ -127,6 +127,9 @@
/* Path to dmeventd pidfile. */
#undef DMEVENTD_PIDFILE
/* Define to 1 to enable the device-mapper filemap daemon. */
#undef DMFILEMAPD
/* Define to enable compat protocol */
#undef DM_COMPAT


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -272,10 +272,18 @@ int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return 0;
}
int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset)
{
return 0;
}
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
{
return 0;
}
int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt)
{
return 0;
}
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt)
{
return 0;
@@ -984,6 +992,30 @@ int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
return lv_mirror_percent(lv->vg->cmd, lv, 0, percent, NULL);
}
int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset)
{
int r;
struct dev_manager *dm;
struct dm_status_raid *status;
if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;
log_debug_activation("Checking raid data offset and dev sectors for LV %s/%s",
lv->vg->name, lv->name);
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;
if (!(r = dev_manager_raid_status(dm, lv, &status)))
stack;
*data_offset = status->data_offset;
dev_manager_destroy(dm);
return r;
}
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
{
int r;
@@ -1013,6 +1045,32 @@ int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health)
return r;
}
int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt)
{
struct dev_manager *dm;
struct dm_status_raid *status;
*dev_cnt = 0;
if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;
log_debug_activation("Checking raid device count for LV %s/%s",
lv->vg->name, lv->name);
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;
if (!dev_manager_raid_status(dm, lv, &status)) {
dev_manager_destroy(dm);
return_0;
}
*dev_cnt = status->dev_count;
dev_manager_destroy(dm);
return 1;
}
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt)
{
struct dev_manager *dm;


@@ -168,6 +168,8 @@ int lv_snapshot_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
int wait, dm_percent_t *percent, uint32_t *event_nr);
int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_raid_dev_count(const struct logical_volume *lv, uint32_t *dev_cnt);
int lv_raid_data_offset(const struct logical_volume *lv, uint64_t *data_offset);
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health);
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt);
int lv_raid_sync_action(const struct logical_volume *lv, char **sync_action);


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -214,6 +214,14 @@ typedef enum {
STATUS, /* DM_DEVICE_STATUS ioctl */
} info_type_t;
/* Return length of segment depending on type and reshape_len */
static uint32_t _seg_len(const struct lv_segment *seg)
{
uint32_t reshape_len = seg_is_raid(seg) ? ((seg->area_count - seg->segtype->parity_devs) * seg->reshape_len) : 0;
return seg->len - reshape_len;
}
static int _info_run(const char *dlid, struct dm_info *dminfo,
uint32_t *read_ahead,
struct lv_seg_status *seg_status,
@@ -250,7 +258,7 @@ static int _info_run(const char *dlid, struct dm_info *dminfo,
if (seg_status && dminfo->exists) {
start = length = seg_status->seg->lv->vg->extent_size;
start *= seg_status->seg->le;
length *= seg_status->seg->len;
length *= _seg_len(seg_status->seg);
do {
target = dm_get_next_target(dmt, target, &target_start,
@@ -1308,14 +1316,13 @@ int dev_manager_raid_message(struct dev_manager *dm,
return 0;
}
/* These are the supported RAID messages for dm-raid v1.5.0 */
/* These are the supported RAID messages for dm-raid v1.9.0 */
if (strcmp(msg, "idle") &&
strcmp(msg, "frozen") &&
strcmp(msg, "resync") &&
strcmp(msg, "recover") &&
strcmp(msg, "check") &&
strcmp(msg, "repair") &&
strcmp(msg, "reshape")) {
strcmp(msg, "repair")) {
log_error(INTERNAL_ERROR "Unknown RAID message: %s.", msg);
return 0;
}
@@ -2214,7 +2221,7 @@ static char *_add_error_or_zero_device(struct dev_manager *dm, struct dm_tree *d
struct lv_segment *seg_i;
struct dm_info info;
int segno = -1, i = 0;
uint64_t size = (uint64_t) seg->len * seg->lv->vg->extent_size;
uint64_t size = (uint64_t) _seg_len(seg) * seg->lv->vg->extent_size;
dm_list_iterate_items(seg_i, &seg->lv->segments) {
if (seg == seg_i) {
@@ -2500,7 +2507,7 @@ static int _add_target_to_dtree(struct dev_manager *dm,
return seg->segtype->ops->add_target_line(dm, dm->mem, dm->cmd,
&dm->target_state, seg,
laopts, dnode,
extent_size * seg->len,
extent_size * _seg_len(seg),
&dm->pvmove_mirror_count);
}
@@ -2693,7 +2700,7 @@ static int _add_segment_to_dtree(struct dev_manager *dm,
/* Replace target and all its used devs with error mapping */
log_debug_activation("Using error for pending delete %s.",
display_lvname(seg->lv));
if (!dm_tree_node_add_error_target(dnode, (uint64_t)seg->lv->vg->extent_size * seg->len))
if (!dm_tree_node_add_error_target(dnode, (uint64_t)seg->lv->vg->extent_size * _seg_len(seg)))
return_0;
} else if (!_add_target_to_dtree(dm, dnode, seg, laopts))
return_0;
@@ -3165,7 +3172,6 @@ static int _tree_action(struct dev_manager *dm, const struct logical_volume *lv,
log_error(INTERNAL_ERROR "_tree_action: Action %u not supported.", action);
goto out;
}
r = 1;
out:


@@ -71,7 +71,7 @@
* FIXME: Increase these to 64 and further to the MD maximum
* once the SubLVs split and name shift got enhanced
*/
#define DEFAULT_RAID1_MAX_IMAGES 10
#define DEFAULT_RAID1_MAX_IMAGES 64
#define DEFAULT_RAID_MAX_IMAGES 64
#define DEFAULT_ALLOCATION_STRIPE_ALL_DEVICES 0 /* Don't stripe across all devices if not -i/--stripes given */


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -225,8 +225,8 @@ static int _read_linear(struct cmd_context *cmd, struct lv_map *lvm)
while (le < lvm->lv->le_count) {
len = _area_length(lvm, le);
if (!(seg = alloc_lv_segment(segtype, lvm->lv, le, len, 0, 0,
NULL, 1, len, 0, 0, 0, NULL))) {
if (!(seg = alloc_lv_segment(segtype, lvm->lv, le, len, 0, 0, 0,
NULL, 1, len, 0, 0, 0, 0, NULL))) {
log_error("Failed to allocate linear segment.");
return 0;
}
@@ -297,10 +297,10 @@ static int _read_stripes(struct cmd_context *cmd, struct lv_map *lvm)
if (!(seg = alloc_lv_segment(segtype, lvm->lv,
lvm->stripes * first_area_le,
lvm->stripes * area_len,
lvm->stripes * area_len, 0,
0, lvm->stripe_size, NULL,
lvm->stripes,
area_len, 0, 0, 0, NULL))) {
area_len, 0, 0, 0, 0, NULL))) {
log_error("Failed to allocate striped segment.");
return 0;
}


@@ -1,6 +1,6 @@
/*
* Copyright (C) 1997-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -192,9 +192,9 @@ static int _add_stripe_seg(struct dm_pool *mem,
return_0;
if (!(seg = alloc_lv_segment(segtype, lv, *le_cur,
area_len * usp->num_devs, 0,
area_len * usp->num_devs, 0, 0,
usp->striping, NULL, usp->num_devs,
area_len, 0, 0, 0, NULL))) {
area_len, 0, 0, 0, 0, NULL))) {
log_error("Unable to allocate striped lv_segment structure");
return 0;
}
@@ -232,8 +232,8 @@ static int _add_linear_seg(struct dm_pool *mem,
area_len = (usp->devs[j].blocks) / POOL_PE_SIZE;
if (!(seg = alloc_lv_segment(segtype, lv, *le_cur,
area_len, 0, usp->striping,
NULL, 1, area_len,
area_len, 0, 0, usp->striping,
NULL, 1, area_len, 0,
POOL_PE_SIZE, 0, 0, NULL))) {
log_error("Unable to allocate linear lv_segment "
"structure");


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2009 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -583,8 +583,10 @@ static int _print_segment(struct formatter *f, struct volume_group *vg,
outf(f, "start_extent = %u", seg->le);
outsize(f, (uint64_t) seg->len * vg->extent_size,
"extent_count = %u", seg->len);
outnl(f);
if (seg->reshape_len)
outsize(f, (uint64_t) seg->reshape_len * vg->extent_size,
"reshape_count = %u", seg->reshape_len);
outf(f, "type = \"%s\"", seg->segtype->name);
if (!_out_list(f, &seg->tags, "tags"))


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2013 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -61,6 +61,9 @@ static const struct flag _lv_flags[] = {
{LOCKED, "LOCKED", STATUS_FLAG},
{LV_NOTSYNCED, "NOTSYNCED", STATUS_FLAG},
{LV_REBUILD, "REBUILD", STATUS_FLAG},
{LV_RESHAPE_DELTA_DISKS_PLUS, "RESHAPE_DELTA_DISKS_PLUS", STATUS_FLAG},
{LV_RESHAPE_DELTA_DISKS_MINUS, "RESHAPE_DELTA_DISKS_MINUS", STATUS_FLAG},
{LV_REMOVE_AFTER_RESHAPE, "REMOVE_AFTER_RESHAPE", STATUS_FLAG},
{LV_WRITEMOSTLY, "WRITEMOSTLY", STATUS_FLAG},
{LV_ACTIVATION_SKIP, "ACTIVATION_SKIP", COMPATIBLE_FLAG},
{LV_ERROR_WHEN_FULL, "ERROR_WHEN_FULL", COMPATIBLE_FLAG},


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -354,7 +354,7 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
struct lv_segment *seg;
const struct dm_config_node *sn_child = sn->child;
const struct dm_config_value *cv;
uint32_t start_extent, extent_count;
uint32_t area_extents, start_extent, extent_count, reshape_count, data_copies;
struct segment_type *segtype;
const char *segtype_str;
@@ -375,6 +375,12 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
return 0;
}
if (!_read_int32(sn_child, "reshape_count", &reshape_count))
reshape_count = 0;
if (!_read_int32(sn_child, "data_copies", &data_copies))
data_copies = 1;
segtype_str = SEG_TYPE_NAME_STRIPED;
if (!dm_config_get_str(sn_child, "type", &segtype_str)) {
@@ -389,9 +395,11 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
!segtype->ops->text_import_area_count(sn_child, &area_count))
return_0;
area_extents = segtype->parity_devs ?
raid_rimage_extents(segtype, extent_count, area_count - segtype->parity_devs, data_copies) : extent_count;
if (!(seg = alloc_lv_segment(segtype, lv, start_extent,
extent_count, 0, 0, NULL, area_count,
extent_count, 0, 0, 0, NULL))) {
extent_count, reshape_count, 0, 0, NULL, area_count,
area_extents, data_copies, 0, 0, 0, NULL))) {
log_error("Segment allocation failed");
return 0;
}


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -1104,6 +1104,19 @@ int lv_raid_healthy(const struct logical_volume *lv)
return 1;
}
/* Helper: check for any sub LVs to be removed after a disk-removing reshape */
static int _sublvs_remove_after_reshape(const struct logical_volume *lv)
{
uint32_t s;
struct lv_segment *seg = first_seg(lv);
for (s = seg->area_count -1; s; s--)
if (seg_lv(seg, s)->status & LV_REMOVE_AFTER_RESHAPE)
return 1;
return 0;
}
char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_with_info_and_seg_status *lvdm)
{
const struct logical_volume *lv = lvdm->lv;
@@ -1269,6 +1282,8 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
repstr[8] = 'p';
else if (lv_is_raid_type(lv)) {
uint64_t n;
char *sync_action;
if (!activation())
repstr[8] = 'X'; /* Unknown */
else if (!lv_raid_healthy(lv))
@@ -1276,8 +1291,17 @@ char *lv_attr_dup_with_info_and_seg_status(struct dm_pool *mem, const struct lv_
else if (lv_is_raid(lv)) {
if (lv_raid_mismatch_count(lv, &n) && n)
repstr[8] = 'm'; /* RAID has 'm'ismatches */
else if (lv_raid_sync_action(lv, &sync_action) &&
!strcmp(sync_action, "reshape"))
repstr[8] = 's'; /* LV is re(s)haping */
else if (_sublvs_remove_after_reshape(lv))
repstr[8] = 'R'; /* sub-LV got freed from raid set by reshaping
and has to be 'R'emoved */
} else if (lv->status & LV_WRITEMOSTLY)
repstr[8] = 'w'; /* sub-LV has 'w'ritemostly */
else if (lv->status & LV_REMOVE_AFTER_RESHAPE)
repstr[8] = 'R'; /* sub-LV got freed from raid set by reshaping
and has to be 'R'emoved */
} else if (lvdm->seg_status.type == SEG_STATUS_CACHE) {
if (lvdm->seg_status.cache->fail)
repstr[8] = 'F';


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2003-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -21,11 +21,13 @@
struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
struct logical_volume *lv,
uint32_t le, uint32_t len,
uint32_t reshape_len,
uint64_t status,
uint32_t stripe_size,
struct logical_volume *log_lv,
uint32_t area_count,
uint32_t area_len,
uint32_t data_copies,
uint32_t chunk_size,
uint32_t region_size,
uint32_t extents_copied,


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2014 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -912,11 +912,13 @@ static uint32_t _round_to_stripe_boundary(struct volume_group *vg, uint32_t exte
struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
struct logical_volume *lv,
uint32_t le, uint32_t len,
uint32_t reshape_len,
uint64_t status,
uint32_t stripe_size,
struct logical_volume *log_lv,
uint32_t area_count,
uint32_t area_len,
uint32_t data_copies,
uint32_t chunk_size,
uint32_t region_size,
uint32_t extents_copied,
@@ -950,10 +952,12 @@ struct lv_segment *alloc_lv_segment(const struct segment_type *segtype,
seg->lv = lv;
seg->le = le;
seg->len = len;
seg->reshape_len = reshape_len;
seg->status = status;
seg->stripe_size = stripe_size;
seg->area_count = area_count;
seg->area_len = area_len;
seg->data_copies = data_copies ? : lv_raid_data_copies(segtype, area_count);
seg->chunk_size = chunk_size;
seg->region_size = region_size;
seg->extents_copied = extents_copied;
@@ -1047,11 +1051,10 @@ static int _release_and_discard_lv_segment_area(struct lv_segment *seg, uint32_t
if (lv_is_raid_image(lv)) {
/* Calculate the amount of extents to reduce per rmeta/rimage LV */
uint32_t rimage_extents;
struct lv_segment *seg1 = first_seg(lv);
/* FIXME: avoid extra seg_is_*() conditonals */
area_reduction =_round_to_stripe_boundary(lv->vg, area_reduction,
(seg_is_raid1(seg) || seg_is_any_raid0(seg)) ? 0 : _raid_stripes_count(seg), 0);
rimage_extents = raid_rimage_extents(seg->segtype, area_reduction, seg_is_any_raid0(seg) ? 0 : _raid_stripes_count(seg),
/* FIXME: avoid extra seg_is_*() conditionals here */
rimage_extents = raid_rimage_extents(seg1->segtype, area_reduction, seg_is_any_raid0(seg) ? 0 : _raid_stripes_count(seg),
seg_is_raid10(seg) ? 1 :_raid_data_copies(seg));
if (!rimage_extents)
return 0;
@@ -1258,7 +1261,7 @@ static uint32_t _calc_area_multiple(const struct segment_type *segtype,
* the 'stripes' argument will always need to
* be given.
*/
if (!strcmp(segtype->name, _lv_type_names[LV_TYPE_RAID10])) {
if (segtype_is_raid10(segtype)) {
if (!stripes)
return area_count / 2;
return stripes;
@@ -1278,16 +1281,17 @@ static uint32_t _calc_area_multiple(const struct segment_type *segtype,
static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
{
uint32_t area_reduction, s;
uint32_t areas = (seg->area_count / (seg_is_raid10(seg) ? seg->data_copies : 1)) - seg->segtype->parity_devs;
/* Caller must ensure exact divisibility */
if (seg_is_striped(seg)) {
if (reduction % seg->area_count) {
if (seg_is_striped(seg) || seg_is_striped_raid(seg)) {
if (reduction % areas) {
log_error("Segment extent reduction %" PRIu32
" not divisible by #stripes %" PRIu32,
reduction, seg->area_count);
return 0;
}
area_reduction = (reduction / seg->area_count);
area_reduction = reduction / areas;
} else
area_reduction = reduction;
@@ -1296,7 +1300,11 @@ static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
return_0;
seg->len -= reduction;
seg->area_len -= area_reduction;
if (seg_is_raid(seg))
seg->area_len = seg->len;
else
seg->area_len -= area_reduction;
return 1;
}
@@ -1306,11 +1314,13 @@ static int _lv_segment_reduce(struct lv_segment *seg, uint32_t reduction)
*/
static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
{
struct lv_segment *seg;
struct lv_segment *seg = first_seg(lv);
uint32_t count = extents;
uint32_t reduction;
struct logical_volume *pool_lv;
struct logical_volume *external_lv = NULL;
int is_raid10 = seg_is_any_raid10(seg) && seg->reshape_len;
uint32_t data_copies = seg->data_copies;
if (lv_is_merging_origin(lv)) {
log_debug_metadata("Dropping snapshot merge of %s to removed origin %s.",
@@ -1373,8 +1383,18 @@ static int _lv_reduce(struct logical_volume *lv, uint32_t extents, int delete)
count -= reduction;
}
lv->le_count -= extents;
seg = first_seg(lv);
if (is_raid10) {
lv->le_count -= extents * data_copies;
if (seg)
seg->len = seg->area_len = lv->le_count;
} else
lv->le_count -= extents;
lv->size = (uint64_t) lv->le_count * lv->vg->extent_size;
if (seg)
seg->extents_copied = seg->len;
if (!delete)
return 1;
@@ -1487,11 +1507,10 @@ int lv_reduce(struct logical_volume *lv, uint32_t extents)
{
struct lv_segment *seg = first_seg(lv);
/* Ensure stipe boundary extents on RAID LVs */
/* Ensure stripe boundary extents on RAID LVs */
if (lv_is_raid(lv) && extents != lv->le_count)
extents =_round_to_stripe_boundary(lv->vg, extents,
seg_is_raid1(seg) ? 0 : _raid_stripes_count(seg), 0);
return _lv_reduce(lv, extents, 1);
}
@@ -1793,10 +1812,10 @@ static int _setup_alloced_segment(struct logical_volume *lv, uint64_t status,
area_multiple = _calc_area_multiple(segtype, area_count, 0);
extents = aa[0].len * area_multiple;
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents,
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents, 0,
status, stripe_size, NULL,
area_count,
aa[0].len, 0u, region_size, 0u, NULL))) {
aa[0].len, 0, 0u, region_size, 0u, NULL))) {
log_error("Couldn't allocate new LV segment.");
return 0;
}
@@ -3234,9 +3253,9 @@ int lv_add_virtual_segment(struct logical_volume *lv, uint64_t status,
seg->area_len += extents;
seg->len += extents;
} else {
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents,
if (!(seg = alloc_lv_segment(segtype, lv, lv->le_count, extents, 0,
status, 0, NULL, 0,
extents, 0, 0, 0, NULL))) {
extents, 0, 0, 0, 0, NULL))) {
log_error("Couldn't allocate new %s segment.", segtype->name);
return 0;
}
@@ -3562,10 +3581,10 @@ static struct lv_segment *_convert_seg_to_mirror(struct lv_segment *seg,
}
if (!(newseg = alloc_lv_segment(get_segtype_from_string(seg->lv->vg->cmd, SEG_TYPE_NAME_MIRROR),
seg->lv, seg->le, seg->len,
seg->lv, seg->le, seg->len, 0,
seg->status, seg->stripe_size,
log_lv,
seg->area_count, seg->area_len,
seg->area_count, seg->area_len, 0,
seg->chunk_size, region_size,
seg->extents_copied, NULL))) {
log_error("Couldn't allocate converted LV segment.");
@@ -3667,8 +3686,8 @@ int lv_add_segmented_mirror_image(struct alloc_handle *ah,
}
if (!(new_seg = alloc_lv_segment(segtype, copy_lv,
seg->le, seg->len, PVMOVE, 0,
NULL, 1, seg->len,
seg->le, seg->len, 0, PVMOVE, 0,
NULL, 1, seg->len, 0,
0, 0, 0, NULL)))
return_0;
@@ -3863,9 +3882,9 @@ static int _lv_insert_empty_sublvs(struct logical_volume *lv,
/*
* First, create our top-level segment for our top-level LV
*/
if (!(mapseg = alloc_lv_segment(segtype, lv, 0, 0, lv->status,
if (!(mapseg = alloc_lv_segment(segtype, lv, 0, 0, 0, lv->status,
stripe_size, NULL,
devices, 0, 0, region_size, 0, NULL))) {
devices, 0, 0, 0, region_size, 0, NULL))) {
log_error("Failed to create mapping segment for %s.",
display_lvname(lv));
return 0;
@@ -3925,7 +3944,7 @@ bad:
static int _lv_extend_layered_lv(struct alloc_handle *ah,
struct logical_volume *lv,
uint32_t extents, uint32_t first_area,
uint32_t stripes, uint32_t stripe_size)
uint32_t mirrors, uint32_t stripes, uint32_t stripe_size)
{
const struct segment_type *segtype;
struct logical_volume *sub_lv, *meta_lv;
@@ -3953,7 +3972,7 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
for (fa = first_area, s = 0; s < seg->area_count; s++) {
if (is_temporary_mirror_layer(seg_lv(seg, s))) {
if (!_lv_extend_layered_lv(ah, seg_lv(seg, s), extents / area_multiple,
fa, stripes, stripe_size))
fa, mirrors, stripes, stripe_size))
return_0;
fa += lv_mirror_count(seg_lv(seg, s));
continue;
@@ -3967,6 +3986,8 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
return 0;
}
last_seg(lv)->data_copies = mirrors;
/* Extend metadata LVs only on initial creation */
if (seg_is_raid_with_meta(seg) && !lv->le_count) {
if (!seg->meta_areas) {
@@ -4063,8 +4084,11 @@ static int _lv_extend_layered_lv(struct alloc_handle *ah,
lv_set_hidden(seg_metalv(seg, s));
}
seg->area_len += extents / area_multiple;
seg->len += extents;
if (seg_is_raid(seg))
seg->area_len = seg->len;
else
seg->area_len += extents / area_multiple;
if (!_setup_lv_size(lv, lv->le_count + extents))
return_0;
@@ -4171,7 +4195,7 @@ int lv_extend(struct logical_volume *lv,
}
if (!(r = _lv_extend_layered_lv(ah, lv, new_extents - lv->le_count, 0,
stripes, stripe_size)))
mirrors, stripes, stripe_size)))
goto_out;
/*
@@ -5391,6 +5415,17 @@ int lv_resize(struct logical_volume *lv,
if (!_lvresize_check(lv, lp))
return_0;
if (seg->reshape_len) {
/* Prevent resizing on out-of-sync reshapable raid */
if (!lv_raid_in_sync(lv)) {
log_error("Can't resize reshaping LV %s.", display_lvname(lv));
return 0;
}
/* Remove any striped raid reshape space for LV resizing */
if (!lv_raid_free_reshape_space(lv))
return_0;
}
if (lp->use_policies) {
lp->extents = 0;
lp->sign = SIGN_PLUS;
@@ -5902,6 +5937,7 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
int ask_discard;
struct lv_list *lvl;
struct seg_list *sl;
struct lv_segment *seg = first_seg(lv);
int is_last_pool = lv_is_pool(lv);
vg = lv->vg;
@@ -6008,6 +6044,13 @@ int lv_remove_single(struct cmd_context *cmd, struct logical_volume *lv,
is_last_pool = 1;
}
/* Special case removing a striped raid LV with allocated reshape space */
if (seg && seg->reshape_len) {
if (!(seg->segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_STRIPED)))
return_0;
lv->le_count = seg->len = seg->area_len = seg_lv(seg, 0)->le_count * seg->area_count;
}
/* Used cache pool, COW or historical LV cannot be activated */
if ((!lv_is_cache_pool(lv) || dm_list_empty(&lv->segs_using_this_lv)) &&
!lv_is_cow(lv) && !lv_is_historical(lv) &&
@@ -6309,7 +6352,6 @@ static int _lv_update_and_reload(struct logical_volume *lv, int origin_only)
log_very_verbose("Updating logical volume %s on disk(s)%s.",
display_lvname(lock_lv), origin_only ? " (origin only)": "");
if (!vg_write(vg))
return_0;
@@ -6776,8 +6818,8 @@ struct logical_volume *insert_layer_for_lv(struct cmd_context *cmd,
return_NULL;
/* allocate a new linear segment */
if (!(mapseg = alloc_lv_segment(segtype, lv_where, 0, layer_lv->le_count,
status, 0, NULL, 1, layer_lv->le_count,
if (!(mapseg = alloc_lv_segment(segtype, lv_where, 0, layer_lv->le_count, 0,
status, 0, NULL, 1, layer_lv->le_count, 0,
0, 0, 0, NULL)))
return_NULL;
@@ -6833,8 +6875,8 @@ static int _extend_layer_lv_for_segment(struct logical_volume *layer_lv,
/* allocate a new segment */
if (!(mapseg = alloc_lv_segment(segtype, layer_lv, layer_lv->le_count,
seg->area_len, status, 0,
NULL, 1, seg->area_len, 0, 0, 0, seg)))
seg->area_len, 0, status, 0,
NULL, 1, seg->area_len, 0, 0, 0, 0, seg)))
return_0;
/* map the new segment to the original underlying area */


@@ -236,7 +236,7 @@ static void _check_raid_seg(struct lv_segment *seg, int *error_count)
if (!seg->areas)
raid_seg_error("zero areas");
if (seg->extents_copied > seg->area_len)
if (seg->extents_copied > seg->len)
raid_seg_error_val("extents_copied too large", seg->extents_copied);
/* Default < 10, change once raid1 split shift and rename SubLVs works! */
@@ -475,7 +475,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
struct lv_segment *seg, *seg2;
uint32_t le = 0;
unsigned seg_count = 0, seg_found, external_lv_found = 0;
uint32_t area_multiplier, s;
uint32_t data_rimage_count, s;
struct seg_list *sl;
struct glv_list *glvl;
int error_count = 0;
@@ -498,13 +498,13 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
inc_error_count;
}
area_multiplier = segtype_is_striped(seg->segtype) ?
seg->area_count : 1;
if (seg->area_len * area_multiplier != seg->len) {
log_error("LV %s: segment %u has inconsistent "
"area_len %u",
lv->name, seg_count, seg->area_len);
data_rimage_count = seg->area_count - seg->segtype->parity_devs;
/* FIXME: raid varies seg->area_len? */
if (seg->len != seg->area_len &&
seg->len != seg->area_len * data_rimage_count) {
log_error("LV %s: segment %u with len=%u "
" has inconsistent area_len %u",
lv->name, seg_count, seg->len, seg->area_len);
inc_error_count;
}
@@ -766,10 +766,10 @@ static int _lv_split_segment(struct logical_volume *lv, struct lv_segment *seg,
/* Clone the existing segment */
if (!(split_seg = alloc_lv_segment(seg->segtype,
seg->lv, seg->le, seg->len,
seg->lv, seg->le, seg->len, seg->reshape_len,
seg->status, seg->stripe_size,
seg->log_lv,
seg->area_count, seg->area_len,
seg->area_count, seg->area_len, seg->data_copies,
seg->chunk_size, seg->region_size,
seg->extents_copied, seg->pvmove_source_seg))) {
log_error("Couldn't allocate cloned LV segment.");


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -137,7 +137,11 @@
e.g. to prohibit allocation of a RAID image
on a PV already holding an image of the RAID set */
#define LOCKD_SANLOCK_LV UINT64_C(0x0080000000000000) /* LV - Internal use only */
/* Next unused flag: UINT64_C(0x0100000000000000) */
#define LV_RESHAPE_DELTA_DISKS_PLUS UINT64_C(0x0100000000000000) /* LV reshape flag delta disks plus image(s) */
#define LV_RESHAPE_DELTA_DISKS_MINUS UINT64_C(0x0200000000000000) /* LV reshape flag delta disks minus image(s) */
#define LV_REMOVE_AFTER_RESHAPE UINT64_C(0x0400000000000000) /* LV needs to be removed after a shrinking reshape */
/* Next unused flag: UINT64_C(0x0800000000000000) */
/* Format features flags */
#define FMT_SEGMENTS 0x00000001U /* Arbitrary segment params? */
@@ -446,6 +450,7 @@ struct lv_segment {
const struct segment_type *segtype;
uint32_t le;
uint32_t len;
uint32_t reshape_len; /* For RAID: user hidden additional out of place reshaping length off area_len and len */
uint64_t status;
@@ -454,6 +459,7 @@ struct lv_segment {
uint32_t writebehind; /* For RAID (RAID1 only) */
uint32_t min_recovery_rate; /* For RAID */
uint32_t max_recovery_rate; /* For RAID */
uint32_t data_offset; /* For RAID: data offset in sectors on each data component image */
uint32_t area_count;
uint32_t area_len;
uint32_t chunk_size; /* For snapshots/thin_pool. In sectors. */
@@ -464,6 +470,7 @@ struct lv_segment {
struct logical_volume *cow;
struct dm_list origin_list;
uint32_t region_size; /* For mirrors, replicators - in sectors */
uint32_t data_copies; /* For RAID: number of data copies (e.g. 3 for RAID 6) */
uint32_t extents_copied;/* Number of extents synced for raids/mirrors */
struct logical_volume *log_lv;
struct lv_segment *pvmove_source_seg;
@@ -909,8 +916,8 @@ struct lvcreate_params {
int wipe_signatures; /* all */
int32_t major; /* all */
int32_t minor; /* all */
int log_count; /* mirror */
int nosync; /* mirror */
int log_count; /* mirror/RAID */
int nosync; /* mirror/RAID */
int pool_metadata_spare; /* pools */
int type; /* type arg is given */
int temporary; /* temporary LV */
@@ -941,15 +948,15 @@ struct lvcreate_params {
#define PASS_ARG_ZERO 0x08
int passed_args;
uint32_t stripes; /* striped */
uint32_t stripe_size; /* striped */
uint32_t stripes; /* striped/RAID */
uint32_t stripe_size; /* striped/RAID */
uint32_t chunk_size; /* snapshot */
uint32_t region_size; /* mirror */
uint32_t region_size; /* mirror/RAID */
unsigned stripes_supplied; /* striped */
unsigned stripe_size_supplied; /* striped */
unsigned stripes_supplied; /* striped/RAID */
unsigned stripe_size_supplied; /* striped/RAID */
uint32_t mirrors; /* mirror */
uint32_t mirrors; /* mirror/RAID */
uint32_t min_recovery_rate; /* RAID */
uint32_t max_recovery_rate; /* RAID */
@@ -1205,7 +1212,9 @@ struct logical_volume *first_replicator_dev(const struct logical_volume *lv);
int lv_is_raid_with_tracking(const struct logical_volume *lv);
uint32_t lv_raid_image_count(const struct logical_volume *lv);
int lv_raid_change_image_count(struct logical_volume *lv,
uint32_t new_count, struct dm_list *allocate_pvs);
uint32_t new_count,
uint32_t new_region_size,
struct dm_list *allocate_pvs);
int lv_raid_split(struct logical_volume *lv, const char *split_name,
uint32_t new_count, struct dm_list *splittable_pvs);
int lv_raid_split_and_track(struct logical_volume *lv,
@@ -1233,6 +1242,8 @@ uint32_t raid_ensure_min_region_size(const struct logical_volume *lv, uint64_t r
int lv_raid_change_region_size(struct logical_volume *lv,
int yes, int force, uint32_t new_region_size);
int lv_raid_in_sync(const struct logical_volume *lv);
uint32_t lv_raid_data_copies(const struct segment_type *segtype, uint32_t area_count);
int lv_raid_free_reshape_space(const struct logical_volume *lv);
/* -- metadata/raid_manip.c */
/* ++ metadata/cache_manip.c */

File diff suppressed because it is too large.


@@ -43,7 +43,8 @@ struct segment_type *get_segtype_from_flag(struct cmd_context *cmd, uint64_t fla
{
struct segment_type *segtype;
dm_list_iterate_items(segtype, &cmd->segtypes)
/* Iterate backwards to provide aliases; e.g. raid5 instead of raid5_ls */
dm_list_iterate_back_items(segtype, &cmd->segtypes)
if (flag & segtype->flags)
return segtype;


@@ -140,7 +140,11 @@ struct dev_manager;
#define segtype_is_any_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10(segtype) ((segtype)->flags & SEG_RAID10 ? 1 : 0)
#define segtype_is_raid10_near(segtype) segtype_is_raid10(segtype)
/* FIXME: once raid10_offset supported */
#define segtype_is_raid10_offset(segtype) 0 // ((segtype)->flags & SEG_RAID10_OFFSET ? 1 : 0)
#define segtype_is_raid_with_meta(segtype) (segtype_is_raid(segtype) && !segtype_is_raid0(segtype))
#define segtype_is_striped_raid(segtype) (segtype_is_raid(segtype) && !segtype_is_raid1(segtype))
#define segtype_is_reshapable_raid(segtype) ((segtype_is_striped_raid(segtype) && !segtype_is_any_raid0(segtype)) || segtype_is_raid10_near(segtype) || segtype_is_raid10_offset(segtype))
#define segtype_is_snapshot(segtype) ((segtype)->flags & SEG_SNAPSHOT ? 1 : 0)
#define segtype_is_striped(segtype) ((segtype)->flags & SEG_AREAS_STRIPED ? 1 : 0)
#define segtype_is_thin(segtype) ((segtype)->flags & (SEG_THIN_POOL|SEG_THIN_VOLUME) ? 1 : 0)
@@ -190,6 +194,8 @@ struct dev_manager;
#define seg_is_raid10(seg) segtype_is_raid10((seg)->segtype)
#define seg_is_raid10_near(seg) segtype_is_raid10_near((seg)->segtype)
#define seg_is_raid_with_meta(seg) segtype_is_raid_with_meta((seg)->segtype)
#define seg_is_striped_raid(seg) segtype_is_striped_raid((seg)->segtype)
#define seg_is_reshapable_raid(seg) segtype_is_reshapable_raid((seg)->segtype)
#define seg_is_replicator(seg) ((seg)->segtype->flags & SEG_REPLICATOR ? 1 : 0)
#define seg_is_replicator_dev(seg) ((seg)->segtype->flags & SEG_REPLICATOR_DEV ? 1 : 0)
#define seg_is_snapshot(seg) segtype_is_snapshot((seg)->segtype)
@@ -280,6 +286,7 @@ struct segment_type *init_unknown_segtype(struct cmd_context *cmd,
#define RAID_FEATURE_RAID0 (1U << 1) /* version 1.7 */
#define RAID_FEATURE_RESHAPING (1U << 2) /* version 1.8 */
#define RAID_FEATURE_RAID4 (1U << 3) /* ! version 1.8 or 1.9.0 */
#define RAID_FEATURE_RESHAPE (1U << 4) /* version 1.10.1 */
#ifdef RAID_INTERNAL
int init_raid_segtypes(struct cmd_context *cmd, struct segtype_library *seglib);


@@ -238,8 +238,8 @@ static struct lv_segment *_alloc_snapshot_seg(struct logical_volume *lv)
return NULL;
}
if (!(seg = alloc_lv_segment(segtype, lv, 0, lv->le_count, 0, 0,
NULL, 0, lv->le_count, 0, 0, 0, NULL))) {
if (!(seg = alloc_lv_segment(segtype, lv, 0, lv->le_count, 0, 0, 0,
NULL, 0, lv->le_count, 0, 0, 0, 0, NULL))) {
log_error("Couldn't allocate new snapshot segment.");
return NULL;
}


@@ -58,13 +58,13 @@
#define r1__r0m _takeover_from_raid1_to_raid0_meta
#define r1__r1 _takeover_from_raid1_to_raid1
#define r1__r10 _takeover_from_raid1_to_raid10
#define r1__r45 _takeover_from_raid1_to_raid45
#define r1__r5 _takeover_from_raid1_to_raid5
#define r1__str _takeover_from_raid1_to_striped
#define r45_lin _takeover_from_raid45_to_linear
#define r45_mir _takeover_from_raid45_to_mirrored
#define r45_r0 _takeover_from_raid45_to_raid0
#define r45_r0m _takeover_from_raid45_to_raid0_meta
#define r45_r1 _takeover_from_raid45_to_raid1
#define r5_r1 _takeover_from_raid5_to_raid1
#define r45_r54 _takeover_from_raid45_to_raid54
#define r45_r6 _takeover_from_raid45_to_raid6
#define r45_str _takeover_from_raid45_to_striped
@@ -109,8 +109,8 @@ static takeover_fn_t _takeover_fns[][11] = {
/* mirror */ { X , X , N , mir_r0, mir_r0m, mir_r1, mir_r45, X , mir_r10, X , X },
/* raid0 */ { r0__lin, r0__str, r0__mir, N , r0__r0m, r0__r1, r0__r45, r0__r6, r0__r10, X , X },
/* raid0_meta */ { r0m_lin, r0m_str, r0m_mir, r0m_r0, N , r0m_r1, r0m_r45, r0m_r6, r0m_r10, X , X },
/* raid1 */ { r1__lin, r1__str, r1__mir, r1__r0, r1__r0m, r1__r1, r1__r45, X , r1__r10, X , X },
/* raid4/5 */ { r45_lin, r45_str, r45_mir, r45_r0, r45_r0m, r45_r1, r45_r54, r45_r6, X , X , X },
/* raid1 */ { r1__lin, r1__str, r1__mir, r1__r0, r1__r0m, r1__r1, r1__r5, X , r1__r10, X , X },
/* raid4/5 */ { r45_lin, r45_str, r45_mir, r45_r0, r45_r0m, r5_r1 , r45_r54, r45_r6, X , X , X },
/* raid6 */ { X , r6__str, X , r6__r0, r6__r0m, X , r6__r45, X , X , X , X },
/* raid10 */ { r10_lin, r10_str, r10_mir, r10_r0, r10_r0m, r10_r1, X , X , X , X , X },
/* raid01 */ // { X , r01_str, X , X , X , X , X , X , r01_r10, r01_r01, X },


@@ -137,6 +137,7 @@ static int _raid_text_import(struct lv_segment *seg,
} raid_attr_import[] = {
{ "region_size", &seg->region_size },
{ "stripe_size", &seg->stripe_size },
{ "data_copies", &seg->data_copies },
{ "writebehind", &seg->writebehind },
{ "min_recovery_rate", &seg->min_recovery_rate },
{ "max_recovery_rate", &seg->max_recovery_rate },
@@ -146,6 +147,10 @@ static int _raid_text_import(struct lv_segment *seg,
for (i = 0; i < DM_ARRAY_SIZE(raid_attr_import); i++, aip++) {
if (dm_config_has_node(sn, aip->name)) {
if (!dm_config_get_uint32(sn, aip->name, aip->var)) {
if (!strcmp(aip->name, "data_copies")) {
*aip->var = 0;
continue;
}
log_error("Couldn't read '%s' for segment %s of logical volume %s.",
aip->name, dm_config_parent_name(sn), seg->lv->name);
return 0;
@@ -165,6 +170,9 @@ static int _raid_text_import(struct lv_segment *seg,
return 0;
}
if (seg->data_copies < 2)
seg->data_copies = lv_raid_data_copies(seg->segtype, seg->area_count);
if (seg_is_any_raid0(seg))
seg->area_len /= seg->area_count;
@@ -183,18 +191,31 @@ static int _raid_text_export_raid0(const struct lv_segment *seg, struct formatte
static int _raid_text_export_raid(const struct lv_segment *seg, struct formatter *f)
{
outf(f, "device_count = %u", seg->area_count);
int raid0 = seg_is_any_raid0(seg);
if (raid0)
outfc(f, (seg->area_count == 1) ? "# linear" : NULL,
"stripe_count = %u", seg->area_count);
else {
outf(f, "device_count = %u", seg->area_count);
if (seg_is_any_raid10(seg) && seg->data_copies > 0)
outf(f, "data_copies = %" PRIu32, seg->data_copies);
if (seg->region_size)
outf(f, "region_size = %" PRIu32, seg->region_size);
}
if (seg->stripe_size)
outf(f, "stripe_size = %" PRIu32, seg->stripe_size);
if (seg->region_size)
outf(f, "region_size = %" PRIu32, seg->region_size);
if (seg->writebehind)
outf(f, "writebehind = %" PRIu32, seg->writebehind);
if (seg->min_recovery_rate)
outf(f, "min_recovery_rate = %" PRIu32, seg->min_recovery_rate);
if (seg->max_recovery_rate)
outf(f, "max_recovery_rate = %" PRIu32, seg->max_recovery_rate);
if (!raid0) {
if (seg_is_raid1(seg) && seg->writebehind)
outf(f, "writebehind = %" PRIu32, seg->writebehind);
if (seg->min_recovery_rate)
outf(f, "min_recovery_rate = %" PRIu32, seg->min_recovery_rate);
if (seg->max_recovery_rate)
outf(f, "max_recovery_rate = %" PRIu32, seg->max_recovery_rate);
}
return out_areas(f, seg, "raid");
}
@@ -216,14 +237,16 @@ static int _raid_add_target_line(struct dev_manager *dm __attribute__((unused)),
struct dm_tree_node *node, uint64_t len,
uint32_t *pvmove_mirror_count __attribute__((unused)))
{
int delta_disks = 0, delta_disks_minus = 0, delta_disks_plus = 0, data_offset = 0;
uint32_t s;
uint64_t flags = 0;
uint64_t rebuilds = 0;
uint64_t writemostly = 0;
struct dm_tree_node_raid_params params;
int raid0 = seg_is_any_raid0(seg);
uint64_t rebuilds[RAID_BITMAP_SIZE];
uint64_t writemostly[RAID_BITMAP_SIZE];
struct dm_tree_node_raid_params_v2 params;
memset(&params, 0, sizeof(params));
memset(&rebuilds, 0, sizeof(rebuilds));
memset(&writemostly, 0, sizeof(writemostly));
if (!seg->area_count) {
log_error(INTERNAL_ERROR "_raid_add_target_line called "
@@ -232,64 +255,85 @@ static int _raid_add_target_line(struct dev_manager *dm __attribute__((unused)),
}
/*
* 64 device restriction imposed by kernel as well. It is
* not strictly a userspace limitation.
* 253 device restriction imposed by kernel due to MD and dm-raid bitfield limitation in superblock.
* It is not strictly a userspace limitation.
*/
if (seg->area_count > 64) {
log_error("Unable to handle more than 64 devices in a "
"single RAID array");
if (seg->area_count > DEFAULT_RAID_MAX_IMAGES) {
log_error("Unable to handle more than %u devices in a "
"single RAID array", DEFAULT_RAID_MAX_IMAGES);
return 0;
}
if (!raid0) {
if (!seg_is_any_raid0(seg)) {
if (!seg->region_size) {
log_error("Missing region size for mirror segment.");
log_error("Missing region size for raid segment in %s.",
seg_lv(seg, 0)->name);
return 0;
}
for (s = 0; s < seg->area_count; s++)
if (seg_lv(seg, s)->status & LV_REBUILD)
rebuilds |= 1ULL << s;
for (s = 0; s < seg->area_count; s++) {
uint64_t status = seg_lv(seg, s)->status;
for (s = 0; s < seg->area_count; s++)
if (seg_lv(seg, s)->status & LV_WRITEMOSTLY)
writemostly |= 1ULL << s;
if (status & LV_REBUILD)
rebuilds[s/64] |= 1ULL << (s%64);
if (status & LV_RESHAPE_DELTA_DISKS_PLUS) {
delta_disks++;
delta_disks_plus++;
} else if (status & LV_RESHAPE_DELTA_DISKS_MINUS) {
delta_disks--;
delta_disks_minus++;
}
if (delta_disks_plus && delta_disks_minus) {
log_error(INTERNAL_ERROR "Invalid request for delta disks minus and delta disks plus!");
return 0;
}
if (status & LV_WRITEMOSTLY)
writemostly[s/64] |= 1ULL << (s%64);
}
data_offset = seg->data_offset;
if (mirror_in_sync())
flags = DM_NOSYNC;
}
params.raid_type = lvseg_name(seg);
params.stripe_size = seg->stripe_size;
params.flags = flags;
if (raid0) {
params.mirrors = 1;
params.stripes = seg->area_count;
} else if (seg->segtype->parity_devs) {
if (seg->segtype->parity_devs) {
/* RAID 4/5/6 */
params.mirrors = 1;
params.stripes = seg->area_count - seg->segtype->parity_devs;
} else if (seg_is_raid10(seg)) {
/* RAID 10 only supports 2 mirrors now */
params.mirrors = 2;
params.stripes = seg->area_count / 2;
} else if (seg_is_any_raid0(seg)) {
params.mirrors = 1;
params.stripes = seg->area_count;
} else if (seg_is_any_raid10(seg)) {
params.data_copies = seg->data_copies;
params.stripes = seg->area_count;
} else {
/* RAID 1 */
params.mirrors = seg->area_count;
params.mirrors = seg->data_copies;
params.stripes = 1;
params.writebehind = seg->writebehind;
memcpy(params.writemostly, writemostly, sizeof(params.writemostly));
}
if (!raid0) {
/* RAID 0 doesn't have a bitmap, thus no region_size, rebuilds etc. */
if (!seg_is_any_raid0(seg)) {
params.region_size = seg->region_size;
params.rebuilds = rebuilds;
params.writemostly = writemostly;
memcpy(params.rebuilds, rebuilds, sizeof(params.rebuilds));
params.min_recovery_rate = seg->min_recovery_rate;
params.max_recovery_rate = seg->max_recovery_rate;
params.delta_disks = delta_disks;
params.data_offset = data_offset;
}
if (!dm_tree_node_add_raid_target_with_params(node, len, &params))
params.stripe_size = seg->stripe_size;
params.flags = flags;
if (!dm_tree_node_add_raid_target_with_params_v2(node, len, &params))
return_0;
return add_areas_line(dm, seg, node, 0u, seg->area_count);
@@ -404,19 +448,32 @@ out:
return r;
}
/* Define raid feature based on the tuple(major, minor, patchlevel) of raid target */
struct raid_feature {
uint32_t maj;
uint32_t min;
uint32_t patchlevel;
unsigned raid_feature;
const char *feature;
};
/* Return true if tuple(@maj, @min, @patchlevel) is greater/equal to @*feature members */
static int _check_feature(const struct raid_feature *feature, uint32_t maj, uint32_t min, uint32_t patchlevel)
{
return (maj > feature->maj) ||
(maj == feature->maj && min > feature->min) ||
(maj == feature->maj && min == feature->min && patchlevel >= feature->patchlevel);
}
static int _raid_target_present(struct cmd_context *cmd,
const struct lv_segment *seg __attribute__((unused)),
unsigned *attributes)
{
/* List of features with their kernel target version */
static const struct feature {
uint32_t maj;
uint32_t min;
unsigned raid_feature;
const char *feature;
} _features[] = {
{ 1, 3, RAID_FEATURE_RAID10, SEG_TYPE_NAME_RAID10 },
{ 1, 7, RAID_FEATURE_RAID0, SEG_TYPE_NAME_RAID0 },
const struct raid_feature _features[] = {
{ 1, 3, 0, RAID_FEATURE_RAID10, SEG_TYPE_NAME_RAID10 },
{ 1, 7, 0, RAID_FEATURE_RAID0, SEG_TYPE_NAME_RAID0 },
{ 1, 10, 1, RAID_FEATURE_RESHAPE, "reshaping" },
};
static int _raid_checked = 0;
@@ -438,13 +495,19 @@ static int _raid_target_present(struct cmd_context *cmd,
return_0;
for (i = 0; i < DM_ARRAY_SIZE(_features); ++i)
if ((maj > _features[i].maj) ||
(maj == _features[i].maj && min >= _features[i].min))
if (_check_feature(_features + i, maj, min, patchlevel))
_raid_attrs |= _features[i].raid_feature;
else
log_very_verbose("Target raid does not support %s.",
_features[i].feature);
/*
* Separate check for proper raid4 mapping support
*
* If we get more of these range checks, avoid them
* altogether by enhancing 'struct raid_feature'
* and _check_feature() to handle them.
*/
if (!(maj == 1 && (min == 8 || (min == 9 && patchlevel == 0))))
_raid_attrs |= RAID_FEATURE_RAID4;
else


@@ -69,7 +69,7 @@ FIELD(LVS, lv, BIN, "ActExcl", lvid, 10, lvactiveexclusively, lv_active_exclusiv
FIELD(LVS, lv, SNUM, "Maj", major, 0, int32, lv_major, "Persistent major number or -1 if not persistent.", 0)
FIELD(LVS, lv, SNUM, "Min", minor, 0, int32, lv_minor, "Persistent minor number or -1 if not persistent.", 0)
FIELD(LVS, lv, SIZ, "Rahead", lvid, 0, lvreadahead, lv_read_ahead, "Read ahead setting in current units.", 0)
FIELD(LVS, lv, SIZ, "LSize", size, 0, size64, lv_size, "Size of LV in current units.", 0)
FIELD(LVS, lv, SIZ, "LSize", lvid, 0, lv_size, lv_size, "Size of LV in current units.", 0)
FIELD(LVS, lv, SIZ, "MSize", lvid, 0, lvmetadatasize, lv_metadata_size, "For thin and cache pools, the size of the LV that holds the metadata.", 0)
FIELD(LVS, lv, NUM, "#Seg", lvid, 0, lvsegcount, seg_count, "Number of segments in LV.", 0)
FIELD(LVS, lv, STR, "Origin", lvid, 0, origin, origin, "For snapshots and thins, the origin device of this LV.", 0)
@@ -241,9 +241,16 @@ FIELD(VGS, vg, NUM, "#VMdaCps", cmd, 0, vgmdacopies, vg_mda_copies, "Target numb
* SEGS type fields
*/
FIELD(SEGS, seg, STR, "Type", list, 0, segtype, segtype, "Type of LV segment.", 0)
FIELD(SEGS, seg, NUM, "#Str", area_count, 0, uint32, stripes, "Number of stripes or mirror legs.", 0)
FIELD(SEGS, seg, NUM, "#Str", list, 0, seg_stripes, stripes, "Number of stripes or mirror/raid1 legs.", 0)
FIELD(SEGS, seg, NUM, "#DStr", list, 0, seg_data_stripes, data_stripes, "Number of data stripes or mirror/raid1 legs.", 0)
FIELD(SEGS, seg, SIZ, "RSize", list, 0, seg_reshape_len, reshape_len, "Size of out-of-place reshape space in current units.", 0)
FIELD(SEGS, seg, NUM, "RSize", list, 0, seg_reshape_len_le, reshape_len_le, "Size of out-of-place reshape space in logical extents.", 0)
FIELD(SEGS, seg, NUM, "#Cpy", list, 0, seg_data_copies, data_copies, "Number of data copies.", 0)
FIELD(SEGS, seg, NUM, "DOff", list, 0, seg_data_offset, data_offset, "Data offset on each image device.", 0)
FIELD(SEGS, seg, NUM, "NOff", list, 0, seg_new_data_offset, new_data_offset, "New data offset after any reshape on each image device.", 0)
FIELD(SEGS, seg, NUM, "#Par", list, 0, seg_parity_chunks, parity_chunks, "Number of (rotating) parity chunks.", 0)
FIELD(SEGS, seg, SIZ, "Stripe", stripe_size, 0, size32, stripe_size, "For stripes, amount of data placed on one device before switching to the next.", 0)
FIELD(SEGS, seg, SIZ, "Region", region_size, 0, size32, region_size, "For mirrors, the unit of data copied when synchronising devices.", 0)
FIELD(SEGS, seg, SIZ, "Region", region_size, 0, size32, region_size, "For mirrors/raids, the unit of data per leg when synchronizing devices.", 0)
FIELD(SEGS, seg, SIZ, "Chunk", list, 0, chunksize, chunk_size, "For snapshots, the unit of data used when tracking changes.", 0)
FIELD(SEGS, seg, NUM, "#Thins", list, 0, thincount, thin_count, "For thin pools, the number of thin volumes in this pool.", 0)
FIELD(SEGS, seg, STR, "Discards", list, 0, discards, discards, "For thin pools, how discards are handled.", 0)


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2010-2013 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010-2017 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -446,8 +446,22 @@ GET_VG_NUM_PROPERTY_FN(vg_missing_pv_count, vg_missing_pv_count(vg))
/* LVSEG */
GET_LVSEG_STR_PROPERTY_FN(segtype, lvseg_segtype_dup(lvseg->lv->vg->vgmem, lvseg))
#define _segtype_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(data_copies, lvseg->data_copies)
#define _data_copies_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(reshape_len, lvseg->reshape_len)
#define _reshape_len_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(reshape_len_le, lvseg->reshape_len)
#define _reshape_len_le_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(data_offset, lvseg->data_offset)
#define _data_offset_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(new_data_offset, lvseg->data_offset)
#define _new_data_offset_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(parity_chunks, lvseg->data_offset)
#define _parity_chunks_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(stripes, lvseg->area_count)
#define _stripes_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(data_stripes, lvseg->area_count)
#define _data_stripes_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(stripe_size, (SECTOR_SIZE * lvseg->stripe_size))
#define _stripe_size_set prop_not_implemented_set
GET_LVSEG_NUM_PROPERTY_FN(region_size, (SECTOR_SIZE * lvseg->region_size))


@@ -2296,6 +2296,22 @@ static int _size64_disp(struct dm_report *rh __attribute__((unused)),
return _field_set_value(field, repstr, sortval);
}
static int _lv_size_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct logical_volume *lv = (const struct logical_volume *) data;
const struct lv_segment *seg = first_seg(lv);
uint64_t size = lv->le_count;
if (!lv_is_raid_image(lv))
size -= seg->reshape_len * (seg->area_count > 2 ? seg->area_count : 1);
size *= lv->vg->extent_size;
return _size64_disp(rh, mem, field, &size, private);
}
static int _uint32_disp(struct dm_report *rh, struct dm_pool *mem __attribute__((unused)),
struct dm_report_field *field,
const void *data, void *private __attribute__((unused)))
@@ -2412,6 +2428,197 @@ static int _segstartpe_disp(struct dm_report *rh,
return dm_report_field_uint32(rh, field, &seg->le);
}
/* Helper: get used stripes = total stripes minus any to remove after reshape */
static int _get_seg_used_stripes(const struct lv_segment *seg)
{
uint32_t s;
uint32_t stripes = seg->area_count;
for (s = seg->area_count - 1; stripes && s; s--) {
if (seg_type(seg, s) == AREA_LV &&
seg_lv(seg, s)->status & LV_REMOVE_AFTER_RESHAPE)
stripes--;
else
break;
}
return stripes;
}
static int _seg_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = ((const struct lv_segment *) data);
return dm_report_field_uint32(rh, field, &seg->area_count);
}
/* Report the number of data stripes, which is less than total stripes (e.g. 2 less for raid6) */
static int _seg_data_stripes_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = (const struct lv_segment *) data;
uint32_t stripes = _get_seg_used_stripes(seg) - seg->segtype->parity_devs;
/* FIXME: in case of odd numbers of raid10 stripes */
if (seg_is_raid10(seg))
stripes /= seg->data_copies;
return dm_report_field_uint32(rh, field, &stripes);
}
/* Helper: return the top-level, reshapable raid LV in case @seg belongs to a raid rimage LV */
static struct logical_volume *_lv_for_raid_image_seg(const struct lv_segment *seg, struct dm_pool *mem)
{
char *lv_name;
if (seg_is_reshapable_raid(seg))
return seg->lv;
if (seg->lv &&
lv_is_raid_image(seg->lv) && !seg->le &&
(lv_name = dm_pool_strdup(mem, seg->lv->name))) {
char *p = strchr(lv_name, '_');
if (p) {
/* Handle duplicated sub LVs */
if (strstr(p, "_dup_"))
p = strchr(p + 5, '_');
if (p) {
struct lv_list *lvl;
*p = '\0';
if ((lvl = find_lv_in_vg(seg->lv->vg, lv_name)) &&
seg_is_reshapable_raid(first_seg(lvl->lv)))
return lvl->lv;
}
}
}
return NULL;
}
/* Helper: return @seg if it, or its top-level raid LV, is reshapable, otherwise NULL */
static const struct lv_segment *_get_reshapable_seg(const struct lv_segment *seg, struct dm_pool *mem)
{
return _lv_for_raid_image_seg(seg, mem) ? seg : NULL;
}
/* Display segment reshape length in current units */
static int _seg_reshape_len_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);
if (seg) {
uint32_t reshape_len = seg->reshape_len * seg->area_count * seg->lv->vg->extent_size;
return _size32_disp(rh, mem, field, &reshape_len, private);
}
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
/* Display segment reshape length in logical extents */
static int _seg_reshape_len_le_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = _get_reshapable_seg((const struct lv_segment *) data, mem);
if (seg) {
uint32_t reshape_len = seg->reshape_len * seg->area_count;
return dm_report_field_uint32(rh, field, &reshape_len);
}
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
/* Display segment data copies (e.g. 3 for raid6) */
static int _seg_data_copies_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = (const struct lv_segment *) data;
if (seg->data_copies)
return dm_report_field_uint32(rh, field, &seg->data_copies);
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
/* Helper: display segment data offset/new data offset in sectors */
static int _segdata_offset(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private, int new_data_offset)
{
const struct lv_segment *seg = (const struct lv_segment *) data;
struct logical_volume *lv;
if ((lv = _lv_for_raid_image_seg(seg, mem))) {
uint64_t data_offset;
if (lv_raid_data_offset(lv, &data_offset)) {
if (new_data_offset && !lv_raid_image_in_sync(seg->lv))
data_offset = data_offset ? 0 : seg->reshape_len * lv->vg->extent_size;
return dm_report_field_uint64(rh, field, &data_offset);
}
}
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _seg_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
return _segdata_offset(rh, mem, field, data, private, 0);
}
static int _seg_new_data_offset_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
return _segdata_offset(rh, mem, field, data, private, 1);
}
static int _seg_parity_chunks_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
const struct lv_segment *seg = (const struct lv_segment *) data;
uint32_t parity_chunks = seg->segtype->parity_devs ?: seg->data_copies - 1;
if (parity_chunks) {
uint32_t s, resilient_sub_lvs = 0;
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) == AREA_LV) {
struct lv_segment *seg1 = first_seg(seg_lv(seg, s));
if (seg1->segtype->parity_devs ||
seg1->data_copies > 1)
resilient_sub_lvs++;
}
}
if (resilient_sub_lvs && resilient_sub_lvs == seg->area_count)
parity_chunks++;
return dm_report_field_uint32(rh, field, &parity_chunks);
}
return _field_set_value(field, "", &GET_TYPE_RESERVED_VALUE(num_undef_64));
}
static int _segsize_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)


@@ -1,5 +1,8 @@
dm_bit_get_last
dm_bit_get_prev
dm_filemapd_mode_from_string
dm_stats_update_regions_from_fd
dm_bitset_parse_list
dm_stats_bind_from_fd
dm_stats_start_filemapd
dm_tree_node_add_raid_target_with_params_v2


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2017 Red Hat, Inc. All rights reserved.
* Copyright (C) 2006 Rackable Systems All rights reserved.
*
* This file is part of the device-mapper userspace tools.
@@ -331,6 +331,7 @@ struct dm_status_raid {
char *dev_health;
/* idle, frozen, resync, recover, check, repair */
char *sync_action;
uint64_t data_offset; /* RAID out-of-place reshaping */
};
int dm_get_status_raid(struct dm_pool *mem, const char *params,
@@ -1368,6 +1369,69 @@ uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
uint64_t group_id);
/*
* The file map monitoring daemon can monitor files in two distinct
* ways: the mode affects the behaviour of the daemon when a file
* under monitoring is renamed or unlinked, and the conditions which
* cause the daemon to terminate.
*
* In both modes, the daemon will always shut down when the group
* being monitored is deleted.
*
* Follow inode:
* The daemon follows the inode of the file, as it was at the time the
* daemon started. The file descriptor referencing the file is kept
* open at all times, and the daemon will exit when it detects that
* the file has been unlinked and it is the last holder of a reference
* to the file.
*
* This mode is useful if the file is expected to be renamed, or moved
* within the file system, while it is being monitored.
*
* Follow path:
* The daemon follows the path that was given on the daemon command
* line. The file descriptor referencing the file is re-opened on each
* iteration of the daemon, and the daemon will exit if no file exists
* at this location (a tolerance is allowed so that a brief delay
* between unlink() and creat() is permitted).
*
* This mode is useful if the file is updated by unlinking the original
* and placing a new file at the same path.
*/
typedef enum {
DM_FILEMAPD_FOLLOW_INODE,
DM_FILEMAPD_FOLLOW_PATH,
DM_FILEMAPD_FOLLOW_NONE
} dm_filemapd_mode_t;
/*
* Parse a string representation of a dmfilemapd mode.
*
* Returns a valid dm_filemapd_mode_t value on success, or
* DM_FILEMAPD_FOLLOW_NONE on error.
*/
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str);
/*
* Start the dmfilemapd filemap monitoring daemon for the specified
* file descriptor, group, and file system path. The daemon will
* monitor the file for allocation changes, and when a change is
* detected, call dm_stats_update_regions_from_fd() to update the
* mapped regions for the file.
*
* The mode parameter controls the behaviour of the daemon when the
* file being monitored is unlinked or moved: see the comments for
* dm_filemapd_mode_t for a full description and possible values.
*
* The daemon can be stopped at any time by sending SIGTERM to the
* daemon pid.
*/
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
dm_filemapd_mode_t mode, unsigned foreground,
unsigned verbose);
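A minimal caller-side sketch, illustrative only: it assumes the file has already been mapped into a region group with dm_stats_create_regions_from_fd() and that group_id names that group; the helper name is invented for the example.

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include "libdevmapper.h"

/* Hypothetical helper: start background monitoring for an already-mapped file. */
static int start_monitoring(const char *path, uint64_t group_id)
{
	dm_filemapd_mode_t mode;
	int fd;

	/* "inode" or "path"; DM_FILEMAPD_FOLLOW_NONE signals a parse error. */
	mode = dm_filemapd_mode_from_string("inode");
	if (mode == DM_FILEMAPD_FOLLOW_NONE)
		return 0;

	if ((fd = open(path, O_RDONLY)) < 0)
		return 0;

	/*
	 * foreground=0: fork and exec dmfilemapd in the background;
	 * verbose=0: quiet daemon. The descriptor is inherited by the
	 * daemon across the fork/exec.
	 */
	if (!dm_stats_start_filemapd(fd, group_id, path, mode, 0, 0)) {
		fprintf(stderr, "Failed to start dmfilemapd for %s\n", path);
		return 0;
	}
	return 1;
}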
/*
* Call this to actually run the ioctl.
*/
@@ -1738,6 +1802,11 @@ int dm_tree_node_add_raid_target(struct dm_tree_node *node,
*/
#define DM_CACHE_METADATA_MAX_SECTORS DM_THIN_METADATA_MAX_SECTORS
/*
* Define number of elements in rebuild and writemostly arrays
* of 'struct dm_tree_node_raid_params'.
*/
struct dm_tree_node_raid_params {
const char *raid_type;
@@ -1749,25 +1818,70 @@ struct dm_tree_node_raid_params {
/*
* 'rebuilds' and 'writemostly' are bitfields that signify
* which devices in the array are to be rebuilt or marked
* writemostly. By choosing a 'uint64_t', we limit ourself
* to RAID arrays with 64 devices.
* writemostly. The kernel supports up to 253 legs.
* We limit ourselves by choosing a lower value
* for DEFAULT_RAID{1}_MAX_IMAGES in defaults.h.
*/
uint64_t rebuilds;
uint64_t writemostly;
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
uint32_t sync_daemon_sleep; /* ms (kernel default = 5sec) */
uint32_t max_recovery_rate; /* kB/sec/disk */
uint32_t min_recovery_rate; /* kB/sec/disk */
uint32_t stripe_cache; /* sectors */
uint64_t flags; /* [no]sync */
uint32_t reserved2;
};
/*
* Version 2 of the above node raid params struct, added to keep API compatibility.
*
* Extended for more than 64 legs (max 253 in the MD kernel runtime!),
* delta_disks for disk add/remove reshaping,
* data_offset for out-of-place reshaping
* and data_copies for odd number of raid10 legs.
*/
#define RAID_BITMAP_SIZE 4 /* 4 * 64 bit elements in rebuilds/writemostly arrays */
struct dm_tree_node_raid_params_v2 {
const char *raid_type;
uint32_t stripes;
uint32_t mirrors;
uint32_t region_size;
uint32_t stripe_size;
int delta_disks; /* +/- number of disks to add/remove (reshaping) */
int data_offset; /* data offset to set (out-of-place reshaping) */
/*
* 'rebuilds' and 'writemostly' are bitfields that signify
* which devices in the array are to be rebuilt or marked
* writemostly. The kernel supports up to 253 legs.
* We limit ourselves by choosing a lower value
* for DEFAULT_RAID_MAX_IMAGES.
*/
uint64_t rebuilds[RAID_BITMAP_SIZE];
uint64_t writemostly[RAID_BITMAP_SIZE];
uint32_t writebehind; /* I/Os (kernel default COUNTER_MAX / 2) */
uint32_t data_copies; /* RAID # of data copies */
uint32_t sync_daemon_sleep; /* ms (kernel default = 5sec) */
uint32_t max_recovery_rate; /* kB/sec/disk */
uint32_t min_recovery_rate; /* kB/sec/disk */
uint32_t stripe_cache; /* sectors */
uint64_t flags; /* [no]sync */
uint64_t reserved2;
};
int dm_tree_node_add_raid_target_with_params(struct dm_tree_node *node,
uint64_t size,
const struct dm_tree_node_raid_params *p);
/* Version 2 API function taking dm_tree_node_raid_params_v2 for aforementioned extensions. */
int dm_tree_node_add_raid_target_with_params_v2(struct dm_tree_node *node,
uint64_t size,
const struct dm_tree_node_raid_params_v2 *p);
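For illustration only, a caller might fill in the v2 parameters as below; the helper name, raid type and size values are example choices, not library requirements.

/* Sketch: request a 4-stripe raid5_ls segment and rebuild image 3. */
static int add_example_raid5(struct dm_tree_node *node, uint64_t size_sectors)
{
	struct dm_tree_node_raid_params_v2 p = { 0 };
	unsigned idx = 3;		/* image index to rebuild (0-based) */

	p.raid_type = "raid5_ls";
	p.stripes = 4;
	p.stripe_size = 128;		/* sectors */
	p.region_size = 1024;		/* sectors */

	/* Bitmaps cover RAID_BITMAP_SIZE * 64 images: word idx/64, bit idx%64. */
	p.rebuilds[idx / 64] |= 1ULL << (idx % 64);

	return dm_tree_node_add_raid_target_with_params_v2(node, size_sectors, &p);
}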
/* Cache feature_flags */
#define DM_CACHE_FEATURE_WRITEBACK 0x00000001
#define DM_CACHE_FEATURE_WRITETHROUGH 0x00000002


@@ -205,11 +205,14 @@ struct load_segment {
struct dm_tree_node *replicator;/* Replicator-dev */
uint64_t rdevice_index; /* Replicator-dev */
uint64_t rebuilds; /* raid */
uint64_t writemostly; /* raid */
int delta_disks; /* raid reshape number of disks */
int data_offset; /* raid reshape data offset on disk to set */
uint64_t rebuilds[RAID_BITMAP_SIZE]; /* raid */
uint64_t writemostly[RAID_BITMAP_SIZE]; /* raid */
uint32_t writebehind; /* raid */
uint32_t max_recovery_rate; /* raid kB/sec/disk */
uint32_t min_recovery_rate; /* raid kB/sec/disk */
uint32_t data_copies; /* raid10 data_copies */
struct dm_tree_node *metadata; /* Thin_pool + Cache */
struct dm_tree_node *pool; /* Thin_pool, Thin */
@@ -2353,16 +2356,21 @@ static int _mirror_emit_segment_line(struct dm_task *dmt, struct load_segment *s
return 1;
}
/* Is parameter non-zero? */
#define PARAM_IS_SET(p) ((p) ? 1 : 0)
static int _2_if_value(unsigned p)
{
return p ? 2 : 0;
}
/* Return number of bits assuming 4 * 64 bit size */
static int _get_params_count(uint64_t bits)
/* Return number of bits passed in @bits assuming RAID_BITMAP_SIZE * 64 bit size */
static int _get_params_count(uint64_t *bits)
{
int r = 0;
int i = RAID_BITMAP_SIZE;
r += 2 * hweight32(bits & 0xFFFFFFFF);
r += 2 * hweight32(bits >> 32);
while (i--) {
r += 2 * hweight32(bits[i] & 0xFFFFFFFF);
r += 2 * hweight32(bits[i] >> 32);
}
return r;
}
@@ -2373,32 +2381,60 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
size_t paramsize)
{
uint32_t i;
uint32_t area_count = seg->area_count / 2;
int param_count = 1; /* mandatory 'chunk size'/'stripe size' arg */
int pos = 0;
unsigned type = seg->type;
unsigned type;
if (seg->area_count % 2)
return 0;
if ((seg->flags & DM_NOSYNC) || (seg->flags & DM_FORCESYNC))
param_count++;
param_count += 2 * (PARAM_IS_SET(seg->region_size) +
PARAM_IS_SET(seg->writebehind) +
PARAM_IS_SET(seg->min_recovery_rate) +
PARAM_IS_SET(seg->max_recovery_rate));
param_count += _2_if_value(seg->data_offset) +
_2_if_value(seg->delta_disks) +
_2_if_value(seg->region_size) +
_2_if_value(seg->writebehind) +
_2_if_value(seg->min_recovery_rate) +
_2_if_value(seg->max_recovery_rate) +
_2_if_value(seg->data_copies > 1);
/* rebuilds and writemostly are 64 bits */
/* rebuilds and writemostly are RAID_BITMAP_SIZE * 64 bits */
param_count += _get_params_count(seg->rebuilds);
param_count += _get_params_count(seg->writemostly);
if ((type == SEG_RAID1) && seg->stripe_size)
log_error("WARNING: Ignoring RAID1 stripe size");
if ((seg->type == SEG_RAID1) && seg->stripe_size)
log_info("WARNING: Ignoring RAID1 stripe size");
/* Kernel only expects "raid0", not "raid0_meta" */
type = seg->type;
if (type == SEG_RAID0_META)
type = SEG_RAID0;
#if 0
/* Kernel only expects "raid10", not "raid10_{far,offset}" */
else if (type == SEG_RAID10_FAR ||
type == SEG_RAID10_OFFSET) {
param_count += 2;
type = SEG_RAID10_NEAR;
}
#endif
EMIT_PARAMS(pos, "%s %d %u", _dm_segtypes[type].target,
EMIT_PARAMS(pos, "%s %d %u",
// type == SEG_RAID10_NEAR ? "raid10" : _dm_segtypes[type].target,
type == SEG_RAID10 ? "raid10" : _dm_segtypes[type].target,
param_count, seg->stripe_size);
#if 0
if (seg->type == SEG_RAID10_FAR)
EMIT_PARAMS(pos, " raid10_format far");
else if (seg->type == SEG_RAID10_OFFSET)
EMIT_PARAMS(pos, " raid10_format offset");
#endif
if (seg->data_copies > 1 && type == SEG_RAID10)
EMIT_PARAMS(pos, " raid10_copies %u", seg->data_copies);
if (seg->flags & DM_NOSYNC)
EMIT_PARAMS(pos, " nosync");
else if (seg->flags & DM_FORCESYNC)
@@ -2407,27 +2443,38 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
if (seg->region_size)
EMIT_PARAMS(pos, " region_size %u", seg->region_size);
for (i = 0; i < (seg->area_count / 2); i++)
if (seg->rebuilds & (1ULL << i))
/* If seg->data_offset == 1, the kernel needs a zero offset to adjust to it */
if (seg->data_offset)
EMIT_PARAMS(pos, " data_offset %d", seg->data_offset == 1 ? 0 : seg->data_offset);
if (seg->delta_disks)
EMIT_PARAMS(pos, " delta_disks %d", seg->delta_disks);
for (i = 0; i < area_count; i++)
if (seg->rebuilds[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " rebuild %u", i);
if (seg->min_recovery_rate)
EMIT_PARAMS(pos, " min_recovery_rate %u",
seg->min_recovery_rate);
if (seg->max_recovery_rate)
EMIT_PARAMS(pos, " max_recovery_rate %u",
seg->max_recovery_rate);
for (i = 0; i < (seg->area_count / 2); i++)
if (seg->writemostly & (1ULL << i))
for (i = 0; i < area_count; i++)
if (seg->writemostly[i/64] & (1ULL << (i%64)))
EMIT_PARAMS(pos, " write_mostly %u", i);
if (seg->writebehind)
EMIT_PARAMS(pos, " max_write_behind %u", seg->writebehind);
/*
* Has to be emitted before "min_recovery_rate" or the kernel's
* check will fail when both are set and min > previous max
*/
if (seg->max_recovery_rate)
EMIT_PARAMS(pos, " max_recovery_rate %u",
seg->max_recovery_rate);
if (seg->min_recovery_rate)
EMIT_PARAMS(pos, " min_recovery_rate %u",
seg->min_recovery_rate);
/* Print number of metadata/data device pairs */
EMIT_PARAMS(pos, " %u", seg->area_count/2);
EMIT_PARAMS(pos, " %u", area_count);
if (_emit_areas_line(dmt, seg, params, paramsize, &pos) <= 0)
return_0;
@@ -3267,8 +3314,10 @@ int dm_tree_node_add_raid_target_with_params(struct dm_tree_node *node,
seg->region_size = p->region_size;
seg->stripe_size = p->stripe_size;
seg->area_count = 0;
seg->rebuilds = p->rebuilds;
seg->writemostly = p->writemostly;
memset(seg->rebuilds, 0, sizeof(seg->rebuilds));
seg->rebuilds[0] = p->rebuilds;
memset(seg->writemostly, 0, sizeof(seg->writemostly));
seg->writemostly[0] = p->writemostly;
seg->writebehind = p->writebehind;
seg->min_recovery_rate = p->min_recovery_rate;
seg->max_recovery_rate = p->max_recovery_rate;
@@ -3296,6 +3345,47 @@ int dm_tree_node_add_raid_target(struct dm_tree_node *node,
return dm_tree_node_add_raid_target_with_params(node, size, &params);
}
/*
* Version 2 of dm_tree_node_add_raid_target() allowing for:
*
* - maximum 253 legs in a raid set (MD kernel limitation)
* - delta_disks for disk add/remove reshaping
* - data_offset for out-of-place reshaping
* - data_copies to cope with odd numbers of raid10 disks
*/
int dm_tree_node_add_raid_target_with_params_v2(struct dm_tree_node *node,
uint64_t size,
const struct dm_tree_node_raid_params_v2 *p)
{
unsigned i;
struct load_segment *seg = NULL;
for (i = 0; i < DM_ARRAY_SIZE(_dm_segtypes) && !seg; ++i)
if (!strcmp(p->raid_type, _dm_segtypes[i].target))
if (!(seg = _add_segment(node,
_dm_segtypes[i].type, size)))
return_0;
if (!seg) {
log_error("Unsupported raid type %s.", p->raid_type);
return 0;
}
seg->region_size = p->region_size;
seg->stripe_size = p->stripe_size;
seg->area_count = 0;
seg->delta_disks = p->delta_disks;
seg->data_offset = p->data_offset;
memcpy(seg->rebuilds, p->rebuilds, sizeof(seg->rebuilds));
memcpy(seg->writemostly, p->writemostly, sizeof(seg->writemostly));
seg->writebehind = p->writebehind;
seg->data_copies = p->data_copies;
seg->min_recovery_rate = p->min_recovery_rate;
seg->max_recovery_rate = p->max_recovery_rate;
seg->flags = p->flags;
return 1;
}
int dm_tree_node_add_cache_target(struct dm_tree_node *node,
uint64_t size,
uint64_t feature_flags, /* DM_CACHE_FEATURE_* */


@@ -4875,6 +4875,154 @@ out:
return NULL;
}
#ifdef DMFILEMAPD
static const char *_filemapd_mode_names[] = {
"inode",
"path",
NULL
};
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str)
{
dm_filemapd_mode_t mode = DM_FILEMAPD_FOLLOW_INODE;
const char **mode_name;
if (mode_str) {
for (mode_name = _filemapd_mode_names; *mode_name; mode_name++)
if (!strcmp(*mode_name, mode_str))
break;
if (*mode_name)
mode = DM_FILEMAPD_FOLLOW_INODE
+ (mode_name - _filemapd_mode_names);
else {
log_error("Could not parse dmfilemapd mode: %s",
mode_str);
return DM_FILEMAPD_FOLLOW_NONE;
}
}
return mode;
}
#define DM_FILEMAPD "dmfilemapd"
#define NR_FILEMAPD_ARGS 6
/*
* Start dmfilemapd to monitor the specified file descriptor, and to
* update the group given by 'group_id' when the file's allocation
* changes.
*
* usage: dmfilemapd <fd> <group_id> <path> <mode> [<foreground>[<log_level>]]
*/
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
dm_filemapd_mode_t mode, unsigned foreground,
unsigned verbose)
{
char fd_str[8], group_str[8], fg_str[2], verb_str[2];
const char *mode_str = _filemapd_mode_names[mode];
char *args[NR_FILEMAPD_ARGS + 1];
pid_t pid = 0;
int argc = 0;
if (fd < 0) {
log_error("dmfilemapd file descriptor must be "
"non-negative: %d", fd);
return 0;
}
if (mode < DM_FILEMAPD_FOLLOW_INODE
|| mode > DM_FILEMAPD_FOLLOW_PATH) {
log_error("Invalid dmfilemapd mode argument: "
"Must be DM_FILEMAPD_FOLLOW_INODE or "
"DM_FILEMAPD_FOLLOW_PATH");
return 0;
}
if (foreground > 1) {
log_error("Invalid dmfilemapd foreground argument. "
"Must be 0 or 1: %d.", foreground);
return 0;
}
if (verbose > 3) {
log_error("Invalid dmfilemapd verbose argument. "
"Must be 0..3: %d.", verbose);
return 0;
}
/* set argv[0] */
args[argc++] = (char *) DM_FILEMAPD;
/* set <fd> */
if ((dm_snprintf(fd_str, sizeof(fd_str), "%d", fd)) < 0) {
log_error("Could not format fd argument.");
return 0;
}
args[argc++] = fd_str;
/* set <group_id> */
if ((dm_snprintf(group_str, sizeof(group_str), FMTu64, group_id)) < 0) {
log_error("Could not format group_id argument.");
return 0;
}
args[argc++] = group_str;
/* set <path> */
args[argc++] = (char *) path;
/* set <mode> */
args[argc++] = (char *) mode_str;
/* set <foreground> */
if ((dm_snprintf(fg_str, sizeof(fg_str), "%u", foreground)) < 0) {
log_error("Could not format foreground argument.");
return 0;
}
args[argc++] = fg_str;
/* set <verbose> */
if ((dm_snprintf(verb_str, sizeof(verb_str), "%u", verbose)) < 0) {
log_error("Could not format verbose argument.");
return 0;
}
args[argc++] = verb_str;
/* terminate args[argc] */
args[argc] = NULL;
log_very_verbose("Spawning daemon as '%s %d " FMTu64 " %s %s %u %u'",
*args, fd, group_id, path, mode_str,
foreground, verbose);
if (!foreground && ((pid = fork()) < 0)) {
log_error("Failed to fork filemapd process.");
return 0;
}
if (pid > 0) {
log_very_verbose("Forked filemapd process as pid %d", pid);
return 1;
}
execvp(args[0], args);
log_error("execvp() failed.");
if (!foreground)
_exit(127);
return 0;
}
# else /* !DMFILEMAPD */
dm_filemapd_mode_t dm_filemapd_mode_from_string(const char *mode_str)
{
return 0;
}
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
dm_filemapd_mode_t mode, unsigned foreground,
unsigned verbose)
{
log_error("dmfilemapd support disabled.");
return 0;
}
#endif /* DMFILEMAPD */
#else /* HAVE_LINUX_FIEMAP */
uint64_t *dm_stats_create_regions_from_fd(struct dm_stats *dms, int fd,
@@ -4892,6 +5040,13 @@ uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
log_error("File mapping requires FIEMAP ioctl support.");
return 0;
}
int dm_stats_start_filemapd(int fd, uint64_t group_id, const char *path,
			    dm_filemapd_mode_t mode, unsigned foreground,
			    unsigned verbose)
{
log_error("File mapping requires FIEMAP ioctl support.");
return 0;
}
#endif /* HAVE_LINUX_FIEMAP */
/*


@@ -89,6 +89,8 @@ static unsigned _count_fields(const char *p)
* <raid_type> <#devs> <health_str> <sync_ratio>
* Versions 1.5.0+ (6 fields):
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt>
* Versions 1.9.0+ (7 fields):
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt> <data_offset>
*/
int dm_get_status_raid(struct dm_pool *mem, const char *params,
struct dm_status_raid **status)
@@ -147,6 +149,22 @@ int dm_get_status_raid(struct dm_pool *mem, const char *params,
if (sscanf(p, "%s %" PRIu64, s->sync_action, &s->mismatch_count) != 2)
goto_bad;
if (num_fields < 7)
goto out;
/*
* All pre-1.9.0 version parameters are read. Now we check
* for additional 1.9.0+ parameters (i.e. num_fields at least 7).
*
* Note that data_offset will be 0 if the
* kernel returns a pre-1.9.0 status.
*/
msg_fields = "<data_offset>";
if (!(p = _skip_fields(params, 6))) /* skip pre-1.9.0 params */
goto bad;
if (sscanf(p, "%" PRIu64, &s->data_offset) != 1)
goto bad;
out:
*status = s;
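For context, a small caller-side sketch (the wrapper function and output formatting here are illustrative, not library API) showing how the extended status, including the new data_offset field, might be consumed:

#include <inttypes.h>
#include <stdio.h>
#include "libdevmapper.h"

/*
 * Sketch: report the reshape data offset from a raid target status line.
 * "params" is assumed to hold the status string returned by the kernel;
 * a pre-1.9.0 kernel simply yields data_offset == 0.
 */
static void print_raid_offset(struct dm_pool *mem, const char *params)
{
	struct dm_status_raid *status = NULL;

	if (!dm_get_status_raid(mem, params, &status))
		return;

	printf("health=%s action=%s data_offset=%" PRIu64 "\n",
	       status->dev_health, status->sync_action,
	       status->data_offset);
}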


@@ -45,6 +45,9 @@ MAN8GEN=lvm-config.8 lvm-dumpconfig.8 lvm-fullreport.8 lvm-lvpoll.8 \
vgimport.8 vgimportclone.8 vgmerge.8 vgmknodes.8 vgreduce.8 vgremove.8 \
vgrename.8 vgs.8 vgscan.8 vgsplit.8 \
lvmsar.8 lvmsadc.8 lvmdiskscan.8 lvmchange.8
MAN8DM=dmsetup.8 dmstats.8 dmfilemapd.8
MAN8CLUSTER=
MAN8SYSTEMD_GENERATORS=lvm2-activation-generator.8
ifeq ($(MAKECMDGOALS),all_man)
MAN_ALL="yes"
@@ -144,12 +147,12 @@ Makefile: Makefile.in
man-generator:
$(CC) -DMAN_PAGE_GENERATOR -I$(top_builddir)/tools $(CFLAGS) $(top_srcdir)/tools/command.c -o $@
- ./man-generator lvmconfig > test.gen
- ./man-generator --primary lvmconfig > test.gen
if [ ! -s test.gen ] ; then cp genfiles/*.gen $(top_builddir)/man; fi;
$(MAN8GEN): man-generator
echo "Generating $@" ;
if [ ! -e $@.gen ]; then ./man-generator $(basename $@) $(top_srcdir)/man/$@.des > $@.gen; fi
if [ ! -e $@.gen ]; then ./man-generator --primary $(basename $@) $(top_srcdir)/man/$@.des > $@.gen; ./man-generator --secondary $(basename $@) >> $@.gen; fi
if [ -f $(top_srcdir)/man/$@.end ]; then cat $(top_srcdir)/man/$@.end >> $@.gen; fi;
cat $(top_srcdir)/man/see_also.end >> $@.gen
$(SED) -e "s+#VERSION#+$(LVM_VERSION)+;s+#DEFAULT_SYS_DIR#+$(DEFAULT_SYS_DIR)+;s+#DEFAULT_ARCHIVE_DIR#+$(DEFAULT_ARCHIVE_DIR)+;s+#DEFAULT_BACKUP_DIR#+$(DEFAULT_BACKUP_DIR)+;s+#DEFAULT_PROFILE_DIR#+$(DEFAULT_PROFILE_DIR)+;s+#DEFAULT_CACHE_DIR#+$(DEFAULT_CACHE_DIR)+;s+#DEFAULT_LOCK_DIR#+$(DEFAULT_LOCK_DIR)+;s+#CLVMD_PATH#+@CLVMD_PATH@+;s+#LVM_PATH#+@LVM_PATH@+;s+#DEFAULT_RUN_DIR#+@DEFAULT_RUN_DIR@+;s+#DEFAULT_PID_DIR#+@DEFAULT_PID_DIR@+;s+#SYSTEMD_GENERATOR_DIR#+$(SYSTEMD_GENERATOR_DIR)+;s+#DEFAULT_MANGLING#+$(DEFAULT_MANGLING)+;" $@.gen > $@

man/dmfilemapd.8.in Normal file

@@ -0,0 +1,212 @@
.TH DMFILEMAPD 8 "Dec 17 2016" "Linux" "MAINTENANCE COMMANDS"
.de OPT_FD
. RB [ file_descriptor ]
..
.
.de OPT_GROUP
. RB [ group_id ]
..
.de OPT_PATH
. RB [ path ]
..
.
.de OPT_MODE
. RB [ mode ]
..
.
.de OPT_DEBUG
. RB [ foreground [ verbose ] ]
..
.
.SH NAME
.
dmfilemapd \(em device-mapper filemap monitoring daemon
.
.SH SYNOPSIS
.
.de CMD_DMFILEMAPD
. ad l
. IR dmfilemapd
. OPT_FD
. OPT_GROUP
. OPT_PATH
. OPT_MODE
. OPT_DEBUG
. ad b
..
.CMD_DMFILEMAPD
.
.PD
.ad b
.
.SH DESCRIPTION
.
The dmfilemapd daemon monitors groups of \fIdmstats\fP regions that
correspond to the extents of a file, adding and removing regions to
reflect the changing state of the file on-disk.
The daemon is normally launched automatically by the \fBdmstats
create\fP command, but can be run manually, either to create a new
daemon where one did not previously exist, or to change the options
previously used, by killing the existing daemon and starting a new
one.
.
.SH OPTIONS
.
.HP
.BR file_descriptor
.br
Specify the file descriptor number for the file to be monitored.
The file descriptor must reference a regular file, open for reading,
in a local file system that supports the FIEMAP ioctl, and that
returns data describing the physical location of extents.
The process that executes \fBdmfilemapd\fP is responsible for
opening the file descriptor that is handed to the daemon.
.
.HP
.BR group_id
.br
The \fBdmstats\fP group identifier of the group that \fBdmfilemapd\fP
should update. The group must exist and it should correspond to
a set of regions created by a previous filemap operation.
.
.HP
.BR path
.br
The path to the file being monitored, at the time that it was
opened. The use of \fBpath\fP by the daemon differs, depending
on the filemap following mode in use; see \fBMODES\fP and the
\fBmode\fP option for more information.
.br
.HP
.BR mode
.br
The filemap monitoring mode the daemon should use: either "inode"
(\fBDM_FILEMAPD_FOLLOW_INODE\fP), or "path"
(\fBDM_FILEMAPD_FOLLOW_PATH\fP), to enable follow-inode or
follow-path mode respectively.
.
.HP
.BR [foreground]
.br
If set to 1, disable forking and allow the daemon to run in the
foreground.
.
.HP
.BR [verbose]
Control daemon logging. If set to zero, the daemon will close all
stdio streams and run silently. If \fBverbose\fP is a number
between 1 and 3, stdio will be retained and the daemon will log
messages to stdout and stderr that match the specified verbosity
level.
.
.
.SH MODES
.
The file map monitoring daemon can monitor files in two distinct
ways: the mode affects the behaviour of the daemon when a file
under monitoring is renamed or unlinked, and the conditions which
cause the daemon to terminate.
In both modes, the daemon will always shut down when the group
being monitored is deleted.
.P
.B Follow inode
.P
The daemon follows the inode of the file, as it was at the time the
daemon started. The file descriptor referencing the file is kept
open at all times, and the daemon will exit when it detects that
the file has been unlinked and it is the last holder of a reference
to the file.
This mode is useful if the file is expected to be renamed, or moved
within the file system, while it is being monitored.
.P
.B Follow path
.P
The daemon follows the path that was given on the daemon command
line. The file descriptor referencing the file is re-opened on each
iteration of the daemon, and the daemon will exit if no file exists
at this location (a tolerance is allowed so that a brief delay
between removal and replacement is permitted).
This mode is useful if the file is updated by unlinking the original
and placing a new file at the same path.
.
.SH LIMITATIONS
.
The daemon attempts to maintain good synchronisation between the file
extents and the regions contained in the group; however, since the
daemon can only react to new allocations once they have been written,
there are inevitably some IO events that cannot be counted when a
file is growing, particularly if the file is being extended by a
single thread writing beyond EOF (for example, the \fBdd\fP program).
There is a further loss of events in that there is currently no way
to atomically resize a \fBdmstats\fP region and preserve its current
counter values. This affects files when they grow by extending the
final extent, rather than allocating a new extent: any events that
had accumulated in the region between any prior operation and the
resize are lost.
File mapping is currently most effective in cases where the majority
of IO does not trigger extent allocation. Future updates may address
these limitations when kernel support is available.
.
.SH EXAMPLES
.
Normally the daemon is started automatically by the \fBdmstats\fP
\fBcreate\fP or \fBupdate_filemap\fP commands but it can be run
manually for debugging or testing purposes.
.P
Start the daemon in the background, in follow-path mode
.br
#
.B dmfilemapd 3 0 vm.img path 0 0 3< vm.img
.br
.P
Start the daemon in follow-inode mode, disable forking and enable
verbose logging
.br
#
.B dmfilemapd 3 0 vm.img inode 1 3 3< vm.img
.br
Starting dmfilemapd with fd=3, group_id=0 mode=inode, path=vm.img
.br
dm version [ opencount flush ] [16384] (*1)
.br
dm info (253:0) [ opencount flush ] [16384] (*1)
.br
dm message (253:0) [ opencount flush ] @stats_list dmstats [16384] (*1)
.br
Read alias 'vm.img' from aux_data
.br
Found group_id 0: alias="vm.img"
.br
dm_stats_walk_init: initialised flags to 4000000000000
.br
starting stats walk with GROUP
.br
exiting _filemap_monitor_get_events() with deleted=0, check=0
.br
waiting for FILEMAPD_WAIT
.br
.P
.
.SH AUTHORS
.
Bryn M. Reeves <bmr@redhat.com>
.
.SH SEE ALSO
.
.BR dmstats (8)
LVM2 resource page: https://www.sourceware.org/lvm2/
.br
Device-mapper resource page: http://sources.redhat.com/dm/
.br


@@ -14,6 +14,9 @@
. RB [ \-\-region ]
. RB [ \-\-group ]
..
.de OPT_FOREGROUND
. RB [ \-\-foreground ]
..
.
.\" Print units suffix, use with arg to print human
.\" man2html can't handle too many changes per command
@@ -89,6 +92,10 @@ dmstats \(em device-mapper statistics management
. RB [ \-\-bounds
. IR \%histogram_boundaries ]
. RB [ \-\-filemap ]
. RB [ \-\-follow
. IR follow_mode ]
. OPT_FOREGROUND
. RB [ \-\-nomonitor ]
. RB [ \-\-nogroup ]
. RB [ \-\-precise ]
. RB [ \-\-start
@@ -215,6 +222,9 @@ dmstats \(em device-mapper statistics management
. IR file_path
. RB [ \-\-groupid
. IR id ]
. RB [ \-\-follow
. IR follow_mode ]
. OPT_FOREGROUND
. ad b
..
.CMD_UPDATE_FILEMAP
@@ -314,6 +324,60 @@ create regions corresponding to the locations of the on-disk extents
allocated to the file(s).
.
.HP
.BR \-\-nomonitor
.br
Disable the \fBdmfilemapd\fP daemon when creating new file mapped
groups. Normally the device-mapper filemap monitoring daemon,
\fBdmfilemapd\fP, is started for each file mapped group to update the
set of regions as the file changes on-disk: use of this option
disables this behaviour.
Regions in the group may still be updated with the
\fBupdate_filemap\fP command, or by starting the daemon manually.
.
.HP
.BR \-\-follow
.IR follow_mode
.br
Specify the \fBdmfilemapd\fP file following mode. The file map
monitoring daemon can monitor files in two distinct ways: the mode
affects the behaviour of the daemon when a file under monitoring is
renamed or unlinked, and the conditions which cause the daemon to
terminate.
The \fBfollow_mode\fP argument is either "inode", for follow-inode
mode, or "path", for follow-path.
If follow-inode mode is used, the daemon will hold the file open, and
continue to update regions from the same file descriptor. This means
that the mapping will follow rename, move (within the same file
system), and unlink operations. This mode is useful if the file is
expected to be moved, renamed, or unlinked while it is being
monitored.
In follow-inode mode, the daemon will exit once it detects that the
file has been unlinked and it is the last holder of a reference to it.
If follow-path is used, the daemon will re-open the provided path on
each monitoring iteration. This means that the group will be updated
to reflect a new file being moved to the same path as the original
file. This mode is useful for files that are expected to be updated
via unlink and rename.
In follow-path mode, the daemon will exit if the file is removed and
not replaced within a brief tolerance interval.
In either mode, the daemon exits automatically if the monitored group
is removed.
.
.HP
.BR \-\-foreground
.br
Specify that the \fBdmfilemapd\fP daemon should run in the foreground.
The daemon will not fork into the background, and will replace the
\fBdmstats\fP command that started it.
.
.HP
.BR \-\-groupid
.IR id
.br
@@ -568,6 +632,11 @@ By default regions that map a file are placed into a group and the
group alias is set to the basename of the file. This behaviour can be
overridden with the \fB\-\-alias\fP and \fB\-\-nogroup\fP options.
Creating a group that maps a file automatically starts a daemon,
\fBdmfilemapd\fP, to monitor the file and update the mapping as the
extents allocated to the file change. This behaviour can be disabled
using the \fB\-\-nomonitor\fP option.
Use the \fB\-\-group\fP option to only display information for groups
when listing and reporting.
.
@@ -678,17 +747,23 @@ The group to be removed is specified using \fB\-\-groupid\fP.
.CMD_UPDATE_FILEMAP
.br
Update a group of \fBdmstats\fP regions specified by \fBgroup_id\fP,
that were previously created with \fB\-\-filemap\fP. This will add
and remove regions to reflect changes in the allocated extents of
the file on-disk, since the time that it was crated or last updated.
that were previously created with \fB\-\-filemap\fP, either directly,
or by starting the monitoring daemon, \fBdmfilemapd\fP.
This will add and remove regions to reflect changes in the allocated
extents of the file on-disk, since the time that it was created or last
updated.
Use of this command is not normally needed since the \fBdmfilemapd\fP
daemon will automatically monitor filemap groups and perform these
updates when required.
If a filemapped group was created with \fB\-\-nominitor\fP, or the
If a filemapped group was created with \fB\-\-nomonitor\fP, or the
daemon has been killed, the \fBupdate_filemap\fP can be used to
manually force an update.
manually force an update or start a new daemon.
Use \fB\-\-nomonitor\fP to force a direct update and disable starting
the monitoring daemon.
.
.SH REGIONS, AREAS, AND GROUPS
.
@@ -750,6 +825,93 @@ containing device.
The \fBgroup_id\fP should be treated as an opaque identifier used to
reference the group.
.
.SH FILE MAPPING
.
Using \fB\-\-filemap\fP, it is possible to create regions that
correspond to the extents of a file in the file system. This allows
IO statistics to be monitored on a per-file basis, for example to
observe large database files, virtual machine images, or other files
of interest.
To be able to use file mapping, the file must be backed by a
device-mapper device, and in a file system that supports the FIEMAP
ioctl (and which returns data describing the physical location of
extents). This currently includes \fBxfs(5)\fP and \fBext4(5)\fP.
By default the regions making up a file are placed together in a
group, and the group alias is set to the \fBbasename(3)\fP of the
file. This allows statistics to be reported for the file as a whole,
aggregating values for the regions making up the group. To see only
the whole file (group) when using the \fBlist\fP and \fBreport\fP
commands, use \fB\-\-group\fP.
Since it is possible for the file to change after the initial
group of regions is created, the \fBupdate_filemap\fP command, and
\fBdmfilemapd\fP daemon are provided to update file mapped groups
either manually or automatically.
.
.P
.B File follow modes
.P
The file map monitoring daemon can monitor files in two distinct ways:
follow-inode mode, and follow-path mode.
The mode affects the behaviour of the daemon when a file under
monitoring is renamed or unlinked, and the conditions which cause the
daemon to terminate.
If follow-inode mode is used, the daemon will hold the file open, and
continue to update regions from the same file descriptor. This means
that the mapping will follow rename, move (within the same file
system), and unlink operations. This mode is useful if the file is
expected to be moved, renamed, or unlinked while it is being
monitored.
In follow-inode mode, the daemon will exit once it detects that the
file has been unlinked and it is the last holder of a reference to it.
If follow-path is used, the daemon will re-open the provided path on
each monitoring iteration. This means that the group will be updated
to reflect a new file being moved to the same path as the original
file. This mode is useful for files that are expected to be updated
via unlink and rename.
In follow-path mode, the daemon will exit if the file is removed and
not replaced within a brief tolerance interval (one second).
To stop the daemon, delete the group containing the mapped regions:
the daemon will automatically shut down.
The daemon can also be safely killed at any time and the group kept:
if the file is still being allocated the mapping will become
progressively out-of-date as extents are added and removed (in this
case the daemon can be re-started or the group updated manually with
the \fBupdate_filemap\fP command).
See the \fBcreate\fP command and \fB\-\-filemap\fP, \fB\-\-follow\fP,
and \fB\-\-nomonitor\fP options for further information.
.
.P
.B Limitations
.P
The daemon attempts to maintain good synchronisation between the file
extents and the regions contained in the group; however, since it can
only react to new allocations once they have been written, there are
inevitably some IO events that cannot be counted when a file is
growing, particularly if the file is being extended by a single thread
writing beyond end-of-file (for example, the \fBdd\fP program).
There is a further loss of events in that there is currently no way
to atomically resize a \fBdmstats\fP region and preserve its current
counter values. This affects files when they grow by extending the
final extent, rather than allocating a new extent: any events that
had accumulated in the region between any prior operation and the
resize are lost.
File mapping is currently most effective in cases where the majority
of IO does not trigger extent allocation. Future updates may address
these limitations when kernel support is available.
.
.SH REPORT FIELDS
.
The dmstats report provides several types of field that may be added to


@@ -27,6 +27,39 @@ A command run on a visible LV sometimes operates on a sub LV rather than
the specified LV. In other cases, a sub LV must be specified directly on
the command line.
Striped raid types are
.B raid0/raid0_meta
,
.B raid5
(an alias for raid5_ls),
.B raid6
(an alias for raid6_zr) and
.B raid10
(an alias for raid10_near).
As opposed to mirroring, raid5 and raid6 stripe data and calculate parity
blocks. The parity blocks can be used for data block recovery in case devices
fail. At most one device in a raid5 LV may fail, and up to two in a raid6 LV.
Striped raid types typically rotate the parity blocks for performance
reasons, thus avoiding contention on a single device. Layouts of raid5 rotating
parity blocks can be one of left-asymmetric (raid5_la), left-symmetric (raid5_ls
with alias raid5), right-asymmetric (raid5_ra), right-symmetric (raid5_rs) and raid5_n,
which doesn't rotate parity blocks. The "_n" layouts allow for conversion between
raid levels (raid5_n -> raid6 or raid5_n -> striped/raid0/raid0_meta).
raid6 layouts are zero-restart (raid6_zr with alias raid6), next-restart (raid6_nr),
next-continue (raid6_nc). Additionally, special raid6 layouts for raid level conversions
between raid5 and raid6 are raid6_ls_6, raid6_rs_6, raid6_la_6 and raid6_ra_6. Those
correspond to their raid5 counterparts (e.g. raid5_rs can be directly converted to raid6_rs_6
and vice-versa).
raid10 (an alias for raid10_near) is currently limited to one data copy and an even number of
sub LVs. This is a mirror group layout, thus a single sub LV may fail per mirror group
without data loss.
Striped raid types support converting the layout, their stripesize
and their number of stripes.
The striped raid types combined with raid1 allow for conversion from linear -> striped/raid0/raid0_meta
and vice-versa by e.g. linear <-> raid1 <-> raid5_n (then adding stripes) <-> striped/raid0/raid0_meta.
Sub LVs can be displayed with the command
.B lvs -a


@@ -28,9 +28,9 @@ to improve performance.
.SS Usage notes
In the usage section below, \fB--size\fP \fINumber\fP can be replaced
in each case with \fB--extents\fP \fINumberExtents\fP. Also see both
descriptions the options section.
In the usage section below, \fB--size\fP \fISize\fP can be replaced
with \fB--extents\fP \fINumber\fP. See both descriptions in
the options section.
In the usage section below, \fB--name\fP is omitted from the required
options, even though it is typically used. When the name is not


@@ -1,5 +1,12 @@
lvextend extends the size of an LV. This requires allocating logical
extents from the VG's free physical extents. A copy\-on\-write snapshot LV
can also be extended to provide more space to hold COW blocks. Use
\fBlvconvert\fP(8) to change the number of data images in a RAID or
extents from the VG's free physical extents. If the extension adds a new
LV segment, the new segment will use the existing segment type of the LV.
Extending a copy\-on\-write snapshot LV adds space for COW blocks.
Use \fBlvconvert\fP(8) to change the number of data images in a RAID or
mirrored LV.
In the usage section below, \fB--size\fP \fISize\fP can be replaced
with \fB--extents\fP \fINumber\fP. See both descriptions in
the options section.


@@ -19,6 +19,11 @@ LVM RAID uses both Device Mapper (DM) and Multiple Device (MD) drivers
from the Linux kernel. DM is used to create and manage visible LVM
devices, and MD is used to place data on physical devices.
LVM creates hidden LVs (dm devices) layered between the visible LV and
physical devices. LVs in those middle layers are called sub LVs.
For LVM raid, a sub LV pair to store data and metadata (raid superblock
and bitmap) is created per raid image/leg (see lvs command examples below).
.SH Create a RAID LV
To create a RAID LV, use lvcreate and specify an LV type.
@@ -77,7 +82,7 @@ data that is written to one device before moving to the next.
Also called mirroring, raid1 uses multiple devices to duplicate LV data.
The LV data remains available if all but one of the devices fail.
The minimum number of devices required is 2.
The minimum number of devices (i.e. sub LV pairs) required is 2.
.B lvcreate \-\-type raid1
[\fB\-\-mirrors\fP \fINumber\fP]
@@ -98,8 +103,8 @@ original and one mirror image.
\&
raid4 is a form of striping that uses an extra device dedicated to storing
parity blocks. The LV data remains available if one device fails. The
raid4 is a form of striping that uses an extra, first device dedicated to
storing parity blocks. The LV data remains available if one device fails. The
parity is used to recalculate data that is lost from a single device. The
minimum number of devices required is 3.
@@ -131,10 +136,10 @@ stored on the same device.
\&
raid5 is a form of striping that uses an extra device for storing parity
blocks. LV data and parity blocks are stored on each device. The LV data
remains available if one device fails. The parity is used to recalculate
data that is lost from a single device. The minimum number of devices
required is 3.
blocks. LV data and parity blocks are stored on each device, typically in
a rotating pattern for performance reasons. The LV data remains available
if one device fails. The parity is used to recalculate data that is lost
from a single device. The minimum number of devices required is 3.
.B lvcreate \-\-type raid5
[\fB\-\-stripes\fP \fINumber\fP \fB\-\-stripesize\fP \fISize\fP]
@@ -167,7 +172,8 @@ parity 0 with data restart.) See \fBRAID5 variants\fP below.
\&
raid6 is a form of striping like raid5, but uses two extra devices for
parity blocks. LV data and parity blocks are stored on each device. The
parity blocks. LV data and parity blocks are stored on each device, typically
in a rotating pattern for performance reasons. The
LV data remains available if up to two devices fail. The parity is used
to recalculate data that is lost from one or two devices. The minimum
number of devices required is 5.
@@ -919,7 +925,6 @@ Convert the linear LV to raid1 with three images
# lvconvert --type raid1 --mirrors 2 vg/my_lv
.fi
.ig
4. Converting an LV from \fBstriped\fP (with 4 stripes) to \fBraid6_nc\fP.
.nf
@@ -927,9 +932,9 @@ Start with a striped LV:
# lvcreate --stripes 4 -L64M -n my_lv vg
Convert the striped LV to raid6_nc:
Convert the striped LV to raid6_n_6:
# lvconvert --type raid6_nc vg/my_lv
# lvconvert --type raid6 vg/my_lv
# lvs -a -o lv_name,segtype,sync_percent,data_copies
LV Type Cpy%Sync #Cpy
@@ -954,14 +959,12 @@ existing stripe devices. It then creates 2 additional MetaLV/DataLV pairs
If rotating data/parity is required, such as with raid6_nr, it must be
done by reshaping (see below).
..
.SH RAID Reshaping
RAID reshaping is changing attributes of a RAID LV while keeping the same
RAID level, i.e. changes that do not involve changing the number of
devices. This includes changing RAID layout, stripe size, or number of
RAID level. This includes changing RAID layout, stripe size, or number of
stripes.
When changing the RAID layout or stripe size, no new SubLVs (MetaLVs or
@@ -975,15 +978,12 @@ partially updated and corrupted. Instead, an existing stripe is quiesced,
read, changed in layout, and the new stripe written to free space. Once
that is done, the new stripe is unquiesced and used.)
(The reshaping features are planned for a future release.)
.ig
.SS Examples
1. Converting raid6_n_6 to raid6_nr with rotating data/parity.
This conversion naturally follows a previous conversion from striped to
raid6_n_6 (shown above). It completes the transition to a more
This conversion naturally follows a previous conversion from striped/raid0
to raid6_n_6 (shown above). It completes the transition to a more
traditional RAID6.
.nf
@@ -1029,15 +1029,13 @@ traditional RAID6.
The DataLVs are larger (additional segment in each) which provides space
for out-of-place reshaping. The result is:
FIXME: did the lv name change from my_lv to r?
.br
FIXME: should we change device names in the example to sda,sdb,sdc?
.br
FIXME: include -o devices or seg_pe_ranges above also?
.nf
# lvs -a -o lv_name,segtype,seg_pe_ranges,dataoffset
LV Type PE Ranges data
LV Type PE Ranges Doff
r raid6_nr r_rimage_0:0-32 \\
r_rimage_1:0-32 \\
r_rimage_2:0-32 \\
@@ -1093,19 +1091,15 @@ RAID5 right asymmetric
\[bu]
Rotating parity 0 with data continuation
.ig
raid5_n
.br
\[bu]
RAID5 striping
RAID5 parity n
.br
\[bu]
Same layout as raid4 with a dedicated parity N with striped data.
.br
Dedicated parity device n used for striped/raid0 conversions
\[bu]
Used for
.B RAID Takeover
..
Used for RAID Takeover
.SH RAID6 Variants
@@ -1144,7 +1138,24 @@ RAID6 N continue
\[bu]
Rotating parity N with data continuation
.ig
raid6_n_6
.br
\[bu]
RAID6 last parity devices
.br
\[bu]
Dedicated last parity devices used for striped/raid0 conversions
\[bu]
Used for RAID Takeover
raid6_{ls,rs,la,ra}_6
.br
\[bu]
RAID6 last parity device
.br
\[bu]
Dedicated last parity device used for conversions from/to raid5_{ls,rs,la,ra}
raid6_n_6
.br
\[bu]
@@ -1154,8 +1165,7 @@ RAID6 N continue
Fixed P-Syndrome N-1 and Q-Syndrome N with striped data
.br
\[bu]
Used for
.B RAID Takeover
Used for RAID Takeover
raid6_ls_6
.br
@@ -1166,8 +1176,7 @@ RAID6 N continue
Same as raid5_ls for N-1 disks with fixed Q-Syndrome N
.br
\[bu]
Used for
.B RAID Takeover
Used for RAID Takeover
raid6_la_6
.br
@@ -1178,8 +1187,7 @@ RAID6 N continue
Same as raid5_la for N-1 disks with fixed Q-Syndrome N
.br
\[bu]
Used for
.B RAID Takeover
Used for RAID Takeover
raid6_rs_6
.br
@@ -1190,8 +1198,7 @@ RAID6 N continue
Same as raid5_rs for N-1 disks with fixed Q-Syndrome N
.br
\[bu]
Used for
.B RAID Takeover
Used for RAID Takeover
raid6_ra_6
.br
@@ -1202,9 +1209,7 @@ RAID6 N continue
Same as raid5_ra for N-1 disks with fixed Q-Syndrome N
.br
\[bu]
Used for
.B RAID Takeover
..
Used for RAID Takeover
.ig


@@ -12,3 +12,8 @@ system.
Sizes will be rounded if necessary. For example, the LV size must be an
exact number of extents, and the size of a striped segment must be a
multiple of the number of stripes.
In the usage section below, \fB--size\fP \fISize\fP can be replaced
with \fB--extents\fP \fINumber\fP. See both descriptions in
the options section.


@@ -1,2 +1,7 @@
lvresize resizes an LV in the same way as lvextend and lvreduce. See
\fBlvextend\fP(8) and \fBlvreduce\fP(8) for more information.
In the usage section below, \fB--size\fP \fISize\fP can be replaced
with \fB--extents\fP \fINumber\fP. See both descriptions in
the options section.


@@ -56,6 +56,7 @@ Inconsistencies are detected by initiating a "check" on a RAID logical volume.
(The scrubbing operations, "check" and "repair", can be performed on a RAID
logical volume via the 'lvchange' command.) (w)ritemostly signifies the
devices in a RAID 1 logical volume that have been marked write-mostly.
(R)emove after reshape signifies freed striped raid images to be removed.
.IP
Related to Thin pool Logical Volumes: (F)ailed, out of (D)ata space,
(M)etadata read only.


@@ -198,6 +198,9 @@ class TestDbusService(unittest.TestCase):
self.objs[MANAGER_INT][0].Manager.PvCreate(
dbus.String(device), dbus.Int32(g_tmo), EOD)
)
self._validate_lookup(device, pv_path)
self.assertTrue(pv_path is not None and len(pv_path) > 0)
return pv_path
@@ -229,6 +232,7 @@ class TestDbusService(unittest.TestCase):
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(vg_name, vg_path)
self.assertTrue(vg_path is not None and len(vg_path) > 0)
return ClientProxy(self.bus, vg_path, interfaces=(VG_INT, ))
@@ -263,6 +267,9 @@ class TestDbusService(unittest.TestCase):
def _create_raid5_thin_pool(self, vg=None):
meta_name = "meta_r5"
data_name = "data_r5"
if not vg:
pv_paths = []
for pp in self.objs[PV_INT]:
@@ -272,7 +279,7 @@ class TestDbusService(unittest.TestCase):
lv_meta_path = self.handle_return(
vg.LvCreateRaid(
dbus.String("meta_r5"),
dbus.String(meta_name),
dbus.String("raid5"),
dbus.UInt64(mib(4)),
dbus.UInt32(0),
@@ -280,10 +287,11 @@ class TestDbusService(unittest.TestCase):
dbus.Int32(g_tmo),
EOD)
)
self._validate_lookup("%s/%s" % (vg.Name, meta_name), lv_meta_path)
lv_data_path = self.handle_return(
vg.LvCreateRaid(
dbus.String("data_r5"),
dbus.String(data_name),
dbus.String("raid5"),
dbus.UInt64(mib(16)),
dbus.UInt32(0),
@@ -292,6 +300,8 @@ class TestDbusService(unittest.TestCase):
EOD)
)
self._validate_lookup("%s/%s" % (vg.Name, data_name), lv_data_path)
thin_pool_path = self.handle_return(
vg.CreateThinPool(
dbus.ObjectPath(lv_meta_path),
@@ -339,7 +349,13 @@ class TestDbusService(unittest.TestCase):
self.assertTrue(cached_thin_pool_object.ThinPool.MetaDataLv != '/')
def _lookup(self, lvm_id):
return self.objs[MANAGER_INT][0].Manager.LookUpByLvmId(lvm_id)
return self.objs[MANAGER_INT][0].\
Manager.LookUpByLvmId(dbus.String(lvm_id))
def _validate_lookup(self, lvm_name, object_path):
t = self._lookup(lvm_name)
self.assertTrue(
object_path == t, "%s != %s for %s" % (object_path, t, lvm_name))
def test_lookup_by_lvm_id(self):
# For the moment lets just lookup what we know about which is PVs
@@ -392,10 +408,8 @@ class TestDbusService(unittest.TestCase):
def test_vg_rename(self):
vg = self._vg_create().Vg
mgr = self.objs[MANAGER_INT][0].Manager
# Do a vg lookup
path = mgr.LookUpByLvmId(dbus.String(vg.Name))
path = self._lookup(vg.Name)
vg_name_start = vg.Name
@@ -406,7 +420,7 @@ class TestDbusService(unittest.TestCase):
for i in range(0, 5):
lv_t = self._create_lv(size=mib(4), vg=vg)
full_name = "%s/%s" % (vg_name_start, lv_t.LvCommon.Name)
lv_path = mgr.LookUpByLvmId(dbus.String(full_name))
lv_path = self._lookup(full_name)
self.assertTrue(lv_path == lv_t.object_path)
new_name = 'renamed_' + vg.Name
@@ -417,7 +431,7 @@ class TestDbusService(unittest.TestCase):
self._check_consistency()
# Do a vg lookup
path = mgr.LookUpByLvmId(dbus.String(new_name))
path = self._lookup(new_name)
self.assertTrue(path != '/', "%s" % (path))
self.assertTrue(prev_path == path, "%s != %s" % (prev_path, path))
@@ -435,14 +449,12 @@ class TestDbusService(unittest.TestCase):
lv_proxy.Vg == vg.object_path, "%s != %s" %
(lv_proxy.Vg, vg.object_path))
full_name = "%s/%s" % (new_name, lv_proxy.Name)
lv_path = mgr.LookUpByLvmId(dbus.String(full_name))
lv_path = self._lookup(full_name)
self.assertTrue(
lv_path == lv_proxy.object_path, "%s != %s" %
(lv_path, lv_proxy.object_path))
def _verify_hidden_lookups(self, lv_common_object, vgname):
mgr = self.objs[MANAGER_INT][0].Manager
hidden_lv_paths = lv_common_object.HiddenLvs
for h in hidden_lv_paths:
@@ -454,7 +466,7 @@ class TestDbusService(unittest.TestCase):
full_name = "%s/%s" % (vgname, h_lv.Name)
# print("Hidden check %s" % (full_name))
lookup_path = mgr.LookUpByLvmId(dbus.String(full_name))
lookup_path = self._lookup(full_name)
self.assertTrue(lookup_path != '/')
self.assertTrue(lookup_path == h_lv.object_path)
@@ -462,7 +474,7 @@ class TestDbusService(unittest.TestCase):
full_name = "%s/%s" % (vgname, h_lv.Name[1:-1])
# print("Hidden check %s" % (full_name))
lookup_path = mgr.LookUpByLvmId(dbus.String(full_name))
lookup_path = self._lookup(full_name)
self.assertTrue(lookup_path != '/')
self.assertTrue(lookup_path == h_lv.object_path)
@@ -471,7 +483,6 @@ class TestDbusService(unittest.TestCase):
(vg, thin_pool) = self._create_raid5_thin_pool()
vg_name_start = vg.Name
mgr = self.objs[MANAGER_INT][0].Manager
# noinspection PyTypeChecker
self._verify_hidden_lookups(thin_pool.LvCommon, vg_name_start)
@@ -486,11 +497,14 @@ class TestDbusService(unittest.TestCase):
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(
"%s/%s" % (vg_name_start, lv_name), thin_lv_path)
self.assertTrue(thin_lv_path != '/')
full_name = "%s/%s" % (vg_name_start, lv_name)
lookup_lv_path = mgr.LookUpByLvmId(dbus.String(full_name))
lookup_lv_path = self._lookup(full_name)
self.assertTrue(
thin_lv_path == lookup_lv_path,
"%s != %s" % (thin_lv_path, lookup_lv_path))
@@ -518,7 +532,7 @@ class TestDbusService(unittest.TestCase):
(lv_proxy.Vg, vg.object_path))
full_name = "%s/%s" % (new_name, lv_proxy.Name)
# print('Full Name %s' % (full_name))
lv_path = mgr.LookUpByLvmId(dbus.String(full_name))
lv_path = self._lookup(full_name)
self.assertTrue(
lv_path == lv_proxy.object_path, "%s != %s" %
(lv_path, lv_proxy.object_path))
@@ -543,75 +557,88 @@ class TestDbusService(unittest.TestCase):
return lv
def test_lv_create(self):
lv_name = lv_n()
vg = self._vg_create().Vg
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreate,
(dbus.String(lv_n()), dbus.UInt64(mib(4)),
(dbus.String(lv_name), dbus.UInt64(mib(4)),
dbus.Array([], signature='(ott)'), dbus.Int32(g_tmo),
EOD), vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def test_lv_create_job(self):
lv_name = lv_n()
vg = self._vg_create().Vg
(object_path, job_path) = vg.LvCreate(
dbus.String(lv_n()), dbus.UInt64(mib(4)),
dbus.String(lv_name), dbus.UInt64(mib(4)),
dbus.Array([], signature='(ott)'), dbus.Int32(0),
EOD)
self.assertTrue(object_path == '/')
self.assertTrue(job_path != '/')
object_path = self._wait_for_job(job_path)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), object_path)
self.assertTrue(object_path != '/')
def test_lv_create_linear(self):
lv_name = lv_n()
vg = self._vg_create().Vg
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreateLinear,
(dbus.String(lv_n()), dbus.UInt64(mib(4)), dbus.Boolean(False),
(dbus.String(lv_name), dbus.UInt64(mib(4)), dbus.Boolean(False),
dbus.Int32(g_tmo), EOD),
vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def test_lv_create_striped(self):
lv_name = lv_n()
pv_paths = []
for pp in self.objs[PV_INT]:
pv_paths.append(pp.object_path)
vg = self._vg_create(pv_paths).Vg
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreateStriped,
(dbus.String(lv_n()), dbus.UInt64(mib(4)),
(dbus.String(lv_name), dbus.UInt64(mib(4)),
dbus.UInt32(2), dbus.UInt32(8), dbus.Boolean(False),
dbus.Int32(g_tmo), EOD),
vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def test_lv_create_mirror(self):
lv_name = lv_n()
pv_paths = []
for pp in self.objs[PV_INT]:
pv_paths.append(pp.object_path)
vg = self._vg_create(pv_paths).Vg
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreateMirror,
(dbus.String(lv_n()), dbus.UInt64(mib(4)), dbus.UInt32(2),
(dbus.String(lv_name), dbus.UInt64(mib(4)), dbus.UInt32(2),
dbus.Int32(g_tmo), EOD), vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def test_lv_create_raid(self):
lv_name = lv_n()
pv_paths = []
for pp in self.objs[PV_INT]:
pv_paths.append(pp.object_path)
vg = self._vg_create(pv_paths).Vg
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreateRaid,
(dbus.String(lv_n()), dbus.String('raid5'), dbus.UInt64(mib(16)),
(dbus.String(lv_name), dbus.String('raid5'), dbus.UInt64(mib(16)),
dbus.UInt32(2), dbus.UInt32(8), dbus.Int32(g_tmo),
EOD),
vg,
LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def _create_lv(self, thinpool=False, size=None, vg=None):
lv_name = lv_n()
interfaces = list(LV_BASE_INT)
if thinpool:
@@ -627,12 +654,15 @@ class TestDbusService(unittest.TestCase):
if size is None:
size = mib(4)
return self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreateLinear,
(dbus.String(lv_n()), dbus.UInt64(size),
(dbus.String(lv_name), dbus.UInt64(size),
dbus.Boolean(thinpool), dbus.Int32(g_tmo), EOD),
vg, interfaces)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
return lv
def test_lv_create_rounding(self):
self._create_lv(size=(mib(2) + 13))
@@ -643,7 +673,7 @@ class TestDbusService(unittest.TestCase):
# Rename a regular LV
lv = self._create_lv()
path = self.objs[MANAGER_INT][0].Manager.LookUpByLvmId(lv.LvCommon.Name)
path = self._lookup(lv.LvCommon.Name)
prev_path = path
new_name = 'renamed_' + lv.LvCommon.Name
@@ -651,8 +681,7 @@ class TestDbusService(unittest.TestCase):
self.handle_return(lv.Lv.Rename(dbus.String(new_name),
dbus.Int32(g_tmo), EOD))
path = self.objs[MANAGER_INT][0].Manager.LookUpByLvmId(
dbus.String(new_name))
path = self._lookup(new_name)
self._check_consistency()
self.assertTrue(prev_path == path, "%s != %s" % (prev_path, path))
@@ -677,26 +706,32 @@ class TestDbusService(unittest.TestCase):
# This returns a LV with the LV interface, need to get a proxy for
# thinpool interface too
tp = self._create_lv(True)
vg = self._vg_create().Vg
tp = self._create_lv(thinpool=True, vg=vg)
lv_name = lv_n('_thin_lv')
thin_path = self.handle_return(
tp.ThinPool.LvCreate(
dbus.String(lv_n('_thin_lv')),
dbus.String(lv_name),
dbus.UInt64(mib(8)),
dbus.Int32(g_tmo),
EOD)
)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), thin_path)
lv = ClientProxy(self.bus, thin_path,
interfaces=(LV_COMMON_INT, LV_INT))
re_named = 'rename_test' + lv.LvCommon.Name
rc = self.handle_return(
lv.Lv.Rename(
dbus.String('rename_test' + lv.LvCommon.Name),
dbus.String(re_named),
dbus.Int32(g_tmo),
EOD)
)
self._validate_lookup("%s/%s" % (vg.Name, re_named), thin_path)
self.assertTrue(rc == '/')
self._check_consistency()
@@ -748,18 +783,18 @@ class TestDbusService(unittest.TestCase):
def test_lv_create_pv_specific(self):
vg = self._vg_create().Vg
lv_name = lv_n()
pv = vg.Pvs
pvp = ClientProxy(self.bus, pv[0], interfaces=(PV_INT,))
self._test_lv_create(
lv = self._test_lv_create(
vg.LvCreate, (
dbus.String(lv_n()),
dbus.String(lv_name),
dbus.UInt64(mib(4)),
dbus.Array([[pvp.object_path, 0, (pvp.Pv.PeCount - 1)]],
signature='(ott)'),
dbus.Int32(g_tmo), EOD), vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def test_lv_resize(self):
@@ -930,7 +965,8 @@ class TestDbusService(unittest.TestCase):
self.assertTrue(vg_path == '/')
self.assertTrue(vg_job and len(vg_job) > 0)
self._wait_for_job(vg_job)
vg_path = self._wait_for_job(vg_job)
self._validate_lookup(vg_name, vg_path)
def _test_expired_timer(self, num_lvs):
rc = False
@@ -945,17 +981,20 @@ class TestDbusService(unittest.TestCase):
vg_proxy = self._vg_create(pv_paths)
for i in range(0, num_lvs):
lv_name = lv_n()
vg_proxy.update()
if vg_proxy.Vg.FreeCount > 0:
job = self.handle_return(
lv_path = self.handle_return(
vg_proxy.Vg.LvCreateLinear(
dbus.String(lv_n()),
dbus.String(lv_name),
dbus.UInt64(mib(4)),
dbus.Boolean(False),
dbus.Int32(g_tmo),
EOD))
self.assertTrue(job != '/')
self.assertTrue(lv_path != '/')
self._validate_lookup(
"%s/%s" % (vg_proxy.Vg.Name, lv_name), lv_path)
else:
# We ran out of space, test will probably fail
break
@@ -1064,15 +1103,18 @@ class TestDbusService(unittest.TestCase):
def test_lv_tags(self):
vg = self._vg_create().Vg
lv_name = lv_n()
lv = self._test_lv_create(
vg.LvCreateLinear,
(dbus.String(lv_n()),
(dbus.String(lv_name),
dbus.UInt64(mib(4)),
dbus.Boolean(False),
dbus.Int32(g_tmo),
EOD),
vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
t = ['Testing', 'tags']
self.handle_return(
@@ -1148,15 +1190,18 @@ class TestDbusService(unittest.TestCase):
def test_vg_activate_deactivate(self):
vg = self._vg_create().Vg
self._test_lv_create(
lv_name = lv_n()
lv = self._test_lv_create(
vg.LvCreateLinear, (
dbus.String(lv_n()),
dbus.String(lv_name),
dbus.UInt64(mib(4)),
dbus.Boolean(False),
dbus.Int32(g_tmo),
EOD),
vg, LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
vg.update()
rc = self.handle_return(
@@ -1361,15 +1406,19 @@ class TestDbusService(unittest.TestCase):
def test_snapshot_merge_thin(self):
# Create a thin LV, snapshot it and merge it
tp = self._create_lv(True)
vg = self._vg_create().Vg
tp = self._create_lv(thinpool=True, vg=vg)
lv_name = lv_n('_thin_lv')
thin_path = self.handle_return(
tp.ThinPool.LvCreate(
dbus.String(lv_n('_thin_lv')),
dbus.String(lv_name),
dbus.UInt64(mib(10)),
dbus.Int32(g_tmo),
EOD))
self._validate_lookup("%s/%s" % (vg.Name, lv_name), thin_path)
lv_p = ClientProxy(self.bus, thin_path,
interfaces=(LV_INT, LV_COMMON_INT))
@@ -1512,12 +1561,14 @@ class TestDbusService(unittest.TestCase):
EOD))
# Create a VG and try to create LVs with different bad names
vg_name = vg_n()
vg_path = self.handle_return(
mgr.VgCreate(
dbus.String(vg_n()),
dbus.String(vg_name),
dbus.Array(pv_paths, 'o'),
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(vg_name, vg_path)
vg_proxy = ClientProxy(self.bus, vg_path, interfaces=(VG_INT, ))
@@ -1563,13 +1614,16 @@ class TestDbusService(unittest.TestCase):
def test_invalid_tags(self):
mgr = self.objs[MANAGER_INT][0].Manager
pv_paths = [self.objs[PV_INT][0].object_path]
vg_name = vg_n()
vg_path = self.handle_return(
mgr.VgCreate(
dbus.String(vg_n()),
dbus.String(vg_name),
dbus.Array(pv_paths, 'o'),
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(vg_name, vg_path)
vg_proxy = ClientProxy(self.bus, vg_path, interfaces=(VG_INT, ))
for c in self._invalid_tag_characters():
@@ -1591,13 +1645,15 @@ class TestDbusService(unittest.TestCase):
def test_tag_names(self):
mgr = self.objs[MANAGER_INT][0].Manager
pv_paths = [self.objs[PV_INT][0].object_path]
vg_name = vg_n()
vg_path = self.handle_return(
mgr.VgCreate(
dbus.String(vg_n()),
dbus.String(vg_name),
dbus.Array(pv_paths, 'o'),
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(vg_name, vg_path)
vg_proxy = ClientProxy(self.bus, vg_path, interfaces=(VG_INT, ))
for i in range(1, 64):
@@ -1622,13 +1678,15 @@ class TestDbusService(unittest.TestCase):
def test_tag_regression(self):
mgr = self.objs[MANAGER_INT][0].Manager
pv_paths = [self.objs[PV_INT][0].object_path]
vg_name = vg_n()
vg_path = self.handle_return(
mgr.VgCreate(
dbus.String(vg_n()),
dbus.String(vg_name),
dbus.Array(pv_paths, 'o'),
dbus.Int32(g_tmo),
EOD))
self._validate_lookup(vg_name, vg_path)
vg_proxy = ClientProxy(self.bus, vg_path, interfaces=(VG_INT, ))
tag = '--h/K.6g0A4FOEatf3+k_nI/Yp&L_u2oy-=j649x:+dUcYWPEo6.IWT0c'


@@ -1317,7 +1317,7 @@ udev_wait() {
wait_for_sync() {
local i
for i in {1..100} ; do
check in_sync $1 $2 && return
check in_sync $1 $2 $3 && return
sleep .2
done


@@ -178,7 +178,7 @@ linear() {
$(lvl $lv -o+devices)
}
# in_sync <VG> <LV>
# in_sync <VG> <LV> <ignore 'a'>
# Works for "mirror" and "raid*"
in_sync() {
local a
@@ -187,8 +187,11 @@ in_sync() {
local type
local snap=""
local lvm_name="$1/$2"
local ignore_a="$3"
local dm_name=$(echo $lvm_name | sed s:-:--: | sed s:/:-:)
[ -z "$ignore_a" ] && ignore_a=0
a=( $(dmsetup status $dm_name) ) || \
die "Unable to get sync status of $1"
@@ -225,7 +228,7 @@ in_sync() {
return 1
fi
[[ ${a[$(($idx - 1))]} =~ a ]] && \
[[ ${a[$(($idx - 1))]} =~ a ]] && [ $ignore_a -eq 0 ] && \
die "$lvm_name ($type$snap) in-sync, but 'a' characters in health status"
echo "$lvm_name ($type$snap) is in-sync \"${a[@]}\""
@@ -310,6 +313,12 @@ lv_field() {
die "lv_field: lv=$1, field=\"$2\", actual=\"$actual\", expected=\"$3\""
}
lv_first_seg_field() {
local actual=$(get lv_first_seg_field "$1" "$2" "${@:4}")
test "$actual" = "$3" || \
die "lv_field: lv=$1, field=\"$2\", actual=\"$actual\", expected=\"$3\""
}
lvh_field() {
local actual=$(get lvh_field "$1" "$2" "${@:4}")
test "$actual" = "$3" || \

View File

@@ -42,6 +42,11 @@ lv_field() {
trim_ "$r"
}
lv_first_seg_field() {
local r=$(lvs --config 'log{prefix=""}' --noheadings -o "$2" "${@:3}" "$1" | head -1)
trim_ "$r"
}
lvh_field() {
local r=$(lvs -H --config 'log{prefix=""}' --noheadings -o "$2" "${@:3}" "$1")
trim_ "$r"


@@ -188,7 +188,7 @@ run_syncaction_check() {
# 'lvs' should show results
lvchange --syncaction check $vg/$lv
aux wait_for_sync $vg $lv
check lv_attr_bit health $vg/$lv "-"
check lv_attr_bit health $vg/$lv "-" || check lv_attr_bit health $vg/$lv "m"
check lv_field $vg/$lv raid_mismatch_count "0"
}


@@ -21,14 +21,14 @@ aux prepare_vg 4
for d in $dev1 $dev2 $dev3 $dev4
do
aux delay_dev $d 1
aux delay_dev $d 1 1
done
#
# Test writemostly prohibited on resyncrhonizing raid1
# Test writemostly prohibited on resynchronizing raid1
#
# Create 4-way striped LV
# Create 4-way raid1 LV
lvcreate -aey --ty raid1 -m 3 -L 32M -n $lv1 $vg
not lvchange -y --writemostly $dev1 $vg/$lv1
check lv_field $vg/$lv1 segtype "raid1"


@@ -0,0 +1,68 @@
#!/bin/sh
# Copyright (C) 2017 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
SKIP_WITH_LVMLOCKD=1
SKIP_WITH_LVMPOLLD=1
. lib/inittest
which mkfs.ext4 || skip
aux have_raid 1 10 2 || skip
aux prepare_vg 5
#
# Test single step linear -> striped conversion
#
# Create linear LV
lvcreate -aey -L 16M -n $lv1 $vg
check lv_field $vg/$lv1 segtype "linear"
check lv_field $vg/$lv1 stripes 1
check lv_field $vg/$lv1 data_stripes 1
echo y|mkfs -t ext4 $DM_DEV_DIR/$vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Convert linear -> raid1
lvconvert -y -m 1 $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_field $vg/$lv1 segtype "raid1"
check lv_field $vg/$lv1 stripes 2
check lv_field $vg/$lv1 data_stripes 2
check lv_field $vg/$lv1 regionsize "512.00k"
aux wait_for_sync $vg $lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Convert raid1 -> raid5_n
lvconvert -y --ty raid5_n $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_field $vg/$lv1 segtype "raid5_n"
check lv_field $vg/$lv1 stripes 2
check lv_field $vg/$lv1 data_stripes 1
check lv_field $vg/$lv1 stripesize "64.00k"
check lv_field $vg/$lv1 regionsize "512.00k"
# Convert raid5_n adding stripes
lvconvert -y --stripes 4 $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_first_seg_field $vg/$lv1 segtype "raid5_n"
check lv_first_seg_field $vg/$lv1 data_stripes 4
check lv_first_seg_field $vg/$lv1 stripes 5
check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
check lv_first_seg_field $vg/$lv1 regionsize "512.00k"
aux wait_for_sync $vg $lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Convert raid5_n -> striped
lvconvert -y --type striped $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
vgremove -ff $vg


@@ -0,0 +1,106 @@
#!/bin/sh
# Copyright (C) 2017 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
SKIP_WITH_LVMLOCKD=1
SKIP_WITH_LVMPOLLD=1
. lib/inittest
which mkfs.ext4 || skip
aux have_raid 1 10 2 || skip
aux prepare_vg 5
#
# Test single step linear -> striped conversion
#
# Create 4-way striped LV
lvcreate -aey -i 4 -I 32k -L 16M -n $lv1 $vg
check lv_field $vg/$lv1 segtype "striped"
check lv_field $vg/$lv1 data_stripes 4
check lv_field $vg/$lv1 stripes 4
check lv_field $vg/$lv1 stripesize "32.00k"
check lv_field $vg/$lv1 reshape_len ""
echo y|mkfs -t ext4 $DM_DEV_DIR/$vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Convert striped -> raid5(_n)
lvconvert -y --ty raid5 -R 128k $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_field $vg/$lv1 segtype "raid5_n"
check lv_field $vg/$lv1 data_stripes 4
check lv_field $vg/$lv1 stripes 5
check lv_field $vg/$lv1 stripesize "32.00k"
check lv_field $vg/$lv1 regionsize "128.00k"
check lv_field $vg/$lv1 reshape_len ""
aux wait_for_sync $vg $lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Extend raid5_n LV by a factor of 4 to keep its size once converted to linear
lvresize -y -L 64 $vg/$lv1
check lv_field $vg/$lv1 segtype "raid5_n"
check lv_field $vg/$lv1 data_stripes 4
check lv_field $vg/$lv1 stripes 5
check lv_field $vg/$lv1 stripesize "32.00k"
check lv_field $vg/$lv1 regionsize "128.00k"
check lv_field $vg/$lv1 reshape_len ""
aux wait_for_sync $vg $lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Convert raid5_n LV to 1 stripe (2 legs total),
# 64k stripesize and 1024k regionsize
# FIXME: "--type" superfluous (cli fix needed)
lvconvert -y -f --ty raid5_n --stripes 1 -I 64k -R 1024k $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_first_seg_field $vg/$lv1 segtype "raid5_n"
check lv_first_seg_field $vg/$lv1 data_stripes 1
check lv_first_seg_field $vg/$lv1 stripes 5
check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
check lv_first_seg_field $vg/$lv1 regionsize "1.00m"
check lv_first_seg_field $vg/$lv1 reshape_len 10
# for slv in {0..4}
# do
# check lv_first_seg_field $vg/${lv1}_rimage_${slv} reshape_len 2
# done
aux wait_for_sync $vg $lv1 1
fsck -fn $DM_DEV_DIR/$vg/$lv1
# Remove the now freed legs
lvconvert --stripes 1 $vg/$lv1
check lv_first_seg_field $vg/$lv1 segtype "raid5_n"
check lv_first_seg_field $vg/$lv1 data_stripes 1
check lv_first_seg_field $vg/$lv1 stripes 2
check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
check lv_first_seg_field $vg/$lv1 regionsize "1.00m"
check lv_first_seg_field $vg/$lv1 reshape_len 4
# Convert raid5_n to raid1
lvconvert -y --type raid1 $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_first_seg_field $vg/$lv1 segtype "raid1"
check lv_first_seg_field $vg/$lv1 data_stripes 2
check lv_first_seg_field $vg/$lv1 stripes 2
check lv_first_seg_field $vg/$lv1 stripesize "0"
check lv_first_seg_field $vg/$lv1 regionsize "1.00m"
check lv_first_seg_field $vg/$lv1 reshape_len ""
# Convert raid1 -> linear
lvconvert -y --type linear $vg/$lv1
fsck -fn $DM_DEV_DIR/$vg/$lv1
check lv_first_seg_field $vg/$lv1 segtype "linear"
check lv_first_seg_field $vg/$lv1 data_stripes 1
check lv_first_seg_field $vg/$lv1 stripes 1
check lv_first_seg_field $vg/$lv1 stripesize "0"
check lv_first_seg_field $vg/$lv1 regionsize "0"
check lv_first_seg_field $vg/$lv1 reshape_len ""
vgremove -ff $vg


@@ -0,0 +1,207 @@
#!/bin/sh
# Copyright (C) 2017 Red Hat, Inc. All rights reserved.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
SKIP_WITH_LVMLOCKD=1
SKIP_WITH_LVMPOLLD=1
. lib/inittest
which mkfs.ext4 || skip
aux have_raid 1 10 2 || skip
aux prepare_pvs 65 64
vgcreate -s 1M $vg $(cat DEVICES)
function _lvcreate
{
local level=$1
local req_stripes=$2
local stripes=$3
local size=$4
local vg=$5
local lv=$6
lvcreate -y -aey --type $level -i $req_stripes -L $size -n $lv $vg
check lv_first_seg_field $vg/$lv segtype "$level"
check lv_first_seg_field $vg/$lv data_stripes $req_stripes
check lv_first_seg_field $vg/$lv stripes $stripes
mkfs.ext4 "$DM_DEV_DIR/$vg/$lv"
fsck -fn "$DM_DEV_DIR/$vg/$lv"
}
function _lvconvert
{
local req_level=$1
local level=$2
local data_stripes=$3
local stripes=$4
local vg=$5
local lv=$6
local region_size=$7
local wait_and_check=1
local R=""
[ -n "$region_size" ] && R="-R $region_size"
[ "${level:0:7}" = "striped" ] && wait_and_check=0
[ "${level:0:5}" = "raid0" ] && wait_and_check=0
lvconvert -y --ty $req_level $R $vg/$lv
[ $? -ne 0 ] && return $?
check lv_first_seg_field $vg/$lv segtype "$level"
check lv_first_seg_field $vg/$lv data_stripes $data_stripes
check lv_first_seg_field $vg/$lv stripes $stripes
[ -n "$region_size" ] && check lv_field $vg/$lv regionsize $region_size
if [ "$wait_and_check" -eq 1 ]
then
fsck -fn "$DM_DEV_DIR/$vg/$lv"
aux wait_for_sync $vg $lv
fi
fsck -fn "$DM_DEV_DIR/$vg/$lv"
}
function _reshape_layout
{
local type=$1
shift
local data_stripes=$1
shift
local stripes=$1
shift
local vg=$1
shift
local lv=$1
shift
local opts="$*"
local ignore_a_chars=0
[[ "$opts" =~ "--stripes" ]] && ignore_a_chars=1
lvconvert -vvvv -y --ty $type $opts $vg/$lv
check lv_first_seg_field $vg/$lv segtype "$type"
check lv_first_seg_field $vg/$lv data_stripes $data_stripes
check lv_first_seg_field $vg/$lv stripes $stripes
aux wait_for_sync $vg $lv $ignore_a_chars
fsck -fn "$DM_DEV_DIR/$vg/$lv"
}
# Delay leg so that rebuilding status characters
# can be read before resync finishes too quickly.
# aux delay_dev "$dev1" 1
#
# Start out with raid5(_ls)
#
# Create 3-way striped raid5 (4 legs total)
_lvcreate raid5_ls 3 4 16M $vg $lv1
check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
aux wait_for_sync $vg $lv1
# Reshape it to 256K stripe size
_reshape_layout raid5_ls 3 4 $vg $lv1 --stripesize 256K
check lv_first_seg_field $vg/$lv1 stripesize "256.00k"
# Convert raid5(_n) -> striped
not _lvconvert striped striped 3 3 $vg $lv1 512k
_reshape_layout raid5_n 3 4 $vg $lv1
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid5_n
_lvconvert raid5_n raid5_n 3 4 $vg $lv1 "" 1
# Convert raid5_n -> raid5_ls
_reshape_layout raid5_ls 3 4 $vg $lv1
# Convert raid5_ls to 5 stripes
_reshape_layout raid5_ls 5 6 $vg $lv1 --stripes 5
# Convert raid5_ls back to 3 stripes
_reshape_layout raid5_ls 3 6 $vg $lv1 --stripes 3 --force
_reshape_layout raid5_ls 3 4 $vg $lv1 --stripes 3
# Convert raid5_ls to 7 stripes
_reshape_layout raid5_ls 7 8 $vg $lv1 --stripes 7
# Convert raid5_ls to 9 stripes
_reshape_layout raid5_ls 9 10 $vg $lv1 --stripes 9
# Convert raid5_ls to 14 stripes
_reshape_layout raid5_ls 14 15 $vg $lv1 --stripes 14
# Convert raid5_ls to 63 stripes
_reshape_layout raid5_ls 63 64 $vg $lv1 --stripes 63
# Convert raid5_ls back to 27 stripes
_reshape_layout raid5_ls 27 64 $vg $lv1 --stripes 27 --force
_reshape_layout raid5_ls 27 28 $vg $lv1 --stripes 27
# Convert raid5_ls back to 4 stripes
_reshape_layout raid5_ls 4 28 $vg $lv1 --stripes 4 --force
_reshape_layout raid5_ls 4 5 $vg $lv1 --stripes 4
# Convert raid5_ls back to 3 stripes
_reshape_layout raid5_ls 3 5 $vg $lv1 --stripes 3 --force
_reshape_layout raid5_ls 3 4 $vg $lv1 --stripes 3
# Convert raid5_ls -> raid5_rs
_reshape_layout raid5_rs 3 4 $vg $lv1
# Convert raid5_rs -> raid5_la
_reshape_layout raid5_la 3 4 $vg $lv1
# Convert raid5_la -> raid5_ra
_reshape_layout raid5_ra 3 4 $vg $lv1
# Convert raid5_ra -> raid6_ra_6
_lvconvert raid6_ra_6 raid6_ra_6 3 5 $vg $lv1 "4.00m" 1
# Convert raid6_ra_6 -> raid6(_zr)
_reshape_layout raid6 3 5 $vg $lv1
# Convert raid6(_zr) -> raid6_nc
_reshape_layout raid6_nc 3 5 $vg $lv1
# Convert raid6(_nc) -> raid6_nr
_reshape_layout raid6_nr 3 5 $vg $lv1
# Convert raid6(_nr) -> raid6_rs_6
_reshape_layout raid6_rs_6 3 5 $vg $lv1
# Convert raid6_rs_6 to 5 stripes
_reshape_layout raid6_rs_6 5 7 $vg $lv1 --stripes 5
# Convert raid6_rs_6 to 4 stripes
_reshape_layout raid6_rs_6 4 7 $vg $lv1 --stripes 4 --force
_reshape_layout raid6_rs_6 4 6 $vg $lv1 --stripes 4
check lv_first_seg_field $vg/$lv1 stripesize "256.00k"
# Convert raid6_rs_6 to raid6_n_6
_reshape_layout raid6_n_6 4 6 $vg $lv1
# Convert raid6_n_6 -> striped
_lvconvert striped striped 4 4 $vg $lv1
check lv_first_seg_field $vg/$lv1 stripesize "256.00k"
# Convert striped -> raid10(_near)
_lvconvert raid10 raid10 4 8 $vg $lv1
# Convert raid10 to 10 stripes and 64K stripesize
# FIXME: change once we support odd numbers of raid10 stripes
not _reshape_layout raid10 4 9 $vg $lv1 --stripes 9 --stripesize 64K
_reshape_layout raid10 5 10 $vg $lv1 --stripes 10 --stripesize 64K
check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
# Convert raid10 -> striped
_lvconvert striped striped 5 5 $vg $lv1
check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
vgremove -ff $vg


@@ -33,6 +33,7 @@ function _lvcreate
lvcreate -y -aey --type $level -i $req_stripes -L $size -n $lv $vg
check lv_field $vg/$lv segtype "$level"
check lv_field $vg/$lv data_stripes $req_stripes
check lv_field $vg/$lv stripes $stripes
mkfs.ext4 "$DM_DEV_DIR/$vg/$lv"
fsck -fn "$DM_DEV_DIR/$vg/$lv"
@@ -42,10 +43,11 @@ function _lvconvert
{
local req_level=$1
local level=$2
local stripes=$3
local vg=$4
local lv=$5
local region_size=$6
local data_stripes=$3
local stripes=$4
local vg=$5
local lv=$6
local region_size=$7
local wait_and_check=1
local R=""
@@ -56,6 +58,7 @@ function _lvconvert
lvconvert -y --ty $req_level $R $vg/$lv
[ $? -ne 0 ] && return $?
check lv_field $vg/$lv segtype "$level"
check lv_field $vg/$lv data_stripes $data_stripes
check lv_field $vg/$lv stripes $stripes
if [ "$wait_and_check" -eq 1 ]
then
@@ -70,19 +73,19 @@ function _invalid_raid5_conversions
local vg=$1
local lv=$2
not _lvconvert striped 4 $vg $lv1
not _lvconvert raid0 raid0 4 $vg $lv1
not _lvconvert raid0_meta raid0_meta 4 $vg $lv1
not _lvconvert raid4 raid4 5 $vg $lv1
not _lvconvert raid5_ls raid5_ls 5 $vg $lv1
not _lvconvert raid5_rs raid5_rs 5 $vg $lv1
not _lvconvert raid5_la raid5_la 5 $vg $lv1
not _lvconvert raid5_ra raid5_ra 5 $vg $lv1
not _lvconvert raid6_zr raid6_zr 6 $vg $lv1
not _lvconvert raid6_nr raid6_nr 6 $vg $lv1
not _lvconvert raid6_nc raid6_nc 6 $vg $lv1
not _lvconvert raid6_n_6 raid6_n_6 6 $vg $lv1
not _lvconvert raid6 raid6_n_6 6 $vg $lv1
not _lvconvert striped 4 4 $vg $lv1
not _lvconvert raid0 raid0 4 4 $vg $lv1
not _lvconvert raid0_meta raid0_meta 4 4 $vg $lv1
not _lvconvert raid4 raid4 4 5 $vg $lv1
not _lvconvert raid5_ls raid5_ls 4 5 $vg $lv1
not _lvconvert raid5_rs raid5_rs 4 5 $vg $lv1
not _lvconvert raid5_la raid5_la 4 5 $vg $lv1
not _lvconvert raid5_ra raid5_ra 4 5 $vg $lv1
not _lvconvert raid6_zr raid6_zr 4 6 $vg $lv1
not _lvconvert raid6_nr raid6_nr 4 6 $vg $lv1
not _lvconvert raid6_nc raid6_nc 4 6 $vg $lv1
not _lvconvert raid6_n_6 raid6_n_6 4 6 $vg $lv1
not _lvconvert raid6 raid6_n_6 4 6 $vg $lv1
}
# Delay leg so that rebuilding status characters
@@ -117,8 +120,7 @@ fsck -fn "$DM_DEV_DIR/$vg/$lv1"
lvconvert -m 4 -R 128K $vg/$lv1
check lv_field $vg/$lv1 segtype "raid1"
check lv_field $vg/$lv1 stripes 5
# FIXME: once lv_raid_chanage_image_count() supports region_size changes
not check lv_field $vg/$lv1 regionsize "128.00k"
check lv_field $vg/$lv1 regionsize "128.00k"
fsck -fn "$DM_DEV_DIR/$vg/$lv1"
aux wait_for_sync $vg $lv1
fsck -fn "$DM_DEV_DIR/$vg/$lv1"
@@ -163,110 +165,116 @@ _lvcreate raid4 3 4 8M $vg $lv1
aux wait_for_sync $vg $lv1
# Convert raid4 -> striped
not _lvconvert striped striped 3 $vg $lv1 512k
_lvconvert striped striped 3 $vg $lv1
not _lvconvert striped striped 3 3 $vg $lv1 512k
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid4
_lvconvert raid4 raid4 4 $vg $lv1 64k
_lvconvert raid4 raid4 3 4 $vg $lv1 64k
check lv_field $vg/$lv1 regionsize "64.00k"
# Convert raid4 -> raid5_n
_lvconvert raid5 raid5_n 4 $vg $lv1 128k
_lvconvert raid5 raid5_n 3 4 $vg $lv1 128k
check lv_field $vg/$lv1 regionsize "128.00k"
# Convert raid5_n -> striped
_lvconvert striped striped 3 $vg $lv1
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid5_n
_lvconvert raid5_n raid5_n 4 $vg $lv1
_lvconvert raid5_n raid5_n 3 4 $vg $lv1
# Convert raid5_n -> raid4
_lvconvert raid4 raid4 4 $vg $lv1
_lvconvert raid4 raid4 3 4 $vg $lv1
# Convert raid4 -> raid0
_lvconvert raid0 raid0 3 $vg $lv1
_lvconvert raid0 raid0 3 3 $vg $lv1
# Convert raid0 -> raid5_n
_lvconvert raid5_n raid5_n 4 $vg $lv1
_lvconvert raid5_n raid5_n 3 4 $vg $lv1
# Convert raid5_n -> raid0_meta
_lvconvert raid0_meta raid0_meta 3 $vg $lv1
_lvconvert raid0_meta raid0_meta 3 3 $vg $lv1
# Convert raid0_meta -> raid5_n
_lvconvert raid5 raid5_n 4 $vg $lv1
_lvconvert raid5 raid5_n 3 4 $vg $lv1
# Convert raid4 -> raid0_meta
not _lvconvert raid0_meta raid0_meta 3 $vg $lv1 256k
_lvconvert raid0_meta raid0_meta 3 $vg $lv1
not _lvconvert raid0_meta raid0_meta 3 3 $vg $lv1 256k
_lvconvert raid0_meta raid0_meta 3 3 $vg $lv1
# Convert raid0_meta -> raid4
_lvconvert raid4 raid4 4 $vg $lv1
_lvconvert raid4 raid4 3 4 $vg $lv1
# Convert raid4 -> raid0
_lvconvert raid0 raid0 3 $vg $lv1
_lvconvert raid0 raid0 3 3 $vg $lv1
# Convert raid0 -> raid4
_lvconvert raid4 raid4 4 $vg $lv1
_lvconvert raid4 raid4 3 4 $vg $lv1
# Convert raid4 -> striped
_lvconvert striped striped 3 $vg $lv1
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid6_n_6
_lvconvert raid6_n_6 raid6_n_6 5 $vg $lv1
_lvconvert raid6_n_6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> striped
_lvconvert striped striped 3 $vg $lv1
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid6_n_6
_lvconvert raid6 raid6_n_6 5 $vg $lv1
_lvconvert raid6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> raid5_n
_lvconvert raid5_n raid5_n 4 $vg $lv1
_lvconvert raid5_n raid5_n 3 4 $vg $lv1
# Convert raid5_n -> raid6_n_6
_lvconvert raid6_n_6 raid6_n_6 5 $vg $lv1
_lvconvert raid6_n_6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> raid4
_lvconvert raid4 raid4 4 $vg $lv1
_lvconvert raid4 raid4 3 4 $vg $lv1
# Convert raid4 -> raid6_n_6
_lvconvert raid6 raid6_n_6 5 $vg $lv1
_lvconvert raid6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> raid0
_lvconvert raid0 raid0 3 $vg $lv1
_lvconvert raid0 raid0 3 3 $vg $lv1
# Convert raid0 -> raid6_n_6
_lvconvert raid6_n_6 raid6_n_6 5 $vg $lv1
_lvconvert raid6_n_6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> raid0_meta
_lvconvert raid0_meta raid0_meta 3 $vg $lv1
_lvconvert raid0_meta raid0_meta 3 3 $vg $lv1
# Convert raid0_meta -> raid6_n_6
_lvconvert raid6 raid6_n_6 5 $vg $lv1
_lvconvert raid6 raid6_n_6 3 5 $vg $lv1
# Convert raid6_n_6 -> striped
not _lvconvert striped striped 3 $vg $lv1 128k
_lvconvert striped striped 3 $vg $lv1
not _lvconvert striped striped 3 3 $vg $lv1 128k
_lvconvert striped striped 3 3 $vg $lv1
# Convert striped -> raid10
_lvconvert raid10 raid10 6 $vg $lv1
_lvconvert raid10 raid10 3 6 $vg $lv1
# Convert raid10 -> raid0
not _lvconvert raid0 raid0 3 $vg $lv1 64k
_lvconvert raid0 raid0 3 $vg $lv1
not _lvconvert raid0 raid0 3 3 $vg $lv1 64k
_lvconvert raid0 raid0 3 3 $vg $lv1
# Convert raid0 -> raid10
_lvconvert raid10 raid10 6 $vg $lv1
_lvconvert raid10 raid10 3 6 $vg $lv1
# Convert raid10 -> raid0
_lvconvert raid0_meta raid0_meta 3 $vg $lv1
# Convert raid10 -> raid0_meta
_lvconvert raid0_meta raid0_meta 3 3 $vg $lv1
# Convert raid0_meta -> raid5
_lvconvert raid5_n raid5_n 3 4 $vg $lv1
# Convert raid5_n -> raid0_meta
_lvconvert raid0_meta raid0_meta 3 3 $vg $lv1
# Convert raid0_meta -> raid10
_lvconvert raid10 raid10 6 $vg $lv1
_lvconvert raid10 raid10 3 6 $vg $lv1
# Convert raid10 -> striped
not _lvconvert striped striped 3 $vg $lv1 256k
_lvconvert striped striped 3 $vg $lv1
not _lvconvert striped striped 3 3 $vg $lv1 256k
_lvconvert striped striped 3 3 $vg $lv1
# Clean up
lvremove -y $vg
@@ -275,51 +283,51 @@ lvremove -y $vg
_lvcreate raid5 4 5 8M $vg $lv1
aux wait_for_sync $vg $lv1
_invalid_raid5_conversions $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
_lvconvert raid6_ls_6 raid6_ls_6 6 $vg $lv1
_lvconvert raid5_ls raid5_ls 5 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 4 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 4 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
_lvconvert raid6_ls_6 raid6_ls_6 4 6 $vg $lv1
_lvconvert raid5_ls raid5_ls 4 5 $vg $lv1
lvremove -y $vg
_lvcreate raid5_ls 4 5 8M $vg $lv1
aux wait_for_sync $vg $lv1
_invalid_raid5_conversions $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
_lvconvert raid6_ls_6 raid6_ls_6 6 $vg $lv1
_lvconvert raid5_ls raid5_ls 5 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 4 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 4 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
_lvconvert raid6_ls_6 raid6_ls_6 4 6 $vg $lv1
_lvconvert raid5_ls raid5_ls 4 5 $vg $lv1
lvremove -y $vg
_lvcreate raid5_rs 4 5 8M $vg $lv1
aux wait_for_sync $vg $lv1
_invalid_raid5_conversions $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
_lvconvert raid6_rs_6 raid6_rs_6 6 $vg $lv1
_lvconvert raid5_rs raid5_rs 5 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 4 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
_lvconvert raid6_rs_6 raid6_rs_6 4 6 $vg $lv1
_lvconvert raid5_rs raid5_rs 4 5 $vg $lv1
lvremove -y $vg
_lvcreate raid5_la 4 5 8M $vg $lv1
aux wait_for_sync $vg $lv1
_invalid_raid5_conversions $vg $lv1
not _lvconvert raid6_ls_6 raid6_ls_6 6 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
_lvconvert raid6_la_6 raid6_la_6 6 $vg $lv1
_lvconvert raid5_la raid5_la 5 $vg $lv1
not _lvconvert raid6_ls_6 raid6_ls_6 4 6 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 4 6 $vg $lv1
not _lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
_lvconvert raid6_la_6 raid6_la_6 4 6 $vg $lv1
_lvconvert raid5_la raid5_la 4 5 $vg $lv1
lvremove -y $vg
_lvcreate raid5_ra 4 5 8M $vg $lv1
aux wait_for_sync $vg $lv1
_invalid_raid5_conversions $vg $lv1
not _lvconvert raid6_ls_6 raid6_ls_6 6 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 6 $vg $lv1
_lvconvert raid6_ra_6 raid6_ra_6 6 $vg $lv1
_lvconvert raid5_ra raid5_ra 5 $vg $lv1
not _lvconvert raid6_ls_6 raid6_ls_6 4 6 $vg $lv1
not _lvconvert raid6_rs_6 raid6_rs_6 4 6 $vg $lv1
not _lvconvert raid6_la_6 raid6_la_6 4 6 $vg $lv1
_lvconvert raid6_ra_6 raid6_ra_6 4 6 $vg $lv1
_lvconvert raid5_ra raid5_ra 4 5 $vg $lv1
lvremove -y $vg
else


@@ -109,10 +109,10 @@ arg(cachemode_ARG, '\0', "cachemode", cachemode_VAL, 0, 0,
"been stored in both the cache pool and on the origin LV.\n"
"While writethrough may be slower for writes, it is more\n"
"resilient if something should happen to a device associated with the\n"
"cache pool LV. With writethrough, all reads are served\n"
"cache pool LV. With \\fBpassthrough\\fP, all reads are served\n"
"from the origin LV (all reads miss the cache) and all writes are\n"
"forwarded to the origin LV; additionally, write hits cause cache\n"
"block invalidates. See lvmcache(7) for more information.\n")
"block invalidates. See \\fBlvmcache\\fP(7) for more information.\n")
arg(cachepool_ARG, '\0', "cachepool", lv_VAL, 0, 0,
"The name of a cache pool LV.\n")
@@ -414,8 +414,15 @@ arg(pooldatasize_ARG, '\0', "pooldatasize", sizemb_VAL, 0, 0, NULL)
arg(poolmetadata_ARG, '\0', "poolmetadata", lv_VAL, 0, 0,
"The name of a an LV to use for storing pool metadata.\n")
arg(poolmetadatasize_ARG, '\0', "poolmetadatasize", sizemb_VAL, 0, 0,
"The size of the pool metadata LV created by the command.\n")
arg(poolmetadatasize_ARG, '\0', "poolmetadatasize", ssizemb_VAL, 0, 0,
"#lvcreate\n"
"#lvconvert\n"
"Specifies the size of the new pool metadata LV.\n"
"#lvresize\n"
"#lvextend\n"
"Specifies the new size of the pool metadata LV.\n"
"The plus prefix \\fB+\\fP can be used, in which case\n"
"the value is added to the current size.\n")
arg(poolmetadataspare_ARG, '\0', "poolmetadataspare", bool_VAL, 0, 0,
"Enable or disable the automatic creation and management of a\n"
@@ -693,7 +700,7 @@ arg(unquoted_ARG, '\0', "unquoted", 0, 0, 0,
"pairs are not quoted.\n")
arg(usepolicies_ARG, '\0', "usepolicies", 0, 0, 0,
"Perform an operation according to the policy configured in lvm.conf.\n"
"Perform an operation according to the policy configured in lvm.conf\n"
"or a profile.\n")
arg(validate_ARG, '\0', "validate", 0, 0, 0,
@@ -807,8 +814,8 @@ arg(activate_ARG, 'a', "activate", activation_VAL, 0, 0,
"if the list is set but empty, no LVs match.\n"
"Autoactivation should be used during system boot to make it possible\n"
"to select which LVs should be automatically activated by the system.\n"
"See lvmlockd(8) for more information about activation options for shared VGs.\n"
"See clvmd(8) for more information about activation options for clustered VGs.\n"
"See lvmlockd(8) for more information about activation options \\fBey\\fP and \\fBsy\\fP for shared VGs.\n"
"See clvmd(8) for more information about activation options \\fBey\\fP, \\fBsy\\fP, \\fBly\\fP and \\fBln\\fP for clustered VGs.\n"
"#lvcreate\n"
"Controls the active state of the new LV.\n"
"\\fBy\\fP makes the LV active, or available.\n"
@@ -967,15 +974,15 @@ arg(stripes_ARG, 'i', "stripes", number_VAL, 0, 0,
"Specifies the number of stripes in a striped LV. This is the number of\n"
"PVs (devices) that a striped LV is spread across. Data that\n"
"appears sequential in the LV is spread across multiple devices in units of\n"
"the stripe size (see --stripesize). This does not apply to\n"
"existing allocated space, only newly allocated space can be striped.\n"
"the stripe size (see --stripesize). This does not change existing\n"
"allocated space, but only applies to space being allocated by the command.\n"
"When creating a RAID 4/5/6 LV, this number does not include the extra\n"
"devices that are required for parity. The largest number depends on\n"
"the RAID type (raid0: 64, raid10: 32, raid4/5: 63, raid6: 62.)\n"
"When unspecified, the default depends on the RAID type\n"
"the RAID type (raid0: 64, raid10: 32, raid4/5: 63, raid6: 62), and\n"
"when unspecified, the default depends on the RAID type\n"
"(raid0: 2, raid10: 4, raid4/5: 3, raid6: 5.)\n"
"When unspecified, to stripe across all PVs of the VG,\n"
"set lvm.conf allocation/raid_stripe_all_devices=1.\n")
"To stripe a new raid LV across all PVs by default,\n"
"see lvm.conf allocation/raid_stripe_all_devices.\n")
arg(stripesize_ARG, 'I', "stripesize", sizekb_VAL, 0, 0,
"The amount of data that is written to one device before\n"
@@ -987,7 +994,7 @@ arg(logicalvolume_ARG, 'l', "logicalvolume", uint32_VAL, 0, 0,
arg(maxlogicalvolumes_ARG, 'l', "maxlogicalvolumes", uint32_VAL, 0, 0,
"Sets the maximum number of LVs allowed in a VG.\n")
arg(extents_ARG, 'l', "extents", numsignedper_VAL, 0, 0,
arg(extents_ARG, 'l', "extents", extents_VAL, 0, 0,
"#lvcreate\n"
"Specifies the size of the new LV in logical extents.\n"
"The --size and --extents options are alternate methods of specifying size.\n"
@@ -1022,10 +1029,9 @@ arg(extents_ARG, 'l', "extents", numsignedper_VAL, 0, 0,
"When expressed as a percentage, the size defines an upper limit for the\n"
"number of logical extents in the new LV. The precise number of logical\n"
"extents in the new LV is not determined until the command has completed.\n"
"The plus prefix \\fB+\\fP can be used, in which case\n"
"the value is added to the current size,\n"
"or the minus prefix \\fB-\\fP can be used, in which case\n"
"the value is subtracted from the current size.\n")
"The plus \\fB+\\fP or minus \\fB-\\fP prefix can be used, in which case\n"
"the value is not an absolute size, but is an amount added or subtracted\n"
"relative to the current size.\n")
arg(list_ARG, 'l', "list", 0, 0, 0,
"#lvmconfig\n"
@@ -1042,18 +1048,20 @@ arg(list_ARG, 'l', "list", 0, 0, 0,
arg(lvmpartition_ARG, 'l', "lvmpartition", 0, 0, 0,
"Only report PVs.\n")
arg(size_ARG, 'L', "size", sizemb_VAL, 0, 0,
/*
* FIXME: for lvcreate, size only accepts absolute values, no +|-,
* for lvresize, size can be relative +|-, for lvreduce, size
* can be relative -, and for lvextend, size can be relative +.
* Should we define separate val enums for each of those cases,
* and at the start of the command, change the val type for
* size_ARG? The same for extents_ARG.
*/
arg(size_ARG, 'L', "size", ssizemb_VAL, 0, 0,
"#lvcreate\n"
"Specifies the size of the new LV.\n"
"The --size and --extents options are alternate methods of specifying size.\n"
"The total number of physical extents used will be\n"
"greater when redundant data is needed for RAID levels.\n"
"A suffix can be chosen from: \\fBbBsSkKmMgGtTpPeE\\fP.\n"
"All units are base two values, regardless of letter capitalization:\n"
"b|B is bytes, s|S is sectors of 512 bytes,\n"
"k|K is kilobytes, m|M is megabytes,\n"
"g|G is gigabytes, t|T is terabytes,\n"
"p|P is petabytes, e|E is exabytes.\n"
"#lvreduce\n"
"#lvextend\n"
"#lvresize\n"
@@ -1061,12 +1069,6 @@ arg(size_ARG, 'L', "size", sizemb_VAL, 0, 0,
"The --size and --extents options are alternate methods of specifying size.\n"
"The total number of physical extents used will be\n"
"greater when redundant data is needed for RAID levels.\n"
"A suffix can be chosen from: \\fBbBsSkKmMgGtTpPeE\\fP.\n"
"All units are base two values, regardless of letter capitalization:\n"
"b|B is bytes, s|S is sectors of 512 bytes,\n"
"k|K is kilobytes, m|M is megabytes,\n"
"g|G is gigabytes, t|T is terabytes,\n"
"p|P is petabytes, e|E is exabytes.\n"
"The plus prefix \\fB+\\fP can be used, in which case\n"
"the value is added to the current size,\n"
"or the minus prefix \\fB-\\fP can be used, in which case\n"
@@ -1104,7 +1106,7 @@ arg(maps_ARG, 'm', "maps", 0, 0, 0,
/* FIXME: should the unused mirrors option be removed from lvextend? */
arg(mirrors_ARG, 'm', "mirrors", numsigned_VAL, 0, 0,
arg(mirrors_ARG, 'm', "mirrors", snumber_VAL, 0, 0,
"#lvcreate\n"
"Specifies the number of mirror images in addition to the original LV\n"
"image, e.g. --mirrors 1 means there are two images of the data, the\n"
@@ -1230,7 +1232,9 @@ arg(resizefs_ARG, 'r', "resizefs", 0, 0, 0,
arg(reset_ARG, 'R', "reset", 0, 0, 0, NULL)
arg(regionsize_ARG, 'R', "regionsize", regionsize_VAL, 0, 0,
"Size of each raid or mirror synchronization region.\n")
"Size of each raid or mirror synchronization region.\n"
"lvm.conf activation/raid_region_size can be used to\n"
"configure a default.\n")
arg(physicalextentsize_ARG, 's', "physicalextentsize", sizemb_VAL, 0, 0,
"#vgcreate\n"
@@ -1295,9 +1299,10 @@ arg(stdin_ARG, 's', "stdin", 0, 0, 0, NULL)
arg(select_ARG, 'S', "select", string_VAL, ARG_GROUPABLE, 0,
"Select objects for processing and reporting based on specified criteria.\n"
"The criteria syntax is described in lvmreport(7) under Selection.\n"
"For reporting commands, display rows that match the criteria.\n"
"All rows can be displayed with an additional \"selected\" field (-o selected)\n"
"The criteria syntax is described by \\fB--select help\\fP and \\fBlvmreport\\fP(7).\n"
"For reporting commands, one row is displayed for each object matching the criteria.\n"
"See \\fB--options help\\fP for selectable object fields.\n"
"Rows can be displayed with an additional \"selected\" field (-o selected)\n"
"showing 1 if the row matches the selection and 0 otherwise.\n"
"For non-reporting commands which process LVM entities, the selection is\n"
"used to choose items to process.\n")


@@ -307,9 +307,12 @@ RULE: all not LV_thinpool LV_cachepool
OO_LVCONVERT_RAID: --mirrors SNumber, --stripes_long Number,
--stripesize SizeKB, --regionsize RegionSize, --interval Number
OO_LVCONVERT_POOL: --poolmetadata LV, --poolmetadatasize SizeMB,
OO_LVCONVERT_POOL: --poolmetadata LV, --poolmetadatasize SSizeMB,
--poolmetadataspare Bool, --readahead Readahead, --chunksize SizeKB
OO_LVCONVERT_CACHE: --cachemode CacheMode, --cachepolicy String,
--cachesettings String, --zero Bool
OO_LVCONVERT: --alloc Alloc, --background, --force, --noudevsync
---
@@ -335,14 +338,19 @@ lvconvert --type mirror LV
OO: OO_LVCONVERT_RAID, OO_LVCONVERT, --mirrorlog MirrorLog
OP: PV ...
ID: lvconvert_raid_types
DESC: Convert LV to type mirror (also see type raid1).
DESC: Convert LV to type mirror (also see type raid1),
DESC: (also see lvconvert --mirrors).
RULE: all not lv_is_locked lv_is_pvmove
FLAGS: SECONDARY_SYNTAX
# When LV is already raid, this changes the raid layout
# (changing layout of raid0 and raid1 not allowed.)
lvconvert --type raid LV
OO: OO_LVCONVERT_RAID, OO_LVCONVERT
OP: PV ...
ID: lvconvert_raid_types
DESC: Convert LV to raid.
DESC: Convert LV to raid or change raid layout.
RULE: all not lv_is_locked lv_is_pvmove
lvconvert --mirrors SNumber LV
@@ -352,12 +360,28 @@ ID: lvconvert_raid_types
DESC: Convert LV to raid1 or mirror, or change number of mirror images.
RULE: all not lv_is_locked lv_is_pvmove
lvconvert --stripes_long Number LV_raid
OO: OO_LVCONVERT, --interval Number, --regionsize RegionSize, --stripesize SizeKB
OP: PV ...
ID: lvconvert_raid_types
DESC: Convert raid LV to change number of stripe images.
RULE: all not lv_is_locked lv_is_pvmove
RULE: all not LV_raid0 LV_raid1
lvconvert --stripesize SizeKB LV_raid
OO: OO_LVCONVERT, --interval Number, --regionsize RegionSize
ID: lvconvert_raid_types
DESC: Convert raid LV to change the stripe size.
RULE: all not lv_is_locked lv_is_pvmove
RULE: all not LV_raid0 LV_raid1
lvconvert --regionsize RegionSize LV_raid
OO: OO_LVCONVERT
ID: lvconvert_change_region_size
DESC: Change the region size of an LV.
RULE: all not lv_is_locked lv_is_pvmove
RULE: all not LV_raid0
FLAGS: SECONDARY_SYNTAX
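Illustrative uses of these reshape entry points on a hypothetical raid5 LV:

  # lvconvert --stripes 5 vg0/lv0        # change the number of stripe images
  # lvconvert --stripesize 128k vg0/lv0  # change the stripe size
  # lvconvert --regionsize 1m vg0/lv0    # change the region size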
---
@@ -390,6 +414,7 @@ OP: PV ...
ID: lvconvert_change_mirrorlog
DESC: Change the type of mirror log used by a mirror LV.
RULE: all not lv_is_locked lv_is_pvmove
FLAGS: SECONDARY_SYNTAX
---
@@ -407,8 +432,8 @@ RULE: all not lv_is_locked
lvconvert --thin --thinpool LV LV_linear_striped_raid_cache
OO: --type thin, --originname LV_new, --zero Bool, OO_LVCONVERT_POOL, OO_LVCONVERT
ID: lvconvert_to_thin_with_external
DESC: Convert LV to a thin LV, using the original LV as an external origin.
DESC: (variant, infers --type thin).
DESC: Convert LV to a thin LV, using the original LV as an external origin
DESC: (infers --type thin).
FLAGS: SECONDARY_SYNTAX
RULE: all and lv_is_visible
RULE: all not lv_is_locked
@@ -416,20 +441,18 @@ RULE: all not lv_is_locked
---
lvconvert --type cache --cachepool LV LV_linear_striped_raid_thinpool
OO: --cache, --cachemode CacheMode, --cachepolicy String,
--cachesettings String, --zero Bool, OO_LVCONVERT_POOL, OO_LVCONVERT
OO: --cache, OO_LVCONVERT_CACHE, OO_LVCONVERT_POOL, OO_LVCONVERT
ID: lvconvert_to_cache_vol
DESC: Convert LV to type cache.
RULE: all and lv_is_visible
# alternate form of lvconvert --type cache
lvconvert --cache --cachepool LV LV_linear_striped_raid_thinpool
OO: --type cache, --cachemode CacheMode, --cachepolicy String,
--cachesettings String, --zero Bool, OO_LVCONVERT_POOL, OO_LVCONVERT
OO: --type cache, OO_LVCONVERT_CACHE, OO_LVCONVERT_POOL, OO_LVCONVERT
ID: lvconvert_to_cache_vol
DESC: Convert LV to type cache (variant, infers --type cache).
FLAGS: SECONDARY_SYNTAX
DESC: Convert LV to type cache (infers --type cache).
RULE: all and lv_is_visible
FLAGS: SECONDARY_SYNTAX
---
@@ -476,8 +499,7 @@ FLAGS: PREVIOUS_SYNTAX
---
lvconvert --type cache-pool LV_linear_striped_raid
OO: OO_LVCONVERT_POOL, OO_LVCONVERT,
--cachemode CacheMode, --cachepolicy String, --cachesettings String
OO: OO_LVCONVERT_CACHE, OO_LVCONVERT_POOL, OO_LVCONVERT
OP: PV ...
ID: lvconvert_to_cachepool
DESC: Convert LV to type cache-pool.
@@ -505,8 +527,7 @@ DESC: Convert LV to type cache-pool.
# of creating a pool or swapping metadata should be used.
lvconvert --cachepool LV_linear_striped_raid_cachepool
OO: --type cache-pool, OO_LVCONVERT_POOL, OO_LVCONVERT,
--cachemode CacheMode, --cachepolicy String, --cachesettings String
OO: --type cache-pool, OO_LVCONVERT_CACHE, OO_LVCONVERT_POOL, OO_LVCONVERT
OP: PV ...
ID: lvconvert_to_cachepool_or_swap_metadata
DESC: Convert LV to type cache-pool (variant, use --type cache-pool).
@@ -526,6 +547,7 @@ lvconvert --uncache LV_cache_thinpool
OO: OO_LVCONVERT
ID: lvconvert_split_and_remove_cachepool
DESC: Separate and delete the cache pool from a cache LV.
FLAGS: SECONDARY_SYNTAX
---
@@ -533,6 +555,7 @@ lvconvert --swapmetadata --poolmetadata LV LV_thinpool_cachepool
OO: --chunksize SizeKB, OO_LVCONVERT
ID: lvconvert_swap_pool_metadata
DESC: Swap metadata LV in a thin pool or cache pool (for repair only).
FLAGS: SECONDARY_SYNTAX
---
@@ -580,6 +603,7 @@ OO: OO_LVCONVERT
ID: lvconvert_split_cow_snapshot
DESC: Separate a COW snapshot from its origin LV.
RULE: all not lv_is_locked lv_is_pvmove lv_is_origin lv_is_external_origin lv_is_merging_cow
FLAGS: SECONDARY_SYNTAX
---
@@ -597,9 +621,9 @@ OO: --snapshot, --chunksize SizeKB, --zero Bool, OO_LVCONVERT
ID: lvconvert_combine_split_snapshot
DESC: Combine a former COW snapshot (second arg) with a former
DESC: origin LV (first arg) to reverse a splitsnapshot command.
FLAGS: SECONDARY_SYNTAX
RULE: all not lv_is_locked lv_is_pvmove
RULE: all and lv_is_visible
FLAGS: SECONDARY_SYNTAX
lvconvert --snapshot LV LV_linear
OO: --type snapshot, --chunksize SizeKB, --zero Bool, OO_LVCONVERT
@@ -608,6 +632,7 @@ DESC: Combine a former COW snapshot (second arg) with a former
DESC: origin LV (first arg) to reverse a splitsnapshot command.
RULE: all not lv_is_locked lv_is_pvmove
RULE: all and lv_is_visible
FLAGS: SECONDARY_SYNTAX
---
@@ -640,7 +665,7 @@ lvconvert --replace PV LV_raid
OO: OO_LVCONVERT
OP: PV ...
ID: lvconvert_replace_pv
DESC: Replace specific PV(s) in a raid* LV with another PV.
DESC: Replace specific PV(s) in a raid LV with another PV.
RULE: all not lv_is_locked lv_is_pvmove
---
@@ -648,7 +673,7 @@ RULE: all not lv_is_locked lv_is_pvmove
# This command just (re)starts the polling process on the LV
# to continue a previous conversion.
lvconvert --startpoll LV_mirror
lvconvert --startpoll LV_mirror_raid
OO: OO_LVCONVERT
ID: lvconvert_start_poll
DESC: Poll LV to continue conversion.
@@ -656,10 +681,10 @@ RULE: all and lv_is_converting
# alternate form of lvconvert --startpoll, this is only kept
# for compat since this was how it used to be done.
lvconvert LV_mirror
lvconvert LV_mirror_raid
OO: OO_LVCONVERT
ID: lvconvert_start_poll
DESC: Poll LV to continue conversion.
DESC: Poll LV to continue conversion (also see --startpoll).
RULE: all and lv_is_converting
FLAGS: SECONDARY_SYNTAX
@@ -674,9 +699,10 @@ OO_LVCREATE: --addtag Tag, --alloc Alloc, --autobackup Bool, --activate Active,
--reportformat ReportFmt, --setactivationskip Bool, --wipesignatures Bool,
--zero Bool
OO_LVCREATE_CACHE: --cachemode CacheMode, --cachepolicy String, --cachesettings String
OO_LVCREATE_CACHE: --cachemode CacheMode, --cachepolicy String, --cachesettings String,
--chunksize SizeKB
OO_LVCREATE_POOL: --poolmetadatasize SizeMB, --poolmetadataspare Bool, --chunksize SizeKB
OO_LVCREATE_POOL: --poolmetadatasize SSizeMB, --poolmetadataspare Bool, --chunksize SizeKB
OO_LVCREATE_THIN: --discards Discards, --errorwhenfull Bool
@@ -685,7 +711,7 @@ OO_LVCREATE_RAID: --mirrors SNumber, --stripes Number, --stripesize SizeKB,
---
lvcreate --type error --size SizeMB VG
lvcreate --type error --size SSizeMB VG
OO: OO_LVCREATE
ID: lvcreate_error_vol
DESC: Create an LV that returns errors when used.
@@ -693,7 +719,7 @@ FLAGS: SECONDARY_SYNTAX
---
lvcreate --type zero --size SizeMB VG
lvcreate --type zero --size SSizeMB VG
OO: OO_LVCREATE
ID: lvcreate_zero_vol
DESC: Create an LV that returns zeros when read.
@@ -701,7 +727,7 @@ FLAGS: SECONDARY_SYNTAX
---
lvcreate --type linear --size SizeMB VG
lvcreate --type linear --size SSizeMB VG
OO: OO_LVCREATE
OP: PV ...
IO: --mirrors 0, --stripes 1
@@ -709,27 +735,23 @@ ID: lvcreate_linear
DESC: Create a linear LV.
FLAGS: SECONDARY_SYNTAX
# This is the one place we mention the optional --name
# because it's the most common case and may be confusing
# to people to not see the name parameter.
lvcreate --size SizeMB VG
lvcreate --size SSizeMB VG
OO: --type linear, OO_LVCREATE
OP: PV ...
IO: --mirrors 0, --stripes 1
ID: lvcreate_linear
DESC: Create a linear LV (default --type linear).
DESC: When --name is omitted, the name is generated.
DESC: Create a linear LV.
---
lvcreate --type striped --size SizeMB VG
lvcreate --type striped --size SSizeMB VG
OO: --stripes Number, --stripesize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_striped
DESC: Create a striped LV.
DESC: Create a striped LV (also see lvcreate --stripes).
FLAGS: SECONDARY_SYNTAX
lvcreate --stripes Number --size SizeMB VG
lvcreate --stripes Number --size SSizeMB VG
OO: --type striped, --stripesize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_striped
@@ -737,72 +759,73 @@ DESC: Create a striped LV (infers --type striped).
---
lvcreate --type mirror --size SizeMB VG
lvcreate --type mirror --size SSizeMB VG
OO: --mirrors SNumber, --mirrorlog MirrorLog, --regionsize RegionSize, --stripes Number, OO_LVCREATE
OP: PV ...
ID: lvcreate_mirror
DESC: Create a mirror LV (also see --type raid1).
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type raid1|mirror
lvcreate --mirrors SNumber --size SizeMB VG
lvcreate --mirrors SNumber --size SSizeMB VG
OO: --type raid1, --type mirror, --mirrorlog MirrorLog, --stripes Number, OO_LVCREATE_RAID, OO_LVCREATE
OP: PV ...
ID: lvcreate_mirror_or_raid1
DESC: Create a raid1 or mirror LV (variant, infers --type raid1|mirror).
DESC: Create a raid1 or mirror LV (infers --type raid1|mirror).
---
lvcreate --type raid --size SizeMB VG
lvcreate --type raid --size SSizeMB VG
OO: OO_LVCREATE_RAID, OO_LVCREATE
OP: PV ...
ID: lvcreate_raid_any
DESC: Create a raid LV (a specific raid level must be used, e.g. raid1.)
DESC: Create a raid LV (a specific raid level must be used, e.g. raid1).
---
# FIXME: the LV created by these commands actually has type linear or striped,
# The LV created by these commands actually has type linear or striped,
# not snapshot as specified by the command. If LVs never have type
# snapshot, perhaps "snapshot" should not be considered an LV type, but
# another new LV property?
#
# This is the one case where the --type variant is the unpreferred,
# secondary syntax, because the LV type is not actually "snapshot".
# alternate form of lvcreate --snapshot
lvcreate --type snapshot --size SizeMB LV
lvcreate --type snapshot --size SSizeMB LV
OO: --snapshot, --stripes Number, --stripesize SizeKB,
--chunksize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_cow_snapshot
DESC: Create a COW snapshot LV from an origin LV.
DESC: Create a COW snapshot LV of an origin LV
DESC: (also see --snapshot).
FLAGS: SECONDARY_SYNTAX
lvcreate --snapshot --size SizeMB LV
lvcreate --snapshot --size SSizeMB LV
OO: --type snapshot, --stripes Number, --stripesize SizeKB,
--chunksize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_cow_snapshot
DESC: Create a COW snapshot LV from an origin LV.
DESC: Create a COW snapshot LV of an origin LV.
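A hypothetical invocation creating a 100MiB COW snapshot of an existing origin LV:

  # lvcreate --snapshot -L 100m -n snap0 vg0/lv0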
---
# alternate form of lvcreate --snapshot
lvcreate --type snapshot --size SizeMB --virtualsize SizeMB VG
lvcreate --type snapshot --size SSizeMB --virtualsize SizeMB VG
OO: --snapshot, --chunksize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_cow_snapshot_with_virtual_origin
DESC: Create a sparse COW snapshot LV of a virtual origin LV
DESC: (also see --snapshot).
FLAGS: SECONDARY_SYNTAX
lvcreate --snapshot --size SSizeMB --virtualsize SizeMB VG
OO: --type snapshot, --chunksize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_cow_snapshot_with_virtual_origin
DESC: Create a sparse COW snapshot LV of a virtual origin LV.
FLAGS: SECONDARY_SYNTAX
lvcreate --snapshot --size SizeMB --virtualsize SizeMB VG
OO: --type snapshot, --chunksize SizeKB, OO_LVCREATE
OP: PV ...
ID: lvcreate_cow_snapshot_with_virtual_origin
DESC: Create a sparse COW snapshot LV of a virtual origin LV.
---
lvcreate --type thin-pool --size SizeMB VG
lvcreate --type thin-pool --size SSizeMB VG
OO: --thinpool LV_new, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -811,24 +834,24 @@ ID: lvcreate_thinpool
DESC: Create a thin pool.
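For example, a hypothetical 1GiB thin pool:

  # lvcreate --type thin-pool -L 1g -n pool0 vg0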
# alternate form of lvcreate --type thin-pool
lvcreate --thin --size SizeMB VG
lvcreate --thin --size SSizeMB VG
OO: --type thin-pool, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
IO: --mirrors 0
ID: lvcreate_thinpool
DESC: Create a thin pool (variant, infers --type thin-pool).
DESC: Create a thin pool (infers --type thin-pool).
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type thin-pool
lvcreate --size SizeMB --thinpool LV_new VG
lvcreate --size SSizeMB --thinpool LV_new VG
OO: --thin, --type thin-pool, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
IO: --mirrors 0
ID: lvcreate_thinpool
DESC: Create a thin pool named by the --thinpool arg
DESC: (variant, infers --type thin-pool).
DESC: (infers --type thin-pool).
FLAGS: SECONDARY_SYNTAX
---
@@ -838,14 +861,14 @@ FLAGS: SECONDARY_SYNTAX
# still needs to be listed as an optional addition to
# --type cache-pool.
lvcreate --type cache-pool --size SizeMB VG
lvcreate --type cache-pool --size SSizeMB VG
OO: --cache, OO_LVCREATE_POOL, OO_LVCREATE_CACHE, OO_LVCREATE
OP: PV ...
ID: lvcreate_cachepool
DESC: Create a cache pool.
# alternate form of lvcreate --type cache-pool
lvcreate --type cache-pool --size SizeMB --cachepool LV_new VG
lvcreate --type cache-pool --size SSizeMB --cachepool LV_new VG
OO: --cache, OO_LVCREATE_POOL, OO_LVCREATE_CACHE, OO_LVCREATE
OP: PV ...
ID: lvcreate_cachepool
@@ -860,6 +883,7 @@ OO: --thin, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE
IO: --mirrors 0
ID: lvcreate_thin_vol
DESC: Create a thin LV in a thin pool.
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type thin
lvcreate --type thin --virtualsize SizeMB LV_thinpool
@@ -878,8 +902,7 @@ lvcreate --virtualsize SizeMB --thinpool LV_thinpool VG
OO: --type thin, --thin, OO_LVCREATE_THIN, OO_LVCREATE
IO: --mirrors 0
ID: lvcreate_thin_vol
DESC: Create a thin LV in a thin pool (variant, infers --type thin).
FLAGS: SECONDARY_SYNTAX
DESC: Create a thin LV in a thin pool (infers --type thin).
# alternate form of lvcreate --type thin
lvcreate --virtualsize SizeMB LV_thinpool
@@ -898,6 +921,7 @@ OO: --thin, OO_LVCREATE_THIN, OO_LVCREATE
IO: --mirrors 0
ID: lvcreate_thin_snapshot
DESC: Create a thin LV that is a snapshot of an existing thin LV.
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type thin
lvcreate --thin LV_thin
@@ -929,6 +953,7 @@ IO: --mirrors 0
ID: lvcreate_thin_snapshot_of_external
DESC: Create a thin LV that is a snapshot of an external origin LV
DESC: (infers --type thin).
FLAGS: SECONDARY_SYNTAX
---
@@ -948,7 +973,7 @@ DESC: (infers --type thin).
# definition. Note that when LV_new is used in arg pos 1,
# it needs to include a VG name, i.e. VG/LV_new
lvcreate --type thin --virtualsize SizeMB --size SizeMB --thinpool LV_new
lvcreate --type thin --virtualsize SizeMB --size SSizeMB --thinpool LV_new
OO: --thin, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -958,7 +983,7 @@ DESC: Create a thin LV, first creating a thin pool for it,
DESC: where the new thin pool is named by the --thinpool arg.
# alternate form of lvcreate --type thin
lvcreate --thin --virtualsize SizeMB --size SizeMB --thinpool LV_new
lvcreate --thin --virtualsize SizeMB --size SSizeMB --thinpool LV_new
OO: --type thin, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -970,7 +995,7 @@ DESC: (variant, infers --type thin).
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type thin
lvcreate --type thin --virtualsize SizeMB --size SizeMB LV_new|VG
lvcreate --type thin --virtualsize SizeMB --size SSizeMB LV_new|VG
OO: --thin, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -983,7 +1008,7 @@ DESC: arg is a VG name.
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type thin
lvcreate --thin --virtualsize SizeMB --size SizeMB LV_new|VG
lvcreate --thin --virtualsize SizeMB --size SSizeMB LV_new|VG
OO: --type thin, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -997,7 +1022,7 @@ FLAGS: SECONDARY_SYNTAX
---
lvcreate --size SizeMB --virtualsize SizeMB VG
lvcreate --size SSizeMB --virtualsize SizeMB VG
OO: --type thin, --type snapshot, --thin, --snapshot, OO_LVCREATE_POOL, OO_LVCREATE_THIN, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -1017,7 +1042,7 @@ FLAGS: SECONDARY_SYNTAX
# but here it applies to creating the new origin that
# is used to create the cache LV
lvcreate --type cache --size SizeMB --cachepool LV_cachepool VG
lvcreate --type cache --size SSizeMB --cachepool LV_cachepool VG
OO: --cache, OO_LVCREATE_POOL, OO_LVCREATE_CACHE, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
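# Illustrative only (not part of this change): a hypothetical invocation of
# this form, creating a new origin LV and combining it with an existing
# cache pool:
#   lvcreate --type cache --size 1g --cachepool vg0/cpool0 --name lv0 vg0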
@@ -1027,7 +1052,7 @@ DESC: then combining it with the existing cache pool named
DESC: by the --cachepool arg.
# alternate form of lvcreate --type cache
lvcreate --size SizeMB --cachepool LV_cachepool VG
lvcreate --size SSizeMB --cachepool LV_cachepool VG
OO: --type cache, --cache, OO_LVCREATE_CACHE, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -1038,7 +1063,7 @@ DESC: by the --cachepool arg (variant, infers --type cache).
FLAGS: SECONDARY_SYNTAX
# alternate form of lvcreate --type cache
lvcreate --type cache --size SizeMB LV_cachepool
lvcreate --type cache --size SSizeMB LV_cachepool
OO: --cache, OO_LVCREATE_POOL, OO_LVCREATE_CACHE, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -1057,7 +1082,7 @@ FLAGS: SECONDARY_SYNTAX
# an already complicated command above.
#
# # alternate form for lvcreate_cache_vol_with_new_origin
# lvcreate --cache --size SizeMB LV_cachepool
# lvcreate --cache --size SSizeMB LV_cachepool
# OO: --type cache, --cache, OO_LVCREATE_CACHE, OO_LVCREATE, --stripes Number, --stripesize SizeKB
# OP: PV ...
# ID: lvcreate_cache_vol_with_new_origin
@@ -1069,7 +1094,7 @@ FLAGS: SECONDARY_SYNTAX
# 2. If LV is not a cachepool, then it's a disguised lvconvert.
#
# # FIXME: this should be done by lvconvert, and this command removed
# lvcreate --type cache --size SizeMB LV
# lvcreate --type cache --size SSizeMB LV
# OO: OO_LVCREATE_POOL, OO_LVCREATE_CACHE, OO_LVCREATE
# OP: PV ...
# ID: lvcreate_convert_to_cache_vol_with_cachepool
@@ -1086,7 +1111,7 @@ FLAGS: SECONDARY_SYNTAX
# def1: alternate form of lvcreate --type cache, or
# def2: it should be done by lvconvert.
lvcreate --cache --size SizeMB LV
lvcreate --cache --size SSizeMB LV
OO: OO_LVCREATE_CACHE, OO_LVCREATE_POOL, OO_LVCREATE,
--stripes Number, --stripesize SizeKB
OP: PV ...
@@ -1120,10 +1145,10 @@ ID: lvdisplay_general
# --extents is not specified; it's an automatic alternative for --size
lvextend --size SizeMB LV
lvextend --size SSizeMB LV
OO: --alloc Alloc, --autobackup Bool, --force, --mirrors SNumber,
--nofsck, --nosync, --noudevsync, --reportformat ReportFmt, --resizefs,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SizeMB,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SSizeMB,
--type SegType
OP: PV ...
ID: lvextend_by_size
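# Illustrative only (not part of this change): the SSizeMB value type allows
# a signed size, so both of these hypothetical commands match the definition
# above:
#   lvextend --size 20g vg0/lv0
#   lvextend --size +1g vg0/lv0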
@@ -1136,9 +1161,8 @@ OO: --alloc Alloc, --autobackup Bool, --force, --mirrors SNumber,
--type SegType
ID: lvextend_by_pv
DESC: Extend an LV by specified PV extents.
FLAGS: SECONDARY_SYNTAX
lvextend --poolmetadatasize SizeMB LV_thinpool
lvextend --poolmetadatasize SSizeMB LV_thinpool
OO: --alloc Alloc, --autobackup Bool, --force, --mirrors SNumber,
--nofsck, --nosync, --noudevsync,
--reportformat ReportFmt, --stripes Number, --stripesize SizeKB,
@@ -1165,7 +1189,7 @@ ID: lvmconfig_general
---
lvreduce --size SizeMB LV
lvreduce --size SSizeMB LV
OO: --autobackup Bool, --force, --nofsck, --noudevsync,
--reportformat ReportFmt, --resizefs
ID: lvreduce_general
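# Illustrative only (not part of this change): with SSizeMB a signed size is
# accepted, so a hypothetical reduction by one gigabyte could be written as:
#   lvreduce --size -1g vg0/lv0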
@@ -1193,10 +1217,10 @@ ID: lvrename_lv_lv
# value can be checked to match the existing type; using it doesn't
# currently enable any different behavior.
lvresize --size SizeMB LV
lvresize --size SSizeMB LV
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync, --reportformat ReportFmt, --resizefs,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SizeMB,
--stripes Number, --stripesize SizeKB, --poolmetadatasize SSizeMB,
--type SegType
OP: PV ...
ID: lvresize_by_size
@@ -1209,9 +1233,8 @@ OO: --alloc Alloc, --autobackup Bool, --force,
--type SegType
ID: lvresize_by_pv
DESC: Resize an LV by specified PV extents.
FLAGS: SECONDARY_SYNTAX
lvresize --poolmetadatasize SizeMB LV_thinpool
lvresize --poolmetadatasize SSizeMB LV_thinpool
OO: --alloc Alloc, --autobackup Bool, --force,
--nofsck, --nosync, --noudevsync,
--reportformat ReportFmt, --stripes Number, --stripesize SizeKB,
@@ -1487,7 +1510,6 @@ vgexport --all
OO: OO_VGEXPORT
ID: vgexport_all
DESC: Export all VGs.
FLAGS: SECONDARY_SYNTAX
---
@@ -1611,14 +1633,12 @@ config
OO: OO_CONFIG
OP: String ...
ID: lvmconfig_general
FLAGS: SECONDARY_SYNTAX
# use lvmconfig
dumpconfig
OO: OO_CONFIG
OP: String ...
ID: lvmconfig_general
FLAGS: SECONDARY_SYNTAX
devtypes
OO: --aligned, --binary, --nameprefixes, --noheadings,
@@ -1652,7 +1672,6 @@ ID: version_general
# deprecated
pvdata
ID: pvdata_general
FLAGS: SECONDARY_SYNTAX
segtypes
ID: segtypes_general
@@ -1666,22 +1685,18 @@ ID: tags_general
# deprecated
lvmchange
ID: lvmchange_general
FLAGS: SECONDARY_SYNTAX
# deprecated
lvmdiskscan
OO: --lvmpartition, --readonly
ID: lvmdiskscan_general
FLAGS: SECONDARY_SYNTAX
# deprecated
lvmsadc
ID: lvmsadc_general
FLAGS: SECONDARY_SYNTAX
# deprecated
lvmsar
OO: --full, --stdin
ID: lvmsar_general
FLAGS: SECONDARY_SYNTAX

File diff suppressed because it is too large.


@@ -211,11 +211,56 @@ struct command {
int pos_count; /* temp counter used by create-command */
};
/* see global opt_names[] */
struct opt_name {
const char *name; /* "foo_ARG" */
int opt_enum; /* foo_ARG */
const char short_opt; /* -f */
char _padding[7];
const char *long_opt; /* --foo */
int val_enum; /* xyz_VAL when --foo takes a val like "--foo xyz" */
uint32_t flags;
uint32_t prio;
const char *desc;
};
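/*
 * Illustrative only (not from this change): a hypothetical opt_names[]
 * entry for -v/--verbose, following the field order above, might look like:
 *   { "verbose_ARG", verbose_ARG, 'v', "", "--verbose", 0, ARG_COUNTABLE, 0, NULL }
 */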
/* see global val_names[] */
struct val_name {
const char *enum_name; /* "foo_VAL" */
int val_enum; /* foo_VAL */
int (*fn) (struct cmd_context *cmd, struct arg_values *av); /* foo_arg() */
const char *name; /* FooVal */
const char *usage;
};
/* see global lv_props[] */
struct lv_prop {
const char *enum_name; /* "is_foo_LVP" */
int lvp_enum; /* is_foo_LVP */
const char *name; /* "lv_is_foo" */
int (*fn) (struct cmd_context *cmd, struct logical_volume *lv); /* lv_is_foo() */
};
/* see global lv_types[] */
struct lv_type {
const char *enum_name; /* "foo_LVT" */
int lvt_enum; /* foo_LVT */
const char *name; /* "foo" */
int (*fn) (struct cmd_context *cmd, struct logical_volume *lv); /* lv_is_foo() */
};
int define_commands(char *run_name);
int command_id_to_enum(const char *str);
void print_usage(struct command *cmd, int longhelp);
void print_usage(struct command *cmd, int longhelp, int desc_first);
void print_usage_common_cmd(struct command_name *cname, struct command *cmd);
void print_usage_common_lvm(struct command_name *cname, struct command *cmd);
void print_usage_notes(struct command_name *cname);
void factor_common_options(void);
int command_has_alternate_extents(const char *name);
#endif


@@ -172,7 +172,9 @@ enum {
SELECT_ARG,
EXEC_ARG,
FILEMAP_ARG,
FOLLOW_ARG,
FORCE_ARG,
FOREGROUND_ARG,
GID_ARG,
GROUP_ARG,
GROUP_ID_ARG,
@@ -196,6 +198,7 @@ enum {
NOTABLE_ARG,
NOTIMESUFFIX_ARG,
UDEVCOOKIE_ARG,
NOMONITOR_ARG,
NOUDEVRULES_ARG,
NOUDEVSYNC_ARG,
OPTIONS_ARG,
@@ -4999,15 +5002,25 @@ static int _stats_check_filemap_switches(void)
return 1;
}
static dm_filemapd_mode_t _stats_get_filemapd_mode(void)
{
if (!_switches[FOLLOW_ARG])
return DM_FILEMAPD_FOLLOW_INODE;
return dm_filemapd_mode_from_string(_string_args[FOLLOW_ARG]);
}
static int _stats_create_file(CMD_ARGS)
{
const char *alias, *program_id = DM_STATS_PROGRAM_ID;
const char *bounds_str = _string_args[BOUNDS_ARG];
int foreground = _switches[FOREGROUND_ARG];
int verbose = _switches[VERBOSE_ARG];
uint64_t *regions, *region, count = 0;
struct dm_histogram *bounds = NULL;
char *path, *abspath = NULL;
struct dm_stats *dms = NULL;
int group, fd = -1, precise;
dm_filemapd_mode_t mode;
if (names) {
err("Device names are not compatibile with --filemap.");
@@ -5060,6 +5073,10 @@ static int _stats_create_file(CMD_ARGS)
precise = _int_args[PRECISE_ARG];
group = !_switches[NOGROUP_ARG];
if (!_switches[NOMONITOR_ARG] && group)
if ((mode = _stats_get_filemapd_mode()) == -1)
goto bad;
if (!(dms = dm_stats_create(DM_STATS_PROGRAM_ID)))
goto_bad;
@@ -5091,6 +5108,12 @@ static int _stats_create_file(CMD_ARGS)
regions = dm_stats_create_regions_from_fd(dms, fd, group, precise,
bounds, alias);
if (!_switches[NOMONITOR_ARG] && group) {
if (!dm_stats_start_filemapd(fd, regions[0], abspath, mode,
foreground, verbose))
log_warn("Failed to start filemap monitoring daemon.");
}
if (close(fd))
log_error("Error closing %s", abspath);
@@ -5620,12 +5643,16 @@ out:
static int _stats_update_file(CMD_ARGS)
{
uint64_t group_id, *region, *regions, count = 0;
uint64_t group_id, *region, *regions = NULL, count = 0;
const char *program_id = DM_STATS_PROGRAM_ID;
int foreground = _switches[FOREGROUND_ARG];
int verbose = _switches[VERBOSE_ARG];
char *path, *abspath = NULL;
dm_filemapd_mode_t mode;
struct dm_stats *dms;
char *path, *abspath;
int fd = -1;
if (names) {
err("Device names are not compatibile with update_filemap.");
return 0;
@@ -5654,6 +5681,10 @@ static int _stats_update_file(CMD_ARGS)
group_id = (uint64_t) _int_args[GROUP_ID_ARG];
if (!_switches[NOMONITOR_ARG])
if ((mode = _stats_get_filemapd_mode()) < 0)
goto bad;
if (_switches[PROGRAM_ID_ARG])
program_id = _string_args[PROGRAM_ID_ARG];
if (!strlen(program_id) && !_switches[FORCE_ARG])
@@ -5676,6 +5707,25 @@ static int _stats_update_file(CMD_ARGS)
/* force creation of a region with no id */
dm_stats_set_program_id(dms, 1, NULL);
/*
* Start dmfilemapd - it will test the file descriptor to determine
* whether it is necessary to call dm_stats_update_regions_from_fd().
*
* If starting the daemon fails, fall back to a direct update.
*/
if (!_switches[NOMONITOR_ARG]) {
if (!dm_stats_start_filemapd(fd, group_id, abspath, mode,
foreground, verbose)) {
log_warn("Failed to start filemap monitoring daemon.");
goto fallback;
}
goto out;
}
fallback:
/*
* --nomonitor case - perform a one-shot update directly from dmstats.
*/
regions = dm_stats_update_regions_from_fd(dms, fd, group_id);
if (close(fd))
@@ -5700,6 +5750,7 @@ static int _stats_update_file(CMD_ARGS)
printf("%s: Updated group ID " FMTu64 " with "FMTu64" region(s).\n",
path, group_id, count);
out:
dm_free(regions);
dm_free(abspath);
dm_stats_destroy(dms);
@@ -5732,7 +5783,7 @@ static int _stats_help(CMD_ARGS);
* [--programid <id>] [--userdata <data> ]
* [--bounds histogram_boundaries] [--precise]
* [--alldevices|<device>...]
* create --filemap [--nogroup]
* create --filemap [--nogroup] [--nomonitor] [--follow=mode]
* [--programid <id>] [--userdata <data> ]
* [--bounds histogram_boundaries] [--precise] [<file_path>]
* delete [--allprograms|--programid id]
@@ -5764,6 +5815,8 @@ static int _stats_help(CMD_ARGS);
#define PRECISE_OPTS "[--precise] "
#define SEGMENTS_OPT "[--segments] "
#define EXTRA_OPTS HIST_OPTS PRECISE_OPTS
#define FILE_MONITOR_OPTS "[--nomonitor] [--follow mode]"
#define GROUP_ID_OPT "--groupid <id> "
#define ALL_PROGS_OPT "[--allprograms|--programid id] "
#define ALL_REGIONS_OPT "[--allregions|--regionid id] "
#define ALL_DEVICES_OPT "[--alldevices|<device>...] "
@@ -5774,12 +5827,13 @@ static int _stats_help(CMD_ARGS);
/* command options */
#define CREATE_OPTS REGION_OPTS INDENT ID_OPTS INDENT EXTRA_OPTS INDENT SEGMENTS_OPT
#define FILEMAP_OPTS "--filemap [--nogroup]" INDENT ID_OPTS INDENT EXTRA_OPTS
#define FILEMAP_OPTS "--filemap [--nogroup] " FILE_MONITOR_OPTS INDENT ID_OPTS INDENT EXTRA_OPTS
#define PRINT_OPTS "[--clear] " ALL_PROGS_REGIONS_DEVICES
#define REPORT_OPTS "[--interval <seconds>] [--count <cnt>]" INDENT \
"[--units <u>] " SELECT_OPTS INDENT DM_REPORT_OPTS INDENT ALL_PROGS_OPT
#define GROUP_OPTS "[--alias NAME] --regions <regions>" INDENT ALL_PROGS_OPT ALL_DEVICES_OPT
#define UNGROUP_OPTS ALL_PROGS_OPT INDENT ALL_DEVICES_OPT
#define UNGROUP_OPTS GROUP_ID_OPT ALL_PROGS_OPT INDENT ALL_DEVICES_OPT
#define UPDATE_OPTS GROUP_ID_OPT INDENT FILE_MONITOR_OPTS " <file_path>"
/*
* The 'create' command has two entries in the table, to allow for the
@@ -5790,14 +5844,14 @@ static struct command _stats_subcommands[] = {
{"help", "", 0, 0, 0, 0, _stats_help},
{"clear", ALL_REGIONS_OPT ALL_DEVICES_OPT, 0, -1, 1, 0, _stats_clear},
{"create", CREATE_OPTS ALL_DEVICES_OPT, 0, -1, 1, 0, _stats_create},
{"create", FILEMAP_OPTS "[<file_path>]", 0, -1, 1, 0, _stats_create},
{"create", FILEMAP_OPTS "<file_path>", 0, -1, 1, 0, _stats_create},
{"delete", ALL_PROGS_REGIONS_DEVICES, 1, -1, 1, 0, _stats_delete},
{"group", GROUP_OPTS, 1, -1, 1, 0, _stats_group},
{"list", ALL_PROGS_OPT ALL_REGIONS_OPT, 0, -1, 1, 0, _stats_report},
{"print", PRINT_OPTS, 0, -1, 1, 0, _stats_print},
{"report", REPORT_OPTS "[<device>...]", 0, -1, 1, 0, _stats_report},
{"ungroup", "--groupid <id> " UNGROUP_OPTS, 1, -1, 1, 0, _stats_ungroup},
{"update_filemap", "--groupid <id> <file_path>", 1, 1, 0, 0, _stats_update_file},
{"ungroup", UNGROUP_OPTS, 1, -1, 1, 0, _stats_ungroup},
{"update_filemap", UPDATE_OPTS, 1, 1, 0, 0, _stats_update_file},
{"version", "", 0, -1, 1, 0, _version},
{NULL, NULL, 0, 0, 0, 0, NULL}
};
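/*
 * Illustrative only (not part of this change): with the table above, the new
 * monitoring options could be exercised with hypothetical invocations such as:
 *   dmstats create --filemap --follow path vm.img
 *   dmstats update_filemap --groupid 0 --nomonitor vm.img
 */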
@@ -6053,6 +6107,11 @@ static int _stats(CMD_ARGS)
return 0;
}
if (_switches[FOLLOW_ARG] && _switches[NOMONITOR_ARG]) {
log_error("Use of --follow is incompatible with --nomonitor.");
return 0;
}
/*
* Pass the sub-command through to allow a single function to be
* used to implement several distinct sub-commands (e.g. 'report'
@@ -6418,7 +6477,9 @@ static int _process_switches(int *argcp, char ***argvp, const char *dev_dir)
{"select", 1, &ind, SELECT_ARG},
{"exec", 1, &ind, EXEC_ARG},
{"filemap", 0, &ind, FILEMAP_ARG},
{"follow", 1, &ind, FOLLOW_ARG},
{"force", 0, &ind, FORCE_ARG},
{"foreground", 0, &ind, FOREGROUND_ARG},
{"gid", 1, &ind, GID_ARG},
{"group", 0, &ind, GROUP_ARG},
{"groupid", 1, &ind, GROUP_ID_ARG},
@@ -6441,6 +6502,7 @@ static int _process_switches(int *argcp, char ***argvp, const char *dev_dir)
{"notable", 0, &ind, NOTABLE_ARG},
{"notimesuffix", 0, &ind, NOTIMESUFFIX_ARG},
{"udevcookie", 1, &ind, UDEVCOOKIE_ARG},
{"nomonitor", 0, &ind, NOMONITOR_ARG},
{"noudevrules", 0, &ind, NOUDEVRULES_ARG},
{"noudevsync", 0, &ind, NOUDEVSYNC_ARG},
{"options", 1, &ind, OPTIONS_ARG},
@@ -6584,8 +6646,14 @@ static int _process_switches(int *argcp, char ***argvp, const char *dev_dir)
_switches[COLS_ARG]++;
if (ind == FILEMAP_ARG)
_switches[FILEMAP_ARG]++;
if (ind == FOLLOW_ARG) {
_switches[FOLLOW_ARG]++;
_string_args[FOLLOW_ARG] = optarg;
}
if (c == 'f' || ind == FORCE_ARG)
_switches[FORCE_ARG]++;
if (ind == FOREGROUND_ARG)
_switches[FOREGROUND_ARG]++;
if (c == 'r' || ind == READ_ONLY)
_switches[READ_ONLY]++;
if (ind == HISTOGRAM_ARG)
@@ -6678,6 +6746,8 @@ static int _process_switches(int *argcp, char ***argvp, const char *dev_dir)
_switches[UDEVCOOKIE_ARG]++;
_udev_cookie = _get_cookie_value(optarg);
}
if (ind == NOMONITOR_ARG)
_switches[NOMONITOR_ARG]++;
if (ind == NOUDEVRULES_ARG)
_switches[NOUDEVRULES_ARG]++;
if (ind == NOUDEVSYNC_ARG)


@@ -1228,6 +1228,9 @@ static int _lvconvert_mirrors(struct cmd_context *cmd,
static int _is_valid_raid_conversion(const struct segment_type *from_segtype,
const struct segment_type *to_segtype)
{
if (!from_segtype)
return 1;
if (from_segtype == to_segtype)
return 1;
@@ -1356,7 +1359,7 @@ static int _lvconvert_raid(struct logical_volume *lv, struct lvconvert_params *l
DEFAULT_RAID1_MAX_IMAGES, lp->segtype->name, display_lvname(lv));
return 0;
}
if (!lv_raid_change_image_count(lv, image_count, lp->pvh))
if (!lv_raid_change_image_count(lv, image_count, lp->region_size, lp->pvh))
return_0;
log_print_unless_silent("Logical volume %s successfully converted.",
@@ -1365,10 +1368,13 @@ static int _lvconvert_raid(struct logical_volume *lv, struct lvconvert_params *l
return 1;
}
goto try_new_takeover_or_reshape;
} else if (!*lp->type_str || seg->segtype == lp->segtype) {
}
#if 0
} else if ((!*lp->type_str || seg->segtype == lp->segtype) && !lp->stripe_size_supplied) {
log_error("Conversion operation not yet supported.");
return 0;
}
#endif
if ((seg_is_linear(seg) || seg_is_striped(seg) || seg_is_mirrored(seg) || lv_is_raid(lv)) &&
(lp->type_str && lp->type_str[0])) {
@@ -1390,10 +1396,16 @@ static int _lvconvert_raid(struct logical_volume *lv, struct lvconvert_params *l
return 0;
}
/* FIXME This needs changing globally. */
if (!arg_is_set(cmd, stripes_long_ARG))
lp->stripes = 0;
if (!arg_is_set(cmd, type_ARG))
lp->segtype = NULL;
if (!arg_is_set(cmd, regionsize_ARG))
lp->region_size = 0;
if (!lv_raid_convert(lv, lp->segtype, lp->yes, lp->force, lp->stripes, lp->stripe_size_supplied, lp->stripe_size,
if (!lv_raid_convert(lv, lp->segtype,
lp->yes, lp->force, lp->stripes, lp->stripe_size_supplied, lp->stripe_size,
lp->region_size, lp->pvh))
return_0;
@@ -1410,12 +1422,16 @@ try_new_takeover_or_reshape:
/* FIXME This needs changing globally. */
if (!arg_is_set(cmd, stripes_long_ARG))
lp->stripes = 0;
if (!arg_is_set(cmd, type_ARG))
lp->segtype = NULL;
/* Only let raid4 through for now. */
if (lp->type_str && lp->type_str[0] && lp->segtype != seg->segtype &&
((seg_is_raid4(seg) && seg_is_striped(lp) && lp->stripes > 1) ||
(seg_is_striped(seg) && seg->area_count > 1 && seg_is_raid4(lp)))) {
if (!lv_raid_convert(lv, lp->segtype, lp->yes, lp->force, lp->stripes, lp->stripe_size_supplied, lp->stripe_size,
if (!lp->segtype ||
(lp->type_str && lp->type_str[0] && lp->segtype != seg->segtype &&
((seg_is_raid4(seg) && seg_is_striped(lp) && lp->stripes > 1) ||
(seg_is_striped(seg) && seg->area_count > 1 && seg_is_raid4(lp))))) {
if (!lv_raid_convert(lv, lp->segtype,
lp->yes, lp->force, lp->stripes, lp->stripe_size_supplied, lp->stripe_size,
lp->region_size, lp->pvh))
return_0;
@@ -1700,6 +1716,8 @@ static int _lvconvert_raid_types(struct cmd_context *cmd, struct logical_volume
/* FIXME This is incomplete */
if (_mirror_or_raid_type_requested(cmd, lp->type_str) || _raid0_type_requested(lp->type_str) ||
_striped_type_requested(lp->type_str) || lp->mirrorlog || lp->corelog) {
if (!arg_is_set(cmd, type_ARG))
lp->segtype = first_seg(lv)->segtype;
/* FIXME Handle +/- adjustments too? */
if (!get_stripe_params(cmd, lp->segtype, &lp->stripes, &lp->stripe_size, &lp->stripes_supplied, &lp->stripe_size_supplied))
goto_out;
@@ -2505,7 +2523,7 @@ static int _lvconvert_swap_pool_metadata(struct cmd_context *cmd,
struct volume_group *vg = lv->vg;
struct logical_volume *prev_metadata_lv;
struct lv_segment *seg;
struct lv_types *lvtype;
struct lv_type *lvtype;
char meta_name[NAME_LEN];
const char *swap_name;
uint32_t chunk_size;
@@ -2990,9 +3008,9 @@ static int _lvconvert_to_pool(struct cmd_context *cmd,
}
/* Allocate a new pool segment */
if (!(seg = alloc_lv_segment(pool_segtype, pool_lv, 0, data_lv->le_count,
if (!(seg = alloc_lv_segment(pool_segtype, pool_lv, 0, data_lv->le_count, 0,
pool_lv->status, 0, NULL, 1,
data_lv->le_count, 0, 0, 0, NULL)))
data_lv->le_count, 0, 0, 0, 0, NULL)))
return_0;
/* Add the new segment to the layer LV */
@@ -3650,8 +3668,9 @@ static int _lvconvert_combine_split_snapshot_single(struct cmd_context *cmd,
int lvconvert_combine_split_snapshot_cmd(struct cmd_context *cmd, int argc, char **argv)
{
const char *vgname = NULL;
const char *lvname1;
const char *lvname2;
const char *lvname1_orig;
const char *lvname2_orig;
const char *lvname1_split;
char *vglv;
int vglv_sz;
@@ -3669,20 +3688,25 @@ int lvconvert_combine_split_snapshot_cmd(struct cmd_context *cmd, int argc, char
* This is the only instance in all commands.
*/
lvname1 = cmd->position_argv[0];
lvname2 = cmd->position_argv[1];
lvname1_orig = cmd->position_argv[0];
lvname2_orig = cmd->position_argv[1];
if (strstr("/", lvname1) && !strstr("/", lvname2) && !getenv("LVM_VG_NAME")) {
if (!validate_lvname_param(cmd, &vgname, &lvname1))
if (strchr(lvname1_orig, '/') && !strchr(lvname2_orig, '/') && !getenv("LVM_VG_NAME")) {
if (!(lvname1_split = dm_pool_strdup(cmd->mem, lvname1_orig)))
return_ECMD_FAILED;
vglv_sz = strlen(vgname) + strlen(lvname2) + 2;
if (!validate_lvname_param(cmd, &vgname, &lvname1_split))
return_ECMD_FAILED;
vglv_sz = strlen(vgname) + strlen(lvname2_orig) + 2;
if (!(vglv = dm_pool_alloc(cmd->mem, vglv_sz)) ||
dm_snprintf(vglv, vglv_sz, "%s/%s", vgname, lvname2) < 0) {
dm_snprintf(vglv, vglv_sz, "%s/%s", vgname, lvname2_orig) < 0) {
log_error("vg/lv string alloc failed.");
return_ECMD_FAILED;
}
/* vglv is now vgname/lvname2 and replaces lvname2_orig */
cmd->position_argv[1] = vglv;
}
@@ -3817,7 +3841,7 @@ static int _lvconvert_to_cache_vol_single(struct cmd_context *cmd,
if (!lv_is_cache_pool(cachepool_lv)) {
int lvt_enum = get_lvt_enum(cachepool_lv);
struct lv_types *lvtype = get_lv_type(lvt_enum);
struct lv_type *lvtype = get_lv_type(lvt_enum);
if (lvt_enum != striped_LVT && lvt_enum != linear_LVT && lvt_enum != raid_LVT) {
log_error("LV %s with type %s cannot be converted to a cache pool.",
@@ -3926,7 +3950,7 @@ static int _lvconvert_to_thin_with_external_single(struct cmd_context *cmd,
if (!lv_is_thin_pool(thinpool_lv)) {
int lvt_enum = get_lvt_enum(thinpool_lv);
struct lv_types *lvtype = get_lv_type(lvt_enum);
struct lv_type *lvtype = get_lv_type(lvt_enum);
if (lvt_enum != striped_LVT && lvt_enum != linear_LVT && lvt_enum != raid_LVT) {
log_error("LV %s with type %s cannot be converted to a thin pool.",
@@ -4250,7 +4274,7 @@ static int _lvconvert_raid_types_check(struct cmd_context *cmd, struct logical_v
int lv_is_named_arg)
{
int lvt_enum = get_lvt_enum(lv);
struct lv_types *lvtype = get_lv_type(lvt_enum);
struct lv_type *lvtype = get_lv_type(lvt_enum);
if (!lv_is_visible(lv)) {
if (!lv_is_cache_pool_metadata(lv) &&


@@ -97,8 +97,8 @@ static char *_list_args(const char *text, int state)
while (match_no < cname->num_args) {
char s[3];
char c;
if (!(c = (_cmdline->arg_props +
cname->valid_args[match_no++])->short_arg))
if (!(c = (_cmdline->opt_names +
cname->valid_args[match_no++])->short_opt))
continue;
sprintf(s, "-%c", c);
@@ -113,8 +113,8 @@ static char *_list_args(const char *text, int state)
while (match_no - cname->num_args < cname->num_args) {
const char *l;
l = (_cmdline->arg_props +
cname->valid_args[match_no++ - cname->num_args])->long_arg;
l = (_cmdline->opt_names +
cname->valid_args[match_no++ - cname->num_args])->long_opt;
if (*(l + 2) && !strncmp(text, l, len))
return strdup(l);
}


@@ -19,7 +19,7 @@
struct cmd_context;
struct cmdline_context {
struct arg_props *arg_props;
struct opt_name *opt_names;
struct command *commands;
int num_commands;
struct command_name *command_names;


@@ -53,47 +53,27 @@ extern char *optarg;
/*
* Table of valid --option values.
*/
static struct val_props _val_props[VAL_COUNT + 1] = {
#define val(a, b, c, d) {a, b, c, d},
#include "vals.h"
#undef val
};
extern struct val_name val_names[VAL_COUNT + 1];
/*
* Table of valid --option's
*/
static struct arg_props _arg_props[ARG_COUNT + 1] = {
#define arg(a, b, c, d, e, f, g) {a, b, "", "--" c, d, e, f, g},
#include "args.h"
#undef arg
};
extern struct opt_name opt_names[ARG_COUNT + 1];
/*
* Table of LV properties
*/
static struct lv_props _lv_props[LVP_COUNT + 1] = {
#define lvp(a, b, c) {a, b, c},
#include "lv_props.h"
#undef lvp
};
extern struct lv_prop lv_props[LVP_COUNT + 1];
/*
* Table of LV types
*/
static struct lv_types _lv_types[LVT_COUNT + 1] = {
#define lvt(a, b, c) {a, b, c},
#include "lv_types.h"
#undef lvt
};
extern struct lv_type lv_types[LVT_COUNT + 1];
/*
* Table of command names
*/
struct command_name command_names[MAX_COMMAND_NAMES] = {
#define xx(a, b, c...) { # a, b, c, a},
#include "commands.h"
#undef xx
};
extern struct command_name command_names[MAX_COMMAND_NAMES];
/*
* Table of commands (as defined in command-lines.in)
@@ -277,7 +257,7 @@ unsigned grouped_arg_is_set(const struct arg_values *av, int a)
const char *arg_long_option_name(int a)
{
return _cmdline.arg_props[a].long_arg;
return _cmdline.opt_names[a].long_opt;
}
const char *arg_value(const struct cmd_context *cmd, int a)
@@ -316,7 +296,7 @@ int32_t first_grouped_arg_int_value(const struct cmd_context *cmd, int a, const
int32_t arg_int_value(const struct cmd_context *cmd, int a, const int32_t def)
{
return (_cmdline.arg_props[a].flags & ARG_GROUPABLE) ?
return (_cmdline.opt_names[a].flags & ARG_GROUPABLE) ?
first_grouped_arg_int_value(cmd, a, def) : (arg_is_set(cmd, a) ? cmd->opt_arg_values[a].i_value : def);
}
@@ -629,19 +609,41 @@ static int _size_arg(struct cmd_context *cmd __attribute__((unused)),
return 1;
}
/* negative not accepted */
int size_kb_arg(struct cmd_context *cmd, struct arg_values *av)
{
if (!_size_arg(cmd, av, 2, 0))
return 0;
if (av->sign == SIGN_MINUS) {
log_error("Size may not be negative.");
return 0;
}
return 1;
}
int ssize_kb_arg(struct cmd_context *cmd, struct arg_values *av)
{
return _size_arg(cmd, av, 2, 0);
}
int size_mb_arg(struct cmd_context *cmd, struct arg_values *av)
{
return _size_arg(cmd, av, 2048, 0);
if (!_size_arg(cmd, av, 2048, 0))
return 0;
if (av->sign == SIGN_MINUS) {
log_error("Size may not be negative.");
return 0;
}
return 1;
}
int size_mb_arg_with_percent(struct cmd_context *cmd, struct arg_values *av)
int ssize_mb_arg(struct cmd_context *cmd, struct arg_values *av)
{
return _size_arg(cmd, av, 2048, 1);
return _size_arg(cmd, av, 2048, 0);
}
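/*
 * Illustrative only (not part of this change): size_mb_arg() now rejects a
 * leading '-' (e.g. "-1g"), while ssize_mb_arg() continues to accept signed
 * values such as "+1g" or "-1g" for the SSizeMB val type.
 */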
int int_arg(struct cmd_context *cmd __attribute__((unused)), struct arg_values *av)
@@ -672,8 +674,8 @@ int int_arg_with_sign(struct cmd_context *cmd __attribute__((unused)), struct ar
return 1;
}
int int_arg_with_sign_and_percent(struct cmd_context *cmd __attribute__((unused)),
struct arg_values *av)
int extents_arg(struct cmd_context *cmd __attribute__((unused)),
struct arg_values *av)
{
char *ptr;
@@ -1066,7 +1068,7 @@ static void _set_valid_args_for_command_name(int ci)
for (i = 0; i < ARG_COUNT; i++) {
if (all_args[i]) {
opt_enum = _cmdline.arg_props[i].arg_enum;
opt_enum = _cmdline.opt_names[i].opt_enum;
command_names[ci].valid_args[num_args] = opt_enum;
num_args++;
@@ -1183,18 +1185,18 @@ int lvm_register_commands(char *name)
return 1;
}
struct lv_props *get_lv_prop(int lvp_enum)
struct lv_prop *get_lv_prop(int lvp_enum)
{
if (!lvp_enum)
return NULL;
return &_lv_props[lvp_enum];
return &lv_props[lvp_enum];
}
struct lv_types *get_lv_type(int lvt_enum)
struct lv_type *get_lv_type(int lvt_enum)
{
if (!lvt_enum)
return NULL;
return &_lv_types[lvt_enum];
return &lv_types[lvt_enum];
}
struct command *get_command(int cmd_enum)
@@ -1253,13 +1255,9 @@ static int _command_required_opt_matches(struct cmd_context *cmd, int ci, int ro
* For some commands, --size and --extents are interchangeable,
* but command[] definitions use only --size.
*/
if ((opt_enum == size_ARG) && arg_is_set(cmd, extents_ARG)) {
if (!strcmp(commands[ci].name, "lvcreate") ||
!strcmp(commands[ci].name, "lvresize") ||
!strcmp(commands[ci].name, "lvextend") ||
!strcmp(commands[ci].name, "lvreduce"))
goto check_val;
}
if ((opt_enum == size_ARG) && arg_is_set(cmd, extents_ARG) &&
command_has_alternate_extents(commands[ci].name))
goto check_val;
return 0;
@@ -1560,11 +1558,10 @@ static struct command *_find_command(struct cmd_context *cmd, const char *path,
if (!best_required) {
/* cmd did not have all the required opt/pos args of any command */
log_error("Failed to find a matching command definition.");
log_error("Run '%s --help' for more information.", name);
log_error("Incorrect syntax. Run '%s --help' for more information.", name);
if (close_ro) {
log_warn("Closest command usage is:");
print_usage(&_cmdline.commands[close_i], 0);
log_warn("Nearest similar command has syntax:");
print_usage(&_cmdline.commands[close_i], 0, 0);
}
return NULL;
}
@@ -1677,10 +1674,11 @@ static void _short_usage(const char *name)
log_error("Run `%s --help' for more information.", name);
}
static int _usage(const char *name, int longhelp)
static int _usage(const char *name, int longhelp, int skip_notes)
{
struct command_name *cname = find_command_name(name);
struct command *cmd;
int show_full = longhelp;
int i;
if (!cname) {
@@ -1699,7 +1697,7 @@ static int _usage(const char *name, int longhelp)
/* Reduce the default output when there are several variants. */
if (cname->variants < 3)
longhelp = 1;
show_full = 1;
for (i = 0; i < COMMAND_COUNT; i++) {
if (strcmp(_cmdline.commands[i].name, name))
@@ -1708,19 +1706,26 @@ static int _usage(const char *name, int longhelp)
if (_cmdline.commands[i].cmd_flags & CMD_FLAG_PREVIOUS_SYNTAX)
continue;
if ((_cmdline.commands[i].cmd_flags & CMD_FLAG_SECONDARY_SYNTAX) && !longhelp)
if ((_cmdline.commands[i].cmd_flags & CMD_FLAG_SECONDARY_SYNTAX) && !show_full)
continue;
print_usage(&_cmdline.commands[i], longhelp);
print_usage(&_cmdline.commands[i], show_full, 1);
cmd = &_cmdline.commands[i];
}
/* Common options are printed once for all variants of a command name. */
if (longhelp) {
if (show_full) {
print_usage_common_cmd(cname, cmd);
print_usage_common_lvm(cname, cmd);
} else
log_print("Use --longhelp to show all options.");
}
if (skip_notes)
return 1;
if (longhelp)
print_usage_notes(cname);
else
log_print("Use --longhelp to show all options and advanced commands.");
return 1;
}
@@ -1732,8 +1737,10 @@ static void _usage_all(void)
for (i = 0; i < MAX_COMMAND_NAMES; i++) {
if (!command_names[i].name)
break;
_usage(command_names[i].name, 0);
_usage(command_names[i].name, 1, 1);
}
print_usage_notes(NULL);
}
/*
@@ -1762,12 +1769,12 @@ static void _usage_all(void)
* can't have more than 'a' long arguments.
*/
static void _add_getopt_arg(int arg_enum, char **optstrp, struct option **longoptsp)
static void _add_getopt_arg(int opt_enum, char **optstrp, struct option **longoptsp)
{
struct arg_props *a = _cmdline.arg_props + arg_enum;
struct opt_name *a = _cmdline.opt_names + opt_enum;
if (a->short_arg) {
*(*optstrp)++ = a->short_arg;
if (a->short_opt) {
*(*optstrp)++ = a->short_opt;
if (a->val_enum)
*(*optstrp)++ = ':';
@@ -1775,8 +1782,8 @@ static void _add_getopt_arg(int arg_enum, char **optstrp, struct option **longop
#ifdef HAVE_GETOPTLONG
/* long_arg is "--foo", so +2 is the offset of the name after "--" */
if (*(a->long_arg + 2)) {
(*longoptsp)->name = a->long_arg + 2;
if (*(a->long_opt + 2)) {
(*longoptsp)->name = a->long_opt + 2;
(*longoptsp)->has_arg = a->val_enum ? 1 : 0;
(*longoptsp)->flag = NULL;
@@ -1793,10 +1800,10 @@ static void _add_getopt_arg(int arg_enum, char **optstrp, struct option **longop
* (11 is the enum value for --cachepool, so 11+128)
*/
if (a->short_arg)
(*longoptsp)->val = a->short_arg;
if (a->short_opt)
(*longoptsp)->val = a->short_opt;
else
(*longoptsp)->val = arg_enum + 128;
(*longoptsp)->val = opt_enum + 128;
(*longoptsp)++;
}
#endif
@@ -1829,14 +1836,14 @@ static int _find_arg(const char *cmd_name, int goval)
for (i = 0; i < cname->num_args; i++) {
arg_enum = cname->valid_args[i];
/* assert arg_enum == _cmdline.arg_props[arg_enum].arg_enum */
/* assert arg_enum == _cmdline.opt_names[arg_enum].arg_enum */
/* the value returned by getopt matches the ASCII value of a single-letter option */
if (_cmdline.arg_props[arg_enum].short_arg && (goval == _cmdline.arg_props[arg_enum].short_arg))
if (_cmdline.opt_names[arg_enum].short_opt && (goval == _cmdline.opt_names[arg_enum].short_opt))
return arg_enum;
/* the value returned by getopt matches the enum value plus 128 */
if (!_cmdline.arg_props[arg_enum].short_arg && (goval == (arg_enum + 128)))
if (!_cmdline.opt_names[arg_enum].short_opt && (goval == (arg_enum + 128)))
return arg_enum;
}
@@ -1847,7 +1854,7 @@ static int _process_command_line(struct cmd_context *cmd, int *argc, char ***arg
{
char str[((ARG_COUNT + 1) * 2) + 1], *ptr = str;
struct option opts[ARG_COUNT + 1], *o = opts;
struct arg_props *a;
struct opt_name *a;
struct arg_values *av;
struct arg_value_group_list *current_group = NULL;
struct command_name *cname;
@@ -1866,7 +1873,7 @@ static int _process_command_line(struct cmd_context *cmd, int *argc, char ***arg
/*
* create the short-form character array (str) and the long-form option
* array (opts) to pass to the getopt_long() function. IOW we generate
* the arguments to pass to getopt_long() from the args.h/arg_props data.
* the arguments to pass to getopt_long() from the opt_names data.
*/
for (i = 0; i < cname->num_args; i++)
_add_getopt_arg(cname->valid_args[i], &ptr, &o);
@@ -1890,7 +1897,7 @@ static int _process_command_line(struct cmd_context *cmd, int *argc, char ***arg
return 0;
}
a = _cmdline.arg_props + arg_enum;
a = _cmdline.opt_names + arg_enum;
av = &cmd->opt_arg_values[arg_enum];
@@ -1920,10 +1927,10 @@ static int _process_command_line(struct cmd_context *cmd, int *argc, char ***arg
if (av->count && !(a->flags & ARG_COUNTABLE)) {
log_error("Option%s%c%s%s may not be repeated.",
a->short_arg ? " -" : "",
a->short_arg ? : ' ',
(a->short_arg && a->long_arg) ?
"/" : "", a->long_arg ? : "");
a->short_opt ? " -" : "",
a->short_opt ? : ' ',
(a->short_opt && a->long_opt) ?
"/" : "", a->long_opt ? : "");
return 0;
}
@@ -1935,8 +1942,8 @@ static int _process_command_line(struct cmd_context *cmd, int *argc, char ***arg
av->value = optarg;
if (!_val_props[a->val_enum].fn(cmd, av)) {
log_error("Invalid argument for %s: %s", a->long_arg, optarg);
if (!val_names[a->val_enum].fn(cmd, av)) {
log_error("Invalid argument for %s: %s", a->long_opt, optarg);
return 0;
}
}
@@ -1970,12 +1977,12 @@ static int _merge_synonym(struct cmd_context *cmd, int oldarg, int newarg)
if (arg_is_set(cmd, oldarg) && arg_is_set(cmd, newarg)) {
log_error("%s and %s are synonyms. Please only supply one.",
_cmdline.arg_props[oldarg].long_arg, _cmdline.arg_props[newarg].long_arg);
_cmdline.opt_names[oldarg].long_opt, _cmdline.opt_names[newarg].long_opt);
return 0;
}
/* Not groupable? */
if (!(_cmdline.arg_props[oldarg].flags & ARG_GROUPABLE)) {
if (!(_cmdline.opt_names[oldarg].flags & ARG_GROUPABLE)) {
if (arg_is_set(cmd, oldarg))
_copy_arg_values(cmd->opt_arg_values, oldarg, newarg);
return 1;
@@ -2019,7 +2026,12 @@ int version(struct cmd_context *cmd __attribute__((unused)),
return ECMD_PROCESSED;
}
static void _get_output_settings(struct cmd_context *cmd)
static void _reset_current_settings_to_default(struct cmd_context *cmd)
{
cmd->current_settings = cmd->default_settings;
}
static void _get_current_output_settings_from_args(struct cmd_context *cmd)
{
if (arg_is_set(cmd, debug_ARG))
cmd->current_settings.debug = _LOG_FATAL + (arg_count(cmd, debug_ARG) - 1);
@@ -2034,7 +2046,7 @@ static void _get_output_settings(struct cmd_context *cmd)
}
}
static void _apply_output_settings(struct cmd_context *cmd)
static void _apply_current_output_settings(struct cmd_context *cmd)
{
init_debug(cmd->current_settings.debug);
init_debug_classes_logged(cmd->default_settings.debug_classes);
@@ -2042,10 +2054,12 @@ static void _apply_output_settings(struct cmd_context *cmd)
init_silent(cmd->current_settings.silent);
}
static int _get_settings(struct cmd_context *cmd)
static int _get_current_settings(struct cmd_context *cmd)
{
const char *activation_mode;
_get_current_output_settings_from_args(cmd);
if (arg_is_set(cmd, test_ARG))
cmd->current_settings.test = arg_is_set(cmd, test_ARG);
@@ -2173,7 +2187,7 @@ static int _process_common_commands(struct cmd_context *cmd)
if (arg_is_set(cmd, help_ARG) ||
arg_is_set(cmd, longhelp_ARG) ||
arg_is_set(cmd, help2_ARG)) {
_usage(cmd->name, arg_is_set(cmd, longhelp_ARG));
_usage(cmd->name, arg_is_set(cmd, longhelp_ARG), 0);
return ECMD_PROCESSED;
}
@@ -2211,15 +2225,17 @@ int help(struct cmd_context *cmd __attribute__((unused)), int argc, char **argv)
else {
int i;
for (i = 0; i < argc; i++)
if (!_usage(argv[i], 0))
if (!_usage(argv[i], 0, 0))
ret = EINVALID_CMD_LINE;
}
return ret;
}
static void _apply_settings(struct cmd_context *cmd)
static void _apply_current_settings(struct cmd_context *cmd)
{
_apply_current_output_settings(cmd);
init_test(cmd->current_settings.test);
init_full_scan_done(0);
init_mirror_in_sync(0);
@@ -2543,13 +2559,12 @@ int lvm_run_command(struct cmd_context *cmd, int argc, char **argv)
}
/*
* log_debug() can be enabled now that we know the settings
* from the command. Previous calls to log_debug() will
* do nothing.
* Now we have the command line args, set up any known output logging
* options immediately.
*/
cmd->current_settings = cmd->default_settings;
_get_output_settings(cmd);
_apply_output_settings(cmd);
_reset_current_settings_to_default(cmd);
_get_current_output_settings_from_args(cmd);
_apply_current_output_settings(cmd);
log_debug("Parsing: %s", cmd->cmd_line);
@@ -2610,9 +2625,17 @@ int lvm_run_command(struct cmd_context *cmd, int argc, char **argv)
if (arg_is_set(cmd, readonly_ARG))
cmd->metadata_read_only = 1;
if ((ret = _get_settings(cmd)))
/*
* Now that all configs, profiles and command lines args are available,
* freshly calculate and apply all settings. Specific command line
* options take precedence over config files (which include --config as
* that is treated like a config file).
*/
_reset_current_settings_to_default(cmd);
if ((ret = _get_current_settings(cmd)))
goto_out;
_apply_settings(cmd);
_apply_current_settings(cmd);
if (cmd->degraded_activation)
log_debug("DEGRADED MODE. Incomplete RAID LVs will be processed.");
@@ -2763,8 +2786,13 @@ int lvm_run_command(struct cmd_context *cmd, int argc, char **argv)
log_debug("Completed: %s", cmd->cmd_line);
cmd->current_settings = cmd->default_settings;
_apply_settings(cmd);
/*
* Reset all settings back to the persistent defaults that
* ignore everything supplied on the command line of the
* completed command.
*/
_reset_current_settings_to_default(cmd);
_apply_current_settings(cmd);
/*
* free off any memory the command used.
@@ -3037,7 +3065,7 @@ struct cmd_context *init_lvm(unsigned set_connections, unsigned set_filters)
return_NULL;
}
_cmdline.arg_props = &_arg_props[0];
_cmdline.opt_names = &opt_names[0];
if (stored_errno()) {
destroy_toolcontext(cmd);


@@ -2366,7 +2366,7 @@ void opt_array_to_str(struct cmd_context *cmd, int *opts, int count,
static void lvp_bits_to_str(uint64_t bits, char *buf, int len)
{
struct lv_props *prop;
struct lv_prop *prop;
int lvp_enum;
int pos = 0;
int ret;
@@ -2387,7 +2387,7 @@ static void lvp_bits_to_str(uint64_t bits, char *buf, int len)
static void lvt_bits_to_str(uint64_t bits, char *buf, int len)
{
struct lv_types *type;
struct lv_type *type;
int lvt_enum;
int pos = 0;
int ret;
@@ -2593,7 +2593,7 @@ int get_lvt_enum(struct logical_volume *lv)
static int _lv_types_match(struct cmd_context *cmd, struct logical_volume *lv, uint64_t lvt_bits,
uint64_t *match_bits, uint64_t *unmatch_bits)
{
struct lv_types *type;
struct lv_type *type;
int lvt_enum;
int found_a_match = 0;
int match;
@@ -2642,7 +2642,7 @@ static int _lv_types_match(struct cmd_context *cmd, struct logical_volume *lv, u
static int _lv_props_match(struct cmd_context *cmd, struct logical_volume *lv, uint64_t lvp_bits,
uint64_t *match_bits, uint64_t *unmatch_bits)
{
struct lv_props *prop;
struct lv_prop *prop;
int lvp_enum;
int found_a_mismatch = 0;
int match;
@@ -2697,7 +2697,7 @@ static int _check_lv_types(struct cmd_context *cmd, struct logical_volume *lv, i
ret = _lv_types_match(cmd, lv, cmd->command->required_pos_args[pos-1].def.lvt_bits, NULL, NULL);
if (!ret) {
int lvt_enum = get_lvt_enum(lv);
struct lv_types *type = get_lv_type(lvt_enum);
struct lv_type *type = get_lv_type(lvt_enum);
log_warn("Operation on LV %s which has invalid type %s.",
display_lvname(lv), type ? type->name : "unknown");
}
@@ -2711,7 +2711,7 @@ static int _check_lv_rules(struct cmd_context *cmd, struct logical_volume *lv)
{
char buf[64];
struct cmd_rule *rule;
struct lv_types *lvtype = NULL;
struct lv_type *lvtype = NULL;
uint64_t lv_props_match_bits, lv_props_unmatch_bits;
uint64_t lv_types_match_bits, lv_types_unmatch_bits;
int opts_match_count, opts_unmatch_count;


@@ -108,47 +108,12 @@ struct arg_values {
/* void *ptr; // Currently not used. */
};
/* a global table of possible --option's */
struct arg_props {
int arg_enum; /* foo_ARG from args.h */
const char short_arg;
char _padding[7];
const char *long_arg;
int val_enum; /* foo_VAL from vals.h */
uint32_t flags;
uint32_t prio;
const char *desc;
};
struct arg_value_group_list {
struct dm_list list;
struct arg_values arg_values[0];
uint32_t prio;
};
/* a global table of possible --option values */
struct val_props {
int val_enum; /* foo_VAL from vals.h */
int (*fn) (struct cmd_context *cmd, struct arg_values *av);
const char *name;
const char *usage;
};
/* a global table of possible LV properties */
struct lv_props {
int lvp_enum; /* is_foo_LVP from lv_props.h */
const char *name; /* "lv_is_foo" used in command-lines.in */
int (*fn) (struct cmd_context *cmd, struct logical_volume *lv); /* lv_is_foo() */
};
/* a global table of possible LV types */
/* (as exposed externally in command line interface, not exactly as internal segtype is used) */
struct lv_types {
int lvt_enum; /* is_foo_LVT from lv_types.h */
const char *name; /* "foo" used in command-lines.in, i.e. LV_foo */
int (*fn) (struct cmd_context *cmd, struct logical_volume *lv); /* lv_is_foo() */
};
#define CACHE_VGMETADATA 0x00000001
#define PERMITTED_READ_ONLY 0x00000002
/* Process all VGs if none specified on the command line. */
@@ -183,12 +148,13 @@ int cachemode_arg(struct cmd_context *cmd, struct arg_values *av);
int discards_arg(struct cmd_context *cmd, struct arg_values *av);
int mirrorlog_arg(struct cmd_context *cmd, struct arg_values *av);
int size_kb_arg(struct cmd_context *cmd, struct arg_values *av);
int ssize_kb_arg(struct cmd_context *cmd, struct arg_values *av);
int size_mb_arg(struct cmd_context *cmd, struct arg_values *av);
int size_mb_arg_with_percent(struct cmd_context *cmd, struct arg_values *av);
int ssize_mb_arg(struct cmd_context *cmd, struct arg_values *av);
int int_arg(struct cmd_context *cmd, struct arg_values *av);
int uint32_arg(struct cmd_context *cmd, struct arg_values *av);
int int_arg_with_sign(struct cmd_context *cmd, struct arg_values *av);
int int_arg_with_sign_and_percent(struct cmd_context *cmd, struct arg_values *av);
int extents_arg(struct cmd_context *cmd, struct arg_values *av);
int major_arg(struct cmd_context *cmd, struct arg_values *av);
int minor_arg(struct cmd_context *cmd, struct arg_values *av);
int string_arg(struct cmd_context *cmd, struct arg_values *av);
@@ -251,8 +217,8 @@ int vgchange_activate(struct cmd_context *cmd, struct volume_group *vg,
int vgchange_background_polling(struct cmd_context *cmd, struct volume_group *vg);
struct lv_props *get_lv_prop(int lvp_enum);
struct lv_types *get_lv_type(int lvt_enum);
struct lv_prop *get_lv_prop(int lvp_enum);
struct lv_type *get_lv_type(int lvt_enum);
struct command *get_command(int cmd_enum);
int lvchange_properties_cmd(struct cmd_context *cmd, int argc, char **argv);


@@ -63,7 +63,7 @@
* then also be added to the usage string for the val type here.
* It would be nice if the accepted values could be defined in a
* more consistent way, and perhaps in a single place, perhaps in
* struct val_props.
* struct val_names.
*
* The usage text for an option is not always the full
* set of words accepted for an option, but may be a
@@ -79,14 +79,14 @@
* options included in the usage text below that should
* be removed? Should "lvm1" be removed?
*
* For Number args that take optional units, a full usage
* could be "Number[bBsSkKmMgGtTpPeE]" (with implied |),
* but repeating this full specification produces cluttered
* output, and doesn't indicate which unit is the default.
* "Number[Units]" would be cleaner, as would a subset of
* common units, e.g. "Number[kmg...]", but neither helps
* with default. "Number[k|Unit]" and "Number[m|Unit]" show
* the default, and "Unit" indicates that other units
* Size is a Number that takes an optional unit.
* A full usage could be "Size[b|B|s|S|k|K|m|M|g|G|t|T|p|P|e|E]"
* but repeating this full specification produces long and
* cluttered output, and doesn't indicate which unit is the default.
* "Size[Units]" would be cleaner, as would a subset of
* common units, e.g. "Size[kmg...]", but neither helps
* with default. "Size[k|UNIT]" and "Size[m|UNIT]" show
* the default, and "UNIT" indicates that other units
* are possible without listing them all. This also
* suggests using the preferred lower case letters, because
* --size and other option args treat upper/lower letters
@@ -112,21 +112,23 @@ val(tag_VAL, tag_arg, "Tag", NULL)
val(select_VAL, NULL, "Select", NULL) /* used only for command defs */
val(activationmode_VAL, string_arg, "ActivationMode", "partial|degraded|complete")
val(activation_VAL, activation_arg, "Active", "y|n|ay")
val(cachemode_VAL, cachemode_arg, "CacheMode", "writethrough|writeback")
val(cachemode_VAL, cachemode_arg, "CacheMode", "writethrough|writeback|passthrough")
val(discards_VAL, discards_arg, "Discards", "passdown|nopassdown|ignore")
val(mirrorlog_VAL, mirrorlog_arg, "MirrorLog", "core|disk")
val(sizekb_VAL, size_kb_arg, "SizeKB", "Number[k|Unit]")
val(sizemb_VAL, size_mb_arg, "SizeMB", "Number[m|Unit]")
val(regionsize_VAL, regionsize_arg, "RegionSize", "Number[m|Unit]")
val(numsigned_VAL, int_arg_with_sign, "SNumber", "[+|-]Number")
val(numsignedper_VAL, int_arg_with_sign_and_percent, "SNumberP", "[+|-]Number[%VG|%PVS|%FREE]")
val(sizekb_VAL, size_kb_arg, "SizeKB", "Size[k|UNIT]")
val(sizemb_VAL, size_mb_arg, "SizeMB", "Size[m|UNIT]")
val(ssizekb_VAL, ssize_kb_arg, "SSizeKB", "[+|-]Size[k|UNIT]")
val(ssizemb_VAL, ssize_mb_arg, "SSizeMB", "[+|-]Size[m|UNIT]")
val(regionsize_VAL, regionsize_arg, "RegionSize", "Size[m|UNIT]")
val(snumber_VAL, int_arg_with_sign, "SNumber", "[+|-]Number")
val(extents_VAL, extents_arg, "Extents", "[+|-]Number[PERCENT]")
val(permission_VAL, permission_arg, "Permission", "rw|r")
val(metadatatype_VAL, metadatatype_arg, "MetadataType", "lvm2|lvm1")
val(units_VAL, string_arg, "Units", "r|R|h|H|b|B|s|S|k|K|m|M|g|G|t|T|p|P|e|E")
val(segtype_VAL, segtype_arg, "SegType", "linear|striped|snapshot|mirror|raid|thin|cache|thin-pool|cache-pool")
val(alloc_VAL, alloc_arg, "Alloc", "contiguous|cling|cling_by_tags|normal|anywhere|inherit")
val(locktype_VAL, locktype_arg, "LockType", "sanlock|dlm|none")
val(readahead_VAL, readahead_arg, "Readahead", "auto|none|NumberSectors")
val(readahead_VAL, readahead_arg, "Readahead", "auto|none|Number")
val(vgmetadatacopies_VAL, vgmetadatacopies_arg, "MetadataCopiesVG", "all|unmanaged|Number")
val(pvmetadatacopies_VAL, pvmetadatacopies_arg, "MetadataCopiesPV", "0|1|2")
val(metadatacopies_VAL, metadatacopies_arg, "unused", "unused")