mirror of git://sourceware.org/git/lvm2.git synced 2025-10-18 03:33:15 +03:00

Compare commits


141 Commits

Author SHA1 Message Date
Bryn M. Reeves
261b1e54e4 dmfilemapd: do not wait if file has been truncated 2017-06-07 19:41:54 +01:00
Bryn M. Reeves
08658195b9 dmfilemapd: update file block count at daemon start
The file block count stored in the filemap_monitor was lazily
initialised at the time of the first check. This causes problems
if the file has been truncated between the time the daemon started
and this first check: the initial block count and the current
block count match, and the daemon fails to detect a change.

Separate the setting of the block count from the check and make a
call to update the value at the start of _dmfilemapd().
2017-06-07 19:41:54 +01:00
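The C diff for this change appears further down. As a rough, hedged illustration only, a Python-flavoured sketch of the idea (refreshing the block count eagerly at daemon start instead of lazily on the first check) might look like this; the names loosely mirror the C code but are not the daemon's API.

    import os


    class FilemapMonitor(object):
        # Loosely mirrors struct filemap_monitor: an fd plus a cached
        # allocated-block count (-1 until it has been read once).
        def __init__(self, fd):
            self.fd = fd
            self.blocks = -1

        def update_blocks(self):
            # Refresh the cached block count from the file descriptor.
            self.blocks = os.fstat(self.fd).st_blocks
            return True

        def check_changed(self):
            # Compare the current count against the previously cached one.
            old_blocks = self.blocks
            self.update_blocks()
            return self.blocks != old_blocks


    def start_daemon(path):
        fm = FilemapMonitor(os.open(path, os.O_RDONLY))
        # Update the count at daemon start, before the monitoring loop, so a
        # truncation that happened earlier still registers as a change later.
        fm.update_blocks()
        return fm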
Bryn M. Reeves
3d85c9ae57 libdm: allow truncated files dm_stats_update_regions_from_fd()
It's not an error to attempt to update regions from an fd that has
been truncated (or otherwise no longer has any allocated extents):
in this case, the call should remove all regions corresponding to
the group, and return an empty region table.
2017-06-07 19:41:50 +01:00
Heinz Mauelshagen
39703cb485 lvconvert: reject RAID conversions on inactive LVs
Only support RAID conversions on active LVs.

If we'd accept e.g. upconverting linear -> raid1 on inactive
linear LVs, any LV flags passed to the kernel aren't properly
cleared and are thus erroneously passed on every activation.

Add respective check to lv_raid_change_image_count() and
move existing one in lv_raid_convert() for better messages.
2017-06-07 18:37:04 +02:00
Tony Asleson
61420309ee lvmdbusd: Prevent stall when update thread gets exception
If, during the process of fetching the current lvm state, we experience an
exception, we fail to call set_result on the queued_requests we were
processing.  When this happens those threads block forever, which causes
the service to stall indefinitely.  Only clear the queued_requests after
we have called set_result.
2017-06-02 12:39:04 -05:00
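A minimal sketch of the pattern described above (not the daemon's actual code): requests taken from the queue are kept across an exception and only cleared after each one has received a result, so waiting callers cannot be orphaned. The Request class and the fetch_state callable are illustrative stand-ins.

    import queue
    import threading
    import traceback


    class Request(object):
        # Stand-in for a queued dbus request whose caller blocks on wait().
        def __init__(self):
            self._done = threading.Event()
            self.result = None

        def set_result(self, value):
            self.result = value
            self._done.set()

        def wait(self, timeout=None):
            self._done.wait(timeout)
            return self.result


    def update_thread(work_queue, fetch_state, keep_running):
        queued_requests = []        # survives exceptions in the loop body
        while keep_running():
            try:
                if not queued_requests:
                    queued_requests.append(work_queue.get(True, 2))
                num_changes = fetch_state()     # may raise
                for req in queued_requests:
                    req.set_result(num_changes)
                # Clear only after every waiter has a result; clearing earlier
                # would leave callers blocked forever if fetch_state() raised.
                queued_requests = []
            except queue.Empty:
                pass
            except Exception:
                traceback.print_exc()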
Tony Asleson
699ccc05ca lvmdbusd: Add background command to flight recorder
We were not adding background tasks to the flight recorder.  Add the metadata
to the flight recorder when we start the command and update the metadata
when the command is finished.  Locking was added to the metadata to
prevent concurrent update and string representation, as these can
happen in two different threads.
2017-06-02 12:32:51 -05:00
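A small sketch of the bookkeeping described above, using simplified stand-ins for the daemon's classes (here the flight recorder is just a plain list): the metadata record is added before the command runs and updated under its lock when the command finishes, since a dump of the recorder can format the same record from another thread.

    import subprocess
    import threading
    import time


    class LvmExecutionMeta(object):
        # The lock guards the update at command completion against a
        # concurrent string formatting of the record by the dump thread.
        def __init__(self, start, ended, cmd, ec, stdout_txt, stderr_txt):
            self.lock = threading.RLock()
            self.start = start
            self.ended = ended
            self.cmd = cmd
            self.ec = ec
            self.stdout_txt = stdout_txt
            self.stderr_txt = stderr_txt


    def run_background(blackbox, cmd):
        # Record the command in the flight recorder before it runs.
        meta = LvmExecutionMeta(time.time(), 0, cmd, -1000, None, None)
        blackbox.append(meta)
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE, close_fds=True)
        out, err = process.communicate()
        # Fill in the outcome once the command has finished.
        with meta.lock:
            meta.ended = time.time()
            meta.ec = process.returncode
            meta.stdout_txt = out
            meta.stderr_txt = err
        return process.returncode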
Tony Asleson
192d142e1c lvmdbusd: cmdhandler.py vg_reduce, remove extraneous '--all'
vgreduce previously allowed --all and --removemissing together even though
it only actually did the remove missing.  The lvm dbus daemon was passing
--all any time there were no entries in pv_object_paths.  This change supplies
--all if and only if we are not removing missing and pv_object_paths
is empty.

vgreduce has enforced, and continues to enforce, that supplying a
device list when you specify --all or --removemissing is invalid, so we do not
need to check for that invalid combination explicitly in the lvm dbus service
as it's already covered.

Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1455471
2017-06-02 12:32:43 -05:00
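A minimal sketch of the resulting rule (a simplified stand-in, not the daemon's cmdhandler code): '--removemissing' wins, and '--all' is passed only when missing removal was not requested and no PVs were named.

    def vg_reduce_args(vg_name, missing, pv_devices, reduce_options=None):
        cmd = ['vgreduce']
        cmd.extend(reduce_options or [])
        if missing:
            cmd.append('--removemissing')
        elif len(pv_devices) == 0:
            # --all only when not removing missing PVs and no devices were
            # given; vgreduce itself rejects --all/--removemissing combined
            # with a device list.
            cmd.append('--all')
        cmd.append(vg_name)
        cmd.extend(pv_devices)
        return cmd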
Heinz Mauelshagen
3217e0cfea lvconvert: choose direct path to desired raid level
Remove superfluous raid5_n interim LV type from raid4 -> raid10 conversion.

Resolves: rhbz1458006
2017-06-02 14:30:57 +02:00
David Teigland
c98a25aab1 print warning about in-use orphans
Warn about a PV that has the in-use flag set, but appears in
the orphan VG (no VG was found referencing it.)

There are a number of conditions that could lead to this:

. The PV was created with no mdas and is used in a VG with
  other PVs (with metadata) that have not yet appeared on
  the system.  So, no VG metadata is found by lvm which
  references the in-use PV with no mdas.

. vgremove could have failed after clearing mdas but
  before clearing the in-use flag.  In this case, the
  in-use flag needs to be manually cleared on the PV.

. The PV may have damaged/unrecognized VG metadata
  that lvm could not read.

. The PV may have no mdas, and the PVs with the metadata
  may have damaged/unrecognized metadata.
2017-06-01 11:18:42 -05:00
David Teigland
f3c90e90f8 disable repairing in-use flag on orphan PVs
A PV holding VG metadata that lvm can't understand
(e.g. damaged, checksum error, unrecognized flag)
will appear as an in-use orphan, and will be cleared
by this repair code.  Disable this repair until the
code can keep track of these problematic PVs, and
distinguish them from actual in-use orphans.
2017-06-01 09:53:14 -05:00
Zdenek Kabelac
743ffb1962 tests: longer delay
Slow the sync down a bit more and use bigger raid arrays to
be more sure we will catch the raid being synchronized.
2017-05-31 13:59:42 +02:00
Zdenek Kabelac
42b87c23e2 tests: add some extra udev waits
To get less random results on older systems with systemd (e.g. fc23),
put a few extra udev wait operations in place to avoid any udev event collision.
2017-05-31 13:23:34 +02:00
Zdenek Kabelac
091c55a13f tests: skip reshaping raid10
Needs newer target  >= 1.10.1 for reshaping of raid10 LVs.
2017-05-31 12:59:53 +02:00
Heinz Mauelshagen
3719f4bc54 lvconvert: reject changing number of stripes on single core
Reject any stripe adding/removing reshape on raid4/5/6/10 because
of related MD kernel deadlock on single core systems until
we get a proper fix in MD.

Related: rhbz1443999
2017-05-30 19:14:32 +02:00
Zdenek Kabelac
c245996d70 tests: missed to export lvm binary for fsadm 2017-05-30 18:43:56 +02:00
Zdenek Kabelac
b876e8c915 tests: drop extra debug vvvv
Also show output of mount so when it fails we can check the state.
2017-05-30 18:43:56 +02:00
Zdenek Kabelac
7c84c5c421 tests: also check new flag with segtype
Make sure a new unknown flag has the same behavior as an old unknown segtype.
2017-05-30 18:43:56 +02:00
Zdenek Kabelac
fb86bddda2 flags: improve unknown flags logic
Use the same logic as with an unknown segment type - preserve the full
name with all flags, just with the UNKNOWN segment type bits set.
2017-05-30 18:43:45 +02:00
Zdenek Kabelac
d1ac6108c3 flags: restore same logic with MISSING
Since lvmetad is using 'MISSING' in status for 'another' purpose,
for now we also need to support reading the flag from that place.

Until this is fixed better, we accept both flags, although lvm2 will
only print it in flags.
2017-05-30 16:16:29 +02:00
Zdenek Kabelac
4141409eb0 flags: add segtype flag support
Switch METADATA_FORMAT flag usage to be stored via segtype
instead of 'status' flag which appeared to cause major
incompatibility troubles.

For backward compatibility segtype flags are still also accepted
via the 'status' bits which were used from version 2.02.169, so metadata
saved by this newer lvm2 version should still work nicely, although
the new save version will no longer work on that older lvm2 version.
2017-05-29 14:52:56 +02:00
Zdenek Kabelac
0299a7af1e flags: add read and print of segtype flag
Allow storing LV status bits with the segment type name field.
Switching to this since this field has better support for compatibility
with older versions of lvm2 - such an unknown segtype will not cause
complete invisibility of the metadata to older lvm2 code - just the
particular LV will become unusable with an unknown type of segment.
2017-05-29 14:49:41 +02:00
Zdenek Kabelac
1bb0c5197f cleanup: backtrace
Add debug backtrace.
2017-05-29 14:48:33 +02:00
Zdenek Kabelac
966d1130db cleanup: separate type and mask
Split the misused 'enum' into 2 fields - one for the type
of PV, VG, LV and the other for the mask.
2017-05-29 14:47:26 +02:00
Zdenek Kabelac
8e0bc73eba cleanup: bad flag is internal error here
Convert to internal error.
2017-05-29 14:47:16 +02:00
Zdenek Kabelac
597b3576c7 tests: wait for raid in sync
Lvchange needs synchronized raid.
2017-05-29 14:41:53 +02:00
Marian Csontos
223c594f0e test: Fix dbus testing using testsuite
- Must reread all objects as PVs might be removed.
- Never consider testsuite provided PVs nested, or tearDown fails to
  remove any outstanding VGs on them.
2017-05-26 15:39:20 +02:00
Marian Csontos
7687ab82c8 test: Use _pv suffix for nested devices
Testsuite uses global_filter to accept only test devices with
suffix matching /_pv[0-9_]*$/ set by generate_config in aux.sh.
2017-05-26 08:33:39 +02:00
Marian Csontos
3745b52ed4 spec: Enable notify-dbus in builds with dbus 2017-05-26 07:40:09 +02:00
Heinz Mauelshagen
65b10281f8 Proper dm_snprintf return checks 2017-05-24 14:00:44 +02:00
Heinz Mauelshagen
3da5cdc5dc Fix typo 2017-05-24 13:47:45 +02:00
David Teigland
7a0f46e2f8 add comment about PV in-use repair
copied from commit message for
d97f1c89de
2017-05-23 16:59:46 -05:00
David Teigland
4d261cd719 man lvmsystemid: change some wording
Clarify the wording in a couple cases.
2017-05-23 10:50:41 -05:00
Zdenek Kabelac
5e8beb4023 tests: fix test for fsadm
Avoid mountinfo
Pass 'y'  - since ATM lvresize cannot pass --yes to fsadm.
2017-05-23 14:02:43 +02:00
Zdenek Kabelac
1fe4f80e45 fsadm: avoid hidden --yes
When 'fsadm' was run without a terminal (i.e. through a pipe), it has been
automatically behaving as if '--yes' were given.

Detect the terminal and only accept empty "" input in this mode.
2017-05-23 14:02:41 +02:00
Alasdair G Kergon
57492a6094 raid: Drop unnecessary/incorrect use of dm_pool_free 2017-05-23 01:51:04 +01:00
Alasdair G Kergon
fbe7464df5 metadata: Unlock VG on more _vg_make_handle error paths
Internal error: VG lock vg0 must be requested before vg3, not after.
Internal error: 3 device(s) were left open and have been closed.
2017-05-23 01:38:02 +01:00
Alasdair G Kergon
d1ddfc4085 format_text: More internal errors if given invalid internal metadata
Three more messages to ensure each failure in out_areas() results in a
low-level message instead of sometimes just <backtrace>.
2017-05-22 23:30:34 +01:00
David Teigland
ca24196491 man lvmraid: add more indirect conversion info 2017-05-22 14:28:24 -05:00
Zdenek Kabelac
3877ef0e43 tests: catch some fsadm tricky paths
When a user renames a device, we get into trouble.
We need to recognize which cases fsadm can actually support.
2017-05-22 15:12:31 +02:00
Zdenek Kabelac
9e04e0483f fsadm: enhance detection of already mounted volumes
Add more validation to catch mainly renamed devices, where
filesystem utils are not able to handle the devices properly,
as they are addressed not by major:minor but rather via
symbolic path names which can change over time via a rename operation.
2017-05-22 15:11:21 +02:00
Zdenek Kabelac
2b7ac2bfb3 fsadm: always detect mounted fs with extX
Since we added more validation to the 'detect_mounted' function, make sure
we always use it even with the 'resize' action, so the numerous validations
are not skipped.
2017-05-22 15:10:16 +02:00
David Teigland
a29e7843b1 man lvmraid: indirect conversions
Start documenting multi-step conversions.

Don't number examples.
2017-05-18 16:47:37 -05:00
Heinz Mauelshagen
2bf01c2f37 lvconvert: fix logic in automatic settings of possible (raid) LV types
Commit 5fe07d3574 failed to set raid5 types
properly on conversions from raid6.  It always enforced raid6_ls_6
for types raid6/raid6_zr/raid6_nr/raid6_nc, thus requiring 3 conversions
instead of 2 when asking for raid5_{la,rs,ra,n}.

Related: rhbz1439403
2017-05-18 16:20:39 +02:00
Marian Csontos
8e99a46d09 lvmdbusd: Fix missed rename
Introduced By: e53454d6 (2.02.169)
Related: RHBZ#1348688
2017-05-18 12:52:46 +02:00
Heinz Mauelshagen
9c651b146e lvconvert: fix indent and typo in last commit 2017-05-18 00:43:20 +02:00
Heinz Mauelshagen
5fe07d3574 lvconvert: enhance automatic settings of possible (raid) LV types
Offer possible interim LV types and display their aliases
(e.g. raid5 and raid5_ls) for all conversions between
striped and any raid LVs in case the user requests a type
not suitable for direct conversion.

E.g. running "lvconvert --type raid5 LV" on a striped
LV will replace raid5 aka raid5_ls (rotating parity)
with raid5_n (dedicated parity on the last image).
The user is asked to repeat the lvconvert command to get to the
requested LV type (raid5 aka raid5_ls in this example)
when such a replacement occurs.

Resolves: rhbz1439403
2017-05-18 00:18:15 +02:00
Marian Csontos
16c6d9f11a lvmdbusd: Fix notify_dbus mangling config option
If a config option is passed by the caller, dbusd appends to the option, not to
the value, and also without using a delimiter.

Bug: RHBZ#1451612
2017-05-17 15:35:20 +02:00
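A minimal sketch of the corrected behaviour (simplified from the daemon's utils code shown in the diff further down): the override is appended, with a space, to the value that follows '--config' rather than to the '--config' token itself; a plain ValueError stands in for the dbus exception raised by the daemon.

    def add_no_notify(cmdline):
        # Ask lvm not to emit a dbus notification for this command.
        if '--config' in cmdline:
            for i, arg in enumerate(cmdline):
                if arg == '--config':
                    if len(cmdline) <= i + 1:
                        raise ValueError("Missing value for --config option.")
                    # Append to the option value, separated by a space.
                    cmdline[i + 1] += " global/notify_dbus=0"
                    break
        else:
            cmdline.extend(['--config', 'global/notify_dbus=0'])
        return cmdline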
Marian Csontos
9291fb7bf5 test: Fix condition 2017-05-17 11:43:49 +02:00
Marian Csontos
1fb13e8660 test: Fix skipped cleanup 2017-05-17 11:37:29 +02:00
Heinz Mauelshagen
48408ece6d test: add missing yes option to lvconvert stripe removal 2017-05-16 17:52:54 +02:00
Marian Csontos
f4f408610c test: Fix previous commit - skip only RAID6 part 2017-05-16 10:47:53 +02:00
Marian Csontos
cdb49216c5 test: Update condition for changing RAID regionsize
The test is failing on F23 and F25 with 1.9.1
2017-05-16 10:44:52 +02:00
David Teigland
5406191cb9 lvchange: allow changing properties on thin pool data lv
Add an exception to not allowing lvchange to change properties
on hidden LVs.  When a thin pool data LV is a cache LV, we
need to allow changing cache properties on the tdata sublv of
the thin pool.
2017-05-15 10:59:48 -05:00
David Teigland
dfc58c637b config: keep description lines under 80
As far as possible, it's nice to keep the config
description lines from going over 80 columns.
2017-05-12 09:55:16 -05:00
Alasdair G Kergon
2583732165 lvcreate: Fix last commit for virtual sizes.
Don't stop when extents is 0 if a virtual size parameter was supplied
instead.
2017-05-12 13:16:10 +01:00
Alasdair G Kergon
cf73f6cf61 lvcreate: Fix mirror percentage size calculations.
Trap cases where the percentage calculation currently leads to an empty
LV and the message:

  Internal error: Unable to create new logical volume with no extents

Additionally convert the calculated number of extents from physical to
logical when creating a mirror using a percentage that is based on
Physical Extents.  Otherwise a command like 'lvcreate -m3 -l80%FREE'
can never leave any free space.

This brings the behaviour closer to that of lvresize.
(A further patch is needed to cover all the raid types.)
2017-05-12 02:31:35 +01:00
David Teigland
d49a20b7bd man lvmcache: add info about metadata formats 2017-05-11 10:52:59 -05:00
Alasdair G Kergon
80900dcf76 metadata: Fix metadata repair when devs still missing.
_check_reappeared_pv() incorrectly clears the MISSING_PV flags of
PVs with unknown devices.
While one caller avoids passing such PVs into the function, the other
doesn't.  Move the check inside the function so it's not forgotten.

Without this patch, if the normal VG reading code tries to repair
inconsistent metadata while there is an unknown PV, it incorrectly
considers the missing PVs no longer to be missing and produces
incorrect 'pvs' output omitting the missing PV, for example.

Easy reproducer:
Create a VG with 3 PVs pv1, pv2, pv3.
Hide pv2.
Run vgreduce --removemissing.
Reinstate the hidden PV pv2 and at the same time hide a different PV
pv3.
Run 'pvs' - incorrect output.
Run 'pvs' again - correct output.

See https://bugzilla.redhat.com/1434054
2017-05-11 02:17:34 +01:00
David Teigland
d45531712d vg_read: check for NULL dev to avoid segfault
There are certain situations (not fully understood)
where is_missing_pv() is false, but pv->dev is NULL,
so this adds a check for NULL pv->dev after is_missing_pv()
to avoid a segfault.
2017-05-10 10:45:41 -05:00
Zdenek Kabelac
b817cc5746 tests: better skip 2017-05-10 15:40:32 +02:00
Zdenek Kabelac
a1a9ae0aa5 fsadm: all path define MAJOR MINOR 2017-05-10 15:40:32 +02:00
Zdenek Kabelac
8ea33b633a fsadm: some cleanup
Put some extra "" around vars.
Indent.
Error messages with dots.
2017-05-10 15:40:31 +02:00
Zdenek Kabelac
1107d483a2 fsadm: fix test of subshell return value
A subshell does not return its error code value upward, thus
error results in this case were actually being ignored.
Also add dots to the moved error messages.
2017-05-10 15:40:29 +02:00
Zdenek Kabelac
455a4de090 dmeventd: restore multiple warnings
With recent updates for thin pool monitoring in version 169
we lost the multiple WARNINGs printed in syslog when the
pool crossed 80%, 85%, 90%, 95%, 100%.

Restore this logic as we want to keep the user informed more
than just once when the 80% boundary is passed.
2017-05-10 15:39:36 +02:00
Bryn M. Reeves
a9940d16fe dmfilemapd: always initialise 'same' local variable (Coverity)
Fix a regression introduced in 70bb726 that allows a local variable
in the monitored file checking routine to be accessed before its
assignment when the file has already been unlinked.
2017-05-08 17:10:25 +01:00
David Teigland
c5fee2ee6e man lvmetad: mention repair as a reason for disabling
lvconvert --repair now also disables lvmetad.
2017-05-05 10:48:58 -05:00
Tony Asleson
405a3689bc lvmdbusd: Correct PV lookups
When a user does a Manager.PvCreate they can specify the block device using a
device path that may be different from the device path lvm reports.  For
example a user could use:

/dev/disk/by-id/wwn-0x5002538500000000 instead of /dev/sdc

In this case the pvcreate will succeed, but when we query lvm we don't find the
newly created PV. We fail because its device path is returned as /dev/sdc.  This
change re-uses an internal lookup which can accommodate this and correctly finds
the newly created PV.

Corrects https://bugzilla.redhat.com/show_bug.cgi?id=1445654
2017-05-05 10:30:06 -05:00
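As an illustration only (the daemon's actual fix reuses its internal object lookup rather than the path normalization shown here), one way a lookup can accommodate an alternate device path is to compare canonical paths:

    import os


    def find_reported_path(user_path, reported_paths):
        # Resolve symlinks such as /dev/disk/by-id/... so the request matches
        # the /dev/sdX path that lvm reports for the same block device.
        want = os.path.realpath(user_path)
        for reported in reported_paths:
            if os.path.realpath(reported) == want:
                return reported
        return None

For example, find_reported_path('/dev/disk/by-id/wwn-0x5002538500000000', ['/dev/sdc']) would return '/dev/sdc' when both names refer to the same device.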
Tony Asleson
fcce7e1660 lvmdbustest.py: Add PV symlink testing
We fixed this issue once before and it's back, add a test to make
sure it never comes back again!

https://bugzilla.redhat.com/show_bug.cgi?id=1318754
https://bugzilla.redhat.com/show_bug.cgi?id=1445654
2017-05-05 10:30:06 -05:00
David Teigland
97d5e192fe WHATS_NEW: cache pool options 2017-05-05 10:02:54 -05:00
David Teigland
df5fd5ae88 lvcreate: cachemode writeback and cachepolicy cleaner is invalid
Return an error if lvconvert is used to create a cache pool
with that combination.
2017-05-05 09:59:12 -05:00
Bryn M. Reeves
7fbeea30e5 dmfilemapd: clear filemap_monitor before calling _parse_args()
If the wrong number of arguments are given, main() will attempt
to free the uninitialised pointer in fm.path.
2017-05-05 11:53:44 +01:00
David Teigland
c56d8535a7 man lvmconfig: add descriptions for typeconfig and ignorelocal 2017-05-04 14:04:10 -05:00
David Teigland
20026c9c22 man vgexport: reference to vgimport 2017-05-03 16:44:01 -05:00
David Teigland
f2d943a4b6 man lvm fullreport and lvpoll references
Add references to lvm-fullreport(8) and lvm-lvpoll(8).
2017-05-03 16:40:44 -05:00
David Teigland
1423ab93f9 man clvmd: mention lvmlockd is another option 2017-05-03 16:26:21 -05:00
David Teigland
a7a28bd998 man: reference other man pages with bold
There were a handful of references to other man
pages using the standard command(N) form which were
not in bold, so they were not turned into links
in html formats.
2017-05-03 16:21:01 -05:00
David Teigland
596fd0b106 man lvm.conf: say how to get a description of settings
Many will look in 'man lvm.conf' expecting to find a
description of the lvm.conf fields, which are not there.
State at the beginning how to get this (by running
lvmconfig.)
2017-05-03 16:01:19 -05:00
David Teigland
5de3870662 pvscan: define command as taking only -aay
The fact that -an and -ay are not accepted can be
stated in the command definition.
2017-05-03 15:46:49 -05:00
David Teigland
b869db30ac man pvscan: add description for --activate 2017-05-03 15:38:53 -05:00
David Teigland
892f3b1002 man vgimport: add description for --all 2017-05-03 15:19:19 -05:00
David Teigland
ab1de07c97 man pvchange: mention one option is required 2017-05-03 15:10:10 -05:00
David Teigland
2773667627 man pvchange: add description for --all 2017-05-03 15:03:14 -05:00
David Teigland
f7edadf870 commands: check for memory failures
just return for now
2017-05-03 14:46:43 -05:00
David Teigland
cb573d1ec9 man lvmlockd: minor wording updates 2017-05-03 14:45:33 -05:00
Alasdair G Kergon
a946327bb7 post-release 2017-05-03 11:26:58 +01:00
Alasdair G Kergon
fb4bf1f4ea pre-release 2017-05-03 11:23:31 +01:00
Alasdair G Kergon
6bdbb283d5 man: generate 2017-05-03 11:23:15 +01:00
Alasdair G Kergon
a0f742542f command: avoid compiler warning
man-generator.c:3243: warning: declaration of ‘stat’ shadows a global
declaration
2017-05-03 11:19:43 +01:00
David Teigland
e15c7c5ff9 man: references to lvm entities
Try to reference lvm(8) at the start of topical
man pages, and spell out acronyms early in the
text, descriptions of which can be found in lvm(8).
2017-05-02 16:47:02 -05:00
David Teigland
253bc5eb2e man lvm: define LVM and its terms
The lvm(8) man page never said what LVM is,
it never defined what the acronym LVM stands for,
it never spelled out other common acronyms VG, PV, LV,
and never described what they are.

This adds a very minimal definition which at least defines
the acronyms and entities, but it could obviously be
expanded on, either here or elsewhere.
2017-05-02 15:49:58 -05:00
David Teigland
882a918bef man lvmsystemid: general improvements
Mainly improve the readability/wording.
Use "system ID" instead of system_id.
A couple small additions.
2017-05-02 14:57:58 -05:00
Marian Csontos
7da13bbf7b dbus: Add --yes when --setphysicalvolumesize is used 2017-05-02 08:42:21 +02:00
David Teigland
15eaf703fc commands: fix memory debug for cmd defs
Clean up the handling of memory used for cmd defs
so it doesn't trip up memory debugging.

Allocate memory for commands[] from libmem.

Free temporary memory used by define_commands()
at the end of the function.

Clear all the command def state in lvm_fin().
2017-05-01 15:27:14 -05:00
David Teigland
54726a4950 fix running commands from a script file
Using any arg with a command name in a script file
would cause the command to fail.

The name of the script file being executed was being passed
to lvm_register_commands() and define_commands(), which
prevented command defs from being defined (simple commands
were still being defined only by name which was enough for those
to still work when run trivially with no args).
2017-04-28 16:51:04 -05:00
David Teigland
c73b9f062c WHATS_NEW: pvcreate prompt 2017-04-28 09:06:41 -05:00
Alasdair G Kergon
1764524b06 WHATS_NEW: pvcreate --setphysicalvolumesize 2017-04-28 14:34:26 +01:00
David Teigland
86b9c23dbe commands: improve syntax suggestion when no command is found
The logic for suggesting the nearest valid command syntax
was missing the simplest case.  If a command has only one
valid syntax, that is the one we should suggest.  (We were
suggesting nothing in this case.)
2017-04-27 14:21:01 -05:00
David Teigland
4f9ff14508 pvcreate: add prompt when setting dev size
If the device size does not match the size requested
by --setphysicalvolumesize, then prompt the user.

Make the pvcreate checking/prompting code handle
multiple prompts for the same device, since the
new prompt can be in addition to the existing
prompt when the PV is in a VG.
2017-04-27 13:25:41 -05:00
Marian Csontos
5cf51fb2f7 dbus: log_debug needs qualifier
Adding the qualifier makes the only unqualified log_debug occurrence
consistent with other uses in the same file.

Other possible ways to fix this:

- using `from .utils import log_debug`
- moving the line below `from . import utils` line
2017-04-27 18:16:17 +02:00
Alasdair G Kergon
0e3c16af56 pvresize: Missing a message on error path. 2017-04-27 15:00:41 +01:00
Heinz Mauelshagen
33afe2ca76 test: add -y to raid1 up conversions 2017-04-27 15:56:58 +02:00
Heinz Mauelshagen
0516447978 lvconvert: preserve region size on raid1 image count changes (v2)
Unless a change of the regionsize is requested via "lvconvert -R N ...",
keep the region size when the number of images changes in a raid1 LV.

Related: rhbz1443705
2017-04-27 15:52:25 +02:00
Marian Csontos
14c84c79db test: Update pattern to match code
Fix commit a3fdc966b5
2017-04-27 15:30:03 +02:00
Heinz Mauelshagen
af47ec9f51 Revert "lvconvert: preserve region size on raid1 image count changes"
This reverts commit 8333d5a969.
2017-04-27 15:29:03 +02:00
Alasdair G Kergon
813bcb24f0 test: Add -y to pvresize --setphysicalvolumesize 2017-04-27 02:57:59 +01:00
Alasdair G Kergon
cbc69f8c69 pvresize: Prompt when non-default size supplied.
Seek confirmation before changing the PV size to one that differs
from the underlying block device.
2017-04-27 02:36:34 +01:00
Jonathan Brassow
78a0b4a08a typo: s/extends/extents/ in lvmthin man page
bug 1439905
2017-04-26 15:49:28 -05:00
Tony Asleson
0f31f10ac5 lvmdbusd: Improve error msg to include PV
Include PV device path when we believe it already exists.
2017-04-26 07:31:08 -05:00
Tony Asleson
e50fb06792 lvmdbustest.py: Add nested testing
Make sure when a LV is used as a PV the dbus service works correctly.

Signed-off-by: Tony Asleson <tasleson@redhat.com>
2017-04-26 07:30:45 -05:00
Tony Asleson
6de3a9b4d0 lvmdbustest.py: Handle nested setUp & tearDown
Handle cleaning up correctly if a LV is used as a PV.
2017-04-26 07:10:18 -05:00
Tony Asleson
e78329e281 lvmdbusd: Make sure we don't hang on lvcreate
If we happen to create an LV that has a previous signature we hang on the y/n
prompt, so add '--yes'.
2017-04-26 07:10:18 -05:00
David Teigland
a3fdc966b5 commands: improve error messages for rules
Make the error messages more consistent,
and use less code-centric wording.
2017-04-25 15:49:58 -05:00
Heinz Mauelshagen
c534a7bcc9 lvconvert: FIXME
Add FIXME to move error path processing out of tool into library.

Related: rhbz1437653
2017-04-24 18:56:36 +02:00
Heinz Mauelshagen
aa1d5d5c89 lvconvert: fix inactive mirror up converting regression
Up converting an inactive mirror with insufficient
devs results in an overly concerned warning.

Resolves: rhbz1437653
2017-04-24 17:44:54 +02:00
Heinz Mauelshagen
8333d5a969 lvconvert: preserve region size on raid1 image count changes
Unless a change of the regionsize is requested via "lvconvert -R N ...",
keep the region size when the number of images changes in a raid1 LV.

Resolves: rhbz1443705
2017-04-22 02:04:49 +02:00
Heinz Mauelshagen
8f305f025e raid: handle insufficient PVs on takeover to/from raid4
Commit 7bc85177b0
fell short relative to striped/raid0* -> raid4
and raid4 -> raid6.

Related: rhbz1438013
2017-04-22 01:19:44 +02:00
Heinz Mauelshagen
97a5fa4b87 raid: avoid superfluous variable 2017-04-22 00:50:36 +02:00
Heinz Mauelshagen
effeb2b93d test: add raid4 to upconvert allocation failure tests 2017-04-22 00:43:31 +02:00
Heinz Mauelshagen
149e4fa04b test: consider changed default
commit b81b4aad24
raised the region size so demand the sizes the
test checks for.
2017-04-22 00:42:09 +02:00
Heinz Mauelshagen
7c7122a3b1 test: add upconvert allocation failure tests 2017-04-21 20:57:31 +02:00
Heinz Mauelshagen
d48b816764 test: also prevent lvconvert-raid-reshape.sh on single core 2017-04-21 02:17:55 +02:00
Heinz Mauelshagen
a004cceed2 test: Adjust previous commit
Change have_single_core to  have_multi_core and go back to || logic in related test scripts.
2017-04-21 01:21:24 +02:00
Heinz Mauelshagen
18bf954801 test: Fix skip some reshape tests that hang on single core machines
Fix commit c7fb0cb861.
2017-04-20 21:45:56 +02:00
Heinz Mauelshagen
0c2fd133d7 raid: remove double minimum area check on takeover 2017-04-20 21:35:06 +02:00
Heinz Mauelshagen
d8a63f446e raid: define return value on error paths 2017-04-20 21:32:40 +02:00
Heinz Mauelshagen
5fb5717402 raid: avoid superfluous reload on takeover
Allow any reset rebuild flags to trigger the second update on takeover.
Use descriptive callback names.
Fix typo and add comments.
2017-04-20 21:18:27 +02:00
Alasdair G Kergon
c7fb0cb861 test: Skip some reshape tests that hang on single core machines
Skip hanging raid reshape tests until https://bugzilla.redhat.com/1443999
is fixed
2017-04-20 20:05:07 +01:00
Heinz Mauelshagen
83cdba75bd mirror/raid: display adjusted region size with units
Display adjusted region size in units (e.g. "4.00 MiB") rather than sectors.
2017-04-20 20:42:21 +02:00
David Teigland
b9d10857b2 man: quote the word no 2017-04-18 15:56:48 -05:00
Alasdair G Kergon
658d524d26 configure: Fix notify-dbus and dmfilemapd options. 2017-04-18 20:37:53 +01:00
David Teigland
2d9097e9ca WHATS_NEW: configure option enable-lvmlockd 2017-04-18 12:58:18 -05:00
David Teigland
a5256d1353 spec: rename lockd to lvmlockd 2017-04-18 11:22:07 -05:00
David Teigland
a41a8430d6 configure: rename lockd to lvmlockd 2017-04-18 11:18:07 -05:00
Zdenek Kabelac
aa25cfe084 test: correcting binary usage
Ensure a 'test suite' run uses fsadm and dmeventd from the compiled dir,
while an 'rpm'-installed test uses the binaries installed in the system.
2017-04-14 01:03:18 +02:00
Heinz Mauelshagen
15c3ad9641 lvconvert: typo in message 2017-04-13 22:19:29 +02:00
David Teigland
0ea9a15612 tests: use raid_region_size 512
given the new default from 5ae7a016b8
2017-04-13 14:21:34 -05:00
Heinz Mauelshagen
b81b4aad24 WHATS_NEW: raise region_size default 2017-04-13 17:49:58 +02:00
Alasdair G Kergon
c6a6ce6cd5 conf: Remove exec_prefix from confdir.
Some distros now want to put /etc outside prefix/exec_prefix.
2017-04-13 16:21:29 +01:00
Heinz Mauelshagen
5ae7a016b8 lvcreate: raise default raid regionsize to 2MiB
Related: rhbz1392947.
2017-04-13 16:10:49 +02:00
Alasdair G Kergon
4547218a7d conf: Add missing prefix to installation directory.
Our Makefiles also support systems that set the installation prefix
at configuration time - the use of DESTDIR is not required.
2017-04-13 02:17:27 +01:00
Alasdair G Kergon
ae7f696d53 post-release 2017-04-13 01:47:47 +01:00
132 changed files with 2514 additions and 1001 deletions

View File

@@ -1 +1 @@
2.02.170(2)-git (2017-04-13)
2.02.172(2)-git (2017-05-03)

View File

@@ -1 +1 @@
1.02.139-git (2017-04-13)
1.02.141-git (2017-05-03)

View File

@@ -1,3 +1,31 @@
Version 2.02.172 -
===============================
Print a warning about in-use PVs with no VG using them.
Disable automatic clearing of PVs that look like in-use orphans.
Cache format2 flag is now using segment name type field.
Support storing status flags via segtype name field.
Stop using '--yes' mode when fsadm runs without terminal.
Extend validation of filesystems resized by fsadm.
Enhance lvconvert automatic settings of possible (raid) LV types.
Allow lvchange to change properties on a thin pool data sub LV.
Fix lvcreate extent percentage calculation for mirrors.
Don't reinstate still-missing devices when correcting inconsistent metadata.
Properly handle subshell return codes in fsadm.
Disallow cachepool creation with policy cleaner and mode writeback.
Version 2.02.171 - 3rd May 2017
===============================
Fix memory warnings by using mempools for command definition processing.
Fix running commands from a script file.
Add pvcreate prompt when device size doesn't match setphysicalvolumesize.
lvconvert - preserve region size on raid1 image count changes
Adjust pvresize/pvcreate messages and prompt if underlying dev size differs.
raid - sanely handle insufficient space on takeover.
Fix configure --enable-notify-dbus status message.
Change configure option name prefix from --enable-lockd to --enable-lvmlockd.
lvcreate - raise mirror/raid default regionsize to 2MiB
Add missing configurable prefix to configuration file installation directory.
Version 2.02.170 - 13th April 2017
==================================
Introduce global/fsadm_executable to make fsadm path configurable.

View File

@@ -1,3 +1,12 @@
Version 1.02.141 -
===============================
Accept truncated files in calls to dm_stats_update_regions_from_fd().
Restore Warning by 5% increment when thin-pool is over 80% (1.02.138).
Version 1.02.140 - 3rd May 2017
===============================
Add missing configure --enable-dmfilemapd status message and fix --disable.
Version 1.02.139 - 13th April 2017
==================================
Fix assignment in _target_version() when dm task can't run.

View File

@@ -48,8 +48,8 @@ install_localconf: $(CONFLOCAL)
fi
install_profiles: $(PROFILES)
$(INSTALL_DIR) $(DESTDIR)$(DEFAULT_PROFILE_DIR)
$(INSTALL_DATA) $(PROFILES) $(DESTDIR)$(DEFAULT_PROFILE_DIR)/
$(INSTALL_DIR) $(profiledir)
$(INSTALL_DATA) $(PROFILES) $(profiledir)/
install_lvm2: install_conf install_localconf install_profiles

View File

@@ -1297,7 +1297,7 @@ activation {
# The clean/dirty state of data is tracked for each region.
# The value is rounded down to a power of two if necessary, and
# is ignored if it is not a multiple of the machine memory page size.
raid_region_size = 512
raid_region_size = 2048
# Configuration option activation/error_when_full.
# Return errors if a thin pool runs out of space.

configure (vendored)
View File

@@ -703,7 +703,6 @@ FSADM_PATH
FSADM
ELDFLAGS
DM_LIB_PATCHLEVEL
DMFILEMAPD
DMEVENTD_PATH
DMEVENTD
DL_LIBS
@@ -738,7 +737,6 @@ CLDWHOLEARCHIVE
CLDNOWHOLEARCHIVE
CLDFLAGS
CACHE
BUILD_NOTIFYDBUS
BUILD_DMFILEMAPD
BUILD_LOCKDDLM
BUILD_LOCKDSANLOCK
@@ -955,8 +953,8 @@ enable_valgrind_pool
enable_devmapper
enable_lvmetad
enable_lvmpolld
enable_lockd_sanlock
enable_lockd_dlm
enable_lvmlockd_sanlock
enable_lvmlockd_dlm
enable_use_lvmlockd
with_lvmlockd_pidfile
enable_use_lvmetad
@@ -1693,8 +1691,9 @@ Optional Features:
--disable-devmapper disable LVM2 device-mapper interaction
--enable-lvmetad enable the LVM Metadata Daemon
--enable-lvmpolld enable the LVM Polling Daemon
--enable-lockd-sanlock enable the LVM lock daemon using sanlock
--enable-lockd-dlm enable the LVM lock daemon using dlm
--enable-lvmlockd-sanlock
enable the LVM lock daemon using sanlock
--enable-lvmlockd-dlm enable the LVM lock daemon using dlm
--disable-use-lvmlockd disable usage of LVM lock daemon
--disable-use-lvmetad disable usage of LVM Metadata Daemon
--disable-use-lvmpolld disable usage of LVM Poll Daemon
@@ -11774,11 +11773,11 @@ $as_echo "$BUILD_LVMPOLLD" >&6; }
################################################################################
BUILD_LVMLOCKD=no
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lockdsanlock" >&5
$as_echo_n "checking whether to build lockdsanlock... " >&6; }
# Check whether --enable-lockd-sanlock was given.
if test "${enable_lockd_sanlock+set}" = set; then :
enableval=$enable_lockd_sanlock; LOCKDSANLOCK=$enableval
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lvmlockdsanlock" >&5
$as_echo_n "checking whether to build lvmlockdsanlock... " >&6; }
# Check whether --enable-lvmlockd-sanlock was given.
if test "${enable_lvmlockd_sanlock+set}" = set; then :
enableval=$enable_lvmlockd_sanlock; LOCKDSANLOCK=$enableval
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $LOCKDSANLOCK" >&5
@@ -11865,11 +11864,11 @@ $as_echo "#define LOCKDSANLOCK_SUPPORT 1" >>confdefs.h
fi
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lockddlm" >&5
$as_echo_n "checking whether to build lockddlm... " >&6; }
# Check whether --enable-lockd-dlm was given.
if test "${enable_lockd_dlm+set}" = set; then :
enableval=$enable_lockd_dlm; LOCKDDLM=$enableval
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lvmlockddlm" >&5
$as_echo_n "checking whether to build lvmlockddlm... " >&6; }
# Check whether --enable-lvmlockd-dlm was given.
if test "${enable_lvmlockd_dlm+set}" = set; then :
enableval=$enable_lvmlockd_dlm; LOCKDDLM=$enableval
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $LOCKDDLM" >&5
@@ -12105,17 +12104,18 @@ _ACEOF
$as_echo_n "checking whether to build dmfilemapd... " >&6; }
# Check whether --enable-dmfilemapd was given.
if test "${enable_dmfilemapd+set}" = set; then :
enableval=$enable_dmfilemapd; DMFILEMAPD=$enableval
enableval=$enable_dmfilemapd; BUILD_DMFILEMAPD=$enableval
else
BUILD_DMFILEMAPD=no
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $DMFILEMAPD" >&5
$as_echo "$DMFILEMAPD" >&6; }
BUILD_DMFILEMAPD=$DMFILEMAPD
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $BUILD_DMFILEMAPD" >&5
$as_echo "$BUILD_DMFILEMAPD" >&6; }
$as_echo "#define DMFILEMAPD 1" >>confdefs.h
$as_echo "#define DMFILEMAPD \$BUILD_DMFILEMAPD" >>confdefs.h
if test "$DMFILEMAPD" = yes; then
if test "$BUILD_DMFILEMAPD" = yes; then
ac_fn_c_check_header_mongrel "$LINENO" "linux/fiemap.h" "ac_cv_header_linux_fiemap_h" "$ac_includes_default"
if test "x$ac_cv_header_linux_fiemap_h" = xyes; then :
@@ -12131,15 +12131,15 @@ fi
$as_echo_n "checking whether to build notifydbus... " >&6; }
# Check whether --enable-notify-dbus was given.
if test "${enable_notify_dbus+set}" = set; then :
enableval=$enable_notify_dbus; NOTIFYDBUS=$enableval
enableval=$enable_notify_dbus; NOTIFYDBUS_SUPPORT=$enableval
else
NOTIFYDBUS_SUPPORT=no
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $NOTIFYDBUS" >&5
$as_echo "$NOTIFYDBUS" >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $NOTIFYDBUS_SUPPORT" >&5
$as_echo "$NOTIFYDBUS_SUPPORT" >&6; }
BUILD_NOTIFYDBUS=$NOTIFYDBUS
if test "$BUILD_NOTIFYDBUS" = yes; then
if test "$NOTIFYDBUS_SUPPORT" = yes; then
$as_echo "#define NOTIFYDBUS_SUPPORT 1" >>confdefs.h
@@ -12147,7 +12147,7 @@ $as_echo "#define NOTIFYDBUS_SUPPORT 1" >>confdefs.h
fi
################################################################################
if test "$BUILD_NOTIFYDBUS" = yes; then
if test "$NOTIFYDBUS_SUPPORT" = yes; then
pkg_failed=no
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for NOTIFY_DBUS" >&5
@@ -15175,7 +15175,7 @@ done
fi
if test "$DMFILEMAPD" = yes; then
if test "$BUILD_DMFILEMAPD" = yes; then
for ac_header in sys/inotify.h
do :
ac_fn_c_check_header_mongrel "$LINENO" "sys/inotify.h" "ac_cv_header_sys_inotify_h" "$ac_includes_default"
@@ -15666,8 +15666,6 @@ _ACEOF

View File

@@ -1150,10 +1150,10 @@ AC_MSG_RESULT($BUILD_LVMPOLLD)
################################################################################
BUILD_LVMLOCKD=no
dnl -- Build lockdsanlock
AC_MSG_CHECKING(whether to build lockdsanlock)
AC_ARG_ENABLE(lockd-sanlock,
AC_HELP_STRING([--enable-lockd-sanlock],
dnl -- Build lvmlockdsanlock
AC_MSG_CHECKING(whether to build lvmlockdsanlock)
AC_ARG_ENABLE(lvmlockd-sanlock,
AC_HELP_STRING([--enable-lvmlockd-sanlock],
[enable the LVM lock daemon using sanlock]),
LOCKDSANLOCK=$enableval)
AC_MSG_RESULT($LOCKDSANLOCK)
@@ -1168,10 +1168,10 @@ if test "$BUILD_LOCKDSANLOCK" = yes; then
fi
################################################################################
dnl -- Build lockddlm
AC_MSG_CHECKING(whether to build lockddlm)
AC_ARG_ENABLE(lockd-dlm,
AC_HELP_STRING([--enable-lockd-dlm],
dnl -- Build lvmlockddlm
AC_MSG_CHECKING(whether to build lvmlockddlm)
AC_ARG_ENABLE(lvmlockd-dlm,
AC_HELP_STRING([--enable-lvmlockd-dlm],
[enable the LVM lock daemon using dlm]),
LOCKDDLM=$enableval)
AC_MSG_RESULT($LOCKDDLM)
@@ -1278,13 +1278,12 @@ dnl -- Check dmfilemapd
AC_MSG_CHECKING(whether to build dmfilemapd)
AC_ARG_ENABLE(dmfilemapd, AC_HELP_STRING([--enable-dmfilemapd],
[enable the dmstats filemap daemon]),
DMFILEMAPD=$enableval)
AC_MSG_RESULT($DMFILEMAPD)
BUILD_DMFILEMAPD=$DMFILEMAPD
AC_DEFINE([DMFILEMAPD], 1, [Define to 1 to enable the device-mapper filemap daemon.])
BUILD_DMFILEMAPD=$enableval, BUILD_DMFILEMAPD=no)
AC_MSG_RESULT($BUILD_DMFILEMAPD)
AC_DEFINE([DMFILEMAPD], $BUILD_DMFILEMAPD, [Define to 1 to enable the device-mapper filemap daemon.])
dnl -- dmfilemapd requires FIEMAP
if test "$DMFILEMAPD" = yes; then
if test "$BUILD_DMFILEMAPD" = yes; then
AC_CHECK_HEADER([linux/fiemap.h], , [AC_MSG_ERROR(--enable-dmfilemapd requires fiemap.h)])
fi
@@ -1294,19 +1293,17 @@ AC_MSG_CHECKING(whether to build notifydbus)
AC_ARG_ENABLE(notify-dbus,
AC_HELP_STRING([--enable-notify-dbus],
[enable LVM notification using dbus]),
NOTIFYDBUS=$enableval)
AC_MSG_RESULT($NOTIFYDBUS)
NOTIFYDBUS_SUPPORT=$enableval, NOTIFYDBUS_SUPPORT=no)
AC_MSG_RESULT($NOTIFYDBUS_SUPPORT)
BUILD_NOTIFYDBUS=$NOTIFYDBUS
if test "$BUILD_NOTIFYDBUS" = yes; then
if test "$NOTIFYDBUS_SUPPORT" = yes; then
AC_DEFINE([NOTIFYDBUS_SUPPORT], 1, [Define to 1 to include code that uses dbus notification.])
LIBS="-lsystemd $LIBS"
fi
################################################################################
dnl -- Look for dbus libraries
if test "$BUILD_NOTIFYDBUS" = yes; then
if test "$NOTIFYDBUS_SUPPORT" = yes; then
PKG_CHECK_MODULES(NOTIFY_DBUS, systemd >= 221, [HAVE_NOTIFY_DBUS=yes], $bailout)
fi
@@ -1872,7 +1869,7 @@ if test "$UDEV_SYNC" = yes; then
AC_CHECK_HEADERS(sys/ipc.h sys/sem.h,,hard_bailout)
fi
if test "$DMFILEMAPD" = yes; then
if test "$BUILD_DMFILEMAPD" = yes; then
AC_CHECK_HEADERS([sys/inotify.h],,hard_bailout)
fi
@@ -2022,7 +2019,6 @@ AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
AC_SUBST(BUILD_LOCKDDLM)
AC_SUBST(BUILD_DMFILEMAPD)
AC_SUBST(BUILD_NOTIFYDBUS)
AC_SUBST(CACHE)
AC_SUBST(CFLAGS)
AC_SUBST(CFLOW_CMD)
@@ -2071,7 +2067,6 @@ AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(DMEVENTD)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DMFILEMAPD)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
AC_SUBST(FSADM)

View File

@@ -47,10 +47,8 @@ struct dso_state {
struct dm_pool *mem;
int metadata_percent_check;
int metadata_percent;
int metadata_warn_once;
int data_percent_check;
int data_percent;
int data_warn_once;
uint64_t known_metadata_size;
uint64_t known_data_size;
unsigned fails;
@@ -253,9 +251,8 @@ void process_event(struct dm_task *dmt,
* action is called for: >50%, >55% ... >95%, 100%
*/
state->metadata_percent = dm_make_percent(tps->used_metadata_blocks, tps->total_metadata_blocks);
if (state->metadata_percent <= WARNING_THRESH)
state->metadata_warn_once = 0; /* Dropped bellow threshold, reset warn once */
else if (!state->metadata_warn_once++) /* Warn once when raised above threshold */
if ((state->metadata_percent > WARNING_THRESH) &&
(state->metadata_percent > state->metadata_percent_check))
log_warn("WARNING: Thin pool %s metadata is now %.2f%% full.",
device, dm_percent_to_float(state->metadata_percent));
if (state->metadata_percent > CHECK_MINIMUM) {
@@ -269,9 +266,8 @@ void process_event(struct dm_task *dmt,
state->metadata_percent_check = CHECK_MINIMUM;
state->data_percent = dm_make_percent(tps->used_data_blocks, tps->total_data_blocks);
if (state->data_percent <= WARNING_THRESH)
state->data_warn_once = 0;
else if (!state->data_warn_once++)
if ((state->data_percent > WARNING_THRESH) &&
(state->data_percent > state->data_percent_check))
log_warn("WARNING: Thin pool %s data is now %.2f%% full.",
device, dm_percent_to_float(state->data_percent));
if (state->data_percent > CHECK_MINIMUM) {

View File

@@ -266,8 +266,6 @@ static int _parse_args(int argc, char **argv, struct filemap_monitor *fm)
return 0;
}
memset(fm, 0, sizeof(*fm));
/*
* We don't know the true nr_regions at daemon start time,
* and it is not worth a dm_stats_list()/group walk to count:
@@ -359,30 +357,33 @@ static int _parse_args(int argc, char **argv, struct filemap_monitor *fm)
return 1;
}
static int _filemap_fd_check_changed(struct filemap_monitor *fm)
static int _filemap_fd_update_blocks(struct filemap_monitor *fm)
{
int64_t blocks, old_blocks;
struct stat buf;
if (fm->fd < 0) {
log_error("Filemap fd is not open.");
return -1;
return 0;
}
if (fstat(fm->fd, &buf)) {
log_error("Failed to fstat filemap file descriptor.");
return -1;
return 0;
}
blocks = buf.st_blocks;
fm->blocks = buf.st_blocks;
/* first check? */
if (fm->blocks < 0)
old_blocks = buf.st_blocks;
else
old_blocks = fm->blocks;
return 1;
}
fm->blocks = blocks;
static int _filemap_fd_check_changed(struct filemap_monitor *fm)
{
int64_t old_blocks;
old_blocks = fm->blocks;
if (!_filemap_fd_update_blocks(fm))
return -1;
return (fm->blocks != old_blocks);
}
@@ -564,6 +565,7 @@ static int _filemap_monitor_check_file_unlinked(struct filemap_monitor *fm)
ssize_t len;
fm->deleted = 0;
same = 0;
if ((fd = open(fm->path, O_RDONLY)) < 0)
goto check_unlinked;
@@ -684,7 +686,10 @@ static int _update_regions(struct dm_stats *dms, struct filemap_monitor *fm)
for (region = regions; *region != DM_STATS_REGIONS_ALL; region++)
nr_regions++;
if (regions[0] != fm->group_id) {
if (!nr_regions)
log_warn("File contains no extents: exiting.");
if (nr_regions && (regions[0] != fm->group_id)) {
log_warn("group_id changed from " FMTu64 " to " FMTu64,
fm->group_id, regions[0]);
fm->group_id = regions[0];
@@ -715,6 +720,9 @@ static int _dmfilemapd(struct filemap_monitor *fm)
if (!_filemap_monitor_set_notify(fm))
goto bad;
if (!_filemap_fd_update_blocks(fm))
goto bad;
if (!dm_stats_list(dms, DM_STATS_ALL_PROGRAMS)) {
log_error("Failed to list stats handle.");
goto bad;
@@ -748,17 +756,16 @@ static int _dmfilemapd(struct filemap_monitor *fm)
if ((check = _filemap_fd_check_changed(fm)) < 0)
goto bad;
if (!check)
goto wait;
if (!_update_regions(dms, fm))
if (check && !_update_regions(dms, fm))
goto bad;
running = !!fm->nr_regions;
if (!running)
continue;
wait:
_filemap_monitor_wait(FILEMAPD_WAIT_USECS);
running = !!fm->nr_regions;
/* mode=inode termination condions */
if (fm->mode == DM_FILEMAPD_FOLLOW_INODE) {
if (!_filemap_monitor_check_file_unlinked(fm))
@@ -801,6 +808,8 @@ int main(int argc, char **argv)
{
struct filemap_monitor fm;
memset(&fm, 0, sizeof(fm));
if (!_parse_args(argc, argv, &fm)) {
dm_free(fm.path);
return 1;

View File

@@ -100,7 +100,7 @@ class AutomatedProperties(dbus.service.Object):
raise dbus.exceptions.DBusException(
obj._ap_interface,
'The object %s does not implement the %s interface'
% (self.__class__, interface_name))
% (obj.__class__, interface_name))
@dbus.service.method(dbus_interface=dbus.PROPERTIES_IFACE,
in_signature='s', out_signature='a{sv}',

View File

@@ -9,12 +9,13 @@
import subprocess
from . import cfg
from .cmdhandler import options_to_cli_args
from .cmdhandler import options_to_cli_args, LvmExecutionMeta
import dbus
from .utils import pv_range_append, pv_dest_ranges, log_error, log_debug,\
add_no_notify
import os
import threading
import time
def pv_move_lv_cmd(move_options, lv_full_name,
@@ -47,6 +48,11 @@ def _move_merge(interface_name, command, job_state):
# Instruct lvm to not register an event with us
command = add_no_notify(command)
#(self, start, ended, cmd, ec, stdout_txt, stderr_txt)
meta = LvmExecutionMeta(time.time(), 0, command, -1000, None, None)
cfg.blackbox.add(meta)
process = subprocess.Popen(command, stdout=subprocess.PIPE,
env=os.environ,
stderr=subprocess.PIPE, close_fds=True)
@@ -74,6 +80,11 @@ def _move_merge(interface_name, command, job_state):
out = process.communicate()
with meta.lock:
meta.ended = time.time()
meta.ec = process.returncode
meta.stderr_txt = out[1]
if process.returncode == 0:
job_state.Percent = 100
else:

View File

@@ -37,6 +37,7 @@ cmd_lock = threading.RLock()
class LvmExecutionMeta(object):
def __init__(self, start, ended, cmd, ec, stdout_txt, stderr_txt):
self.lock = threading.RLock()
self.start = start
self.ended = ended
self.cmd = cmd
@@ -45,12 +46,13 @@ class LvmExecutionMeta(object):
self.stderr_txt = stderr_txt
def __str__(self):
return "EC= %d for %s\n" \
"STARTED: %f, ENDED: %f\n" \
"STDOUT=%s\n" \
"STDERR=%s\n" % \
(self.ec, str(self.cmd), self.start, self.ended, self.stdout_txt,
self.stderr_txt)
with self.lock:
return "EC= %d for %s\n" \
"STARTED: %f, ENDED: %f\n" \
"STDOUT=%s\n" \
"STDERR=%s\n" % \
(self.ec, str(self.cmd), self.start, self.ended, self.stdout_txt,
self.stderr_txt)
class LvmFlightRecorder(object):
@@ -279,7 +281,7 @@ def vg_lv_create(vg_name, create_options, name, size_bytes, pv_dests):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--size', str(size_bytes) + 'B'])
cmd.extend(['--name', name, vg_name])
cmd.extend(['--name', name, vg_name, '--yes'])
pv_dest_ranges(cmd, pv_dests)
return call(cmd)
@@ -304,6 +306,8 @@ def _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool):
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
cmd.extend(['--yes'])
return cmd
@@ -340,7 +344,7 @@ def _vg_lv_create_raid(vg_name, create_options, name, raid_type, size_bytes,
if stripe_size_kb != 0:
cmd.extend(['--stripesize', str(stripe_size_kb)])
cmd.extend(['--name', name, vg_name])
cmd.extend(['--name', name, vg_name, '--yes'])
return call(cmd)
@@ -361,7 +365,7 @@ def vg_lv_create_mirror(
cmd.extend(['--type', 'mirror'])
cmd.extend(['--mirrors', str(num_copies)])
cmd.extend(['--size', str(size_bytes) + 'B'])
cmd.extend(['--name', name, vg_name])
cmd.extend(['--name', name, vg_name, '--yes'])
return call(cmd)
@@ -415,7 +419,7 @@ def lv_lv_create(lv_full_name, create_options, name, size_bytes):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--virtualsize', str(size_bytes) + 'B', '-T'])
cmd.extend(['--name', name, lv_full_name])
cmd.extend(['--name', name, lv_full_name, '--yes'])
return call(cmd)
@@ -551,7 +555,7 @@ def pv_resize(device, size_bytes, create_options):
cmd.extend(options_to_cli_args(create_options))
if size_bytes != 0:
cmd.extend(['--setphysicalvolumesize', str(size_bytes) + 'B'])
cmd.extend(['--yes', '--setphysicalvolumesize', str(size_bytes) + 'B'])
cmd.extend([device])
return call(cmd)
@@ -616,10 +620,10 @@ def vg_reduce(vg_name, missing, pv_devices, reduce_options):
cmd = ['vgreduce']
cmd.extend(options_to_cli_args(reduce_options))
if len(pv_devices) == 0:
cmd.append('--all')
if missing:
cmd.append('--removemissing')
elif len(pv_devices) == 0:
cmd.append('--all')
cmd.append(vg_name)
cmd.extend(pv_devices)

View File

@@ -82,10 +82,10 @@ class StateUpdate(object):
@staticmethod
def update_thread(obj):
queued_requests = []
while cfg.run.value != 0:
# noinspection PyBroadException
try:
queued_requests = []
refresh = True
emit_signal = True
cache_refresh = True
@@ -96,7 +96,7 @@ class StateUpdate(object):
wait = not obj.deferred
obj.deferred = False
if wait:
if len(queued_requests) == 0 and wait:
queued_requests.append(obj.queue.get(True, 2))
# Ok we have one or the deferred queue has some,
@@ -131,11 +131,17 @@ class StateUpdate(object):
for i in queued_requests:
i.set_result(num_changes)
# Only clear out the requests after we have given them a result
# otherwise we can orphan the waiting threads and they never
# wake up if we get an exception
queued_requests = []
except queue.Empty:
pass
except Exception:
st = traceback.format_exc()
log_error("update_thread exception: \n%s" % st)
cfg.blackbox.dump()
def __init__(self):
self.lock = threading.RLock()

View File

@@ -6,7 +6,6 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from utils import log_debug
from .automatedproperties import AutomatedProperties
from . import utils
@@ -48,7 +47,7 @@ class Manager(AutomatedProperties):
pv = cfg.om.get_object_path_by_uuid_lvm_id(device, device)
if pv:
raise dbus.exceptions.DBusException(
MANAGER_INTERFACE, "PV Already exists!")
MANAGER_INTERFACE, "PV %s Already exists!" % device)
rc, out, err = cmdhandler.pv_create(create_options, [device])
Manager.handle_execute(rc, out, err)
@@ -145,7 +144,7 @@ class Manager(AutomatedProperties):
p = cfg.om.get_object_path_by_uuid_lvm_id(key, key)
if not p:
p = '/'
log_debug('LookUpByLvmId: key = %s, result = %s' % (key, p))
utils.log_debug('LookUpByLvmId: key = %s, result = %s' % (key, p))
return p
@dbus.service.method(

View File

@@ -223,8 +223,9 @@ class ObjectManager(AutomatedProperties):
:param lvm_id: The lvm identifier
"""
with self.rlock:
if lvm_id in self._id_to_object_path:
return self.get_object_by_path(self._id_to_object_path[lvm_id])
lookup_rc = self._id_lookup(lvm_id)
if lookup_rc:
return self.get_object_by_path(lookup_rc)
return None
def get_object_path_by_lvm_id(self, lvm_id):
@@ -234,8 +235,9 @@ class ObjectManager(AutomatedProperties):
:return: Object path or '/' if not found
"""
with self.rlock:
if lvm_id in self._id_to_object_path:
return self._id_to_object_path[lvm_id]
lookup_rc = self._id_lookup(lvm_id)
if lookup_rc:
return lookup_rc
return '/'
def _uuid_verify(self, path, uuid, lvm_id):

View File

@@ -519,7 +519,9 @@ def add_no_notify(cmdline):
if '--config' in cmdline:
for i, arg in enumerate(cmdline):
if arg == '--config':
cmdline[i] += "global/notify_dbus=0"
if len(cmdline) <= i+1:
raise dbus.exceptions.DBusException("Missing value for --config option.")
cmdline[i+1] += " global/notify_dbus=0"
break
else:
cmdline.extend(['--config', 'global/notify_dbus=0'])

View File

@@ -469,8 +469,9 @@ cfg(allocation_mirror_logs_require_separate_pvs_CFG, "mirror_logs_require_separa
cfg(allocation_raid_stripe_all_devices_CFG, "raid_stripe_all_devices", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ALLOCATION_STRIPE_ALL_DEVICES, vsn(2, 2, 162), NULL, 0, NULL,
"Stripe across all PVs when RAID stripes are not specified.\n"
"If enabled, all PVs in the VG or on the command line are used for raid0/4/5/6/10\n"
"when the command does not specify the number of stripes to use.\n"
"If enabled, all PVs in the VG or on the command line are used for\n"
"raid0/4/5/6/10 when the command does not specify the number of\n"
"stripes to use.\n"
"This was the default behaviour until release 2.02.162.\n")
cfg(allocation_cache_pool_metadata_require_separate_pvs_CFG, "cache_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_BOOL, DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 106), NULL, 0, NULL,
@@ -934,7 +935,7 @@ cfg(global_use_lvmetad_CFG, "use_lvmetad", global_CFG_SECTION, 0, CFG_TYPE_BOOL,
"devices/global_filter.\n")
cfg(global_lvmetad_update_wait_time_CFG, "lvmetad_update_wait_time", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_LVMETAD_UPDATE_WAIT_TIME, vsn(2, 2, 151), NULL, 0, NULL,
"The number of seconds a command will wait for lvmetad update to finish.\n"
"Number of seconds a command will wait for lvmetad update to finish.\n"
"After waiting for this period, a command will not use lvmetad, and\n"
"will revert to disk scanning.\n")

View File

@@ -202,7 +202,7 @@
#define DEFAULT_ACTIVATION_MODE "degraded"
#define DEFAULT_USE_LINEAR_TARGET 1
#define DEFAULT_STRIPE_FILLER "error"
#define DEFAULT_RAID_REGION_SIZE 512 /* KB */
#define DEFAULT_RAID_REGION_SIZE 2048 /* KB */
#define DEFAULT_INTERVAL 15
#define DEFAULT_MAX_HISTORY 100

View File

@@ -358,11 +358,12 @@ static int _print_header(struct cmd_context *cmd, struct formatter *f,
static int _print_flag_config(struct formatter *f, uint64_t status, int type)
{
char buffer[4096];
if (!print_flags(status, type | STATUS_FLAG, buffer, sizeof(buffer)))
if (!print_flags(buffer, sizeof(buffer), type, STATUS_FLAG, status))
return_0;
outf(f, "status = %s", buffer);
if (!print_flags(status, type, buffer, sizeof(buffer)))
if (!print_flags(buffer, sizeof(buffer), type, COMPATIBLE_FLAG, status))
return_0;
outf(f, "flags = %s", buffer);
@@ -501,7 +502,13 @@ static int _print_vg(struct formatter *f, struct volume_group *vg)
*/
static const char *_get_pv_name_from_uuid(struct formatter *f, char *uuid)
{
return dm_hash_lookup(f->pv_names, uuid);
const char *pv_name = dm_hash_lookup(f->pv_names, uuid);
if (!pv_name)
log_error(INTERNAL_ERROR "PV name for uuid %s missing from text metadata export hash table.",
uuid);
return pv_name;
}
static const char *_get_pv_name(struct formatter *f, struct physical_volume *pv)
@@ -577,6 +584,11 @@ static int _print_pvs(struct formatter *f, struct volume_group *vg)
static int _print_segment(struct formatter *f, struct volume_group *vg,
int count, struct lv_segment *seg)
{
char buffer[2048];
if (!print_segtype_lvflags(buffer, sizeof(buffer), seg->lv->status))
return_0;
outf(f, "segment%u {", count);
_inc_indent(f);
@@ -587,7 +599,8 @@ static int _print_segment(struct formatter *f, struct volume_group *vg,
if (seg->reshape_len)
outsize(f, (uint64_t) seg->reshape_len * vg->extent_size,
"reshape_count = %u", seg->reshape_len);
outf(f, "type = \"%s\"", seg->segtype->name);
outf(f, "type = \"%s%s\"", seg->segtype->name, buffer);
if (!_out_list(f, &seg->tags, "tags"))
return_0;
@@ -607,6 +620,7 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
{
const char *name;
unsigned int s;
struct physical_volume *pv;
outnl(f);
@@ -616,7 +630,13 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
for (s = 0; s < seg->area_count; s++) {
switch (seg_type(seg, s)) {
case AREA_PV:
if (!(name = _get_pv_name(f, seg_pv(seg, s))))
if (!(pv = seg_pv(seg, s))) {
log_error(INTERNAL_ERROR "Missing PV for area %" PRIu32 " of %s segment of LV %s.",
s, type, display_lvname(seg->lv));
return 0;
}
if (!(name = _get_pv_name(f, pv)))
return_0;
outf(f, "\"%s\", %u%s", name,
@@ -650,6 +670,8 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
break;
case AREA_UNASSIGNED:
log_error(INTERNAL_ERROR "Invalid type for area %" PRIu32 " of %s segment of LV %s.",
s, type, display_lvname(seg->lv));
return 0;
}
}

View File

@@ -47,6 +47,7 @@ static const struct flag _pv_flags[] = {
{ALLOCATABLE_PV, "ALLOCATABLE", STATUS_FLAG},
{EXPORTED_VG, "EXPORTED", STATUS_FLAG},
{MISSING_PV, "MISSING", COMPATIBLE_FLAG},
{MISSING_PV, "MISSING", STATUS_FLAG},
{UNLABELLED_PV, NULL, 0},
{0, NULL, 0}
};
@@ -67,7 +68,7 @@ static const struct flag _lv_flags[] = {
{LV_WRITEMOSTLY, "WRITEMOSTLY", STATUS_FLAG},
{LV_ACTIVATION_SKIP, "ACTIVATION_SKIP", COMPATIBLE_FLAG},
{LV_ERROR_WHEN_FULL, "ERROR_WHEN_FULL", COMPATIBLE_FLAG},
{LV_METADATA_FORMAT, "METADATA_FORMAT", STATUS_FLAG},
{LV_METADATA_FORMAT, "METADATA_FORMAT", SEGTYPE_FLAG},
{LV_NOSCAN, NULL, 0},
{LV_TEMPORARY, NULL, 0},
{POOL_METADATA_SPARE, NULL, 0},
@@ -101,9 +102,9 @@ static const struct flag _lv_flags[] = {
{0, NULL, 0}
};
static const struct flag *_get_flags(int type)
static const struct flag *_get_flags(enum pv_vg_lv_e type)
{
switch (type & ~STATUS_FLAG) {
switch (type) {
case VG_FLAGS:
return _vg_flags;
@@ -114,7 +115,7 @@ static const struct flag *_get_flags(int type)
return _lv_flags;
}
log_error("Unknown flag set requested.");
log_error(INTERNAL_ERROR "Unknown flag set requested.");
return NULL;
}
@@ -123,7 +124,7 @@ static const struct flag *_get_flags(int type)
* using one of the tables defined at the top of
* the file.
*/
int print_flags(uint64_t status, int type, char *buffer, size_t size)
int print_flags(char *buffer, size_t size, enum pv_vg_lv_e type, int mask, uint64_t status)
{
int f, first = 1;
const struct flag *flags;
@@ -132,13 +133,13 @@ int print_flags(uint64_t status, int type, char *buffer, size_t size)
return_0;
if (!emit_to_buffer(&buffer, &size, "["))
return 0;
return_0;
for (f = 0; flags[f].mask; f++) {
if (status & flags[f].mask) {
status &= ~flags[f].mask;
if ((type & STATUS_FLAG) != flags[f].kind)
if (mask != flags[f].kind)
continue;
/* Internal-only flag? */
@@ -147,18 +148,18 @@ int print_flags(uint64_t status, int type, char *buffer, size_t size)
if (!first) {
if (!emit_to_buffer(&buffer, &size, ", "))
return 0;
return_0;
} else
first = 0;
if (!emit_to_buffer(&buffer, &size, "\"%s\"",
flags[f].description))
return 0;
flags[f].description))
return_0;
}
}
if (!emit_to_buffer(&buffer, &size, "]"))
return 0;
return_0;
if (status)
log_warn(INTERNAL_ERROR "Metadata inconsistency: "
@@ -167,9 +168,9 @@ int print_flags(uint64_t status, int type, char *buffer, size_t size)
return 1;
}
int read_flags(uint64_t *status, int type, const struct dm_config_value *cv)
int read_flags(uint64_t *status, enum pv_vg_lv_e type, int mask, const struct dm_config_value *cv)
{
int f;
unsigned f;
uint64_t s = UINT64_C(0);
const struct flag *flags;
@@ -186,7 +187,8 @@ int read_flags(uint64_t *status, int type, const struct dm_config_value *cv)
}
for (f = 0; flags[f].description; f++)
if (!strcmp(flags[f].description, cv->v.str)) {
if ((flags[f].kind & mask) &&
!strcmp(flags[f].description, cv->v.str)) {
s |= flags[f].mask;
break;
}
@@ -200,7 +202,7 @@ int read_flags(uint64_t *status, int type, const struct dm_config_value *cv)
* by this case.
*/
s |= PARTIAL_VG;
} else if (!flags[f].description && (type & STATUS_FLAG)) {
} else if (!flags[f].description && (mask & STATUS_FLAG)) {
log_error("Unknown status flag '%s'.", cv->v.str);
return 0;
}
@@ -212,3 +214,71 @@ int read_flags(uint64_t *status, int type, const struct dm_config_value *cv)
*status |= s;
return 1;
}
/*
* Parse extra status flags from segment "type" string.
* These flags are seen as INCOMPATIBLE by any older lvm2 code.
* All flags separated by '+' are trimmed from passed string.
* All UNKNOWN flags will again cause the "UNKNOWN" segtype.
*
* Note: using these segtype status flags instead of actual
* status flags ensures wanted incompatibility.
*/
int read_segtype_lvflags(uint64_t *status, char *segtype_str)
{
unsigned i;
const struct flag *flags = _lv_flags;
char *delim;
char *flag, *buffer, *str;
if (!(str = strchr(segtype_str, '+')))
return 1; /* No flags */
if (!(buffer = dm_strdup(str + 1))) {
log_error("Cannot duplicate segment string.");
return 0;
}
delim = buffer;
do {
flag = delim;
if ((delim = strchr(delim, '+')))
*delim++ = '\0';
for (i = 0; flags[i].description; i++)
if ((flags[i].kind & SEGTYPE_FLAG) &&
!strcmp(flags[i].description, flag)) {
*status |= flags[i].mask;
break;
}
} while (delim && flags[i].description); /* Till no more flags in type appear */
if (!flags[i].description)
/* Unknown flag is incompatible - returns unmodified segtype_str */
log_warn("WARNING: Unrecognised flag %s in segment type %s.",
flag, segtype_str);
else
*str = '\0'; /* Cut away 1st. '+' */
dm_free(buffer);
return 1;
}
int print_segtype_lvflags(char *buffer, size_t size, uint64_t status)
{
unsigned i;
const struct flag *flags = _lv_flags;
buffer[0] = 0;
for (i = 0; flags[i].mask; i++)
if ((flags[i].kind & SEGTYPE_FLAG) &&
(status & flags[i].mask) &&
!emit_to_buffer(&buffer, &size, "+%s",
flags[i].description))
return 0;
return 1;
}
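To make the mechanism above concrete: print_segtype_lvflags() appends any SEGTYPE_FLAG bits to the exported segment type string, and read_segtype_lvflags() strips them again on import. A segment carrying such a flag would be exported roughly like this (the counts and the choice of raid1 are illustrative only):

    segment1 {
        start_extent = 0
        extent_count = 256

        type = "raid1+METADATA_FORMAT"
        ...
    }

An older lvm2 build that does not recognise METADATA_FORMAT cannot resolve the combined string to any segment type and so falls back to the UNKNOWN segtype, which is exactly the incompatibility the comment describes; a build that does recognise the flag cuts the string at the first '+' and sets the corresponding bit (here LV_METADATA_FORMAT) in lv->status.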

View File

@@ -35,14 +35,16 @@
* VGs, PVs and LVs all have status bitsets, we gather together
* common code for reading and writing them.
*/
enum {
COMPATIBLE_FLAG = 0x0,
enum pv_vg_lv_e {
PV_FLAGS = 1,
VG_FLAGS,
PV_FLAGS,
LV_FLAGS,
STATUS_FLAG = 0x8,
};
#define COMPATIBLE_FLAG 0x01
#define STATUS_FLAG 0x02
#define SEGTYPE_FLAG 0x04
struct text_vg_version_ops {
int (*check_version) (const struct dm_config_tree * cf);
struct volume_group *(*read_vg) (struct format_instance * fid,
@@ -58,8 +60,11 @@ struct text_vg_version_ops {
struct text_vg_version_ops *text_vg_vsn1_init(void);
int print_flags(uint64_t status, int type, char *buffer, size_t size);
int read_flags(uint64_t *status, int type, const struct dm_config_value *cv);
int print_flags(char *buffer, size_t size, enum pv_vg_lv_e type, int mask, uint64_t status);
int read_flags(uint64_t *status, enum pv_vg_lv_e type, int mask, const struct dm_config_value *cv);
int print_segtype_lvflags(char *buffer, size_t size, uint64_t status);
int read_segtype_lvflags(uint64_t *status, char *segtype_scr);
int text_vg_export_file(struct volume_group *vg, const char *desc, FILE *fp);
size_t text_vg_export_raw(struct volume_group *vg, const char *desc, char **buf);
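A short sketch of how the reworked interface is intended to be called; LV_FLAGS and the mask names come from the definitions above, but the snippet is only illustrative (the real call sites are in export.c and import_vsn1.c):

    char buf[4096];
    uint64_t status = lv->status;

    /* Export: emit only STATUS_FLAG entries of the LV table, e.g. status = ["VISIBLE"] */
    if (!print_flags(buf, sizeof(buf), LV_FLAGS, STATUS_FLAG, status))
        return_0;

    /* Import: accept both status and segtype flags for backward compatible metadata */
    if (!read_flags(&status, LV_FLAGS, STATUS_FLAG | SEGTYPE_FLAG, cv))
        return_0;

The enum pv_vg_lv_e value selects which flag table to search, while the mask selects which kind of flag (compatible, status or segtype) may match; that separation is what lets METADATA_FORMAT move from the status list into the segment type string without breaking import of older metadata.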

View File

@@ -140,13 +140,14 @@ static int _read_flag_config(const struct dm_config_node *n, uint64_t *status, i
return 0;
}
if (!(read_flags(status, type | STATUS_FLAG, cv))) {
/* For backward compatible metadata accept both type of flags */
if (!(read_flags(status, type, STATUS_FLAG | SEGTYPE_FLAG, cv))) {
log_error("Could not read status flags.");
return 0;
}
if (dm_config_get_list(n, "flags", &cv)) {
if (!(read_flags(status, type, cv))) {
if (!(read_flags(status, type, COMPATIBLE_FLAG, cv))) {
log_error("Could not read flags.");
return 0;
}
@@ -357,6 +358,7 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
uint32_t area_extents, start_extent, extent_count, reshape_count, data_copies;
struct segment_type *segtype;
const char *segtype_str;
char *segtype_with_flags;
if (!sn_child) {
log_error("Empty segment section.");
@@ -388,9 +390,24 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
return 0;
}
if (!(segtype = get_segtype_from_string(lv->vg->cmd, segtype_str)))
/* Locally duplicate to parse out status flag bits */
if (!(segtype_with_flags = dm_pool_strdup(mem, segtype_str))) {
log_error("Cannot duplicate segtype string.");
return 0;
}
if (!read_segtype_lvflags(&lv->status, segtype_with_flags)) {
log_error("Couldn't read segtype for logical volume %s.",
display_lvname(lv));
return 0;
}
if (!(segtype = get_segtype_from_string(lv->vg->cmd, segtype_with_flags)))
return_0;
/* Can drop temporary string here as nothing has allocated from VGMEM meanwhile */
dm_pool_free(mem, segtype_with_flags);
if (segtype->ops->text_import_area_count &&
!segtype->ops->text_import_area_count(sn_child, &area_count))
return_0;

View File

@@ -7514,7 +7514,8 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
status |= LV_NOTSYNCED;
}
lp->region_size = adjusted_mirror_region_size(vg->extent_size,
lp->region_size = adjusted_mirror_region_size(vg->cmd,
vg->extent_size,
lp->extents,
lp->region_size, 0,
vg_is_clustered(vg));

View File

@@ -222,7 +222,7 @@ static void _check_non_raid_seg_members(struct lv_segment *seg, int *error_count
}
/*
* Check RAID segment sruct members of @seg for acceptable
* Check RAID segment struct members of @seg for acceptable
* properties and increment @error_count for any bogus ones.
*/
static void _check_raid_seg(struct lv_segment *seg, int *error_count)

View File

@@ -748,7 +748,8 @@ int pvremove_many(struct cmd_context *cmd, struct dm_list *pv_names,
int pv_resize_single(struct cmd_context *cmd,
struct volume_group *vg,
struct physical_volume *pv,
const uint64_t new_size);
const uint64_t new_size,
int yes);
int pv_analyze(struct cmd_context *cmd, const char *pv_name,
uint64_t label_sector);
@@ -1158,7 +1159,8 @@ uint32_t lv_mirror_count(const struct logical_volume *lv);
/* Remove CMIRROR_REGION_COUNT_LIMIT when http://bugzilla.redhat.com/682771 is fixed */
#define CMIRROR_REGION_COUNT_LIMIT (256*1024 * 8)
uint32_t adjusted_mirror_region_size(uint32_t extent_size, uint32_t extents,
uint32_t adjusted_mirror_region_size(struct cmd_context *cmd,
uint32_t extent_size, uint32_t extents,
uint32_t region_size, int internal, int clustered);
int remove_mirrors_from_segments(struct logical_volume *lv,

View File

@@ -1089,8 +1089,7 @@ static struct volume_group *_vg_make_handle(struct cmd_context *cmd,
if (!vg && !(vg = alloc_vg("vg_make_handle", cmd, NULL)))
return_NULL;
if (vg->read_status != failure)
vg->read_status = failure;
vg->read_status = failure;
if (vg->fid && !_vg_update_vg_committed(vg))
vg->read_status |= FAILED_ALLOCATION;
@@ -1122,6 +1121,7 @@ int vg_has_unknown_segments(const struct volume_group *vg)
struct volume_group *vg_lock_and_create(struct cmd_context *cmd, const char *vg_name)
{
uint32_t rc;
struct volume_group *vg;
if (!validate_name(vg_name)) {
log_error("Invalid vg name %s", vg_name);
@@ -1134,7 +1134,11 @@ struct volume_group *vg_lock_and_create(struct cmd_context *cmd, const char *vg_
/* NOTE: let caller decide - this may be check for existence */
return _vg_make_handle(cmd, NULL, rc);
return vg_create(cmd, vg_name);
vg = vg_create(cmd, vg_name);
if (!vg || vg_read_error(vg))
unlock_vg(cmd, NULL, vg_name);
return vg;
}
/*
@@ -3707,6 +3711,37 @@ struct _vg_read_orphan_baton {
int repair;
};
/*
* If we know that the PV is orphan, meaning there's at least one MDA on
* that PV which does not reference any VG and at the same time there's
* PV_EXT_USED flag set, we're certainly in an inconsistent state and we
* need to fix this.
*
* For example, such situation can happen during vgremove/vgreduce if we
* removed/reduced the VG, but we haven't written PV headers yet because
* vgremove stopped abruptly for whatever reason just before writing new
* PV headers with updated state, including PV extension flags (and so the
* PV_EXT_USED flag).
*
* However, in case the PV has no MDAs at all, we can't double-check
* whether the PV_EXT_USED is correct or not - if that PV is marked
* as used, it's either:
* - really used (but other disks with MDAs are missing)
* - or the error state as described above is hit
*
* User needs to overwrite the PV header directly if it's really clear
* the PV having no MDAs does not belong to any VG and at the same time
* it's still marked as being in use (pvcreate -ff <dev_name> will fix this).
*
* Note that the above doesn't account for the case where the PV has
* VG metadata that fails to be parsed. In that case, the PV looks
* like an in-use orphan, and is auto-repaired here. A PV with
* unparsable metadata should be kept on a special list of devices
* (like duplicate PVs) that are not auto-repaired, cannot be used
* by pvcreate, and are displayed with a special flag by 'pvs'.
*/
#if 0
static int _check_or_repair_orphan_pv_ext(struct physical_volume *pv,
struct lvmcache_info *info,
struct _vg_read_orphan_baton *b)
@@ -3760,12 +3795,15 @@ static int _check_or_repair_orphan_pv_ext(struct physical_volume *pv,
return 1;
}
#endif
static int _vg_read_orphan_pv(struct lvmcache_info *info, void *baton)
{
struct _vg_read_orphan_baton *b = baton;
struct physical_volume *pv = NULL;
struct pv_list *pvl;
uint32_t ext_version;
uint32_t ext_flags;
if (!(pv = _pv_read(b->vg->cmd, b->vg->vgmem, dev_name(lvmcache_device(info)),
b->vg->fid, b->warn_flags, 0))) {
@@ -3781,10 +3819,59 @@ static int _vg_read_orphan_pv(struct lvmcache_info *info, void *baton)
pvl->pv = pv;
add_pvl_to_vgs(b->vg, pvl);
/*
* FIXME: this bit of code that does the auto repair is disabled
* until we can distinguish cases where the repair should not
* happen, i.e. the VG metadata could not be read/parsed.
*
* A PV holding VG metadata that lvm can't understand
* (e.g. damaged, checksum error, unrecognized flag)
* will appear as an in-use orphan, and would be cleared
* by this repair code. Disable this repair until the
* code can keep track of these problematic PVs, and
* distinguish them from actual in-use orphans.
*/
/*
if (!_check_or_repair_orphan_pv_ext(pv, info, baton)) {
stack;
return 0;
}
*/
/*
* Nothing to do if PV header extension < 2:
* - version 0 is PV header without any extensions,
* - version 1 has bootloader area support only and
* we're not checking anything for that one here.
*/
ext_version = lvmcache_ext_version(info);
ext_flags = lvmcache_ext_flags(info);
/*
* Warn about a PV that has the in-use flag set, but appears in
* the orphan VG (no VG was found referencing it.)
* There are a number of conditions that could lead to this:
*
* . The PV was created with no mdas and is used in a VG with
* other PVs (with metadata) that have not yet appeared on
* the system. So, no VG metadata is found by lvm which
* references the in-use PV with no mdas.
*
* . vgremove could have failed after clearing mdas but
* before clearing the in-use flag. In this case, the
* in-use flag needs to be manually cleared on the PV.
*
* . The PV may have damaged/unrecognized VG metadata
* that lvm could not read.
*
* . The PV may have no mdas, and the PVs with the metadata
* may have damaged/unrecognized metadata.
*/
if ((ext_version >= 2) && (ext_flags & PV_EXT_USED)) {
log_warn("WARNING: PV %s is marked in use but no VG was found using it.", pv_dev_name(pv));
log_warn("WARNING: PV %s might need repairing.", pv_dev_name(pv));
}
return 1;
}
@@ -3910,7 +3997,13 @@ static int _check_reappeared_pv(struct volume_group *correct_vg,
* confusing.
*/
if (correct_vg->cmd->handles_missing_pvs)
return rv;
return rv;
/*
* Skip this if there is no underlying device present for this PV.
*/
if (!pv->dev)
return rv;
dm_list_iterate_items(pvl, &correct_vg->pvs)
if (pv->dev == pvl->pv->dev && is_missing_pv(pvl->pv)) {
@@ -4039,6 +4132,7 @@ static int _check_or_repair_pv_ext(struct cmd_context *cmd,
struct volume_group *vg,
int repair, int *inconsistent_pvs)
{
char uuid[64] __attribute__((aligned(8)));
struct lvmcache_info *info;
uint32_t ext_version, ext_flags;
struct pv_list *pvl;
@@ -4052,6 +4146,14 @@ static int _check_or_repair_pv_ext(struct cmd_context *cmd,
if (is_missing_pv(pvl->pv))
continue;
if (!pvl->pv->dev) {
/* is_missing_pv doesn't catch NULL dev */
memset(&uuid, 0, sizeof(uuid));
id_write_format(&pvl->pv->id, uuid, sizeof(uuid));
log_warn("WARNING: Not repairing PV %s with missing device.", uuid);
continue;
}
if (!(info = lvmcache_info_from_pvid(pvl->pv->dev->pvid, pvl->pv->dev, 0))) {
log_error("Failed to find cached info for PV %s.", pv_dev_name(pvl->pv));
goto out;
@@ -4165,8 +4267,7 @@ static struct volume_group *_vg_read(struct cmd_context *cmd,
if (lvmetad_used() && !use_precommitted) {
if ((correct_vg = lvmcache_get_vg(cmd, vgname, vgid, precommitted))) {
dm_list_iterate_items(pvl, &correct_vg->pvs)
if (pvl->pv->dev)
reappeared += _check_reappeared_pv(correct_vg, pvl->pv, *consistent);
reappeared += _check_reappeared_pv(correct_vg, pvl->pv, *consistent);
if (reappeared && *consistent)
*consistent = _repair_inconsistent_vg(correct_vg);
else
@@ -5863,7 +5964,11 @@ static struct volume_group *_vg_lock_and_read(struct cmd_context *cmd, const cha
if (failure)
goto_bad;
return _vg_make_handle(cmd, vg, failure);
if (!(vg = _vg_make_handle(cmd, vg, failure)) || vg_read_error(vg))
if (!already_locked)
unlock_vg(cmd, vg, vg_name);
return vg;
bad:
if (!already_locked)
@@ -5924,7 +6029,12 @@ struct volume_group *vg_read(struct cmd_context *cmd, const char *vg_name,
struct volume_group *vg_read_for_update(struct cmd_context *cmd, const char *vg_name,
const char *vgid, uint32_t read_flags, uint32_t lockd_state)
{
return vg_read(cmd, vg_name, vgid, read_flags | READ_FOR_UPDATE, lockd_state);
struct volume_group *vg = vg_read(cmd, vg_name, vgid, read_flags | READ_FOR_UPDATE, lockd_state);
if (!vg || vg_read_error(vg))
stack;
return vg;
}
/*
@@ -5953,9 +6063,8 @@ uint32_t vg_read_error(struct volume_group *vg_handle)
*/
uint32_t vg_lock_newname(struct cmd_context *cmd, const char *vgname)
{
if (!lock_vol(cmd, vgname, LCK_VG_WRITE, NULL)) {
if (!lock_vol(cmd, vgname, LCK_VG_WRITE, NULL))
return FAILED_LOCKING;
}
/* Find the vgname in the cache */
/* If it's not there we must do full scan to be completely sure */

View File

@@ -157,7 +157,8 @@ struct lv_segment *find_mirror_seg(struct lv_segment *seg)
*
* For internal use only log only in verbose mode
*/
uint32_t adjusted_mirror_region_size(uint32_t extent_size, uint32_t extents,
uint32_t adjusted_mirror_region_size(struct cmd_context *cmd,
uint32_t extent_size, uint32_t extents,
uint32_t region_size, int internal, int clustered)
{
uint64_t region_max;
@@ -168,11 +169,11 @@ uint32_t adjusted_mirror_region_size(uint32_t extent_size, uint32_t extents,
if (region_max < UINT32_MAX && region_size > region_max) {
region_size = (uint32_t) region_max;
if (!internal)
log_print_unless_silent("Using reduced mirror region size of %"
PRIu32 " sectors.", region_size);
log_print_unless_silent("Using reduced mirror region size of %s",
display_size(cmd, region_size));
else
log_verbose("Using reduced mirror region size of %"
PRIu32 " sectors.", region_size);
log_verbose("Using reduced mirror region size of %s",
display_size(cmd, region_size));
}
#ifdef CMIRROR_REGION_COUNT_LIMIT
@@ -199,13 +200,13 @@ uint32_t adjusted_mirror_region_size(uint32_t extent_size, uint32_t extents,
if (region_size < region_min_pow2) {
if (internal)
log_print_unless_silent("Increasing mirror region size from %"
PRIu32 " to %" PRIu64 " sectors.",
region_size, region_min_pow2);
log_print_unless_silent("Increasing mirror region size from %s to %s",
display_size(cmd, region_size),
display_size(cmd, region_min_pow2));
else
log_verbose("Increasing mirror region size from %"
PRIu32 " to %" PRIu64 " sectors.",
region_size, region_min_pow2);
log_verbose("Increasing mirror region size from %s to %s",
display_size(cmd, region_size),
display_size(cmd, region_min_pow2));
region_size = region_min_pow2;
}
}
@@ -1665,7 +1666,8 @@ static int _add_mirrors_that_preserve_segments(struct logical_volume *lv,
if (!(segtype = get_segtype_from_string(cmd, SEG_TYPE_NAME_MIRROR)))
return_0;
adjusted_region_size = adjusted_mirror_region_size(lv->vg->extent_size,
adjusted_region_size = adjusted_mirror_region_size(cmd,
lv->vg->extent_size,
lv->le_count,
region_size, 1,
vg_is_clustered(lv->vg));
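The only user-visible effect of passing cmd into adjusted_mirror_region_size() is nicer messages: display_size() formats a sector count as a human-readable size, so where the old code printed, say, "Using reduced mirror region size of 4096 sectors." the new code prints roughly "Using reduced mirror region size of 2.00 MiB" (the numbers here are illustrative).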

View File

@@ -604,7 +604,7 @@ static int pv_resize(struct physical_volume *pv,
log_verbose("Resizing physical volume %s from %" PRIu32
" to %" PRIu32 " extents.",
pv_dev_name(pv), pv->pe_count, new_pe_count);
pv_dev_name(pv), old_pe_count, new_pe_count);
if (new_pe_count > old_pe_count)
return _extend_pv(pv, vg, old_pe_count, new_pe_count);
@@ -618,7 +618,8 @@ static int pv_resize(struct physical_volume *pv,
int pv_resize_single(struct cmd_context *cmd,
struct volume_group *vg,
struct physical_volume *pv,
const uint64_t new_size)
const uint64_t new_size,
int yes)
{
uint64_t size = 0;
int r = 0;
@@ -642,11 +643,30 @@ int pv_resize_single(struct cmd_context *cmd,
}
if (new_size) {
if (new_size > size)
log_warn("WARNING: %s: Overriding real size. "
"You could lose data.", pv_name);
log_verbose("%s: Pretending size is %" PRIu64 " not %" PRIu64
" sectors.", pv_name, new_size, pv_size(pv));
if (new_size > size) {
log_warn("WARNING: %s: Overriding real size %s. You could lose data.",
pv_name, display_size(cmd, (uint64_t) size));
if (!yes && yes_no_prompt("%s: Requested size %s exceeds real size %s. Proceed? [y/n]: ",
pv_name, display_size(cmd, (uint64_t) new_size),
display_size(cmd, (uint64_t) size)) == 'n') {
log_error("Physical Volume %s not resized.", pv_name);
goto_out;
}
} else if (new_size < size)
if (!yes && yes_no_prompt("%s: Requested size %s is less than real size %s. Proceed? [y/n]: ",
pv_name, display_size(cmd, (uint64_t) new_size),
display_size(cmd, (uint64_t) size)) == 'n') {
log_error("Physical Volume %s not resized.", pv_name);
goto_out;
}
if (new_size == size)
log_verbose("%s: Size is already %s (%" PRIu64 " sectors).",
pv_name, display_size(cmd, (uint64_t) new_size), new_size);
else
log_warn("WARNING: %s: Pretending size is %" PRIu64 " not %" PRIu64 " sectors.",
pv_name, new_size, size);
size = new_size;
}
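With the new yes parameter, shrinking or over-sizing a PV becomes interactive unless the caller suppresses prompts. Assuming a device whose real size is 50 GiB, requesting a larger size would now go roughly like this (device name and sizes are illustrative):

    # pvresize --setphysicalvolumesize 100g /dev/sdb1
      WARNING: /dev/sdb1: Overriding real size 50.00 GiB. You could lose data.
      /dev/sdb1: Requested size 100.00 GiB exceeds real size 50.00 GiB. Proceed? [y/n]:

Answering 'n' aborts with "Physical Volume /dev/sdb1 not resized."; callers that set the yes argument (for example the lvm2app binding further down passes 1) skip the prompt entirely.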

View File

@@ -138,6 +138,18 @@ static void _check_and_adjust_region_size(const struct logical_volume *lv)
seg->region_size = raid_ensure_min_region_size(lv, lv->size, seg->region_size);
}
/* Drop @suffix from *str by writing '\0' to the beginning of @suffix */
static int _drop_suffix(const char *str, const char *suffix)
{
char *p;
if (!(p = strstr(str, suffix)))
return_0;
*p = '\0';
return 1;
}
/* Strip any raid suffix off LV name */
char *top_level_lv_name(struct volume_group *vg, const char *lv_name)
{
@@ -415,8 +427,10 @@ static int _raid_remove_top_layer(struct logical_volume *lv,
return 0;
}
if (!(lvl_array = dm_pool_alloc(lv->vg->vgmem, 2 * sizeof(*lvl))))
if (!(lvl_array = dm_pool_alloc(lv->vg->vgmem, 2 * sizeof(*lvl)))) {
log_error("Memory allocation failed.");
return_0;
}
/* Add last metadata area to removal_lvs */
lvl_array[0].lv = seg_metalv(seg, 0);
@@ -518,7 +532,8 @@ static int _lv_update_reload_fns_reset_eliminate_lvs(struct logical_volume *lv,
fn_pre_data = va_arg(ap, void *);
}
/* Call any efn_pre_on_lv before the first update and reload call (e.g. to rename LVs) */
/* Call any fn_pre_on_lv before the first update and reload call (e.g. to rename LVs) */
/* returns 1: ok+ask caller to update, 2: metadata committed+ask caller to resume */
if (fn_pre_on_lv && !(r = fn_pre_on_lv(lv, fn_pre_data))) {
log_error(INTERNAL_ERROR "Pre callout function failed.");
goto err;
@@ -529,18 +544,18 @@ static int _lv_update_reload_fns_reset_eliminate_lvs(struct logical_volume *lv,
* Returning 2 from pre function -> lv is suspended and
* metadata got updated, don't need to do it again
*/
if (!(origin_only ? resume_lv_origin(lv->vg->cmd, lv_lock_holder(lv)) :
resume_lv(lv->vg->cmd, lv_lock_holder(lv)))) {
if (!(r = (origin_only ? resume_lv_origin(lv->vg->cmd, lv_lock_holder(lv)) :
resume_lv(lv->vg->cmd, lv_lock_holder(lv))))) {
log_error("Failed to resume %s.", display_lvname(lv));
goto err;
}
/* Update metadata and reload mappings including flags (e.g. LV_REBUILD, LV_RESHAPE_DELTA_DISKS_PLUS) */
} else if (!(origin_only ? lv_update_and_reload_origin(lv) : lv_update_and_reload(lv)))
} else if (!(r = (origin_only ? lv_update_and_reload_origin(lv) : lv_update_and_reload(lv))))
goto err;
/* Eliminate any residual LV and don't commit the metadata */
if (!_eliminate_extracted_lvs_optional_write_vg(lv->vg, removal_lvs, 0))
if (!(r = _eliminate_extracted_lvs_optional_write_vg(lv->vg, removal_lvs, 0)))
goto err;
/*
@@ -553,7 +568,7 @@ static int _lv_update_reload_fns_reset_eliminate_lvs(struct logical_volume *lv,
* and if successful, performs metadata backup.
*/
log_debug_metadata("Clearing any flags for %s passed to the kernel.", display_lvname(lv));
if (!_reset_flags_passed_to_kernel(lv, &flags_reset))
if (!(r = _reset_flags_passed_to_kernel(lv, &flags_reset)))
goto err;
/* Call any @fn_post_on_lv before the second update call (e.g. to rename LVs back) */
@@ -564,7 +579,7 @@ static int _lv_update_reload_fns_reset_eliminate_lvs(struct logical_volume *lv,
/* Update and reload to clear out reset flags in the metadata and in the kernel */
log_debug_metadata("Updating metadata mappings for %s.", display_lvname(lv));
if ((r != 2 || flags_reset) && !(origin_only ? lv_update_and_reload_origin(lv) : lv_update_and_reload(lv))) {
if ((r != 2 || flags_reset) && !(r = (origin_only ? lv_update_and_reload_origin(lv) : lv_update_and_reload(lv)))) {
log_error(INTERNAL_ERROR "Update of LV %s failed.", display_lvname(lv));
goto err;
}
@@ -773,8 +788,10 @@ static int _reorder_raid10_near_seg_areas(struct lv_segment *seg, enum raid0_rai
/* FIXME: once more data copies supported with raid10 */
stripes /= data_copies;
if (!(idx = dm_pool_zalloc(seg_lv(seg, 0)->vg->vgmem, seg->area_count * sizeof(*idx))))
return 0;
if (!(idx = dm_pool_zalloc(seg_lv(seg, 0)->vg->vgmem, seg->area_count * sizeof(*idx)))) {
log_error("Memory allocation failed.");
return_0;
}
/* Set up positional index array */
switch (conv) {
@@ -1043,8 +1060,10 @@ static int _alloc_image_components(struct logical_volume *lv,
const char *raid_segtype;
if (!(lvl_array = dm_pool_alloc(lv->vg->vgmem,
sizeof(*lvl_array) * count * 2)))
sizeof(*lvl_array) * count * 2))) {
log_error("Memory allocation failed.");
return_0;
}
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0, 1)))
return_0;
@@ -1237,6 +1256,10 @@ static int _cmp_level(const struct segment_type *t1, const struct segment_type *
(!segtype_is_any_raid10(t1) && segtype_is_any_raid10(t2)))
return 0;
if ((segtype_is_raid4(t1) && segtype_is_raid5_n(t2)) ||
(segtype_is_raid5_n(t1) && segtype_is_raid4(t2)))
return 1;
return !strncmp(t1->name, t2->name, 5);
}
@@ -1973,10 +1996,11 @@ static int _raid_reshape_remove_images(struct logical_volume *lv,
/*
* HM Helper:
*
* Reshape: keep images in RAID @lv but change stripe size or data copies
* Reshape: keep images in RAID @lv but change layout, stripe size or data copies
*
*/
static const char *_get_segtype_alias(const struct segment_type *segtype);
static const char *_get_segtype_alias_str(const struct logical_volume *lv, const struct segment_type *segtype);
static int _raid_reshape_keep_images(struct logical_volume *lv,
const struct segment_type *new_segtype,
int yes, int force, int *force_repair,
@@ -1988,10 +2012,13 @@ static int _raid_reshape_keep_images(struct logical_volume *lv,
struct lv_segment *seg = first_seg(lv);
if (seg->segtype != new_segtype)
log_print_unless_silent("Converting %s LV %s to %s.",
lvseg_name(seg), display_lvname(lv), new_segtype->name);
if (!yes && yes_no_prompt("Are you sure you want to convert %s LV %s to %s? [y/n]: ",
lvseg_name(seg), display_lvname(lv), new_segtype->name) == 'n') {
log_print_unless_silent("Converting %s%s LV %s to %s%s.",
lvseg_name(seg), _get_segtype_alias_str(lv, seg->segtype),
display_lvname(lv), new_segtype->name,
_get_segtype_alias_str(lv, new_segtype));
if (!yes && yes_no_prompt("Are you sure you want to convert %s LV %s? [y/n]: ",
lvseg_name(seg), display_lvname(lv)) == 'n') {
log_error("Logical volume %s NOT converted.", display_lvname(lv));
return 0;
}
@@ -2009,12 +2036,9 @@ static int _raid_reshape_keep_images(struct logical_volume *lv,
* The dm-raid target is able to use the space wherever it
* is found by appropriately selecting forward or backward reshape.
*/
if (seg->segtype != new_segtype) {
const char *alias = _get_segtype_alias(seg->segtype);
if (!strcmp(alias, new_segtype->name))
alloc_reshape_space = 0;
}
if (seg->segtype != new_segtype &&
!strcmp(_get_segtype_alias(seg->segtype), new_segtype->name))
alloc_reshape_space = 0;
if (seg->stripe_size != new_stripe_size)
alloc_reshape_space = 1;
@@ -2100,7 +2124,7 @@ static int _activate_sub_lvs_excl_local(struct logical_volume *lv, uint32_t star
for (s = start_idx; s < seg->area_count; s++)
if (!_activate_sub_lv_excl_local(seg_lv(seg, s)) ||
(seg->meta_areas && !_activate_sub_lv_excl_local(seg_metalv(seg, s))))
return 0;
return_0;
return 1;
}
@@ -2123,17 +2147,18 @@ static int _activate_sub_lvs_excl_local_list(struct logical_volume *lv, struct d
return r;
}
/* Helper: callback function to activate any new image component pairs @lv */
static int _pre_raid_add_legs(struct logical_volume *lv, void *data)
/* Helper: callback function to activate image component pairs of @lv to update size after reshape space allocation */
static int _pre_raid_reactivate_legs(struct logical_volume *lv, void *data)
{
if (!_vg_write_lv_suspend_vg_commit(lv, 1))
return 0;
/* Reload any changed image component pairs for out-of-place reshape apace */
/* Reload any changed image component pairs for out-of-place reshape space */
if (!_activate_sub_lvs_excl_local(lv, 0))
return 0;
return 2; /* 1: ok, 2: metadata commited */
/* 1: ok+ask caller to update, 2: metadata committed+ask caller to resume */
return 2;
}
/* Helper: callback function to activate any rmetas on @data list */
@@ -2145,16 +2170,24 @@ static int _pre_raid0_remove_rmeta(struct logical_volume *lv, void *data)
if (!_vg_write_lv_suspend_vg_commit(lv, 1))
return 0;
/* 1: ok, 2: metadata commited */
/* 1: ok+ask caller to update, 2: metadata committed+ask caller to resume */
return _activate_sub_lvs_excl_local_list(lv, lv_list) ? 2 : 0;
}
/* Helper: callback dummy needed for */
static int _post_raid_dummy(struct logical_volume *lv, void *data)
/* Helper: callback dummy needed for takeover+reshape */
static int _post_raid_reshape(struct logical_volume *lv, void *data)
{
/* 1: ask caller to update, 2: don't ask caller to update */
return 1;
}
/* Helper: callback dummy needed for takeover+reshape */
static int _post_raid_takeover(struct logical_volume *lv, void *data)
{
/* 1: ask caller to update, 2: don't ask caller to update */
return 2;
}
/*
* Reshape logical volume @lv by adding/removing stripes
* (absolute new stripes given in @new_stripes), changing
@@ -2289,6 +2322,12 @@ static int _raid_reshape(struct logical_volume *lv,
/* Handle disk addition reshaping */
if (old_image_count < new_image_count) {
/* FIXME: remove once MD kernel rhbz1443999 got fixed. */
if (sysconf(_SC_NPROCESSORS_ONLN) < 2) {
log_error("Can't add stripes to LV %s on single core.", display_lvname(lv));
return 0;
}
if (!_raid_reshape_add_images(lv, new_segtype, yes,
old_image_count, new_image_count,
new_stripes, new_stripe_size, allocate_pvs))
@@ -2296,6 +2335,12 @@ static int _raid_reshape(struct logical_volume *lv,
/* Handle disk removal reshaping */
} else if (old_image_count > new_image_count) {
/* FIXME: remove once MD kernel rhbz1443999 got fixed. */
if (sysconf(_SC_NPROCESSORS_ONLN) < 2) {
log_error("Can't remove stripes from LV %s on single core.", display_lvname(lv));
return 0;
}
if (!_raid_reshape_remove_images(lv, new_segtype, yes, force,
old_image_count, new_image_count,
new_stripes, new_stripe_size,
@@ -2317,8 +2362,8 @@ static int _raid_reshape(struct logical_volume *lv,
if (seg->area_count != 2 || old_image_count != seg->area_count) {
if (!_lv_update_reload_fns_reset_eliminate_lvs(lv, 0, &removal_lvs,
_post_raid_dummy, NULL,
_pre_raid_add_legs, NULL))
_post_raid_reshape, NULL,
_pre_raid_reactivate_legs, NULL))
return 0;
} if (!_vg_write_commit_backup(lv->vg))
return 0;
@@ -2985,7 +3030,7 @@ static int _lv_raid_change_image_count(struct logical_volume *lv, int yes, uint3
uint32_t old_count = lv_raid_image_count(lv);
if (old_count == new_count) {
log_warn("WARGNING: %s already has image count of %d.",
log_warn("WARNING: %s already has image count of %d.",
display_lvname(lv), new_count);
return 1;
}
@@ -3013,6 +3058,13 @@ int lv_raid_change_image_count(struct logical_volume *lv, int yes, uint32_t new_
const char *level = seg->area_count == 1 ? "raid1 with " : "";
const char *resil = new_count < seg->area_count ? "reducing" : "enhancing";
/* LV must be active to perform raid conversion operations */
if (!lv_is_active(lv)) {
log_error("%s must be active to perform this operation.",
display_lvname(lv));
return 0;
}
if (new_count != 1 && /* Already prompted for in _raid_remove_images() */
!yes && yes_no_prompt("Are you sure you want to convert %s LV %s to %s%u images %s resilience? [y/n]: ",
lvseg_name(first_seg(lv)), display_lvname(lv), level, new_count, resil) == 'n') {
@@ -3376,6 +3428,27 @@ static int _alloc_rmeta_devs_for_rimage_devs(struct logical_volume *lv,
return 1;
}
/* Add new @lv to @seg at area index @idx */
static int _add_component_lv(struct lv_segment *seg, struct logical_volume *lv, uint64_t lv_flags, uint32_t idx)
{
if (lv_flags & VISIBLE_LV)
lv_set_visible(lv);
else
lv_set_hidden(lv);
if (lv_flags & LV_REBUILD)
lv->status |= LV_REBUILD;
else
lv->status &= ~LV_REBUILD;
if (!set_lv_segment_area_lv(seg, idx, lv, 0 /* le */, lv->status)) {
log_error("Failed to add sublv %s.", display_lvname(lv));
return 0;
}
return 1;
}
/* Add new @lvs to @lv at @area_offset */
static int _add_image_component_list(struct lv_segment *seg, int delete_from_list,
uint64_t lv_flags, struct dm_list *lvs, uint32_t area_offset)
@@ -3386,22 +3459,8 @@ static int _add_image_component_list(struct lv_segment *seg, int delete_from_lis
dm_list_iterate_items_safe(lvl, tmp, lvs) {
if (delete_from_list)
dm_list_del(&lvl->list);
if (lv_flags & VISIBLE_LV)
lv_set_visible(lvl->lv);
else
lv_set_hidden(lvl->lv);
if (lv_flags & LV_REBUILD)
lvl->lv->status |= LV_REBUILD;
else
lvl->lv->status &= ~LV_REBUILD;
if (!set_lv_segment_area_lv(seg, s++, lvl->lv, 0 /* le */, lvl->lv->status)) {
log_error("Failed to add sublv %s.",
display_lvname(lvl->lv));
if (!_add_component_lv(seg, lvl->lv, lv_flags, s++))
return 0;
}
}
return 1;
@@ -4408,6 +4467,7 @@ static int _log_possible_conversion(uint64_t *processed_segtypes, void *data)
return 1;
}
/* Return any segment type alias name for @segtype or empty string */
static const char *_get_segtype_alias(const struct segment_type *segtype)
{
if (!strcmp(segtype->name, SEG_TYPE_NAME_RAID5))
@@ -4431,12 +4491,28 @@ static const char *_get_segtype_alias(const struct segment_type *segtype)
return "";
}
/* Return any segment type alias string (format " (same as raid*)") for @segtype or empty string */
static const char *_get_segtype_alias_str(const struct logical_volume *lv, const struct segment_type *segtype)
{
const char *alias = _get_segtype_alias(segtype);
if (*alias) {
const char *msg = " (same as ";
size_t sz = strlen(msg) + strlen(alias) + 2;
char *buf = dm_pool_alloc(lv->vg->cmd->mem, sz);
if (buf)
alias = (dm_snprintf(buf, sz, "%s%s)", msg, alias) < 0) ? "" : buf;
}
return alias;
}
static int _log_possible_conversion_types(const struct logical_volume *lv, const struct segment_type *new_segtype)
{
unsigned possible_conversions = 0;
const struct lv_segment *seg = first_seg(lv);
struct possible_type *pt = NULL;
const char *alias;
uint64_t processed_segtypes = UINT64_C(0);
/* Count any possible segment types @seg an be directly converted to */
@@ -4447,12 +4523,10 @@ static int _log_possible_conversion_types(const struct logical_volume *lv, const
if (!possible_conversions)
log_error("Direct conversion of %s LV %s is not possible.", lvseg_name(seg), display_lvname(lv));
else {
alias = _get_segtype_alias(seg->segtype);
log_error("Converting %s from %s%s%s%s is "
log_error("Converting %s from %s%s is "
"directly possible to the following layout%s:",
display_lvname(lv), lvseg_name(seg),
*alias ? " (same as " : "", alias, *alias ? ")" : "",
_get_segtype_alias_str(lv, seg->segtype),
possible_conversions > 1 ? "s" : "");
pt = NULL;
@@ -4497,10 +4571,16 @@ static int _takeover_noop(TAKEOVER_FN_ARGS)
static int _takeover_unsupported(TAKEOVER_FN_ARGS)
{
log_error("Converting the segment type for %s from %s to %s is not supported.",
display_lvname(lv), lvseg_name(first_seg(lv)),
(segtype_is_striped_target(new_segtype) &&
(new_stripes == 1)) ? SEG_TYPE_NAME_LINEAR : new_segtype->name);
struct lv_segment *seg = first_seg(lv);
if (seg->segtype == new_segtype)
log_error("Logical volume %s already is type %s.",
display_lvname(lv), lvseg_name(seg));
else
log_error("Converting the segment type for %s from %s to %s is not supported.",
display_lvname(lv), lvseg_name(seg),
(segtype_is_striped_target(new_segtype) &&
(new_stripes == 1)) ? SEG_TYPE_NAME_LINEAR : new_segtype->name);
if (!_log_possible_conversion_types(lv, new_segtype))
stack;
@@ -4720,9 +4800,6 @@ static int _rename_area_lvs(struct logical_volume *lv, const char *suffix)
return_0;
}
for (s = 0; s < SLV_COUNT; s++)
dm_pool_free(lv->vg->cmd->mem, sfx[s]);
return 1;
}
@@ -4798,11 +4875,8 @@ static int _shift_parity_dev(struct lv_segment *seg)
static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
{
struct lv_segment *seg = first_seg(lv);
struct dm_list removal_lvs;
uint32_t region_size = seg->region_size;
dm_list_init(&removal_lvs);
if (!(seg_is_raid4(seg) && segtype_is_raid5_n(new_segtype)) &&
!(seg_is_raid5_n(seg) && segtype_is_raid4(new_segtype))) {
log_error("LV %s has to be of type raid4 or raid5_n to allow for this conversion.",
@@ -4818,6 +4892,15 @@ static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
return 0;
}
if (!yes && yes_no_prompt("Are you sure you want to convert %s%s LV %s to %s%s type? [y/n]: ",
lvseg_name(seg), _get_segtype_alias_str(lv, seg->segtype),
display_lvname(lv), new_segtype->name,
_get_segtype_alias_str(lv, new_segtype)) == 'n') {
log_error("Logical volume %s NOT converted to \"%s\".",
display_lvname(lv), new_segtype->name);
return 0;
}
log_debug_metadata("Converting LV %s from %s to %s.", display_lvname(lv),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID4 : SEG_TYPE_NAME_RAID5_N),
(seg_is_raid4(seg) ? SEG_TYPE_NAME_RAID5_N : SEG_TYPE_NAME_RAID4));
@@ -4831,6 +4914,7 @@ static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
return 0;
}
/* Have to clear rmeta LVs or the kernel will reject due to reordering disks */
if (!_clear_meta_lvs(lv))
return_0;
@@ -4843,7 +4927,7 @@ static int _raid45_to_raid54_wrapper(TAKEOVER_FN_ARGS)
seg->region_size = new_region_size ?: region_size;
seg->segtype = new_segtype;
if (!_lv_update_reload_fns_reset_eliminate_lvs(lv, 0, &removal_lvs, NULL))
if (!lv_update_and_reload(lv))
return_0;
init_mirror_in_sync(0);
@@ -4991,7 +5075,7 @@ static int _takeover_downconvert_wrapper(TAKEOVER_FN_ARGS)
}
if (segtype_is_raid4(new_segtype))
return _raid45_to_raid54_wrapper(lv, new_segtype, yes, force, first_seg(lv)->area_count,
return _raid45_to_raid54_wrapper(lv, new_segtype, 1 /* yes */, force, first_seg(lv)->area_count,
1 /* data_copies */, 0, 0, 0, allocate_pvs);
return 1;
@@ -5018,12 +5102,30 @@ static int _striped_to_raid0_wrapper(struct logical_volume *lv,
return 1;
}
/* Set sizes of @lv on takeover upconvert */
static void _set_takeover_upconvert_sizes(struct logical_volume *lv,
const struct segment_type *new_segtype,
uint32_t region_size, uint32_t stripe_size,
uint32_t extents_copied, uint32_t seg_len) {
struct lv_segment *seg = first_seg(lv);
seg->segtype = new_segtype;
seg->region_size = region_size;
seg->stripe_size = stripe_size;
seg->extents_copied = extents_copied;
/* FIXME Hard-coded to raid4/5/6/10 */
lv->le_count = seg->len = seg->area_len = seg_len;
_check_and_adjust_region_size(lv);
}
/* Helper: striped/raid0/raid0_meta/raid1 -> raid4/5/6/10, raid45 -> raid6 wrapper */
static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
{
uint32_t extents_copied, region_size, seg_len, stripe_size;
struct lv_segment *seg = first_seg(lv);
const struct segment_type *initial_segtype = seg->segtype;
const struct segment_type *raid5_n_segtype, *initial_segtype = seg->segtype;
struct dm_list removal_lvs;
dm_list_init(&removal_lvs);
@@ -5051,12 +5153,6 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
}
}
if (seg_is_any_raid5(seg) && segtype_is_any_raid6(new_segtype) && seg->area_count < 4) {
log_error("Minimum of 3 stripes needed for conversion from %s to %s.",
lvseg_name(seg), new_segtype->name);
return 0;
}
if (seg_is_raid1(seg)) {
if (seg->area_count != 2) {
log_error("Can't convert %s LV %s to %s with != 2 legs.",
@@ -5075,11 +5171,9 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
}
if (!new_stripe_size)
new_stripe_size = 128;
new_stripe_size = 2 * DEFAULT_STRIPESIZE;
}
region_size = seg->region_size;
if (!_check_region_size_constraints(lv, new_segtype, new_region_size, new_stripe_size))
return 0;
@@ -5098,23 +5192,25 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
return_0;
}
/* Add metadata LVs */
/* Add metadata LVs in case of raid0 */
if (seg_is_raid0(seg)) {
log_debug_metadata("Adding metadata LVs to %s.", display_lvname(lv));
if (!_raid0_add_or_remove_metadata_lvs(lv, 0 /* update_and_reload */, allocate_pvs, NULL))
return 0;
/* raid0_meta -> raid4 needs clearing of MetaLVs in order to avoid raid disk role change issues in the kernel */
}
if (seg_is_raid0_meta(seg) &&
/* Have to be cleared in conversion from raid0_meta -> raid4 or kernel will reject due to reordering disks */
if (segtype_is_raid0_meta(initial_segtype) &&
segtype_is_raid4(new_segtype) &&
!_clear_meta_lvs(lv))
return_0;
region_size = new_region_size ?: seg->region_size;
stripe_size = new_stripe_size ?: seg->stripe_size;
extents_copied = seg->extents_copied;
seg_len = seg->len;
stripe_size = seg->stripe_size;
/* In case of raid4/5, adjust to allow for allocation of additional image pairs */
if (seg_is_raid4(seg) || seg_is_any_raid5(seg)) {
if (!(seg->segtype = get_segtype_from_flag(lv->vg->cmd, SEG_RAID0_META)))
return_0;
@@ -5138,7 +5234,8 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
*
* - initial type is raid0 -> just remove remove metadata devices
*
* - initial type is striped -> convert back to it (removes metadata devices)
* - initial type is striped -> convert back to it
* (removes metadata and image devices)
*/
if (segtype_is_raid0(initial_segtype) &&
!_raid0_add_or_remove_metadata_lvs(lv, 0, NULL, &removal_lvs))
@@ -5155,6 +5252,51 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
seg = first_seg(lv);
}
/* Process raid4 (up)converts */
if (segtype_is_raid4(initial_segtype)) {
if (!(raid5_n_segtype = get_segtype_from_flag(lv->vg->cmd, SEG_RAID5_N)))
return_0;
/* raid6 upconvert: convert to raid5_n preserving already allocated new image component pair */
if (segtype_is_any_raid6(new_segtype)) {
struct logical_volume *meta_lv, *data_lv;
if (new_image_count != seg->area_count)
return_0;
log_debug_metadata ("Extracting last image component pair of %s temporarily.",
display_lvname(lv));
if (!_extract_image_components(seg, seg->area_count - 1, &meta_lv, &data_lv))
return_0;
_set_takeover_upconvert_sizes(lv, initial_segtype,
region_size, stripe_size,
extents_copied, seg_len);
seg->area_count--;
if (!_raid45_to_raid54_wrapper(lv, raid5_n_segtype, 1 /* yes */, force, seg->area_count,
1 /* data_copies */, 0, 0, 0, allocate_pvs))
return 0;
if (!_drop_suffix(meta_lv->name, "_extracted") ||
!_drop_suffix(data_lv->name, "_extracted"))
return 0;
data_lv->status |= RAID_IMAGE;
meta_lv->status |= RAID_META;
seg->area_count++;
log_debug_metadata ("Adding extracted last image component pair back to %s to convert to %s.",
display_lvname(lv), new_segtype->name);
if (!_add_component_lv(seg, meta_lv, LV_REBUILD, seg->area_count - 1) ||
!_add_component_lv(seg, data_lv, LV_REBUILD, seg->area_count - 1))
return_0;
} else if (segtype_is_raid5_n(new_segtype) &&
!_raid45_to_raid54_wrapper(lv, raid5_n_segtype, yes, force, seg->area_count,
1 /* data_copies */, 0, 0, 0, allocate_pvs))
return 0;
}
seg->data_copies = new_data_copies;
@@ -5180,21 +5322,15 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
}
seg->segtype = new_segtype;
seg->region_size = new_region_size ?: region_size;
seg->stripe_size = new_stripe_size ?: stripe_size;
seg->extents_copied = extents_copied;
/* FIXME Hard-coded to raid4/5/6/10 */
lv->le_count = seg->len = seg->area_len = seg_len;
_check_and_adjust_region_size(lv);
_set_takeover_upconvert_sizes(lv, new_segtype,
region_size, stripe_size,
extents_copied, seg_len);
log_debug_metadata("Updating VG metadata and reloading %s LV %s.",
lvseg_name(seg), display_lvname(lv));
if (!_lv_update_reload_fns_reset_eliminate_lvs(lv, 0, &removal_lvs,
_post_raid_dummy, NULL,
_pre_raid_add_legs, NULL))
_post_raid_takeover, NULL,
_pre_raid_reactivate_legs, NULL))
return 0;
if (segtype_is_raid4(new_segtype) &&
@@ -5454,14 +5590,6 @@ static int _takeover_from_raid45_to_raid54(TAKEOVER_FN_ARGS)
static int _takeover_from_raid45_to_raid6(TAKEOVER_FN_ARGS)
{
if (seg_is_raid4(first_seg(lv))) {
struct segment_type *segtype = get_segtype_from_flag(lv->vg->cmd, SEG_RAID5_N);
if (!segtype ||
!_raid45_to_raid54_wrapper(lv, segtype, yes, force, first_seg(lv)->area_count,
1 /* data_copies */, 0, 0, 0, allocate_pvs))
return 0;
}
return _takeover_upconvert_wrapper(lv, new_segtype, yes, force,
first_seg(lv)->area_count + 1 /* new_image_count */,
3 /* data_copies */, 0, new_stripe_size,
@@ -5680,95 +5808,173 @@ static uint64_t _r5_to_r6[][2] = {
/* Return segment type flag for raid5 -> raid6 conversions */
static uint64_t _get_r56_flag(const struct lv_segment *seg, unsigned idx)
static uint64_t _get_r56_flag(const struct segment_type *segtype, unsigned idx)
{
unsigned elems = ARRAY_SIZE(_r5_to_r6);
while (elems--)
if (seg->segtype->flags & _r5_to_r6[elems][idx])
if (segtype->flags & _r5_to_r6[elems][idx])
return _r5_to_r6[elems][!idx];
return 0;
}
/* Return segment type flag for raid5 -> raid6 conversions */
/* Return segment type flag of @seg for raid5 -> raid6 conversions */
static uint64_t _raid_seg_flag_5_to_6(const struct lv_segment *seg)
{
return _get_r56_flag(seg, 0);
return _get_r56_flag(seg->segtype, 0);
}
/* Return segment type flag for raid6 -> raid5 conversions */
/* Return segment type flag of @seg for raid6 -> raid5 conversions */
static uint64_t _raid_seg_flag_6_to_5(const struct lv_segment *seg)
{
return _get_r56_flag(seg, 1);
return _get_r56_flag(seg->segtype, 1);
}
/* Change segtype for raid4 <-> raid5 <-> raid6 where necessary. */
static int _set_convenient_raid1456_segtype_to(const struct lv_segment *seg_from,
const struct segment_type **segtype,
int yes)
/* Return segment type flag of @segtype for raid5 -> raid6 conversions */
static uint64_t _raid_segtype_flag_5_to_6(const struct segment_type *segtype)
{
size_t len = min(strlen((*segtype)->name), strlen(lvseg_name(seg_from)));
uint64_t seg_flag;
return _get_r56_flag(segtype, 0);
}
/* Change segtype for raid* for convenience where necessary. */
/* FIXME: do this like _conversion_options_allowed()? */
static int _set_convenient_raid145610_segtype_to(const struct lv_segment *seg_from,
const struct segment_type **segtype,
int yes)
{
uint64_t seg_flag = 0;
struct cmd_context *cmd = seg_from->lv->vg->cmd;
const struct segment_type *segtype_sav = *segtype;
/* Bail out if same RAID level is requested. */
if (!strncmp((*segtype)->name, lvseg_name(seg_from), len))
if (is_same_level(seg_from->segtype, *segtype))
return 1;
/* Striped/raid0 -> raid5/6 */
log_debug("Checking LV %s requested %s segment type for convenience",
display_lvname(seg_from->lv), (*segtype)->name);
/* striped/raid0 -> raid5/6 */
if (seg_is_striped(seg_from) || seg_is_any_raid0(seg_from)) {
/* If this is any raid5 conversion request -> enforce raid5_n, because we convert from striped */
if (segtype_is_any_raid5(*segtype) && !segtype_is_raid5_n(*segtype)) {
if (segtype_is_any_raid5(*segtype) && !segtype_is_raid5_n(*segtype))
seg_flag = SEG_RAID5_N;
goto replaced;
/* If this is any raid6 conversion request -> enforce raid6_n_6, because we convert from striped */
} else if (segtype_is_any_raid6(*segtype) && !segtype_is_raid6_n_6(*segtype)) {
else if (segtype_is_any_raid6(*segtype) && !segtype_is_raid6_n_6(*segtype))
seg_flag = SEG_RAID6_N_6;
goto replaced;
/* raid1 -> */
} else if (seg_is_raid1(seg_from) && !segtype_is_mirror(*segtype)) {
if (seg_from->area_count != 2) {
log_warn("Convert %s LV %s to 2 images first.",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
} else if (segtype_is_striped(*segtype) ||
segtype_is_any_raid0(*segtype) ||
segtype_is_raid10(*segtype))
seg_flag = SEG_RAID5_N;
else if (!segtype_is_raid4(*segtype) && !segtype_is_any_raid5(*segtype))
seg_flag = SEG_RAID5_LS;
/* raid4/raid5 -> striped/raid0/raid1/raid6/raid10 */
} else if (seg_is_raid4(seg_from) || seg_is_any_raid5(seg_from)) {
if (segtype_is_raid1(*segtype) &&
seg_from->area_count != 2) {
log_warn("Convert %s LV %s to 2 stripes first (i.e. --stripes 1).",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
} else if (seg_is_raid4(seg_from) &&
segtype_is_any_raid5(*segtype) &&
!segtype_is_raid5_n(*segtype))
seg_flag = SEG_RAID5_N;
else if (seg_is_any_raid5(seg_from) &&
segtype_is_raid4(*segtype) &&
!segtype_is_raid5_n(*segtype))
seg_flag = SEG_RAID5_N;
else if (segtype_is_raid10(*segtype)) {
if (seg_from->area_count < 3) {
log_warn("Convert %s LV %s to minimum 3 stripes first (i.e. --stripes 2).",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
}
seg_flag = SEG_RAID0_META;
} else if (segtype_is_any_raid6(*segtype)) {
if (seg_from->area_count < 4) {
log_warn("Convert %s LV %s to minimum 4 stripes first (i.e. --stripes 3).",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
} else if (seg_is_raid4(seg_from) && !segtype_is_raid6_n_6(*segtype))
seg_flag = SEG_RAID6_N_6;
else
seg_flag = _raid_seg_flag_5_to_6(seg_from);
}
/* raid4 -> raid5_n */
} else if (seg_is_raid4(seg_from) && segtype_is_any_raid5(*segtype)) {
seg_flag = SEG_RAID5_N;
goto replaced;
/* raid6 -> striped/raid0/raid5/raid10 */
} else if (seg_is_any_raid6(seg_from)) {
if (segtype_is_raid1(*segtype)) {
/* No result for raid6_{zr,nr,nc} */
if (!(seg_flag = _raid_seg_flag_6_to_5(seg_from)) ||
!(seg_flag & (*segtype)->flags))
seg_flag = SEG_RAID6_LS_6;
/* raid4/raid5_n -> striped/raid0/raid6 */
} else if ((seg_is_raid4(seg_from) || seg_is_raid5_n(seg_from)) &&
!segtype_is_striped(*segtype) &&
!segtype_is_any_raid0(*segtype) &&
!segtype_is_raid1(*segtype) &&
!segtype_is_raid4(*segtype) &&
!segtype_is_raid5_n(*segtype) &&
!segtype_is_raid6_n_6(*segtype)) {
seg_flag = SEG_RAID6_N_6;
goto replaced;
} else if (segtype_is_any_raid10(*segtype)) {
seg_flag = seg_is_raid6_n_6(seg_from) ? SEG_RAID0_META : SEG_RAID6_N_6;
/* Got to do check for raid5 -> raid6 ... */
} else if (seg_is_any_raid5(seg_from) && segtype_is_any_raid6(*segtype)) {
if (!(seg_flag = _raid_seg_flag_5_to_6(seg_from)))
return_0;
goto replaced;
} else if ((segtype_is_striped(*segtype) || segtype_is_any_raid0(*segtype)) &&
!seg_is_raid6_n_6(seg_from)) {
seg_flag = SEG_RAID6_N_6;
/* ... and raid6 -> raid5 */
} else if (seg_is_any_raid6(seg_from) && segtype_is_any_raid5(*segtype)) {
/* No result for raid6_{zr,nr,nc} */
if (!(seg_flag = _raid_seg_flag_6_to_5(seg_from)))
} else if (segtype_is_raid4(*segtype) && !seg_is_raid6_n_6(seg_from)) {
seg_flag = SEG_RAID6_N_6;
} else if (segtype_is_any_raid5(*segtype))
/* No result for raid6_{zr,nr,nc} */
if (!(seg_flag = _raid_seg_flag_6_to_5(seg_from)) ||
!(seg_flag & (*segtype)->flags))
seg_flag = _raid_segtype_flag_5_to_6(*segtype);
/* -> raid1 */
} else if (!seg_is_mirror(seg_from) && segtype_is_raid1(*segtype)) {
if (!seg_is_raid4(seg_from) && !seg_is_any_raid5(seg_from)) {
log_warn("Convert %s LV %s to raid4/raid5 first.",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
goto replaced;
} else if (seg_from->area_count != 2) {
log_warn("Convert %s LV %s to 2 stripes first (i.e. --stripes 1).",
lvseg_name(seg_from), display_lvname(seg_from->lv));
return 0;
}
/* raid10 -> ... */
} else if (seg_is_raid10(seg_from) &&
!segtype_is_striped(*segtype) &&
!segtype_is_any_raid0(*segtype))
seg_flag = SEG_RAID0_META;
if (seg_flag) {
if (!(*segtype = get_segtype_from_flag(cmd, seg_flag)))
return_0;
if (segtype_sav != *segtype) {
log_warn("Replaced LV type %s%s with possible type %s.",
segtype_sav->name, _get_segtype_alias_str(seg_from->lv, segtype_sav),
(*segtype)->name);
log_warn("Repeat this command to convert to %s after an interim conversion has finished.",
segtype_sav->name);
}
}
return 1;
replaced:
if (!(*segtype = get_segtype_from_flag(cmd, seg_flag)))
return_0;
if (segtype_sav != *segtype)
log_warn("Replaced LV type %s with possible type %s.",
segtype_sav->name, (*segtype)->name);
return 1;
}
/*
@@ -5853,7 +6059,7 @@ static int _conversion_options_allowed(const struct lv_segment *seg_from,
int r = 1;
uint32_t opts;
if (!new_image_count && !_set_convenient_raid1456_segtype_to(seg_from, segtype_to, yes))
if (!new_image_count && !_set_convenient_raid145610_segtype_to(seg_from, segtype_to, yes))
return_0;
if (!_get_allowed_conversion_options(seg_from, *segtype_to, new_image_count, &opts)) {
@@ -5881,12 +6087,28 @@ static int _conversion_options_allowed(const struct lv_segment *seg_from,
}
if (r &&
!yes &&
strcmp((*segtype_to)->name, SEG_TYPE_NAME_MIRROR) && /* "mirror" is prompted for later */
!yes && yes_no_prompt("Are you sure you want to convert %s LV %s to %s type? [y/n]: ",
lvseg_name(seg_from), display_lvname(seg_from->lv),
!is_same_level(seg_from->segtype, *segtype_to)) { /* Prompt here for takeover */
const char *basic_fmt = "Are you sure you want to convert %s LV %s";
const char *type_fmt = " to %s type";
const char *question_fmt = "? [y/n]: ";
char *fmt;
size_t sz = strlen(basic_fmt) + ((seg_from->segtype == *segtype_to) ? 0 : strlen(type_fmt)) + strlen(question_fmt) + 1;
if (!(fmt = dm_pool_alloc(seg_from->lv->vg->cmd->mem, sz)))
return_0;
if (dm_snprintf(fmt, sz, "%s%s%s", basic_fmt, (seg_from->segtype == *segtype_to) ? "" : type_fmt, question_fmt) < 0) {
log_error(INTERNAL_ERROR "dm_snprintf failed.");
return_0;
}
if (yes_no_prompt(fmt, lvseg_name(seg_from), display_lvname(seg_from->lv),
(*segtype_to)->name) == 'n') {
log_error("Logical volume %s NOT converted.", display_lvname(seg_from->lv));
r = 0;
log_error("Logical volume %s NOT converted.", display_lvname(seg_from->lv));
r = 0;
}
}
return r;
@@ -5943,6 +6165,15 @@ int lv_raid_convert(struct logical_volume *lv,
uint32_t available_slvs, removed_slvs;
takeover_fn_t takeover_fn;
/* FIXME If not active, prompt and activate */
/* FIXME Some operations do not require the LV to be active */
/* LV must be active to perform raid conversion operations */
if (!lv_is_active(lv)) {
log_error("%s must be active to perform this operation.",
display_lvname(lv));
return 0;
}
new_segtype = new_segtype ? : seg->segtype;
if (!new_segtype) {
log_error(INTERNAL_ERROR "New segtype not specified.");
@@ -5968,6 +6199,15 @@ int lv_raid_convert(struct logical_volume *lv,
region_size = new_region_size ? : seg->region_size;
region_size = region_size ? : get_default_region_size(lv->vg->cmd);
/*
* Check acceptable options mirrors, region_size,
* stripes and/or stripe_size have been provided.
*/
if (!_conversion_options_allowed(seg, &new_segtype, yes,
0 /* Takeover */, 0 /*new_data_copies*/, new_region_size,
new_stripes, new_stripe_size_supplied))
return _log_possible_conversion_types(lv, new_segtype);
/*
* reshape of capable raid type requested
*/
@@ -6003,15 +6243,6 @@ int lv_raid_convert(struct logical_volume *lv,
return 0;
}
/*
* Check acceptable options mirrors, region_size,
* stripes and/or stripe_size have been provided.
*/
if (!_conversion_options_allowed(seg, &new_segtype, yes,
0 /* Takeover */, 0 /*new_data_copies*/, new_region_size,
new_stripes, new_stripe_size_supplied))
return _log_possible_conversion_types(lv, new_segtype);
takeover_fn = _get_takeover_fn(first_seg(lv), new_segtype, new_image_count);
/* Exit without doing activation checks if the combination isn't possible */
@@ -6046,15 +6277,6 @@ int lv_raid_convert(struct logical_volume *lv,
(segtype_is_striped_target(new_segtype) &&
(new_stripes == 1)) ? SEG_TYPE_NAME_LINEAR : new_segtype->name);
/* FIXME If not active, prompt and activate */
/* FIXME Some operations do not require the LV to be active */
/* LV must be active to perform raid conversion operations */
if (!lv_is_active(lv)) {
log_error("%s must be active to perform this operation.",
display_lvname(lv));
return 0;
}
/* In clustered VGs, the LV must be active on this node exclusively. */
if (vg_is_clustered(lv->vg) && !lv_is_active_exclusive_locally(lv)) {
log_error("%s must be active exclusive locally to "

View File

@@ -370,7 +370,8 @@ static int _mirrored_add_target_line(struct dev_manager *dm, struct dm_pool *mem
region_size = seg->region_size;
} else
region_size = adjusted_mirror_region_size(seg->lv->vg->extent_size,
region_size = adjusted_mirror_region_size(cmd,
seg->lv->vg->extent_size,
seg->area_len,
mirr_state->default_region_size, 1,
vg_is_clustered(seg->lv->vg));

View File

@@ -4466,6 +4466,7 @@ static struct _extent *_stats_get_extents_for_file(struct dm_pool *mem, int fd,
return extents;
bad:
*count = 0;
dm_pool_abandon_object(mem);
dm_free(buf);
return NULL;
@@ -4536,7 +4537,7 @@ static int _stats_unmap_regions(struct dm_stats *dms, uint64_t group_id,
region = &dms->regions[i];
nr_old++;
if (_find_extent(*count, extents,
if (extents && _find_extent(*count, extents,
region->start, region->len)) {
ext.start = region->start;
ext.len = region->len;
@@ -4653,11 +4654,12 @@ static uint64_t *_stats_map_file_regions(struct dm_stats *dms, int fd,
* causing complications in the error path.
*/
if (!(extent_mem = dm_pool_create("extents", sizeof(*extents))))
return_0;
return_NULL;
if (!(extents = _stats_get_extents_for_file(extent_mem, fd, count))) {
dm_pool_destroy(extent_mem);
return_0;
log_very_verbose("No extents found in fd %d", fd);
if (!update)
goto out;
}
if (update) {
@@ -4734,7 +4736,10 @@ static uint64_t *_stats_map_file_regions(struct dm_stats *dms, int fd,
if (bounds)
dm_free(hist_arg);
dm_pool_free(extent_mem, extents);
/* the extent table will be empty if the file has been truncated. */
if (extents)
dm_pool_free(extent_mem, extents);
dm_pool_destroy(extent_mem);
return regions;
@@ -4755,12 +4760,6 @@ out_remove:
*count = 0;
out:
/*
* The table of file extents in 'extents' is always built, so free
* it explicitly: this will also free any 'old_extents' table that
* was later allocated from the 'extent_mem' pool by this function.
*/
dm_pool_free(extent_mem, extents);
dm_pool_destroy(extent_mem);
dm_free(hist_arg);
dm_free(regions);
@@ -4872,7 +4871,8 @@ uint64_t *dm_stats_update_regions_from_fd(struct dm_stats *dms, int fd,
if (!dm_stats_list(dms, NULL))
goto bad;
if (regroup)
/* regroup if there are regions to group */
if (regroup && (*regions != DM_STATS_REGION_NOT_PRESENT))
if (!_stats_group_file_regions(dms, regions, count, alias))
goto bad;

View File

@@ -312,7 +312,7 @@ static int _lvm_pv_resize(const pv_t pv, uint64_t new_size)
if (!vg_check_write_mode(pv->vg))
return -1;
if (!pv_resize_single(pv->vg->cmd, pv->vg, pv, size)) {
if (!pv_resize_single(pv->vg->cmd, pv->vg, pv, size, 1)) {
log_error("PV re-size failed!");
return -1;
}

View File

@@ -83,6 +83,7 @@ sysconfdir = @sysconfdir@
rootdir = $(DESTDIR)/
bindir = $(DESTDIR)@bindir@
confdir = $(DESTDIR)@CONFDIR@/lvm
profiledir = $(confdir)/@DEFAULT_PROFILE_SUBDIR@
includedir = $(DESTDIR)@includedir@
libdir = $(DESTDIR)@libdir@
libexecdir = $(DESTDIR)@libexecdir@

View File

@@ -89,7 +89,7 @@ else
MAN8DM+=$(DMEVENTDMAN)
endif
ifeq ("@DMFILEMAPD@", "yes")
ifeq ("@BUILD_DMFILEMAPD@", "yes")
MAN8DM+=$(DMFILEMAPDMAN)
endif

View File

@@ -31,6 +31,9 @@ clvmd \(em cluster LVM daemon
clvmd is the daemon that distributes LVM metadata updates around a cluster.
It must be running on all nodes in the cluster and will give an error
if a node in the cluster does not have this daemon running.
Also see \fBlvmlockd\fP(8) for a newer method of using LVM on shared
storage.
.
.SH OPTIONS
.
@@ -196,4 +199,6 @@ Defaults to \fI#LVM_PATH#\fP.
.SH SEE ALSO
.BR syslog (3),
.BR lvm.conf (5),
.BR lvm (8)
.BR lvm (8),
.BR lvmlockd (8),
.BR lvmsystemid (7)

View File

@@ -913,7 +913,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -1178,7 +1178,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -1301,7 +1301,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -349,7 +349,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -484,7 +484,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -3,4 +3,3 @@ and LV segments. The information is all gathered together for each VG
(under a per-VG lock) so it is consistent. Information gathered from
separate calls to \fBvgs\fP, \fBpvs\fP, and \fBlvs\fP can be inconsistent
if information changes between commands.

View File

@@ -331,7 +331,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -174,7 +174,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -11,8 +11,20 @@ lvm \(em LVM2 tools
.
.SH DESCRIPTION
.
lvm provides the command-line tools for LVM2. A separate
manual page describes each command in detail.
The Logical Volume Manager (LVM) provides tools to create virtual block
devices from physical devices. Virtual devices may be easier to manage
than physical devices, and can have capabilities beyond what the physical
devices provide themselves. A Volume Group (VG) is a collection of one or
more physical devices, each called a Physical Volume (PV). A Logical
Volume (LV) is a virtual block device that can be used by the system or
applications. Each block of data in an LV is stored on one or more PV in
the VG, according to algorithms implemented by Device Mapper (DM) in the
kernel.
.P
The lvm command, and other commands listed below, are the command-line
tools for LVM. A separate manual page describes each command in detail.
.P
If \fBlvm\fP is invoked with no arguments it presents a readline prompt
(assuming it was compiled with readline support).
@@ -525,6 +537,8 @@ directly.
.BR lvs (8)
.BR lvscan (8)
.BR lvm-fullreport (8)
.BR lvm-lvpoll (8)
.BR lvm2-activation-generator (8)
.BR blkdeactivate (8)
.BR lvmdump (8)

View File

@@ -10,6 +10,10 @@ being loaded - settings read in later override earlier
settings. File timestamps are checked between commands and if
any have changed, all the files are reloaded.
For a description of each lvm.conf setting, run:
.B lvmconfig --typeconfig default --withcomments --withspaces
The settings defined in lvm.conf can be overridden by any
of these extended configuration methods:
.TP

View File

@@ -4,9 +4,9 @@ lvmcache \(em LVM caching
.SH DESCRIPTION
The \fBcache\fP logical volume type uses a small and fast LV to improve
the performance of a large and slow LV. It does this by storing the
frequently used blocks on the faster LV.
An \fBlvm\fP(8) \fBcache\fP Logical Volume (LV) uses a small and
fast LV to improve the performance of a large and slow LV. It does this
by storing the frequently used blocks on the faster LV.
LVM refers to the small fast LV as a \fBcache pool LV\fP. The large
slow LV is called the \fBorigin LV\fP. Due to requirements from dm-cache
(the kernel driver), LVM further splits the cache pool LV into two
@@ -16,7 +16,8 @@ origin LV to increase speed. The cache metadata LV holds the
accounting information that specifies where data blocks are stored (e.g.
on the origin LV or on the cache data LV). Users should be familiar with
these LVs if they wish to create the best and most robust cached
logical volumes. All of these associated LVs must be in the same VG.
LVs. All of these associated LVs must be in the same Volume
Group (VG).
.SH Cache Terms
.nf
@@ -29,7 +30,7 @@ cache LV CacheLV OriginLV + CachePoolLV
.SH Cache Usage
The primary method for using a cache type logical volume:
The primary method for using a cache type LV:
.SS 0. create OriginLV
@@ -403,6 +404,24 @@ This is equivalent to:
.B lvconvert --type cache --cachepool VG/CachePoolLV VG/OriginLV
.SS Cache metadata formats
\&
There are two disk formats for cache metadata. The metadata format can be
specified when a cache pool is created, and cannot be changed.
Format \fB2\fP has better performance; it is more compact, and stores
dirty bits in a separate btree, which improves the speed of shutting down
the cache.
With \fBauto\fP, lvm selects the best option provided by the current
dm-cache kernel target.
.B lvconvert --type cache-pool --cachemetadataformat auto|1|2
.RS
.B VG/CacheDataLV
.RE
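As a hedged illustration of the syntax above (the VG and LV names are hypothetical), a cache pool could be created with format 2 explicitly, or lvm can be left to pick the best supported format:
.nf
# hypothetical names: volume group "vg", cache data LV "CacheDataLV"
lvconvert --type cache-pool --cachemetadataformat 2 vg/CacheDataLV
# or let lvm choose the format supported by the current dm-cache target
lvconvert --type cache-pool --cachemetadataformat auto vg/CacheDataLV
.fi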
.SH SEE ALSO
.BR lvm.conf (5),
.BR lvchange (8),

View File

@@ -269,7 +269,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -143,7 +143,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -53,10 +53,13 @@ metadata.
In some cases, lvmetad will be temporarily disabled while it continues
running. In this state, LVM commands will ignore the lvmetad cache and
revert to scanning disks. A warning will also be printed which includes
the reason why lvmetad is not being used. The most common reason is the
existence of duplicate PVs (lvmetad cannot cache data for duplicate PVs.)
Once duplicates have been resolved, the lvmetad cache is can be updated
with pvscan --cache and commands will return to using the cache.
the reason why lvmetad is not being used. The most common reasons are the
existence of duplicate PVs (lvmetad cannot cache data for duplicate PVs),
or an 'lvconvert --repair' command has been run (the lvmetad cache may
not be reliable while repairs are needed.)
Once duplicates have been resolved, or repairs have been completed,
the lvmetad cache can be updated with pvscan --cache and commands
will return to using the cache.
Use of lvmetad is enabled/disabled by:
.br

View File

@@ -117,17 +117,22 @@ Assign each host a unique host_id in the range 1-2000 by setting
.SS 3. start lvmlockd
Use a service/init file if available, or just run "lvmlockd".
Use a unit/init file, or run the lvmlockd daemon directly:
.br
systemctl start lvm2-lvmlockd
.SS 4. start lock manager
.I sanlock
.br
Use unit/init files, or start wdmd and sanlock daemons directly:
.br
systemctl start wdmd sanlock
.I dlm
.br
Follow external clustering documentation when applicable, otherwise:
Follow external clustering documentation when applicable, or use
unit/init files:
.br
systemctl start corosync dlm
@@ -146,8 +151,8 @@ vgchange --lock-start
lvmlockd requires shared VGs to be started before they are used. This is
a lock manager operation to start (join) the VG lockspace, and it may take
some time. Until the start completes, locks for the VG are not available.
LVM commands are allowed to read the VG while start is in progress. (An
init/unit file can also be used to start VGs.)
LVM commands are allowed to read the VG while start is in progress. (A
unit/init file can also be used to start VGs.)
.SS 7. create and activate LVs
@@ -247,9 +252,9 @@ clvmd for clustering. See below for converting a clvm VG to a lockd VG.
.SS lockd VGs from hosts not using lvmlockd
Only hosts that use lockd VGs should be configured to run lvmlockd.
However, shared devices used by lockd VGs may be visible from hosts not
using lvmlockd. From a host not using lvmlockd, visible lockd VGs are
ignored in the same way as foreign VGs (see
However, shared devices in lockd VGs may be visible from hosts not
using lvmlockd. From a host not using lvmlockd, lockd VGs are ignored
in the same way as foreign VGs (see
.BR lvmsystemid (7).)
The --shared option for reporting and display commands causes lockd VGs
@@ -267,9 +272,9 @@ for all vgcreate options.
.B vgcreate <vgname> <devices>
.IP \[bu] 2
Creates a local VG with the local system ID when neither lvmlockd nor clvm are configured.
Creates a local VG with the local host's system ID when neither lvmlockd nor clvm are configured.
.IP \[bu] 2
Creates a local VG with the local system ID when lvmlockd is configured.
Creates a local VG with the local host's system ID when lvmlockd is configured.
.IP \[bu] 2
Creates a clvm VG when clvm is configured.
@@ -300,10 +305,11 @@ LVM commands request locks from clvmd to use the VG.
.SS creating the first sanlock VG
Creating the first sanlock VG is not protected by locking and requires
special attention. This is because sanlock locks exist within the VG, so
they are not available until the VG exists. The first sanlock VG will
contain the "global lock".
Creating the first sanlock VG is not protected by locking, so it requires
special attention. This is because sanlock locks exist on storage within
the VG, so they are not available until the VG exists. The first sanlock
VG created will automatically contain the "global lock". Be aware of the
following special considerations:
.IP \[bu] 2
The first vgcreate command needs to be given the path to a device that has
@@ -312,6 +318,11 @@ be done by vgcreate. This is because the pvcreate command requires the
global lock, which will not be available until after the first sanlock VG
is created.
.IP \[bu] 2
Because the first sanlock VG will contain the global lock, this VG needs
to be accessible to all hosts that will use sanlock shared VGs. All hosts
will need to use the global lock from the first sanlock VG.
.IP \[bu] 2
While running vgcreate for the first sanlock VG, ensure that the device
being used is not used by another LVM command. Allocation of shared
@@ -323,11 +334,6 @@ While running vgcreate for the first sanlock VG, ensure that the VG name
being used is not used by another LVM command. Uniqueness of VG names is
usually ensured by the global lock.
.IP \[bu] 2
Because the first sanlock VG will contain the global lock, this VG needs
to be accessible to all hosts that will use sanlock shared VGs. All hosts
will need to use the global lock from the first sanlock VG.
See below for more information about managing the sanlock global lock.
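A minimal sketch of creating and starting the first sanlock VG, assuming lvmlockd and sanlock are already running and that /dev/sdb is an unused, uninitialized device (names are hypothetical):
.nf
# vgcreate initializes the device itself; do not pvcreate it first
vgcreate --shared vg1 /dev/sdb
# start the VG lockspace on each host that will use the VG
vgchange --lock-start vg1
.fi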
@@ -383,7 +389,7 @@ lvmlockd is running
the lock manager is running
.br
\[bu]
the VG is visible to the system
the VG's devices are visible on the system
.br
A lockd VG can be stopped if all LVs are deactivated.
@@ -425,22 +431,23 @@ activation {
.SS automatic starting and automatic activation
Scripts or programs on a host that automatically start VGs will use the
"auto" option to indicate that the command is being run automatically by
the system:
When system-level scripts/programs automatically start VGs, they should
use the "auto" option. This option indicates that the command is being
run automatically by the system:
vgchange --lock-start --lock-opt auto [<vgname> ...]
Without any additional configuration, including the "auto" option has no
effect; all VGs are started unless restricted by lock_start_list.
The "auto" option causes the command to follow the lvm.conf
activation/auto_lock_start_list. If auto_lock_start_list is undefined,
all VGs are started, just as if the auto option was not used.
However, when the lvm.conf activation/auto_lock_start_list is defined, the
auto start command performs an additional filtering phase to all VGs being
started, testing each VG name against the auto_lock_start_list. The
auto_lock_start_list defines lockd VGs that will be started by the auto
start command. Visible lockd VGs not included in the list are ignored by
the auto start command. If the list is undefined, all VG names pass this
filter. (The lock_start_list is also still used to filter all VGs.)
When auto_lock_start_list is defined, it lists the lockd VGs that should
be started by the auto command. VG names that do not match an item in the
list will be ignored by the auto start command.
(The lock_start_list is also still used to filter VG names from all start
commands, i.e. with or without the auto option. When the lock_start_list
is defined, only VGs matching a list item can be started with vgchange.)
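As a hedged sketch (VG names hypothetical), restricting auto start to two VGs could look like:
.nf
activation {
    # only these lockd VGs are started when the auto option is used
    auto_lock_start_list = [ "vg1", "vg2" ]
}
.fi
With this setting, 'vgchange --lock-start --lock-opt auto' starts only vg1 and vg2, while an explicit 'vgchange --lock-start vg3' can still start vg3 (subject to lock_start_list).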
The auto_lock_start_list allows a user to select certain lockd VGs that
should be automatically started by the system (or indirectly, those that
@@ -470,14 +477,12 @@ The set of orphan PVs and unused devices.
The properties of orphan PVs, e.g. PV size.
.br
The global lock is used in shared mode by commands that read this
information, or in exclusive mode by commands that change it.
The command 'vgs' acquires the global lock in shared mode because it
reports the list of all VG names.
The vgcreate command acquires the global lock in exclusive mode because it
creates a new VG name, and it takes a PV from the list of unused PVs.
The global lock is acquired in shared mode by commands that read this
information, or in exclusive mode by commands that change it. For
example, the command 'vgs' acquires the global lock in shared mode because
it reports the list of all VG names, and the vgcreate command acquires the
global lock in exclusive mode because it creates a new VG name, and it
takes a PV from the list of unused PVs.
When an LVM command is given a tag argument, or uses select, it must read
all VGs to match the tag or selection, which causes the global lock to be
@@ -485,10 +490,10 @@ acquired.
.I VG lock
A VG lock is associated with each VG. The VG lock is acquired in shared
mode to read the VG and in exclusive mode to change the VG (modify the VG
metadata or activate LVs). This lock serializes access to a VG with all
other LVM commands accessing the VG from all hosts.
A VG lock is associated with each lockd VG. The VG lock is acquired in
shared mode to read the VG and in exclusive mode to change the VG (modify
the VG metadata or activate LVs). This lock serializes access to a VG
with all other LVM commands accessing the VG from all hosts.
The command 'vgs' will not only acquire the GL lock to read the list of
all VG names, but will acquire the VG lock for each VG prior to reading
@@ -502,7 +507,7 @@ argument.
An LV lock is acquired before the LV is activated, and is released after
the LV is deactivated. If the LV lock cannot be acquired, the LV is not
activated. LV locks are persistent and remain in place after the
activated. LV locks are persistent and remain in place when the
activation command is done. GL and VG locks are transient, and are held
only while an LVM command is running.
@@ -822,8 +827,8 @@ While lvmlockd and clvmd are entirely different systems, LVM command usage
remains similar. Differences are more notable when using lvmlockd's
sanlock option.
Visible usage differences between lockd VGs with lvmlockd and clvm VGs
with clvmd:
Visible usage differences between lockd VGs (using lvmlockd) and clvm VGs
(using clvmd):
.IP \[bu] 2
lvm.conf must be configured to use either lvmlockd (use_lvmlockd=1) or

View File

@@ -5,14 +5,16 @@ lvmraid \(em LVM RAID
.SH DESCRIPTION
LVM RAID is a way to create logical volumes (LVs) that use multiple physical
devices to improve performance or tolerate device failure. How blocks of
data in an LV are placed onto physical devices is determined by the RAID
level. RAID levels are commonly referred by a level specific number
suffixed to the string 'raid', e.g. raid1, raid5 or raid6.
Selecting a RAID level involves tradeoffs among physical device
requirements, fault tolerance, and performance. A description of the RAID
levels can be found at
\fBlvm\fP(8) RAID is a way to create a Logical Volume (LV) that uses
multiple physical devices to improve performance or tolerate device
failures. In LVM, the physical devices are Physical Volumes (PVs) in a
single Volume Group (VG).
How LV data blocks are placed onto PVs is determined by the RAID level.
RAID levels are commonly referred to as 'raid' followed by a number, e.g.
raid1, raid5 or raid6. Selecting a RAID level involves making tradeoffs
among: physical device requirements, fault tolerance, and performance. A
description of the RAID levels can be found at
.br
www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
@@ -894,7 +896,7 @@ between linear and raid1.
.IP \(bu 3
between mirror and raid1.
.IP \(bu 3
between 2-legged raid1 and raid4/5.
between raid1 with two images and raid4/5.
.IP \(bu 3
between striped/raid0 and raid4.
.IP \(bu 3
@@ -910,9 +912,90 @@ between striped/raid0 and raid10.
.IP \(bu 3
between striped and raid4.
.SS Indirect conversions
Converting from one raid level to another may require multiple steps,
converting first to intermediate raid levels.
.B linear to raid6
To convert an LV from linear to raid6:
.br
1. convert to raid1 with two images
.br
2. convert to raid5 (internally raid5_ls) with two images
.br
3. convert to raid5 with three or more stripes (reshape)
.br
4. convert to raid6 (internally raid6_ls_6)
.br
5. convert to raid6 (internally raid6_zr, reshape)
The commands to perform the steps above are:
.br
1. lvconvert --type raid1 --mirrors 1 LV
.br
2. lvconvert --type raid5 LV
.br
3. lvconvert --stripes 3 LV
.br
4. lvconvert --type raid6 LV
.br
5. lvconvert --type raid6 LV
The final conversion from raid6_ls_6 to raid6_zr is done to avoid the
potential write/recovery performance reduction in raid6_ls_6 because of
the dedicated parity device. raid6_zr rotates data and parity blocks to
avoid this.
.B linear to striped
To convert an LV from linear to striped:
.br
1. convert to raid1 with two images
.br
2. convert to raid5_n
.br
3. convert to raid5_n with five 128k stripes (reshape)
.br
4. convert raid5_n to striped
The commands to perform the steps above are:
.br
1. lvconvert --type raid1 --mirrors 1 LV
.br
2. lvconvert --type raid5_n LV
.br
3. lvconvert --stripes 5 --stripesize 128k LV
.br
4. lvconvert --type striped LV
The raid5_n type in step 2 is used because it has dedicated parity SubLVs
at the end, and can be converted to striped directly. The stripe size is
increased in step 3 to add extra space for the conversion process. This
step grows the LV size by a factor of five. After conversion, this extra
space can be reduced (or used to grow the file system using the LV).
Reversing these steps will convert a striped LV to linear.
.B raid6 to striped
To convert an LV from raid6_nr to striped:
.br
1. convert to raid6_n_6
.br
2. convert to striped
The commands to perform the steps above are:
.br
1. lvconvert --type raid6_n_6 LV
.br
2. lvconvert --type striped LV
.SS Examples
1. Converting an LV from \fBlinear\fP to \fBraid1\fP.
Converting an LV from \fBlinear\fP to \fBraid1\fP.
.nf
# lvs -a -o name,segtype,size vg
@@ -930,37 +1013,7 @@ between striped and raid4.
[lv_rmeta_1] linear 3.00m
.fi
2. Converting an LV from \fBmirror\fP to \fBraid1\fP.
.nf
# lvs -a -o name,segtype,size vg
LV Type LSize
lv mirror 100.00g
[lv_mimage_0] linear 100.00g
[lv_mimage_1] linear 100.00g
[lv_mlog] linear 3.00m
.SS Examples
1. Converting an LV from \fBlinear\fP to \fBraid1\fP.
.nf
# lvs -a -o name,segtype,size vg
LV Type LSize
lv linear 300.00g
# lvconvert --type raid1 --mirrors 1 vg/lv
# lvs -a -o name,segtype,size vg
LV Type LSize
lv raid1 300.00g
[lv_rimage_0] linear 300.00g
[lv_rimage_1] linear 300.00g
[lv_rmeta_0] linear 3.00m
[lv_rmeta_1] linear 3.00m
.fi
2. Converting an LV from \fBmirror\fP to \fBraid1\fP.
Converting an LV from \fBmirror\fP to \fBraid1\fP.
.nf
# lvs -a -o name,segtype,size vg
@@ -981,28 +1034,17 @@ between striped and raid4.
[lv_rmeta_1] linear 3.00m
.fi
3. Converting an LV from \fBlinear\fP to \fBraid1\fP (with 3 images).
Converting an LV from \fBlinear\fP to \fBraid1\fP (with 3 images).
.nf
Start with a linear LV:
# lvcreate -L1G -n lv vg
Convert the linear LV to raid1 with three images
(original linear image plus 2 mirror images):
# lvconvert --type raid1 --mirrors 2 vg/lv
.fi
4. Converting an LV from \fBstriped\fP (with 4 stripes) to \fBraid6_n_6\fP.
Converting an LV from \fBstriped\fP (with 4 stripes) to \fBraid6_n_6\fP.
.nf
Start with a striped LV:
# lvcreate --stripes 4 -L64M -n lv vg
Convert the striped LV to raid6_n_6:
# lvconvert --type raid6 vg/lv
# lvs -a -o lv_name,segtype,sync_percent,data_copies
@@ -1049,7 +1091,9 @@ that is done, the new stripe is unquiesced and used.)
.SS Examples
1. Converting raid6_n_6 to raid6_nr with rotating data/parity.
(Command output shown in examples may change.)
Converting raid6_n_6 to raid6_nr with rotating data/parity.
This conversion naturally follows a previous conversion from striped/raid0
to raid6_n_6 (shown above). It completes the transition to a more
@@ -1316,7 +1360,8 @@ In case the RaidLV should be converted to striped:
.nf
# lvconvert --type striped vg/lv
Unable to convert LV vg/lv from raid6_nr to striped.
Converting vg/lv from raid6_nr is directly possible to the following layouts:
Converting vg/lv from raid6_nr is directly possible to the \\
following layouts:
raid6_nc
raid6_zr
raid6_la_6
@@ -1619,7 +1664,9 @@ RAID6 last parity devices
.br
\[bu]
Fixed dedicated last devices (P-Syndrome N-1 and Q-Syndrome N)
.RS 2
with striped data used for striped/raid0 conversions
.RE
.br
\[bu]
Used for RAID Takeover
@@ -1630,7 +1677,10 @@ raid6_{ls,rs,la,ra}_6
RAID6 last parity device
.br
\[bu]
Dedicated last parity device used for conversions from/to raid5_{ls,rs,la,ra}
Dedicated last parity device used for conversions from/to
.RS 2
raid5_{ls,rs,la,ra}
.RE
raid6_ls_6
.br

View File

@@ -948,7 +948,7 @@ configuration directly on command line.
You can obtain the same information with single command where all the
information about PVs, PV segments, LVs and LV segments are obtained
per VG under a single VG lock for consistency, see also \fBlvm fullreport\fP(8)
per VG under a single VG lock for consistency, see also \fBlvm-fullreport\fP(8)
man page for more information. The fullreport has its own configuration
settings to define field sets to use, similar to individual reports as
displayed above, but configuration settings have "_full" suffix now.
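A hedged example (the VG name is hypothetical):
.nf
# one consistent report of a VG with its PVs, LVs and segments,
# gathered under a single VG lock
lvm fullreport vg0
.fi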

View File

@@ -5,132 +5,161 @@ lvmsystemid \(em LVM system ID
.SH DESCRIPTION
Local VGs may exist on shared storage where they are visible to multiple
hosts. These VGs are intended to be used by only a single machine, even
though they are visible to many. A system_id identifying a single host
can be assigned to a VG to indicate the VGs owner. The VG owner can use
the VG as usual, and all other hosts will ignore it. This protects the VG
from accidental use by other hosts.
The \fBlvm\fP(8) system ID restricts Volume Group (VG) access to one host.
This is useful when a VG is placed on shared storage devices, or when
local devices are visible to both host and guest operating systems. In
cases like these, a VG can be visible to multiple hosts at once, and some
mechanism is needed to protect it from being used by more than one host at
a time.
The system_id is not a dynamic property, and can only be changed in very
limited circumstances (see vgexport and vgimport). Even limited changes
to the VG system_id are not perfectly reflected across hosts. A more
coherent view of shared storage requires using an inter-host locking
system to coordinate access and update caches.
A VG's system ID identifies one host as the VG owner. The host with a
matching system ID can use the VG and its LVs, while LVM on other hosts
will ignore it. This protects the VG from being accidentally used from
other hosts.
The system_id is a string uniquely identifying a host. It can be manually
set to a custom value or it can be assigned automatically by lvm using a
unique identifier already available on the host, e.g. machine-id or uname.
The system ID is a string that uniquely identifies a host. It can be
configured as a custom value, or it can be assigned automatically by LVM
using some unique identifier already available on the host, e.g.
machine-id or uname.
In vgcreate, the local system_id is saved in the new VG metadata. The
local host owns the new VG, and other hosts cannot use it.
When a new VG is created, the system ID of the local host is recorded in
the VG metadata. The creating host then owns the new VG, and LVM on other
hosts will ignore it. When an existing, exported VG is imported
(vgimport), the system ID of the local host is saved in the VG metadata,
and the importing host owns the VG.
A VG without a system_id can be used by any host, and a VG with a
system_id can only be used by a host with a matching system_id. A
A VG without a system ID can be used by LVM on any host where the VG's
devices are visible. When system IDs are not used, device filters should
be configured on all hosts to exclude the VG's devices from all but one
host.
A
.B foreign VG
is a VG with a system_id as viewed by a host with a system_id
that does not match the VGs system_id. (Or from a host without a
system_id.)
is a VG seen by a host with an unmatching system ID, i.e. the system ID
in the VG metadata does not match the system ID configured on the host.
If the host has no system ID, and the VG does, the VG is foreign and LVM
will ignore it. If the VG has no system ID, access is unrestricted, and
LVM can access it from any host, whether the host has a system ID or not.
Valid system_id characters are the same as valid VG name characters. If a
system_id contains invalid characters, those characters are omitted and
remaining characters are used. If a system_id is longer than the maximum
Changes to a host's system ID and a VG's system ID can be made in limited
circumstances (see vgexport and vgimport). Improper changes can result in
a host losing access to its VG, or a VG being accidentally damaged by
access from an unintended host. Even limited changes to the VG system ID
may not be perfectly reflected across hosts. A more coherent view of
shared storage requires an inter-host locking system to coordinate access
and update caches.
Valid system ID characters are the same as valid VG name characters. If a
system ID contains invalid characters, those characters are omitted and
remaining characters are used. If a system ID is longer than the maximum
name length, the characters up to the maximum length are used. The
maximum length of a system_id is 128 characters.
maximum length of a system ID is 128 characters.
Print the system ID of a VG to check if it is set:
.B vgs -o systemid
.I VG
Print the system ID of the local host to check if it is configured:
.B lvm systemid
.SS Limitations and warnings
To benefit fully from system_id, all hosts must have system_id set, and
VGs must have system_id set. A VG on shared storage can be damaged or
destroyed in some cases which the user must be careful to avoid.
To benefit fully from system ID, all hosts should have a system ID
configured, and all VGs should have a system ID set. Without any method
to restrict access, e.g. system ID or device filters, a VG that is visible
to multiple hosts can be accidentally damaged or destroyed.
.IP \[bu] 2
A VG without a system_id can be used without restriction from any host,
even from hosts that have a system_id. Many VGs will not have a system_id
and are unprotected. Verify that a VG has a system_id by running the
command 'vgs -o+systemid'
A VG will not have a system_id if it was created before this feature was
added to lvm, or if it was created by a host that did not have a system_id
defined. A system_id can be assigned to these VGs by using vgchange
--systemid (see below).
A VG without a system ID can be used without restriction from any host
where it is visible, even from hosts that have a system ID.
.IP \[bu] 2
Two hosts should not be assigned the same system_id. Doing so defeats
the purpose of the system_id which is to distinguish different hosts.
Many VGs will not have a system ID set because LVM has not enabled it by
default, and even when enabled, many VGs were created before the feature
was added to LVM or enabled. A system ID can be assigned to these VGs by
using vgchange --systemid (see below).
.IP \[bu] 2
Orphan PVs (or unused devices) on shared storage are completely
unprotected by the system_id feature. Commands that use these PVs, such
as vgcreate or vgextend, are not prevented from performing conflicting
operations and corrupting the PVs. See the
Two hosts should not be assigned the same system ID. Doing so defeats
the purpose of distinguishing different hosts with this value.
.IP \[bu] 2
Orphan PVs (or unused devices) on shared storage are unprotected by the
system ID feature. Commands that use these PVs, such as vgcreate or
vgextend, are not prevented from performing conflicting operations and
corrupting the PVs. See the
.B orphans
section for more information.
.IP \[bu] 2
A host using an old version of lvm without the system_id feature will not
recognize a new system_id in VGs from other hosts. Even though the old
version of lvm is not blocked from reading a VG with a system_id, it is
blocked from writing to the VG (or its LVs). The new system_id changes
the write mode of a VG, making it appear read-only to previous lvm
versions.
The system ID does not protect devices in a VG from programs other than LVM.
This also means that if a host downgrades its version of lvm, it would
lose access to any VGs it had created with a system_id. To avoid this,
the system_id should be removed from VGs before downgrading to an lvm
version without the system_id feature.
.IP \[bu] 2
A host using an old LVM version (without the system ID feature) will not
recognize a system ID set in VGs. The old LVM can read a VG with a
system ID, but is prevented from writing to the VG (or its LVs).
The system ID feature changes the write mode of a VG, making it appear
read-only to previous versions of LVM.
This also means that if a host downgrades to the old LVM version, it would
lose access to any VGs it had created with a system ID. To avoid this,
the system ID should be removed from local VGs before downgrading LVM to a
version without the system ID feature.
.P
.SS Types of VG access
A local VG is meant to be used by a single host.
.br
A shared or clustered VG is meant to be used by multiple hosts.
.br
These can be further distinguished as:
.B Unrestricted:
A local VG that has no system_id. This VG type is unprotected and
A local VG that has no system ID. This VG type is unprotected and
accessible to any host.
.B Owned:
A local VG that has a system_id set, as viewed from the one host with a
matching system_id (the owner). This VG type is by definition acessible.
A local VG that has a system ID set, as viewed from the host with a
matching system ID (the owner). This VG type is accessible to the host.
.B Foreign:
A local VG that has a system_id set, as viewed from any host with an
unmatching system_id (or no system_id). It is owned by another host.
This VG type is by definition not accessible.
A local VG that has a system ID set, as viewed from any host with an
unmatching system ID (or no system ID). It is owned by another host.
This VG type is not accessible to the host.
.B Exported:
A local VG that has been exported with vgexport and has no system_id.
A local VG that has been exported with vgexport and has no system ID.
This VG type can only be accessed by vgimport which will change it to
owned.
.B Shared:
A shared or "lockd" VG has lock_type set and no system_id.
A shared or "lockd" VG has the lock_type set and has no system ID.
A shared VG is meant to be used on shared storage from multiple hosts,
and is only accessible to hosts using lvmlockd. Applicable only if LVM
is compiled with lockd support.
is compiled with lvmlockd support.
.B Clustered:
A clustered or "clvm" VG has the clustered flag set and no system_id.
A clustered or "clvm" VG has the clustered flag set and has no system ID.
A clustered VG is meant to be used on shared storage from multiple hosts,
and is only accessible to hosts using clvmd.
and is only accessible to hosts using clvmd. Applicable only if LVM
is compiled with clvm support.
.SS system_id_source
A host's own system_id can be defined in a number of ways. lvm.conf
global/system_id_source defines the method lvm will use to find the local
system_id:
.SS Host system ID configuration
A host's own system ID can be defined in a number of ways. lvm.conf
global/system_id_source defines the method LVM will use to find the local
system ID:
.TP
.B none
.br
lvm will not use a system_id. lvm is allowed to access VGs without a
system_id, and will create new VGs without a system_id. An undefined
LVM will not use a system ID. LVM is allowed to access VGs without a
system ID, and will create new VGs without a system ID. An undefined
system_id_source is equivalent to none.
.I lvm.conf
@@ -144,7 +173,7 @@ global {
.B machineid
.br
The content of /etc/machine-id is used as the system_id if available.
The content of /etc/machine-id is used as the system ID if available.
See
.BR machine-id (5)
and
@@ -164,7 +193,7 @@ global {
The string utsname.nodename from
.BR uname (2)
is used as the system_id. A uname beginning with "localhost"
is used as the system ID. A uname beginning with "localhost"
is ignored and equivalent to none.
.I lvm.conf
@@ -178,7 +207,7 @@ global {
.B lvmlocal
.br
The system_id is defined in lvmlocal.conf local/system_id.
The system ID is defined in lvmlocal.conf local/system_id.
.I lvm.conf
.nf
@@ -198,7 +227,7 @@ local {
.B file
.br
The system_id is defined in a file specified by lvm.conf
The system ID is defined in a file specified by lvm.conf
global/system_id_file.
.I lvm.conf
@@ -211,20 +240,22 @@ global {
.LP
Changing system_id_source will often cause the system_id to change, which
may prevent the host from using VGs that it previously used (see
extra_system_ids below to handle this.)
Changing system_id_source will likely cause the system ID of the host to
change, which will prevent the host from using VGs that it previously used
(see extra_system_ids below to handle this.)
If a system_id_source other than none fails to resolve a system_id, the
host will be allowed to access VGs with no system_id, but will not be
allowed to access VGs with a defined system_id.
If a system_id_source other than none fails to produce a system ID value,
it is the equivalent of having none. The host will be allowed to access
VGs with no system ID, but will not be allowed to access VGs with a system
ID set.
.SS extra_system_ids
In some cases, it may be useful for a host to access VGs with different
system_id's, e.g. if a host's system_id changes, and it wants to use VGs
that it created with its old system_id. To allow a host to access VGs
with other system_id's, those other system_id's can be listed in
.SS Overriding system ID
In some cases, it may be necessary for a host to access VGs with different
system IDs, e.g. if a host's system ID changes, and it wants to use VGs
that it created with its old system ID. To allow a host to access VGs
with other system IDs, those other system IDs can be listed in
lvmlocal.conf local/extra_system_ids.
.I lvmlocal.conf
@@ -234,106 +265,115 @@ local {
}
.fi
A safer option may be configuring the extra values as needed on the
command line as:
.br
\fB--config 'local/extra_system_ids=["\fP\fIid\fP\fB"]'\fP
.SS vgcreate
In vgcreate, the host running the command assigns its own system_id to the
new VG. To override this and set another system_id:
In vgcreate, the host running the command assigns its own system ID to the
new VG. To override this and set another system ID:
.B vgcreate --systemid
.I SystemID VG Devices
.I SystemID VG PVs
Overriding the system_id makes it possible for a host to create a VG that
it may not be able to use. Another host with a system_id matching the one
specified may not recognize the new VG without manually rescanning
Overriding the host's system ID makes it possible for a host to create a
VG that it may not be able to use. Another host with a system ID matching
the one specified may not recognize the new VG without manually rescanning
devices.
If the --systemid argument is an empty string (""), the VG is created with
no system_id, making it accessible to other hosts (see warnings above.)
no system ID, making it accessible to other hosts (see warnings above.)
.SS report/display
The system_id of a VG is displayed with the "systemid" reporting option.
The system ID of a VG is displayed with the "systemid" reporting option.
Report/display commands ignore foreign VGs by default. To report foreign
VGs, the --foreign option can be used. This causes the VGs to be read
from disk. Because lvmetad caching is not used, this option can cause
poor performance.
.B vgs --foreign -o+systemid
.B vgs --foreign -o +systemid
When a host with no system_id sees foreign VGs, it warns about them as
they are skipped. The host should be assigned a system_id, after which
When a host with no system ID sees foreign VGs, it warns about them as
they are skipped. The host should be assigned a system ID, after which
standard reporting commands will silently ignore foreign VGs.
.SS vgexport/vgimport
vgexport clears the system_id.
vgexport clears the system ID.
Other hosts will continue to see a newly exported VG as foreign because of
local caching (when lvmetad is used). Manually updating the local lvmetad
cache with pvscan --cache will allow a host to recognize the newly
exported VG.
vgimport sets the VG system_id to the local system_id as determined by
lvm.conf system_id_source. vgimport automatically scans storage for
newly exported VGs.
vgimport sets the VG system ID to the system ID of the host doing the
import. vgimport automatically scans storage for newly exported VGs.
After vgimport, the exporting host may continue to see the VG as exported,
and not owned by the new host. Manually updating the local cache with
pvscan --cache will allow a host to recognize the newly imported VG as
foreign.
After vgimport, the exporting host will continue to see the VG as
exported, and not owned by the new host. Manually updating the local
cache with pvscan --cache will allow a host to recognize the newly
imported VG as foreign.
.SS vgchange
A host can change the system_id of its own VGs, but the command requires
A host can change the system ID of its own VGs, but the command requires
confirmation because the host may lose access to the VG being changed:
.B vgchange --systemid
.I SystemID VG
The system_id can be removed from a VG by specifying an empty string ("")
as the new system_id. This makes the VG accessible to other hosts (see
The system ID can be removed from a VG by specifying an empty string ("")
as the new system ID. This makes the VG accessible to other hosts (see
warnings above.)
A host cannot directly change the system_id of a foreign VG.
A host cannot directly change the system ID of a foreign VG.
To move a VG from one host to another, vgexport and vgimport should be
used.
To forcibly gain ownership of a foreign VG, a host can add the foreign
system_id to its extra_system_ids list, change the system_id of the
foreign VG to its own, and remove the foreign system_id from its
extra_system_ids list.
To forcibly gain ownership of a foreign VG, a host can temporarily add the
foreign system ID to its extra_system_ids list, and change the system ID
of the foreign VG to its own. See Overriding system ID above.
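A minimal sketch of that takeover, assuming the foreign VG is "vg0", its current system ID is "oldhost", and this host's system ID is "newhost" (all names hypothetical):
.nf
# temporarily accept the foreign ID for this one command and
# rewrite the VG's system ID; vgchange prompts for confirmation
vgchange --config 'local/extra_system_ids=["oldhost"]' --systemid newhost vg0
.fi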
.SS shared VGs
A shared/lockd VG has no system_id set, allowing multiple hosts to
use it via lvmlockd. Changing a VG to a lockd type will clear the
existing system_id. Applicable only if LVM is compiled with lockd
support.
A shared/lockd VG has no system ID set, allowing multiple hosts to use it
via lvmlockd. Changing a VG to a lockd type will clear the existing
system ID. Applicable only if LVM is compiled with lockd support.
.SS clustered VGs
A clustered/clvm VG has no system_id set, allowing multiple hosts to
use it via clvmd. Changing a VG to clustered will clear the existing
system_id. Changing a VG to not clustered will set the system_id to the
host running the vgchange command.
A clustered/clvm VG has no system ID set, allowing multiple hosts to use
it via clvmd. Changing a VG to clustered will clear the existing system
ID. Changing a VG to not clustered will set the system ID to the host
running the vgchange command.
.SS creation_host
In vgcreate, the VG metadata field creation_host is set by default to the
host's uname. The creation_host cannot be changed, and is not used to
control access. When system_id_source is "uname", the system_id and
creation_host will be the same.
creation_host fields will be the same.
.SS orphans
Orphan PVs are unused devices; they are not currently used in any VG.
Because of this, they are not protected by a system_id, and any host can
Because of this, they are not protected by a system ID, and any host can
use them. Coordination of changes to orphan PVs is beyond the scope of
system_id. The same is true of any block device that is not a PV.
system ID. The same is true of any block device that is not a PV.
The effects of this are especially evident when lvm uses lvmetad caching.
The effects of this are especially evident when LVM uses lvmetad caching.
For example, if multiple hosts see an orphan PV, and one host creates a VG
using the orphan, the other hosts will continue to report the PV as an
orphan. Nothing would automatically prevent the other hosts from using
@@ -347,8 +387,9 @@ could be pvscan --cache, or vgs --foreign.
.BR vgchange (8),
.BR vgimport (8),
.BR vgexport (8),
.BR vgs (8),
.BR lvmlockd (8),
.BR lvm.conf (5),
.BR machine-id (5),
.BR uname (2),
.BR vgs (8)
.BR uname (2)

View File

@@ -5,17 +5,17 @@ lvmthin \(em LVM thin provisioning
.SH DESCRIPTION
Blocks in a standard logical volume are allocated when the LV is created,
but blocks in a thin provisioned logical volume are allocated as they are
written. Because of this, a thin provisioned LV is given a virtual size,
and can then be much larger than physically available storage. The amount
of physical storage provided for thin provisioned LVs can be increased
later as the need arises.
Blocks in a standard \fBlvm\fP(8) Logical Volume (LV) are allocated when
the LV is created, but blocks in a thin provisioned LV are allocated as
they are written. Because of this, a thin provisioned LV is given a
virtual size, and can then be much larger than physically available
storage. The amount of physical storage provided for thin provisioned LVs
can be increased later as the need arises.
Blocks in a standard LV are allocated (during creation) from the VG, but
blocks in a thin LV are allocated (during use) from a special "thin pool
LV". The thin pool LV contains blocks of physical storage, and blocks in
thin LVs just reference blocks in the thin pool LV.
Blocks in a standard LV are allocated (during creation) from the Volume
Group (VG), but blocks in a thin LV are allocated (during use) from a
special "thin pool LV". The thin pool LV contains blocks of physical
storage, and blocks in thin LVs just reference blocks in the thin pool LV.
A thin pool LV must be created before thin LVs can be created within it.
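A hedged sketch of that ordering (names and sizes are hypothetical):
.nf
# create the thin pool first, then a thin LV with a virtual size inside it
lvcreate --type thin-pool -L 10G -n pool0 vg
lvcreate -n thin1 -V 100G --thinpool pool0 vg
.fi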
A thin pool LV is created by combining two standard LVs: a large data LV
@@ -1016,7 +1016,7 @@ Possible discard behaviors:
ignore: Ignore any discards that are received.
nopassdown: Process any discards in the thin pool itself and allow
the no longer needed extends to be overwritten by new data.
the no longer needed extents to be overwritten by new data.
passdown: Process discards in the thin pool (as with nopassdown), and
pass the discards down to the underlying device. This is the default

View File

@@ -222,7 +222,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -203,7 +203,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -163,7 +163,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -426,7 +426,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -345,7 +345,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -190,7 +190,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -1 +1,4 @@
pvchange changes PV attributes in the VG.
For options listed in parentheses, any one is required, after which the
others are optional.

View File

@@ -264,7 +264,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -136,7 +136,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -294,7 +294,7 @@ on the ability to use vgsplit later.)
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -334,7 +334,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -268,7 +268,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -144,7 +144,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -132,7 +132,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -332,7 +332,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -327,7 +327,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -46,6 +46,8 @@
.BR lvs (8)
.BR lvscan (8)
.BR lvm-fullreport (8)
.BR lvm-lvpoll (8)
.BR lvm2-activation-generator (8)
.BR blkdeactivate (8)
.BR lvmdump (8)

View File

@@ -185,7 +185,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -284,7 +284,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -817,7 +817,7 @@ on the ability to use vgsplit later.)
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -132,7 +132,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -215,7 +215,7 @@ on the ability to use vgsplit later.)
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -367,7 +367,7 @@ on the ability to use vgsplit later.)
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -323,7 +323,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -1,6 +1,6 @@
vgexport makes inactive VGs unknown to the system. In this state, all the
PVs in the VG can be moved to a different system, from which
\fBvgimport\fP can then be run.
\fBvgimport\fP(8) can then be run.
Most LVM tools ignore exported VGs.
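A hedged example of moving a VG between systems (the VG name is hypothetical):
.nf
# on the old system: deactivate the LVs and export the VG
vgchange -an vg0
vgexport vg0
# move or re-attach the PVs, then on the new system:
vgimport vg0
vgchange -ay vg0
.fi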

View File

@@ -160,7 +160,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -258,7 +258,7 @@ on the ability to use vgsplit later.)
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -168,7 +168,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -153,7 +153,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -146,7 +146,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -151,7 +151,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -267,7 +267,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -163,7 +163,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -174,7 +174,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -327,7 +327,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -171,7 +171,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -244,7 +244,7 @@ on the command.
\fB-q\fP|\fB--quiet\fP ...
.br
Suppress output and log messages. Overrides --debug and --verbose.
Repeat once to also suppress any prompts with answer no.
Repeat once to also suppress any prompts with answer 'no'.
.ad b
.HP
.ad l

View File

@@ -75,8 +75,9 @@ BLOCKCOUNT=
MOUNTPOINT=
MOUNTED=
REMOUNT=
PROCMOUNTS="/proc/mounts"
PROCSELFMOUNTINFO="/proc/self/mountinfo"
PROCDIR="/proc"
PROCMOUNTS="$PROCDIR/mounts"
PROCSELFMOUNTINFO="$PROCDIR/self/mountinfo"
NULL="$DM_DEV_DIR/null"
IFS_OLD=$IFS
@@ -113,8 +114,11 @@ verbose() {
test -n "$VERB" && echo "$TOOL: $@" || true
}
# Support multi-line error messages
error() {
echo "$TOOL: $@" >&2
for i in "$@" ; do
echo "$TOOL: $i" >&2
done
cleanup 1
}
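# Illustration (added comment, not part of the patch): each argument is
# now printed as its own prefixed line on stderr before cleanup 1 exits,
# e.g.  error "Cannot proceed." "Device is busy."  prints
# "fsadm: Cannot proceed." and "fsadm: Device is busy." (assuming $TOOL
# is fsadm).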
@@ -178,52 +182,135 @@ decode_size() {
fi
}
decode_major_minor() {
# 0x00000fff00 mask MAJOR
# 0xfffff000ff mask MINOR
#MINOR=$(( $1 / 1048576 ))
#MAJOR=$(( ($1 - ${MINOR} * 1048576) / 256 ))
#MINOR=$(( $1 - ${MINOR} * 1048576 - ${MAJOR} * 256 + ${MINOR} * 256))
echo "$(( ( $1 >> 8 ) & 4095 )):$(( ( ( $1 >> 12 ) & 268435200 ) | ( $1 & 255 ) ))"
}
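# Worked example (added comment, not part of the patch): Linux packs
# st_dev with the major number in bits 8-19 and the minor number split
# across bits 0-7 and 20+.  For device-mapper node 253:3, stat %d gives
# 253*256 + 3 = 64771 and  decode_major_minor 64771  prints "253:3";
# for a large minor such as 253:260 the packed value 1113348 decodes
# the same way to "253:260".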
# detect filesystem on the given device
# dereference device name if it is symbolic link
detect_fs() {
VOLUME_ORIG=$1
VOLUME=${1/#"${DM_DEV_DIR}/"/}
VOLUME=$("$READLINK" $READLINK_E "$DM_DEV_DIR/$VOLUME") || error "Cannot get readlink \"$1\""
VOLUME=$("$READLINK" $READLINK_E "$DM_DEV_DIR/$VOLUME")
test -n "$VOLUME" || error "Cannot get readlink \"$1\"."
RVOLUME=$VOLUME
case "$RVOLUME" in
# hardcoded /dev since udev does not create these entries elsewhere
# hardcoded /dev since udev does not create these entries elsewhere
/dev/dm-[0-9]*)
read </sys/block/${RVOLUME#/dev/}/dm/name SYSVOLUME 2>&1 && VOLUME="$DM_DEV_DIR/mapper/$SYSVOLUME"
read </sys/block/${RVOLUME#/dev/}/dev MAJORMINOR 2>&1 || error "Cannot get major:minor for \"$VOLUME\""
read </sys/block/${RVOLUME#/dev/}/dev MAJORMINOR 2>&1 || error "Cannot get major:minor for \"$VOLUME\"."
MAJOR=${MAJORMINOR%%:*}
MINOR=${MAJORMINOR##*:}
;;
*)
STAT=$(stat --format "MAJOR=%t MINOR=%T" ${RVOLUME}) || error "Cannot get major:minor for \"$VOLUME\""
eval $STAT
MAJOR=$((0x${MAJOR}))
MINOR=$((0x${MINOR}))
MAJORMINOR=${MAJOR}:${MINOR}
STAT=$(stat --format "MAJOR=\$((0x%t)) MINOR=\$((0x%T))" ${RVOLUME})
test -n "$STAT" || error "Cannot get major:minor for \"$VOLUME\"."
eval "$STAT"
MAJORMINOR="${MAJOR}:${MINOR}"
;;
esac
# use null device as cache file to be sure about the result
# not using option '-o value' to be compatible with older version of blkid
FSTYPE=$("$BLKID" -c "$NULL" -s TYPE "$VOLUME") || error "Cannot get FSTYPE of \"$VOLUME\""
FSTYPE=$("$BLKID" -c "$NULL" -s TYPE "$VOLUME")
test -n "$FSTYPE" || error "Cannot get FSTYPE of \"$VOLUME\"."
FSTYPE=${FSTYPE##*TYPE=\"} # cut quotation marks
FSTYPE=${FSTYPE%%\"*}
verbose "\"$FSTYPE\" filesystem found on \"$VOLUME\""
verbose "\"$FSTYPE\" filesystem found on \"$VOLUME\"."
}
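# Illustration (added comment, not part of the patch): without -o value,
# blkid -c "$NULL" -s TYPE /dev/vg/lv prints a line like
#   /dev/vg/lv: TYPE="ext4"
# and the two parameter expansions above strip everything up to TYPE="
# and the trailing quote, leaving FSTYPE=ext4.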
detect_mounted_with_proc_self_mountinfo() {
MOUNTED=$("$GREP" "^[0-9]* [0-9]* $MAJORMINOR " "$PROCSELFMOUNTINFO")
# extract 5th field which is mount point
# Check that passed mounted MAJOR:MINOR is not matching $MAJOR:MINOR of resized $VOLUME
validate_mounted_major_minor() {
test "$1" = "$MAJORMINOR" || {
local REFNAME=$(dmsetup info -c -j "${1%%:*}" -m "${1##*:}" -o name --noheadings 2>/dev/null)
local CURNAME=$(dmsetup info -c -j "$MAJOR" -m "$MINOR" -o name --noheadings 2>/dev/null)
error "Cannot ${CHECK+CHECK}${RESIZE+RESIZE} device \"$VOLUME\" without umounting filesystem $MOUNTED first." \
"Mounted filesystem is using device $CURNAME, but referenced device is $REFNAME." \
"Filesystem utilities currently do not support renamed devices."
}
}
# ATM fsresize & fsck tools are not able to work properly
# when mounted device has changed its name.
# So whenever such device no longer exists with original name
# abort further command processing
check_valid_mounted_device() {
local MOUNTEDMAJORMINOR
local VOL=$("$READLINK" $READLINK_E "$1")
local CURNAME=$(dmsetup info -c -j "$MAJOR" -m "$MINOR" -o name --noheadings)
local SUGGEST="Possibly device \"$1\" has been renamed to \"$CURNAME\"?"
# more confused, device is not DM....
test -n "$CURNAME" || SUGGEST="Mounted volume is not a device mapper device???"
test -n "$VOL" ||
error "Cannot access device \"$1\" referenced by mounted filesystem \"$MOUNTED\"." \
"$SUGGEST" \
"Filesystem utilities currently do not support renamed devices."
case "$VOL" in
# hardcoded /dev since udev does not create these entries elsewhere
/dev/dm-[0-9]*)
read </sys/block/${VOL#/dev/}/dev MOUNTEDMAJORMINOR 2>&1 || error "Cannot get major:minor for \"$VOLUME\"."
;;
*)
STAT=$(stat --format "MOUNTEDMAJORMINOR=\$((0x%t)):\$((0x%T))" "$VOL")
test -n "$STAT" || error "Cannot get major:minor for \"$VOLUME\"."
eval "$STAT"
;;
esac
validate_mounted_major_minor "$MOUNTEDMAJORMINOR"
}
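# Illustrative scenario (added comment, not part of the patch): an LV
# renamed while mounted still appears in mount tables under its old
# path; that path either no longer resolves (triggering the "has been
# renamed to" hint above) or resolves to a different major:minor than
# the volume being resized, which validate_mounted_major_minor then
# reports, since fsck/resize tools cannot handle renamed devices.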
detect_mounted_with_proc_self_mountinfo() {
# Check self mountinfo
# grab major:minor mounted_device mount_point
MOUNTED=$("$GREP" "^[0-9]* [0-9]* $MAJORMINOR " "$PROCSELFMOUNTINFO" 2>/dev/null | head -1)
# If device is opened and not yet found as self mounted
# check all other mountinfos (since it can be mounted in cgroups)
# Use 'find' to not fail on too long a list of args with too many pids

# only 1st. line is needed
test -z "$MOUNTED" &&
test $(dmsetup info -c --noheading -o open -j "$MAJOR" -m "$MINOR") -gt 0 &&
MOUNTED=$(find "$PROCDIR" -maxdepth 2 -name mountinfo -print0 | xargs -0 "$GREP" "^[0-9]* [0-9]* $MAJORMINOR " 2>/dev/null | head -1 2>/dev/null)
# TODO: for performance compare with sed and stop with 1st. match:
# sed -n "/$MAJORMINOR/ {;p;q;}"
# extract 2nd field after ' - ' separator as mounted device
MOUNTDEV=$(echo ${MOUNTED##* - } | cut -d ' ' -f 2)
MOUNTDEV=$(echo -n -e ${MOUNTDEV})
# extract 5th field as mount point
# echo -e translates \040 to spaces
MOUNTED=$(echo ${MOUNTED} | cut -d " " -f 5)
MOUNTED=$(echo ${MOUNTED} | cut -d ' ' -f 5)
MOUNTED=$(echo -n -e ${MOUNTED})
test -n "$MOUNTED"
test -n "$MOUNTED" || return 1 # Not seen mounted anywhere
check_valid_mounted_device "$MOUNTDEV"
}
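# Example mountinfo line (added comment, values made up for illustration):
#   97 1 253:3 / /mnt/data rw,relatime shared:33 - ext4 /dev/mapper/vg-data rw
# Field 3 (253:3) is matched against $MAJORMINOR, field 5 (/mnt/data)
# ends up in $MOUNTED, and the 2nd field after the " - " separator
# (/dev/mapper/vg-data) becomes $MOUNTDEV; "echo -n -e" expands octal
# escapes such as \040 (space) in both values.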
detect_mounted_with_proc_mounts() {
# With older systems without /proc/*/mountinfo we may need to check
# every mount point as cannot easily depend on the name of mounted
# device (which could have been renamed).
# We need to visit every mount point and check its major:minor
detect_mounted_with_proc_mounts() {
MOUNTED=$("$GREP" "^$VOLUME[ \t]" "$PROCMOUNTS")
# for empty string try again with real volume name
test -z "$MOUNTED" && MOUNTED=$("$GREP" "^$RVOLUME[ \t]" "$PROCMOUNTS")
MOUNTDEV=$(echo -n -e ${MOUNTED%% *})
# cut device name prefix and trim everything past mountpoint
# echo translates \040 to spaces
MOUNTED=${MOUNTED#* }
@@ -231,24 +318,43 @@ detect_mounted_with_proc_mounts() {
# for systems with different device names - check also mount output
if test -z "$MOUNTED" ; then
# will not work with spaces in paths
MOUNTED=$(LC_ALL=C "$MOUNT" | "$GREP" "^$VOLUME[ \t]")
test -z "$MOUNTED" && MOUNTED=$(LC_ALL=C "$MOUNT" | "$GREP" "^$RVOLUME[ \t]")
MOUNTDEV=${MOUNTED%% on *}
MOUNTED=${MOUNTED##* on }
MOUNTED=${MOUNTED% type *} # allow type in the mount name
fi
test -n "$MOUNTED"
if test -n "$MOUNTED" ; then
check_valid_mounted_device "$MOUNTDEV"
return 0 # mounted
fi
# If still nothing found and volume is in use
# check every known mount point against MAJOR:MINOR
if test $(dmsetup info -c --noheading -o open -j "$MAJOR" -m "$MINOR") -gt 0 ; then
while IFS=$'\n' read -r i ; do
MOUNTDEV=$(echo -n -e ${i%% *})
MOUNTED=${i#* }
MOUNTED=$(echo -n -e ${MOUNTED%% *})
STAT=$(stat --format "%d" $MOUNTED)
validate_mounted_major_minor $(decode_major_minor "$STAT")
done < "$PROCMOUNTS"
fi
return 1 # nothing is mounted
}
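# Example /proc/mounts line (added comment, values made up for
# illustration):
#   /dev/mapper/vg-data /mnt/data ext4 rw,relatime 0 0
# The first field becomes $MOUNTDEV and the second $MOUNTED; if the
# name does not match and the device is still open, every listed mount
# point is stat'ed and its st_dev compared against $MAJORMINOR through
# decode_major_minor.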
# check if the given device is already mounted and where
# FIXME: resolve swap usage and device stacking
detect_mounted() {
detect_mounted() {
if test -e "$PROCSELFMOUNTINFO"; then
detect_mounted_with_proc_self_mountinfo
elif test -e "$PROCMOUNTS"; then
detect_mounted_with_proc_mounts
else
error "Cannot detect mounted device \"$VOLUME\""
error "Cannot detect mounted device \"$VOLUME\"."
fi
}
@@ -257,10 +363,13 @@ detect_device_size() {
# check if blockdev supports getsize64
"$BLOCKDEV" 2>&1 | "$GREP" getsize64 >"$NULL"
if test $? -eq 0; then
DEVSIZE=$("$BLOCKDEV" --getsize64 "$VOLUME") || error "Cannot read size of device \"$VOLUME\""
DEVSIZE=$("$BLOCKDEV" --getsize64 "$VOLUME")
test -n "$DEVSIZE" || error "Cannot read size of device \"$VOLUME\"."
else
DEVSIZE=$("$BLOCKDEV" --getsize "$VOLUME") || error "Cannot read size of device \"$VOLUME\""
SSSIZE=$("$BLOCKDEV" --getss "$VOLUME") || error "Cannot block size read device \"$VOLUME\""
DEVSIZE=$("$BLOCKDEV" --getsize "$VOLUME")
test -n "$DEVSIZE" || error "Cannot read size of device \"$VOLUME\"."
SSSIZE=$("$BLOCKDEV" --getss "$VOLUME")
test -n "$SSSIZE" || error "Cannot read sector size of device \"$VOLUME\"."
DEVSIZE=$(($DEVSIZE * $SSSIZE))
fi
}
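# Illustration (added comment, not part of the patch): for a 1 GiB
# volume, blockdev --getsize64 reports 1073741824 bytes directly; the
# fallback multiplies --getsize (2097152 sectors here) by the --getss
# sector size (512 in this example) to reach the same byte count.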
@@ -273,14 +382,14 @@ round_up_block_size() {
}
temp_mount() {
dry "$MKDIR" -p -m 0000 "$TEMPDIR" || error "Failed to create $TEMPDIR"
dry "$MOUNT" "$VOLUME" "$TEMPDIR" || error "Failed to mount $TEMPDIR"
dry "$MKDIR" -p -m 0000 "$TEMPDIR" || error "Failed to create $TEMPDIR."
dry "$MOUNT" "$VOLUME" "$TEMPDIR" || error "Failed to mount $TEMPDIR."
}
temp_umount() {
dry "$UMOUNT" "$TEMPDIR" || error "Failed to umount \"$TEMPDIR\""
dry "$RMDIR" "${TEMPDIR}" || error "Failed to remove \"$TEMPDIR\""
dry "$RMDIR" "${TEMPDIR%%m}" || error "Failed to remove \"${TEMPDIR%%m}\""
dry "$UMOUNT" "$TEMPDIR" || error "Failed to umount \"$TEMPDIR\"."
dry "$RMDIR" "${TEMPDIR}" || error "Failed to remove \"$TEMPDIR\","
dry "$RMDIR" "${TEMPDIR%%m}" || error "Failed to remove \"${TEMPDIR%%m}\"."
}
yes_no() {
@@ -292,19 +401,24 @@ yes_no() {
while read -r -s -n 1 ANS ; do
case "$ANS" in
"y" | "Y" | "") echo y ; return 0 ;;
"n" | "N") echo n ; return 1 ;;
"y" | "Y" ) echo y ; return 0 ;;
"" ) if [ -t 1 ] ; then
echo y ; return 0
fi ;;
esac
done
echo n
return 1
}
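# Illustration (added comment, not part of the patch): a bare Enter now
# answers "y" only when file descriptor 1 is a terminal; when output is
# redirected, empty input no longer confirms and, once input is
# exhausted, the function falls through to the trailing echo n /
# return 1.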
try_umount() {
yes_no "Do you want to unmount \"$MOUNTED\"" && dry "$UMOUNT" "$MOUNTED" && return 0
error "Cannot proceed with mounted filesystem \"$MOUNTED\""
error "Cannot proceed with mounted filesystem \"$MOUNTED\"."
}
validate_parsing() {
test -n "$BLOCKSIZE" && test -n "$BLOCKCOUNT" || error "Cannot parse $1 output"
test -n "$BLOCKSIZE" && test -n "$BLOCKCOUNT" || error "Cannot parse $1 output."
}
####################################
# Resize ext2/ext3/ext4 filesystem
@@ -312,6 +426,9 @@ validate_parsing() {
# - unmounted for downsize
####################################
resize_ext() {
local IS_MOUNTED=0
detect_mounted && IS_MOUNTED=1
verbose "Parsing $TUNE_EXT -l \"$VOLUME\""
for i in $(LC_ALL=C "$TUNE_EXT" -l "$VOLUME"); do
case "$i" in
@@ -324,7 +441,7 @@ resize_ext() {
FSFORCE=$FORCE
if [ "$NEWBLOCKCOUNT" -lt "$BLOCKCOUNT" -o "$EXTOFF" -eq 1 ]; then
detect_mounted && verbose "$RESIZE_EXT needs unmounted filesystem" && try_umount
test $IS_MOUNTED -eq 1 && verbose "$RESIZE_EXT needs unmounted filesystem" && try_umount
REMOUNT=$MOUNTED
if test -n "$MOUNTED" ; then
# Forced fsck -f for umounted extX filesystem.
@@ -374,7 +491,7 @@ resize_xfs() {
MOUNTPOINT=$MOUNTED
if [ -z "$MOUNTED" ]; then
MOUNTPOINT=$TEMPDIR
temp_mount || error "Cannot mount Xfs filesystem"
temp_mount || error "Cannot mount Xfs filesystem."
fi
verbose "Parsing $TUNE_XFS \"$MOUNTPOINT\""
for i in $(LC_ALL=C "$TUNE_XFS" "$MOUNTPOINT"); do
@@ -392,7 +509,7 @@ resize_xfs() {
elif [ $NEWBLOCKCOUNT -eq $BLOCKCOUNT ]; then
verbose "Xfs filesystem already has the right size"
else
error "Xfs filesystem shrinking is unsupported"
error "Xfs filesystem shrinking is unsupported."
fi
}
@@ -412,8 +529,8 @@ resize() {
"ext3"|"ext2"|"ext4") resize_ext $NEWSIZE ;;
"reiserfs") resize_reiser $NEWSIZE ;;
"xfs") resize_xfs $NEWSIZE ;;
*) error "Filesystem \"$FSTYPE\" on device \"$VOLUME\" is not supported by this tool" ;;
esac || error "Resize $FSTYPE failed"
*) error "Filesystem \"$FSTYPE\" on device \"$VOLUME\" is not supported by this tool." ;;
esac || error "Resize $FSTYPE failed."
cleanup 0
}
@@ -494,12 +611,12 @@ for i in "$TUNE_EXT" "$RESIZE_EXT" "$TUNE_REISER" "$RESIZE_REISER" \
test -n "$i" || error "Required command definitions in the script are missing!"
done
"$LVM" version >"$NULL" 2>&1 || error "Could not run lvm binary \"$LVM\""
"$LVM" version >"$NULL" 2>&1 || error "Could not run lvm binary \"$LVM\"."
$("$READLINK" -e / >"$NULL" 2>&1) || READLINK_E="-f"
TEST64BIT=$(( 1000 * 1000000000000 ))
test "$TEST64BIT" -eq 1000000000000000 || error "Shell does not handle 64bit arithmetic"
$(echo Y | "$GREP" Y >"$NULL") || error "Grep does not work properly"
test $("$DATE" -u -d"Jan 01 00:00:01 1970" +%s) -eq 1 || error "Date translation does not work"
test "$TEST64BIT" -eq 1000000000000000 || error "Shell does not handle 64bit arithmetic."
$(echo Y | "$GREP" Y >"$NULL") || error "Grep does not work properly."
test $("$DATE" -u -d"Jan 01 00:00:01 1970" +%s) -eq 1 || error "Date translation does not work."
if [ "$#" -eq 0 ] ; then

View File

@@ -27,13 +27,14 @@
%enableif %{enable_lvmpolld} lvmpolld
%global enable_lvmlockd %(if echo %{services} | grep -q lvmlockd; then echo 1; else echo 0; fi)
%if %{enable_lvmlockd}
%enableif %{enable_lockd_dlm} lockd-dlm
%enableif %{enable_lockd_sanlock} lockd-sanlock
%enableif %{enable_lvmlockd_dlm} lvmlockd-dlm
%enableif %{enable_lvmlockd_sanlock} lvmlockd-sanlock
%endif
%enableif %{enable_python} python2-bindings
%enableif %{enable_python3} python3-bindings
%enableif %{enable_python} applib
%enableif %{enable_dbusd} dbus-service
%enableif %{enable_dbusd} notify-dbus
%enableif %{enable_dmfilemapd} dmfilemapd
%build

View File

@@ -275,10 +275,10 @@ This package contains shared lvm2 libraries for applications.
Summary: LVM locking daemon
Group: System Environment/Base
Requires: lvm2 = %{version}-%{release}
%if %{enable_lockd_dlm}
%if %{enable_lvmlockd_dlm}
Requires: dlm-lib >= %{dlm_version}
%endif
%if %{enable_lockd_sanlock}
%if %{enable_lvmlockd_sanlock}
Requires: sanlock-lib >= %{sanlock_version}
%endif
Requires(post): systemd-units

View File

@@ -16,8 +16,8 @@
%global enable_lvmetad 1
%global enable_lvmpolld 1
%global enable_dmfilemapd 0
#%global enable_lockd_dlm 0
#%global enable_lockd_sanlock 0
#%global enable_lvmlockd_dlm 0
#%global enable_lvmlockd_sanlock 0
%if %{enable_udev}
%service lvmetad 1
@@ -52,24 +52,24 @@
%if %{fedora} >= 24 || %{rhel} >= 7
%service lvmlockd 1
%global sanlock_version 3.3.0-1
%global enable_lockd_dlm 1
%global enable_lockd_sanlock 1
%global enable_lvmlockd_dlm 1
%global enable_lvmlockd_sanlock 1
%if %{rhel}
%ifarch i686 x86_64 s390x
%global buildreq_lockd_dlm dlm-devel >= %{dlm_version}
%global buildreq_lvmlockd_dlm dlm-devel >= %{dlm_version}
%else
%global enable_lockd_dlm 0
%global enable_lvmlockd_dlm 0
%endif
%ifarch x86_64 ppc64le ppc64 aarch64
%global buildreq_lockd_sanlock sanlock-devel >= %{sanlock_version}
%global buildreq_lvmlockd_sanlock sanlock-devel >= %{sanlock_version}
%else
%global enable_lockd_sanlock 0
%global enable_lvmlockd_sanlock 0
%endif
%endif
%else
%if %{fedora} >= 22
%service lvmlockd 1
%global enable_lockd_dlm 1
%global enable_lvmlockd_dlm 1
%endif
%endif
@@ -192,8 +192,8 @@ BuildRequires: pkgconfig
%maybe BuildRequires: %{?buildreq_udev}
%maybe BuildRequires: %{?buildreq_cluster}
%maybe BuildRequires: %{?buildreq_lockd_dlm}
%maybe BuildRequires: %{?buildreq_lockd_sanlock}
%maybe BuildRequires: %{?buildreq_lvmlockd_dlm}
%maybe BuildRequires: %{?buildreq_lvmlockd_sanlock}
%maybe BuildRequires: %{?buildreq_python2_devel}
%maybe BuildRequires: %{?buildreq_python3_devel}
%maybe BuildRequires: %{?buildreq_python_setuptools}

View File

@@ -18,6 +18,8 @@ import pyudev
from testlib import *
import testlib
from subprocess import Popen, PIPE
from glob import glob
import os
g_tmo = 0
@@ -68,6 +70,46 @@ def lv_n(suffix=None):
return g_prefix + rs(8, s)
def _is_testsuite_pv(pv_name):
return g_prefix != "" and pv_name[-1].isdigit() and pv_name[:-1].endswith(g_prefix + "pv")
def is_nested_pv(pv_name):
return pv_name.count('/') == 3 and not _is_testsuite_pv(pv_name)
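# For illustration (added comment, not in the patch): a PV that is really
# an LV has a name like '/dev/some_vg/some_lv' with three '/' characters,
# while a plain device such as '/dev/sda1' has only two, so only the
# former counts as nested (test-suite PVs excluded).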
def _root_pv_name(res, pv_name):
if not is_nested_pv(pv_name):
return pv_name
vg_name = pv_name.split('/')[2]
for v in res[VG_INT]:
if v.Vg.Name == vg_name:
pv = ClientProxy(bus, v.Vg.Pvs[0], interfaces=(PV_INT, ))
return _root_pv_name(res, pv.Pv.Name)
def _prune(res, pv_filter):
if pv_filter:
pv_lookup = {}
pv_list = []
for p in res[PV_INT]:
if _root_pv_name(res, p.Pv.Name) in pv_filter:
pv_list.append(p)
pv_lookup[p.object_path] = p
res[PV_INT] = pv_list
vg_list = []
for v in res[VG_INT]:
# Only need to validate one of the PVs is in the selection set
if v.Vg.Pvs[0] in pv_lookup:
vg_list.append(v)
res[VG_INT] = vg_list
return res
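# Example of the intent (added comment, not in the patch): with
# pv_filter == ['/dev/sdb1'], a nested PV sitting on an LV of a VG whose
# first PV is /dev/sdb1 resolves back to /dev/sdb1 via _root_pv_name and
# is kept, as is that VG; PVs and VGs rooted on other disks are dropped.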
def get_objects():
rc = {
MANAGER_INT: [], PV_INT: [], VG_INT: [], LV_INT: [],
@@ -84,16 +126,12 @@ def get_objects():
for object_path, v in objects.items():
proxy = ClientProxy(bus, object_path, v)
for interface, prop in v.items():
if interface == PV_INT:
# If we have a list of PVs to use, lets only use those in
# the list
# noinspection PyUnresolvedReferences
if pv_device_list and not (proxy.Pv.Name in pv_device_list):
continue
for interface in v.keys():
rc[interface].append(proxy)
return rc, bus
# At this point we have a full population of everything, we now need to
# prune the PV list and the VG list if we are using a sub selection
return _prune(rc, pv_device_list), bus
def set_execution(lvmshell, test_result):
@@ -151,41 +189,63 @@ class TestDbusService(unittest.TestCase):
std_err_print('Expecting a manager object!')
sys.exit(1)
# We will skip the vg device number check if the test user
# has specified a PV list
if pv_device_list is None:
if len(self.objs[VG_INT]) != 0:
std_err_print('Expecting no VGs to exist!')
sys.exit(1)
if len(self.objs[VG_INT]) != 0:
std_err_print('Expecting no VGs to exist!')
sys.exit(1)
self.pvs = []
for p in self.objs[PV_INT]:
self.pvs.append(p.Pv.Name)
def _recurse_vg_delete(self, vg_proxy, pv_proxy, nested_pv_hash):
for pv_device_name, t in nested_pv_hash.items():
vg_name = str(vg_proxy.Vg.Name)
if vg_name in pv_device_name:
self._recurse_vg_delete(t[0], t[1], nested_pv_hash)
break
vg_proxy.update()
self.handle_return(vg_proxy.Vg.Remove(dbus.Int32(g_tmo), EOD))
if is_nested_pv(pv_proxy.Pv.Name):
rc = self._pv_remove(pv_proxy)
self.assertTrue(rc == '/')
def tearDown(self):
# If we get here it means we passed setUp, so lets remove anything
# and everything that remains, besides the PVs themselves
self.objs, self.bus = get_objects()
if pv_device_list is None:
for v in self.objs[VG_INT]:
self.handle_return(
v.Vg.Remove(
dbus.Int32(g_tmo),
EOD))
else:
for p in self.objs[PV_INT]:
# When we remove a VG for a PV it could ripple across multiple
# VGs, so update each PV while removing each VG, to ensure
# the properties are current and correct.
p.update()
# The self.objs[PV_INT] list only contains those which we should be
# mucking with, lets remove any embedded/nested PVs first, then proceed
# to walk the base PVs and remove the VGs
nested_pvs = {}
non_nested = []
for p in self.objs[PV_INT]:
if is_nested_pv(p.Pv.Name):
if p.Pv.Vg != '/':
v = ClientProxy(self.bus, p.Pv.Vg, interfaces=(VG_INT, ))
self.handle_return(
v.Vg.Remove(dbus.Int32(g_tmo), EOD))
v = ClientProxy(self.bus, p.Pv.Vg, interfaces=(VG_INT,))
nested_pvs[p.Pv.Name] = (v, p)
else:
# Nested PV with no VG, so just simply remove it!
self._pv_remove(p)
else:
non_nested.append(p)
for p in non_nested:
# When we remove a VG for a PV it could ripple across multiple
# PVs, so update each PV while removing each VG, to ensure
# the properties are current and correct.
p.update()
if p.Pv.Vg != '/':
v = ClientProxy(self.bus, p.Pv.Vg, interfaces=(VG_INT, ))
self._recurse_vg_delete(v, p, nested_pvs)
# Check to make sure the PVs we had to start exist, else re-create
# them
self.objs, self.bus = get_objects()
if len(self.pvs) != len(self.objs[PV_INT]):
for p in self.pvs:
found = False
@@ -661,9 +721,9 @@ class TestDbusService(unittest.TestCase):
LV_BASE_INT)
self._validate_lookup("%s/%s" % (vg.Name, lv_name), lv.object_path)
def _create_lv(self, thinpool=False, size=None, vg=None):
def _create_lv(self, thinpool=False, size=None, vg=None, suffix=None):
lv_name = lv_n()
lv_name = lv_n(suffix=suffix)
interfaces = list(LV_BASE_INT)
if thinpool:
@@ -1779,6 +1839,80 @@ class TestDbusService(unittest.TestCase):
cmd = ['pvcreate', target.Pv.Name]
self._verify_existence(cmd, cmd[0], target.Pv.Name)
def _create_nested(self, pv_object_path):
vg = self._vg_create([pv_object_path])
pv = ClientProxy(self.bus, pv_object_path, interfaces=(PV_INT,))
self.assertEqual(pv.Pv.Vg, vg.object_path)
self.assertIn(pv_object_path, vg.Vg.Pvs,
"Expecting PV object path in Vg.Pvs")
lv = self._create_lv(vg=vg.Vg, size=vg.Vg.FreeBytes,
suffix="_pv")
device_path = '/dev/%s/%s' % (vg.Vg.Name, lv.LvCommon.Name)
new_pv_object_path = self._pv_create(device_path)
vg.update()
self.assertEqual(lv.LvCommon.Vg, vg.object_path)
self.assertIn(lv.object_path, vg.Vg.Lvs,
"Expecting LV object path in Vg.Lvs")
new_pv_proxy = ClientProxy(self.bus,
new_pv_object_path,
interfaces=(PV_INT, ))
self.assertEqual(new_pv_proxy.Pv.Name, device_path)
return new_pv_object_path
def test_nesting(self):
# check to see if we handle an LV becoming a PV which has its own
# LV
pv_object_path = self.objs[PV_INT][0].object_path
for i in range(0, 5):
pv_object_path = self._create_nested(pv_object_path)
def test_pv_symlinks(self):
# Lets take one of our test PVs, pvremove it, find a symlink to it
# and re-create using the symlink to ensure we return an object
# path to it. Additionally, we will take the symlink and do a lookup
# (Manager.LookUpByLvmId) using it and the original device path to
# ensure that we can find the PV.
symlink = None
pv = self.objs[PV_INT][0]
pv_device_path = pv.Pv.Name
self._pv_remove(pv)
# Make sure we no longer find the pv
rc = self._lookup(pv_device_path)
self.assertEqual(rc, '/')
# Lets locate a symlink for it
devices = glob('/dev/disk/*/*')
for d in devices:
if pv_device_path == os.path.realpath(d):
symlink = d
break
self.assertIsNotNone(symlink, "We expected to find at least 1 symlink!")
# Make sure symlink look up fails too
rc = self._lookup(symlink)
self.assertEqual(rc, '/')
pv_object_path = self._pv_create(symlink)
self.assertNotEqual(pv_object_path, '/')
pv_proxy = ClientProxy(self.bus, pv_object_path, interfaces=(PV_INT, ))
self.assertEqual(pv_proxy.Pv.Name, pv_device_path)
# Lets check symlink lookup
self.assertEqual(pv_object_path, self._lookup(symlink))
self.assertEqual(pv_object_path, self._lookup(pv_device_path))
class AggregateResults(object):

View File

@@ -1127,6 +1127,7 @@ activation/snapshot_autoextend_threshold = 50
activation/udev_rules = 1
activation/udev_sync = 1
activation/verify_udev_operations = $LVM_VERIFY_UDEV
activation/raid_region_size = 512
allocation/wipe_signatures_when_zeroing_new_lvs = 0
backup/archive = 0
backup/backup = 0
@@ -1156,7 +1157,6 @@ global/thin_repair_executable = "$LVM_TEST_THIN_REPAIR_CMD"
global/use_lvmetad = $LVM_TEST_LVMETAD
global/use_lvmpolld = $LVM_TEST_LVMPOLLD
global/use_lvmlockd = $LVM_TEST_LVMLOCKD
global/fsadm_executable = "$TESTDIR/lib/fsadm"
log/activation = 1
log/file = "$TESTDIR/debug.log"
log/indent = 1
@@ -1165,6 +1165,14 @@ log/overwrite = 1
log/syslog = 0
log/verbose = 0
EOF
# For 'rpm' builds use system installed binaries.
# For test suite run use binaries from builddir.
test -z "${abs_top_builddir+varset}" || {
cat >> "$config_values" <<-EOF
dmeventd/executable = "$abs_top_builddir/test/lib/dmeventd"
global/fsadm_executable = "$abs_top_builddir/test/lib/fsadm"
EOF
}
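# (Added note, not part of the patch: "${abs_top_builddir+varset}"
# expands to "varset" only when abs_top_builddir is set, so the
# overrides above are appended only for in-tree test runs; rpm builds
# keep the system dmeventd and fsadm binaries.)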
}
local v
@@ -1482,6 +1490,11 @@ have_readline() {
echo version | lvm &>/dev/null
}
have_multi_core() {
which nproc &>/dev/null || return 0
[ $(nproc) -ne 1 ]
}
dmsetup_wrapped() {
udev_wait
dmsetup "$@"

View File

@@ -58,10 +58,10 @@ vgremove -f $vg
# missing params
not pvresize
# negative size
not pvresize --setphysicalvolumesize -10M "$dev1"
not pvresize --setphysicalvolumesize -10M -y "$dev1"
# not existing device
not pvresize --setphysicalvolumesize 10M "$dev7"
pvresize --setphysicalvolumesize 10M "$dev1"
not pvresize --setphysicalvolumesize 10M -y "$dev7"
pvresize --setphysicalvolumesize 10M -y "$dev1"
pvresize "$dev1"

View File

@@ -45,11 +45,6 @@ sleep 7
not pgrep dmeventd
rm LOCAL_DMEVENTD
# set dmeventd path
if test -n "$abs_top_builddir"; then
aux lvmconf "dmeventd/executable=\"$abs_top_builddir/test/lib/dmeventd\""
fi
lvchange --monitor y --verbose $vg/3way 2>&1 | tee lvchange.out
pgrep -o dmeventd >LOCAL_DMEVENTD
not grep 'already monitored' lvchange.out

Some files were not shown because too many files have changed in this diff Show More