Mirror of git://sourceware.org/git/lvm2.git

628 Commits

Author SHA1 Message Date
Marian Csontos
05716c2d8a clvmd: Fix stack overflow on 64 bit ARM
Seems the amount of data allocated on the stack depends on the page size.
As the page size on aarch64 is 64KiB, writing to memory allocated by
alloca results in a stack overflow, since at the time of allocation there are
already 2 pages allocated. Clearly 128KiB is not sufficient and at least
3 pages are needed.
2014-09-16 17:34:32 +02:00
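
A minimal sketch of the sizing idea described above (not the actual clvmd fix): derive the buffer size from the runtime page size instead of a hard-coded 128KiB, and take such large buffers from the heap rather than alloca() so they stay off the limited thread stack. The function name is illustrative only.

#include <stdlib.h>
#include <unistd.h>

/* Illustrative only: size the buffer as at least three pages,
 * whatever the page size is (64KiB on aarch64, 4KiB on x86_64). */
static void *alloc_page_sized_buffer(size_t *len)
{
        long pagesize = sysconf(_SC_PAGESIZE);

        if (pagesize <= 0)
                pagesize = 4096;        /* conservative fallback */

        *len = 3 * (size_t) pagesize;
        return malloc(*len);            /* heap, not alloca(): keeps the stack small */
}
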
Zdenek Kabelac
4a853361b0 vgchange: disable cluster convert for active LVs
While we could probably reacquire some type of lock when
going from a non-clustered to a clustered VG, we don't have any
simple way back to drop the lock and keep the LV active.

For now keep it safe and prohibit the conversion when an LV
is active in the VG.
2014-09-16 11:42:41 +02:00
Zdenek Kabelac
1ce21c19d5 va_list: properly pass va_list through functions
Code should not just pass a va_list argument through a function,
as arguments can be passed in many strange ways.
Use va_copy().

For details see e.g.:

http://julipedia.meroh.net/2011/09/using-vacopy-to-safely-pass-ap.html
2014-09-16 11:42:40 +02:00
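
An illustrative, self-contained example of the va_copy() rule from the commit above (not lvm2 code): a function that hands its va_list to two vararg consumers must give each one its own copy.

#include <stdarg.h>
#include <stdio.h>

/* Write the same formatted message to two streams.  Reusing 'ap' for
 * the second vfprintf() would be undefined behaviour; va_copy() gives
 * the second consumer an independent list. */
static void log_to_both(FILE *f1, FILE *f2, const char *fmt, ...)
{
        va_list ap, ap2;

        va_start(ap, fmt);
        va_copy(ap2, ap);

        vfprintf(f1, fmt, ap);
        vfprintf(f2, fmt, ap2);

        va_end(ap2);
        va_end(ap);
}

int main(void)
{
        log_to_both(stdout, stderr, "device %s has %d segments\n", "vg-lv", 3);
        return 0;
}
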
Alasdair G Kergon
b9c16b7506 devices: Detect rotational devices.
Add dev_is_rotational() for future use by allocation code.
2014-09-16 00:44:25 +01:00
Alasdair G Kergon
979be63f25 mirrors: Fix checks for mirror/raid/pvmove LVs.
Try to enforce consistent macro usage along these lines:

lv_is_mirror - mirror that uses the original dm-raid1 implementation
               (segment type "mirror")
lv_is_mirror_type - also includes internal mirror image and log LVs

lv_is_raid - raid volume that uses the new dm-raid implementation
             (segment type "raid")
lv_is_raid_type - also includes internal raid image / log / metadata LVs

lv_is_mirrored - LV is mirrored using either kernel implementation
                 (excludes non-mirror modes like raid5 etc.)

lv_is_pvmove - internal pvmove volume
2014-09-16 00:13:46 +01:00
Liuhua Wang
829e5a4037 cmirror: fix endian issues on s390
Cmirrord has endianness bugs which cause lvcreate of a mirrored LV to fail
on s390.
- data_size is uint32 and should not be converted with xlate64, which
  leaves data_size 0 after the xlate.
- request_type and data_size are still used locally (v5_data_switch),
  so they should be converted later.  If request_type is xlated too early, it
  causes the request_type checks to go wrong; if data_size is xlated too early, it
  causes a coredump in the DM_ULOG_CLEAR_REGION case.
- when receiving a packet in clog_request_from_network, vp[0] will always
  be little-endian.  We can use xlate64(vp[0]) == vp[0] to decide whether
  the local node is little-endian or not.

Signed-off-by: Lidong Zhong <lzhong@suse.com> & Liuhua Wang <lwang@suse.com>
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
2014-09-15 16:08:35 -05:00
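
A hedged, self-contained sketch of the detection trick mentioned above; glibc's htole64() stands in here for lvm2's xlate64() (both are no-ops on a little-endian host and byte-swap on a big-endian one), and the value is assumed not to be byte-swap invariant.

#include <stdint.h>
#include <endian.h>

/* vp[0] arrives in little-endian byte order on the wire; if converting
 * it to little-endian changes nothing, the local node is little-endian. */
static int local_node_is_little_endian(uint64_t wire_le_value)
{
        return htole64(wire_le_value) == wire_le_value;
}
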
Alasdair G Kergon
e9216eedfe cleanup: fix last commit 2014-09-15 22:04:14 +01:00
Alasdair G Kergon
2360ce3551 cleanup: Use lv_is_ macros.
Use lv_is_* macros throughout the code base, introducing
lv_is_pvmove, lv_is_locked, lv_is_converting and lv_is_merging.

lv_is_mirror_type no longer includes pvmove.
2014-09-15 21:33:53 +01:00
Zdenek Kabelac
10a448eb2f tests: update lv_no_exists
On successful exit path remove debug.log file.
2014-09-15 13:51:19 +02:00
Zdenek Kabelac
f435bef957 test: test there is no leak of LV on error path 2014-09-15 13:51:19 +02:00
Zdenek Kabelac
75a5de1462 thin: check for active lv
Before calling deactivate, check the lv is actually active,
as we may reach this 'bad' error path with pool_lv inactive.
2014-09-15 13:51:19 +02:00
Petr Rockai
ef6508e9a4 WHATS_NEW for filter-related changes 2014-09-13 17:34:13 +02:00
Peter Rajnoha
30e0c0d863 libdm: finish the comment 2014-09-12 15:35:57 +02:00
Peter Rajnoha
5895657b59 libdm: fix dm_is_dm_major to not issue error about missing /proc lines for dm module.
This is probably a better approach than 3880ca5eca.

If the dm module is not loaded during the dm_is_dm_major call, there are no
lines for dm in /proc/devices, of course. Normally, dm_is_dm_major
is called to check existing devices, hence if the module is not loaded,
we can expect there's no DM device present at the same time, so we
can directly return 0 here (meaning the major number being inspected
is not a DM device's).

See also https://bugzilla.redhat.com/show_bug.cgi?id=1059711.
2014-09-12 15:28:51 +02:00
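
A rough, self-contained sketch of the behaviour described above (not the libdm code, function name is illustrative): look the major up in /proc/devices and quietly return 0 when no device-mapper line exists because dm-mod is not loaded.

#include <stdio.h>

static int is_dm_major_sketch(unsigned major)
{
        FILE *f;
        char line[256];
        unsigned m;
        int found = 0;

        if (!(f = fopen("/proc/devices", "r")))
                return 0;

        /* With dm-mod unloaded there is simply no "device-mapper" line,
         * so the loop ends with found == 0 and no error is reported. */
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "%u device-mapper", &m) == 1 && m == major) {
                        found = 1;
                        break;
                }

        fclose(f);
        return found;
}
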
Peter Rajnoha
25ae9383bb revert: commit 3880ca5eca
There's a better solution to this...
2014-09-12 15:28:51 +02:00
Zdenek Kabelac
ae08a3a294 cleanup: skip unused assign
Reset of tmp_names is only needed in else{} path.
2014-09-12 13:51:31 +02:00
Zdenek Kabelac
07b3e6cd74 cleanup: avoid strlen() we know max size
Just use max NAME_LEN size buffer and copy the name.
2014-09-12 13:51:31 +02:00
Zdenek Kabelac
ab7977de7b cleanup: simplify _extract_image_components
Reorder test - first check for writable flag and then allocate.
2014-09-12 13:51:31 +02:00
Zdenek Kabelac
6898131091 cleanup: missing error message 2014-09-12 13:51:31 +02:00
Zdenek Kabelac
3e57143abd cleanup: better error messages 2014-09-12 13:51:30 +02:00
Zdenek Kabelac
08914ed7c1 raid: destroy allocation handle on error path
Don't leak ah memory pool on error path.
2014-09-12 13:51:30 +02:00
Zdenek Kabelac
76c3c94bd2 cleanup: update _alloc_image_component function
Return allocated volume directly instead of 1/0.
2014-09-12 13:51:30 +02:00
Zdenek Kabelac
126463ad1f cleanup: plain code reindent
Just simple reindent and brace changes.
2014-09-12 13:51:30 +02:00
Zdenek Kabelac
ad376e9e00 debug: add missing stack trace on error path 2014-09-12 13:51:29 +02:00
Zdenek Kabelac
c10c16cc35 raid: use _generate_raid_name
Use new function to get implicit name validation
(so we do not exit with internal error on metadata validation).
2014-09-12 13:51:29 +02:00
Zdenek Kabelac
2db0312455 raid: add function for name creation
Add a function for construction and validation of a raid subvolume
name with a given suffix.

TODO: check if reusable for mirrors as well.
2014-09-12 13:51:29 +02:00
Zdenek Kabelac
40b7b107b1 raid: check result of get_segtype_from_string
An error here is highly unexpected for these types, but
stay consistent with the rest of the code and don't use an unchecked value.
2014-09-12 13:45:50 +02:00
Zdenek Kabelac
08bde75093 raid: add missing archive call
Before starting to update raid metadata, archive existing unmodified one.
2014-09-12 13:45:49 +02:00
Zdenek Kabelac
569184a3bb raid: add missing vg_revert
After failing vg_write() and suspend_lv() there was a missing vg_revert() call.
2014-09-12 13:45:14 +02:00
Zdenek Kabelac
dd1fa0e808 raid: add missing backups
Add backup() calls that were missing after successful update
of metadata.
2014-09-12 13:42:57 +02:00
Zdenek Kabelac
15ba2afdc2 allocation: use vg memory pool
Looks like a forgotten memory allocation related to the VG used the cmd mem pool.
2014-09-12 13:39:58 +02:00
Peter Rajnoha
3880ca5eca libdm: use dm-mod autoloading during dm_is_dm_major call if needed
For dm_is_dm_major to determine whether the major number given as
an argument belongs to a DM device, libdm code needs to know what
the actual DM major is to do the comparison.

It may happen that the dm-mod module is not loaded during this
call, so for completeness let's try our best before we start
giving various errors - we can still make use of dm-mod autoloading,
though only since kernel 2.6.36 where this feature was introduced.
2014-09-12 12:49:37 +02:00
Peter Rajnoha
f0cafc9281 conf: add allocation/physical_extent_size config option for default PE size of VGs.
Removes the need to use "vgcreate -s <desired PE size>" all the
time just to override the hardcoded default, which is 4096KiB.
2014-09-12 10:09:21 +02:00
Peter Rajnoha
80ac8f37d6 filters: fix incorrect filter indexing in composite filter array
Caused by recent changes - a7be3b12df.
If the global filter was not defined, then the part of the code
creating the composite filter (the cmd->lvmetad_filter) incorrectly
increased the index value even though this global filter was not created
as part of the composite filter. This left a gap with a "NULL"
value in the composite filter array, which caused the rest
of the filters after the gap to be ignored and also caused a mem
leak when destroying the composite filter.
2014-09-11 09:30:03 +02:00
Zdenek Kabelac
4748f4a9e4 tests: test for rename of snapshot 2014-09-10 22:59:13 +01:00
Petr Rockai
671d0ea1b1 lvmetad: Differentiate between filtered and truly missing devices.
We used to print an error message whenever we tried to deal with devices that
lvmetad knew about but were rejected by a client-side filter. Instead, we now
check whether the device is actually absent or only filtered out and only print
a warning in the latter case.
2014-09-10 22:58:22 +01:00
Petr Rockai
5f9b30d178 test: Add a test for MD component detection in pvscan --cache. 2014-09-10 22:58:12 +01:00
Petr Rockai
a7be3b12df lvmetad: Re-organise filters to properly avoid scans of component devices.
If a PV label is exposed both through a composite device (MD for example) and
through its component devices, we always want the PV that lvmetad sees to be the
composite, since this is what all LVM commands (including activation) will then
use. If pvscan --cache is triggered for multiple clones of the same PV, the last
to finish wins. This patch basically re-arranges the filters so that
component-device filters are part of the global_filter chain, not of the
client-side filter chain. This has a subtle effect on filter evaluation order,
but should not alter visible semantics in the non-lvmetad case.
2014-09-10 22:58:02 +01:00
Petr Rockai
1f0c4c811d dev-cache: Filter wipe does not guarantee a full /dev scan.
The code in dev_iter_create assumes that if a filter can be wiped, doing so will
always trigger a call to _full_scan. This is not true for composite filters
though, since they can always be wiped in principle, but there is no way to know
whether a component filter inside actually triggers the scan.
2014-09-10 22:57:49 +01:00
Zdenek Kabelac
47ff145e08 debug: turn message into debug
This message should be printed only for activation commands;
however, since the handling of this flag is not correct
(rhbz 1140029) and will require further changes,
just make a minor change for now and switch the message to log_debug
(so it's not printed e.g. with every 'lvs -v').
2014-09-10 10:10:13 +02:00
Zdenek Kabelac
55aa3cc813 tests: test for rename of snapshot 2014-09-09 20:17:47 +02:00
Zdenek Kabelac
a86d9a3b30 lv_rename: actual fix for snapshot
Due to a rebasing mistake it was eliminated from the previous patch set.
2014-09-09 20:15:51 +02:00
Zdenek Kabelac
c710f02e01 lv_update_and_reload: replace code sequence
Use lv_update_and_reload() and lv_update_and_reload_origin()
to handle write/suspend/commit/resume sequence.

In a few places this properly handles vg_revert() after a suspend failure,
and also ensures there is a metadata backup after a successful vg_commit().
2014-09-09 19:20:09 +02:00
Zdenek Kabelac
e4e50863a2 lvconvert: use lv_update_and_reload
Use lib function.
2014-09-09 19:15:26 +02:00
Zdenek Kabelac
aee8611af5 lv_manip: remove vg_revert
vg_commit is supposed to have implicit revert handling.
(however as of now it needs fixes).
2014-09-09 19:15:26 +02:00
Zdenek Kabelac
413fc9d3e6 lv_rename: fix snapshot rename
Fix the rename operation for a snapshot (cow) LV.
Only the snapshot's origin holds the lock, and by mistake suspend
and resume were called for the snapshot LV.
This further made volumes unusable in a cluster.

So instead of suspending and resuming a list of LVs,
we need to just suspend and resume the origin.

As the sequence write/suspend/commit/resume
is widely used in the lvm2 code base - move it to a
new lv_update_and_reload function.
2014-09-09 19:15:24 +02:00
Zdenek Kabelac
319f67b1ab cleanup: add stacktrace for error path 2014-09-08 22:36:42 +02:00
Peter Rajnoha
c774d9a3f3 so: make sure shared libs are built with RELRO option
In addition to using RELRO for daemons, use this option for shared
libraries. See also commit a65ab773b4.
2014-09-04 10:52:41 +02:00
Alasdair G Kergon
b25e0086b6 post-release 2014-09-01 01:53:44 +01:00
Alasdair G Kergon
fcb433abec pre-release 2014-09-01 01:51:47 +01:00
Zdenek Kabelac
fa1a0d170a cleanup: drop extra ()
Pure  '==' test doesn't need extra ().
2014-08-29 13:11:35 +02:00
Zdenek Kabelac
2a0ec5d396 cleanup: drop duplicate const
No need to specify 'const' twice in these cases.
2014-08-29 13:11:34 +02:00
Zdenek Kabelac
19375e4fca cleanup: assignment into ()
Put is_float=1 into () - so the intention is obvious.
Remove the unneeded extra check for *s != 0,
since it's already checked for either a digit or '.'.
2014-08-29 13:11:34 +02:00
Zdenek Kabelac
db77041d93 makefiles: include path missing
For dependency calculations, the path to blkid.h needs to be known.
2014-08-29 13:10:20 +02:00
Zdenek Kabelac
ca32920b16 WHATS_NEW 2014-08-29 13:10:20 +02:00
Zdenek Kabelac
3c8fa2aa01 clvmd: use correctly sized buffers for sscanf
sscanf needs 1 extra char for the '\0'.
2014-08-29 13:10:20 +02:00
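
A tiny illustrative example of the rule (not the clvmd code): an sscanf %Ns conversion stores up to N characters plus a terminating '\0', so the buffer must be N+1 bytes.

#include <stdio.h>

#define NAME_LEN 128

int main(void)
{
        char name[NAME_LEN + 1];        /* +1 for the trailing '\0' */

        /* the field width (128) must not exceed NAME_LEN */
        if (sscanf("vg00-lvol0 rest", "%128s", name) == 1)
                printf("parsed: %s\n", name);

        return 0;
}
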
Zdenek Kabelac
91a453de05 WHATS_NEW_DM 2014-08-29 13:10:19 +02:00
Zdenek Kabelac
93e9b3a1d1 libdm: revert incorrect path length size for sscanf
Commit 94786a3bbf introduced
another bug - since sscanf needs 1 extra byte for the '\0'.

Since there is no easy way to do a macro evaluation of (PATH_MAX-1)
and string concatenation of this number to get the resulting (%4095s) - let's
go with the easiest path and restore the extra byte for '\0'.

The other option would be to prepare the sscanf parsing string at runtime.

But let's resolve that when we look at PATH_MAX handling later...
2014-08-29 13:10:18 +02:00
Alasdair G Kergon
2faf416e0e lvextend: Reinstate --nosync logic for mirrors.
Reinstate the logic for syncing extensions of mirrors created with
--nosync.  (Inadvertently disabled by the approximate allocation
changes.)
2014-08-28 00:40:09 +01:00
Zdenek Kabelac
3003a9a7be WHATS_NEW 2014-08-27 16:52:32 +02:00
Zdenek Kabelac
22bfac5dc2 cache: fix allocation size
Commit 0b3d0e79f6 caused a regression
in the allocation of cache pools. This patch restores the correct size
for allocation.
2014-08-27 16:47:14 +02:00
Jonathan Brassow
8b9eb95ea9 cache: Clean-up error message.
It is not an internal error message to report to the user that they
cannot create a cache LV on top of a cache LV.  It is simply not
supported yet.
2014-08-24 19:44:37 -05:00
Alasdair G Kergon
dd9700f192 post-release 2014-08-26 16:41:18 +01:00
Alasdair G Kergon
8b8d21f873 pre-release 2014-08-26 16:34:14 +01:00
Peter Rajnoha
50babdf123 revert: commit 8d00499167
Let's test this more...
2014-08-26 17:07:37 +02:00
Zdenek Kabelac
70e998754e tests: thin and volume_list testing 2014-08-26 14:13:07 +02:00
Zdenek Kabelac
c37ca279e3 tests: fix volume list test
Proper use of \" escaping and shell vars.
2014-08-26 14:13:07 +02:00
Zdenek Kabelac
25fe716b12 cleanup: indent and stacktrace
Add missing stacktrace on error path
and newline indent.
2014-08-26 14:13:07 +02:00
Zdenek Kabelac
24001a08ab cleanup: check pv_count is not 0
Since we already detect that writemostly_ARG is non-zero,
make it obvious that pv_count will also be non-zero in this case.
2014-08-26 14:13:06 +02:00
Zdenek Kabelac
3b5afac9b4 cleanup: use unsigned 1bit elements
Avoid using signed 'int' type for 1 bit size.
2014-08-26 14:13:06 +02:00
Zdenek Kabelac
e5356eeba1 cleanup: never return uninitialized buffer
Coverity noticed this function may return an untouched buffer.
Although that can't really happen in the current state, anyway
ensure that on the error path the buffer will hold a zero-length string.
2014-08-26 14:13:06 +02:00
Zdenek Kabelac
8f518cf197 libdm: add check transaction_id after message
Add extra safety detection for the thin pool transaction id
and query the pool status after a confirmed message.

In case there is a mismatch, immediately abort further
processing.
2014-08-26 14:12:20 +02:00
Zdenek Kabelac
0794a10f91 thin: fix volume_list support
Fix a problem when the user sets volume_list and excludes thin pools
from activation. In this case the pool returned 'success' for the skipped activation.

We need to really check that the volume is actually active to properly
remove queued pool messages. Otherwise the lvm2 and kernel
metadata start to go out of sync, since lvm2 believes the messages were submitted.

Also add a better check for the threshold when creating a new thin volume.
In this case we require local activation of the thin pool so we are able
to check pool fullness.
2014-08-26 14:10:18 +02:00
Zdenek Kabelac
1ee5e18a7b thin: more forced ignoring of pool failure
Support also 'vgremove -ff' to properly remove even inactive/broken thin pools.
Update messages to use 'print_unless_silent' for the forced case.
2014-08-26 14:09:04 +02:00
Peter Rajnoha
f4e56b2829 cleanup: consolidate lv_layout and lv_role reporting
This patch makes the keyword combinations found in "lv_layout" and
"lv_role" much more understandable - there were some ambiguities
for some of the combinations which led to confusion before.

Now, the scheme used is:

LAYOUTS ("how the LV is laid out"):
===================================
[linear] (all segments have number of stripes = 1)

[striped] (all segments have number of stripes > 1)

[linear,striped] (mixed linear and striped)

raid (raid layout always reported together with raid level, raid layout == image + metadata LVs underneath that make up raid LV)
  [raid,raid1]
  [raid,raid10]
  [raid,raid4]
  [raid,raid5] (exact sublayout not specified during creation - default one used - raid5_ls)
    [raid,raid5,raid5_ls]
    [raid,raid5,raid5_rs]
    [raid,raid5,raid5_la]
    [raid,raid5,raid5_ra]
  [raid,raid6] (exact sublayout not specified during creation - default one used - raid6_zr)
    [raid,raid6,raid6_zr]
    [raid,raid6,raid6_nc]
    [raid,raid6,raid6_ns]

[mirror] (mirror layout == log + image LVs underneath that make up mirror LV)

thin (thin layout always reported together with sublayout)
  [thin,sparse] (thin layout == allocated out of thin pool)
  [thin,pool] (thin pool layout == data + metadata volumes underneath that make up thin pool LV, not supposed to be used for direct use!!!)

[cache] (cache layout == allocated out of cache pool in conjunction with cache origin)
  [cache,pool] (cache pool layout == data + metadata volumes underneath that make up cache pool LV, not supposed to be used for direct use!!!)

[virtual] (virtual layout == not hitting disk underneath, currently this layout denotes only 'zero' device used for origin,thickorigin role)

[unknown] (either error state or missing recognition for such layout)

ROLES ("what's the purpose or use of the LV - what is its role"):
=================================================================
- each LV has at least one of these two roles:

  [public] (public LV that users may use freely to write their data to)
  [private] (private LV that LVM maintains; not supposed to be directly used by the user to write data to)

- and then some special-purpose roles in addition to that:

  [origin,thickorigin] (origin for thick-style snapshot; "thick" as opposed to "thin")
  [origin,multithickorigin] (there are more than 2 thick-style snapshots for this origin)
  [origin,thinorigin] (origin for thin snapshot)
  [origin,multithinorigin] (there are more than 2 thin snapshots for this origin)
  [origin,extorigin] (external origin for thin snapshot)
  [origin,multiextorigin] (there are more than 2 thin snapshots using this external origin)
  [origin,cacheorigin] (cache origin)

  [snapshot,thicksnapshot] (thick-style snapshot; "thick" as opposed to "thin")
  [snapshot,thinsnapshot] (thin-style snapshot)

  [raid,metadata] (raid metadata LV)
  [raid,image] (raid image LV)

  [mirror,image] (mirror image LV)
  [mirror,log] (mirror log LV)
  [pvmove] (pvmove LV)

  [thin,pool,data] (thin pool data LV)
  [thin,pool,metadata] (thin pool metadata LV)

  [cache,pool,data] (cache pool data LV)
  [cache,pool,metadata] (cache pool metadata LV)

  [pool,spare] (pool spare LV - common role of LV that makes it used for both thin and cache repairs)
2014-08-25 16:14:40 +02:00
Peter Rajnoha
2d344c2e45 report: use dm_report_field_string_list_unordered for reporting lv_layout and lv_role fields
This makes it a bit more readable since we can report more general
layouts/roles first and keywords describing the LV more precisely
afterwards in the list.
2014-08-25 16:11:40 +02:00
Peter Rajnoha
02dc3c773e report: add dm_report_field_string_list_unsorted 2014-08-25 16:11:40 +02:00
Peter Rajnoha
993f8d1b3f refactor: rename 'lv_type' field to 'lv_role'
The 'lv_type' field name was a bit misleading. A better one is 'lv_role'
since this field describes what the actual use of the LV currently is -
its 'role'.
2014-08-25 16:11:40 +02:00
Alasdair G Kergon
66326e2fb8 autoreconf 2014-08-22 23:47:44 +01:00
Alasdair G Kergon
83b5cb3ed5 configure: Fix shared lvm1 typo.
via https://bugs.gentoo.org/520640
2014-08-22 23:42:55 +01:00
David Teigland
a67c484fac lvcreate: disallow snapshot of cache lv 2014-08-22 11:54:49 -05:00
Alasdair G Kergon
0b3d0e79f6 lvresize: Fix raid/mirror and %PE handling code.
Sort out the lvresize calculation code to handle size changes
specified as physical extents as well as logical extents
and to process mirror resizing and raid extensions correctly.

The 'approx alloc' option was masking the underlying problem.
2014-08-22 01:26:14 +01:00
Peter Rajnoha
7e208d6504 man: dmsetup: -n is shortcut for --notable, not --noheadings
The -n shortcut has been defined for --notable since the beginning, but the man page
got it wrong at some point...
2014-08-21 10:26:16 +02:00
Zdenek Kabelac
473a4a6548 tests: proper /dev access
Commit 5ebff6cc9f seemed to introduce
a new 'for' loop, but the mode is not yet used.
However, access to the /dev dir needs to go through $DM_DEV_DIR
and the whole path needs to be in "".
2014-08-20 14:37:41 +02:00
Peter Rajnoha
8d00499167 lvconvert: snapshot: allow using raid1 for snapshot and snapshot origin
When testing conversion sanity, we checked lv->status & MIRRORED
which encompasses both old mirrors and raid1 mirrors. But we need to
ban only the old mirrors here hence allow raid1 mirrors.
2014-08-20 10:12:09 +02:00
Jonathan Brassow
4f05e55f84 cleanup: Remove extra ';' from the end of a line. 2014-08-19 09:57:30 -05:00
Zdenek Kabelac
c5f2c541f6 cleanup: simpler struct init
Use a simpler struct initializer.
2014-08-19 14:33:07 +02:00
Zdenek Kabelac
24df01f735 cleanup: avoid double assign
Skip setting a value to a variable which is never
used and overwritten/set afterwards.
2014-08-19 14:33:06 +02:00
Zdenek Kabelac
94786a3bbf cleanup: use just PATH_MAX size
Avoid playing with +1.

The PATH_MAX code probably needs more thought anyway, since
there is no real MAX path in Linux - a user may easily create a path
with 64kB of chars - so a 4kB buffer is surely not enough for
such dirs.

Note:
http://insanecoding.blogspot.cz/2007/11/pathmax-simply-isnt.html
2014-08-19 14:33:06 +02:00
Zdenek Kabelac
5cd3b5c0cf cleanup: use _ prefix for static functions 2014-08-19 14:33:06 +02:00
Zdenek Kabelac
3e4a21427b cleanup: reindent and make obvious error path 2014-08-19 14:33:06 +02:00
Zdenek Kabelac
10e2370d2e libdm: check version prints error
Move the 'bad' label above the log_error, so the
error message is printed on the 'bad' path.
(And return 0 never happens without a log_error().)
2014-08-19 14:33:06 +02:00
Zdenek Kabelac
dec39b1a5f lv_manip: check for str_list_dup failure 2014-08-19 14:33:06 +02:00
Zdenek Kabelac
cdb16a6039 lvscan: check result of id_write_format
Currently this is rather impossible to happen - but check
the return value of id_write_format().
2014-08-19 14:33:06 +02:00
Zdenek Kabelac
ad9aee9af4 metadata: check result of refresh and rescan
Detect failure in case refresh_filters or lvmcache_label_scan fails.
2014-08-19 14:33:06 +02:00
Zdenek Kabelac
6d7f260f92 dmeventd: fix test for select return value
Do not call read() when select() returns -1 && EINTR.
Also check the return value from read() and
abort the write function when an unexpected error happens.
2014-08-19 14:33:06 +02:00
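
An illustrative, self-contained sketch of the pattern described above (not the dmeventd code): retry select() on EINTR, only call read() once a readable fd was actually reported, and let the caller treat unexpected errors as fatal.

#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/select.h>

static ssize_t read_when_ready(int fd, void *buf, size_t len)
{
        fd_set rfds;
        int r;

        do {
                FD_ZERO(&rfds);
                FD_SET(fd, &rfds);
                r = select(fd + 1, &rfds, NULL, NULL, NULL);
        } while (r < 0 && errno == EINTR);      /* interrupted: retry select() */

        if (r < 0)
                return -1;                      /* real select() failure */

        return read(fd, buf, len);              /* caller checks the result */
}
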
Peter Rajnoha
84860fd54f lv: remove lv_type_name fn
The lv_type_name function is a remnant of old code that reported
only a single string for the LV type. LV types are now reported
in a more extended way as a keyword list that describes the type
precisely (using the lv_layout_and_type fn).

The lv_type_name was used in some error messages to display the
type of the LV, so just reinstate the old messages, referencing
the type directly with a string - this is enough for error messages.
They don't need to display the LV type as precisely as it's done
in lvs output (which is optimized for selection anyway).
2014-08-19 14:16:39 +02:00
Peter Rajnoha
aec4d0c939 report: also display "mirror" keyword in lv_layout for mirrored mirror log and "cache" keyword in lv_layout for cached cache pool
$ lvs -a -o name,vg_name,attr,layout,type
  LV                    VG     Attr       Layout     Type
  lvol0                 vg     mwi-a-m--- mirror     mirror
  [lvol0_mimage_0]      vg     iwi-aom--- linear     image,mirror
  [lvol0_mimage_1]      vg     iwi-aom--- linear     image,mirror
  [lvol0_mlog]          vg     mwi-aom--- mirror     log,mirror
  [lvol0_mlog_mimage_0] vg     iwi-aom--- linear     image,mirror
  [lvol0_mlog_mimage_1] vg     iwi-aom--- linear     image,mirror

(lvol0_mlog properly displayed as "mirror" layout for mirrored mirror log)

$ lvs -a -o name,vg_name,attr,layout,type
  LV                  VG     Attr       Layout     Type
  lvol0               vg     Cwi---C--- cache,pool cache,pool
  [lvol0_cdata]       vg     Cwi------- linear     cache,data,pool
  [lvol0_cmeta]       vg     ewi------- linear     cache,metadata,pool
  [lvol1_pmspare]     vg     ewi------- linear     metadata,pool,spare
  lvol2               vg     Cwi---C--- cache,pool cache,pool
  [lvol2_cdata]       vg     Cwi---C--- cache      cache,data,pool
  [lvol2_cdata_corig] vg     owi---C--- linear     cache,origin
  [lvol2_cmeta]       vg     ewi------- linear     cache,metadata,pool

(lvol2_cdata properly displayed as cached cache pool data)
2014-08-19 13:58:32 +02:00
Peter Rajnoha
b806836164 report: also display "mirror" keyword in lv_type for pvmove LV and display "multiple" for external origin used for more than one thin snapshot
$ lvs -a -o name,vg_name,attr,layout,type
  LV        VG     Attr       Layout     Type
  lvol0     vg     -wI-a----- linear     linear
  [pvmove0] vg     p-C-aom--- mirror     mirror,pvmove

(added "mirror" for pvmove LV)

$ lvs -a -o name,vg_name,attr,layout,type
  LV              VG     Attr       Layout     Type
  lvol0           vg     ori------- linear     external,multiple,origin,thin
  [lvol1_pmspare] vg     ewi------- linear     metadata,pool,spare
  lvol2           vg     Vwi-a-tz-- thin       snapshot,thin
  lvol3           vg     Vwi-a-tz-- thin       snapshot,thin
  pool            vg     twi-a-tz-- pool,thin  pool,thin
  [pool_tdata]    vg     Twi-ao---- linear     data,pool,thin
  [pool_tmeta]    vg     ewi-ao---- linear     metadata,pool,thin

(added "multiple" for external origin used for more than one
thin snapshot - lvol0 in the example above)
2014-08-19 09:41:41 +02:00
Peter Rajnoha
90c47a4968 report: fix thin external snapshot identification for lv_layout and lv_type fields
Thin snapshots having external origins missed the "snapshot" keyword for
lv_type field. Also, thin external origins which are thin devices (from
another pool) were not recognized properly.

For example, external origin itself can be either non-thin volume (lvol0
below) or it can be a thin volume from another pool (lvol3 below):

Before this patch:

$ lvs -o name,vg_name,attr,pool_lv,origin,layout,type
  Internal error: Failed to properly detect layout and type for for LV vg/lvol3
  Internal error: Failed to properly detect layout and type for for LV vg/lvol3
  LV    VG     Attr       Pool  Origin Layout     Type
  lvol0 vg     ori-------              linear     external,origin,thin
  lvol2 vg     Vwi-a-tz-- pool  lvol0  thin       thin
  lvol3 vg     ori---tz-- pool         unknown    external,origin,thin,thin
  lvol4 vg     Vwi-a-tz-- pool1 lvol3  thin       thin
  pool  vg     twi-a-tz--              pool,thin  pool,thin
  pool1 vg     twi-a-tz--              pool,thin  pool,thin

- lvol2 as well as lvol4 have missing "snapshot" in type field
- lvol3 has unrecognized layout (should be "thin"), but has double
  "thin" in lv_type which is incorrect
- (also there's double "for" in the internal error message)

With this patch applied:

$ lvs -o name,vg_name,attr,pool_lv,origin,layout,type
  LV    VG     Attr       Pool  Origin Layout     Type
  lvol0 vg     ori-------              linear     external,origin,thin
  lvol2 vg     Vwi-a-tz-- pool  lvol0  thin       snapshot,thin
  lvol3 vg     ori---tz-- pool         thin       external,origin,thin
  lvol4 vg     Vwi-a-tz-- pool1 lvol3  thin       snapshot,thin
  pool  vg     twi-a-tz--              pool,thin  pool,thin
  pool1 vg     twi-a-tz--              pool,thin  pool,thin
2014-08-18 15:58:48 +02:00
Jonathan Brassow
4d45302e25 RAID: Fail RAID4/5/6 creation if PE size is less than STRIPE_SIZE_MIN
The maximum stripe size is equal to the volume group PE size.  If that
size falls below the STRIPE_SIZE_MIN, the creation of RAID 4/5/6 volumes
becomes impossible.  (The kernel will fail to load a RAID 4/5/6 mapping
table with a stripe size less than STRIPE_SIZE_MIN.)  So, we report an
error if it is attempted.

This is very rare because reducing the PE size down that far limits the
size of the PV below that of modern devices.
2014-08-15 21:15:34 -05:00
Alasdair G Kergon
42e07d2bce dmsetup: Support remove --deferred.
This patch adds a new flag --deferred to dmsetup remove. If this flag is
specified and the device is open, it is scheduled to be deleted on
close.

struct dm_info is extended.

The existing dm_task_get_info() is converted into a wrapper around the
new version dm_task_get_info_with_deferred_remove() so existing binaries
can still use the old smaller structure.

Recompiled code will pick up the new larger structure.

From: Mikulas Patocka <mpatocka@redhat.com>
2014-08-16 00:34:48 +01:00
Zdenek Kabelac
ec41bd1920 cleanup: use display_lvname
Show more complete LV names.
2014-08-15 15:53:17 +02:00
Zdenek Kabelac
c894c3c87f cleanup: gcc warn fix
Since gcc fails to see that the origin has already been set under the condition
above, just set origin again.
2014-08-15 15:53:17 +02:00
Peter Rajnoha
8af2309231 cleanup: gcc warning
One more:

metadata/thin_manip.c:503: warning: declaration of "snapshot_count" shadows a global declaration
2014-08-15 15:43:42 +02:00
Peter Rajnoha
8e449ebd63 cleanup: gcc warning
metadata/lv_manip.c:269: warning: declaration of "snapshot_count" shadows a global declaration

There's existing function called "snapshot_count" so rename the
variable to "snap_count".
2014-08-15 15:32:04 +02:00
Peter Rajnoha
d213758e57 man: missing (C)ache in target type lv_attr description 2014-08-15 15:21:15 +02:00
Zdenek Kabelac
9534c21ead cleanup: quiet gcc warn
gcc can't see that dev_get_primary_dev returns only 0, 1 or 2,
so ensure 'name' is always defined on the valid path.
2014-08-15 15:06:45 +02:00
Zdenek Kabelac
ff33d215ba cleanup: drop extra braces 2014-08-15 15:06:45 +02:00
Zdenek Kabelac
f183995f8e cleanup: just easier word wrapping 2014-08-15 15:06:44 +02:00
Zdenek Kabelac
1bde4c19e6 cleanup: drop unneeded inits 2014-08-15 15:06:44 +02:00
Zdenek Kabelac
5095a6517d cleanup: simpler struct initialization 2014-08-15 15:06:44 +02:00
Zdenek Kabelac
338b991e40 cleanup: move test for free arg
Move test for list of volumes into common place.
2014-08-15 15:06:44 +02:00
Zdenek Kabelac
6872adc0ff cleanup: postpone confirmation prompt for snapshot
Prompt the user for confirmation after more checks are done.
(So we avoid prompting and then failing after the prompt.)
2014-08-15 15:06:44 +02:00
Zdenek Kabelac
ba7796e055 man: add some more reserved names 2014-08-15 15:06:44 +02:00
Zdenek Kabelac
630fcab420 man: show lv name for lvs
Make it more obvious that either just vg name or lv name or path could
be passed as an argument.
2014-08-15 15:06:44 +02:00
Zdenek Kabelac
7f4b1e7411 toollib: print ignoring vorigin
When ignoring a 'listed' volume, print an info message.
(So the final command error message is a bit less confusing,
e.g. when the user tries to deactivate a virtual origin:

> lvchange -an vg/lvol2_vorigin
  Ignoring virtual origin logical volume vg/lvol2_vorigin.
  One or more specified logical volume(s) not found.
2014-08-15 15:06:44 +02:00
Zdenek Kabelac
10e3715564 lvconvert: show name of activated volume
Display the name of volume that needs to be activated for merging.
2014-08-15 15:06:44 +02:00
Peter Rajnoha
e8bbcda2a3 Add lv_layout_and_type fn, lv_layout and lv_type reporting fields.
The lv_layout and lv_type fields together help with LV identification.
We can do basic identification using the lv_attr field which provides
a very condensed view. In contrast to that, the new lv_layout and lv_type
fields provide more detailed information on the exact layout and type used
for LVs.

For top-level LVs which are pure types not combined with any
other LV types, the lv_layout value is equal to lv_type value.

For non-top-level LVs which may be combined with other types,
the lv_layout describes the underlying layout used, while the
lv_type describes the use/type/usage of the LV.

These two new fields are both string lists so selection (-S/--select)
criteria can be defined using the list operators easily:
  [] for strict matching
  {} for subset matching.

For example, let's consider this:

$ lvs -a -o name,vg_name,lv_attr,layout,type
  LV                    VG     Attr       Layout       Type
  [lvol1_pmspare]       vg     ewi------- linear       metadata,pool,spare
  pool                  vg     twi-a-tz-- pool,thin    pool,thin
  [pool_tdata]          vg     rwi-aor--- level10,raid data,pool,thin
  [pool_tdata_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_2] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rimage_3] vg     iwi-aor--- linear       image,raid
  [pool_tdata_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_2]  vg     ewi-aor--- linear       metadata,raid
  [pool_tdata_rmeta_3]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta]          vg     ewi-aor--- level1,raid  metadata,pool,thin
  [pool_tmeta_rimage_0] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rimage_1] vg     iwi-aor--- linear       image,raid
  [pool_tmeta_rmeta_0]  vg     ewi-aor--- linear       metadata,raid
  [pool_tmeta_rmeta_1]  vg     ewi-aor--- linear       metadata,raid
  thin_snap1            vg     Vwi---tz-k thin         snapshot,thin
  thin_snap2            vg     Vwi---tz-k thin         snapshot,thin
  thin_vol1             vg     Vwi-a-tz-- thin         thin
  thin_vol2             vg     Vwi-a-tz-- thin         multiple,origin,thin

Which is a situation with a thin pool, thin volumes and thin snapshots.
We can see that the internal 'pool_tdata' volume that makes up the thin pool
actually has a level10 raid layout and the internal 'pool_tmeta' has a
level1 raid layout. Also, we can see that 'thin_snap1' and 'thin_snap2'
are both thin snapshots while 'thin_vol1' is a thin origin (having
multiple snapshots).

Such reporting scheme provides much better base for selection criteria
in addition to providing more detailed information, for example:

$ lvs -a -o name,vg_name,lv_attr,layout,type -S 'type=metadata'
LV                   VG   Attr       Layout      Type
[lvol1_pmspare]      vg   ewi------- linear      metadata,pool,spare
[pool_tdata_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_1] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_2] vg   ewi-aor--- linear      metadata,raid
[pool_tdata_rmeta_3] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta]         vg   ewi-aor--- level1,raid metadata,pool,thin
[pool_tmeta_rmeta_0] vg   ewi-aor--- linear      metadata,raid
[pool_tmeta_rmeta_1] vg   ewi-aor--- linear      metadata,raid

(selected all LVs which are related to metadata of any type)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={metadata,thin}'
LV           VG   Attr       Layout      Type
[pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs which hold metadata related to thin)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'type={thin,snapshot}'
LV         VG   Attr       Layout     Type
thin_snap1 vg   Vwi---tz-k thin       snapshot,thin
thin_snap2 vg   Vwi---tz-k thin       snapshot,thin

(selected all LVs which are thin snapshots)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout=raid'
LV           VG   Attr       Layout       Type
[pool_tdata] vg   rwi-aor--- level10,raid data,pool,thin
[pool_tmeta] vg   ewi-aor--- level1,raid  metadata,pool,thin

(selected all LVs with raid layout, any raid layout)

lvs -a -o name,vg_name,lv_attr,layout,type -S 'layout={raid,level1}'
  LV           VG   Attr       Layout      Type
  [pool_tmeta] vg   ewi-aor--- level1,raid metadata,pool,thin

(selected all LVs with raid level1 layout exactly)

And so on...
2014-08-15 14:50:38 +02:00
Alasdair G Kergon
8a7682cbc9 libdm: Add DM_DEFERRED_REMOVE to dm-ioctl.h. 2014-08-15 13:45:55 +01:00
Peter Rajnoha
8740ecfa64 WHATS_NEW: previous commit 2014-08-15 13:31:21 +02:00
Peter Rajnoha
1cd622d98b report: lvs: properly display 'o' for volume type bit and 'C' for target type bit in lv_attr field for cache origin LVs
Before this patch:
LV                 VG     Attr
[cache_orig_corig] vg     -wi-ao----

With this patch applied:
LV                 VG     Attr
[cache_orig_corig] vg     owi-aoC---
2014-08-15 13:28:43 +02:00
Peter Rajnoha
8eba33510f cache+thin: add lv_is_{cache,thin}_origin fn to identify origin LVs 2014-08-15 13:28:43 +02:00
Peter Rajnoha
ec0d2f7aa4 refactor: add defines for raid segtypes
This will be reused later on in upcoming code...
2014-08-15 13:28:43 +02:00
Alasdair G Kergon
bf78e55ef3 pvcreate: Fix cache state with filters/sig wiping.
_pvcreate_check() has two missing requirements:
  After refreshing filters there must be a rescan.
    (Otherwise the persistent filter may remain empty.)
  After wiping a signature, the filters must be refreshed.
    (A device that was previously excluded by the filter due to
     its signature might now need to be included.)

If several devices are added at once, the repeated scanning isn't
strictly needed, but we can address that later as part of the command
processing restructuring (by grouping the devices).

Replace the new pvcreate code added by commit
54685c20fc "filters: fix regression caused
by commit e80884cd080cad7e10be4588e3493b9000649426"
with this change to _pvcreate_check().

The filter refresh problem dates back to commit
acb4b5e4de "Fix pvcreate device check."
2014-08-14 01:30:01 +01:00
Peter Rajnoha
20503ff067 tests: update report-select test for latest changes 2014-08-13 17:20:09 +02:00
Peter Rajnoha
fa793bed64 select: add support for selection to match string list subset, recognize { } operator
Using "[ ]" operator together with "&&" (or ",") inside causes the
string list to be matched if and only if all the items given match
the value reported and the number of items also match. This is
strict list matching and the original behaviour we already have.

In contrast to that, the new "{ }" operator together with "&&" inside
causes the string list to be matched if and only if all the items given
match the value reported but the number of items don't need to match.
So we can provide a subset in selection criteria and if the subset
is found, it matches.

For example:

$ lvs -o name,tags
  LV    LV Tags
  lvol0 a
  lvol1 a,b
  lvol2 b,c,x
  lvol3 a,b,y

$ lvs -o name,tags -S 'tags=[a,b]'
  LV    LV Tags
  lvol1 a,b

$ lvs -o name,tags -S 'tags={a,b}'
  LV    LV Tags
  lvol1 a,b
  lvol3 a,b,y

So in the example above the a,b is subset of a,b,y and therefore
it also matches.

Clearly, when using "||" (or "#") inside, the { } and [ ] is the
same:

$ lvs -o name,tags -S 'tags=[a#b]'
  LV    LV Tags
  lvol0 a
  lvol1 a,b
  lvol2 b,c,x
  lvol3 a,b,y

$ lvs -o name,tags -S 'tags={a#b}'
  LV    LV Tags
  lvol0 a
  lvol1 a,b
  lvol2 b,c,x
  lvol3 a,b,y

Also in addition to the above feature, fix list with single value
matching when using [ ]:

Before this patch:
$ lvs -o name,tags -S 'tags=[a]'
  LV    LV Tags
  lvol0 a
  lvol1 a,b
  lvol3 a,b,y

With this patch applied:
$ lvs -o name,tags -S 'tags=[a]'
  LV    LV Tags
  lvol0 a

In case neither [] nor {} is used, {} is assumed (the behaviour is not
changed here):

$ lvs -o name,tags -S 'tags=a'
  LV    LV Tags
  lvol0 a
  lvol1 a,b
  lvol3 a,b,y

So in new terms 'tags=a' is equal to 'tags={a}'.
2014-08-13 16:10:12 +02:00
Peter Rajnoha
6dd98c1fa8 select: fix string list selection to match whole words only but not prefixes of searched string
$ lvs -o name,tags vg/lvol0
  LV    LV Tags
  lvol0 a

Before this patch:

$ lvs -o name,tags vg/lvol0 -S 'tags=[a]'
  LV    LV Tags
  lvol0 a

$ lvs -o name,tags vg/lvol0 -S 'tags=[ab]'
  LV    LV Tags
  lvol0 a
(incorrect!)

$ lvs -o name,tags vg/lvol0 -S 'tags=[abc]'
  LV    LV Tags
  lvol0 a
(incorrect!)

With this patch applied:

$ lvs -o name,tags vg/lvol0 -S 'tags=[a]'
  LV    LV Tags
  lvol0 a

$ lvs -o name,tags vg/lvol0 -S 'tags=[ab]'
(no result - correct!)

$ lvs -o name,tags vg/lvol0 -S 'tags=[abc]'
(no result - correct!)
2014-08-13 16:04:02 +02:00
Peter Rajnoha
9738a02d3d filter-mpath: fix primary device lookup failure for partition when processing mpath filter
If using a persistent filter and we're refreshing filters (just like we
do for pvcreate now after commit 54685c20fc),
we can't rely on getting the primary device of the partition from the cache,
as such a device could already be filtered out by the persistent filter and we get
a device cache lookup failure for it.

For example:

$ lvm dumpconfig --type diff
devices {
	obtain_device_list_from_udev=0
}

$lsblk /dev/sda
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  128M  0 disk
`-sda1   8:1    0  127M  0 part

$cat /etc/lvm/cache/.cache | grep sda
		"/dev/sda1",

$pvcreate /dev/sda1
  dev_is_mpath: failed to get device for 8:1
  Physical volume "/dev/sda1" successfully created

The problematic part of the code called dev_cache_get_by_devt
to get the device for the device number supplied. Then the code
used dev_name(dev) to get the name, which is then used in the check
of whether there's any mpath on top of this dev...

This patch uses sysfs to get the base name for the partition
instead, hence avoiding the device cache, which is the correct
approach here.
2014-08-08 10:49:19 +02:00
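
A hedged, self-contained sketch of the sysfs approach described above (not the lvm2 implementation, function name illustrative), assuming the usual /sys/dev/block/<major>:<minor> symlink layout where a partition's link ends in .../<disk>/<partition>.

#include <stdio.h>
#include <limits.h>
#include <libgen.h>
#include <unistd.h>
#include <sys/types.h>

/* e.g. 8:1 -> ../../devices/.../block/sda/sda1 -> "sda" */
static int primary_dev_name(unsigned maj, unsigned min, char *buf, size_t len)
{
        char link[PATH_MAX], target[PATH_MAX];
        ssize_t n;

        snprintf(link, sizeof(link), "/sys/dev/block/%u:%u", maj, min);
        if ((n = readlink(link, target, sizeof(target) - 1)) < 0)
                return 0;
        target[n] = '\0';

        /* drop the partition component and keep its parent directory name */
        snprintf(buf, len, "%s", basename(dirname(target)));
        return 1;
}

int main(void)
{
        char name[64];

        if (primary_dev_name(8, 1, name, sizeof(name)))
                printf("primary device: %s\n", name);
        return 0;
}
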
Peter Rajnoha
c52c9a1e31 activation: if LV inactive and non-clustered, do not issue "Cannot deactivate" on -aln
The message "Cannot deactivate remotely exclusive device locally." makes
sense only for clustered LV. If the LV is non-clustered, then it's
always exclusive by definition and if it's already deactivated, this
message pops up inappropriately as those two conditions are met.

So issue the message only if the conditions are met AND we have clustered VG.
2014-08-07 16:44:09 +02:00
Peter Rajnoha
ea662ca060 pvmove: remove spurious "Skipping mirror LV" message on pvmove of clustered mirror
With cmirrord, we can do pvmove of clustered mirror. The code checking
suitability of LVs on the PV being moved issued a message if a mirror
LV was found and the VG was clustered. However, the actual pvmove did
work correctly.

The top-level mirror LV is actually skipped in the code since it's
always layered on top of internal LVs making up the mirror LV and for pvmove
we consider these internal devices only as they're actually layered on
top of concrete PVs then. But we don't need to issue any message here
about skipping the top-level mirror LV - it's misleading here.
2014-08-07 15:23:58 +02:00
Alasdair G Kergon
26885ea119 post-release 2014-08-05 02:12:20 +01:00
Alasdair G Kergon
9d4e1e51a9 pre-release 2014-08-05 02:07:35 +01:00
Alasdair G Kergon
e94442bffa report: Remove lv_volume_type field.
Like lv_target_type, this field needs reworking.
2014-08-05 02:04:16 +01:00
Petr Rockai
2dac5525fb lvscan: Fix possible gcc warnings in --cache implementation. 2014-08-04 17:36:12 +02:00
Petr Rockai
46262f163b WHATS_NEW: lvscan --cache SEGV fix 2014-08-04 17:05:08 +02:00
Petr Rockai
0208665209 lvscan: Make --cache impervious to already-missing devices. 2014-08-04 17:03:17 +02:00
Petr Rockai
03a88da868 test: Add a test for lvscan --cache. 2014-08-04 17:03:17 +02:00
Peter Rajnoha
54685c20fc filters: fix regression caused by commit e80884cd08
Commit e80884cd08 tried to dump filters
for them to be reevaluated when creating a PV to avoid overwriting
any existing signature that may have been created after last
scan/filtering.

However, we need to call refresh_filters instead of
persistent_filter->dump since dump requires proper rescanning to fill
up the persistent filter again. However, this is true only for pvcreate,
not for vgcreate with PV creation, where the scanning happens before
this PV creation and hence the next rescan (if not a full scan) does not
fill the persistent filter.

Also, move refresh_filters so that it's called sooner and only for
pvcreate, vgcreate already calls lvmcache_label_scan(cmd, 2) which
then calls refresh_filters itself, so no need to reevaluate this again.

This caused the persistent filter (/etc/lvm/cache/.cache file) to be
wrong and contain only the PV just being processed with
vgcreate <vg_name> <pv_name_to_create>.

This regression caused other block devices to be filtered out in case
the vgcreate with PV creation was used and then the persistent filter
is used by any other LVM command afterwards.
2014-08-01 11:39:53 +02:00
Alasdair G Kergon
c7b9f0ab42 lvresize: Allow approximation with +%FREE.
Make lvresize -l+%FREE support approximate allocation.

Move existing "Reducing/Extending' message to verbose level
and change it to say 'up to' if approximate allocation is being used.

Replace it with a new message that gives the actual old and new size or
says 'unchanged'.
2014-08-01 00:35:43 +01:00
Marian Csontos
0dc3684b87 test: Skip lvextend-thin when thin not available 2014-07-31 22:56:19 +02:00
Peter Rajnoha
ef85997980 metadata: remove spurious "Physical volume <dev_name> not found"
This is addendum to commit 2e82a070f3
which fixed these spurious messages that appeared after commit
651d5093ed ("avoid pv_read in
find_pv_by_name").

There was one more "not found" message issued in case the device
could not be found in device cache (commit 2e82a07 fixed this only
for PV lookup itself). But if we "allow_unformatted" for
find_pv_by_name, we should not issue this message even in case
the device can't be found in dev cache as we just need to know
whether there's a PV or not for the code to decide on next steps
and we don't want to issue any messages if either device itself
is not found or PV is not found.

For example, when we were creating a new PV (and so allow_unformatted = 1)
and the device had a signature on it which caused it to be filtered
by device filter (e.g. MD signature if md filtering is enabled),
or it was part of some other subsystem (e.g. multipath), this message
was issued on find_pv_by_name call which was misleading.

Also, remove misleading "stack" call in case find_pv_by_name
returns NULL in pvcreate_check - any error state is reported
later by pvcreate_check code so no need to "stack" here.

There's one more and proper check to issue "not found" message if
the device can't be found in device cache within pvcreate_check fn
so this situation is still covered properly later in the code.

Before this patch (/dev/sda contains MD signature and is therefore filtered):

$ pvcreate /dev/sda
  Physical volume /dev/sda not found
WARNING: linux_raid_member signature detected on /dev/sda at offset 4096. Wipe it? [y/n]:

With this patch applied:

$ pvcreate /dev/sda
WARNING: linux_raid_member signature detected on /dev/sda at offset 4096. Wipe it? [y/n]:

Non-existent devices are still caught properly:

$ pvcreate /dev/sdx
  Device /dev/sdx not found (or ignored by filtering).
2014-07-31 10:03:30 +02:00
Alasdair G Kergon
7cff640d9a activation: Fix upgrades using uuid suffixes.
2.02.106 added suffixes to some LV uuids in the kernel.

If any of these LVs is activated with 2.02.105 or earlier,
and then a later version is used, the LVs appear invisible and
activation commands fail.

The code now has to check the kernel for both old and new uuids.
2014-07-30 21:55:11 +01:00
Petr Rockai
c4484d9050 test: Add a test for lvextend -l+100%FREE of a striped thin pool. 2014-07-30 16:17:29 +02:00
Alasdair G Kergon
321bed7137 post-release 2014-07-23 16:23:52 +01:00
Alasdair G Kergon
52217f6ebd raid: Fix partial activation logic for non-raid. 2014-07-23 16:13:12 +01:00
Alasdair G Kergon
25fa725b05 pre-release 2014-07-23 16:05:22 +01:00
Zdenek Kabelac
8d63d94d85 tests: still unusable kernel 2014-07-23 00:29:32 +02:00
Zdenek Kabelac
22be7c4417 tests: support cluster run
needs exclusive activation
2014-07-23 00:25:49 +02:00
Zdenek Kabelac
3a8bb8d3a4 tests: use exclusive activation 2014-07-22 23:44:06 +02:00
Petr Rockai
ab1887fe47 man: Update the lvscan manpage with a section on --cache. 2014-07-22 22:48:41 +02:00
Petr Rockai
66686a5bc5 WHATS_NEW: lvscan --cache, dmeventd RAID + lvmetad 2014-07-22 22:48:41 +02:00
Petr Rockai
5dc6671bb5 dmeventd: Call lvscan --cache in the RAID plugin. 2014-07-22 22:48:21 +02:00
Petr Rockai
a9ea014e51 lvscan: Implement a --cache mode. 2014-07-22 22:48:21 +02:00
Zdenek Kabelac
653fd7bee3 tests: new lvconvert features 2014-07-22 22:41:41 +02:00
Zdenek Kabelac
ee11bb8416 tests: use full option name
Don't overuse shortcut support -
since poolmetadatasize was the only allowed option
it's been equivalent to poolmetadata
2014-07-22 22:41:41 +02:00
Zdenek Kabelac
b51f1b3df6 man: lvconvert poolmetadataspare
Add missing description for --poolmetadataspare option.
2014-07-22 22:41:41 +02:00
Zdenek Kabelac
d7d81e1157 cleanup: show better messages 2014-07-22 22:41:40 +02:00
Zdenek Kabelac
894eda4707 thin and cache: unify pool common code
Fix get_pool_params to only read params.
Add poolmetadataspare option to get_pool_params.
Move all profile code into update_pool_params.
Move recalculate code into pool_manip.c
2014-07-22 22:41:38 +02:00
Zdenek Kabelac
8c56c5fbac lvconvert: better testing order
Avoid duplicate tests through implicit calls - check directly
result of string assignment.
2014-07-22 22:38:59 +02:00
Zdenek Kabelac
ab7bcc26f6 tools: switch logic for new arg_ func
Revert logic and rename new arg_ functions to:

arg_from_list_is_set()
arg_outside_list_is_set()

When err_found is given, a log_error message is automatically
printed.
2014-07-22 22:38:59 +02:00
David Teigland
864ff3cb18 man: rework lvmcache to match lvmthin
Reorganize and rewrite parts to match lvmthin(7).
2014-07-22 15:12:02 -05:00
Alasdair G Kergon
50961f43d0 report: Remove lv_target_type field.
This field is too complicated to be useful on its own and either needs
redefining or splitting up into multiple fields.
2014-07-22 20:57:57 +01:00
Alasdair G Kergon
99e3c13012 raid: Moved degraded activation code to raid_manip.
Adjust some messages & fn names.
2014-07-22 20:50:29 +01:00
David Teigland
fe5b282a4a man: lvmcache updates
- use the lvconvert --cachepool syntax to match the lvmthin style
- rewrite removal information
- very minor formatting adjustments
2014-07-21 15:41:42 -05:00
David Teigland
1fe487043d man: use macros for indenting in lvmthin 2014-07-21 10:28:20 -05:00
Alasdair G Kergon
3366baf076 metadata: Reinstate system info in metadata.
Revert part of cac0722cac

This was deliberate and aids the investigation of problems.
2014-07-21 15:54:20 +01:00
Alasdair G Kergon
513fd029a6 config: Adjust description of activation_mode. 2014-07-21 15:50:47 +01:00
Alasdair G Kergon
8c231a5f4d raid: Correct degraded warning message level 2014-07-21 15:40:59 +01:00
Zdenek Kabelac
27574d0e41 tests: use bigger metadata
Until resolved - use bigger than 4MB cache pool metadata.
2014-07-17 16:27:39 +02:00
Zdenek Kabelac
53b787e519 cleanup: drop testing impossible path
We cannot get NULL in this test - since if the arg is set
it will always return non-NULL value here.
(in-release update)
2014-07-17 16:20:21 +02:00
Zdenek Kabelac
ffa1a7b046 cleanup: typo in message
in-release fix.
2014-07-17 16:19:57 +02:00
Zdenek Kabelac
ab72cdbf81 cleanup: check arg_count once
Do not check quiet_ARG more than necessary.
2014-07-17 16:18:35 +02:00
Zdenek Kabelac
321592af71 raid: support lvdisplay --maps
Add legs printing for --maps
Somewhat similar to mirror support - maybe there are more things to
show...
2014-07-17 16:18:34 +02:00
Zdenek Kabelac
cac0722cac metadata: use outfc for comments
A few unnecessary comments were written to the on-disk metadata.
Use outfc() to have comments only in archived files.
(may also save a couple of bytes in the ringbuffer).

TODO: needed validation against newline char...
2014-07-17 16:17:44 +02:00
Zdenek Kabelac
f5d6c4b0f3 cache: use get_cache_mode for validation
Use a single function to validate cache mode arg
and set DM_ feature flags.
2014-07-17 16:16:45 +02:00
Zdenek Kabelac
8f9f180139 lvconvert: improve merge validation
Easier check for conflicting options with --merge.
2014-07-17 16:16:00 +02:00
Zdenek Kabelac
7abad9ef88 lvconvert: improve splitsnapshot test
Easier check for conflicting options with --splitsnapshot.
2014-07-17 16:15:30 +02:00
Zdenek Kabelac
4dcacbe369 lvconvert: move to single name validation
Validate all LV names in _lvconvert_name_params().
2014-07-17 16:14:36 +02:00
Zdenek Kabelac
04acf7a8d0 lvconvert: add missing option for repair
A few more options need to be allowed with --repair.
(in-release fix).
2014-07-17 16:14:18 +02:00
David Teigland
3e4f24115a man: lvmthin fix line breaks
The html rendering inserted some unpleasant line breaks,
so insert better breaks explicitly.
2014-07-11 15:15:03 -05:00
Zdenek Kabelac
65a3f50556 lvconvert: fix missing repair option
Support --repair and --use-policies with mirrors.
(fixes another regression from lvconvert change for thin and cache).
TODO: the code path for mirror needs update.
2014-07-11 14:42:21 +02:00
Zdenek Kabelac
af219fbc04 cleanup: let's make it really obvious for gcc 2014-07-11 14:19:22 +02:00
Zdenek Kabelac
c0ebe78ef8 lvconvert: fix mirror path
The lvconvert rewrite commit missed proper handling
of the mirror path for the --corelog and --mirrorlog options.
Document this in the man page as well.
2014-07-11 14:12:35 +02:00
Peter Rajnoha
17b92001ea WHATS_NEW: recent commits
Commits:
d169ff1e03
bccc2bef33
f76879ba44
2014-07-11 14:09:05 +02:00
Zdenek Kabelac
7913f64a02 cleanup: drop unintended debug error line 2014-07-11 13:54:54 +02:00
Zdenek Kabelac
a62cea3371 cleanup: older gcc is not smart enough
Avoid gcc warning about uninitialized 'seg' variable.
It's not easy for older gcc compiler to deduce it's been set.
2014-07-11 13:52:30 +02:00
Zdenek Kabelac
d7065f154e tests: updates for new lvconvert 2014-07-11 13:32:52 +02:00
Zdenek Kabelac
9f703d35a0 cleanup: lvconvert reorder repair check 2014-07-11 13:32:23 +02:00
Zdenek Kabelac
4bbfac359c man: lvconvert update
Update lvconvert doc for conversion of pools
(thin and cache pools and volumes)

Various more cleanups.
2014-07-11 13:32:23 +02:00
Zdenek Kabelac
b2988a917a lvconvert: update help
Extend help for lvconvert.
Use COMMON_OPTS for some common options.
2014-07-11 13:32:22 +02:00
Zdenek Kabelac
970989655f lvconvert: update for thin and cache
Major update of the lvconvert code to handle cache- and
thin-related targets.

Code tries to unify handling of cache and thin pools.
Better supports lvm2 syntax:

lvconvert --type cache --cachepool vg/pool vg/cache
lvconvert --type thin --thinpool vg/pool vg/extorg
lvconvert --type cache-pool vg/pool
lvconvert --type thin-pool vg/pool

as well as:

lvconvert --cache --cachepool vg/pool vg/cache
lvconvert --thin --thinpool vg/pool vg/extorg
lvconvert --cachepool vg/pool
lvconvert --thinpool vg/pool

While catching many more command line errors.
(Yet a couple of paths still need more tests.)

Detects as many cmdline errors as possible prior to opening the VG.

Uses a single lvconvert_name_params to convert LV names.

Detects as many incompatibilities in the VG as possible prior to prompting.

Uses single prompt to confirm whole conversion.

TODO: still the code needs fixes...
2014-07-11 13:32:22 +02:00
Zdenek Kabelac
fe3ea94e58 cleanup: shift detection of chunksize sign
The sign should be checked prior to opening the VG.
Since get_pool_params() needs profiles,
we need to move the sign check earlier.
2014-07-11 13:32:22 +02:00
Zdenek Kabelac
9955204e0d cleanup: reorder code
Simplify code.
2014-07-11 13:32:21 +02:00
Zdenek Kabelac
de1ee0bc52 cleanup: move merge option
Put long --merge option into section with long options.
2014-07-11 13:32:21 +02:00
Zdenek Kabelac
d5d883d91b cleanup: indent changes 2014-07-11 13:32:21 +02:00
Zdenek Kabelac
f7d6614061 cache: warn about metadata size limits
Cache pools are similar to thin pools.
Add '(needs %s)' - since cache currently has
a somewhat odd need for a few extra KiB over
our default 4M extent size, so make it more obvious.
2014-07-11 13:31:19 +02:00
Zdenek Kabelac
04b8e55f2a lvconvert: relocate mirror tests 2014-07-11 12:57:45 +02:00
Zdenek Kabelac
1931d1e58e tools: arg_is_any_set and arg_is_only_set
Helper functions to more easily detect conflicting
sets of options.
2014-07-11 12:57:45 +02:00
Zdenek Kabelac
c0c1ada88e pool: callback handle cache
Extend the callback functionality to also handle cache pools.

cache_check is now executed on cachepool metadata when
it's activated and deactivated.
2014-07-11 12:57:45 +02:00
Zdenek Kabelac
120bd2d6b1 pool: move code to pool source file
More code is now shared by all pool types (cache & thin).
2014-07-11 12:57:25 +02:00
Zdenek Kabelac
4db5d78cef display: show C only for cache and cachepool
Keep the target type (attr6) that the cache data and metadata volume has
(i.e. it will show 'raid' type if the metadata is raid).
2014-07-11 12:50:44 +02:00
Zdenek Kabelac
ba048612a3 lvchange: just skip on cache pool 2014-07-11 12:50:06 +02:00
Zdenek Kabelac
8932d4a625 lv_is_pool: add new defines
Add defines for lv_is_pool() and lv_is_pool_metadata().
Also update comments for prompts to reflect their current meaning.
(Though maybe they should be renamed.)
2014-07-11 12:50:06 +02:00
Zdenek Kabelac
56c5ad7b19 lvconvert: snapshot prompts to confirm conversion
Since the type of the passed LV is changed and its data content destroyed,
query the user with a prompt to confirm this operation.
Also add proper wiping of the header.

Using '--yes' will skip this prompt:

lvconvert -s --yes vg/lv vg/lvcow
2014-07-11 12:49:55 +02:00
Zdenek Kabelac
64828d877e lvconvert: fix return codes
Error codes in some functions are directly used
as the command result - thus return 0 is not an error code
but success - switch to the proper ECMD_FAILED.
2014-07-11 12:49:02 +02:00
Zdenek Kabelac
baf825331c prompt: display 'n' for EOF
When EOF is detected - it could be either 'Ctrl+C'
or empty stdin.

For Ctrl+C there is a visible ^C sign.
For EOF print 'n' so the decision is clear in the debug output.
2014-07-11 12:47:41 +02:00
Zdenek Kabelac
39cb8aa3ab configure 2014-07-11 12:47:41 +02:00
Zdenek Kabelac
f9d80c9d31 cache: add tool support
Introducing cache tool support.
2014-07-11 12:47:35 +02:00
Peter Rajnoha
5c3d894013 metadata: fix ALLOCATABLE_PV for lvm1 format
This is an addendum for commit 6dc7b783c8.

LVM1 format stores the ALLOCATABLE flag directly in PV header, not
in VG metadata. So the code needs to be fixed further to work
properly for lvm1 format so that the correct PV header is written
(the flag is set only if the PV is in some VG, unset otherwise).
2014-07-11 12:24:15 +02:00
Peter Rajnoha
fd5912762b report: display 'unknown' value for lv_active_remotely field if the LV is also active locally
Currently, we can't determine whether the LV is active remotely
or not in that case.
2014-07-11 11:56:50 +02:00
Peter Rajnoha
c9ae21798e report: display 'unknown' value for active/active_locally/active_remotely/active_exclusively if info bypassed
Before the patch:

$ lvs -o name,active vg/lvol1 --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 active

With this patch applied:
$ lvs -o name,active vg/lvol1 --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV    Active
  lvol1 unknown

The same for active_{locally,remotely,exclusively} fields.
Also, rename headings for these fields (ActLocal/ActRemote/ActExcl).
2014-07-11 11:15:06 +02:00
Peter Rajnoha
52af0dfbc0 report: display 'unknown' value for LVSINFO fields if unable to get info
If the lv_info call fails for whatever reason (the INFO dm ioctl fails or
the dm driver communication is disabled with --driverloaded n), make
sure we always display "unknown" for LVSINFO fields as that's exactly
what happens - we don't know the state.

Before the patch:

$ lvs -o name,device_open --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  Command failed with status code 5.

With this patch applied:

$ lvs -o name,device_open --driverloaded n
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  LV        DevOpen
  lvol1        unknown
2014-07-11 10:18:59 +02:00
Jonathan Brassow
f33d75e2e5 test: Test failing due to too few PVs
Commit 33d69162e4 reduced the number of
PVs to a level where the test could not function.  (It is impossible
to replace 3 PVs of a 4-way RAID1 LV if there are only 5 PVs.)
2014-07-10 18:53:46 -05:00
Peter Rajnoha
3063b48602 report: also report linear and striped for lv_target_type 2014-07-10 16:41:42 +02:00
Peter Rajnoha
d169ff1e03 conf: add report/list_item_separator lvm.conf option
For example:

$ lvm dumpconfig report/list_item_separator
list_item_separator=","

$ lvs -o name,tags vg/lvol1
  LV    LV Tags
  lvol1 a,x,y

$ lvm dumpconfig report/list_item_separator
list_item_separator=":"

$ lvs -o name,tags vg/lvol1
  LV    LV Tags
  lvol1 a:x:y
2014-07-10 16:18:45 +02:00
Peter Rajnoha
e38af4e28f libdm: report: fix string list internal representation if delimiter is composed of more than one char 2014-07-10 16:18:05 +02:00
Peter Rajnoha
1e5ec893c7 tests: LV's zero field now reported as binary field 2014-07-10 15:30:28 +02:00
Peter Rajnoha
e2448bb0dc report: report LV's zero field as binary field
Like other binary fields we already have:

$ lvs -o name,zero vg/lvx vg/pool vg/pool1
  LV    Zero
  lvx   unknown
  pool
  pool1    zero

$ lvs -o name,zero vg/lvx vg/pool vg/pool1 --binary
  LV    Zero
  lvx     -1
  pool     0
  pool1    1
2014-07-10 15:25:01 +02:00
Peter Rajnoha
e31ec38d8e report: reserved value: description for undefined value 2014-07-10 13:42:16 +02:00
Peter Rajnoha
175de52ded cleanup: also use values.h for final dm_report_reserved_value array composition 2014-07-10 13:37:40 +02:00
Peter Rajnoha
f598aa38ae cleanup: report reserved value macros 2014-07-10 11:54:37 +02:00
Peter Rajnoha
fafd10d564 conf: command_profile_template: global/binary_values_as_numeric -> report/binary_values_as_numeric 2014-07-10 09:48:45 +02:00
Peter Rajnoha
a4e798bd30 report: add values.h for per-field reserved value declaration 2014-07-10 09:41:21 +02:00
Jonathan Brassow
be75076dfc activation: Add "degraded" activation mode
Currently, we have two modes of activation, an unnamed nominal mode
(which I will refer to as "complete") and "partial" mode.  The
"complete" mode requires that a volume group be 'complete' - that
is, no missing PVs.  If there are any missing PVs, no affected LVs
are allowed to activate - even RAID LVs which might be able to
tolerate a failure.  The "partial" mode allows anything to be
activated (or at least attempted).  If a non-redundant LV is
missing a portion of its addressable space due to a device failure,
it will be replaced with an error target.  RAID LVs will either
activate or fail to activate depending on how badly their
redundancy is compromised.

This patch adds a third option, "degraded" mode.  This mode can
be selected via the '--activationmode {complete|degraded|partial}'
option to lvchange/vgchange.  It can also be set in lvm.conf.
The "degraded" activation mode allows RAID LVs with a sufficient
level of redundancy to activate (e.g. a RAID5 LV with one device
failure, a RAID6 with two device failures, or RAID1 with n-1
failures).  RAID LVs with too many device failures are not allowed
to activate - nor are any non-redundant LVs that may have been
affected.  This patch also makes the "degraded" mode the default
activation mode.
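
For example, degraded activation might be requested like this (a
hypothetical invocation; 'vg' is a placeholder VG name):

  vgchange -ay --activationmode degraded vg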

The degraded activation mode does not yet work in a cluster.  A
new cluster lock flag (LCK_DEGRADED_MODE) will need to be created
to make that work.  Currently, there is limited space for this
extra flag and I am looking for possible solutions.  One possible
solution is to usurp LCK_CONVERT, as it is not used.  When the
locking_type is 3, the degraded mode flag simply gets dropped and
the old ("complete") behavior is exhibited.
2014-07-09 22:56:11 -05:00
Alasdair G Kergon
a098cba0eb report: Rename common fields to special fields.
Change the help heading from 'Common Fields' to 'Special Fields' for
the fields: selected, help, ?

Remove the code that does 'all' processing with these special fields as
each of them changes the behaviour of the command in an undesirable way.

'lvs -o all,selected' was of course just printing help.
(via internal expansion to 'lv_all,common_all')

and if we ignored the help fields, then '-o common_all' would still
pull in 'selected' and change the way rows were output.
2014-07-09 23:33:09 +01:00
Peter Rajnoha
46ea315f09 report: also recognize 'yes'/'no' for selection criteria on binary fields
We have 1/"descriptive word"/"yes" for 1 and 0/"no" for 0.
For example (the new recognized values are "yes" and "no"):

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2
  LV    DevOpen
  root        open
  swap        open
  lvol1       open
  lvol2

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=open'
  LV    DevOpen
  root        open
  swap        open
  lvol1       open

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=1'
  LV    DevOpen
  root        open
  swap        open
  lvol1       open

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=yes'
  LV    DevOpen
  root        open
  swap        open
  lvol1       open

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=0'
  LV    DevOpen
  lvol2

$ lvs -o name,device_open fedora vg/lvol1 vg/lvol2 -S 'device_open=no'
  LV    DevOpen
  lvol2
2014-07-09 15:57:05 +02:00
Peter Rajnoha
c1bed36b67 cleanup: move _lvactive_disp and _thinzero_disp under 'attribute' display functions
So all attribute reporting functions are in one section of code
for quick orientation (all these functions are defined in the order
of their attribute character displayed in pv/vg/lv_attr field).
2014-07-09 15:57:05 +02:00
Peter Rajnoha
bccc2bef33 report: add lv_active_{locally,remotely,exclusively} LV reporting fields
lv_active_{locally,remotely,exclusively} display the original
"lv_active" field in a more separate way so that we can create
selection criteria in a binary-based form (yes/no).
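
For example, a listing might look like this (hypothetical LV and values):

  lvs -o name,active_locally,active_remotely,active_exclusively vg/lvol1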
2014-07-09 15:57:05 +02:00
Peter Rajnoha
b6ac8819f6 report: 'whether' -> 'set if' in field description 2014-07-09 15:57:05 +02:00
Peter Rajnoha
ccab185aa7 cleanup: use macros for definition of reporting/selection reserved values
The macros for reserved value definitions make the process a bit easier,
but there's still room for improvement to make this even more
transparent. We can optimize and provide better automation here later on.
2014-07-09 15:55:54 +02:00
Peter Rajnoha
1a05862732 report: report unknown/-1 for binary fields with unknown value
Also respect --binary arg and/or report/binary_values_as_numeric
when displaying unknown values. If textual form is used, use "unknown",
if numeric value is used, use "-1" (which we already use to denote
unknown numeric values in other reports like lv_kernel_major and
lv_kernel_minor).
2014-07-08 16:16:02 +02:00
Peter Rajnoha
f76879ba44 conf: comment out devices/preferred_names and filter setting
This avoids creating void matchers which have no effect anyway and
they just use resources. Also, it makes lvm dumpconfig --type diff
to mark these settings properly as not being different from defaults
(where by default, devices/preferred_names as well as devices/filter
are void).

Also, add a few comments about builtin rules used to select device
alias in case preferred_names is not defined or it doesn't match
any of device aliases.
2014-07-08 10:22:59 +02:00
Peter Rajnoha
f6001465ef lv_manip: pool-metadata-spare is just a spare LV, not tightly bound to thin or cache 2014-07-07 17:02:06 +02:00
Peter Rajnoha
4b65d7ec72 WHATS_NEW: commits a473435..7021c8f1 2014-07-07 16:52:43 +02:00
Peter Rajnoha
9e1c4a3818 report: addendum for previous commit
Really call lv_info only if needed!
2014-07-07 16:28:13 +02:00
Peter Rajnoha
83b55c2dfb report: fix segfault while reporting PV/LV segment fields together with LV fields needing device status (LVSINFO)
There was a missing lv_info call for situations where there were
mixed PV/LV segment fields together with LVSINFO fields which
require extra lv_info call for LV device status. This ended up
with NULL lvinfo passed to the field reporting functions, hence
the segfault.
2014-07-07 15:54:13 +02:00
Peter Rajnoha
6dc7b783c8 metadata: fix regression causing PVs not in VGs to be marked as allocatable
If the PV is not yet in a VG, it's not allocatable.
A regression introduced by commit 0283c439ec
(_pv_create) and later commit a7ca101517
(pv_read).
2014-07-07 14:07:21 +02:00
Peter Rajnoha
7021c8f1a4 report: define reserved values/synonyms for some attribute fields
All binary attr fields have synonyms so selection criteria can use
either 0/1 or words to match against the field value (base type
for these binary fields is numeric one - DM_REPORT_FIELD_TYPE_NUMBER
so words are registered as reserved values):

pv_allocatable          - "allocatable"
pv_exported             - "exported"
pv_missing              - "missing"

vg_extendable           - "extendable"
vg_exported             - "exported"
vg_partial              - "partial"
vg_clustered            - "clustered"

lv_initial_image_sync   - "initial image sync", "sync"
lv_image_synced_names   - "image synced", "synced"
lv_merging_names        - "merging"
lv_converting_names     - "converting"
lv_allocation_locked    - "allocation locked", "locked"
lv_fixed_minor          - "fixed minor", "fixed"
lv_merge_failed         - "merge failed", "failed"

For example, these three are all equivalent:

$ lvs -o name,fixed_minor -S 'fixed_minor=fixed'
  LV    FixMin
  lvol8 fixed minor

$ lvs -o name,fixed_minor -S 'fixed_minor="fixed minor"'
  LV    FixMin
  lvol8 fixed minor

$ lvs -o name,fixed_minor -S 'fixed_minor=1'
  LV    FixMin
  lvol8 fixed minor

The same with binary output - it has no effect on this functionality:

$ lvs -o name,fixed_minor --binary -S 'fixed_minor=fixed'
  LV    FixMin
  lvol8          1

$ lvs -o name,fixed_minor --binary -S 'fixed_minor="fixed minor"'
  LV    FixMin
  lvol8          1

[1] f20/~ # lvs -o name,fixed_minor --binary -S 'fixed_minor=1'
  LV    FixMin
  lvol8          1
2014-07-04 15:50:50 +02:00
Peter Rajnoha
0956fd230f report: adapt selection code to recognize per-field reserved values
In contrast to per-type reserved values that are applied for all fields
of that type, per-field reserved values are only applied for concrete
field only.

Also add 'struct dm_report_field_reserved_value' to libdm for per-field
reserved value definition. This is defined by field number (an index
in the 'fields' array which is given for the dm_report_init_with_selection
function during report initialization) and the value to use for any
of the specified reserved names.
2014-07-04 15:50:50 +02:00
Peter Rajnoha
da545ce3b4 tools: add --binary arg to pvs,vgs,lvs and {pv,vg,lv}display -C and report/binary_values_as_numeric lvm.conf option
The --binary option, if used, causes all the binary values reported
in reporting commands to be displayed as "0" or "1" instead of descriptive
literal values (value "unknown" is still used for values that could not be
determined).

Also, add report/binary_values_as_numeric lvm.conf option with the same
functionality as the --binary option (the --binary option prevails
if both --binary cmd option and report/binary_values_as_numeric lvm.conf
option is used at the same time). The report/binary_values_as_numeric is
also profilable.

This makes it easier to use and check lvm reporting command output in scripts.
2014-07-04 15:40:17 +02:00
Peter Rajnoha
d2af4f84c9 report: add separate fields for PV/VG/LV attributes
Physical Volume Fields:
  pv_allocatable         - Whether this device can be used for allocation.
  pv_exported            - Whether this device is exported.
  pv_missing             - Whether this device is missing in system.

Volume Group Fields:
  vg_permissions         - VG permissions.
  vg_extendable          - Whether VG is extendable.
  vg_exported            - Whether VG is exported.
  vg_partial             - Whether VG is partial.
  vg_allocation_policy   - VG allocation policy.
  vg_clustered           - Whether VG is clustered.

Logical Volume Fields:
  lv_volume_type         - LV volume type.
  lv_initial_image_sync  - Whether mirror/RAID images underwent initial resynchronization.
  lv_image_synced        - Whether mirror/RAID image is synchronized.
  lv_merging             - Whether snapshot LV is being merged to origin.
  lv_converting          - Whether LV is being converted.
  lv_allocation_policy   - LV allocation policy.
  lv_allocation_locked   - Whether LV is locked against allocation changes.
  lv_fixed_minor         - Whether LV has fixed minor number assigned.
  lv_merge_failed        - Whether snapshot merge failed.
  lv_snapshot_invalid    - Whether snapshot LV is invalid.
  lv_target_type         - Kernel target type the LV is related to.
  lv_health_status       - LV health status.
  lv_skip_activation     - Whether LV is skipped on activation.

Logical Volume Info Fields
  lv_permissions         - LV permissions.
  lv_suspended           - Whether LV is suspended.
  lv_live_table          - Whether LV has live table present.
  lv_inactive_table      - Whether LV has inactive table present.
  lv_device_open         - Whether LV device is open.
2014-07-04 15:40:17 +02:00
Peter Rajnoha
4b9b1f2319 refactor: use new LVSINFO report type for lv_kernel_{major,minor,read_ahead} field 2014-07-04 15:40:17 +02:00
Peter Rajnoha
ecb2be5d16 reporter: add separate LVSINFO report type
LVSINFO is exactly the same as existing LVS report type,
but it has the "struct lvinfo" populated in addition for
use - this is useful for fields that display the status
of the LV device itself (e.g. suspended state, tables
present/missing...).

Currently, such properties are reported within the "lv_attr"
field so separation is unnecessary - the "lvinfo" call
to populate the "struct lvinfo" is directly a part of the
field reporting function - _lvstatus_disp/lv_attr_dup.

With upcoming patches, we'd like the lv_attr field bits
to be separated into their own fields. To avoid calling
"lvinfo" fn as many times as there are fields requiring
the "lv_info" structure to be populated while reporting
one row related to one LV, we're separating former LVS
into LVS and LVSINFO report type. With this, there's
just one "lvinfo" call for one report row and LV reporting
fields will take the info needed from this struct then,
hence reusing it and not calling "lvinfo" fn on their own.
2014-07-04 15:40:17 +02:00
Peter Rajnoha
6b58647848 lv_manip: add get_lv_type_name/lv_is_linear and lv_is_striped helper fns
The get_lv_type_name helps with translating volume type
to human readable form (can be used in reports or
various messages if needed).

The lv_is_linear and lv_is_striped complete the set of
lv_is_* functions that identify exact volume types.
2014-07-04 15:40:17 +02:00
Peter Rajnoha
a4734354ce refactor: remove static modifier for lv_raid_image_in_sync and lv_raid_healthy fn
...to make use of it in other parts of the code.
2014-07-04 15:40:17 +02:00
Zdenek Kabelac
9d2c445d0a cleanup: just safely copy string
Keep analyzers happier and use constrained strcpy.
2014-07-04 12:31:17 +02:00
Zdenek Kabelac
86e116450e cleanup: drop unneeded initialization
Code assigns this variable right after clearing.
2014-07-04 12:31:17 +02:00
Zdenek Kabelac
1f72f8ed40 dev-type: print aborting log_error
When wiping is aborted print immediate log_error message
(log_error comes 1st.)
2014-07-04 12:31:17 +02:00
Zdenek Kabelac
a94abc0fdd dev-type: print log_sys_debug
For non-fatal error use log_sys_debug as the tool
is not stopping on these errors.
2014-07-04 12:31:17 +02:00
Alasdair G Kergon
ac60c876c4 vgsplit: Improve message when LV still active.
Mention parent LV as well as the LV triggering the warning.

Still leaves some confusing cases but it's not worth fixing them
at the moment.
(Thin pool inactive but a thin volume active => deactivate thin vol.
Inactive mirror/raid with pvmove in progress => complete pvmove and
activate & deactivate mirror/raid.
If new VG already exists it requires some LVs to be inactive
unnecessarily.)
2014-07-04 01:13:51 +01:00
Alasdair G Kergon
137ed3081a report: Add lv_parent field.
Only defined for thin/cache/raid/mirror at this stage as it
relies on get_only_segment_using_this_lv().
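
For example (a hypothetical listing):

  lvs -o name,lv_parent vg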
2014-07-03 23:49:34 +01:00
Alasdair G Kergon
1e1c2769a7 vgsplit: Fix VG component of lvid.
Fix VG component of lvid in vgsplit and vgmerge
Update vg_validate() to detect the error.
Call lv_is_active() before moving LV into new VG, not after.
2014-07-03 19:06:04 +01:00
Alasdair G Kergon
64ce3a8066 report: Add lv_dm_path and lv_full_name fields. 2014-07-02 17:24:05 +01:00
Alasdair G Kergon
5bfa2ec21d report: Exclude hidden devices from lv_path field. 2014-07-02 14:57:00 +01:00
Zdenek Kabelac
c6811dd512 tests: ensure data hits cow 2014-07-02 15:10:10 +02:00
Zdenek Kabelac
39bd6bb89e cleanup: drop inline and add prefix _ for static
Leave the inline decision to the compiler.
Add '_' prefix for static functions.
2014-07-02 15:09:06 +02:00
Zdenek Kabelac
d1094ec4c6 tests: replace cat with $(<
Use shell built-in $(<
Print lvm.conf in use for test.
2014-07-02 10:45:44 +02:00
Zdenek Kabelac
b22ab4dab0 tests: avoid hiding results in local
There is a difference between:

local a=$(shell)

and

local a
a=$(shell)

The first returns the exit code from the shell's 'local' command.
2014-07-02 10:45:43 +02:00
Zdenek Kabelac
d1dcbe0853 cleanup: add braces for if() 2014-07-02 10:45:43 +02:00
Zdenek Kabelac
52ab15b2d0 cleanup: use unsigned type for command
Keep command unsigned (as _IOWR() produces them).
2014-07-02 10:45:43 +02:00
Zdenek Kabelac
57f8b33d5d cleanup: a bit better error message 2014-07-02 10:45:43 +02:00
Zdenek Kabelac
165cfab7db cleanup: verbose in human readable size
Use normal size like we use everywhere else.
2014-07-02 10:45:42 +02:00
Zdenek Kabelac
6b3a49876e cleanup: line indent 2014-07-02 10:45:42 +02:00
Zdenek Kabelac
748af97afc cleanup: libdm simpler error comparison
When testing the return value from snprintf
use the simpler form '>=' instead of '+1 >'.
2014-07-02 10:45:42 +02:00
Zdenek Kabelac
e21d0eb90e display: add display_lvname
Add simple function to print vg/lv name.
Useful e.g. in error messages.
2014-07-02 10:45:42 +02:00
Zdenek Kabelac
6f6900d457 fsadm: avoid using -a in test 2014-07-02 10:45:41 +02:00
Zdenek Kabelac
7bdf4719e8 pool: delay conversion prompt
First validate as many params as possible before prompting the user
about conversion to data and metadata LV.
2014-07-02 10:45:39 +02:00
Zdenek Kabelac
3af761ba16 thin: fix chunk_size conversion prompt skip
Using --force only enables prompting for a dangerous operation.
The user has to add --yes to skip this prompt.
2014-07-02 10:43:56 +02:00
Zdenek Kabelac
93a80018ae lvremove: remove thin volumes on damaged pools
Support removal of thin volumes with --force --force
when the thin pool is damaged.

This way it's possible to remove a thin pool with
unrepairable metadata without requiring manual
editing of lvm2 metadata.

lvremove -ff vg/pool

removes all thin volumes and pool even when
thin pool cannot be activated (to accept
removal of thin volumes in kernel metadata)
2014-07-02 10:37:52 +02:00
Zdenek Kabelac
0b872ce870 raid: don't skip prompt with force
'--yes' is meant to be used to skip all prompts.
(--force just adds more prompts).
2014-07-02 10:36:32 +02:00
Zdenek Kabelac
c460f35cda raid: switch to log_warn
Use log_warn for warning message.
log_error is printed when command returns error code.
2014-07-02 10:34:40 +02:00
Zdenek Kabelac
355258be58 mirror: mirror_or_raid_type_requested update
mirror_or_raid_type_requested really checks for mirror type.

Convert macros mirror_or_raid_type_requested() and
snapshot_type_requested() into inline functions.
2014-07-02 10:34:39 +02:00
Alasdair G Kergon
c77197c688 make: Fix pofile and .d file generation.
Use builddir not srcdir with make pofile.

Append 'incfile:' lines to %.d files to handle newly-missing dependencies
without 'make clean' after a file is moved or deleted.
2014-07-02 00:48:50 +01:00
Zdenek Kabelac
70551eec59 uuid: revert uuids for mirrors and raids
Using suffixes for mirrors and raids will need more work
before this can be enabled.

Meanwhile revert to previous behavior.

Keep suffixes for thins and caches.
2014-06-30 14:58:30 +02:00
Zdenek Kabelac
13fb02ff1f cleanup: ignore vg_name in /lib
Since vg_name inside /lib functions has already been mostly ignored,
except for a few debug prints, make it an official internal API
feature.

vg_name is used only in /tools while the VG is not yet opened;
when an lvresize/lvcreate /lib function is called with a VG pointer
already in use, vg_name becomes irrelevant (it's not been
validated anyway).

So any internal user of lvcreate_params and lvresize_params does not
need to set vg_name pointer and may leave it NULL.
2014-06-30 12:21:36 +02:00
Zdenek Kabelac
667f93b7d9 uuid: add more private uuid suffixes
Use suffixes for easier detection of private volumes.

This commit makes older volume UUIDs incompatible and
it most probably needs machine reboot after upgrade.
2014-06-30 12:17:07 +02:00
Zdenek Kabelac
2ada685216 cleanup: more lv_is_ functions 2014-06-30 12:16:08 +02:00
Zdenek Kabelac
6da14a82c6 thin: do not create reserved LVs
When creating a pool's metadata - create the initial LV for clearing with
some generic name and, after the volume is created & cleared, rename it
to the reserved name '_tmeta/_cmeta'.

We should not expose  'reserved' names for public LVs.
2014-06-30 12:16:05 +02:00
Zdenek Kabelac
eadcea2dae thin: repaired LV uses _meta%d
Don't leave a 'regular' LV with a reserved suffix for the user.
After a successful repair use a 'normal' (non-reserved) LV name
for the backup of the original metadata.
2014-06-30 12:15:13 +02:00
Peter Rajnoha
b6fe906956 activation: fix typo in 'activation skip' message 2014-06-30 11:02:45 +02:00
Peter Rajnoha
100342605c libdm: fix double const for "value" in dm_report_reserved_value structure
C++ may have
2014-06-30 09:44:23 +02:00
Peter Rajnoha
b41aa985d7 man: do not mention '(i)nherited' for alloc policy in vg_attr field
VG has nothing to inherit from...
2014-06-26 15:15:10 +02:00
Jonathan Brassow
ed3c2537b8 raid: Allow repair to reuse PVs from same image that suffered a PV failure
When repairing RAID LVs that have multiple PVs per image, allow
replacement images to be reallocated from the PVs that have not
failed in the image if there is sufficient space.

This allows for scenarios where a 2-way RAID1 is spread across 4 PVs,
where each image lives on two PVs but doesn't use the entire space
on any of them.  If one PV fails and there is sufficient space on the
remaining PV in the image, the image can be reallocated on just the
remaining PV.
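
Such a repair would typically be triggered with something like this
(hypothetical VG/LV names):

  lvconvert --repair vg/raid1_lv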
2014-06-25 22:26:06 -05:00
Jonathan Brassow
7028fd31a0 misc: after releasing a PV segment, merge it with any adjacent free space
Previously, the seg_pvs used to track free and allocated space were left
in place after 'release_pv_segment' was called to free space from an LV.
Now, an attempt is made to combine any adjacent seg_pvs that also track
free space.  Usually, this doesn't provide much benefit, but in a case
where one command might free some space and then do an allocation, it
can make a difference.  One such case is during a repair of a RAID LV,
where one PV of a multi-PV image fails.  This new behavior is used when
the replacement image can be allocated from the remaining space of the
PV that did not fail.  (First the entire image with the failed PV is
removed.  Then the image is reallocated from the remaining PVs.)
2014-06-25 22:04:58 -05:00
Jonathan Brassow
b35fb0b15a raid/misc: Allow creation of parallel areas by LV vs segment
I've changed build_parallel_areas_from_lv to take a new parameter
that allows the caller to build parallel areas by LV vs by segment.
Previously, the function created a list of parallel areas for each
segment in the given LV.  When it came time for allocation, the
parallel areas were honored on a segment basis.  This was problematic
for RAID because any new RAID image must avoid being placed on any
PVs used by other images in the RAID.  For example, if we have a
linear LV that has half its space on one PV and half on another, we
do not want an up-convert to use either of those PVs.  It should
especially not wind up with the following, where the first portion
of one LV is paired up with the second portion of the other:
------PV1-------  ------PV2-------
[ 2of2 image_1 ]  [ 1of2 image_1 ]
[ 1of2 image_0 ]  [ 2of2 image_0 ]
----------------  ----------------
Previously, it was possible for this to happen.  The change makes
it so that the returned parallel areas list contains one "super"
segment (seg_pvs) with a list of all the PVs from every actual
segment in the given LV and covering the entire logical extent range.

This change allows RAID conversions to function properly when there
are existing images that contain multiple segments that span more
than one PV.
2014-06-25 21:20:41 -05:00
Jonathan Brassow
1f1675b059 test: Test addition to show incorrect allocator behavior
If a RAID LV has images that are spread across more than one PV
and you allocate a new image that requires more than one PV,
parallel_areas is only honored for one segment.  This commit
adds a test for this condition.
2014-06-21 15:33:52 -05:00
Peter Rajnoha
e80884cd08 filters: always reevaluate filter before creating a PV
...to avoid using cached value (persistent filter) and therefore
not noticing any change made after last scan/filtering - the state
of the device may have changed, for example new signatures added.

$ lvm dumpconfig --type diff
allocation {
	use_blkid_wiping=0
}
devices {
	obtain_device_list_from_udev=0
}

$ cat /etc/lvm/cache/.cache | grep sda

$ vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "fedora" using metadata type lvm2

$ cat /etc/lvm/cache/.cache | grep sda
		"/dev/sda",

$ parted /dev/sda mklabel gpt
Information: You may need to update /etc/fstab.

$ parted /dev/sda print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 134MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

$ cat /etc/lvm/cache/.cache | grep sda
		"/dev/sda",

====

Before this patch:
$ pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created

With this patch applied:
$ pvcreate /dev/sda
  Physical volume /dev/sda not found
  Device /dev/sda not found (or ignored by filtering).
2014-06-25 16:24:28 +02:00
Peter Rajnoha
e329c3146d coverity: mark new switch cases with 'fall through' comment for coverity to stop complaining 2014-06-25 08:51:37 +02:00
Peter Rajnoha
3208396ce5 coverity: fix issues reported by coverity 2014-06-24 14:58:53 +02:00
Alasdair G Kergon
29ca0573ba post-release 2014-06-23 15:23:09 +01:00
Alasdair G Kergon
0bb6ffb81f pre-release 2014-06-23 14:16:39 +01:00
Alasdair G Kergon
8d27f8e003 pre-release 2014-06-23 14:03:32 +01:00
Alasdair G Kergon
867b92b031 man: More /dev/vg and /dev/mapper documentation. 2014-06-23 14:01:31 +01:00
Peter Rajnoha
9c3c357874 select: add message about 'help' field to get more help on each error hit during selection parsing
Inform about 'help' to get more help about selection fields and operators
after each syntax error hit:

  "Use 'help' for selection to get more help."
2014-06-23 12:21:17 +02:00
Peter Rajnoha
69075d0b43 select: also mark uncomparable/unselectable fields in field/selection help 2014-06-23 12:20:49 +02:00
Peter Rajnoha
2d48ef7f04 select: add FLD_UNCOMPARABLE flag for fields which can't be compared
A field where it has no meaning to do any type of comparison is the
implicit "help" or "?" field. The error given was a bit cryptic
before this patch, the FLD_UNCOMPARABLE flag makes it easier to identify
this situation anywhere in the code and provide much better error message.
This flag can be applied to other fields that may appear in the future -
mostly usable for implicit fields as they always have special purpose
(so we're not exporting it in libdevmapper for now - usual reporting
fields don't need this).

Before this patch:

$ vgs -S help=1
  dm_report_object: no data assigned to field help
  dm_report_object: no data assigned to field help

(...which is true actually, but let's provide something better...)

With this patch applied:

$vgs -S help=1
  Selection field is uncomparable: help.
  Selection syntax error at 'help=1'.

$vgs -S '(name=vg && help=1) || vg_size > 1g'
  Selection field is uncomparable: help.
  Selection syntax error at 'help=1) || vg_size > 1g'.
2014-06-23 10:09:58 +02:00
Alasdair G Kergon
c1c2e838e8 locking: fix cluster locking
Hunk missed from last commit 78533f72d3.
2014-06-20 16:38:48 +01:00
Alasdair G Kergon
78533f72d3 locking: Introduce LCK_ACTIVATION.
Take a local file lock to prevent concurrent activation/deactivation of LVs.
Thin/cache types and an extension for cluster support are excluded for
now.

'lvchange -ay $lv' and 'lvchange -an $lv' should no longer cause trouble
if issued concurrently: the new lock should make sure they
activate/deactivate $lv one-after-the-other, instead of overlapping.

(If anyone wants to experiment with the cluster patch, please get in touch.)
2014-06-20 13:24:02 +01:00
Alasdair G Kergon
f29ae59a4d pvmove: add a few comments 2014-06-20 11:41:20 +01:00
Zdenek Kabelac
f96a499c8d lv: fix lv_is_raid 2014-06-20 11:37:45 +02:00
Zdenek Kabelac
93597bcbdc tests: add udev sync point
Missed synchronization with udev may lead to error on vgcreate,
if previous vgremove was not handled fast enough by udev.
2014-06-20 11:14:29 +02:00
Zdenek Kabelac
548269a1dd cleanup: use simpler test
Just like all other tests - use direct LV function test
2014-06-20 11:14:11 +02:00
Zdenek Kabelac
32ad8ab5a4 memlock: skip more entries
Add more entries for memlock skipping - since those are never
used by lvm code in critical section (suspend state).
2014-06-20 11:13:41 +02:00
Peter Rajnoha
59ed4d3bf6 dmsetup: no need to check for "help" field name after report init
The "help" field (as well as "?") is implicit now - libdevmapper
takes care of it completely.
2014-06-19 18:22:51 +02:00
Jonathan Brassow
c6d82c992b pvmove: Fix code that looks up the "move pv" for display
'lvs' would segfault if trying to display the "move pv" if the
pvmove was run with '--atomic'.  The structure of an atomic pvmove
is different and requires us to descend another level in the
LV tree to retrieve the PV information.
2014-06-19 10:57:08 -05:00
Jonathan Brassow
3964a1a89f pvmove: Clean-up iterator.
In 'find_pvmove_lv', separate the code that searches the atomic
pvmove LVs from the code that searches the normal pvmove LVs.  This
cleans up the segment iterator code a bit.
2014-06-19 10:52:09 -05:00
Peter Rajnoha
f16da6ef23 report: display explicit fields first, then implicit fields in field help
It's better to have implicit fields at the very end of the output
so users can see them without scrolling back if the list of fields
is long (the "help" is also an implicit field now so it should be
easily visible).
2014-06-19 16:14:53 +02:00
Peter Rajnoha
a40bc36b2e libdevmapper: revoke commit 7c86131233
We have "help" and "?" defined as implicit fields now. As such, we
don't need to export these names in libdevmapper (as it was introduced
by commit 7c86131233 within this release).
If anyone uses these field names by mistake, the libdevmapper code can
error out correctly if it detects that the set of explicit field names
(the ones supplied by "fields" arg in dm_report_init/dm_report_init_with_selection)
contains any of the implicit field names (the ones defined internally
by libdevmapper itself).
2014-06-19 16:09:32 +02:00
Peter Rajnoha
cd7325f18d report: make "help" and "?" field implicit
Making "help" and "?" implicit also simplifies code since the
dm_report_init caller (lvm/dmsetup) doesn't need to check on
dm_report_init return whether "help" or "?" was hit while parsing
fields/sort keys in libdevmapper.

The libdevmapper now sets internal "RH_ALREADY_REPORTED" flag
after it reports the "help" or "?" implicit field. Then libdevmapper
itself checks for this flag in dm_report_object and if found,
the actual reporting is skipped (because the "help" implicit field
was reported instead of the actual report).
2014-06-19 16:09:31 +02:00
Peter Rajnoha
012dab7aa3 select: add list of allowed types for each selection operator mentioned in help 2014-06-19 15:19:54 +02:00
Alasdair G Kergon
b33091cb11 pvmove: tidy 2014-06-19 13:40:47 +01:00
Zdenek Kabelac
bc0a1ca83d tests: remove dmeventd usage
This test is testing --use-policies on cmdline.
So monitoring must not be used.
2014-06-19 12:48:21 +02:00
Zdenek Kabelac
b193a3b09f cleanup: rename variable wait
With older system headers (sys/wait.h) this shadows declaration.
2014-06-19 12:02:48 +02:00
Zdenek Kabelac
00af0d13c9 cleanup: more readable
Older gcc complained a bit about uninitialized vars
so reorder code for better readability.
2014-06-19 12:02:48 +02:00
Zdenek Kabelac
aa3e413093 lvchange: better --refresh of raid and mirrors
Use lv_check_not_in_use() to detect an opened device.
Plain info.open_count is not good enough given udev's random
device opening.
2014-06-19 12:01:34 +02:00
Jonathan Brassow
57faf97e6f test: Clean-up pvmove-basic for atomic pvmove test
The way I was testing for the existence of pvmove mimages was
incorrect for rhel5.  This patch makes it more generic/universal.
2014-06-18 15:40:06 -05:00
David Teigland
e96a4856e6 man: lvmthin
Clean up inconsistencies in the last change.
Improve some bad formatting.
2014-06-18 14:30:57 -05:00
Zdenek Kabelac
597de5c807 cleanup: use insert_layer_for_lv implicit rename
There is implicit rename for certain layered device.
Do it now for _tdata, _cdata and _corig.

TODO: use better API here...
2014-06-18 15:00:18 +02:00
Zdenek Kabelac
e6a4cc9c31 lvconvert: print warning when not convert thinpool
The warning about destruction should not be printed
when we are converting an already existing pool
(improving original in-release commit bbf4b2c1c9).
2014-06-18 15:00:18 +02:00
Peter Rajnoha
21964f47d5 compilation: fix warnings: build_dm_uuid now accepts whole struct logical_volume, not lvid
replicator/replicator.c:338:2: warning: passing argument 2 of 'build_dm_uuid' from incompatible pointer type [enabled by default]
replicator/replicator.c:629:3: warning: passing argument 2 of 'build_dm_uuid' from incompatible pointer type [enabled by default]
replicator/replicator.c:644:6: warning: passing argument 2 of 'build_dm_uuid' from incompatible pointer type [enabled by default]
replicator/replicator.c:668:7: warning: passing argument 2 of 'build_dm_uuid' from incompatible pointer type [enabled by default]
replicator/replicator.c:677:4: warning: passing argument 2 of 'build_dm_uuid' from incompatible pointer type [enabled by default]
2014-06-18 14:43:13 +02:00
Peter Rajnoha
fc6e2a703b man: add man page entry for dmsetup info -c -S/--select + minor cleanups 2014-06-18 13:48:27 +02:00
Peter Rajnoha
0548a82e63 cleanup: gcc warnings and report-select test vs snap_percent 0%
Fix gcc warnings:
libdm-report.c:1952:5: warning: "end_op_flag_hit" may be used uninitialized in this function [-Wmaybe-uninitialized]
libdm-report.c:2232:28: warning: "custom" may be used uninitialized in this function [-Wmaybe-uninitialized]

And snap_percent is not 0% in dm < 1.10.0 so
don't test comparison with 0% here.
2014-06-18 13:26:47 +02:00
Peter Rajnoha
ca1abe70ff WHATS_NEW: commit 76467bdcfd
Ordering string list items on reports is also new compared to
previous state where items were not ordered at all and they
got reported simply as they appeared/were processed.
2014-06-18 12:30:34 +02:00
Peter Rajnoha
63f5be0170 WHATS_NEW: commits 7dbbc05a69c4cb9756464720cad29e3c1ed971c3..b16f5633ab199dedfd25f08562f686a6fb4aba9d
Report selection support...
2014-06-18 10:48:53 +02:00
Jonathan Brassow
5ebff6cc9f pvmove: Enable all-or-nothing (atomic) pvmoves
pvmove can be used to move single LVs by name or multiple LVs that
lie within the specified PV range (e.g. /dev/sdb1:0-1000).  When
moving more than one LV, the portions of those LVs that are in the
range to be moved are added to a new temporary pvmove LV.  The LVs
then point to the range in the pvmove LV, rather than the PV
range.

Example 1:
	We have two LVs in this example.  After they were
	created, the first LV was grown, yielding two segments
	in LV1.  So, there are two LVs with a total of three
	segments.

	Before pvmove:
	      ---------  ---------   ---------
	      | LV1s0 |  | LV2s0 |   | LV1s1 |
	      ---------  ---------   ---------
	         |           |           |
	   -------------------------------------
	PV | 000 - 255 | 256 - 511 | 512 - 767 |
	   -------------------------------------

	After pvmove inserts the temporary pvmove LV:
	          ---------   ---------   ---------
	          | LV1s0 |   | LV2s0 |   | LV1s1 |
	          ---------   ---------   ---------
	              |           |           |
	        -------------------------------------
	pvmove0 |   seg 0   |   seg 1   |   seg 2   |
	        -------------------------------------
	              |           |           |
	        -------------------------------------
	PV      | 000 - 255 | 256 - 511 | 512 - 767 |
	        -------------------------------------

	Each of the affected LV segments now point to a
	range of blocks in the pvmove LV, which purposefully
	corresponds to the segments moved from the original
	LVs into the temporary pvmove LV.

The current implementation goes on from here to mirror the temporary
pvmove LV by segment.  Further, as the pvmove LV is activated, only
one of its segments is actually mirrored (i.e. "moving") at a time.
The rest are either complete or not addressed yet.  If the pvmove
is aborted, those segments that are completed will remain on the
destination and those that are not yet addressed or in the process
of moving will stay on the source PV.  Thus, it is possible to have
a partially completed move - some LVs (or certain segments of LVs)
on the source PV and some on the destination.

Example 2:
	What 'example 1' might look like if it were half-way
	through the move.
	             ---------   ---------   ---------
	             | LV1s0 |   | LV2s0 |   | LV1s1 |
	             ---------   ---------   ---------
	                 |           |           |
	           -------------------------------------
	pvmove0    |   seg 0   |   seg 1   |   seg 2   |
	           -------------------------------------
	                 |           |           |
	                 |     -------------------------
	source PV        |     | 256 - 511 | 512 - 767 |
	                 |     -------------------------
	                 |           ||
	           -------------------------
	dest PV    | 000 - 255 | 256 - 511 |
	           -------------------------

This update allows the user to specify that they would like the
pvmove mirror created "by LV" rather than "by segment".  That is,
the pvmove LV becomes an image in an encapsulating mirror along
with the allocated copy image.

Example 3:
	A pvmove that is performed "by LV" rather than "by segment".

	                   ---------   ---------
	                   | LV1s0 |   | LV2s0 |
	                   ---------   ---------
	                       |           |
	                 -------------------------
	        pvmove0  |  * LV-level mirror *  |
	                 -------------------------
                             /                \
	   pvmove_mimage0   /          pvmove_mimage1
	   -------------------------   -------------------------
	   |   seg 0   |   seg 1   |   |   seg 0   |   seg 1   |
	   -------------------------   -------------------------
	        |            |               |           |
	   -------------------------   -------------------------
	   | 000 - 255 | 256 - 511 |   | 000 - 255 | 256 - 511 |
	   -------------------------   -------------------------
	           source PV                    dest PV

The thing that differentiates a pvmove done in this way from a simple
"up-convert" from linear to mirror is the preservation of the
distinct segments.  A normal up-convert would simply allocate the
necessary space with no regard for segment boundaries.  The pvmove
operation must preserve the segments because they are the critical
boundary between the segments of the LVs being moved.  So, when the
pvmove copy image is allocated, all corresponding segments must be
allocated.  The code that merges adjoining segments that are part of
the same LV when the metadata is written must also be avoided in
this case.  This method of mirroring is unique enough to warrant its
own definitional macro, MIRROR_BY_SEGMENTED_LV.  This joins the two
existing macros: MIRROR_BY_SEG (for original pvmove) and MIRROR_BY_LV
(for user created mirrors).

The advantage of performing pvmove in this way is that all of the
LVs affected can be moved together.  It is an all-or-nothing approach
that leaves all LV segments on the source PV if the move is aborted.
Additionally, a mirror log can be used (in the future) to provide tracking
of progress; allowing the copy to continue where it left off in the event
there is a deactivation.
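
An atomic (all-or-nothing) move of this kind might be requested like
this (hypothetical device names):

  pvmove --atomic /dev/sdb1 /dev/sdc1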
2014-06-17 22:59:36 -05:00
Peter Rajnoha
b16f5633ab test: fix report_select test to work in cluster
The snapshot LV is used to check selection of percent values.
The orig volume must be activated exclusively in cluster.
2014-06-17 18:34:46 +02:00
Peter Rajnoha
ef43a50926 tests: update lvcreate-thin for latest changes
With recent changes introduced with the report selection support,
the content of lv_modules field is of string list type (before
it was just string type).

String list elements are always ordered now so update lvcreate-thin
test to expect the elements to be ordered.
2014-06-17 18:20:08 +02:00
Peter Rajnoha
d09590c4b6 prop: update FIELD macro to accommodate the differentiation of number, size and percent field values
The differentiation of the original number field into number, size and
percent field types has been introduced with recent changes for report
selection support.
2014-06-17 18:14:57 +02:00
Peter Rajnoha
94316dfe9d report: select: add man pages for report selection feature 2014-06-17 16:27:21 +02:00
Peter Rajnoha
40e0f44495 report: select: add --select arg to lvm devtypes 2014-06-17 16:27:21 +02:00
Peter Rajnoha
f88130fd85 report: add support for implicit fields, add implicit "selected" field
Implicit fields are fields that are registered with the report
and reported internally by libdevmapper itself (compared to explicit
fields that are registered by the layer above libdevmapper - e.g. LVM,
dmsetup...).

The "selected" field is the implicit field (for now the only one)
that reports the result of the selection. Since the selection itself
is the property of the libdevmapper, the upper layer using dm_report_init
can't register this field itself and it must be done directly at
libdevmapper layer.

The "selected" field is internally registered as part of the "common"
report type with id 0x80000000 (the last bit in uint32_t) which is then
reserved (the explicit report types are then checked if they do not
contain this id and if yes, we error out).

This way, the "selected" field is recognized by all libdevmapper users
that initialize the reporting with "dm_report_init_with_selection".
If reporting is initialized with the classical "dm_report_init",
there's no functional change (so the "selected" field is not defined
and it's not recognized).
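
For example, the implicit field might be displayed like this
(hypothetical command and names):

  lvs -o name,selected -S 'lv_name=~lvol'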
2014-06-17 16:27:21 +02:00
Peter Rajnoha
0d8e94ce2e tests: select: add test for report selection feature 2014-06-17 16:27:21 +02:00
Peter Rajnoha
51a86dc2f8 report: select: add support for percent selection 2014-06-17 16:27:21 +02:00
Peter Rajnoha
cfed0d09e8 report: select: refactor: move percent handling code to libdm for reuse 2014-06-17 16:27:21 +02:00
Peter Rajnoha
35c4e4489c report: select: add support for reserved value recognition in report selection string - add struct dm_report_reserved_value
Make dm_report_init_with_selection accept an argument with an
array of reserved values where each element contains a triple:

  {dm report field type, reserved value, array of strings representing this value}

When the selection is parsed, we always check whether a string
representation of some reserved value is not hit and if it is,
we use the reserved value assigned for this string instead of
trying to parse it as a value of certain field type.

This makes it possible to define selections like:

   ... --select lv_major=undefined (or -1 or unknown or undef or whatever string representations are registered for this reserved value in the future)
   ... --select lv_read_ahead=auto
   ... --select vg_mda_copies=unmanaged

With this, each time the field value of certain type is hit
and when we compare it with the selection, we use the proper
value for comparison.

For now, register these reserved values that are used at the moment
(also more descriptive names are used for the values):

  const uint64_t _reserved_number_undef_64 = UINT64_MAX;
  const uint64_t _reserved_number_unmanaged_64 = UINT64_MAX - 1;
  const uint64_t _reserved_size_auto_64 = UINT64_MAX;

 {
  {DM_REPORT_FIELD_TYPE_NUMBER, _reserved_number_undef_64, {"-1", "undefined", "undef", "unknown", NULL}},
  {DM_REPORT_FIELD_TYPE_NUMBER, _reserved_number_unmanaged_64, {"unmanaged", NULL}},
  {DM_REPORT_FIELD_TYPE_SIZE, _reserved_size_auto_64, {"auto", NULL}},
  NULL
 }

Same reserved value of different field types do not collide.
All arrays are null-terminated.

The list of reserved values is automatically displayed within
selection help output:

  Selection operands
  ------------------
  ...

  Reserved values
  ---------------
    -1, undefined, undef, unknown   - Reserved value for undefined numeric value. [number]
    unmanaged                       - Reserved value for unmanaged number of metadata copies in VG. [number]
    auto                            - Reserved value for size that is automatically calculated. [size]

  Selection operators
  -------------------
  ...
2014-06-17 16:27:21 +02:00
Peter Rajnoha
a075ec15c4 report: select: show field type in field list if in context of selection
When the field list is displayed as help for constructing selection
criteria, show also the field value type. This is useful for users
to know what set of operators are allowed for the type - the subsequent
"Selection operands" section in the help output summarize all known
types that can be used in selection.
2014-06-17 16:27:21 +02:00
Peter Rajnoha
6d667adeea report: select: add help for creating selections
The "<lvm command> -S/--select help" shows help (including list of fields to match against):

  ...field list here including the field type name...

  Selection operands
  ------------------
    field               - Reporting field.
    number              - Non-negative integer value.
    size                - Floating point value with units specified.
    string              - Characters quoted by ' or " or unquoted.
    string list         - Strings enclosed by [ ] and elements delimited by either
                          "all items must match" or "at least one item must match" operator.
    regular expression  - Characters quoted by ' or " or unquoted.

  Selection operators
  -------------------
    Comparison operators:
        =~  - Matching regular expression.
        !~  - Not matching regular expression.
         =  - Equal to.
        !=  - Not equal to.
        >=  - Greater than or equal to.
         >  - Greater than
        <=  - Less than or equal to.
         <  - Less than.

    Logical and grouping operators:
        &&  - All fields must match
         ,  - All fields must match
        ||  - At least one field must match
         #  - At least one field must match
         !  - Logical negation
         (  - Left parenthesis
         )  - Right parenthesis
         [  - List start
         ]  - List end
2014-06-17 16:27:21 +02:00
Peter Rajnoha
03a3f6078d report: select: add support for comparing string lists with selection defined 2014-06-17 16:27:20 +02:00
Peter Rajnoha
8faa4ded9c report: select: add support for processing string lists in selection
Selection list items are enclosed in '[' and ']' (if there's only
one item, the '[' and ']' can be omitted). Each element of the list
is a string (either quoted or unquoted, like the usual string operand
used in selection) and each element is delimited either by conjunction
(meaning "match all") or disjunction operator (meaning "match any").

For example, if "," is the conjunction operator and "/" is the
disjunction operator then:

  lv_tags=[a,b,c]

...will match all fields where tags contain *all* a, b and c.

  lv_tags=[a/b/c]

...will match all fields where tags contain *any* of a, b, or c.

Mixing operators within the list is not supported:

  lv_tags=[a,b/c]

...will give an error.

The order in which items are defined in the selection does not matter.

This patch enhances the selection parsing functionality to recognize
such lists.
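
A complete command using such a list might look like this (hypothetical
tags and names):

  lvs -o name,lv_tags -S 'lv_tags=[a,b,c]'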
2014-06-17 16:27:20 +02:00
Peter Rajnoha
a6694cfc29 report: select: add DM_REPORT_FIELD_TYPE_STRING_LIST to make a difference between STRING and STRING_LIST
The {pv,vg,lv,seg}_tags and lv_modules fields are reported as string
lists using the new dm_report_field_string_list - so we just pass
the list to the fn that takes care of reporting and item sorting itself.
2014-06-17 16:27:20 +02:00
Peter Rajnoha
76467bdcfd report: select: add dm_report_field_string_list to libdm
Add a separate dm_report_field_string_list fn to libdevmapper to
support reporting string lists. Before, the code used libdevmapper's
dm_report_field_string fn which required formatting the list to a
single string. This functionality is now moved to libdevmapper
and the code that needs to report the string list just needs
to pass the list itself and libdevmapper will take care of this.
This also enhances code reuse.

The dm_report_field_string_list also accepts an argument to define
custom delimiter to use. If not defined, a default "," (comma) is
used as item delimiter in the string list reported.

The dm_report_field_string_list automatically sorts the items in
the list before formatting it to a final string. It also encodes
the position and length within the final string where each element
can be found. This can be used to support checking against each
list item reported, since when formatted as a single string
for the actual report, we would lose this information otherwise
(we don't want to copy each item, the position and length within
the final string is enough for us to get the original items back).

When such lists are checked against the selection tree, we can check
each item individually this way and we can support operators like
"match any" and "match all".
2014-06-17 16:27:20 +02:00
Peter Rajnoha
5abdb52fdc report: select: refactor: move str_list to libdm
The list of strings is used quite frequently and we'd like to reuse
this simple structure for report selection support too. Make it part
of libdevmapper for general reuse throughout the code.

This also simplifies the LVM code a bit since we don't need to
include and manage lvm-types.h anymore (the string list was the
only structure defined there).
2014-06-17 16:27:20 +02:00
Peter Rajnoha
fe952e735a report: select: add --select arg to pvdisplay, vgdisplay and lvdisplay 2014-06-17 16:27:20 +02:00
Peter Rajnoha
5b734a0ea1 report: select: add --select arg to pvs, vgs and lvs 2014-06-17 16:27:20 +02:00
Peter Rajnoha
3a1c7e5d78 report: select: add --select arg to dmsetup 2014-06-17 16:27:20 +02:00
Peter Rajnoha
bc6458de87 report: select: use _check_report_selection in dm_report_object to report only objects that satisfy the report selection
This is a rebased and edited version of the original design and
patch proposed by Jun'ichi Nomura:
  http://www.redhat.com/archives/dm-devel/2007-April/msg00025.html

This activates the actual selection process in dm_report_object.
2014-06-17 16:27:20 +02:00
Peter Rajnoha
d33280a978 report: select: add _check_selection fn to support checking fields against given selections
This is a rebased and edited version of the original design and
patch proposed by Jun'ichi Nomura:
  http://www.redhat.com/archives/dm-devel/2007-April/msg00025.html

The _check_selection implements the actual field checking against the
selection tree.
2014-06-17 16:27:20 +02:00
Peter Rajnoha
0103738ef5 report: select: add dm_report_init_with_selection to libdm
This is a rebased and edited version of the original design and
patch proposed by Jun'ichi Nomura:
  http://www.redhat.com/archives/dm-devel/2007-April/msg00025.html

The dm_report_init_with_selection is the same as dm_report_init
but it contains an additional argument to set the selection
in the form of a string that contains field names to check against and
selection operators. The selection string is parsed and a selection
tree is composed for use in the checks against individual fields when
the report is processed. The parsed selection tree is stored in dm_report
structure as "selection_root".
2014-06-17 16:27:20 +02:00
Peter Rajnoha
2c3e84a68d report: select: add supporting infrastructure for token parsing in report selections
This is a rebased and edited version of the original design and
patch proposed by Jun'ichi Nomura:
  http://www.redhat.com/archives/dm-devel/2007-April/msg00025.html

Add support for parsing numbers, strings (quoted or unquoted), regexes
and operators amongst these operands in the supplied selection condition.
2014-06-17 16:27:20 +02:00
Peter Rajnoha
4118dd8da3 report: select: add structs for report selection
This is a rebased and edited version of the original design and
patch proposed by Jun'ichi Nomura:
  http://www.redhat.com/archives/dm-devel/2007-April/msg00025.html

This patch defines operators and structures that will be used
to store the report selection against which the actual values
reported will be checked.

  Selection operators
  -------------------
    Comparison operators:
        =~  - Matching regular expression.
        !~  - Not matching regular expression.
         =  - Equal to.
        !=  - Not equal to.
        >=  - Greater than or equal to.
         >  - Greater than
        <=  - Less than or equal to.
         <  - Less than.

    Logical and grouping operators:
        &&  - All fields must match
         ,  - All fields must match
        ||  - At least one field must match
         #  - At least one field must match
         !  - Logical negation
         (  - Left parenthesis
         )  - Right parenthesis
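
A hedged example combining several of these operators on the command
line (the size and the patterns are invented for illustration):

  lvs --select 'lv_size >= 100m && (lv_name =~ "lvol" || lv_attr =~ "^m")'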
2014-06-17 16:27:20 +02:00
Peter Rajnoha
7dbbc05a69 report: select: add DM_REPORT_FIELD_TYPE_SIZE to make a difference between NUMBER and SIZE
This makes it easier to check against the fields (in the following patches
for report selection) and to check whether size units are allowed or not
with the field value.
2014-06-17 16:27:20 +02:00
Zdenek Kabelac
378fa9d158 tests: check new snapshot skills 2014-06-17 13:43:05 +02:00
Zdenek Kabelac
8403bbd4ad tests: detect version of thin_restore command
Skip test when missing.
2014-06-17 13:43:05 +02:00
Zdenek Kabelac
6fb19f37fe tests: wait for udev
Before test exits, wait for udev.
2014-06-17 13:43:04 +02:00
Zdenek Kabelac
0558b1a086 cleanup: we already know max device name size
Use NAME_LEN constant to simplify creation of device name.
Since the max size should be already tested in validation,
throw INTERNAL_ERROR if the size of vg/lv is bigger than NAME_LEN.
2014-06-17 13:43:04 +02:00
Zdenek Kabelac
7aef45f9bb cleanup: use stack for small buffer
Avoid checking for an allocation error when just a few bytes are needed
for a short string - use the stack instead.
Stacktrace the lvmetad_pv_gone() failure path.
2014-06-17 13:42:45 +02:00
Zdenek Kabelac
494db11004 snapshot: %ORIGIN is relative to data size
Let's use the size of the origin as the real base for the percentage
calculation, and 'silently' add the needed metadata space for the snapshot.

So now the command 'lvcreate -s -l100%ORIGIN vg/lv' should always create a
snapshot able to handle a full overwrite of the device.
2014-06-17 13:41:01 +02:00
Zdenek Kabelac
cd6d6fc24e snapshot: report proper error message
Expressing -lXX%LV is not valid for a snapshot, but the error message for
the snapshot case was not complete and missed %ORIGIN.
Also properly document the correct settings in the man page, where
%PVS was missing.
2014-06-17 13:36:33 +02:00
Zdenek Kabelac
15e7066fe3 snapshot: do not spawn when origin is not active
Since the code is not doing anything when the origin is not active,
avoid spawning the polling thread.
2014-06-17 13:36:07 +02:00
Zdenek Kabelac
c46d4a745d snapshot: check snapshot exists
Return 0 if the LV is not even a snapshot.
2014-06-17 13:36:07 +02:00
Zdenek Kabelac
435c82f8f6 snapshot: check it's still snapshot
While polling for a snapshot, first detect whether the snapshot still
exists.  It's valid to have multiple polling threads watching
for the same thing, and just one can 'win' the finish part.
All others should nicely 'fail'.
2014-06-17 13:36:07 +02:00
Jonathan Brassow
a20de8af20 poll_daemon: Cleanly exit polling if the LV is no longer active
If we are polling an LV due to some sort of conversion and it
becomes inactive, a rather worrisome message is produced, e.g.:
"  ABORTING: Mirror percentage check failed."

We can cleanly exit if we do a simple check to see if the LV is
active before performing the check.  This eliminates the scary
message.
2014-06-16 18:56:32 -05:00
Jonathan Brassow
962a40b981 cache: Properly rename origin LV tree when adding "_corig"
When creating a cache LV with a RAID origin, we need to ensure that
the sub-LVs of that origin properly change their names to include
the "_corig" extension of the top-level LV.  We do this by first
performing a 'lv_rename_update' before making the call to
'insert_layer_for_lv'.
2014-06-16 18:15:39 -05:00
Peter Rajnoha
fea8abe56a systemd: use RemoveOnStop for dm-event.socket and lvm2-lvmetad.socket
Systemd version 214 introduced the new "RemoveOnStop" option for socket
units to remove the socket/FIFO when the particular unit is stopped.

See also https://bugzilla.redhat.com/show_bug.cgi?id=802748.
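
A sketch of what the relevant fragment of such a socket unit looks like
(the listen path below is an assumption, abridged for illustration):

  [Socket]
  ListenStream=/run/lvm/lvmetad.socket
  RemoveOnStop=true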
2014-06-13 15:45:25 +02:00
Peter Rajnoha
4c9fbe048f spec: new thin-generic.profile 2014-06-13 10:01:34 +02:00
Peter Rajnoha
8e0687ca69 profile: add thin-generic.profile
The thin-generic.profile contains settings for thin/thin pool volumes
suitable for a generic environment/use, containing the default settings.
This allows users to change the global lvm.conf settings at will
and still keep the original settings for volumes that have this
thin profile assigned already.
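
A hedged sketch of how such a profile is meant to be used (the pool name
and the lv_profile report field are assumptions for the example):

  lvchange --metadataprofile thin-generic vg/pool
  lvs -o lv_name,lv_profile vg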
2014-06-13 09:56:29 +02:00
Zdenek Kabelac
cf4d5ead02 test: pvs bz1108394 2014-06-12 11:56:06 +02:00
Zdenek Kabelac
eb316fec33 libdm: dm_report_object report error for no data
NULL data would cause problems....
2014-06-12 11:56:06 +02:00
Zdenek Kabelac
3d9737442b libdm: dm_report_object avoid duplicate strlen call
Remember strlen result.
2014-06-12 11:56:06 +02:00
Zdenek Kabelac
922f884abe report: avoid passing NULL label
Internal reporting function cannot handle NULL reporting value,
so ensure there is at least a dummy label.

So move dummy_lable from tools/reporter.c and use it for all
report_object() calls in lib/report/report.c.
(Fixes RHBZ 1108394)

Simplify lvm_report_object initialization.
2014-06-12 11:55:58 +02:00
Zdenek Kabelac
c230ae95ab tests: change to inittest 2014-06-11 17:46:55 +02:00
Zdenek Kabelac
3f81b7c55c tests: update vgchange -c
Vgchange now detects running clvmd - so update the test to reflect this.
2014-06-11 11:11:10 +02:00
Zdenek Kabelac
f845afe7cf man: dmsetup manglename
More updates to the manglename option.
Add a reference to the LVM2 resource page, since for a long time
this has been the right place for the libdevmapper sources....
2014-06-11 11:11:10 +02:00
Zdenek Kabelac
4db71422a2 man: kiB uppercase 2014-06-11 11:10:56 +02:00
Zdenek Kabelac
d13efac51b man: update lvmthin
Improve the graphical form of the page and use the shorter, correct and
suggested forms of thin pool manipulation commands.
2014-06-11 11:10:55 +02:00
Zdenek Kabelac
e156837636 man: more compliant 2014-06-11 11:10:55 +02:00
Zdenek Kabelac
0896987633 man: properly escape -
A dash should be escaped with '\' to be typographically correct.
2014-06-11 11:10:55 +02:00
Zdenek Kabelac
4956091027 man: use bullets 2014-06-11 11:10:55 +02:00
Zdenek Kabelac
d2b60c6e35 man: advertise lvmcache, lvmthin
Add references to new man pages so they get known.
2014-06-11 11:10:54 +02:00
Peter Rajnoha
6766cdc8a1 tests: some more renames lib/test -> lib/inittest 2014-06-11 11:07:32 +02:00
Zdenek Kabelac
5c5177c37c tests: rename test to inittest
We run into problems when we use 'test' for commands like
should/not/...

So avoid overloading the 'test' name and change it to inittest.
2014-06-10 10:51:27 +02:00
Zdenek Kabelac
9f68aadb74 tests: make timeouts longer
Add more time for tests, since debug kernels are getting slower...
and we add more and more tests.

However many tests should be shortened to avoid testing disk performance
and to focus on lvm functionality...
(Often we should probably test with inactive volumes when we check
metadata operations of lvm2)

We may need to support an option for 'DEEP', longer testing.

Also something like LVM_TEST_TIMEOUT_FACTOR might be useful,
though it would be much better if the test suite could approximate
the test length from system performance...
2014-06-10 10:51:26 +02:00
Zdenek Kabelac
9f9a196dc0 cleanup: add missing log_error
log_error about no change in volume group with 'n' prompt answer.
(in-release fix)
2014-06-10 10:51:26 +02:00
Zdenek Kabelac
5e52a7788f cleanup: drop inline keyword
Inline would need the function body in a header file.
2014-06-10 10:51:26 +02:00
Zdenek Kabelac
2f260c9909 activation: retry cleanup deactivation
Enable 'retry' deactivation also in 'cleanup' phase.
It shouldn't be mostly needed - however udev now produces
more and more completelny non-synchronizable device opens,
so even for orphan devices we can't easily predict where
udevd opens devices.

So it's more preferable here to log error about device being open
and retry clean, but let the command proceed.
2014-06-10 10:51:24 +02:00
Petr Rockai
1824a781db lvmetad: Drop active connection upon lvmetad_set_active(0). 2014-06-09 01:55:33 +02:00
Petr Rockai
488f308527 libdaemon: Keep track of client threads, wait before shutdown. 2014-06-09 01:50:57 +02:00
Petr Rockai
4bb1efe2fb test: Reflect that --sysinit only treats lvmetad specially with -aay (not -ay). 2014-06-08 23:37:08 +02:00
Petr Rockai
ee200ddfc3 pvremove: Update lvmcache => avoid spurious error messages. 2014-06-08 22:57:04 +02:00
Petr Rockai
02e1bf406b lvmetad: Avoid "connect failed" spamming when lvmetad is not available. 2014-06-08 22:09:29 +02:00
Petr Rockai
150165591f test: Try harder to vgremove in lvmetad-lvm1.sh. 2014-06-08 22:01:02 +02:00
Petr Rockai
b3bdd41092 lvm1: Fail vg_write gracefully when devices are missing. 2014-06-08 21:57:18 +02:00
Petr Rockai
60443d6a5d test: Fix the vgck test after vg_write change. 2014-06-08 21:10:47 +02:00
Petr Rockai
f58a7f305b test: Fail devices silently in lvconvert-repair-transient.sh. 2014-06-08 21:10:47 +02:00
Petr Rockai
eda4c3a41d test: Make it possible to enable/disable devices silently. 2014-06-08 21:10:47 +02:00
Petr Rockai
dba6dec661 metadata: Make it possible to write partial VGs obtained from lvmetad. 2014-06-08 17:41:11 +02:00
Peter Rajnoha
943f3aec3d cleanup: move the "daemon is running" checks to lvm-wrappers
And use ifdefs there, not exposing it in the tool code itself.
In the future, we should probably make the PIDFILE and
daemon checking code available also in case the daemon itself
is not built.
2014-06-06 14:21:09 +02:00
Zdenek Kabelac
f115a4a53f configure: update libcpg test
PKG_CHECK_MODULES needs the old-style if;then;fi.
2014-06-06 10:31:45 +02:00
Peter Rajnoha
14f482077d cleanup: default.profile is not used (and it was split in two and renamed anyway) 2014-06-06 10:24:50 +02:00
Peter Rajnoha
291e55557e cleanup: commit c0f9c79 to work also with for non-clustered configuration 2014-06-06 10:17:26 +02:00
Jonathan Brassow
c0f9c79ae8 vgchange: With '--yes', don't prompt the user
If the user supplies a '--yes' argument, then don't bother them with
a question to confirm whether to change the cluster attribute (even
if clvmd isn't running).
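
For example, a hedged one-liner (the VG name is invented) that sets the
clustered attribute without any confirmation prompt:

  vgchange --yes -cy vg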
2014-06-05 22:45:19 -05:00
Jonathan Brassow
1f2aedb190 WHATS_NEW: For commit 9399b743 (prompt for VG cluster attr change)
Minor change, but put a comment in WHATS_NEW anyway.
2014-06-05 22:30:50 -05:00
Jonathan Brassow
9399b74356 vgchange: Prompt when setting VG cluster attr if cluster is not setup
If clvmd is not running or the locking type is not clustered and someone
attempts to set the cluster attribute on a volume group, prompt them to
see if they are sure.  (Only prompt for one though.  If neither is true,
simply ask them once.)
2014-06-05 22:27:40 -05:00
Zdenek Kabelac
de12310c45 tests: disable python failing test
Aborts and needs fixes...
2014-06-05 23:07:23 +02:00
Zdenek Kabelac
7b8133e0b2 tests: fix test compare
Comparing 64T can't use -eq
2014-06-05 23:06:45 +02:00
Zdenek Kabelac
eb7ca96b59 tests: adapt test for newline delimit
The content of DEVICES is now delimited by newlines.
2014-06-05 23:05:52 +02:00
Zdenek Kabelac
46b0cd10fe configure: do not exit with error code
Since the test is the last command, write it in a form that
evaluates to 0 after it has finished.
(Regression from the last configure cleanup.)
2014-06-05 18:02:40 +02:00
Zdenek Kabelac
c4e0c61272 tests: typo 2014-06-05 17:49:35 +02:00
Zdenek Kabelac
54da0ea61a tests: use get_devs
Check how get_devs can be used with the shell array DEVICES.
2014-06-05 17:49:35 +02:00
Zdenek Kabelac
77d4e317a4 tests: use manglename none for dmsetup 2014-06-05 17:49:34 +02:00
Zdenek Kabelac
9196942312 tests: add get_devs function
Instead of rereading the device list via cat, keep
the list in a bash array. (This also solves the problem
with spaces in device paths.)

Move usage of "$path" out of lvm shell usage,
since we don't support such a thing there...
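
A minimal bash sketch of the difference (the DEVICES file/array names
follow the test suite convention; the vgcreate line is only illustrative):

  # word splitting via $(cat ...) breaks device paths containing spaces
  vgcreate $vg $(cat DEVICES)
  # a bash array keeps each device path as a single argument
  vgcreate $vg "${DEVICES[@]}"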
2014-06-05 17:49:34 +02:00
Zdenek Kabelac
223bdc5eb2 tests: use shell arrays to keep device names
Better preserve spaces in device path names,
though admittedly the rest of the test suite needs
more repairs...
2014-06-05 17:49:34 +02:00
Zdenek Kabelac
21db25b3c4 tests: fix use of double apostrophes in get
Need to put "" around parameters.
2014-06-05 17:49:34 +02:00
Zdenek Kabelac
1b59f5a99c configure: reconfigure 2014-06-05 17:47:23 +02:00
Zdenek Kabelac
7cead4afea configure: cleanups
Replace AC_PATH_PROG with AC_PATH_TOOL.
Drop 'x' when already using "" around a shell variable.
Simplify some long lines and ifs.
Merge multiple test evaluations with '-a', '-o'.
Use 'case' instead of several ifs when it's more elegant.
Improve usage of pkg_config_init and add it where it's been missing.
Check for UDEV_HAS_BUILTIN_BLKID when building udev rules.
2014-06-05 17:47:23 +02:00
Zdenek Kabelac
b3ace4f9af configure: accept 'none' as mangling mode
Since we advertise 'none' as a mangling mode, accept it.
Keep it backward compatible and leave the disabled option still working
(though there is likely no user of this option...)
2014-06-05 17:47:23 +02:00
Zdenek Kabelac
93c2614e56 man: document DM_DEFAULT_NAME_MANGLING_MODE
Document the DM_DEFAULT_NAME_MANGLING_MODE environment variable.
(its default setting is build time configurable)
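
Hedged usage sketches (the two forms should be equivalent for a single
invocation; the 'info -c' call is just an example command):

  DM_DEFAULT_NAME_MANGLING_MODE=none dmsetup info -c
  dmsetup info -c --manglename none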
2014-06-05 17:47:21 +02:00
Jonathan Brassow
4454a580df test: use direct I/O when injecting bad data into RAID images
When directly corrupting RAID images for the purpose of testing,
we must use direct I/O (or a 'sync' after the 'dd') to ensure that
the writes are not caught in the buffer cache in a way that is not
reachable by the top-level RAID device.
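
A hedged sketch of the kind of corruption write meant here (the device
variable, offset and sizes are invented for illustration):

  dd if=/dev/urandom of="$raid_image_dev" oflag=direct bs=1k count=1 seek="$offset"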
2014-05-30 17:26:10 -05:00
Peter Rajnoha
0362169277 report: fix report field type for lv_kernel_major/minor
Should be defined as numeric field, not string field.
2014-05-30 17:24:07 +02:00
Jonathan Brassow
442820aae3 activation: Remove empty DM device when table fails to load.
As part of better error handling, remove DM devices that have been
successfully created but failed to load a table.  This can happen
when pvmove'ing in a cluster and the cluster mirror daemon is not
running on a remote node - the mapping table failing to load as a
result.  In this case, any revert would work on other nodes running
cmirrord because the DM devices on those nodes did succeed in loading.
However, because no table was able to load on the non-cmirrord nodes,
there is no table present that points to what needs to be reverted.
This causes the empty DM device to remain on the system without being
present in any LVM representation.

This patch should only be considered a partial fix to the overall
problem.  This is because only the device which failed to load a
table is removed.  Any LVs that may have been loaded as requirements
to the DM device that failed to load may be left in place.  Complete
clean-up will require tracking those devices which have been created
as dependencies and removing them along with the device that failed
to load a table.
2014-05-28 10:17:15 -05:00
Zdenek Kabelac
8212dac849 tests: rename test 2014-05-28 15:41:06 +02:00
Zdenek Kabelac
2adaef8272 revert: restore original timeout
It was accidentally committed - but it has also shown
that on heavily loaded systems (like our test machine can be)
slightly bigger timeouts, which wait longer for udev rule
processing, do help and avoid the occasional refusal of deactivation
because the device is still open.
(i.e. lvcreate...; lvchange -an...)

Unsure how we could synchronize for this now. On a very slow (or loaded)
system a 5 second timeout is simply not enough.

TODO: introduce at least an lvm.conf-configurable setting to
allow longer 'retry' loops.
2014-05-28 15:33:41 +02:00
Zdenek Kabelac
171a668e81 tests: dd needs to hit disk
Unsure if this is a feature or a bug of syncaction,
but the data needs to be physically present on the media
and it ignores the content of the buffer cache...

(maybe lvchange should implicitly fsync all disks
that are members of the raid array before starting the test??)
2014-05-28 15:33:41 +02:00
Zdenek Kabelac
ba3e6e7c32 tests: raid syncaction activation race
Demonstrate the problem of syncaction being called right after activation.
2014-05-28 15:33:41 +02:00
Zdenek Kabelac
a67774c1fa tests: detect same uuid on PV
Check we know how to handle the same UUID.
The test currently does NOT work with lvmetad
(or it's unclear it even should - thus the test error
is currently lowered to 'test warning')

TODO: replace lib/test with a better shell script name
2014-05-27 17:09:05 +02:00
Zdenek Kabelac
9240aca369 raid: cleanup error messages
Add log_error messages on error paths.
2014-05-27 17:08:49 +02:00
Zdenek Kabelac
ae43d1afa2 activate: cleanup lv_check_not_in_use
Reindent lv_check_not_in_use to simplify internal loop code.
Also always return '0/1'  (drop -1) - since we only
check for failure (0) - and we don't really know
why  lv_info() has failed.
2014-05-27 17:08:49 +02:00
Peter Rajnoha
1569e7a498 udev: also print subsystem udev flags in debug message about udev flags + fix typo DM_SUBSSYTEM_UDEV_FLAG7 -> DM_SUBSYSTEM_UDEV_FLAG7 2014-05-27 14:44:11 +02:00
Zdenek Kabelac
b3539907f5 tests: support thin_restore configurable
Currently this tool is used only in tests.
2014-05-26 23:30:09 +02:00
Zdenek Kabelac
b0ff3359f2 tests: update aux disable_dev
disable_dev can't use a transaction - since it may occasionally lead to
a weird error - an example could be the nomda-missing.sh test case.
Here the device, instead of being removed, was occasionally left as an
error device, testing a different code path (which is unfortunately
buggy).

When we want to test an 'error' device, 'aux error_dev()' should be used.
2014-05-26 22:57:28 +02:00
Zdenek Kabelac
49521f4e56 cleanup: internal error for impossible path
Add 'default' path for impossible execution code path.
2014-05-26 22:57:28 +02:00
Zdenek Kabelac
965592340d man: cleanup dmsetup
Add few bold texts.
2014-05-26 22:57:20 +02:00
Zdenek Kabelac
3cb2658fb7 dmsetup: add warning
Warn when --udevcookie/DM_UDEV_COOKIE is used with 'dmsetup remove --force'.

When a command is doing multiple ioctl operations on a single device,
it may invoke udev activity that collides with further ioctl commands.
The result of such an operation becomes unpredictable.
Use of --retry could partially help...
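
A hedged alternative to the troublesome combination described above
(the device name is invented): prefer retrying over forcing while a
udev cookie is set:

  dmsetup remove --retry vg-lvol0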
2014-05-26 22:56:30 +02:00
Peter Rajnoha
6e9105c7bb cleanup: use const for endptr in dm_units_to_factor 2014-05-26 12:09:01 +02:00
Zdenek Kabelac
cfe18d85c1 tests: improve command coverage 2014-05-23 23:35:42 +02:00
Zdenek Kabelac
b7476e91ef tests: add unusable kernel for raid5 testing 2014-05-23 23:35:42 +02:00
Zdenek Kabelac
c5c3995ed5 tests: increase min version for raid testing
Seems smaller versions are causing weird kernel lookups.
2014-05-23 23:35:42 +02:00
Zdenek Kabelac
3f8048f28c vgextend: allow --yes to skip prompt 2014-05-23 23:35:40 +02:00
Zdenek Kabelac
1a84032322 cleanup: indent 2014-05-23 21:37:12 +02:00
Zdenek Kabelac
bf6b69c46b cleanup: use directly segtype->name
Simplify printing of segtype name.
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
952514611d cleanup: add seg_is_pool macro
Simplify code querying for pool segtype.
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
cb7bba9ffe dev_manager: disable extra udev loop
Disable the code which postprocessed the whole tree and reset udev flags.
We need to find out which case was troublesome - since this loop
was just hiding a bug in other code parts (most probably the preload tree).
2014-05-23 21:36:55 +02:00
Zdenek Kabelac
ec9da34d86 tests: check more things with vgchange 2014-05-22 12:01:44 +02:00
Zdenek Kabelac
65b0948c1e tests: swap tests 2014-05-22 12:01:44 +02:00
Zdenek Kabelac
1208e92b34 tests: add check vg_attr_bit
Similar function like  'check lv_attr_bit'
2014-05-22 12:01:44 +02:00
Zdenek Kabelac
406ce3760b tests: detect raid presence 2014-05-22 12:01:44 +02:00
Zdenek Kabelac
496953fb39 cleanup: use y/n instead of y|n
Use the same form of yes/no query everywhere.
2014-05-22 12:01:43 +02:00
Peter Rajnoha
1c4fe47308 lvm_init: don't use name mangling for LVM
LVM has a restricted character set that is allowed for VG-LV names,
and the dm names constructed do not contain any blacklisted characters
that would require name mangling.

Also, when any other device-mapper device is scanned that could
possibly contain such blacklisted characters, we reference the
device by its major:minor instead of dm name (e.g. _device_is_usable fn).
2014-05-22 10:00:19 +02:00
Zdenek Kabelac
b2da0f0a5b tests: raid and dmeventd 2014-05-21 23:14:42 +02:00
Zdenek Kabelac
37b4dc7775 tests: more pvchange tests 2014-05-21 23:14:42 +02:00
Zdenek Kabelac
79f4665243 tests: more vgcfgrestore testing
Check '-l' and archiving.
2014-05-21 23:14:41 +02:00
Zdenek Kabelac
a4ac21aded cleanup: make error message more readable 2014-05-21 23:14:41 +02:00
Zdenek Kabelac
aafd7c878c cleanup: indent 2014-05-21 23:14:41 +02:00
Zdenek Kabelac
3ac7d2deb4 vgcfgrestore: return invalid cmd line
When error is detected on command line options, return '3'.
2014-05-21 23:14:41 +02:00
Zdenek Kabelac
9c4953df1b tests: restore disable_dev behavior
Notify needs to go  with major:minor before device disappears.
2014-05-21 16:59:38 +02:00
Zdenek Kabelac
8db67d2aff tests: skips on unsupported systems 2014-05-21 16:48:06 +02:00
Peter Rajnoha
092659644a man: missing space between option name and value name 2014-05-21 15:51:28 +02:00
Peter Rajnoha
e78092fa3a man: more man page updates for --commandprofile and --metadataprofile split 2014-05-21 14:53:56 +02:00
Peter Rajnoha
b7431f69ed man: update lvm.conf man page for latest changes 2014-05-21 13:25:09 +02:00
Peter Rajnoha
23f9c45a1b profiles: remove default.profile and add {command,metadata}_profile_template.profile
The "default.profile" name was misleading. It's actually a helper
*template* that can be used for copying and further editing to create
a new profile.

Also, we have separate command and metadata profiles now so the templates
are separated as well - we can't mix profile settings from one group with
another - such profile is rejected by lvm tools.
2014-05-21 12:36:52 +02:00
Zdenek Kabelac
c34c33d9ba tests: notify loop needs maj:min
Missed in previous commit.
2014-05-21 12:00:32 +02:00
Zdenek Kabelac
cbdb8fa589 tests: notify lvmetad after udev transation
Delay the udev notification until after the udev transaction
is finished - since otherwise some devices may still
be found missing until the transaction has finished.
2014-05-21 11:43:24 +02:00
Peter Rajnoha
97c91b020e man: update dumpconfig man page for latest changes 2014-05-21 11:01:19 +02:00
Zdenek Kabelac
1ddc68ccd7 man: call installers only when there are set vars.
When MAN7, MAN8CLUSTER or MAN8SYSTEMD_GENERATORS are empty,
don't call the respective INSTALL tools.
On older systems they even generate an error causing the
makefile target to abort.
2014-05-21 10:52:33 +02:00
Peter Rajnoha
fca77a1ea4 cleanup: remove duplicate --commandprofile reference in dumpconfig's help string 2014-05-21 10:30:02 +02:00
Dongmao Zhang
e9db11f387 systemd: use umask 022 for generated systemd units by lvm2-activation-generator 2014-05-21 10:12:02 +02:00
Peter Rajnoha
7d7c1f2025 systemd: install lvm2-cluster-activation script as executable 2014-05-21 09:45:29 +02:00
Zdenek Kabelac
b57b4db889 tests: checking mirror_remove_missing
FIXME:

Seems like conversion of log is not supported in clustered VG
and needs to be fixed.
2014-05-20 22:50:52 +02:00
Zdenek Kabelac
f919a255b7 tests: lvconvert needs --yes 2014-05-20 22:50:33 +02:00
Zdenek Kabelac
e538354e28 spec: configurable cache build
Install lvmcache man page when being configured with cache support.
Install lvmthin man page only with thin support.
2014-05-20 21:50:30 +02:00
Zdenek Kabelac
08fd244506 tests: rebuild paths when Makefile is updated 2014-05-20 21:50:30 +02:00
Zdenek Kabelac
2e9792121f tests: add have_cache and have_raid
Need to be aware of build options when the system is
configured without raid or cache support.
2014-05-20 21:50:30 +02:00
Zdenek Kabelac
7f92c3a13e tests: wait before down-convert 2014-05-20 21:50:29 +02:00
Zdenek Kabelac
205be24768 tests: update lvconvert test
Split raid test to separate file
Add --yes flag to automated testing
2014-05-20 21:50:29 +02:00
Zdenek Kabelac
16424fed54 thin: improve lvconvert messages
Add more info into printed message.
2014-05-20 21:50:29 +02:00
Zdenek Kabelac
d1d50d4023 cleanup: use print when displaying info
Use error or warn only when we really have some problem in the code.
2014-05-20 21:50:29 +02:00
Zdenek Kabelac
54184f92ac cleanup: indent 2014-05-20 21:50:28 +02:00
Zdenek Kabelac
2941cffd2c cleanup: unneeded initialization
Move or drop initialization where it is not needed.
2014-05-20 21:50:28 +02:00
Zdenek Kabelac
65b3fe9b05 man: cleanup style 2014-05-20 21:50:28 +02:00
Zdenek Kabelac
9fd0be2a85 debug: fix backtracing 2014-05-20 21:50:28 +02:00
Zdenek Kabelac
c70c100cce lvconvert: check ret code of mirror_remove_missing
When mirror_remove_missing() fails, stop repairing mirror.
2014-05-20 21:49:42 +02:00
Zdenek Kabelac
bbf4b2c1c9 thin: lvconvert warn before conversion
Warn the user before converting a volume to a different type.

  WARNING: Converting vg/lvol0 logical volume to pool's meta/data volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

Since the content of the volume is lost, we have to query the user to confirm
such an operation.  If the user is 100% sure, they may use '--yes' to avoid prompts.
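
A hedged example of the conversion this guards (the LV names are invented;
'--yes' skips the new prompt):

  lvconvert --yes --thinpool vg/lvol0 --poolmetadata vg/lvol1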
2014-05-20 21:48:47 +02:00
Peter Rajnoha
83f468be4e tests: update profiles.sh test for latest changes 2014-05-20 16:27:09 +02:00
Peter Rajnoha
9c937e7d54 dumpconfig: add --type profilable-command/profilable-metadata, --metadataprofile/--commandprofile
The dumpconfig now understands --commandprofile/--profile/--metadataprofile

The --commandprofile and --profile functionality is almost the same,
with only one difference: the --profile is just used
for dumping the content, it's not applied to the command itself
(while the --commandprofile profile is applied like it is done for
any other LVM command).

We also allow --metadataprofile for dumpconfig - dumpconfig *does not*
touch VG/LV and metadata in any way so it's OK to use it here (just for
dumping the content, checking the profile validity etc.).

The validity of the profile can be checked with:
      dumpconfig --commandprofile/--profile/--metadataprofile --validate

...depending on the profile type.

Also, mention --config in the dumpconfig help string so users know
that  dumpconfig handles this too (it did even before, but it was not
documented in the help string).
2014-05-20 16:27:07 +02:00
Peter Rajnoha
9e3e4d6994 config: differentiate command and metadata profiles and consolidate profile handling code
- When defining configuration source, the code now uses separate
  CONFIG_PROFILE_COMMAND and CONFIG_PROFILE_METADATA markers
  (before, it was just CONFIG_PROFILE that did not make the
  difference between the two). This helps when checking the
  configuration if it contains correct set of options which
  are all in either command-profilable or metadata-profilable
  group without mixing these groups together - so it's a firm
  distinction. The "command profile" can't contain
  "metadata profile" and vice versa! This is strictly checked
  and if the settings are mixed, such profile is rejected and
  it's not used. So in the end, the CONFIG_PROFILE_COMMAND
  set of options and CONFIG_PROFILE_METADATA are mutually exclusive
  sets.

- Marking configuration with one or the other marker will also
  determine the way these configuration sources are positioned
  in the configuration cascade which is now:

  CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA -> CONFIG_FILE/CONFIG_MERGED_FILES

- Marking configuration with one or the other marker will also make
  it possible to issue a command context refresh (will be probably
  a part of a future patch) if needed for settings in global profile
  set. For settings in metadata profile set this is impossible since
  we can't refresh cmd context in the middle of reading VG/LV metadata
  and for each VG/LV separately because each VG/LV can have a different
metadata profile assigned and it's not possible to change these
  settings at this level.

- When command profile is incorrect, it's rejected *and also* the
  command exits immediately - the profile *must* be correct for the
  command that was run with a profile to be executed. Before this
  patch, when the profile was found incorrect, there was just the
  warning message and the command continued without profile applied.
  But it's more correct to exit immediately in this case.

- When metadata profile is incorrect, we reject it during command
  runtime (as we know the profile name from metadata and not early
  from command line as it is in case of command profiles) and we
  *do continue* with the command as we're in the middle of operation.
  Also, the metadata profile is applied directly and on the fly on
  find_config_tree_* fn call and even if the metadata profile is
  found incorrect, we still need to return the non-profiled value
  as found in the other configuration provided or default value.
  To exit immediately even in this case, we'd need to refactor
  existing find_config_tree_* fns so they can return error. Currently,
  these fns return only config values (which end up with default
  values in the end if the config is not found).

- To check the profile validity before use to be sure it's correct,
  one can use:

    lvm dumpconfig --commandprofile/--metadataprofile ProfileName --validate

  (the --commandprofile/--metadataprofile for dumpconfig will come
   as part of the subsequent patch)

- This patch also adds a reference to --commandprofile and
  --metadataprofile in the cmd help string (which was missing before
  for the --profile for some commands). We do not mention --profile
  now as people should use --commandprofile or --metadataprofile
  directly. However, the --profile is still supported for backward
  compatibility and it's translated as:

    --profile == --metadataprofile for lvcreate, vgcreate, lvchange and vgchange
                 (as these commands are able to attach profile to metadata)

    --profile == --commandprofile for all the other commands
                (--metadataprofile is not allowed there as it makes no sense)

- This patch also contains some cleanups to make the code handling
  the profiles more readable...
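
A hedged illustration of the split in practice (the profile names are
invented for the example):

  lvs --commandprofile custom_report        # applied only to this command run
  vgchange --metadataprofile custom_thin vg # referenced from the VG metadata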
2014-05-20 16:21:48 +02:00
Peter Rajnoha
c5fbd2c59f config: add CFG_PROFILABLE_METADATA flag
Mark profilable settings with a separate CFG_PROFILABLE_METADATA
flag where the profile can be attached to VG/LV. This makes it possible
to differentiate global command-profilable settings (CFG_PROFILABLE flag)
and contextual metadata-profilable (per VG/LV) settings (CFG_PROFILABLE_METADATA flag).
2014-05-19 16:31:15 +02:00
Peter Rajnoha
22cab9c481 commands: do not register profile_ARG for lvcreate/lvchange separately
The --profile is globally available for all commands.
2014-05-19 16:30:49 +02:00
Peter Rajnoha
24f32721a9 dumpconfig: fix dumpconfig --type diff used in lvm shell as second and later command
The dumpconfig reuses existing config_def_check results in case
the check is done during general lvm command context initialization
(when enabled by config/checks=1) so dumpconfig does not need to run
the same check again during its execution, hence saving some time.

However, we don't check for differences from defaults during general
lvm command initialization as it's useless at that time. It makes
sense only in case when such a check is directly requested (like in
the case of lvm dumpconfig --type diff). We need to take care that
the reused information was already produced with this "diff" checking
before and if not, we need to force the check so the check status also
gathers the new "diff" info now.

Also, do not do diff checking for any other dumpconfig command that
is run after dumpconfig --type diff.
2014-05-19 15:41:25 +02:00
Peter Rajnoha
9a324df3b3 config: fix incorrect profile initialization on cmd context refresh
When cmd refresh is called, we need to move any already loaded profiles
to profiles_to_load list which will cause their reload on subsequent
use. In addition to that, we need to take into account any change
in config/profile configuration setting on cmd context refresh
since this setting could be overridden with --config.

Also, when running commands in the shell, we need to remove the
global profile used from the configuration cascade so the profile
is not incorrectly reused next time when the --profile option is
not specified anymore for the next command in the shell.

This bug only affected profile specified by --profile cmd line
arg, not profiles referenced from LVM metadata.
2014-05-19 15:39:55 +02:00
Peter Rajnoha
c42f72867a config: attach cft_check_handle to each config tree instead of global cmd_context
Before, the cft_check_handle used to direct configuration checking
was part of cmd_context. It's better to attach this as part of the
exact config tree against which the check is done. This patch moves
the cft_check_handle out of cmd_context and it attaches it to the
config tree directly as dm_config_tree->custom->config_source->check_handle.

This change makes it easier to track the config tree check results
and provides less space for bugs as the results are directly attached
to the tree and we don't need to be cautious whether the global value
is correct or not (and whether it needs reinitialization) as it was
in the case when the cft_check_handle was part of cmd_context.
2014-05-19 15:38:04 +02:00
Peter Rajnoha
ff9d27a1c7 config: add CONFIG_FILE_SPECIAL config source id
Add CONFIG_FILE_SPECIAL config source id to make a difference between
real configuration tree (like lvm.conf and tag configs) and special purpose
configuration tree (like LVM metadata, persistent filter).

This makes it easier to attach correct customized data to the config
tree that is created out of the source then.
2014-05-19 15:37:41 +02:00
Zdenek Kabelac
ecccb75904 tests: drop nosync
Mirrors currently do not support any conversion
when they are created with no-sync.
2014-05-19 13:22:46 +02:00
Zdenek Kabelac
477351fc4d man: lvmcache
separate man page for lvm cache
2014-05-18 20:09:47 +02:00
Zdenek Kabelac
91284bd9b9 cleanup: device extent_size first 2014-05-18 20:08:07 +02:00
Zdenek Kabelac
58777756e8 debug: backtrace error path
Add backtrace for 'n' answer.
2014-05-18 20:07:24 +02:00
Zdenek Kabelac
b73a786755 man: lvmcache
Migrate cache description into  man(7) entry
(like lvmthin).
2014-05-15 12:13:24 +02:00
Zdenek Kabelac
11bedf1baf display: print skipped prompt
Since decisions in the silent mode may not always be obvious,
print the skipped prompt with the answer 'n'.

Also document the '-qq' behaviour (a single -q only silences
logging, while -qq sets silent mode).
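
A hedged illustration (the LV name is invented): with -qq the removal
prompt below is skipped and answered 'n', so nothing is removed:

  lvremove -qq vg/lvol0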
2014-05-15 12:11:35 +02:00
Zdenek Kabelac
76c06c7252 tests: speedup
Avoid some expensive raid/mirror synchronization when testing
just allocation sizes.
Use lv_attr_bit.
2014-05-15 12:08:35 +02:00
Zdenek Kabelac
309201a53b man: misc updates 2014-05-15 12:08:35 +02:00
Peter Rajnoha
7c86131233 report: export DM_REPORT_FIELD_RESERVED_NAME_{HELP,HELP_ALT} and show help on '<lvm_command> -O help'
Share DM_REPORT_FIELD_RESERVED_NAME_{HELP,HELP_ALT} between libdm and
any libdm user to handle reserved field names, in this case the virtual
field name to show help instead of failing on unrecognized field.
The libdm user also needs to check the field name so it can fire
proper code in this case (cleanup, exit etc.).
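
Hedged examples of the resulting behaviour in the lvm tools (any
reporting command should do; lvs is just an example):

  lvs -o help    # list the fields available for output
  lvs -O help    # list the fields accepted for sorting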
2014-05-15 10:58:14 +02:00
Alasdair G Kergon
5684cfcb1c report: Add metadata_percent to lvs_cols. 2014-05-15 08:32:27 +01:00
David Teigland
42d7409da2 man: lvmthin cover snapshot merge and xfs
also fix a couple inconsistent example values.
2014-05-14 15:15:35 -05:00
David Teigland
044b796800 man: more lvmthin discard references
and some fixes from Tom.
2014-05-14 15:15:35 -05:00
Alasdair G Kergon
3b989e317f allocation: Fix alloc anywhere with parity.
Take account of parity areas with alloc anywhere in
_calc_required_extents.  Extents beyond area_count were treated
incorrectly as mirror logs.
2014-05-14 16:25:43 +01:00
Alasdair G Kergon
422b3b0fb5 man: Fix man7 dir dependency.
https://bugs.gentoo.org/510202
2014-05-13 11:18:13 +01:00
Zdenek Kabelac
85cf5a23d2 cleanup: improve error message
Update an impossible-to-happen error message.
2014-05-13 10:33:17 +02:00
Zdenek Kabelac
7b766b0648 tests: updates
Move thin test out of listing.
2014-05-13 10:28:55 +02:00
Zdenek Kabelac
6f057e3c66 tests: replace hostname call 2014-05-12 16:24:40 +02:00
Zdenek Kabelac
c07fb7678e cleanup: drop unused header 2014-05-12 16:24:40 +02:00
Zdenek Kabelac
506ade802b cleanup: cast int to typedef 2014-05-12 16:24:40 +02:00
Zdenek Kabelac
90c44f5371 conf: document new thin_check option 2014-05-12 16:24:40 +02:00
Zdenek Kabelac
8b95c82fed coverity: catch unwanted path
We validate this path already earlier.
2014-05-12 16:24:39 +02:00
Zdenek Kabelac
2e1192f691 configure: improve needs_check thin_check test
Improve testing for needs_check - when old tools are installed,
issue proper warning from configure, but stop sending needs_check
flag to the old tool.
2014-05-12 16:24:39 +02:00
Zdenek Kabelac
5b787a24f0 configure: drop siginterrupt
Not used anymore
2014-05-12 16:24:39 +02:00
Zdenek Kabelac
e416d84e10 cleanup: use enum return codes 2014-05-07 14:17:46 +02:00
Zdenek Kabelac
2cc02c570e cleanup: constify pointers 2014-05-07 14:17:46 +02:00
Zdenek Kabelac
9845f8c767 cleanup: drop unused assigns 2014-05-07 14:17:46 +02:00
Zdenek Kabelac
9bccaf7ae4 cleanup: missed conversion to dm_malloc/free usage
Few missed unconverted dm_malloc/free calls.
2014-05-07 14:17:46 +02:00
Zdenek Kabelac
d3e68c8a71 cleanup: cosmetics.
Initialized attrs so analyzers are less confused
(since currently our method calls should always initialize attrs on
return).
2014-05-07 14:17:46 +02:00
Zdenek Kabelac
d88fab8d3a cleanup: drop uneeded headers 2014-05-07 14:17:45 +02:00
Zdenek Kabelac
bcd6deea18 coverity: ignore ret val
Since we intentionally do not want to check them,
cast result values to void.
2014-05-07 14:17:12 +02:00
Zdenek Kabelac
d11617864a coverity: error for undefined origin
If the origin were missing here, it would
be an internal error - since these values are validated
earlier.
2014-05-07 14:16:18 +02:00
Zdenek Kabelac
a8042f33d0 coverity: check for profile
Ensure str is not NULL for analyzer.
2014-05-07 14:15:52 +02:00
Zdenek Kabelac
48a8cf28f7 cache: avoid expression overflow
Cast data_extents to 64bit so calculation is in 64b arithmetic.
2014-05-07 14:14:54 +02:00
Zdenek Kabelac
e585a6bbcf signals: better nesting support
Support up to 3 levels of nested signal blocking.
As of today - the code blocks signals immediately when it opens
a VG in read-write mode - this however makes current prompt usage
partially unusable since the user may not 'break' the command
during a prompt (something most users would expect).

Until a better fix for prompting is implemented, put in support
for signal nesting - thus when a prompt enables signal acceptance,
make it possible to really break the command at this point.
2014-05-07 14:09:33 +02:00
Zdenek Kabelac
31ac200a37 debug: add more debug message for signal handling
Add log_sys_debug for eventual logging of system errors.
(Using debug level, since currently signal handling functions
do not fail when any error is encountered).
2014-05-07 14:07:13 +02:00
Zdenek Kabelac
04b29a3587 locking: use sigaction signal handling
Use the sigint_allow/restore functions instead of duplicating code
and switch to using only sigaction-based signal handling.
2014-05-07 14:01:13 +02:00
Peter Rajnoha
9d64573da1 make: fix commit 1756bf6
"DIST_TARGET" should be "DISTCLEAN_TARGET"
2014-05-07 12:13:36 +02:00
Peter Rajnoha
1899783aa6 cleanup: fix compiler warning
locking/file_locking.c:162:2: warning: implicit declaration of function ‘init_signals’
2014-05-06 14:38:38 +02:00
Jonathan Brassow
8b49d61d83 logging: Add LCK_REVERT_MODE to flags printed by decode_flags()
The decode_flags() function does not yet know about the
LCK_REVERT_MODE flag.
2014-05-05 14:30:09 -05:00
Alasdair G Kergon
09064cc2db signals: Add init_signals. 2014-05-01 20:31:19 +01:00
Alasdair G Kergon
2eed136f0f signals: Move sigint handling out to lvm-signal. 2014-05-01 20:07:17 +01:00
Alasdair G Kergon
29a3fbf067 locking: Separate out flock and signal code. 2014-05-01 17:37:14 +01:00
Peter Rajnoha
239ba5bb04 systemd: use lvm binary insetad of command symlink in lvm2-pvscan.service 2014-04-30 15:21:25 +02:00
Peter Rajnoha
81b096af34 systemd: make sysinit.target to pull in lvm2-lvmetad.socket, not sockets.target
The sysinit.target is ordered even before sockets.target and there
may be situations in which the lvmetad is needed early, for example
in rescue.target to activate some LVs on which mountpoints reside.
Also, like in the case of rescue.target, the sockets.target is not
pulled in (unless some other service pulls it in explicitly).

See also: https://bugzilla.redhat.com/show_bug.cgi?id=1087586#c26
for the summary of the problem.
2014-04-30 14:52:10 +02:00
Zdenek Kabelac
4e559103d5 tests: lets the test continue
unwanted start of dmeventd was caused by seg_monitor status.
2014-04-30 10:26:31 +02:00
Zdenek Kabelac
9401827919 cleanup: modules_needed only for devmapper
Drop compilation of the modules_needed and add_target_line
functions when compiled without devmapper support.
Clean up the surrounding ifdefs.
2014-04-30 10:26:30 +02:00
Zdenek Kabelac
62e8dd4f6e cleanup: indent 2014-04-30 10:26:30 +02:00
Zdenek Kabelac
cbdf63fdd2 cleanup: indent in devmapper-event
Drop header inclusion - this file is already included.
Shorten code.
2014-04-30 10:26:30 +02:00
Zdenek Kabelac
b56aef1915 unknown: add_target_line is not needed
Leave addition of unknown segment to table as internal error.
Do not replace unknown segment with error device.
2014-04-30 10:26:30 +02:00
Zdenek Kabelac
1756bf6c63 makefile: fix regression
Commit 1af05a7a16 was incorrect.
Files generated by configure can only be distclean-ed.
(in-release fix)
2014-04-30 10:26:29 +02:00
Zdenek Kabelac
62ad6dee18 lv: show X attr when lv_info fails
Print 'X' also when lv_info() fails.
(i.e. compilation with --disable-ioctl)
2014-04-30 10:26:29 +02:00
Zdenek Kabelac
816cc94ac1 devmapper-event: always initialize timeout
Always pass fully initialized timeval struct to select.
2014-04-30 10:26:29 +02:00
Zdenek Kabelac
675fcfe9b7 devmapper: fix compilation without devmapper
Fix compilation when configured with --disable-devmapper option.
2014-04-30 10:26:29 +02:00
Zdenek Kabelac
905d4cda7a libdm: cleanup compilation without DM_IOCTLS 2014-04-30 10:26:29 +02:00
Zdenek Kabelac
ad77fb50bd configure: corrected ioctl option
The correct name for the disable-ioctl option is --disable-ioctl
(--disable-driver never worked).
Also sort the script-generated files alphabetically.
2014-04-30 10:26:28 +02:00
Zdenek Kabelac
517b002648 display: check for dmeventd support
When querying for dmeventd monitoring status, check first
if lvm2 is configured to monitor, to avoid an unwanted start
of the dmeventd process just for answering the monitoring status.
2014-04-30 10:26:26 +02:00
Alasdair G Kergon
b1f765d72a pvremove: Catch CTRL-c during prompts. 2014-04-29 08:16:28 +01:00
Zdenek Kabelac
26989e0cd7 tests: improve coverage
Test more code paths for lvscan & lvdisplay
2014-04-28 12:42:57 +02:00
Zdenek Kabelac
d8214cb154 cleanup: put all tests within switch
No reason to check for VALID in extra if.
2014-04-28 12:42:56 +02:00
Zdenek Kabelac
d1aba7ccf6 lvscan: drop test for snapshosts
When showing the ACTIVE status for a snapshot's origin,
avoid testing all its snapshots - it's not useful
to tell the user the origin is inactive, while it's clearly
available and running - just because one of its snapshot legs
is invalid...
2014-04-28 12:42:53 +02:00
Zdenek Kabelac
4c405a9b49 thin: move segment info display to correct code section
Relocate info from thin pool and thin volume segments
to proper code section for segments.
Add discards and thin count status info.

Info is shown with  'lvdisplay --maps' (like for other segments).
2014-04-28 12:41:25 +02:00
Zdenek Kabelac
71314a9905 thin: display info when -tpool is running
For percentage display we need -tpool - so check for layered
device presence here instead of plain pool device.
Also update 'info' - so when pool is 'available' we
display open count for -tpool device instead of mostly
irrelevant pool.
TODO: Maybe we should actually display this open info always?
(even when just -tpool is available, but pool is not)
2014-04-28 12:40:17 +02:00
Zdenek Kabelac
91a8e4a3d8 display: show monitoring status
When displaying segments  (lvdisplay --maps)
show monitoring status when supported by segment.
2014-04-28 12:39:03 +02:00
Zdenek Kabelac
e6168b8d70 display: use Virtual for virtual LV
Emphasize virtual extents for virtual LVs and for
those use 'Virtual extents' instead of 'Logical extents',
so it's immediately visible which extents have a
straightforward physical backend.
2014-04-28 12:37:50 +02:00
Peter Rajnoha
5b28cbd7c2 cleanup: _move_pv is static
metadata/metadata.c:363:5: warning: no previous prototype for '_move_pv' [-Wmissing-prototypes]
2014-04-28 12:11:44 +02:00
Peter Rajnoha
4360fdf89c libdevmapper: add dm_units_to_factor for size unit parsing
Actually moving the existing code from LVM to libdm for reuse.
2014-04-28 10:25:43 +02:00
Petr Rockai
75d399800a NIX: Use VM images with the correct root module list. 2014-04-26 13:46:25 +02:00
Jonathan Brassow
3bce3ad52a test: Add the new vgsplit RAID test file forgotten in the last commit 2014-04-25 16:59:09 -05:00
Jonathan Brassow
c671be434c test: Move the RAID vgsplit test into a separate file 2014-04-25 16:57:43 -05:00
Jonathan Brassow
3c4234f825 vgsplit: Make RAID 4/5/6 fail cleanly when too few PV specified
While the 'raid1/10' segment types were being handled inadvertently
by '_move_mirrors()', the parity RAIDs were not being properly checked
to ensure that the user had specified all necessary PVs when moving
them.  Thus, internal errors were being triggered when only part of
a RAID LV was moved to the new VG.  I've added a new function,
'_move_raid()', which properly checks over any affected RAID LVs and
ensures that all the necessary PVs are being moved.
2014-04-25 16:24:50 -05:00
Jonathan Brassow
4dffb9fca9 test/vgsplit-operation.sh: Add vgsplit tests for RAID
vgsplit of RAID volumes was problematic because the metadata area
and data areas were always on the same PVs.  This problem is similar
to one that was just fixed for mirrors that had log and images sharing
a PV (commit 9ac858f).  We can now add RAID LVs to the tests for
vgsplit.

Note that there still seems to be an issue when specifying an
incomplete set of PVs when moving RAID LVs.
2014-04-25 16:22:40 -05:00
Jonathan Brassow
76687f4cac WHATS_NEW: Add message for commit 9ac858f
WHATS_NEW for commit:
9ac858f vgsplit: Make vgsplit work on mirrors with leg and log on same PV
2014-04-25 14:55:32 -05:00
Jonathan Brassow
9ac858fe6b vgsplit: Make vgsplit work on mirrors with leg and log on same PV
Given a named mirror LV, vgsplit will look for the PVs that compose it
and move them to a new VG.  It does this by first looking at the log
and then the legs.  If the log is on the same device as one of the mirror
images, a problem occurs.  This is because the PV is moved to the new VG
as the log is processed and thus cannot be found in the current VG when
the image is processed.  The solution is to check and see if the PV we are
looking for has already been moved to the new VG.  If so, it is not an
error.
2014-04-25 14:53:34 -05:00
Alasdair G Kergon
0ee9d59b48 test: configurable write timeout
Hard-coded 3 minutes is far too short when investigating problems.
2014-04-24 22:44:22 +01:00
Petr Rockai
448af9ff0b NIX: Fix failure mode for "make check". 2014-04-24 21:35:31 +02:00
Alasdair G Kergon
9676ee9ba9 test: Fix default.profile path.
It's a generated file.
2014-04-24 18:52:29 +01:00
Zdenek Kabelac
22037ee328 tests: fix creation of scsi debug
Use proper '||' test form to avoid unwanted test abort
on failure.
(i.e. running failing test profiles-thin.sh on a real /dev)
2014-04-24 14:42:27 +02:00
Peter Rajnoha
8e8a47143f config: use devices/ignore_suspended_devices=0 by default
ignore_suspended_devices=0 is already used in lvm.conf we distribute,
but it was still "1" in the code (so it was used when lvm.conf value
was not defined). It should be "0" too.
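
The corresponding lvm.conf fragment, matching the distributed default
(shown only for illustration):

  devices {
      ignore_suspended_devices = 0
  }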
2014-04-24 12:12:39 +02:00
Peter Rajnoha
2b09def606 man: minor fixes in lvmetad man page
- better to add a reference to lvm dumpconfig --type default
  than stating that lvmetad is not enabled by default
- substitute #DEFAULT_PID_DIR# with the concrete value
2014-04-23 14:34:42 +02:00
Marian Csontos
795944f178 test: add lvresize tests
- test for Bug 1088153
- test lvextend does not reduce nor lvreduce extend LV
2014-04-22 12:53:30 +02:00
Zdenek Kabelac
7a1777302f cleanup: dmeventd simplify restart message parsing
Since we already check every character in the message,
skip the extra call to strlen, and do the implicit
message length checking.
2014-04-18 16:53:29 +02:00
Zdenek Kabelac
1f701c7bf6 cleanup: dmeventd drop setting of size
Size is not used when msg->data is NULL.
2014-04-18 16:52:59 +02:00
Zdenek Kabelac
78c6dea48e cleanup: dmeventd improve _handle_request
Let the compiler resolve cmd lookup and leave it to optimize it as it
needs.
2014-04-18 16:52:45 +02:00
Zdenek Kabelac
0927605ec3 cleanup: dmeventd improve _clien_write code
Switch to allocating the buffer from the heap, since it might potentially be
bigger when an extremely large set of volumes is monitored.
In case of allocation failure send an ENOMEM message.
Also implicitly ignore msg->size when msg->data is NULL.
2014-04-18 16:52:35 +02:00
Zdenek Kabelac
3febd2c9d4 cleanup: dmeventd set next_time when registering
Don't change next_time, when thread is already registered.
2014-04-18 16:52:11 +02:00
Zdenek Kabelac
dc21bbfabd cleanup: dmeventd improve _get_status
Use dm_asprintf() directly to allocate the buffer with the message,
and properly detect failure of the snprintf() replacement,
which also returns -1 on error.
2014-04-18 16:51:54 +02:00
Zdenek Kabelac
0503af8466 cleanup: dmeventd simplify buffer write loop 2014-04-18 16:50:55 +02:00
Zdenek Kabelac
13d05211d0 cleanup: dmeventd simplify status processing
Since we always know the string length, use the simpler memcpy.
2014-04-18 16:38:52 +02:00
Zdenek Kabelac
4fb588c34e cleanup: dmeventd reorder _fill_device_data
Just simplify the function.
2014-04-18 16:38:51 +02:00
Zdenek Kabelac
6b701c3a48 cleanup: dmeventd abstract lvm2cmd interface
Keep the lvm2cmd interface hidden inside dmeventd_lvm
and use regular 1/0 return codes; this way we may
avoid using lvm2cmd.h in other lvm2 plugins.
2014-04-18 16:38:51 +02:00
Zdenek Kabelac
6448428d05 cleanup: add some comment indents...
Just cleanup things
2014-04-18 16:38:51 +02:00
Zdenek Kabelac
91eb8927fd cleanup: skip zeroing of cleared areas
Zalloc mem is already zeroed.
2014-04-18 16:38:51 +02:00
Zdenek Kabelac
20179523e2 cleanup: set _REENTRANT in header
Use same way of setting _REENTRANT as in other
files - set it in the first included header file
(clvmd-common.h)
2014-04-18 16:38:50 +02:00
Zdenek Kabelac
451a168bf8 cleanup: drop inclusion of devmap - merge 2014-04-18 16:38:50 +02:00
Zdenek Kabelac
559c003ee2 cleanup: reduce inclusion of unnecessary headers
Remove those includes which are not needed by the .c files
or are already included because the headers already need them.
2014-04-18 16:38:50 +02:00
Zdenek Kabelac
589983a257 cleanup: include stdarg.h where needed.
Avoid dependency on implicit inclusion of stdarg.h with
libdevmapper.h.
2014-04-18 16:38:50 +02:00
Zdenek Kabelac
b7741f0a83 libdaemon: header cleanup
Ensure daemon-io.h is used as a generic header included
with configure defines before other headers.
(In the future all lvm2 libraries should settle on a single lib.h header.)
Rename a couple of defines to better match the header file names.
2014-04-18 16:38:49 +02:00
Zdenek Kabelac
e552824dc0 makefiles: move subdir into same section
Just shift few lines
2014-04-18 16:38:49 +02:00
Zdenek Kabelac
07274f3dd4 makefiles: drop linking of daemon libs to plugins
The daemon lib is linked into the lvm2cmd library.
2014-04-18 16:38:49 +02:00
Zdenek Kabelac
2b748e3118 makefiles: compile files on make
Make install should install already compiled/generated files.
2014-04-18 16:38:49 +02:00
Zdenek Kabelac
1af05a7a16 makefiles: clear targets with make clean
Make clean usually cleans all compiled files.
Make distclean cleans configure-generated files.
2014-04-18 16:38:48 +02:00
Zdenek Kabelac
4955dae4be makefiles: wait till include is populated
Since libdaemon needs the configure.h header, wait for links.
2014-04-18 16:38:48 +02:00
Zdenek Kabelac
db0045dfc9 devmapper-event: always initialize timeout
Before calling select, always set all struct members of timeout.
2014-04-18 16:38:48 +02:00
Zdenek Kabelac
08e7de986c dmeventd: check for list size within lock
Move check for _thread_registry list size behind mutex.
Use alloca() instead of buffer[count] (they are the same anyway)
2014-04-18 16:38:48 +02:00
Zdenek Kabelac
0e05e1cf6c asprintf: fix test for error result
On error this function returns -1. Since the functions however
do not propagate the error upward, it's rather a cleanup change.
2014-04-18 16:38:47 +02:00
Zdenek Kabelac
0b6d6bfb77 thin: dmeventd plugins support more minors
The kernel supports up to 1M (20-bit) minors.
TODO: convert to a hash to reduce memory requirements.
2014-04-18 16:38:47 +02:00
Zdenek Kabelac
47a60369a0 unknown: fix mempool used for name allocation
Use cmd libmem mempool for name allocation, since mem mempool
is released after each clvmd command.
2014-04-18 16:38:47 +02:00
Alasdair G Kergon
b5f8f452ac tools: Add --readonly support.
Offer lock-free access to display virtual machine or clustered VG metadata
while it might be in use.
2014-04-18 02:46:34 +01:00
Alasdair G Kergon
17e304e0ac metadata: Fix unlock on VG recovery error path.
If lock conversion failed it tried to unlock VG that was no longer locked.
2014-04-18 02:27:16 +01:00
Alasdair G Kergon
177ece01a9 reports: Use X for unknown LV attr when no dm.
It's safer not to tell people an LV is inactive when we aren't sure.
2014-04-18 02:23:39 +01:00
Alasdair G Kergon
e8a3ba1865 pvscan: Use lvmetad_used().
Config variables that are processed during setup prior to calling into
particular tools must not be accessed directly afterwards in case the
values already got overridden.

_process_config() already used the tests I'm removing here to call
lvmetad_set_active() and set up lvmetad_used().
2014-04-18 02:13:46 +01:00
Peter Rajnoha
702180b30c configure: use configure's --enable-udev-systemd-background-jobs by default
This should be the preferred way of configuring lvm2 for udev/systemd,
since otherwise one can end up with the processes run from udev (the
pvscan we run for lvmetad updates on events) being killed prematurely,
which can leave LVM volumes not activated in the end.
2014-04-16 11:33:44 +02:00
Peter Rajnoha
18caa562fe lvmdump: list also inactive units for lvmdump -s 2014-04-15 15:43:20 +02:00
Peter Rajnoha
a6763c64a7 lvmdump: add -s to gather system info and context (currently systemd-related only)
This is the sort of info we always ask people to retrieve when
inspecting problems in a systemd environment, so let's have this
as part of lvmdump directly.

The -s option does not need to be bound to systemd only. We could
add support for initscripts or any other system-wide/service tracking
info that can help us with debugging problems.
2014-04-15 15:27:30 +02:00
Peter Rajnoha
704609b17e profiles: comment out thin_pool_chunk_size in default.profile
By default, thin_pool_chunk_size is calculated automatically.
When it is defined explicitly, the automatic calculation is disabled.
So to be more precise here, we comment it out in default.profile.

Also, "lvm dumpconfig --type profilable" was used here to generate
the default.profile content. This will be done automatically in the
future once we have the infrastructure for this in (see also
https://bugzilla.redhat.com/show_bug.cgi?id=1073415).
2014-04-15 10:15:38 +02:00
Alasdair G Kergon
8980592514 alloc: Correct existing use of positional fill.
Perform two allocation attempts with cling if maximise_cling is set,
first with then without positional fill.

Avoid segfaults from confusion between positional and sorted sequential
allocation when the number of stripes varies, as reported here:
https://www.redhat.com/archives/linux-lvm/2014-March/msg00001.html
2014-04-15 02:34:35 +01:00
Alasdair G Kergon
1bf4c3a1fa alloc: Introduce A_POSITIONAL_FILL.
Set A_POSITIONAL_FILL if the array of areas is being filled
positionally (with a slot corresponding to each 'leg') rather
than sequentially (with all suitable areas found, to be sorted
and selected from).
2014-04-15 01:13:47 +01:00
Alasdair G Kergon
c9a8264b8b alloc: Access alloc_parms from alloc_state.
alloc_parms is constant while allocating.
2014-04-15 01:05:34 +01:00
Zdenek Kabelac
3c82a37aee tests: correct test condition
Abort loop when PIDFILE is gone
2014-04-14 14:55:14 +02:00
Zdenek Kabelac
92ee51a4c3 tests: check there is really pvmove lv
Since the kill may take a varying amount of time
(especially when running with valgrind),
check that the LV really is being pvmoved.

Restore the initial restart of clvmd - it's currently
broken at various moments - basically a killed lvm2
command may leave clvmd in a confusing state, leading
to reports of internal errors.
2014-04-14 13:02:28 +02:00
Zdenek Kabelac
4d48577ab9 tests: implement lv_attr_bit
Add an easy check function for checking lv_attr bits.
2014-04-14 13:02:28 +02:00
Zdenek Kabelac
1d803ee980 cleanup: correct indent level 2014-04-14 13:02:28 +02:00
Zdenek Kabelac
d896abc705 cleanup: clvmd drop unused enum state 2014-04-14 13:02:27 +02:00
Zdenek Kabelac
e2f194952a cleanup: clvmd reindent local_pipe_callback
Move the !node_up check to the front and reindent
the rest of the function to the left.
2014-04-14 13:02:27 +02:00
Zdenek Kabelac
eccc50d861 clvmd: use thread-safe ctime_r when debugging
Use the thread-safe version of ctime().
TODO: should probably be replaced with strftime().
2014-04-14 13:02:25 +02:00
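A self-contained sketch of the difference (the trimming mirrors the daemon's debug line, but the code is illustrative): ctime() returns a pointer to static storage shared by all threads, while ctime_r() writes into a caller-supplied buffer of at least 26 bytes.

#include <stdio.h>
#include <time.h>

int main(void)
{
        time_t now = time(NULL);
        char buf[64];           /* ctime_r() needs at least 26 bytes */

        /* Unlike ctime(), ctime_r() fills a caller-owned buffer, so
         * concurrent threads cannot overwrite each other's result. */
        if (!ctime_r(&now, buf))
                return 1;

        /* "+ 4" skips the "Www " day-name prefix and "%.15s" drops the
         * trailing year and newline - the same trimming the debug log uses. */
        printf("%.15s\n", buf + 4);
        return 0;
}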
Zdenek Kabelac
639983b6b7 clvmd: skip adding reply when finished
Before adding a new reply to the list, check
whether the reply thread has already finished.
In that case discard the message
(which would otherwise be leaked).
2014-04-14 13:01:42 +02:00
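A simplified sketch of the idea with illustrative types (the real structures live in clvmd): under the client mutex, a reply is only chained onto the list if the command has not finished yet; otherwise it is freed immediately so nothing leaks.

#include <pthread.h>
#include <stdlib.h>

struct reply {
        struct reply *next;
        char *msg;
};

struct client {
        pthread_mutex_t mutex;
        int finished;                   /* set once the command is done */
        struct reply *replies;
};

/* Queue a reply unless the command already finished; in that case
 * discard it instead of leaking it. */
static void add_reply(struct client *c, struct reply *r)
{
        pthread_mutex_lock(&c->mutex);
        if (c->finished) {
                free(r->msg);
                free(r);
        } else {
                r->next = c->replies;
                c->replies = r;
        }
        pthread_mutex_unlock(&c->mutex);
}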
Zdenek Kabelac
7236b92857 clvmd: improve mutex usage in request_timed_out
Use the mutex when accessing localsock values, so
num_replies is only checked while the thread has not yet finished.

Check for threadid prior to taking the mutex
(though this check is probably not really needed).
2014-04-14 13:00:51 +02:00
Zdenek Kabelac
7075656034 clvmd: drop reply_mutex
The added complexity of an extra reply mutex is not worth the trouble.
The only place which may slightly benefit from this mutex is the timeout
path, and since that is rather an error case, let's convert it to
localsock.mutex and keep it simple.
2014-04-14 12:59:07 +02:00
Zdenek Kabelac
6115c0d112 clvmd: set finished flag with mutex
Setting this variable needs to be protected by the mutex.
2014-04-14 12:58:28 +02:00
Zdenek Kabelac
cc0096ebdd clvmd: move mutex init and destroy
Move the pthread mutex and condition variable creation and destruction
to the correct place: right after the client memory is allocated
or just before it is released.

In the original place it was racing with the lvm thread,
which could still be unlocking the mutex while it was already
being destroyed.
2014-04-14 12:57:39 +02:00
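A hedged sketch of the lifecycle rule this enforces (structure and function names are made up for the example): create the mutex and condition variable together with the object they protect, and destroy them only right before that memory is freed, once no other thread can still touch them.

#include <pthread.h>
#include <stdlib.h>

struct client {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        /* ... per-client state protected by the members above ... */
};

static struct client *client_new(void)
{
        struct client *c = calloc(1, sizeof(*c));

        if (!c)
                return NULL;

        /* Initialize synchronization objects as soon as the client
         * memory exists, before any other thread can see it. */
        pthread_mutex_init(&c->mutex, NULL);
        pthread_cond_init(&c->cond, NULL);

        return c;
}

static void client_free(struct client *c)
{
        /* Destroy them only when no other thread can still lock or
         * signal them, immediately before releasing the memory. */
        pthread_mutex_destroy(&c->mutex);
        pthread_cond_destroy(&c->cond);
        free(c);
}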
Zdenek Kabelac
91f4e09b48 clvmd: fix test mode race
When the TEST_MODE flag is passed around the cluster,
it has been used in a thread-unprotected way, so it may have
influenced the behaviour of other lvm commands running in parallel
(activation/deactivation/suspend/resume).

Fix it by setting/querying it via a function only under the lvm mutex.
For the hold_lock/hold_unlock function calls check the lock_flags bits directly.
2014-04-14 12:55:46 +02:00
Zdenek Kabelac
bd3e44643d memlock: ignore more libraries
Extend the list of ignored libraries. Since we do not
use those libraries during suspend, skip their locking.
2014-04-14 12:53:07 +02:00
Zdenek Kabelac
84ff3ae703 pvmove: remove locked flag from error pvmove0
When pvmove0 is finished, it is temporarily replaced
with an error segment; however, in this case pvmove0 remains
unremovable if pvmove --abort is interrupted at that
moment - since it's not a pvmove anymore, and normal
lvremove can't be used to remove a LOCKED LV.
2014-04-14 12:52:32 +02:00
Zdenek Kabelac
45f45c9932 polldaemon: ret invalid cmd for negative interval
Negative intervals are not supported.
2014-04-14 12:47:14 +02:00
Alasdair G Kergon
a8d63994ea alloc: Refactor area reservation code.
No functional changes intended to be included in this patch.
2014-04-10 20:48:59 +01:00
Alasdair G Kergon
6320c3b905 post-release 2014-04-10 17:13:27 +01:00
458 changed files with 16322 additions and 7134 deletions

View File

@@ -51,6 +51,7 @@ DISTCLEAN_TARGETS += config.cache config.log config.status make.tmpl
include make.tmpl
libdm: include
libdaemon: include
lib: libdm libdaemon
liblvm: lib
daemons: lib libdaemon tools

View File

@@ -1 +1 @@
2.02.106(2)-git (2014-04-10)
2.02.112(2)-git (2014-09-01)

View File

@@ -1 +1 @@
1.02.85-git (2014-04-10)
1.02.91-git (2014-09-01)

168
WHATS_NEW
View File

@@ -1,3 +1,171 @@
Version 2.02.112 -
=====================================
Disable vgchange of clustered attribute with any active LV in VG.
Use va_copy to properly pass va_list through functions.
Add function to detect rotational devices.
Review internal checks for mirror/raid/pvmove volumes.
Track mirror segment type with separate MIRROR flag.
Fix cmirror endian conversions.
Introduce lv_is_pvmove/locked/converting/merging macros.
Avoid leaving linear logical volume when thin pool creation fails.
Demote an error to a warning when devices known to lvmetad are filtered out.
Re-order filter evaluation, making component filters global.
Don't leak alloc_handle on raid target error path.
Properly validate raid leg names.
Archive metadata before starting their modification in raid target.
Add missing vg_revert in suspend_lv() error path in raid target.
Add missing backup of lvm2 metadata after some raid modifications.
Use vg memory pool for extent allocation.
Add allocation/physical_extent_size config option for default PE size of VGs.
Introduce common code to modify metadata and reload updated LV.
Fix rename of active snapshot volume in cluster.
Make sure shared libraries are built with RELRO option.
Version 2.02.111 - 1st September 2014
=====================================
Pass properly sized char buffers for sscanf when initializing clvmd.
Reinstate nosync logic when extending mirror. (2.02.110)
Fix total area extent calculation when allocating cache pool. (2.02.110)
Version 2.02.110 - 26th August 2014
===================================
Fix manipulation with thin-pools which are excluded via volume_list.
Support lv/vgremove -ff to remove thin vols from broken/inactive thin pools.
Fix typo breaking configure --with-lvm1=shared.
Modify lvresize code to handle raid/mirrors and physical extents.
Don't allow pvcreate to proceed if scanning or filtering fails.
Cleanly error when creating RAID with stripe size < PAGE_SIZE.
Print name of LV which on activation triggers delayed snapshot merge.
Add lv_layout and lv_role LV reporting fields.
Properly display lvs lv_attr volume type and target type bit for cache origin.
Fix pvcreate_check() to update cache correctly after signature wiping.
Fix primary device lookup failure for partition when processing mpath filter.
If LV inactive and non-clustered, do not issue "Cannot deactivate" on -aln.
Remove spurious "Skipping mirror LV" message on pvmove of clustered mirror.
Version 2.02.109 - 5th August 2014
==================================
Remove lv_volume_type field from reports. (2.02.108)
Fix a segfault in lvscan --cache when devices were already missing. (2.02.108)
Fix incorrect persistent .cache after vgcreate with PV creation. (2.02.108)
Display actual size changed when resizing LV.
Allow approximate allocation with +%FREE in lvextend.
Remove possible spurious "not found" message on PV create before wiping.
Handle upgrade from 2.02.105 when an LV now gaining a uuid suffix is active.
Version 2.02.108 - 23rd July 2014
=================================
Add lvscan --cache which re-scans constituents of a particular LV.
Make dmeventd's RAID plugin re-scan failed PVs when lvmetad is in use.
Improve code sharing for lvconvert and lvcreate and pools (cache & thin).
Improve lvconvert --merge validation.
Improve lvconvert --splitsnapshot validation.
Add report/list_item_separator lvm.conf option.
Add lv_active_{locally,remotely,exclusively} LV reporting fields.
Comment out devices/{preferred_names,filter} in default lvm.conf file.
Enhance lvconvert thin, thinpool, cache and cachepool command line support.
Display 'C' only for cache and cache-pool target types in lvs.
Prompt for confirmation before changing an LV into a snapshot exception store.
Return proper error codes for some failing lvconvert functions.
Add initial code to use cache tools (cache_check|dump|repair|restore).
Support lvdisplay --maps for raid.
Add --activationmode degraded to activate degraded raid volumes by default.
Add separate lv_active_{locally,remotely,exclusively} LV reporting fields.
Recognize "auto"/"unmanaged" values in selection for appropriate fields only.
Add report/binary_values_as_numeric lvm.conf option for binary values as 0/1.
Add --binary arg to pvs,vgs,lvs and {pv,vg,lv}display -C for 0/1 on reports.
Add separate reporting fields for each {pv,vg,lv}_attr bit.
Separate LV device status reporting fields out of LV fields.
Fix regression causing PVs not in VGs to be marked as allocatable (2.02.59).
Fix VG component of lvid in vgsplit/vgmerge and check in vg_validate.
Add lv_full_name, lv_parent and lv_dm_path fields to reports.
Change lv_path field to suppress devices that never appear in /dev/vg.
Postpone thin pool lvconvert prompts (2.02.107).
Require --yes option to skip prompt to lvconvert thin pool chunksize.
Support lvremove -ff to remove thin volumes from broken thin pools.
Require --yes to skip raid repair prompt.
Change makefile %.d generation to handle filename changes without make clean.
Fix use of builddir in make pofile.
Enhance private volume UUIDs with suffixes for easier detection.
Do not use reserved _[tc]meta volumes for temporary LVs.
Leave backup pool metadata with _meta%d suffix instead of reserved _tmeta%d.
Allow RAID repair to reuse PVs from same image that suffered a failure.
New RAID images now avoid allocation on any PVs in the same parent RAID LV.
Always reevaluate filters just before creating PV.
Version 2.02.107 - 23rd June 2014
=================================
Introduce LCK_ACTIVATION to avoid concurrent activation of basic LV types.
Fix open_count test for lvchange --refresh of mirrors and raids.
Update pvs,vgs,lvs and lvm man page for selection support.
Add -S/--select to lvm devtypes for report selection.
Add -S/--select to pvs,vgs,lvs and {pv,vg,lv}display -C for report selection.
Use dm_report_init_with_selection now, implicit "selected" field appears.
Make use of libdm's DM_REPORT_FIELD_TYPE{SIZE,PERCENT,STRING_LIST} for fields.
Support all-or-nothing pvmove --atomic.
Automatically add snapshot metadata size for -l %ORIGIN calculation.
When converting RAID origin to cache LV, properly rename sub-LVs.
Use RemoveOnStop for lvm2-lvmetad.socket systemd unit.
Add thin-generic configuration profile for generic thin settings.
Fix crash when reporting empty labels on pvs.
Use retry_deactivation also when cleaning orphan devices.
Wait for client threads when shutting down lvmetad.
Remove PV from cache on pvremove.
Avoid repeatedly reporting of failure to connect to lvmetad.
Introduce MDA_FAILED to permit metadata updates even if some mdas are missing.
Prompt when setting the VG cluster attr if the cluster is not setup.
Allow --yes to skip prompt in vgextend (worked only with -f).
Don't use name mangling for LVM - it never uses dm names with wrong char set.
Remove default.profile and add {command,metadata}_profile_template.profile.
Use proper umask for systemd units generated by lvm2-activation-generator.
Check for failing mirror_remove_missing() function.
Prompt before converting volumes to thin pool and thin pool metadata.
Add dumpconfig --type profilable-{metadata,command} to select profile type.
Exit immediately with error if command profile is found invalid.
Separate --profile cmd line arg into --commandprofile and --metadataprofile.
Strictly separate command profiles and per-VG/LV profiles referenced in mda.
Fix dumpconfig --type diff when run as second and later cmd in lvm shell.
Fix wrong profile reuse from previous run if another cmd is run in lvm shell.
Move cache description from lvm(8) to new lvmcache(7) man page.
Display skipped prompt in silent mode.
Make reporting commands show help about possible sort keys on '-O help'.
Add metadata_percent to lvs_cols.
Take account of parity areas with alloc anywhere in _calc_required_extents.
Use proper uint64 casting for calculation of cache metadata size.
Better support for nesting of blocking signals.
Use only sigaction handler and drop duplicate signal handler.
Separate signal handling and flock code out into lib/misc.
Don't start dmeventd when checking seg_monitor and monitoring is disabled.
Catch CTRL-c during pvremove prompts.
Show correct availability status for snapshot origin in lvscan.
Move segment thin pool/volume info into segment display 'lvdisplay --maps'.
Display thin pool usage even when just thin volume is available.
Display monitoring status for monitorable segments in 'lvdisplay --maps'.
Display virtual extents for virtual LVs in 'lvdisplay --maps'.
Make vgsplit fail cleanly when not all PVs are specified for RAID 4/5/6.
Make vgsplit work on mirrors with logs that share PVs with images.
Use devices/ignore_suspended_devices=0 by default if not defined in lvm.conf.
Use proper libmem mempool for allocation of unknown segment name.
Add --readonly to reporting and display tools for lock-free metadata access.
Add locking_type 5 for dummy locking for tools that do not need any locks.
Fix _recover_vg() error path when lock conversion fails.
Use X for LV attributes that are unknown when activation disabled.
Only output lvdisplay 'LV Status' field when activation is enabled.
Use lvmetad_used() in pvscan instead of config_tree.
Configure --enable-udev-systemd-background-jobs if not disabled explicitly.
Add lvmdump -s to collect system info and context (currently systemd only).
Refactor allocation code to make A_POSITIONAL_FILL explicit.
Use thread-safe ctime_r() for clvmd debug logging.
Skip adding replies to already finished reply thread.
Use mutex to check number of replies in request_timed_out() in clvmd.
Drop usage of extra reply_mutex for localsock in clvmd.
Protect manipulation with finished flag with mutex in clvmd.
Shift mutex creation and destroy for localsock in clvmd to correct place.
Fix usage of --test option in clvmd.
Skip more libraries to be mlocked in memory.
Remove LOCKED flag for pvmove replaced with error target.
Return invalid command when specifying negative polling interval.
Version 2.02.106 - 10th April 2014
==================================
Fix ignored --dataalignment/dataalignment offset for pvcreate --restorefile.

View File

@@ -1,3 +1,55 @@
Version 1.02.91 -
====================================
Fix dm_is_dm_major to not issue error about missing /proc lines for dm module.
Version 1.02.90 - 1st September 2014
====================================
Restore proper buffer size for parsing mountinfo line (1.02.89)
Version 1.02.89 - 26th August 2014
==================================
Improve libdevmapper-event select() error handling.
Add extra check for matching transaction_id after message submission.
Add dm_report_field_string_list_unsorted for str. list report without sorting.
Support --deferred with dmsetup remove to defer removal of open devices.
Update dm-ioctl.h to include DM_DEFERRED_REMOVE flag.
Add support for selection to match string list subset, recognize { } operator.
Fix string list selection with '[value]' to not match list that's superset.
Fix string list selection to match whole words only, not prefixes.
Version 1.02.88 - 5th August 2014
=================================
Add dm_tree_set_optional_uuid_suffixes to handle upgrades.
Version 1.02.87 - 23rd July 2014
================================
Fix dm_report_field_string_list to handle delimiter with multiple chars.
Add dm_report_field_reserved_value for per-field reserved value definition.
Version 1.02.86 - 23rd June 2014
================================
Make "help" and "?" reporting fields implicit.
Recognize implicit "selected" field if using dm_report_init_with_selection.
Add support for implicit reporting fields which are predefined in libdm.
Add DM_REPORT_FIELD_TYPE_PERCENT: separate number and percent fields.
Add dm_percent_range_t,dm_percent_to_float,dm_make_percent to libdm for reuse.
Add dm_report_reserved_value to libdevmapper for reserved value definition.
Also display field types when listing all fields in selection help.
Recognize "help" keyword in selection string to show brief help for selection.
Always order items reported as string list field lexicographically.
Add dm_report_field_string_list to libdevmapper for direct string list report.
Add DM_REPORT_FIELD_TYPE_STRING_LIST: separate string and string list fields.
Add dm_str_list to libdevmapper for string list type definition and its reuse.
Add dmsetup -S/--select to define selection criteria for dmsetup reports.
Add dm_report_init_with_selection to initialize report with selection criteria.
Add DM_REPORT_FIELD_TYPE_SIZE: separate number and size reporting fields.
Use RemoveOnStop for dm-event.socket systemd unit.
Document env var 'DM_DEFAULT_NAME_MANGLING_MODE' in dmsetup man page.
Warn user about incorrect use of cookie with 'dmsetup remove --force'.
Also recognize 'help'/'?' as reserved sort key name to show help.
Add dm_units_to_factor for size unit parsing.
Increase bitset size for minors for thin dmeventd plugin.
Version 1.02.85 - 10th April 2014
=================================
Check for sprintf error when building internal device path.

15
aclocal.m4 vendored
View File

@@ -212,4 +212,19 @@ m4_popdef([pkg_default])
m4_popdef([pkg_description])
]) dnl PKG_NOARCH_INSTALLDIR
# PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
# [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
# -------------------------------------------
# Retrieves the value of the pkg-config variable for the given module.
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
_PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])# PKG_CHECK_VAR
m4_include([acinclude.m4])

View File

@@ -18,8 +18,8 @@ top_builddir = @top_builddir@
CONFSRC=example.conf
CONFDEST=lvm.conf
DEFAULT_PROFILE=default.profile
PROFILES=$(DEFAULT_PROFILE) $(srcdir)/thin-performance.profile
PROFILE_TEMPLATES=command_profile_template.profile metadata_profile_template.profile
PROFILES=$(PROFILE_TEMPLATES) $(srcdir)/thin-generic.profile $(srcdir)/thin-performance.profile
include $(top_builddir)/make.tmpl
@@ -37,4 +37,4 @@ install_lvm2: install_conf install_profiles
install: install_lvm2
DISTCLEAN_TARGETS += $(CONFSRC) $(DEFAULT_PROFILE)
DISTCLEAN_TARGETS += $(CONFSRC) $(PROFILE_TEMPLATES)

View File

@@ -1,45 +1,37 @@
# This is a default profile for the LVM2 system.
# It contains all configuration settings that are customizable by profiles.
# This is a command profile template for the LVM2 system.
#
# To create a new profile, select the settings you want to customize
# and put them in a new file named <profile_name>.profile. Then put this
# file in a directory as defined by config/profile_dir setting found in
# @DEFAULT_SYS_DIR@/lvm.conf file.
# It contains all configuration settings that are customizable by command
# profiles. To create a new command profile, select the settings you want
# to customize and add them in a new file named <profile_name>.profile.
# Then install the new profile in a directory as defined by config/profile_dir
# setting found in @DEFAULT_SYS_DIR@/lvm.conf file.
#
# Command profiles can be referenced by using the --commandprofile option then.
#
# Refer to 'man lvm.conf' for further information about profiles and
# general configuration file layout.
#
# Refer to 'man lvm.conf' for further information about profiles and file layout.
allocation {
thin_pool_chunk_size_policy = "generic"
thin_pool_chunk_size = 64
thin_pool_discards = "passdown"
thin_pool_zero = 1
}
activation {
thin_pool_autoextend_threshold = 100
thin_pool_autoextend_percent = 20
}
global {
units="h"
si_unit_consistency=1
suffix=1
lvdisplay_shows_full_device_path=0
}
report {
aligned=1
buffered=1
headings=1
separator=" "
list_item_separator=","
prefixes=0
quoted=1
colums_as_rows=0
binary_values_as_numeric=0
devtypes_sort="devtype_name"
devtypes_cols="devtype_name,devtype_max_partitions,devtype_description"
devtypes_cols_verbose="devtype_name,devtype_max_partitions,devtype_description"
lvs_sort="vg_name,lv_name"
lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
lvs_cols_verbose="lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
vgs_sort="vg_name"
vgs_cols="vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"

View File

@@ -53,11 +53,30 @@ devices {
# same block device and the tools need to display a name for device,
# all the pathnames are matched against each item in the following
# list of regular expressions in turn and the first match is used.
preferred_names = [ ]
# By default no preferred names are defined.
# preferred_names = [ ]
# Try to avoid using undescriptive /dev/dm-N names, if present.
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
# In case no preferred name matches or if preferred_names are not
# defined at all, builtin rules are used to determine the preference.
#
# The first builtin rule checks path prefixes and it gives preference
# based on this ordering (where "dev" depends on devices/dev setting):
# /dev/mapper > /dev/disk > /dev/dm-* > /dev/block
#
# If the ordering above cannot be applied, the path with fewer slashes
# gets preference then.
#
# If the number of slashes is the same, a symlink gets preference.
#
# Finally, if all the rules mentioned above are not applicable,
# lexicographical order is used over paths and the smallest one
# of all gets preference.
# A filter that tells LVM2 to only use a restricted set of devices.
# The filter consists of an array of regular expressions. These
# expressions can be delimited by a character of your choice, and
@@ -84,7 +103,7 @@ devices {
# lvmetad is used" comment that is attached to global/use_lvmetad setting.
# By default we accept every block device:
filter = [ "a/.*/" ]
# filter = [ "a/.*/" ]
# Exclude the cdrom drive
# filter = [ "r|/dev/cdrom|" ]
@@ -351,6 +370,9 @@ allocation {
# first use.
# N.B. zeroing larger thin pool chunk size degrades performance.
# thin_pool_zero = 1
# Default physical extent size to use for newly created VGs (in KB).
# physical_extent_size = 4096
}
# This section that allows you to configure the nature of the
@@ -520,6 +542,15 @@ global {
# Type 3 uses built-in clustered locking.
# Type 4 uses read-only locking which forbids any operations that might
# change metadata.
# Type 5 offers dummy locking for tools that do not need any locks.
# You should not need to set this directly: the tools will select when
# to use it instead of the configured locking_type. Do not use lvmetad or
# the kernel device-mapper driver with this locking type.
# It is used by the --readonly option that offers read-only access to
# Volume Group metadata that cannot be locked safely because it belongs to
# an inaccessible domain and might be in use, for example a virtual machine
# image or a disk that is shared by a clustered machine.
#
# N.B. Don't use lvmetad with locking type 3 as lvmetad is not yet
# supported in clustered environment. If use_lvmetad=1 and locking_type=3
# is set at the same time, LVM always issues a warning message about this
@@ -684,8 +715,10 @@ global {
# option "-q" is for quiet output.
# With thin_check version 2.1 or newer you can add "--ignore-non-fatal-errors"
# to let it pass through ignorable errors and fix them later.
# With thin_check version 3.2 or newer you should add
# "--clear-needs-check-flag".
#
# thin_check_options = [ "-q" ]
# thin_check_options = [ "-q", "--clear-needs-check-flag" ]
# Full path of the utility called to repair a thin metadata device
# is in a state that allows it to be used.
@@ -714,6 +747,36 @@ global {
# external_origin_extend
#
# thin_disabled_features = [ "discards", "block_size" ]
# Full path of the utility called to check that a cache metadata device
# is in a state that allows it to be used.
# Each time a cached LV needs to be used or after it is deactivated
# this utility is executed. The activation will only proceed if the utility
# has an exit status of 0.
# Set to "" to skip this check. (Not recommended.)
# The cache tools are available as part of the device-mapper-persistent-data
# package from https://github.com/jthornber/thin-provisioning-tools.
#
# cache_check_executable = "@CACHE_CHECK_CMD@"
# Array of string options passed with cache_check command. By default,
# option "-q" is for quiet output.
#
# cache_check_options = [ "-q" ]
# Full path of the utility called to repair a cache metadata device.
# Each time a cache metadata needs repair this utility is executed.
# See cache_check_executable how to obtain binaries.
#
# cache_repair_executable = "@CACHE_REPAIR_CMD@"
# Array of extra string options passed with cache_repair command.
# cache_repair_options = [ "" ]
# Full path of the utility called to dump cache metadata content.
# See cache_check_executable how to obtain binaries.
#
# cache_dump_executable = "@CACHE_DUMP_CMD@"
}
activation {
@@ -981,6 +1044,29 @@ activation {
# are no progress reports, but the process is awoken immediately the
# operation is complete.
polling_interval = 15
# 'activation_mode' determines how Logical Volumes are activated if
# any devices are missing. Possible settings are:
#
# "complete" - Only allow activation of an LV if all of the Physical
# Volumes it uses are present. Other PVs in the Volume
# Group may be missing.
#
# "degraded" - Like "complete", but additionally RAID Logical Volumes of
# segment type raid1, raid4, raid5, raid6 and raid10 will
# be activated if there is no data loss, i.e. they have
# sufficient redundancy to present the entire addressable
# range of the Logical Volume.
#
# "partial" - Allows the activation of any Logical Volume even if
# a missing or failed PV could cause data loss with a
# portion of the Logical Volume inaccessible.
# This setting should not normally be used, but may
# sometimes assist with data recovery.
#
# This setting was introduced in LVM version 2.02.108. It corresponds
# with the '--activationmode' option for lvchange and vgchange.
activation_mode = "degraded"
}
# Report settings.
@@ -1002,6 +1088,9 @@ activation {
# A separator to use on report after each field.
# separator=" "
# A separator to use for list items when reported.
# list_item_separator=","
# Use a field name prefix for each field reported.
# prefixes=0
@@ -1011,6 +1100,12 @@ activation {
# Output each column as a row. If set, this also implies report/prefixes=1.
# colums_as_rows=0
# Use binary values "0" or "1" instead of descriptive literal values for
# columns that have exactly two valid values to report (not counting the
# "unknown" value which denotes that the value could not be determined).
#
# binary_values_as_numeric = 0
# Comma separated list of columns to sort by when reporting 'lvm devtypes' command.
# See 'lvm devtypes -o help' for the list of possible fields.
# devtypes_sort="devtype_name"
@@ -1029,7 +1124,7 @@ activation {
# Comma separated list of columns to report for 'lvs' command.
# See 'lvs -o help' for the list of possible fields.
# lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
# lvs_cols="lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
# Comma separated list of columns to report for 'lvs' command in verbose mode.
# See 'lvs -o help' for the list of possible fields.

View File

@@ -0,0 +1,24 @@
# This is a metadata profile template for the LVM2 system.
#
# It contains all configuration settings that are customizable by metadata
# profiles. To create a new metadata profile, select the settings you want
# to customize and add them in a new file named <profile_name>.profile.
# Then install the new profile in a directory as defined by config/profile_dir
# setting found in @DEFAULT_SYS_DIR@/lvm.conf file.
#
# Metadata profiles can be referenced by using the --metadataprofile LVM2
# command line option.
#
# Refer to 'man lvm.conf' for further information about profiles and
# general configuration file layout.
#
allocation {
thin_pool_zero=1
thin_pool_discards="passdown"
thin_pool_chunk_size_policy="generic"
# thin_pool_chunk_size=64
}
activation {
thin_pool_autoextend_threshold=100
thin_pool_autoextend_percent=20
}

View File

@@ -0,0 +1,4 @@
allocation {
thin_pool_chunk_size_policy = "generic"
thin_pool_zero = 1
}

1744
configure vendored

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -15,10 +15,6 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
ifeq ("@BUILD_LVMETAD@", "yes")
SUBDIRS += lvmetad
endif
.PHONY: dmeventd clvmd cmirrord lvmetad
ifneq ("@CLVMD@", "none")
@@ -36,6 +32,10 @@ daemons.cflow: dmeventd.cflow
endif
endif
ifeq ("@BUILD_LVMETAD@", "yes")
SUBDIRS += lvmetad
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = clvmd cmirrord dmeventd lvmetad
endif

View File

@@ -87,7 +87,6 @@ include $(top_builddir)/make.tmpl
LVMLIBS += -ldevmapper
LIBS += $(PTHREAD_LIBS)
DEFS += -D_REENTRANT
CFLAGS += -fno-strict-aliasing $(EXTRA_EXEC_CFLAGS)
LDFLAGS += $(EXTRA_EXEC_LDFLAGS)

View File

@@ -110,12 +110,12 @@ static void _cluster_init_completed(void)
clvmd_cluster_init_completed();
}
static int _get_main_cluster_fd()
static int _get_main_cluster_fd(void)
{
return cman_get_fd(c_handle);
}
static int _get_num_nodes()
static int _get_num_nodes(void)
{
int i;
int nnodes = 0;
@@ -243,7 +243,7 @@ static void _add_up_node(const char *csid)
DEBUGLOG("Added new node %d to updown list\n", nodeid);
}
static void _cluster_closedown()
static void _cluster_closedown(void)
{
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
cman_finish(c_handle);
@@ -282,7 +282,7 @@ static void count_clvmds_running(void)
}
/* Get a list of active cluster members */
static void get_members()
static void get_members(void)
{
int retnodes;
int status;
@@ -380,7 +380,7 @@ static int nodeid_from_csid(const char *csid)
return nodeid;
}
static int _is_quorate()
static int _is_quorate(void)
{
return cman_is_quorate(c_handle);
}

View File

@@ -78,9 +78,6 @@ int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
unsigned char lock_cmd;
unsigned char lock_flags;
/* Reset test mode before we start */
init_test(0);
/* Do the command */
switch (msg->cmd) {
/* Just a test message */
@@ -112,8 +109,6 @@ int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
lockname = &args[2];
/* Check to see if the VG is in use by LVM1 */
status = do_check_lvm1(lockname);
if (lock_flags & LCK_TEST_MODE)
init_test(1);
do_lock_vg(lock_cmd, lock_flags, lockname);
break;
@@ -122,8 +117,6 @@ int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
if (lock_flags & LCK_TEST_MODE)
init_test(1);
status = do_lock_lv(lock_cmd, lock_flags, lockname);
/* Replace EIO with something less scary */
if (status == EIO) {
@@ -252,7 +245,6 @@ int do_pre_command(struct local_client *client)
int status = 0;
char *lockname;
init_test(0);
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_lock("CLVMD_TEST", LCK_EXCL, 0, &lockid);
@@ -271,8 +263,6 @@ int do_pre_command(struct local_client *client)
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
if (lock_flags & LCK_TEST_MODE)
init_test(1);
status = pre_lock_lv(lock_cmd, lock_flags, lockname);
break;
@@ -304,7 +294,6 @@ int do_post_command(struct local_client *client)
char *args = header->node + strlen(header->node) + 1;
char *lockname;
init_test(0);
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_unlock("CLVMD_TEST", (int) (long) client->bits.localsock.private);
@@ -315,8 +304,6 @@ int do_post_command(struct local_client *client)
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
if (lock_flags & LCK_TEST_MODE)
init_test(1);
status = post_lock_lv(lock_cmd, lock_flags, lockname);
break;

View File

@@ -20,14 +20,13 @@
#include "configure.h"
#define _REENTRANT
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include "libdevmapper.h"
#include "lvm-logging.h"
#include <unistd.h>
#include <sys/stat.h>
#endif

View File

@@ -19,7 +19,6 @@
#include "locking.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "lvm-functions.h"
#include "clvmd.h"
#include <sys/un.h>

View File

@@ -19,21 +19,23 @@
#include "clvmd-common.h"
#include <pthread.h>
#include <getopt.h>
#include <ctype.h>
#include "clvmd-comms.h"
#include "clvm.h"
#include "clvmd.h"
#include "lvm-functions.h"
#include "lvm-version.h"
#include "lvm-wrappers.h"
#include "refresh_clvmd.h"
#ifdef HAVE_COROSYNC_CONFDB_H
#include <corosync/confdb.h>
#endif
#include <pthread.h>
#include <getopt.h>
#include <ctype.h>
#include <stdarg.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <signal.h>
@@ -87,7 +89,7 @@ static debug_t debug = DEBUG_OFF;
static int foreground_mode = 0;
static pthread_t lvm_thread;
/* Stack size 128KiB for thread, must be bigger then DEFAULT_RESERVED_STACK */
static const size_t STACK_SIZE = 128 * 1024;
static const size_t MIN_STACK_SIZE = 128 * 1024;
static pthread_attr_t stack_attr;
static int lvm_thread_exit = 0;
static pthread_mutex_t lvm_thread_mutex;
@@ -212,12 +214,13 @@ void debuglog(const char *fmt, ...)
time_t P;
va_list ap;
static int syslog_init = 0;
char buf_ctime[64];
switch (clvmd_get_debug()) {
case DEBUG_STDERR:
va_start(ap,fmt);
time(&P);
fprintf(stderr, "CLVMD[%x]: %.15s ", (int)pthread_self(), ctime(&P)+4 );
fprintf(stderr, "CLVMD[%x]: %.15s ", (int)pthread_self(), ctime_r(&P, buf_ctime) + 4);
vfprintf(stderr, fmt, ap);
va_end(ap);
break;
@@ -356,6 +359,7 @@ int main(int argc, char *argv[])
int clusterwide_opt = 0;
mode_t old_mask;
int ret = 1;
size_t stack_size;
struct option longopts[] = {
{ "help", 0, 0, 'h' },
@@ -512,8 +516,10 @@ int main(int argc, char *argv[])
/* Initialise the LVM thread variables */
dm_list_init(&lvm_cmd_head);
stack_size = 3 * lvm_getpagesize();
stack_size = stack_size < MIN_STACK_SIZE ? MIN_STACK_SIZE : stack_size;
if (pthread_attr_init(&stack_attr) ||
pthread_attr_setstacksize(&stack_attr, STACK_SIZE)) {
pthread_attr_setstacksize(&stack_attr, stack_size)) {
log_sys_error("pthread_attr_init", "");
exit(1);
}
@@ -685,6 +691,9 @@ static int local_rendezvous_callback(struct local_client *thisfd, char *buf,
return 1;
}
pthread_cond_init(&newfd->bits.localsock.cond, NULL);
pthread_mutex_init(&newfd->bits.localsock.mutex, NULL);
if (fcntl(client_fd, F_SETFD, 1))
DEBUGLOG("Setting CLOEXEC on client fd failed: %s\n", strerror(errno));
@@ -775,25 +784,26 @@ static int local_pipe_callback(struct local_client *thisfd, char *buf,
static void timedout_callback(struct local_client *client, const char *csid,
int node_up)
{
if (node_up) {
struct node_reply *reply;
char nodename[max_cluster_member_name_len];
struct node_reply *reply;
char nodename[max_cluster_member_name_len];
clops->name_from_csid(csid, nodename);
DEBUGLOG("Checking for a reply from %s\n", nodename);
pthread_mutex_lock(&client->bits.localsock.reply_mutex);
if (!node_up)
return;
reply = client->bits.localsock.replies;
while (reply && strcmp(reply->node, nodename) != 0)
reply = reply->next;
clops->name_from_csid(csid, nodename);
DEBUGLOG("Checking for a reply from %s\n", nodename);
pthread_mutex_lock(&client->bits.localsock.mutex);
pthread_mutex_unlock(&client->bits.localsock.reply_mutex);
reply = client->bits.localsock.replies;
while (reply && strcmp(reply->node, nodename) != 0)
reply = reply->next;
if (!reply) {
DEBUGLOG("Node %s timed-out\n", nodename);
add_reply_to_list(client, ETIMEDOUT, csid,
"Command timed out", 18);
}
pthread_mutex_unlock(&client->bits.localsock.mutex);
if (!reply) {
DEBUGLOG("Node %s timed-out\n", nodename);
add_reply_to_list(client, ETIMEDOUT, csid,
"Command timed out", 18);
}
}
@@ -809,16 +819,20 @@ static void request_timed_out(struct local_client *client)
DEBUGLOG("Request timed-out. padding\n");
clops->cluster_do_node_callback(client, timedout_callback);
if (client->bits.localsock.num_replies !=
client->bits.localsock.expected_replies) {
if (!client->bits.localsock.threadid)
return;
pthread_mutex_lock(&client->bits.localsock.mutex);
if (!client->bits.localsock.finished &&
(client->bits.localsock.num_replies !=
client->bits.localsock.expected_replies)) {
/* Post-process the command */
if (client->bits.localsock.threadid) {
pthread_mutex_lock(&client->bits.localsock.mutex);
client->bits.localsock.state = POST_COMMAND;
pthread_cond_signal(&client->bits.localsock.cond);
pthread_mutex_unlock(&client->bits.localsock.mutex);
}
client->bits.localsock.state = POST_COMMAND;
pthread_cond_signal(&client->bits.localsock.cond);
}
pthread_mutex_unlock(&client->bits.localsock.mutex);
}
/* This is where the real work happens */
@@ -1145,8 +1159,6 @@ static int cleanup_zombie(struct local_client *thisfd)
DEBUGLOG("EOF on local socket: inprogress=%d\n",
thisfd->bits.localsock.in_progress);
thisfd->bits.localsock.finished = 1;
if ((pipe_client = thisfd->bits.localsock.pipe_client))
pipe_client = pipe_client->bits.pipe.client;
@@ -1157,6 +1169,7 @@ static int cleanup_zombie(struct local_client *thisfd)
if (pthread_mutex_trylock(&thisfd->bits.localsock.mutex))
return 1;
thisfd->bits.localsock.state = POST_COMMAND;
thisfd->bits.localsock.finished = 1;
pthread_cond_signal(&thisfd->bits.localsock.cond);
pthread_mutex_unlock(&thisfd->bits.localsock.mutex);
@@ -1169,6 +1182,7 @@ static int cleanup_zombie(struct local_client *thisfd)
DEBUGLOG("Waiting for pre&post thread (%p)\n", pipe_client);
pthread_mutex_lock(&thisfd->bits.localsock.mutex);
thisfd->bits.localsock.state = PRE_COMMAND;
thisfd->bits.localsock.finished = 1;
pthread_cond_signal(&thisfd->bits.localsock.cond);
pthread_mutex_unlock(&thisfd->bits.localsock.mutex);
@@ -1179,8 +1193,6 @@ static int cleanup_zombie(struct local_client *thisfd)
DEBUGLOG("Joined pre&post thread\n");
thisfd->bits.localsock.threadid = 0;
pthread_cond_destroy(&thisfd->bits.localsock.cond);
pthread_mutex_destroy(&thisfd->bits.localsock.mutex);
/* Remove the pipe client */
if (thisfd->bits.localsock.pipe_client) {
@@ -1321,16 +1333,6 @@ static int read_from_local_sock(struct local_client *thisfd)
}
}
/*
* Initialise and lock the mutex so the subthread will wait
* after finishing the PRE routine
*/
if (!thisfd->bits.localsock.threadid) {
pthread_mutex_init(&thisfd->bits.localsock.mutex, NULL);
pthread_cond_init(&thisfd->bits.localsock.cond, NULL);
pthread_mutex_init(&thisfd->bits.localsock.reply_mutex, NULL);
}
/* Only run the command if all the cluster nodes are running CLVMD */
if (((inheader->flags & CLVMD_FLAG_LOCAL) == 0) &&
(check_all_clvmds_running(thisfd) == -1)) {
@@ -1640,26 +1642,28 @@ static void add_reply_to_list(struct local_client *client, int status,
} else
reply->replymsg = NULL;
pthread_mutex_lock(&client->bits.localsock.reply_mutex);
/* Hook it onto the reply chain */
reply->next = client->bits.localsock.replies;
client->bits.localsock.replies = reply;
DEBUGLOG("Got %d replies, expecting: %d\n",
client->bits.localsock.num_replies + 1,
client->bits.localsock.expected_replies);
pthread_mutex_lock(&client->bits.localsock.mutex);
/* If we have the whole lot then do the post-process */
if (++client->bits.localsock.num_replies ==
client->bits.localsock.expected_replies) {
if (client->bits.localsock.finished) {
dm_free(reply->replymsg);
dm_free(reply);
} else {
/* Hook it onto the reply chain */
reply->next = client->bits.localsock.replies;
client->bits.localsock.replies = reply;
/* If we have the whole lot then do the post-process */
/* Post-process the command */
if (client->bits.localsock.threadid) {
pthread_mutex_lock(&client->bits.localsock.mutex);
if (++client->bits.localsock.num_replies ==
client->bits.localsock.expected_replies) {
client->bits.localsock.state = POST_COMMAND;
pthread_cond_signal(&client->bits.localsock.cond);
pthread_mutex_unlock(&client->bits.localsock.mutex);
}
DEBUGLOG("Got %d replies, expecting: %d\n",
client->bits.localsock.num_replies,
client->bits.localsock.expected_replies);
}
pthread_mutex_unlock(&client->bits.localsock.reply_mutex);
pthread_mutex_unlock(&client->bits.localsock.mutex);
}
/* This is the thread that runs the PRE and post commands for a particular connection */
@@ -1762,7 +1766,6 @@ static int process_local_command(struct clvm_header *msg, int msglen,
if (msg->flags & CLVMD_FLAG_REMOTE)
status = 0;
else
/* FIXME: usage of init_test() is unprotected */
status = do_command(client, msg, msglen, &replybuf, buflen, &replylen);
if (status)
@@ -1976,6 +1979,8 @@ static int process_work_item(struct lvm_thread_cmd *cmd)
if (cmd->msg == NULL) {
DEBUGLOG("process_work_item: free fd %d\n", cmd->client->fd);
cmd_client_cleanup(cmd->client);
pthread_mutex_destroy(&cmd->client->bits.localsock.mutex);
pthread_cond_destroy(&cmd->client->bits.localsock.cond);
dm_free(cmd->client);
return 0;
}

View File

@@ -56,11 +56,9 @@ struct localsock_bits {
int cleanup_needed; /* helper for cleanup_zombie */
struct local_client *pipe_client;
pthread_t threadid;
enum { PRE_COMMAND, POST_COMMAND, QUIT } state;
enum { PRE_COMMAND, POST_COMMAND } state;
pthread_mutex_t mutex; /* Main thread and worker synchronisation */
pthread_cond_t cond;
pthread_mutex_t reply_mutex; /* Protect reply structure */
};
/* Entries for PIPE clients */

View File

@@ -17,7 +17,6 @@
#include <pthread.h>
#include "lvm-types.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
@@ -131,14 +130,15 @@ static const char *decode_flags(unsigned char flags)
static char buf[128];
int len;
len = sprintf(buf, "0x%x ( %s%s%s%s%s%s%s)", flags,
len = sprintf(buf, "0x%x ( %s%s%s%s%s%s%s%s)", flags,
flags & LCK_PARTIAL_MODE ? "PARTIAL_MODE|" : "",
flags & LCK_MIRROR_NOSYNC_MODE ? "MIRROR_NOSYNC|" : "",
flags & LCK_DMEVENTD_MONITOR_MODE ? "DMEVENTD_MONITOR|" : "",
flags & LCK_ORIGIN_ONLY_MODE ? "ORIGIN_ONLY|" : "",
flags & LCK_TEST_MODE ? "TEST|" : "",
flags & LCK_CONVERT ? "CONVERT|" : "",
flags & LCK_DMEVENTD_MONITOR_IGNORE ? "DMEVENTD_MONITOR_IGNORE|" : "");
flags & LCK_DMEVENTD_MONITOR_IGNORE ? "DMEVENTD_MONITOR_IGNORE|" : "",
flags & LCK_REVERT_MODE ? "REVERT|" : "");
if (len > 1)
buf[len - 2] = ' ';
@@ -235,7 +235,7 @@ void destroy_lvhash(void)
if ((status = sync_unlock(resource, lvi->lock_id)))
DEBUGLOG("unlock_all. unlock failed(%d): %s\n",
status, strerror(errno));
free(lvi);
dm_free(lvi);
}
dm_hash_destroy(lv_hash);
@@ -251,9 +251,6 @@ static int hold_lock(char *resource, int mode, int flags)
int saved_errno;
struct lv_info *lvi;
if (test_mode())
return 0;
/* Mask off invalid options */
flags &= LCKF_NOQUEUE | LCKF_CONVERT;
@@ -288,8 +285,7 @@ static int hold_lock(char *resource, int mode, int flags)
strerror(errno));
errno = saved_errno;
} else {
lvi = malloc(sizeof(struct lv_info));
if (!lvi) {
if (!(lvi = dm_malloc(sizeof(struct lv_info)))) {
errno = ENOMEM;
return -1;
}
@@ -298,7 +294,7 @@ static int hold_lock(char *resource, int mode, int flags)
status = sync_lock(resource, mode, flags & ~LCKF_CONVERT, &lvi->lock_id);
saved_errno = errno;
if (status) {
free(lvi);
dm_free(lvi);
DEBUGLOG("hold_lock. lock at %d failed: %s\n", mode,
strerror(errno));
} else
@@ -319,9 +315,6 @@ static int hold_unlock(char *resource)
int status;
int saved_errno;
if (test_mode())
return 0;
if (!(lvi = lookup_info(resource))) {
DEBUGLOG("hold_unlock, lock not already held\n");
return 0;
@@ -331,7 +324,7 @@ static int hold_unlock(char *resource)
saved_errno = errno;
if (!status) {
remove_info(resource);
free(lvi);
dm_free(lvi);
} else {
DEBUGLOG("hold_unlock. unlock failed(%d): %s\n", status,
strerror(errno));
@@ -381,7 +374,7 @@ static int do_activate_lv(char *resource, unsigned char command, unsigned char l
* Use lock conversion only if requested, to prevent implicit conversion
* of exclusive lock to shared one during activation.
*/
if (command & LCK_CLUSTER_VG) {
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_lock(resource, mode, LCKF_NOQUEUE | (lock_flags & LCK_CONVERT ? LCKF_CONVERT:0));
if (status) {
/* Return an LVM-sensible error for this.
@@ -415,7 +408,7 @@ static int do_activate_lv(char *resource, unsigned char command, unsigned char l
return 0;
error:
if (oldmode == -1 || oldmode != mode)
if (!test_mode() && (oldmode == -1 || oldmode != mode))
(void)hold_unlock(resource);
return EIO;
}
@@ -479,7 +472,7 @@ static int do_deactivate_lv(char *resource, unsigned char command, unsigned char
if (!lv_deactivate(cmd, resource, NULL))
return EIO;
if (command & LCK_CLUSTER_VG) {
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_unlock(resource);
if (status)
return errno;
@@ -526,6 +519,8 @@ int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
if (lock_flags & LCK_MIRROR_NOSYNC_MODE)
init_mirror_in_sync(1);
@@ -578,6 +573,7 @@ int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
/* clean the pool for another command */
dm_pool_empty(cmd->mem);
init_test(0);
pthread_mutex_unlock(&lvm_lock);
DEBUGLOG("Command return is %d, critical_section is %d\n", status, critical_section());
@@ -597,7 +593,8 @@ int pre_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
DEBUGLOG("pre_lock_lv: resource '%s', cmd = %s, flags = %s\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags));
if (hold_lock(resource, LCK_WRITE, LCKF_NOQUEUE | LCKF_CONVERT))
if (!(lock_flags & LCK_TEST_MODE) &&
hold_lock(resource, LCK_WRITE, LCKF_NOQUEUE | LCKF_CONVERT))
return errno;
}
return 0;
@@ -629,11 +626,13 @@ int post_lock_lv(unsigned char command, unsigned char lock_flags,
if (!status)
return EIO;
if (lvi.exists) {
if (hold_lock(resource, LCK_READ, LCKF_CONVERT))
if (!(lock_flags & LCK_TEST_MODE)) {
if (lvi.exists) {
if (hold_lock(resource, LCK_READ, LCKF_CONVERT))
return errno;
} else if (hold_unlock(resource))
return errno;
} else if (hold_unlock(resource))
return errno;
}
}
}
return 0;
@@ -697,6 +696,8 @@ void do_lock_vg(unsigned char command, unsigned char lock_flags, char *resource)
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
switch (lock_cmd) {
case LCK_VG_COMMIT:
DEBUGLOG("vg_commit notification for VG %s\n", vgname);
@@ -711,6 +712,8 @@ void do_lock_vg(unsigned char command, unsigned char lock_flags, char *resource)
DEBUGLOG("Invalidating cached metadata for VG %s\n", vgname);
lvmcache_drop_metadata(vgname, 0);
}
init_test(0);
pthread_mutex_unlock(&lvm_lock);
}
@@ -722,7 +725,7 @@ void do_lock_vg(unsigned char command, unsigned char lock_flags, char *resource)
static int get_initial_state(struct dm_hash_table *excl_uuid)
{
int lock_mode;
char lv[64], vg[64], flags[25], vg_flags[25];
char lv[65], vg[65], flags[26], vg_flags[26]; /* with space for '\0' */
char uuid[65];
char line[255];
char *lvs_cmd;

View File

@@ -126,13 +126,14 @@ static int v5_endian_to_network(struct clog_request *rq)
u_rq->error = xlate32(u_rq->error);
u_rq->seq = xlate32(u_rq->seq);
u_rq->request_type = xlate32(u_rq->request_type);
u_rq->data_size = xlate64(u_rq->data_size);
rq->originator = xlate32(rq->originator);
v5_data_endian_switch(rq, 1);
u_rq->request_type = xlate32(u_rq->request_type);
u_rq->data_size = xlate32(u_rq->data_size);
return size;
}
@@ -167,7 +168,7 @@ static int v5_endian_from_network(struct clog_request *rq)
u_rq->error = xlate32(u_rq->error);
u_rq->seq = xlate32(u_rq->seq);
u_rq->request_type = xlate32(u_rq->request_type);
u_rq->data_size = xlate64(u_rq->data_size);
u_rq->data_size = xlate32(u_rq->data_size);
rq->originator = xlate32(rq->originator);
@@ -187,7 +188,7 @@ int clog_request_from_network(void *data, size_t data_len)
switch (version) {
case 5: /* Upstream */
if (version == unconverted_version)
if (version == vp[0])
return 0;
break;
case 4: /* RHEL 5.[45] */

View File

@@ -26,6 +26,7 @@
//#include "libmultilog.h"
#include "dm-logging.h"
#include <stdarg.h>
#include <dlfcn.h>
#include <errno.h>
#include <pthread.h>
@@ -217,9 +218,9 @@ static pthread_cond_t _timeout_cond = PTHREAD_COND_INITIALIZER;
static struct thread_status *_alloc_thread_status(const struct message_data *data,
struct dso_data *dso_data)
{
struct thread_status *ret = (typeof(ret)) dm_zalloc(sizeof(*ret));
struct thread_status *ret;
if (!ret)
if (!(ret = dm_zalloc(sizeof(*ret))))
return NULL;
if (!(ret->device.uuid = dm_strdup(data->device_uuid))) {
@@ -227,9 +228,6 @@ static struct thread_status *_alloc_thread_status(const struct message_data *dat
return NULL;
}
ret->current_task = NULL;
ret->device.name = NULL;
ret->device.major = ret->device.minor = 0;
ret->dso_data = dso_data;
ret->events = data->events_field;
ret->timeout = data->timeout_secs;
@@ -384,7 +382,6 @@ static int _parse_message(struct message_data *message_data)
dm_free(msg->data);
msg->data = NULL;
msg->size = 0;
return ret;
}
@@ -406,15 +403,12 @@ static int _fill_device_data(struct thread_status *ts)
{
struct dm_task *dmt;
struct dm_info dmi;
int ret = 0;
if (!ts->device.uuid)
return 0;
ts->device.name = NULL;
ts->device.major = ts->device.minor = 0;
dmt = dm_task_create(DM_DEVICE_INFO);
if (!dmt)
if (!(dmt = dm_task_create(DM_DEVICE_INFO)))
return 0;
if (!dm_task_set_uuid(dmt, ts->device.uuid))
@@ -423,8 +417,8 @@ static int _fill_device_data(struct thread_status *ts)
if (!dm_task_run(dmt))
goto fail;
ts->device.name = dm_strdup(dm_task_get_name(dmt));
if (!ts->device.name)
dm_free(ts->device.name);
if (!(ts->device.name = dm_strdup(dm_task_get_name(dmt))))
goto fail;
if (!dm_task_get_info(dmt, &dmi))
@@ -432,16 +426,11 @@ static int _fill_device_data(struct thread_status *ts)
ts->device.major = dmi.major;
ts->device.minor = dmi.minor;
ret = 1;
fail:
dm_task_destroy(dmt);
return 1;
fail:
dm_task_destroy(dmt);
dm_free(ts->device.name);
ts->device.name = NULL;
return 0;
return ret;
}
/*
@@ -464,20 +453,17 @@ static int _get_status(struct message_data *message_data)
{
struct dm_event_daemon_message *msg = message_data->msg;
struct thread_status *thread;
int i, j;
int ret = -1;
int count = dm_list_size(&_thread_registry);
int i = 0, j;
int ret = -ENOMEM;
int count;
int size = 0, current;
char *buffers[count];
size_t len;
char **buffers;
char *message;
dm_free(msg->data);
for (i = 0; i < count; ++i)
buffers[i] = NULL;
i = 0;
_lock_mutex();
count = dm_list_size(&_thread_registry);
buffers = alloca(sizeof(char*) * count);
dm_list_iterate_items(thread, &_thread_registry) {
if ((current = dm_asprintf(buffers + i, "0:%d %s %s %u %" PRIu32 ";",
i, thread->dso_data->dso_name,
@@ -486,25 +472,24 @@ static int _get_status(struct message_data *message_data)
_unlock_mutex();
goto out;
}
++ i;
size += current;
++i;
size += current; /* count with trailing '\0' */
}
_unlock_mutex();
msg->size = size + strlen(message_data->id) + 1;
msg->data = dm_malloc(msg->size);
if (!msg->data)
len = strlen(message_data->id);
msg->size = size + len + 1;
dm_free(msg->data);
if (!(msg->data = dm_malloc(msg->size)))
goto out;
*msg->data = 0;
message = msg->data;
strcpy(message, message_data->id);
message += strlen(message_data->id);
*message = ' ';
message ++;
memcpy(msg->data, message_data->id, len);
message = msg->data + len;
*message++ = ' ';
for (j = 0; j < i; ++j) {
strcpy(message, buffers[j]);
message += strlen(buffers[j]);
len = strlen(buffers[j]);
memcpy(message, buffers[j], len);
message += len;
}
ret = 0;
@@ -517,26 +502,20 @@ static int _get_status(struct message_data *message_data)
static int _get_parameters(struct message_data *message_data) {
struct dm_event_daemon_message *msg = message_data->msg;
char buf[128];
int r = -1;
int size;
dm_free(msg->data);
if ((size = dm_asprintf(&msg->data, "%s pid=%d daemon=%s exec_method=%s",
message_data->id, getpid(),
_foreground ? "no" : "yes",
_systemd_activation ? "systemd" : "direct")) < 0) {
stack;
return -ENOMEM;
}
if (!(dm_snprintf(buf, sizeof(buf), "%s pid=%d daemon=%s exec_method=%s",
message_data->id,
getpid(),
_foreground ? "no" : "yes",
_systemd_activation ? "systemd" : "direct")))
goto_out;
msg->size = (uint32_t) size;
msg->size = strlen(buf) + 1;
if (!(msg->data = dm_malloc(msg->size)))
goto_out;
if (!dm_strncpy(msg->data, buf, msg->size))
goto_out;
r = 0;
out:
return r;
return 0;
}
/* Cleanup at exit. */
@@ -592,9 +571,8 @@ static int _register_for_timeout(struct thread_status *thread)
pthread_mutex_lock(&_timeout_mutex);
thread->next_time = time(NULL) + thread->timeout;
if (dm_list_empty(&thread->timeout_list)) {
thread->next_time = time(NULL) + thread->timeout;
dm_list_add(&_timeout_registry, &thread->timeout_list);
if (_timeout_running)
pthread_cond_signal(&_timeout_cond);
@@ -616,6 +594,7 @@ static void _unregister_for_timeout(struct thread_status *thread)
dm_list_del(&thread->timeout_list);
dm_list_init(&thread->timeout_list);
if (dm_list_empty(&_timeout_registry))
/* No more work -> wakeup to finish quickly */
pthread_cond_signal(&_timeout_cond);
}
pthread_mutex_unlock(&_timeout_mutex);
@@ -921,11 +900,11 @@ static struct dso_data *_lookup_dso(struct message_data *data)
struct dso_data *dso_data, *ret = NULL;
dm_list_iterate_items(dso_data, &_dso_registry)
if (!strcmp(data->dso_name, dso_data->dso_name)) {
_lib_get(dso_data);
ret = dso_data;
break;
}
if (!strcmp(data->dso_name, dso_data->dso_name)) {
_lib_get(dso_data);
ret = dso_data;
break;
}
return ret;
}
@@ -953,7 +932,7 @@ static int lookup_symbols(void *dl, struct dso_data *data)
static struct dso_data *_load_dso(struct message_data *data)
{
void *dl;
struct dso_data *ret = NULL;
struct dso_data *ret;
if (!(dl = dlopen(data->dso_name, RTLD_NOW))) {
const char *dlerr = dlerror();
@@ -1148,10 +1127,8 @@ static int _registered_device(struct message_data *message_data,
if ((r = dm_asprintf(&(msg->data), "%s %s %s %u",
message_data->id,
thread->dso_data->dso_name,
thread->device.uuid, events)) < 0) {
msg->size = 0;
thread->device.uuid, events)) < 0)
return -ENOMEM;
}
msg->size = (uint32_t) r;
@@ -1260,13 +1237,11 @@ static int _get_timeout(struct message_data *message_data)
_lock_mutex();
if ((thread = _lookup_thread_status(message_data))) {
msg->size =
dm_asprintf(&(msg->data), "%s %" PRIu32, message_data->id,
thread->timeout);
} else {
msg->size = dm_asprintf(&(msg->data), "%s %" PRIu32,
message_data->id, thread->timeout);
} else
msg->data = NULL;
msg->size = 0;
}
_unlock_mutex();
return thread ? 0 : -ENODEV;
@@ -1405,7 +1380,6 @@ static int _client_read(struct dm_event_fifos *fifos,
if (bytes != size) {
dm_free(msg->data);
msg->data = NULL;
msg->size = 0;
return 0;
}
@@ -1418,33 +1392,45 @@ static int _client_read(struct dm_event_fifos *fifos,
static int _client_write(struct dm_event_fifos *fifos,
struct dm_event_daemon_message *msg)
{
uint32_t temp[2];
unsigned bytes = 0;
int ret = 0;
fd_set fds;
size_t size = 2 * sizeof(uint32_t) + msg->size;
uint32_t *header = alloca(size);
size_t size = 2 * sizeof(uint32_t) + ((msg->data) ? msg->size : 0);
uint32_t *header = dm_malloc(size);
char *buf = (char *)header;
header[0] = htonl(msg->cmd);
header[1] = htonl(msg->size);
if (msg->data)
memcpy(buf + 2 * sizeof(uint32_t), msg->data, msg->size);
if (!header) {
/* Reply with ENOMEM message */
header = temp;
size = sizeof(temp);
header[0] = htonl(-ENOMEM);
header[1] = 0;
} else {
header[0] = htonl(msg->cmd);
header[1] = htonl((msg->data) ? msg->size : 0);
if (msg->data)
memcpy(buf + 2 * sizeof(uint32_t), msg->data, msg->size);
}
errno = 0;
while (bytes < size && errno != EIO) {
while (bytes < size) {
do {
/* Watch client write FIFO to be ready for output. */
FD_ZERO(&fds);
FD_SET(fifos->server, &fds);
} while (select(fifos->server + 1, NULL, &fds, NULL, NULL) !=
1);
} while (select(fifos->server + 1, NULL, &fds, NULL, NULL) != 1);
ret = write(fifos->server, buf + bytes, size - bytes);
bytes += ret > 0 ? ret : 0;
if ((ret = write(fifos->server, buf + bytes, size - bytes)) > 0)
bytes += ret;
else if (errno == EIO)
break;
}
return bytes == size;
if (header != temp)
dm_free(header);
return (bytes == size);
}
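For reference, the reply being assembled above is two 32-bit words in network byte order (command, payload size) followed by the optional payload; when dm_malloc() fails, the function now falls back to a two-word stack header carrying -ENOMEM and no payload instead of relying on alloca(). A hedged sketch of just the framing step, using plain malloc() in place of dm_malloc():

#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Build [cmd][size][payload] with both header words in network byte order.
 * Returns a malloc'd buffer (caller frees) or NULL, in which case the
 * caller may reuse a small stack header to report -ENOMEM, as above. */
static void *frame_reply(uint32_t cmd, const void *payload, uint32_t len,
                         size_t *total)
{
        size_t size = 2 * sizeof(uint32_t) + (payload ? len : 0);
        uint32_t *header = malloc(size);

        if (!header)
                return NULL;

        header[0] = htonl(cmd);
        header[1] = htonl(payload ? len : 0);
        if (payload)
                memcpy((char *) header + 2 * sizeof(uint32_t), payload, len);

        *total = size;
        return header;
}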
/*
@@ -1456,34 +1442,35 @@ static int _client_write(struct dm_event_fifos *fifos,
static int _handle_request(struct dm_event_daemon_message *msg,
struct message_data *message_data)
{
static struct request {
unsigned int cmd;
int (*f)(struct message_data *);
} requests[] = {
{ DM_EVENT_CMD_REGISTER_FOR_EVENT, _register_for_event},
{ DM_EVENT_CMD_UNREGISTER_FOR_EVENT, _unregister_for_event},
{ DM_EVENT_CMD_GET_REGISTERED_DEVICE, _get_registered_device},
{ DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE,
_get_next_registered_device},
{ DM_EVENT_CMD_SET_TIMEOUT, _set_timeout},
{ DM_EVENT_CMD_GET_TIMEOUT, _get_timeout},
{ DM_EVENT_CMD_ACTIVE, _active},
{ DM_EVENT_CMD_GET_STATUS, _get_status},
/* dmeventd parameters of running dmeventd,
* returns 'pid=<pid> daemon=<no/yes> exec_method=<direct/systemd>'
* pid - pidfile of running dmeventd
* daemon - running as a daemon or not (foreground)?
* exec_method - "direct" if executed directly or
* "systemd" if executed via systemd
*/
{ DM_EVENT_CMD_GET_PARAMETERS, _get_parameters},
}, *req;
for (req = requests; req < requests + DM_ARRAY_SIZE(requests); ++req)
if (req->cmd == msg->cmd)
return req->f(message_data);
return -EINVAL;
switch (msg->cmd) {
case DM_EVENT_CMD_REGISTER_FOR_EVENT:
return _register_for_event(message_data);
case DM_EVENT_CMD_UNREGISTER_FOR_EVENT:
return _unregister_for_event(message_data);
case DM_EVENT_CMD_GET_REGISTERED_DEVICE:
return _get_registered_device(message_data);
case DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE:
return _get_next_registered_device(message_data);
case DM_EVENT_CMD_SET_TIMEOUT:
return _set_timeout(message_data);
case DM_EVENT_CMD_GET_TIMEOUT:
return _get_timeout(message_data);
case DM_EVENT_CMD_ACTIVE:
return _active(message_data);
case DM_EVENT_CMD_GET_STATUS:
return _get_status(message_data);
/* dmeventd parameters of running dmeventd,
* returns 'pid=<pid> daemon=<no/yes> exec_method=<direct/systemd>'
* pid - pidfile of running dmeventd
* daemon - running as a daemon or not (foreground)?
* exec_method - "direct" if executed directly or
* "systemd" if executed via systemd
*/
case DM_EVENT_CMD_GET_PARAMETERS:
return _get_parameters(message_data);
default:
return -EINVAL;
}
}
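The comment above documents the GET_PARAMETERS reply as a single line of the form '<id> pid=<pid> daemon=<no/yes> exec_method=<direct/systemd>'. A small client-side sketch of pulling such a reply apart; parse_parameters() is illustrative and not part of libdevmapper-event:

#include <stdio.h>
#include <string.h>

/* Parse "pid=1234 daemon=no exec_method=direct" after the leading id.
 * daemon needs room for "yes"/"no", exec_method for "direct"/"systemd". */
static int parse_parameters(const char *reply, int *pid,
                            char daemon[4], char exec_method[8])
{
        const char *p = strchr(reply, ' ');     /* skip the message id */

        if (!p)
                return 0;

        return sscanf(p + 1, "pid=%d daemon=%3s exec_method=%7s",
                      pid, daemon, exec_method) == 3;
}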
/* Process a request passed from the communication thread. */
@@ -1499,11 +1486,10 @@ static int _do_process_request(struct dm_event_daemon_message *msg)
answer = msg->data;
if (answer) {
msg->size = dm_asprintf(&(msg->data), "%s %s %d", answer,
msg->cmd == DM_EVENT_CMD_DIE ? "DYING" : "HELLO",
(msg->cmd == DM_EVENT_CMD_DIE) ? "DYING" : "HELLO",
DM_EVENT_PROTOCOL_VERSION);
dm_free(answer);
} else
msg->size = 0;
}
} else if (msg->cmd != DM_EVENT_CMD_ACTIVE && !_parse_message(&message_data)) {
stack;
ret = -EINVAL;
@@ -1932,7 +1918,6 @@ static void restart(void)
struct dm_event_daemon_message msg = { 0 };
int i, count = 0;
char *message;
int length;
int version;
const char *e;
@@ -1957,16 +1942,12 @@ static void restart(void)
if (daemon_talk(&fifos, &msg, DM_EVENT_CMD_GET_STATUS, "-", "-", 0, 0))
goto bad;
message = msg.data;
message = strchr(message, ' ');
++ message;
length = strlen(msg.data);
for (i = 0; i < length; ++i) {
message = strchr(msg.data, ' ') + 1;
for (i = 0; msg.data[i]; ++i)
if (msg.data[i] == ';') {
msg.data[i] = 0;
++count;
}
}
if (!(_initial_registrations = dm_malloc(sizeof(char*) * (count + 1)))) {
fprintf(stderr, "Memory allocation registration failed.\n");
@@ -1980,7 +1961,7 @@ static void restart(void)
}
message += strlen(message) + 1;
}
_initial_registrations[count] = 0;
_initial_registrations[count] = NULL;
if (version >= 2) {
if (daemon_talk(&fifos, &msg, DM_EVENT_CMD_GET_PARAMETERS, "-", "-", 0, 0)) {


@@ -20,7 +20,6 @@
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
@@ -57,7 +56,7 @@ static void _dm_event_handler_clear_dev_info(struct dm_event_handler *dmevh)
struct dm_event_handler *dm_event_handler_create(void)
{
struct dm_event_handler *dmevh = NULL;
struct dm_event_handler *dmevh;
if (!(dmevh = dm_zalloc(sizeof(*dmevh)))) {
log_error("Failed to allocate event handler.");
@@ -82,8 +81,7 @@ int dm_event_handler_set_dmeventd_path(struct dm_event_handler *dmevh, const cha
dm_free(dmevh->dmeventd_path);
dmevh->dmeventd_path = dm_strdup(dmeventd_path);
if (!dmevh->dmeventd_path)
if (!(dmevh->dmeventd_path = dm_strdup(dmeventd_path)))
return -ENOMEM;
return 0;
@@ -93,10 +91,10 @@ int dm_event_handler_set_dso(struct dm_event_handler *dmevh, const char *path)
{
if (!path) /* noop */
return 0;
dm_free(dmevh->dso);
dmevh->dso = dm_strdup(path);
if (!dmevh->dso)
if (!(dmevh->dso = dm_strdup(path)))
return -ENOMEM;
return 0;
@@ -109,9 +107,9 @@ int dm_event_handler_set_dev_name(struct dm_event_handler *dmevh, const char *de
_dm_event_handler_clear_dev_info(dmevh);
dmevh->dev_name = dm_strdup(dev_name);
if (!dmevh->dev_name)
if (!(dmevh->dev_name = dm_strdup(dev_name)))
return -ENOMEM;
return 0;
}
@@ -122,9 +120,9 @@ int dm_event_handler_set_uuid(struct dm_event_handler *dmevh, const char *uuid)
_dm_event_handler_clear_dev_info(dmevh);
dmevh->uuid = dm_strdup(uuid);
if (!dmevh->uuid)
if (!(dmevh->uuid = dm_strdup(uuid)))
return -ENOMEM;
return 0;
}
@@ -224,7 +222,6 @@ static int _daemon_read(struct dm_event_fifos *fifos,
unsigned bytes = 0;
int ret, i;
fd_set fds;
struct timeval tval = { 0, 0 };
size_t size = 2 * sizeof(uint32_t); /* status + size */
uint32_t *header = alloca(size);
char *buf = (char *)header;
@@ -232,11 +229,10 @@ static int _daemon_read(struct dm_event_fifos *fifos,
while (bytes < size) {
for (i = 0, ret = 0; (i < 20) && (ret < 1); i++) {
/* Watch daemon read FIFO for input. */
struct timeval tval = { .tv_sec = 1 };
FD_ZERO(&fds);
FD_SET(fifos->server, &fds);
tval.tv_sec = 1;
ret = select(fifos->server + 1, &fds, NULL, NULL,
&tval);
ret = select(fifos->server + 1, &fds, NULL, NULL, &tval);
if (ret < 0 && errno != EINTR) {
log_error("Unable to read from event server");
return 0;
@@ -283,15 +279,13 @@ static int _daemon_read(struct dm_event_fifos *fifos,
static int _daemon_write(struct dm_event_fifos *fifos,
struct dm_event_daemon_message *msg)
{
unsigned bytes = 0;
int ret = 0;
int ret;
fd_set fds;
size_t bytes = 0;
size_t size = 2 * sizeof(uint32_t) + msg->size;
uint32_t *header = alloca(size);
char *buf = (char *)header;
char drainbuf[128];
struct timeval tval = { 0, 0 };
header[0] = htonl(msg->cmd);
header[1] = htonl(msg->size);
@@ -299,17 +293,25 @@ static int _daemon_write(struct dm_event_fifos *fifos,
/* drain the answer fifo */
while (1) {
struct timeval tval = { .tv_usec = 100 };
FD_ZERO(&fds);
FD_SET(fifos->server, &fds);
tval.tv_usec = 100;
ret = select(fifos->server + 1, &fds, NULL, NULL, &tval);
if ((ret < 0) && (errno != EINTR)) {
if (ret < 0) {
if (errno == EINTR)
continue;
log_error("Unable to talk to event daemon");
return 0;
}
if (ret == 0)
break;
ret = read(fifos->server, drainbuf, 127);
ret = read(fifos->server, drainbuf, sizeof(drainbuf));
if (ret < 0) {
if ((errno == EINTR) || (errno == EAGAIN))
continue;
log_error("Unable to talk to event daemon");
return 0;
}
}
while (bytes < size) {
@@ -610,7 +612,7 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
int ret = 1, err;
const char *uuid;
struct dm_task *dmt;
struct dm_event_daemon_message msg = { 0, 0, NULL };
struct dm_event_daemon_message msg = { 0 };
if (!(dmt = _get_device_info(dmevh)))
return_0;
@@ -644,7 +646,7 @@ int dm_event_unregister_handler(const struct dm_event_handler *dmevh)
int ret = 1, err;
const char *uuid;
struct dm_task *dmt;
struct dm_event_daemon_message msg = { 0, 0, NULL };
struct dm_event_daemon_message msg = { 0 };
if (!(dmt = _get_device_info(dmevh)))
return_0;
@@ -695,7 +697,6 @@ static int _parse_message(struct dm_event_daemon_message *msg, char **dso_name,
(*dso_name = _fetch_string(&p, ' ')) &&
(*uuid = _fetch_string(&p, ' '))) {
*evmask = atoi(p);
dm_free(id);
return 0;
}
@@ -750,8 +751,8 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
ret = -ENXIO; /* dmeventd probably gave us bogus uuid back */
goto fail;
}
dmevh->uuid = dm_strdup(reply_uuid);
if (!dmevh->uuid) {
if (!(dmevh->uuid = dm_strdup(reply_uuid))) {
ret = -ENOMEM;
goto fail;
}
@@ -770,8 +771,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
dm_free(reply_uuid);
reply_uuid = NULL;
dmevh->dev_name = dm_strdup(dm_task_get_name(dmt));
if (!dmevh->dev_name) {
if (!(dmevh->dev_name = dm_strdup(dm_task_get_name(dmt)))) {
ret = -ENOMEM;
goto fail;
}
@@ -811,7 +811,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
int dm_event_get_version(struct dm_event_fifos *fifos, int *version) {
char *p;
struct dm_event_daemon_message msg = { 0, 0, NULL };
struct dm_event_daemon_message msg = { 0 };
if (daemon_talk(fifos, &msg, DM_EVENT_CMD_HELLO, NULL, NULL, 0, 0))
return 0;
@@ -856,6 +856,7 @@ int dm_event_get_timeout(const char *device_path, uint32_t *timeout)
if (!device_exists(device_path))
return -ENODEV;
if (!(ret = _do_event(DM_EVENT_CMD_GET_TIMEOUT, &msg, NULL, device_path,
0, 0))) {
char *p = _skip_string(msg.data, ' ');


@@ -1,5 +1,5 @@
#
# Copyright (C) 2010-2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2010-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -24,7 +24,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += @LVM2CMD_LIB@ -ldevmapper $(PTHREAD_LIBS) $(DAEMON_LIBS)
LIBS += @LVM2CMD_LIB@ -ldevmapper $(PTHREAD_LIBS)
install_lvm2: install_lib_shared


@@ -143,7 +143,7 @@ struct dm_pool *dmeventd_lvm2_pool(void)
int dmeventd_lvm2_run(const char *cmdline)
{
return lvm2_run(_lvm_handle, cmdline);
return (lvm2_run(_lvm_handle, cmdline) == LVM2_COMMAND_SUCCEEDED);
}
int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,


@@ -22,11 +22,11 @@
* liblvm2cmd thread-safe so this can go away.
*/
#include "libdevmapper.h"
#ifndef _DMEVENTD_LVMWRAP_H
#define _DMEVENTD_LVMWRAP_H
struct dm_pool;
int dmeventd_lvm2_init(void);
void dmeventd_lvm2_exit(void);
int dmeventd_lvm2_run(const char *cmdline);


@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2005, 2008-2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2005, 2008-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -16,8 +16,8 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
INCLUDES += -I$(top_srcdir)/tools -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/tools -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
INCLUDES += -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
SOURCES = dmeventd_mirror.c
@@ -30,7 +30,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper $(DAEMON_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin


@@ -14,7 +14,6 @@
#include "lib.h"
#include "lvm2cmd.h"
#include "libdevmapper-event.h"
#include "dmeventd_lvm.h"
#include "defaults.h"
@@ -144,9 +143,9 @@ static int _remove_failed_devices(const char *device)
r = dmeventd_lvm2_run(cmd_str);
syslog(LOG_INFO, "Repair of mirrored device %s %s.", device,
(r == LVM2_COMMAND_SUCCEEDED) ? "finished successfully" : "failed");
(r) ? "finished successfully" : "failed");
return (r == LVM2_COMMAND_SUCCEEDED) ? 0 : -1;
return (r) ? 0 : -1;
}
void process_event(struct dm_task *dmt,


@@ -1,5 +1,5 @@
#
# Copyright (C) 2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2011-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -15,8 +15,8 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
INCLUDES += -I$(top_srcdir)/tools -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/tools -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
INCLUDES += -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
SOURCES = dmeventd_raid.c


@@ -14,7 +14,6 @@
#include "lib.h"
#include "lvm2cmd.h"
#include "libdevmapper-event.h"
#include "dmeventd_lvm.h"
@@ -34,16 +33,26 @@ static int run_repair(const char *device)
char cmd_str[CMD_SIZE];
if (!dmeventd_lvm2_command(dmeventd_lvm2_pool(), cmd_str, sizeof(cmd_str),
"lvconvert --config devices{ignore_suspended_devices=1} "
"--repair --use-policies", device))
"lvscan --cache", device))
return -1;
r = dmeventd_lvm2_run(cmd_str);
if (r != LVM2_COMMAND_SUCCEEDED)
if (!r)
syslog(LOG_INFO, "Re-scan of RAID device %s failed.", device);
if (!dmeventd_lvm2_command(dmeventd_lvm2_pool(), cmd_str, sizeof(cmd_str),
"lvconvert --config devices{ignore_suspended_devices=1} "
"--repair --use-policies", device))
return -1;
/* if repair goes OK, report success even if lvscan has failed */
r = dmeventd_lvm2_run(cmd_str);
if (!r)
syslog(LOG_INFO, "Repair of RAID device %s failed.", device);
return (r == LVM2_COMMAND_SUCCEEDED) ? 0 : -1;
return (r) ? 0 : -1;
}
static int _process_raid_event(char *params, const char *device)


@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of the LVM2.
#
@@ -16,8 +16,8 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
INCLUDES += -I$(top_srcdir)/tools -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/tools -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
INCLUDES += -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
SOURCES = dmeventd_snapshot.c
@@ -26,7 +26,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper $(DAEMON_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin


@@ -14,12 +14,12 @@
#include "lib.h"
#include "lvm2cmd.h"
#include "libdevmapper-event.h"
#include "dmeventd_lvm.h"
#include <sys/wait.h>
#include <syslog.h> /* FIXME Replace syslog with multilog */
#include <stdarg.h>
/* FIXME Missing openlog? */
/* First warning when snapshot is 80% full. */
@@ -81,7 +81,7 @@ static int _run(const char *cmd, ...)
static int _extend(const char *cmd)
{
return dmeventd_lvm2_run(cmd) == LVM2_COMMAND_SUCCEEDED;
return dmeventd_lvm2_run(cmd);
}
static void _umount(const char *device, int major, int minor)


@@ -1,5 +1,5 @@
#
# Copyright (C) 2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2011-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -15,8 +15,8 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
INCLUDES += -I$(top_srcdir)/tools -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/tools -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
INCLUDES += -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
SOURCES = dmeventd_thin.c


@@ -14,12 +14,12 @@
#include "lib.h"
#include "lvm2cmd.h"
#include "libdevmapper-event.h"
#include "dmeventd_lvm.h"
#include <sys/wait.h>
#include <syslog.h> /* FIXME Replace syslog with multilog */
#include <stdarg.h>
/* FIXME Missing openlog? */
/* First warning when thin is 80% full. */
@@ -146,7 +146,7 @@ static int _extend(struct dso_state *state)
#if THIN_DEBUG
syslog(LOG_INFO, "dmeventd executes: %s.\n", state->cmd_str);
#endif
return (dmeventd_lvm2_run(state->cmd_str) == LVM2_COMMAND_SUCCEEDED);
return dmeventd_lvm2_run(state->cmd_str);
}
static int _run(const char *cmd, ...)
@@ -218,7 +218,8 @@ static int _umount_device(char *buffer, unsigned major, unsigned minor,
*/
static void _umount(struct dm_task *dmt, const char *device)
{
static const size_t MINORS = 4096;
/* TODO: Convert to use hash to reduce memory usage */
static const size_t MINORS = (1U << 20); /* 20 bit */
struct mountinfo_s data = {
.device = device,
};


@@ -11,7 +11,6 @@
@top_srcdir@/lib/config/config_settings.h
@top_srcdir@/lib/config/defaults.h
@top_srcdir@/lib/datastruct/btree.h
@top_srcdir@/lib/datastruct/lvm-types.h
@top_srcdir@/lib/datastruct/str_list.h
@top_srcdir@/lib/device/dev-cache.h
@top_srcdir@/lib/device/dev-type.h
@@ -42,17 +41,19 @@
@top_builddir@/lib/misc/configure.h
@top_srcdir@/lib/misc/crc.h
@top_srcdir@/lib/misc/intl.h
@top_srcdir@/lib/misc/util.h
@top_srcdir@/lib/misc/last-path-component.h
@top_srcdir@/lib/misc/lib.h
@top_srcdir@/lib/misc/lvm-exec.h
@top_srcdir@/lib/misc/lvm-file.h
@top_srcdir@/lib/misc/lvm-flock.h
@top_srcdir@/lib/misc/lvm-globals.h
@top_srcdir@/lib/misc/lvm-signal.h
@top_srcdir@/lib/misc/lvm-string.h
@top_builddir@/lib/misc/lvm-version.h
@top_srcdir@/lib/misc/lvm-wrappers.h
@top_srcdir@/lib/misc/lvm-percent.h
@top_srcdir@/lib/misc/lvm-wrappers.h
@top_srcdir@/lib/misc/sharedlib.h
@top_srcdir@/lib/misc/util.h
@top_srcdir@/lib/properties/prop_common.h
@top_srcdir@/lib/report/properties.h
@top_srcdir@/lib/report/report.h


@@ -106,7 +106,9 @@ SOURCES =\
misc/crc.c \
misc/lvm-exec.c \
misc/lvm-file.c \
misc/lvm-flock.c \
misc/lvm-globals.c \
misc/lvm-signal.c \
misc/lvm-string.c \
misc/lvm-wrappers.c \
misc/lvm-percent.c \


@@ -200,7 +200,7 @@ int lv_passes_auto_activation_filter(struct cmd_context *cmd, struct logical_vol
}
#ifndef DEVMAPPER_SUPPORT
void set_activation(int act)
void set_activation(int act, int silent)
{
static int warned = 0;
@@ -253,16 +253,16 @@ int lv_check_not_in_use(struct cmd_context *cmd, struct logical_volume *lv,
{
return 0;
}
int lv_snapshot_percent(const struct logical_volume *lv, percent_t *percent)
int lv_snapshot_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return 0;
}
int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
int wait, percent_t *percent, uint32_t *event_nr)
int wait, dm_percent_t *percent, uint32_t *event_nr)
{
return 0;
}
int lv_raid_percent(const struct logical_volume *lv, percent_t *percent)
int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return 0;
}
@@ -282,25 +282,25 @@ int lv_raid_message(const struct logical_volume *lv, const char *msg)
{
return 0;
}
int lv_cache_block_info(const struct logical_volume *lv,
int lv_cache_block_info(struct logical_volume *lv,
uint32_t *chunk_size, uint64_t *dirty_count,
uint64_t *used_count, uint64_t *total_count)
{
return 0;
}
int lv_cache_policy_info(const struct logical_volume *lv,
char **policy_name, int *policy_argc,
const char *const **policy_argv)
int lv_cache_policy_info(struct logical_volume *lv,
const char **policy_name, int *policy_argc,
const char ***policy_argv)
{
return 0;
}
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
percent_t *percent)
dm_percent_t *percent)
{
return 0;
}
int lv_thin_percent(const struct logical_volume *lv, int mapped,
percent_t *percent)
dm_percent_t *percent)
{
return 0;
}
@@ -309,6 +309,10 @@ int lv_thin_pool_transaction_id(const struct logical_volume *lv,
{
return 0;
}
int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id)
{
return 0;
}
int lvs_in_vg_activated(const struct volume_group *vg)
{
return 0;
@@ -431,7 +435,7 @@ int lv_has_target_type(struct dm_pool *mem, struct logical_volume *lv,
static int _activation = 1;
void set_activation(int act)
void set_activation(int act, int silent)
{
if (act == _activation)
return;
@@ -440,9 +444,12 @@ void set_activation(int act)
if (_activation)
log_verbose("Activation enabled. Device-mapper kernel "
"driver will be used.");
else
else if (!silent)
log_warn("WARNING: Activation disabled. No device-mapper "
"interaction will be attempted.");
else
log_verbose("Activation disabled. No device-mapper "
"interaction will be attempted.");
}
int activation(void)
@@ -712,21 +719,20 @@ int lv_check_not_in_use(struct cmd_context *cmd, struct logical_volume *lv,
}
open_count_check_retries = retry_deactivation() ? OPEN_COUNT_CHECK_RETRIES : 1;
while (open_count_check_retries--) {
if (info->open_count > 0) {
if (open_count_check_retries) {
usleep(OPEN_COUNT_CHECK_USLEEP_DELAY);
log_debug_activation("Retrying open_count check for %s/%s.",
lv->vg->name, lv->name);
if (!lv_info(cmd, lv, 0, info, 1, 0))
return -1;
continue;
}
while (info->open_count > 0 && open_count_check_retries--) {
if (!open_count_check_retries) {
log_error("Logical volume %s/%s in use.",
lv->vg->name, lv->name);
return 0;
} else
}
usleep(OPEN_COUNT_CHECK_USLEEP_DELAY);
log_debug_activation("Retrying open_count check for %s/%s.",
lv->vg->name, lv->name);
if (!lv_info(cmd, lv, 0, info, 1, 0)) {
stack; /* device disappeared? */
break;
}
}
return 1;
@@ -759,7 +765,7 @@ int lv_check_transient(struct logical_volume *lv)
/*
* Returns 1 if percent set, else 0 on failure.
*/
int lv_snapshot_percent(const struct logical_volume *lv, percent_t *percent)
int lv_snapshot_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
int r;
struct dev_manager *dm;
@@ -782,7 +788,7 @@ int lv_snapshot_percent(const struct logical_volume *lv, percent_t *percent)
/* FIXME Merge with snapshot_percent */
int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
int wait, percent_t *percent, uint32_t *event_nr)
int wait, dm_percent_t *percent, uint32_t *event_nr)
{
int r;
struct dev_manager *dm;
@@ -790,7 +796,7 @@ int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
/* If mirrored LV is temporarily shrunk to 1 area (= linear),
* it should be considered in-sync. */
if (dm_list_size(&lv->segments) == 1 && first_seg(lv)->area_count == 1) {
*percent = PERCENT_100;
*percent = DM_PERCENT_100;
return 1;
}
@@ -811,7 +817,7 @@ int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
return r;
}
int lv_raid_percent(const struct logical_volume *lv, percent_t *percent)
int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return lv_mirror_percent(lv->vg->cmd, lv, 0, percent, NULL);
}
@@ -1139,7 +1145,7 @@ int lv_cache_policy_info(struct logical_volume *lv,
* Returns 1 if percent set, else 0 on failure.
*/
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
percent_t *percent)
dm_percent_t *percent)
{
int r;
struct dev_manager *dm;
@@ -1165,7 +1171,7 @@ int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
* Returns 1 if percent set, else 0 on failure.
*/
int lv_thin_percent(const struct logical_volume *lv,
int mapped, percent_t *percent)
int mapped, dm_percent_t *percent)
{
int r;
struct dev_manager *dm;
@@ -1267,7 +1273,7 @@ static int _lv_activate_lv(struct logical_volume *lv, struct lv_activate_opts *l
int r;
struct dev_manager *dm;
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, (lv->status & PVMOVE) ? 0 : 1)))
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, !lv_is_pvmove(lv))))
return_0;
if (!(r = dev_manager_activate(dm, lv, laopts)))
@@ -1284,7 +1290,7 @@ static int _lv_preload(struct logical_volume *lv, struct lv_activate_opts *laopt
struct dev_manager *dm;
int old_readonly = laopts->read_only;
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, (lv->status & PVMOVE) ? 0 : 1)))
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, !lv_is_pvmove(lv))))
goto_out;
laopts->read_only = _passes_readonly_filter(lv->vg->cmd, lv);
@@ -1326,7 +1332,7 @@ static int _lv_suspend_lv(struct logical_volume *lv, struct lv_activate_opts *la
* When we are asked to manipulate (normally suspend/resume) the PVMOVE
* device directly, we don't want to touch the devices that use it.
*/
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, (lv->status & PVMOVE) ? 0 : 1)))
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, !lv_is_pvmove(lv))))
return_0;
if (!(r = dev_manager_suspend(dm, lv, laopts, lockfs, flush_required)))
@@ -1866,8 +1872,8 @@ static int _lv_suspend(struct cmd_context *cmd, const char *lvid_s,
* tables for all the changed LVs here, as the relationships
* are not found by walking the new metadata.
*/
if (!(incore_lv->status & LOCKED) &&
(ondisk_lv->status & LOCKED) &&
if (!lv_is_locked(incore_lv) &&
lv_is_locked(ondisk_lv) &&
(pvmove_lv = find_pvmove_lv_in_lv(ondisk_lv))) {
/* Preload all the LVs above the PVMOVE LV */
dm_list_iterate_items(sl, &pvmove_lv->segs_using_this_lv) {
@@ -1945,7 +1951,7 @@ static int _lv_suspend(struct cmd_context *cmd, const char *lvid_s,
* can be called separately for each LV safely.
*/
if ((incore_lv->vg->status & PRECOMMITTED) &&
(incore_lv->status & LOCKED) && find_pvmove_lv_in_lv(incore_lv)) {
lv_is_locked(incore_lv) && find_pvmove_lv_in_lv(incore_lv)) {
if (!_lv_suspend_lv(incore_lv, laopts, lockfs, flush_required)) {
critical_section_dec(cmd, "failed precommitted suspend");
if (pvmove_lv)
@@ -2219,9 +2225,19 @@ static int _lv_activate(struct cmd_context *cmd, const char *lvid_s,
}
if ((!lv->vg->cmd->partial_activation) && (lv->status & PARTIAL_LV)) {
log_error("Refusing activation of partial LV %s. Use --partial to override.",
lv->name);
goto out;
if (!lv_is_raid_type(lv) || !partial_raid_lv_supports_degraded_activation(lv)) {
log_error("Refusing activation of partial LV %s. "
"Use '--activationmode partial' to override.",
display_lvname(lv));
goto out;
}
if (!lv->vg->cmd->degraded_activation) {
log_error("Refusing activation of partial LV %s. "
"Try '--activationmode degraded'.",
display_lvname(lv));
goto out;
}
}
if (lv_has_unknown_segments(lv)) {
@@ -2309,7 +2325,7 @@ int lv_activate_with_filter(struct cmd_context *cmd, const char *lvid_s, int exc
int lv_mknodes(struct cmd_context *cmd, const struct logical_volume *lv)
{
int r = 1;
int r;
if (!lv) {
r = dm_mknodes(NULL);


@@ -52,7 +52,7 @@ struct lv_activate_opts {
* that follows. */
};
void set_activation(int activation);
void set_activation(int activation, int silent);
int activation(void);
int driver_version(char *version, size_t size);
@@ -112,10 +112,10 @@ int lv_check_transient(struct logical_volume *lv);
/*
* Returns 1 if percent has been set, else 0.
*/
int lv_snapshot_percent(const struct logical_volume *lv, percent_t *percent);
int lv_snapshot_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_mirror_percent(struct cmd_context *cmd, const struct logical_volume *lv,
int wait, percent_t *percent, uint32_t *event_nr);
int lv_raid_percent(const struct logical_volume *lv, percent_t *percent);
int wait, dm_percent_t *percent, uint32_t *event_nr);
int lv_raid_percent(const struct logical_volume *lv, dm_percent_t *percent);
int lv_raid_dev_health(const struct logical_volume *lv, char **dev_health);
int lv_raid_mismatch_count(const struct logical_volume *lv, uint64_t *cnt);
int lv_raid_sync_action(const struct logical_volume *lv, char **sync_action);
@@ -127,9 +127,9 @@ int lv_cache_policy_info(struct logical_volume *lv,
const char **policy_name, int *policy_argc,
const char ***policy_argv);
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
percent_t *percent);
dm_percent_t *percent);
int lv_thin_percent(const struct logical_volume *lv, int mapped,
percent_t *percent);
dm_percent_t *percent);
int lv_thin_pool_transaction_id(const struct logical_volume *lv,
uint64_t *transaction_id);
int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id);


@@ -41,6 +41,9 @@ typedef enum {
CLEAN
} action_t;
/* This list must match lib/misc/lvm-string.c:build_dm_uuid(). */
const char *uuid_suffix_list[] = { "pool", "cdata", "cmeta", "tdata", "tmeta", NULL};
struct dev_manager {
struct dm_pool *mem;
@@ -482,11 +485,31 @@ static int _info(const char *dlid, int with_open_count, int with_read_ahead,
struct dm_info *info, uint32_t *read_ahead)
{
int r = 0;
char old_style_dlid[sizeof(UUID_PREFIX) + 2 * ID_LEN];
const char *suffix, *suffix_position;
unsigned i = 0;
/* Check for dlid */
if ((r = _info_run(NULL, dlid, info, read_ahead, 0, with_open_count,
with_read_ahead, 0, 0)) && info->exists)
return 1;
else if ((r = _info_run(NULL, dlid + sizeof(UUID_PREFIX) - 1, info,
/* Check for original version of dlid before the suffixes got added in 2.02.106 */
if ((suffix_position = rindex(dlid, '-'))) {
while ((suffix = uuid_suffix_list[i++])) {
if (strcmp(suffix_position + 1, suffix))
continue;
(void) strncpy(old_style_dlid, dlid, sizeof(old_style_dlid));
old_style_dlid[sizeof(old_style_dlid) - 1] = '\0';
if ((r = _info_run(NULL, old_style_dlid, info, read_ahead, 0, with_open_count,
with_read_ahead, 0, 0)) && info->exists)
return 1;
}
}
/* Check for dlid before UUID_PREFIX was added */
if ((r = _info_run(NULL, dlid + sizeof(UUID_PREFIX) - 1, info,
read_ahead, 0, with_open_count,
with_read_ahead, 0, 0)) && info->exists)
return 1;
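The fallback above exists because, as the in-code comment notes, suffixes were appended to the dm uuid in 2.02.106; when a lookup with the full dlid fails, the code retries with the trailing '-<suffix>' dropped (and finally without the 'LVM-' prefix). A self-contained sketch of the suffix-trimming step; old_style_dlid() here is illustrative, only the suffix list mirrors uuid_suffix_list above:

#include <string.h>

static const char *_suffixes[] = {
        "pool", "cdata", "cmeta", "tdata", "tmeta", NULL
};

/* If dlid ends in "-<known suffix>", copy the part before the dash into
 * buf and return 1; otherwise return 0 and leave buf untouched. */
static int old_style_dlid(const char *dlid, char *buf, size_t buflen)
{
        const char *dash = strrchr(dlid, '-');
        size_t len;
        unsigned i;

        if (!dash)
                return 0;

        for (i = 0; _suffixes[i]; i++)
                if (!strcmp(dash + 1, _suffixes[i])) {
                        len = (size_t) (dash - dlid);
                        if (len >= buflen)
                                return 0;
                        memcpy(buf, dlid, len);
                        buf[len] = '\0';
                        return 1;
                }

        return 0;
}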
@@ -719,28 +742,28 @@ int add_linear_area_to_dtree(struct dm_tree_node *node, uint64_t size, uint32_t
return 1;
}
static percent_range_t _combine_percent(percent_t a, percent_t b,
uint32_t numerator, uint32_t denominator)
static dm_percent_range_t _combine_percent(dm_percent_t a, dm_percent_t b,
uint32_t numerator, uint32_t denominator)
{
if (a == PERCENT_MERGE_FAILED || b == PERCENT_MERGE_FAILED)
return PERCENT_MERGE_FAILED;
if (a == LVM_PERCENT_MERGE_FAILED || b == LVM_PERCENT_MERGE_FAILED)
return LVM_PERCENT_MERGE_FAILED;
if (a == PERCENT_INVALID || b == PERCENT_INVALID)
return PERCENT_INVALID;
if (a == DM_PERCENT_INVALID || b == DM_PERCENT_INVALID)
return DM_PERCENT_INVALID;
if (a == PERCENT_100 && b == PERCENT_100)
return PERCENT_100;
if (a == DM_PERCENT_100 && b == DM_PERCENT_100)
return DM_PERCENT_100;
if (a == PERCENT_0 && b == PERCENT_0)
return PERCENT_0;
if (a == DM_PERCENT_0 && b == DM_PERCENT_0)
return DM_PERCENT_0;
return (percent_range_t) make_percent(numerator, denominator);
return (dm_percent_range_t) dm_make_percent(numerator, denominator);
}
static int _percent_run(struct dev_manager *dm, const char *name,
const char *dlid,
const char *target_type, int wait,
const struct logical_volume *lv, percent_t *overall_percent,
const struct logical_volume *lv, dm_percent_t *overall_percent,
uint32_t *event_nr, int fail_if_percent_unsupported)
{
int r = 0;
@@ -754,7 +777,7 @@ static int _percent_run(struct dev_manager *dm, const char *name,
struct lv_segment *seg = NULL;
struct segment_type *segtype;
int first_time = 1;
percent_t percent = PERCENT_INVALID;
dm_percent_t percent = DM_PERCENT_INVALID;
uint64_t total_numerator = 0, total_denominator = 0;
@@ -829,12 +852,12 @@ static int _percent_run(struct dev_manager *dm, const char *name,
if (first_time) {
/* above ->target_percent() was not executed! */
/* FIXME why return PERCENT_100 et. al. in this case? */
*overall_percent = PERCENT_100;
*overall_percent = DM_PERCENT_100;
if (fail_if_percent_unsupported)
goto_out;
}
log_debug_activation("LV percent: %f", percent_to_float(*overall_percent));
log_debug_activation("LV percent: %f", dm_percent_to_float(*overall_percent));
r = 1;
out:
@@ -844,7 +867,7 @@ static int _percent_run(struct dev_manager *dm, const char *name,
static int _percent(struct dev_manager *dm, const char *name, const char *dlid,
const char *target_type, int wait,
const struct logical_volume *lv, percent_t *percent,
const struct logical_volume *lv, dm_percent_t *percent,
uint32_t *event_nr, int fail_if_percent_unsupported)
{
if (dlid && *dlid) {
@@ -988,7 +1011,7 @@ void dev_manager_exit(void)
int dev_manager_snapshot_percent(struct dev_manager *dm,
const struct logical_volume *lv,
percent_t *percent)
dm_percent_t *percent)
{
const struct logical_volume *snap_lv;
char *name;
@@ -1041,7 +1064,7 @@ int dev_manager_snapshot_percent(struct dev_manager *dm,
/* FIXME Cope with more than one target */
int dev_manager_mirror_percent(struct dev_manager *dm,
const struct logical_volume *lv, int wait,
percent_t *percent, uint32_t *event_nr)
dm_percent_t *percent, uint32_t *event_nr)
{
char *name;
const char *dlid;
@@ -1127,7 +1150,7 @@ int dev_manager_raid_message(struct dev_manager *dm,
struct dm_task *dmt;
const char *layer = lv_layer(lv);
if (!(lv->status & RAID)) {
if (!lv_is_raid(lv)) {
log_error(INTERNAL_ERROR "%s/%s is not a RAID logical volume",
lv->vg->name, lv->name);
return 0;
@@ -1325,7 +1348,7 @@ out:
int dev_manager_thin_pool_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int metadata, percent_t *percent)
int metadata, dm_percent_t *percent)
{
char *name;
const char *dlid;
@@ -1348,7 +1371,7 @@ int dev_manager_thin_pool_percent(struct dev_manager *dm,
int dev_manager_thin_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int mapped, percent_t *percent)
int mapped, dm_percent_t *percent)
{
char *name;
const char *dlid;
@@ -1703,114 +1726,153 @@ static int _add_partial_replicator_to_dtree(struct dev_manager *dm,
return 1;
}
struct thin_cb_data {
const struct logical_volume *pool_lv;
struct pool_cb_data {
struct dev_manager *dm;
const struct logical_volume *pool_lv;
int skip_zero; /* to skip zeroed device header (check first 64B) */
int exec; /* which binary to call */
int opts;
const char *defaults;
const char *global;
};
static int _thin_pool_callback(struct dm_tree_node *node,
dm_node_callback_t type, void *cb_data)
static int _pool_callback(struct dm_tree_node *node,
dm_node_callback_t type, void *cb_data)
{
int ret, status;
const struct thin_cb_data *data = cb_data;
const char *dmdir = dm_dir();
int ret, status, fd;
char *split;
const struct dm_config_node *cn;
const struct dm_config_value *cv;
const char *thin_check =
find_config_tree_str_allow_empty(data->pool_lv->vg->cmd, global_thin_check_executable_CFG, NULL);
const struct logical_volume *mlv = first_seg(data->pool_lv)->metadata_lv;
size_t len = strlen(dmdir) + 2 * (strlen(mlv->vg->name) + strlen(mlv->name)) + 3;
char meta_path[len];
const struct pool_cb_data *data = cb_data;
const struct logical_volume *pool_lv = data->pool_lv;
const struct logical_volume *mlv = first_seg(pool_lv)->metadata_lv;
long buf[64 / sizeof(long)]; /* buffer for short disk header (64B) */
int args = 0;
const char *argv[19]; /* Max supported 15 args */
char *split, *dm_name;
const char *argv[19] = { /* Max supported 15 args */
find_config_tree_str_allow_empty(pool_lv->vg->cmd, data->exec, NULL) /* argv[0] */
};
if (!thin_check[0])
if (!*argv[0])
return 1; /* Checking disabled */
if (!(dm_name = dm_build_dm_name(data->dm->mem, mlv->vg->name,
mlv->name, NULL)) ||
(dm_snprintf(meta_path, len, "%s/%s", dmdir, dm_name) < 0)) {
log_error("Failed to build thin metadata path.");
return 0;
}
if ((cn = find_config_tree_node(mlv->vg->cmd, global_thin_check_options_CFG, NULL))) {
if ((cn = find_config_tree_node(mlv->vg->cmd, data->opts, NULL))) {
for (cv = cn->v; cv && args < 16; cv = cv->next) {
if (cv->type != DM_CFG_STRING) {
log_error("Invalid string in config file: "
"global/thin_check_options");
"global/%s_check_options",
data->global);
return 0;
}
argv[++args] = cv->v.str;
}
} else {
/* Use default options (no support for options with spaces) */
if (!(split = dm_pool_strdup(data->dm->mem, DEFAULT_THIN_CHECK_OPTIONS))) {
log_error("Failed to duplicate thin check string.");
if (!(split = dm_pool_strdup(data->dm->mem, data->defaults))) {
log_error("Failed to duplicate defaults.");
return 0;
}
args = dm_split_words(split, 16, 0, (char**) argv + 1);
}
if (args == 16) {
log_error("Too many options for thin check command.");
log_error("Too many options for %s command.", argv[0]);
return 0;
}
argv[0] = thin_check;
argv[++args] = meta_path;
argv[++args] = NULL;
if (!(argv[++args] = lv_dmpath_dup(data->dm->mem, mlv))) {
log_error("Failed to build pool metadata path.");
return 0;
}
if (!(ret = exec_cmd(data->pool_lv->vg->cmd, (const char * const *)argv,
if (data->skip_zero) {
if ((fd = open(argv[args], O_RDONLY)) < 0) {
log_sys_error("open", argv[args]);
return 0;
}
/* let's assume there is no problem to read 64 bytes */
if (read(fd, buf, sizeof(buf)) < sizeof(buf)) {
log_sys_error("read", argv[args]);
return 0;
}
for (ret = 0; ret < DM_ARRAY_SIZE(buf); ++ret)
if (buf[ret])
break;
if (close(fd))
log_sys_error("close", argv[args]);
if (ret == DM_ARRAY_SIZE(buf)) {
log_debug("%s skipped, detected empty disk header on %s.",
argv[0], argv[args]);
return 1;
}
}
if (!(ret = exec_cmd(pool_lv->vg->cmd, (const char * const *)argv,
&status, 0))) {
switch (type) {
case DM_NODE_CALLBACK_PRELOADED:
log_err_once("Check of thin pool %s/%s failed (status:%d). "
"Manual repair required (thin_dump --repair %s)!",
data->pool_lv->vg->name, data->pool_lv->name,
status, meta_path);
log_err_once("Check of pool %s failed (status:%d). "
"Manual repair required!",
display_lvname(pool_lv), status);
break;
default:
log_warn("WARNING: Integrity check of metadata for thin pool "
"%s/%s failed.",
data->pool_lv->vg->name, data->pool_lv->name);
log_warn("WARNING: Integrity check of metadata for pool "
"%s failed.", display_lvname(pool_lv));
}
/*
* FIXME: What should we do here??
*
* Maybe mark the node, so it's not activating
* as thin_pool but as error/linear and let the
* as pool but as error/linear and let the
* dm tree resolve the issue.
*/
}
dm_pool_free(data->dm->mem, dm_name);
return ret;
}
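The skip_zero branch above short-circuits the external check tool when the first 64 bytes of the pool metadata device still read as zeros, i.e. the metadata has never been written. A stand-alone sketch of that probe with plain open()/read(); error handling is reduced to "treat any failure as not zero":

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Return 1 if the first 64 bytes of the device at 'path' read as zeros. */
static int metadata_header_is_zero(const char *path)
{
        char buf[64], zeros[64] = { 0 };
        int fd, zero = 0;

        if ((fd = open(path, O_RDONLY)) < 0)
                return 0;

        if (read(fd, buf, sizeof(buf)) == (ssize_t) sizeof(buf))
                zero = !memcmp(buf, zeros, sizeof(zeros));

        (void) close(fd);
        return zero;
}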
static int _thin_pool_register_callback(struct dev_manager *dm,
struct dm_tree_node *node,
const struct logical_volume *lv)
static int _pool_register_callback(struct dev_manager *dm,
struct dm_tree_node *node,
const struct logical_volume *lv)
{
struct thin_cb_data *data;
struct pool_cb_data *data;
/* Skip metadata testing for unused pool. */
if (!first_seg(lv)->transaction_id ||
((first_seg(lv)->transaction_id == 1) &&
pool_has_message(first_seg(lv), NULL, 0)))
/* Skip metadata testing for unused thin pool. */
if (lv_is_thin_pool(lv) &&
(!first_seg(lv)->transaction_id ||
((first_seg(lv)->transaction_id == 1) &&
pool_has_message(first_seg(lv), NULL, 0))))
return 1;
if (!(data = dm_pool_alloc(dm->mem, sizeof(*data)))) {
if (!(data = dm_pool_zalloc(dm->mem, sizeof(*data)))) {
log_error("Failed to allocate path for callback.");
return 0;
}
data->dm = dm;
data->pool_lv = lv;
dm_tree_node_set_callback(node, _thin_pool_callback, data);
if (lv_is_thin_pool(lv)) {
data->pool_lv = lv;
data->skip_zero = 1;
data->exec = global_thin_check_executable_CFG;
data->opts = global_thin_check_options_CFG;
data->defaults = DEFAULT_THIN_CHECK_OPTIONS;
data->global = "thin";
} else if (lv_is_cache(lv)) { /* cache pool */
data->pool_lv = first_seg(lv)->pool_lv;
data->skip_zero = dm->activation;
data->exec = global_cache_check_executable_CFG;
data->opts = global_cache_check_options_CFG;
data->defaults = DEFAULT_CACHE_CHECK_OPTIONS;
data->global = "cache";
} else {
log_error(INTERNAL_ERROR "Registering unsupported pool callback.");
return 0;
}
dm_tree_node_set_callback(node, _pool_callback, data);
return 1;
}
@@ -1825,7 +1887,7 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
struct seg_list *sl;
struct dm_list *snh;
struct lv_segment *seg;
struct dm_tree_node *thin_node;
struct dm_tree_node *node;
const char *uuid;
if (lv_is_cache_pool(lv)) {
@@ -1857,8 +1919,8 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
/* FIXME Implement dm_tree_node_skip_childrens optimisation */
if (!(uuid = build_dm_uuid(dm->mem, lv, lv_layer(lv))))
return_0;
if ((thin_node = dm_tree_find_node_by_uuid(dtree, uuid)))
dm_tree_node_skip_childrens(thin_node, 1);
if ((node = dm_tree_find_node_by_uuid(dtree, uuid)))
dm_tree_node_skip_childrens(node, 1);
#endif
}
@@ -1887,8 +1949,21 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
/* TODO: extend _cached_info() to return dnode */
if (!(uuid = build_dm_uuid(dm->mem, lv, lv_layer(lv))))
return_0;
if ((thin_node = dm_tree_find_node_by_uuid(dtree, uuid)) &&
!_thin_pool_register_callback(dm, thin_node, lv))
if ((node = dm_tree_find_node_by_uuid(dtree, uuid)) &&
!_pool_register_callback(dm, node, lv))
return_0;
}
}
if (!origin_only && lv_is_cache(lv)) {
if (!dm->activation) {
/* Setup callback for non-activation partial tree */
/* Activation gets own callback when needed */
/* TODO: extend _cached_info() to return dnode */
if (!(uuid = build_dm_uuid(dm->mem, lv, lv_layer(lv))))
return_0;
if ((node = dm_tree_find_node_by_uuid(dtree, uuid)) &&
!_pool_register_callback(dm, node, lv))
return_0;
}
}
@@ -1903,7 +1978,7 @@ static int _add_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
return_0;
/* Add any LVs referencing a PVMOVE LV unless told not to. */
if (dm->track_pvmove_deps && lv->status & PVMOVE) {
if (dm->track_pvmove_deps && lv_is_pvmove(lv)) {
dm->track_pvmove_deps = 0;
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (!_add_lv_to_dtree(dm, dtree, sl->seg->lv, origin_only))
@@ -1959,6 +2034,8 @@ static struct dm_tree *_create_partial_dtree(struct dev_manager *dm, struct logi
return NULL;
}
dm_tree_set_optional_uuid_suffixes(dtree, &uuid_suffix_list[0]);
if (!_add_lv_to_dtree(dm, dtree, lv, (lv_is_origin(lv) || lv_is_thin_volume(lv)) ? origin_only : 0))
goto_bad;
@@ -2067,9 +2144,12 @@ int add_areas_line(struct dev_manager *dm, struct lv_segment *seg,
stat(name, &info) < 0 || !S_ISBLK(info.st_mode))) ||
(seg_type(seg, s) == AREA_LV && !seg_lv(seg, s))) {
if (!seg->lv->vg->cmd->partial_activation) {
log_error("Aborting. LV %s is now incomplete "
"and --partial was not specified.", seg->lv->name);
return 0;
if (!seg->lv->vg->cmd->degraded_activation ||
!lv_is_raid_type(seg->lv)) {
log_error("Aborting. LV %s is now incomplete "
"and '--activationmode partial' was not specified.", seg->lv->name);
return 0;
}
}
if (!_add_error_area(dm, node, seg, s))
return_0;
@@ -2250,7 +2330,7 @@ static int _add_target_to_dtree(struct dev_manager *dm,
&dm->target_state, seg,
laopts, dnode,
extent_size * seg->len,
&dm-> pvmove_mirror_count);
&dm->pvmove_mirror_count);
}
static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
@@ -2436,6 +2516,7 @@ static int _add_segment_to_dtree(struct dev_manager *dm,
return 1;
}
#if 0
static int _set_udev_flags_for_children(struct dev_manager *dm,
struct volume_group *vg,
struct dm_tree_node *dnode)
@@ -2486,6 +2567,7 @@ static int _set_udev_flags_for_children(struct dev_manager *dm,
return 1;
}
#endif
static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
struct logical_volume *lv, struct lv_activate_opts *laopts,
@@ -2628,7 +2710,11 @@ static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
/* Setup thin pool callback */
if (lv_is_thin_pool(lv) && layer &&
!_thin_pool_register_callback(dm, dnode, lv))
!_pool_register_callback(dm, dnode, lv))
return_0;
if (lv_is_cache(lv) &&
!_pool_register_callback(dm, dnode, lv))
return_0;
if (read_ahead == DM_READ_AHEAD_AUTO) {
@@ -2643,13 +2729,16 @@ static int _add_new_lv_to_dtree(struct dev_manager *dm, struct dm_tree *dtree,
dm_tree_node_set_read_ahead(dnode, read_ahead, read_ahead_flags);
/* Add any LVs referencing a PVMOVE LV unless told not to */
if (dm->track_pvmove_deps && (lv->status & PVMOVE))
if (dm->track_pvmove_deps && lv_is_pvmove(lv))
dm_list_iterate_items(sl, &lv->segs_using_this_lv)
if (!_add_new_lv_to_dtree(dm, dtree, sl->seg->lv, laopts, NULL))
return_0;
#if 0
/* Should not be needed, will be removed */
if (!_set_udev_flags_for_children(dm, lv->vg, dnode))
return_0;
#endif
return 1;
}
@@ -2811,6 +2900,8 @@ static int _tree_action(struct dev_manager *dm, struct logical_volume *lv,
/* Only process nodes with uuid of "LVM-" plus VG id. */
switch(action) {
case CLEAN:
if (retry_deactivation())
dm_tree_retry_remove(root);
/* Deactivate any unused non-toplevel nodes */
if (!_clean_tree(dm, root, laopts->origin_only ? dlid : NULL))
goto_out;
@@ -2826,8 +2917,7 @@ static int _tree_action(struct dev_manager *dm, struct logical_volume *lv,
break;
case SUSPEND:
dm_tree_skip_lockfs(root);
if (!dm->flush_required && !seg_is_raid(first_seg(lv)) &&
(lv->status & MIRRORED) && !(lv->status & PVMOVE))
if (!dm->flush_required && lv_is_mirror(lv) && !lv_is_pvmove(lv))
dm_tree_use_no_flush_suspend(root);
/* Fall through */
case SUSPEND_WITH_LOCKFS:
@@ -2935,6 +3025,8 @@ int dev_manager_device_uses_vg(struct device *dev,
return r;
}
dm_tree_set_optional_uuid_suffixes(dtree, &uuid_suffix_list[0]);
if (!dm_tree_add_dev(dtree, (uint32_t) MAJOR(dev->dev), (uint32_t) MINOR(dev->dev))) {
log_error("Failed to add device %s (%" PRIu32 ":%" PRIu32") to dtree",
dev_name(dev), (uint32_t) MAJOR(dev->dev), (uint32_t) MINOR(dev->dev));


@@ -50,10 +50,10 @@ int dev_manager_info(struct dm_pool *mem, const struct logical_volume *lv,
struct dm_info *info, uint32_t *read_ahead);
int dev_manager_snapshot_percent(struct dev_manager *dm,
const struct logical_volume *lv,
percent_t *percent);
dm_percent_t *percent);
int dev_manager_mirror_percent(struct dev_manager *dm,
const struct logical_volume *lv, int wait,
percent_t *percent, uint32_t *event_nr);
dm_percent_t *percent, uint32_t *event_nr);
int dev_manager_raid_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct dm_status_raid **status);
@@ -69,10 +69,10 @@ int dev_manager_thin_pool_status(struct dev_manager *dm,
int noflush);
int dev_manager_thin_pool_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int metadata, percent_t *percent);
int metadata, dm_percent_t *percent);
int dev_manager_thin_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int mapped, percent_t *percent);
int mapped, dm_percent_t *percent);
int dev_manager_thin_device_id(struct dev_manager *dm,
const struct logical_volume *lv,
uint32_t *device_id);


@@ -1062,7 +1062,6 @@ static int _drop_vginfo(struct lvmcache_info *info, struct lvmcache_vginfo *vgin
return 1;
}
/* Unused
void lvmcache_del(struct lvmcache_info *info)
{
if (info->dev->pvid[0] && _pvid_hash)
@@ -1071,11 +1070,11 @@ void lvmcache_del(struct lvmcache_info *info)
_drop_vginfo(info, info->vginfo);
info->label->labeller->ops->destroy_label(info->label->labeller,
info->label);
info->label);
dm_free(info);
return;
} */
}
static int _lvmcache_update_pvid(struct lvmcache_info *info, const char *pvid)
{
@@ -1708,7 +1707,7 @@ static int _get_pv_if_in_vg(struct lvmcache_info *info,
* lvmcache_label_scan() and drop cached
* vginfo so make a local copy of string.
*/
strcpy(vgname, info->vginfo->vgname);
(void) dm_strncpy(vgname, info->vginfo->vgname, sizeof(vgname));
memcpy(vgid, info->vginfo->vgid, sizeof(vgid));
if (get_pv_from_vg_by_id(info->fmt, vgname, vgid,


@@ -88,17 +88,17 @@ int lvmcache_vgname_is_locked(const char *vgname);
void lvmcache_seed_infos_from_lvmetad(struct cmd_context *cmd);
/* Returns list of struct str_lists containing pool-allocated copy of vgnames */
/* Returns list of struct dm_str_list containing pool-allocated copy of vgnames */
/* If include_internal is not set, return only proper vg names. */
struct dm_list *lvmcache_get_vgnames(struct cmd_context *cmd,
int include_internal);
/* Returns list of struct str_lists containing pool-allocated copy of vgids */
/* Returns list of struct dm_str_list containing pool-allocated copy of vgids */
/* If include_internal is not set, return only proper vg ids. */
struct dm_list *lvmcache_get_vgids(struct cmd_context *cmd,
int include_internal);
/* Returns list of struct str_lists containing pool-allocated copy of pvids */
/* Returns list of struct dm_str_list containing pool-allocated copy of pvids */
struct dm_list *lvmcache_get_pvids(struct cmd_context *cmd, const char *vgname,
const char *vgid);

lib/cache/lvmetad.c

@@ -21,11 +21,12 @@
#include "lvmetad-client.h"
#include "format-text.h" // TODO for disk_locn, used as a DA representation
#include "crc.h"
#include "lvm-signal.h"
#define SCAN_TIMEOUT_SECONDS 80
#define MAX_RESCANS 10 /* Maximum number of times to scan all PVs and retry if the daemon returns a token mismatch error */
static daemon_handle _lvmetad;
static daemon_handle _lvmetad = { .error = 0 };
static int _lvmetad_use = 0;
static int _lvmetad_connected = 0;
@@ -67,12 +68,12 @@ void lvmetad_connect_or_warn(void)
if (!_lvmetad_use)
return;
if (!_lvmetad_connected)
if (!_lvmetad_connected && !_lvmetad.error) {
_lvmetad_connect();
if ((_lvmetad.socket_fd < 0 || _lvmetad.error))
log_warn("WARNING: Failed to connect to lvmetad: %s. Falling back to internal scanning.",
strerror(_lvmetad.error));
if ((_lvmetad.socket_fd < 0 || _lvmetad.error))
log_warn("WARNING: Failed to connect to lvmetad. Falling back to internal scanning.");
}
}
int lvmetad_used(void)
@@ -93,21 +94,15 @@ int lvmetad_socket_present(void)
int lvmetad_active(void)
{
if (!_lvmetad_use)
return 0;
if (!_lvmetad_connected)
_lvmetad_connect();
if ((_lvmetad.socket_fd < 0 || _lvmetad.error))
log_debug_lvmetad("Failed to connect to lvmetad: %s.", strerror(_lvmetad.error));
lvmetad_connect_or_warn();
return _lvmetad_connected;
}
void lvmetad_set_active(int active)
{
_lvmetad_use = active;
if (!active && lvmetad_active())
lvmetad_disconnect();
}
/*
@@ -124,7 +119,7 @@ void lvmetad_set_token(const struct dm_config_value *filter)
filter = filter->next;
}
if (!dm_asprintf(&_lvmetad_token, "filter:%u", ft))
if (dm_asprintf(&_lvmetad_token, "filter:%u", ft) < 0)
log_warn("WARNING: Failed to set lvmetad token. Out of memory?");
}
@@ -298,7 +293,11 @@ static struct lvmcache_info *_pv_populate_lvmcache(struct cmd_context *cmd,
dev = dev_cache_get_by_devt(fallback, cmd->filter);
if (!dev) {
log_error("No device found for PV %s.", pvid_txt);
dev = dev_cache_get_by_devt(devt, cmd->lvmetad_filter);
if (!dev)
log_error("No device found for PV %s.", pvid_txt);
else
log_warn("WARNING: Device %s for PV %s rejected by a filter.", dev_name(dev), pvid_txt);
return NULL;
}


@@ -74,11 +74,7 @@ static int _cache_pool_text_import(struct lv_segment *seg,
if (dm_config_has_node(sn, "cache_mode")) {
if (!(str = dm_config_find_str(sn, "cache_mode", NULL)))
return SEG_LOG_ERROR("cache_mode must be a string in");
if (!strcmp(str, "writethrough"))
seg->feature_flags |= DM_CACHE_FEATURE_WRITETHROUGH;
else if (!strcmp(str, "writeback"))
seg->feature_flags |= DM_CACHE_FEATURE_WRITEBACK;
else
if (!get_cache_mode(str, &seg->feature_flags))
return SEG_LOG_ERROR("Unknown cache_mode in");
}
@@ -216,35 +212,6 @@ static int _cache_pool_text_export(const struct lv_segment *seg,
return 1;
}
static int _cache_pool_add_target_line(struct dev_manager *dm,
struct dm_pool *mem,
struct cmd_context *cmd __attribute__((unused)),
void **target_state __attribute__((unused)),
struct lv_segment *seg,
const struct lv_activate_opts *laopts __attribute__((unused)),
struct dm_tree_node *node, uint64_t len,
uint32_t *pvmove_mirror_count __attribute__((unused)))
{
/*
* This /could/ be directed at _cdata, but I prefer
* not to give a user direct access to a sub-LV via
* this cache_pool.
*/
return dm_tree_node_add_error_target(node, len);
}
static int _modules_needed(struct dm_pool *mem,
const struct lv_segment *seg __attribute__((unused)),
struct dm_list *modules)
{
if (!str_list_add(mem, modules, "cache")) {
log_error("cache module string list allocation failed");
return 0;
}
return 1;
}
static void _destroy(struct segment_type *segtype)
{
dm_free((void *) segtype);
@@ -281,6 +248,17 @@ static int _target_present(struct cmd_context *cmd,
return _cache_present;
}
static int _modules_needed(struct dm_pool *mem,
const struct lv_segment *seg __attribute__((unused)),
struct dm_list *modules)
{
if (!str_list_add(mem, modules, "cache")) {
log_error("String list allocation failed for cache module.");
return 0;
}
return 1;
}
#endif /* DEVMAPPER_SUPPORT */
static struct segtype_handler _cache_pool_ops = {
@@ -288,13 +266,12 @@ static struct segtype_handler _cache_pool_ops = {
.text_import = _cache_pool_text_import,
.text_import_area_count = _cache_pool_text_import_area_count,
.text_export = _cache_pool_text_export,
.add_target_line = _cache_pool_add_target_line,
#ifdef DEVMAPPER_SUPPORT
.target_present = _target_present,
.modules_needed = _modules_needed,
# ifdef DMEVENTD
# endif /* DMEVENTD */
#endif
.modules_needed = _modules_needed,
.destroy = _destroy,
};
@@ -348,6 +325,7 @@ static int _cache_text_export(const struct lv_segment *seg, struct formatter *f)
return 1;
}
#ifdef DEVMAPPER_SUPPORT
static int _cache_add_target_line(struct dev_manager *dm,
struct dm_pool *mem,
struct cmd_context *cmd __attribute__((unused)),
@@ -384,19 +362,20 @@ static int _cache_add_target_line(struct dev_manager *dm,
return add_areas_line(dm, seg, node, 0u, seg->area_count);
}
#endif /* DEVMAPPER_SUPPORT */
static struct segtype_handler _cache_ops = {
.name = _name,
.text_import = _cache_text_import,
.text_import_area_count = _cache_text_import_area_count,
.text_export = _cache_text_export,
.add_target_line = _cache_add_target_line,
#ifdef DEVMAPPER_SUPPORT
.add_target_line = _cache_add_target_line,
.target_present = _target_present,
.modules_needed = _modules_needed,
# ifdef DMEVENTD
# endif /* DMEVENTD */
#endif
.modules_needed = _modules_needed,
.destroy = _destroy,
};
@@ -445,4 +424,3 @@ int init_cache_segtypes(struct cmd_context *cmd,
return 1;
}


@@ -256,24 +256,32 @@ static int _check_disable_udev(const char *msg) {
return 0;
}
static int _check_config_by_source(struct cmd_context *cmd, config_source_t source)
{
struct dm_config_tree *cft;
struct cft_check_handle *handle;
if (!(cft = get_config_tree_by_source(cmd, source)) ||
!(handle = get_config_tree_check_handle(cmd, cft)))
return 1;
return config_def_check(handle);
}
static int _check_config(struct cmd_context *cmd)
{
int abort_on_error;
if (!find_config_tree_bool(cmd, config_checks_CFG, NULL))
return 1;
if (!cmd->cft_check_handle) {
if (!(cmd->cft_check_handle = dm_pool_zalloc(cmd->libmem, sizeof(*cmd->cft_check_handle)))) {
log_error("Configuration check handle allocation failed.");
return 0;
}
}
abort_on_error = find_config_tree_bool(cmd, config_abort_on_errors_CFG, NULL);
cmd->cft_check_handle->cft = cmd->cft;
cmd->cft_check_handle->cmd = cmd;
if (!config_def_check(cmd->cft_check_handle) &&
find_config_tree_bool(cmd, config_abort_on_errors_CFG, NULL)) {
log_error("LVM configuration invalid.");
if ((!_check_config_by_source(cmd, CONFIG_STRING) ||
!_check_config_by_source(cmd, CONFIG_MERGED_FILES) ||
!_check_config_by_source(cmd, CONFIG_FILE)) &&
abort_on_error) {
log_error("LVM_ configuration invalid.");
return 0;
}
@@ -282,14 +290,16 @@ static int _check_config(struct cmd_context *cmd)
int process_profilable_config(struct cmd_context *cmd) {
if (!(cmd->default_settings.unit_factor =
units_to_bytes(find_config_tree_str(cmd, global_units_CFG, NULL),
&cmd->default_settings.unit_type))) {
dm_units_to_factor(find_config_tree_str(cmd, global_units_CFG, NULL),
&cmd->default_settings.unit_type, 1, NULL))) {
log_error("Invalid units specification");
return 0;
}
cmd->si_unit_consistency = find_config_tree_bool(cmd, global_si_unit_consistency_CFG, NULL);
cmd->report_binary_values_as_numeric = find_config_tree_bool(cmd, report_binary_values_as_numeric_CFG, NULL);
cmd->default_settings.suffix = find_config_tree_bool(cmd, global_suffix_CFG, NULL);
cmd->report_list_item_separator = find_config_tree_str(cmd, report_list_item_separator_CFG, NULL);
return 1;
}
@@ -348,7 +358,7 @@ static int _process_config(struct cmd_context *cmd)
/* activation? */
cmd->default_settings.activation = find_config_tree_bool(cmd, global_activation_CFG, NULL);
set_activation(cmd->default_settings.activation);
set_activation(cmd->default_settings.activation, 0);
cmd->auto_set_activation_skip = find_config_tree_bool(cmd, activation_auto_set_activation_skip_CFG, NULL);
@@ -571,7 +581,7 @@ static int _load_config_file(struct cmd_context *cmd, const char *tag)
return 0;
}
if (!(cfl->cft = config_file_open_and_read(config_file, CONFIG_FILE)))
if (!(cfl->cft = config_file_open_and_read(config_file, CONFIG_FILE, cmd)))
return_0;
dm_list_add(&cmd->config_files, &cfl->list);
@@ -607,7 +617,7 @@ static int _init_lvm_conf(struct cmd_context *cmd)
/* Read any additional config files */
static int _init_tag_configs(struct cmd_context *cmd)
{
struct str_list *sl;
struct dm_str_list *sl;
/* Tag list may grow while inside this loop */
dm_list_iterate_items(sl, &cmd->tags) {
@@ -621,21 +631,24 @@ static int _init_tag_configs(struct cmd_context *cmd)
static int _init_profiles(struct cmd_context *cmd)
{
const char *dir;
struct profile_params *pp;
if (!(pp = dm_pool_zalloc(cmd->libmem, sizeof(*pp)))) {
log_error("profile_params alloc failed");
return 0;
}
if (!(dir = find_config_tree_str(cmd, config_profile_dir_CFG, NULL)))
return_0;
pp->dir = dm_pool_strdup(cmd->libmem, dir);
dm_list_init(&pp->profiles_to_load);
dm_list_init(&pp->profiles);
if (!cmd->profile_params) {
if (!(cmd->profile_params = dm_pool_zalloc(cmd->libmem, sizeof(*cmd->profile_params)))) {
log_error("profile_params alloc failed");
return 0;
}
dm_list_init(&cmd->profile_params->profiles_to_load);
dm_list_init(&cmd->profile_params->profiles);
}
if (!(dm_strncpy(cmd->profile_params->dir, dir, sizeof(cmd->profile_params->dir)))) {
log_error("_init_profiles: dm_strncpy failed");
return 0;
}
cmd->profile_params = pp;
return 1;
}
@@ -685,7 +698,7 @@ static void _destroy_config(struct cmd_context *cmd)
{
struct config_tree_list *cfl;
struct dm_config_tree *cft;
struct profile *profile;
struct profile *profile, *tmp_profile;
/*
* Configuration cascade:
@@ -704,13 +717,19 @@ static void _destroy_config(struct cmd_context *cmd)
/* CONFIG_PROFILE */
if (cmd->profile_params) {
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
dm_list_iterate_items(profile, &cmd->profile_params->profiles_to_load)
remove_config_tree_by_source(cmd, CONFIG_PROFILE_COMMAND);
remove_config_tree_by_source(cmd, CONFIG_PROFILE_METADATA);
/*
* Destroy config trees for any loaded profiles and
* move these profiles to profile_to_load list.
* Whenever these profiles are referenced later,
* they will get loaded again automatically.
*/
dm_list_iterate_items_safe(profile, tmp_profile, &cmd->profile_params->profiles) {
config_destroy(profile->cft);
dm_list_iterate_items(profile, &cmd->profile_params->profiles)
config_destroy(profile->cft);
dm_list_init(&cmd->profile_params->profiles_to_load);
dm_list_init(&cmd->profile_params->profiles);
profile->cft = NULL;
dm_list_move(&cmd->profile_params->profiles_to_load, &profile->list);
}
}
/* CONFIG_STRING */
@@ -842,14 +861,13 @@ static struct dev_filter *_init_filter_components(struct cmd_context *cmd)
}
/* regex filter. Optional. */
if (!(cn = find_config_tree_node(cmd, devices_filter_CFG, NULL)))
log_very_verbose("devices/filter not found in config file: "
"no regex filter installed");
else if (!(filters[nr_filt] = regex_filter_create(cn->v))) {
log_error("Failed to create regex device filter");
goto bad;
} else
if ((cn = find_config_tree_node(cmd, devices_global_filter_CFG, NULL))) {
if (!(filters[nr_filt] = regex_filter_create(cn->v))) {
log_error("Failed to create global regex device filter");
goto bad;
}
nr_filt++;
}
/* device type filter. Required. */
if (!(filters[nr_filt] = lvm_type_filter_create(cmd->dev_types))) {
@@ -899,16 +917,26 @@ static int _init_filters(struct cmd_context *cmd, unsigned load_persistent_cache
cmd->dump_filter = 0;
if (!(f3 = _init_filter_components(cmd)))
if (!(cmd->lvmetad_filter = _init_filter_components(cmd)))
goto_bad;
init_ignore_suspended_devices(find_config_tree_bool(cmd, devices_ignore_suspended_devices_CFG, NULL));
init_ignore_lvm_mirrors(find_config_tree_bool(cmd, devices_ignore_lvm_mirrors_CFG, NULL));
if ((cn = find_config_tree_node(cmd, devices_filter_CFG, NULL))) {
if (!(f3 = regex_filter_create(cn->v)))
goto_bad;
toplevel_components[0] = cmd->lvmetad_filter;
toplevel_components[1] = f3;
if (!(f4 = composite_filter_create(2, toplevel_components)))
goto_bad;
} else
f4 = cmd->lvmetad_filter;
if (!(dev_cache = find_config_tree_str(cmd, devices_cache_CFG, NULL)))
goto_bad;
if (!(f4 = persistent_filter_create(cmd->dev_types, f3, dev_cache))) {
if (!(cmd->filter = persistent_filter_create(cmd->dev_types, f4, dev_cache))) {
log_verbose("Failed to create persistent device filter.");
goto bad;
}
@@ -929,29 +957,20 @@ static int _init_filters(struct cmd_context *cmd, unsigned load_persistent_cache
load_persistent_cache && !cmd->is_long_lived &&
!stat(dev_cache, &st) &&
(st.st_ctime > config_file_timestamp(cmd->cft)) &&
!persistent_filter_load(f4, NULL))
!persistent_filter_load(cmd->filter, NULL))
log_verbose("Failed to load existing device cache from %s",
dev_cache);
if (!(cn = find_config_tree_node(cmd, devices_global_filter_CFG, NULL))) {
cmd->filter = f4;
} else if (!(cmd->lvmetad_filter = regex_filter_create(cn->v)))
goto_bad;
else {
toplevel_components[0] = cmd->lvmetad_filter;
toplevel_components[1] = f4;
if (!(cmd->filter = composite_filter_create(2, toplevel_components)))
goto_bad;
}
return 1;
bad:
if (f4)
if (f4) /* kills both f3 and cmd->lvmetad_filter */
f4->destroy(f4);
else if (f3)
f3->destroy(f3);
if (toplevel_components[0])
toplevel_components[0]->destroy(toplevel_components[0]);
else {
if (f3)
f3->destroy(f3);
if (cmd->lvmetad_filter)
cmd->lvmetad_filter->destroy(cmd->lvmetad_filter);
}
return 0;
}
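
In summary, _init_filters() now builds the device filter stack bottom-up: the component filters become cmd->lvmetad_filter, an optional devices/filter regex is composited on top, and the persistent filter wraps the result as cmd->filter. A condensed sketch of that construction order (error cleanup and persistent-cache loading omitted), using only the constructors and arguments that appear in the hunk above:

/* Condensed sketch of the new filter construction order; not a complete excerpt. */
struct dev_filter *toplevel_components[2] = { NULL, NULL };
struct dev_filter *f3 = NULL, *f4;
const struct dm_config_node *cn;
const char *dev_cache;

/* 1. Component filters (sysfs, global regex, type, ...) are kept
 *    separately as cmd->lvmetad_filter. */
if (!(cmd->lvmetad_filter = _init_filter_components(cmd)))
	goto_bad;

/* 2. An optional devices/filter regex is composited on top of them. */
if ((cn = find_config_tree_node(cmd, devices_filter_CFG, NULL))) {
	if (!(f3 = regex_filter_create(cn->v)))
		goto_bad;
	toplevel_components[0] = cmd->lvmetad_filter;
	toplevel_components[1] = f3;
	if (!(f4 = composite_filter_create(2, toplevel_components)))
		goto_bad;
} else
	f4 = cmd->lvmetad_filter;

/* 3. The persistent (cache-backed) filter wraps the whole stack and
 *    becomes the filter the command actually uses. */
if (!(dev_cache = find_config_tree_str(cmd, devices_cache_CFG, NULL)))
	goto_bad;
if (!(cmd->filter = persistent_filter_create(cmd->dev_types, f4, dev_cache)))
	goto_bad;
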
@@ -1587,6 +1606,8 @@ int refresh_filters(struct cmd_context *cmd)
int refresh_toolcontext(struct cmd_context *cmd)
{
struct dm_config_tree *cft_cmdline, *cft_tmp;
const char *profile_command_name, *profile_metadata_name;
struct profile *profile;
log_verbose("Reloading config files");
@@ -1609,7 +1630,15 @@ int refresh_toolcontext(struct cmd_context *cmd)
_destroy_dev_types(cmd);
_destroy_tags(cmd);
/* save config string passed on the command line */
cft_cmdline = remove_config_tree_by_source(cmd, CONFIG_STRING);
/* save the global profile name used */
profile_command_name = cmd->profile_params->global_command_profile ?
cmd->profile_params->global_command_profile->name : NULL;
profile_metadata_name = cmd->profile_params->global_metadata_profile ?
cmd->profile_params->global_metadata_profile->name : NULL;
_destroy_config(cmd);
cmd->config_initialized = 0;
@@ -1626,6 +1655,18 @@ int refresh_toolcontext(struct cmd_context *cmd)
if (cft_cmdline)
cmd->cft = dm_config_insert_cascaded_tree(cft_cmdline, cft_tmp);
/* Reload the global profile. */
if (profile_command_name) {
if (!(profile = add_profile(cmd, profile_command_name, CONFIG_PROFILE_COMMAND)) ||
!override_config_tree_from_profile(cmd, profile))
return_0;
}
if (profile_metadata_name) {
if (!(profile = add_profile(cmd, profile_metadata_name, CONFIG_PROFILE_METADATA)) ||
!override_config_tree_from_profile(cmd, profile))
return_0;
}
/* Uses cmd->cft i.e. cft_cmdline + lvm.conf */
_init_logging(cmd);
@@ -1649,6 +1690,9 @@ int refresh_toolcontext(struct cmd_context *cmd)
if (!_process_config(cmd))
return_0;
if (!_init_profiles(cmd))
return_0;
if (!(cmd->dev_types = create_dev_types(cmd->proc_dir,
find_config_tree_node(cmd, devices_types_CFG, NULL))))
return_0;
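
Taken together, the refresh_toolcontext() hunks give the reload roughly this shape: remember which global profiles were in force, tear down the old config cascade, rebuild it from lvm.conf plus the command line, then re-attach the remembered profiles by name before re-initializing the profile machinery. A compressed sketch of only the profile-related steps, with the surrounding reload logic elided:

/* Compressed sketch of profile handling across a config reload. */
profile_command_name = cmd->profile_params->global_command_profile ?
	cmd->profile_params->global_command_profile->name : NULL;
profile_metadata_name = cmd->profile_params->global_metadata_profile ?
	cmd->profile_params->global_metadata_profile->name : NULL;

_destroy_config(cmd);			/* drops old trees and loaded profiles */
/* ... lvm.conf, tag configs and the command-line string are re-read ... */

if (profile_command_name &&
    (!(profile = add_profile(cmd, profile_command_name, CONFIG_PROFILE_COMMAND)) ||
     !override_config_tree_from_profile(cmd, profile)))
	return_0;
if (profile_metadata_name &&
    (!(profile = add_profile(cmd, profile_metadata_name, CONFIG_PROFILE_METADATA)) ||
     !override_config_tree_from_profile(cmd, profile)))
	return_0;

if (!_init_profiles(cmd))		/* re-reads config/profile_dir */
	return_0;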


@@ -19,7 +19,6 @@
#include "dev-cache.h"
#include "dev-type.h"
#include <stdio.h>
#include <limits.h>
/*
@@ -87,8 +86,10 @@ struct cmd_context {
unsigned handles_unknown_segments:1;
unsigned use_linear_target:1;
unsigned partial_activation:1;
unsigned degraded_activation:1;
unsigned auto_set_activation_skip:1;
unsigned si_unit_consistency:1;
unsigned report_binary_values_as_numeric:1;
unsigned metadata_read_only:1;
unsigned ignore_clustered_vgs:1;
unsigned threaded:1; /* Set if running within a thread e.g. clvmd */
@@ -106,7 +107,6 @@ struct cmd_context {
int config_initialized; /* used to reinitialize config if previous init was not successful */
struct dm_hash_table *cft_def_hash; /* config definition hash used for validity check (item type + item recognized) */
struct cft_check_handle *cft_check_handle;
/* selected settings with original default/configured value which can be changed during cmd processing */
struct config_info default_settings;
@@ -119,6 +119,7 @@ struct cmd_context {
/* List of defined tags */
struct dm_list tags;
const char *report_list_item_separator;
int hosttags;
const char *lib_dir; /* Cache value global/library_dir */


@@ -38,7 +38,9 @@ static const char *_config_source_names[] = {
[CONFIG_FILE] = "file",
[CONFIG_MERGED_FILES] = "merged files",
[CONFIG_STRING] = "string",
[CONFIG_PROFILE] = "profile"
[CONFIG_PROFILE_COMMAND] = "command profile",
[CONFIG_PROFILE_METADATA] = "metadata profile",
[CONFIG_FILE_SPECIAL] = "special purpose"
};
struct config_file {
@@ -56,6 +58,7 @@ struct config_source {
struct config_file *file;
struct config_file *profile;
} source;
struct cft_check_handle *check_handle;
};
/*
@@ -81,6 +84,19 @@ config_source_t config_get_source_type(struct dm_config_tree *cft)
return cs ? cs->type : CONFIG_UNDEFINED;
}
static inline int _is_profile_based_config_source(config_source_t source)
{
return (source == CONFIG_PROFILE_COMMAND) ||
(source == CONFIG_PROFILE_METADATA);
}
static inline int _is_file_based_config_source(config_source_t source)
{
return (source == CONFIG_FILE) ||
(source == CONFIG_FILE_SPECIAL) ||
_is_profile_based_config_source(source);
}
/*
* public interface
*/
@@ -100,7 +116,7 @@ struct dm_config_tree *config_open(config_source_t source,
goto fail;
}
if ((source == CONFIG_FILE) || (source == CONFIG_PROFILE)) {
if (_is_file_based_config_source(source)) {
if (!(cf = dm_pool_zalloc(cft->mem, sizeof(struct config_file)))) {
log_error("Failed to allocate config file.");
goto fail;
@@ -133,9 +149,10 @@ int config_file_check(struct dm_config_tree *cft, const char **filename, struct
struct config_file *cf;
struct stat _info;
if ((cs->type != CONFIG_FILE) && (cs->type != CONFIG_PROFILE)) {
log_error(INTERNAL_ERROR "config_file_check: expected file or profile config source, "
"found %s config source.", _config_source_names[cs->type]);
if (!_is_file_based_config_source(cs->type)) {
log_error(INTERNAL_ERROR "config_file_check: expected file, special file or "
"profile config source, found %s config source.",
_config_source_names[cs->type]);
return 0;
}
@@ -227,7 +244,7 @@ void config_destroy(struct dm_config_tree *cft)
cs = dm_config_get_custom(cft);
if ((cs->type == CONFIG_FILE) || (cs->type == CONFIG_PROFILE)) {
if (_is_file_based_config_source(cs->type)) {
cf = cs->source.file;
if (cf && cf->dev)
if (!dev_close(cf->dev))
@@ -238,7 +255,8 @@ void config_destroy(struct dm_config_tree *cft)
}
struct dm_config_tree *config_file_open_and_read(const char *config_file,
config_source_t source)
config_source_t source,
struct cmd_context *cmd)
{
struct dm_config_tree *cft;
struct stat info;
@@ -251,7 +269,7 @@ struct dm_config_tree *config_file_open_and_read(const char *config_file,
/* Is there a config file? */
if (stat(config_file, &info) == -1) {
/* Profile file must be present! */
if (errno == ENOENT && (source != CONFIG_PROFILE))
if (errno == ENOENT && (!_is_profile_based_config_source(source)))
return cft;
log_sys_error("stat", config_file);
goto bad;
@@ -269,6 +287,22 @@ bad:
return NULL;
}
struct dm_config_tree *get_config_tree_by_source(struct cmd_context *cmd,
config_source_t source)
{
struct dm_config_tree *cft = cmd->cft;
struct config_source *cs;
while (cft) {
cs = dm_config_get_custom(cft);
if (cs && cs->type == source)
return cft;
cft = cft->cascade;
}
return NULL;
}
/*
* Returns config tree if it was removed.
*/
@@ -297,6 +331,35 @@ struct dm_config_tree *remove_config_tree_by_source(struct cmd_context *cmd,
return cft;
}
struct cft_check_handle *get_config_tree_check_handle(struct cmd_context *cmd,
struct dm_config_tree *cft)
{
struct config_source *cs;
if (!(cs = dm_config_get_custom(cft)))
return NULL;
if (cs->check_handle)
goto out;
/*
* Attach config check handle to all config types but
* CONFIG_FILE_SPECIAL - this one uses its own check
* methods and the cft_check_handle is not applicable here.
*/
if (cs->type != CONFIG_FILE_SPECIAL) {
if (!(cs->check_handle = dm_pool_zalloc(cft->mem, sizeof(*cs->check_handle)))) {
log_error("Failed to allocate configuration check handle.");
return NULL;
}
cs->check_handle->cft = cft;
cs->check_handle->cmd = cmd;
}
out:
return cs->check_handle;
}
int override_config_tree_from_string(struct cmd_context *cmd,
const char *config_settings)
{
@@ -305,7 +368,7 @@ int override_config_tree_from_string(struct cmd_context *cmd,
/*
* Follow this sequence:
* CONFIG_STRING -> CONFIG_PROFILE -> CONFIG_FILE/CONFIG_MERGED_FILES
* CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA -> CONFIG_FILE/CONFIG_MERGED_FILES
*/
if (cs->type == CONFIG_STRING) {
@@ -333,37 +396,86 @@ int override_config_tree_from_string(struct cmd_context *cmd,
return 1;
}
static int _override_config_tree_from_command_profile(struct cmd_context *cmd,
struct profile *profile)
{
struct dm_config_tree *cft = cmd->cft, *cft_previous = NULL;
struct config_source *cs = dm_config_get_custom(cft);
if (cs->type == CONFIG_STRING) {
cft_previous = cft;
cft = cft->cascade;
cs = dm_config_get_custom(cft);
}
if (cs->type == CONFIG_PROFILE_COMMAND) {
log_error(INTERNAL_ERROR "_override_config_tree_from_command_profile: "
"config cascade already contains a command profile config.");
return 0;
}
if (cft_previous)
dm_config_insert_cascaded_tree(cft_previous, profile->cft);
else
cmd->cft = profile->cft;
dm_config_insert_cascaded_tree(profile->cft, cft);
return 1;
}
static int _override_config_tree_from_metadata_profile(struct cmd_context *cmd,
struct profile *profile)
{
struct dm_config_tree *cft = cmd->cft, *cft_previous = NULL;
struct config_source *cs = dm_config_get_custom(cft);
if (cs->type == CONFIG_STRING) {
cft_previous = cft;
cft = cft->cascade;
}
if (cs->type == CONFIG_PROFILE_COMMAND) {
cft_previous = cft;
cft = cft->cascade;
}
cs = dm_config_get_custom(cft);
if (cs->type == CONFIG_PROFILE_METADATA) {
log_error(INTERNAL_ERROR "_override_config_tree_from_metadata_profile: "
"config cascade already contains a metadata profile config.");
return 0;
}
if (cft_previous)
dm_config_insert_cascaded_tree(cft_previous, profile->cft);
else
cmd->cft = profile->cft;
dm_config_insert_cascaded_tree(profile->cft, cft);
return 1;
}
int override_config_tree_from_profile(struct cmd_context *cmd,
struct profile *profile)
{
struct dm_config_tree *cft = cmd->cft, *cft_string = NULL;
struct config_source *cs = dm_config_get_custom(cft);
/*
* Follow this sequence:
* CONFIG_STRING -> CONFIG_PROFILE -> CONFIG_FILE/CONFIG_MERGED_FILES
* CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA -> CONFIG_FILE/CONFIG_MERGED_FILES
*/
if (!profile->cft && !load_profile(cmd, profile))
return_0;
if (cs->type == CONFIG_STRING) {
cft_string = cft;
cft = cft->cascade;
cs = dm_config_get_custom(cft);
if (cs->type == CONFIG_PROFILE) {
log_error(INTERNAL_ERROR "override_config_tree_from_profile: "
"config cascade already contains a profile config.");
return 0;
}
dm_config_insert_cascaded_tree(cft_string, profile->cft);
}
if (profile->source == CONFIG_PROFILE_COMMAND)
return _override_config_tree_from_command_profile(cmd, profile);
else if (profile->source == CONFIG_PROFILE_METADATA)
return _override_config_tree_from_metadata_profile(cmd, profile);
cmd->cft = dm_config_insert_cascaded_tree(profile->cft, cft);
cmd->cft = cft_string ? : profile->cft;
return 1;
log_error(INTERNAL_ERROR "override_config_tree_from_profile: incorrect profile source type");
return 0;
}
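
The net effect of the two helpers is a fixed ordering of the config cascade: for a command invoked with --config, --commandprofile and --metadataprofile (the two profile options named in config.h later in this diff), the string config stays on top, the command profile sits beneath it, the metadata profile beneath that, and lvm.conf (or the merged files) at the bottom. Each helper walks past the trees that must stay above it and splices the profile in with dm_config_insert_cascaded_tree(); a short sketch of the splice itself, as used by both helpers:

/* cft_previous is the tree that must stay above the profile (if any),
 * cft is the remainder of the cascade that must stay below it. */
if (cft_previous)
	dm_config_insert_cascaded_tree(cft_previous, profile->cft);
else
	cmd->cft = profile->cft;	/* profile becomes the new cascade head */
dm_config_insert_cascaded_tree(profile->cft, cft);

/* Resulting lookup order (highest precedence first):
 *   CONFIG_STRING -> CONFIG_PROFILE_COMMAND -> CONFIG_PROFILE_METADATA
 *                 -> CONFIG_FILE / CONFIG_MERGED_FILES
 */
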
int config_file_read_fd(struct dm_config_tree *cft, struct device *dev,
@@ -377,9 +489,10 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev,
char *buf = NULL;
struct config_source *cs = dm_config_get_custom(cft);
if ((cs->type != CONFIG_FILE) && (cs->type != CONFIG_PROFILE)) {
log_error(INTERNAL_ERROR "config_file_read_fd: expected file or profile config source, "
"found %s config source.", _config_source_names[cs->type]);
if (!_is_file_based_config_source(cs->type)) {
log_error(INTERNAL_ERROR "config_file_read_fd: expected file, special file "
"or profile config source, found %s config source.",
_config_source_names[cs->type]);
return 0;
}
@@ -489,8 +602,10 @@ static int _cfg_def_make_path(char *buf, size_t buf_size, int id, cfg_def_item_t
int parent_id = item->parent;
int count, n;
if (id == parent_id)
if (id == parent_id) {
buf[0] = '\0';
return 0;
}
count = _cfg_def_make_path(buf, buf_size, parent_id, cfg_def_get_item_p(parent_id), xlate);
if ((n = dm_snprintf(buf + count, buf_size - count, "%s%s%s%s",
@@ -792,6 +907,48 @@ static int _config_def_check_node_value(struct cft_check_handle *handle,
return 1;
}
static int _config_def_check_node_is_profilable(struct cft_check_handle *handle,
const char *rp, struct dm_config_node *cn,
const cfg_def_item_t *def)
{
uint16_t flags;
if (!(def->flags & CFG_PROFILABLE)) {
log_warn_suppress(handle->suppress_messages,
"Configuration %s \"%s\" is not customizable by "
"a profile.", cn->v ? "option" : "section", rp);
return 0;
}
flags = def->flags & ~CFG_PROFILABLE;
/*
* Make sure there is no metadata profilable config in the command profile!
*/
if ((handle->source == CONFIG_PROFILE_COMMAND) && (flags & CFG_PROFILABLE_METADATA)) {
log_warn_suppress(handle->suppress_messages,
"Configuration %s \"%s\" is customizable by "
"metadata profile only, not command profile.",
cn->v ? "option" : "section", rp);
return 0;
}
/*
* Make sure there is no command profilable config in the metadata profile!
* (sections do not need to be flagged with CFG_PROFILABLE_METADATA, the
* CFG_PROFILABLE is enough as sections may contain both types inside)
*/
if ((handle->source == CONFIG_PROFILE_METADATA) && cn->v && !(flags & CFG_PROFILABLE_METADATA)) {
log_warn_suppress(handle->suppress_messages,
"Configuration %s \"%s\" is customizable by "
"command profile only, not metadata profile.",
cn->v ? "option" : "section", rp);
return 0;
}
return 1;
}
static int _config_def_check_node(struct cft_check_handle *handle,
const char *vp, char *pvp, char *rp, char *prp,
size_t buf_size, struct dm_config_node *cn)
@@ -838,13 +995,9 @@ static int _config_def_check_node(struct cft_check_handle *handle,
* in certain types of configuration trees as in some
* the use of configuration is restricted, e.g. profiles...
*/
if (handle->source == CONFIG_PROFILE &&
!(def->flags & CFG_PROFILABLE)) {
log_warn_suppress(handle->suppress_messages,
"Configuration %s \"%s\" is not customizable by "
"a profile.", cn->v ? "option" : "section", rp);
return 0;
}
if (_is_profile_based_config_source(handle->source) &&
!_config_def_check_node_is_profilable(handle, rp, cn, def))
return_0;
handle->status[def->id] |= CFG_VALID;
return 1;
@@ -970,21 +1123,37 @@ out:
return r;
}
static int _apply_local_profile(struct cmd_context *cmd, struct profile *profile)
{
if (!profile)
return 0;
/*
* Global metadata profile overrides the local one.
* This simply means the "--metadataprofile" arg
* overrides any profile attached to VG/LV.
*/
if ((profile->source == CONFIG_PROFILE_METADATA) &&
cmd->profile_params->global_metadata_profile)
return 0;
return override_config_tree_from_profile(cmd, profile);
}
const struct dm_config_node *find_config_tree_node(struct cmd_context *cmd, int id, struct profile *profile)
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
const struct dm_config_node *cn;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
cn = dm_config_tree_find_node(cmd->cft, path);
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return cn;
}
@@ -993,12 +1162,10 @@ const char *find_config_tree_str(struct cmd_context *cmd, int id, struct profile
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
const char *str;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_STRING)
@@ -1007,7 +1174,7 @@ const char *find_config_tree_str(struct cmd_context *cmd, int id, struct profile
str = dm_config_tree_find_str(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_STRING, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return str;
}
@@ -1016,12 +1183,10 @@ const char *find_config_tree_str_allow_empty(struct cmd_context *cmd, int id, st
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
const char *str;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_STRING)
@@ -1032,7 +1197,7 @@ const char *find_config_tree_str_allow_empty(struct cmd_context *cmd, int id, st
str = dm_config_tree_find_str_allow_empty(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_STRING, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return str;
}
@@ -1041,12 +1206,10 @@ int find_config_tree_int(struct cmd_context *cmd, int id, struct profile *profil
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
int i;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_INT)
@@ -1055,7 +1218,7 @@ int find_config_tree_int(struct cmd_context *cmd, int id, struct profile *profil
i = dm_config_tree_find_int(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_INT, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return i;
}
@@ -1064,12 +1227,10 @@ int64_t find_config_tree_int64(struct cmd_context *cmd, int id, struct profile *
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
int i64;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_INT)
@@ -1078,7 +1239,7 @@ int64_t find_config_tree_int64(struct cmd_context *cmd, int id, struct profile *
i64 = dm_config_tree_find_int64(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_INT, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return i64;
}
@@ -1087,12 +1248,10 @@ float find_config_tree_float(struct cmd_context *cmd, int id, struct profile *pr
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
float f;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_FLOAT)
@@ -1101,7 +1260,7 @@ float find_config_tree_float(struct cmd_context *cmd, int id, struct profile *pr
f = dm_config_tree_find_float(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_FLOAT, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return f;
}
@@ -1110,12 +1269,10 @@ int find_config_tree_bool(struct cmd_context *cmd, int id, struct profile *profi
{
cfg_def_item_t *item = cfg_def_get_item_p(id);
char path[CFG_PATH_MAX_LEN];
int profile_applied = 0;
int profile_applied;
int b;
if (profile && !cmd->profile_params->global_profile)
profile_applied = override_config_tree_from_profile(cmd, profile);
profile_applied = _apply_local_profile(cmd, profile);
_cfg_def_make_path(path, sizeof(path), item->id, item, 0);
if (item->type != CFG_TYPE_BOOL)
@@ -1124,7 +1281,7 @@ int find_config_tree_bool(struct cmd_context *cmd, int id, struct profile *profi
b = dm_config_tree_find_bool(cmd->cft, path, cfg_def_get_default_value(cmd, item, CFG_TYPE_BOOL, profile));
if (profile_applied)
remove_config_tree_by_source(cmd, CONFIG_PROFILE);
remove_config_tree_by_source(cmd, profile->source);
return b;
}
@@ -1466,6 +1623,7 @@ static struct dm_config_node *_add_def_node(struct dm_config_tree *cft,
static int _should_skip_def_node(struct config_def_tree_spec *spec, int section_id, int id)
{
cfg_def_item_t *def = cfg_def_get_item_p(id);
uint16_t flags;
if ((def->parent != section_id) ||
(spec->ignoreadvanced && def->flags & CFG_ADVANCED) ||
@@ -1489,9 +1647,19 @@ static int _should_skip_def_node(struct config_def_tree_spec *spec, int section_
return 1;
break;
case CFG_DEF_TREE_PROFILABLE:
case CFG_DEF_TREE_PROFILABLE_CMD:
case CFG_DEF_TREE_PROFILABLE_MDA:
if (!(def->flags & CFG_PROFILABLE) ||
(def->since_version > spec->version))
return 1;
flags = def->flags & ~CFG_PROFILABLE;
if (spec->type == CFG_DEF_TREE_PROFILABLE_CMD) {
if (flags & CFG_PROFILABLE_METADATA)
return 1;
} else if (spec->type == CFG_DEF_TREE_PROFILABLE_MDA) {
if (!(flags & CFG_PROFILABLE_METADATA))
return 1;
}
break;
default:
if (def->since_version > spec->version)
@@ -1569,7 +1737,7 @@ static int _check_profile(struct cmd_context *cmd, struct profile *profile)
handle->cmd = cmd;
handle->cft = profile->cft;
handle->source = CONFIG_PROFILE;
handle->source = profile->source;
/* the check is compulsory - allow only profilable items in a profile config! */
handle->force_check = 1;
/* provide warning messages only if config/checks=1 */
@@ -1581,11 +1749,44 @@ static int _check_profile(struct cmd_context *cmd, struct profile *profile)
return r;
}
struct profile *add_profile(struct cmd_context *cmd, const char *profile_name)
static int _get_profile_from_list(struct dm_list *list, const char *profile_name,
config_source_t source, struct profile **profile_found)
{
struct profile *profile;
dm_list_iterate_items(profile, list) {
if (!strcmp(profile->name, profile_name)) {
if (profile->source == source) {
*profile_found = profile;
return 1;
}
log_error(INTERNAL_ERROR "Profile %s already added as "
"%s type, but requested type is %s.",
profile_name,
_config_source_names[profile->source],
_config_source_names[source]);
return 0;
}
}
*profile_found = NULL;
return 1;
}
struct profile *add_profile(struct cmd_context *cmd, const char *profile_name, config_source_t source)
{
struct profile *profile;
/* Do some sanity checks first. */
if (!_is_profile_based_config_source(source)) {
log_error(INTERNAL_ERROR "add_profile: incorrect configuration "
"source, expected %s or %s but %s requested",
_config_source_names[CONFIG_PROFILE_COMMAND],
_config_source_names[CONFIG_PROFILE_METADATA],
_config_source_names[source]);
return NULL;
}
if (!profile_name || !*profile_name) {
log_error("Undefined profile name.");
return NULL;
@@ -1596,14 +1797,29 @@ struct profile *add_profile(struct cmd_context *cmd, const char *profile_name)
return NULL;
}
/* Check if the profile is added already... */
dm_list_iterate_items(profile, &cmd->profile_params->profiles_to_load) {
if (!strcmp(profile->name, profile_name))
return profile;
}
dm_list_iterate_items(profile, &cmd->profile_params->profiles) {
if (!strcmp(profile->name, profile_name))
return profile;
/*
* Check if the profile is on the list of profiles to be loaded or if
* not found there, if it's on the list of already loaded profiles.
*/
if (!_get_profile_from_list(&cmd->profile_params->profiles_to_load,
profile_name, source, &profile))
return_NULL;
if (profile)
profile->source = source;
else if (!_get_profile_from_list(&cmd->profile_params->profiles,
profile_name, source, &profile))
return_NULL;
if (profile) {
if (profile->source != source) {
log_error(INTERNAL_ERROR "add_profile: loaded profile "
"has incorrect type, expected %s but %s found",
_config_source_names[source],
_config_source_names[profile->source]);
return NULL;
}
return profile;
}
if (!(profile = dm_pool_zalloc(cmd->libmem, sizeof(*profile)))) {
@@ -1611,6 +1827,7 @@ struct profile *add_profile(struct cmd_context *cmd, const char *profile_name)
return NULL;
}
profile->source = source;
profile->name = dm_pool_strdup(cmd->libmem, profile_name);
dm_list_add(&cmd->profile_params->profiles_to_load, &profile->list);
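
With the extra source argument, callers now state up front whether a name refers to a command or a metadata profile, and add_profile() only queues it; loading happens later. A hedged usage sketch (the profile name "perf" and the surrounding int-returning caller are hypothetical; the calls and their signatures match the declarations in config.h):

struct profile *profile;

/* Queue the profile; it sits on profiles_to_load until first use.
 * (load_pending_profiles(cmd) would load everything queued in one go.) */
if (!(profile = add_profile(cmd, "perf", CONFIG_PROFILE_COMMAND)))
	return 0;

/* Push it into the config cascade; override_config_tree_from_profile()
 * calls load_profile() on demand if the tree is not loaded yet. */
if (!override_config_tree_from_profile(cmd, profile))
	return 0;

/* ... configuration lookups now see the profile's settings ... */

/* Drop it from the cascade again by its source type when finished. */
remove_config_tree_by_source(cmd, profile->source);
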
@@ -1635,11 +1852,9 @@ int load_profile(struct cmd_context *cmd, struct profile *profile) {
return 0;
}
if (!(profile->cft = config_file_open_and_read(profile_path, CONFIG_PROFILE)))
if (!(profile->cft = config_file_open_and_read(profile_path, profile->source, cmd)))
return 0;
dm_list_move(&cmd->profile_params->profiles, &profile->list);
/*
* *Profile must be valid* otherwise we'd end up with incorrect config!
* If there were config items present that are not supposed to be
@@ -1650,25 +1865,28 @@ int load_profile(struct cmd_context *cmd, struct profile *profile) {
* for profiles!
*/
if (!_check_profile(cmd, profile)) {
log_error("Ignoring invalid configuration profile %s.", profile->name);
/* if invalid, cut the whole tree and leave it empty */
dm_pool_free(profile->cft->mem, profile->cft->root);
profile->cft->root = NULL;
log_error("Ignoring invalid %s %s.",
_config_source_names[profile->source], profile->name);
config_destroy(profile->cft);
profile->cft = NULL;
return 0;
}
dm_list_move(&cmd->profile_params->profiles, &profile->list);
return 1;
}
int load_pending_profiles(struct cmd_context *cmd)
{
struct profile *profile, *temp_profile;
int r = 1;
dm_list_iterate_items_safe(profile, temp_profile, &cmd->profile_params->profiles_to_load) {
if (!load_profile(cmd, profile))
return 0;
r = 0;
}
return 1;
return r;
}
const char *get_default_devices_cache_dir_CFG(struct cmd_context *cmd, struct profile *profile)
@@ -1766,12 +1984,15 @@ int get_default_allocation_thin_pool_chunk_size_CFG(struct cmd_context *cmd, str
const char *str;
uint32_t chunk_size;
str = find_config_tree_str(cmd, allocation_thin_pool_chunk_size_policy_CFG, profile);
if (!(str = find_config_tree_str(cmd, allocation_thin_pool_chunk_size_policy_CFG, profile))) {
log_error(INTERNAL_ERROR "Cannot find configuration.");
return 0;
}
if (!strcasecmp(str, "generic"))
chunk_size = DEFAULT_THIN_POOL_CHUNK_SIZE;
chunk_size = DEFAULT_THIN_POOL_CHUNK_SIZE * 2;
else if (!strcasecmp(str, "performance"))
chunk_size = DEFAULT_THIN_POOL_CHUNK_SIZE_PERFORMANCE;
chunk_size = DEFAULT_THIN_POOL_CHUNK_SIZE_PERFORMANCE * 2;
else {
log_error("Thin pool chunk size calculation policy \"%s\" is unrecognised.", str);
return 0;
@@ -1782,5 +2003,5 @@ int get_default_allocation_thin_pool_chunk_size_CFG(struct cmd_context *cmd, str
int get_default_allocation_cache_pool_chunk_size_CFG(struct cmd_context *cmd, struct profile *profile)
{
return DEFAULT_CACHE_POOL_CHUNK_SIZE;
return DEFAULT_CACHE_POOL_CHUNK_SIZE * 2;
}
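
The factor of two applied to both chunk-size defaults is consistent with the values in defaults.h being expressed in KiB (DEFAULT_CACHE_POOL_CHUNK_SIZE is defined as 64 "KB" later in this diff) while these helpers return sizes in 512-byte sectors; a worked example of the conversion, assuming that interpretation:

/* Assuming the "* 2" converts a KiB default to 512-byte sectors: */
#define DEFAULT_CACHE_POOL_CHUNK_SIZE 64		/* KiB, per defaults.h */
/* 64 KiB = 64 * 1024 / 512 sectors = 64 * 2 = 128 sectors */
static const unsigned chunk_size_sectors = DEFAULT_CACHE_POOL_CHUNK_SIZE * 2;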


@@ -16,8 +16,7 @@
#ifndef _LVM_CONFIG_H
#define _LVM_CONFIG_H
#include "lvm-types.h"
#include "defaults.h"
#include "libdevmapper.h"
/* 16 bits: 3 bits for major, 4 bits for minor, 9 bits for patchlevel */
/* FIXME Max LVM version supported: 7.15.511. Extend bits when needed. */
@@ -31,20 +30,24 @@ typedef enum {
CONFIG_FILE, /* one file config */
CONFIG_MERGED_FILES, /* config that is a result of merging more config files */
CONFIG_STRING, /* config string typed on cmdline using '--config' arg */
CONFIG_PROFILE /* profile config */
CONFIG_PROFILE_COMMAND, /* command profile config */
CONFIG_PROFILE_METADATA,/* metadata profile config */
CONFIG_FILE_SPECIAL /* special purpose file config (e.g. metadata, persistent filter...) */
} config_source_t;
struct profile {
struct dm_list list;
config_source_t source; /* either CONFIG_PROFILE_COMMAND or CONFIG_PROFILE_METADATA */
const char *name;
struct dm_config_tree *cft;
};
struct profile_params {
const char *dir; /* subdir in LVM_SYSTEM_DIR where LVM looks for profiles */
struct profile *global_profile; /* profile that overrides any other VG/LV-based profile ('--profile' cmd line arg) */
struct dm_list profiles_to_load;/* list of profiles which are only added, but still need to be loaded for any use */
struct dm_list profiles; /* list of profiles which are loaded already and which are ready for use */
char dir[PATH_MAX]; /* subdir in LVM_SYSTEM_DIR where LVM looks for profiles */
struct profile *global_command_profile; /* profile (as given by --commandprofile cmd arg) used as global command profile */
struct profile *global_metadata_profile; /* profile (as given by --metadataprofile cmd arg) that overrides any other VG/LV-based profile */
struct dm_list profiles_to_load; /* list of profiles which are only added, but still need to be loaded for any use */
struct dm_list profiles; /* list of profiles which are loaded already and which are ready for use */
};
#define CFG_PATH_MAX_LEN 64
@@ -97,10 +100,14 @@ typedef union {
#define CFG_UNSUPPORTED 0x08
/* whether the configuration item is customizable by a profile */
#define CFG_PROFILABLE 0x10
/* whether the configuration item is customizable by a profile */
/* and whether it can be attached to VG/LV metadata at the same time
* The CFG_PROFILABLE_METADATA flag incorporates CFG_PROFILABLE flag!!! */
#define CFG_PROFILABLE_METADATA 0x30
/* whether the default value is undefined */
#define CFG_DEFAULT_UNDEFINED 0x20
#define CFG_DEFAULT_UNDEFINED 0x40
/* whether the default value is calculated during run time */
#define CFG_DEFAULT_RUN_TIME 0x40
#define CFG_DEFAULT_RUN_TIME 0x80
/* configuration definition item structure */
typedef struct cfg_def_item {
@@ -122,6 +129,8 @@ typedef enum {
CFG_DEF_TREE_DEFAULT, /* tree of all possible config nodes with default values */
CFG_DEF_TREE_NEW, /* tree of all new nodes that appeared in given version */
CFG_DEF_TREE_PROFILABLE, /* tree of all nodes that are customizable by profiles */
CFG_DEF_TREE_PROFILABLE_CMD, /* tree of all nodes that are customizable by command profiles (subset of PROFILABLE) */
CFG_DEF_TREE_PROFILABLE_MDA, /* tree of all nodes that are customizable by metadata profiles (subset of PROFILABLE) */
CFG_DEF_TREE_DIFF, /* tree of all nodes that differ from defaults */
} cfg_def_tree_t;
@@ -130,10 +139,10 @@ struct config_def_tree_spec {
struct cmd_context *cmd; /* command context (for run-time defaults) */
cfg_def_tree_t type; /* tree type */
uint16_t version; /* tree at this LVM2 version */
int ignoreadvanced:1; /* do not include advanced configs */
int ignoreunsupported:1; /* do not include unsupported configs */
int withcomments:1; /* include comments */
int withversions:1; /* include versions */
unsigned ignoreadvanced:1; /* do not include advanced configs */
unsigned ignoreunsupported:1; /* do not include unsupported configs */
unsigned withcomments:1; /* include comments */
unsigned withversions:1; /* include versions */
uint8_t *check_status; /* status of last tree check (currently needed for CFG_DEF_TREE_MISSING only) */
};
@@ -162,7 +171,7 @@ enum {
#undef cfg_array_runtime
};
struct profile *add_profile(struct cmd_context *cmd, const char *profile_name);
struct profile *add_profile(struct cmd_context *cmd, const char *profile_name, config_source_t source);
int load_profile(struct cmd_context *cmd, struct profile *profile);
int load_pending_profiles(struct cmd_context *cmd);
@@ -183,7 +192,9 @@ int config_def_check(struct cft_check_handle *handle);
int override_config_tree_from_string(struct cmd_context *cmd, const char *config_settings);
int override_config_tree_from_profile(struct cmd_context *cmd, struct profile *profile);
struct dm_config_tree *get_config_tree_by_source(struct cmd_context *, config_source_t source);
struct dm_config_tree *remove_config_tree_by_source(struct cmd_context *cmd, config_source_t source);
struct cft_check_handle *get_config_tree_check_handle(struct cmd_context *cmd, struct dm_config_tree *cft);
config_source_t config_get_source_type(struct dm_config_tree *cft);
typedef uint32_t (*checksum_fn_t) (uint32_t initial, const uint8_t *buf, uint32_t size);
@@ -193,7 +204,8 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev,
off_t offset, size_t size, off_t offset2, size_t size2,
checksum_fn_t checksum_fn, uint32_t checksum);
int config_file_read(struct dm_config_tree *cft);
struct dm_config_tree *config_file_open_and_read(const char *config_file, config_source_t source);
struct dm_config_tree *config_file_open_and_read(const char *config_file, config_source_t source,
struct cmd_context *cmd);
int config_write(struct dm_config_tree *cft, struct config_def_tree_spec *tree_spec,
const char *file, int argc, char **argv);
struct dm_config_tree *config_def_create_tree(struct config_def_tree_spec *spec);
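
The flag values above encode the documented relationship directly: CFG_PROFILABLE_METADATA (0x30) includes the CFG_PROFILABLE bit (0x10), so every metadata-profilable item is also profilable in general, and the checking code can strip the generic bit before testing for the metadata-only case. A small standalone sketch of the two tests, mirroring the logic used in _config_def_check_node_is_profilable() and _should_skip_def_node():

#include <stdint.h>
#include <stdio.h>

#define CFG_PROFILABLE		0x10
#define CFG_PROFILABLE_METADATA	0x30	/* incorporates CFG_PROFILABLE */

static int is_profilable(uint16_t flags)
{
	return (flags & CFG_PROFILABLE) != 0;
}

static int is_metadata_profilable(uint16_t flags)
{
	/* strip the generic bit first, as the checking code above does */
	return ((flags & ~CFG_PROFILABLE) & CFG_PROFILABLE_METADATA) != 0;
}

int main(void)
{
	printf("0x10: profilable=%d metadata=%d\n",
	       is_profilable(0x10), is_metadata_profilable(0x10));	/* 1 0 */
	printf("0x30: profilable=%d metadata=%d\n",
	       is_profilable(0x30), is_metadata_profilable(0x30));	/* 1 1 */
	return 0;
}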


@@ -41,6 +41,7 @@
* CFG_ADVANCED - this node belongs to advanced config set
* CFG_UNSUPPORTED - this node belongs to unsupported config set
* CFG_PROFILABLE - this node is customizable by a profile
* CFG_PROFILABLE_METADATA - profilable and attachable to VG/LV metadata
* CFG_DEFAULT_UNDEFINED - node's default value is undefined
* type: allowed type for the value of simple configuration setting, one of:
* CFG_TYPE_BOOL
@@ -65,6 +66,7 @@
* that parent nodes are consistent with versioning, no check done
* if parent node is older or the same age as any child node!)
*/
#include "defaults.h"
cfg_section(root_CFG_SECTION, "(root)", root_CFG_SECTION, 0, vsn(0, 0, 0), NULL)
@@ -120,11 +122,11 @@ cfg(allocation_mirror_logs_require_separate_pvs_CFG, "mirror_logs_require_separa
cfg(allocation_cache_pool_metadata_require_separate_pvs_CFG, "cache_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 106), NULL)
cfg_runtime(allocation_cache_pool_chunk_size_CFG, "cache_pool_chunk_size", allocation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 106), NULL)
cfg(allocation_thin_pool_metadata_require_separate_pvs_CFG, "thin_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 89), NULL)
cfg(allocation_thin_pool_zero_CFG, "thin_pool_zero", allocation_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_ZERO, vsn(2, 2, 99), NULL)
cfg(allocation_thin_pool_discards_CFG, "thin_pool_discards", allocation_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_THIN_POOL_DISCARDS, vsn(2, 2, 99), NULL)
cfg(allocation_thin_pool_chunk_size_policy_CFG, "thin_pool_chunk_size_policy", allocation_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_THIN_POOL_CHUNK_SIZE_POLICY, vsn(2, 2, 101), NULL)
cfg_runtime(allocation_thin_pool_chunk_size_CFG, "thin_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 99), NULL)
cfg(allocation_thin_pool_zero_CFG, "thin_pool_zero", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_ZERO, vsn(2, 2, 99), NULL)
cfg(allocation_thin_pool_discards_CFG, "thin_pool_discards", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_STRING, DEFAULT_THIN_POOL_DISCARDS, vsn(2, 2, 99), NULL)
cfg(allocation_thin_pool_chunk_size_policy_CFG, "thin_pool_chunk_size_policy", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_STRING, DEFAULT_THIN_POOL_CHUNK_SIZE_POLICY, vsn(2, 2, 101), NULL)
cfg_runtime(allocation_thin_pool_chunk_size_CFG, "thin_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 99), NULL)
cfg(allocation_physical_extent_size_CFG, "physical_extent_size", allocation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_EXTENT_SIZE, vsn(2, 2, 112), NULL)
cfg(log_verbose_CFG, "verbose", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_VERBOSE, vsn(1, 0, 0), NULL)
cfg(log_silent_CFG, "silent", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SILENT, vsn(2, 2, 98), NULL)
@@ -180,6 +182,11 @@ cfg_array(global_thin_disabled_features_CFG, "thin_disabled_features", global_CF
cfg(global_thin_dump_executable_CFG, "thin_dump_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, THIN_DUMP_CMD, vsn(2, 2, 100), NULL)
cfg(global_thin_repair_executable_CFG, "thin_repair_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, THIN_REPAIR_CMD, vsn(2, 2, 100), NULL)
cfg_array(global_thin_repair_options_CFG, "thin_repair_options", global_CFG_SECTION, 0, CFG_TYPE_STRING, "#S" DEFAULT_THIN_REPAIR_OPTIONS, vsn(2, 2, 100), NULL)
cfg(global_cache_check_executable_CFG, "cache_check_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, CACHE_CHECK_CMD, vsn(2, 2, 108), NULL)
cfg_array(global_cache_check_options_CFG, "cache_check_options", global_CFG_SECTION, 0, CFG_TYPE_STRING, "#S" DEFAULT_CACHE_CHECK_OPTIONS, vsn(2, 2, 108), NULL)
cfg(global_cache_dump_executable_CFG, "cache_dump_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, CACHE_DUMP_CMD, vsn(2, 2, 108), NULL)
cfg(global_cache_repair_executable_CFG, "cache_repair_executable", global_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, CACHE_REPAIR_CMD, vsn(2, 2, 108), NULL)
cfg_array(global_cache_repair_options_CFG, "cache_repair_options", global_CFG_SECTION, 0, CFG_TYPE_STRING, "#S" DEFAULT_CACHE_REPAIR_OPTIONS, vsn(2, 2, 108), NULL)
cfg(activation_checks_CFG, "checks", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ACTIVATION_CHECKS, vsn(2, 2, 86), NULL)
cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_UDEV_SYNC, vsn(2, 2, 51), NULL)
@@ -203,13 +210,14 @@ cfg(activation_mirror_log_fault_policy_CFG, "mirror_log_fault_policy", activatio
cfg_runtime(activation_mirror_image_fault_policy_CFG, "mirror_image_fault_policy", activation_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(2, 2, 57), NULL)
cfg(activation_snapshot_autoextend_threshold_CFG, "snapshot_autoextend_threshold", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_THRESHOLD, vsn(2, 2, 75), NULL)
cfg(activation_snapshot_autoextend_percent_CFG, "snapshot_autoextend_percent", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_PERCENT, vsn(2, 2, 75), NULL)
cfg(activation_thin_pool_autoextend_threshold_CFG, "thin_pool_autoextend_threshold", activation_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_THRESHOLD, vsn(2, 2, 89), NULL)
cfg(activation_thin_pool_autoextend_percent_CFG, "thin_pool_autoextend_percent", activation_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_PERCENT, vsn(2, 2, 89), NULL)
cfg(activation_thin_pool_autoextend_threshold_CFG, "thin_pool_autoextend_threshold", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_THRESHOLD, vsn(2, 2, 89), NULL)
cfg(activation_thin_pool_autoextend_percent_CFG, "thin_pool_autoextend_percent", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_PERCENT, vsn(2, 2, 89), NULL)
cfg_array(activation_mlock_filter_CFG, "mlock_filter", activation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 2, 62), NULL)
cfg(activation_use_mlockall_CFG, "use_mlockall", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_MLOCKALL, vsn(2, 2, 62), NULL)
cfg(activation_monitoring_CFG, "monitoring", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_DMEVENTD_MONITOR, vsn(2, 2, 63), NULL)
cfg(activation_polling_interval_CFG, "polling_interval", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_INTERVAL, vsn(2, 2, 63), NULL)
cfg(activation_auto_set_activation_skip_CFG, "auto_set_activation_skip", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_AUTO_SET_ACTIVATION_SKIP, vsn(2,2,99), NULL)
cfg(activation_mode_CFG, "activation_mode", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_ACTIVATION_MODE, vsn(2,2,108), NULL)
cfg(metadata_pvmetadatacopies_CFG, "pvmetadatacopies", metadata_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_INT, DEFAULT_PVMETADATACOPIES, vsn(1, 0, 0), NULL)
cfg(metadata_vgmetadatacopies_CFG, "vgmetadatacopies", metadata_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_INT, DEFAULT_VGMETADATACOPIES, vsn(2, 2, 69), NULL)
@@ -228,9 +236,11 @@ cfg(report_aligned_CFG, "aligned", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_
cfg(report_buffered_CFG, "buffered", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_REP_BUFFERED, vsn(1, 0, 0), NULL)
cfg(report_headings_CFG, "headings", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_REP_HEADINGS, vsn(1, 0, 0), NULL)
cfg(report_separator_CFG, "separator", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_REP_SEPARATOR, vsn(1, 0, 0), NULL)
cfg(report_list_item_separator_CFG, "list_item_separator", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_REP_LIST_ITEM_SEPARATOR, vsn(2, 2, 108), NULL)
cfg(report_prefixes_CFG, "prefixes", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_REP_PREFIXES, vsn(2, 2, 36), NULL)
cfg(report_quoted_CFG, "quoted", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_REP_QUOTED, vsn(2, 2, 39), NULL)
cfg(report_colums_as_rows_CFG, "colums_as_rows", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_REP_COLUMNS_AS_ROWS, vsn(1, 0, 0), NULL)
cfg(report_binary_values_as_numeric_CFG, "binary_values_as_numeric", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, 0, vsn(2, 2, 108), NULL)
cfg(report_devtypes_sort_CFG, "devtypes_sort", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_DEVTYPES_SORT, vsn(2, 2, 101), NULL)
cfg(report_devtypes_cols_CFG, "devtypes_cols", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_DEVTYPES_COLS, vsn(2, 2, 101), NULL)
cfg(report_devtypes_cols_verbose_CFG, "devtypes_cols_verbose", report_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_DEVTYPES_COLS_VERB, vsn(2, 2, 101), NULL)


@@ -35,7 +35,7 @@
#define DEFAULT_MD_CHUNK_ALIGNMENT 1
#define DEFAULT_IGNORE_LVM_MIRRORS 1
#define DEFAULT_MULTIPATH_COMPONENT_DETECTION 1
#define DEFAULT_IGNORE_SUSPENDED_DEVICES 1
#define DEFAULT_IGNORE_SUSPENDED_DEVICES 0
#define DEFAULT_DISABLE_AFTER_ERROR_COUNT 0
#define DEFAULT_REQUIRE_RESTOREFILE_WITH_UUID 1
#define DEFAULT_DATA_ALIGNMENT_OFFSET_DETECTION 1
@@ -90,6 +90,8 @@
#define DEFAULT_THIN_POOL_ZERO 1
#define DEFAULT_POOL_METADATA_SPARE 1 /* thin + cache */
#define DEFAULT_CACHE_CHECK_OPTIONS "-q"
#define DEFAULT_CACHE_REPAIR_OPTIONS ""
#define DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS 0
#define DEFAULT_CACHE_POOL_CHUNK_SIZE 64 /* KB */
#define DEFAULT_CACHE_POOL_MIN_METADATA_SIZE 2048 /* KB */
@@ -163,6 +165,7 @@
#define DEFAULT_PROCESS_PRIORITY -18
#define DEFAULT_AUTO_SET_ACTIVATION_SKIP 1
#define DEFAULT_ACTIVATION_MODE "degraded"
#define DEFAULT_USE_LINEAR_TARGET 1
#define DEFAULT_STRIPE_FILLER "error"
#define DEFAULT_RAID_REGION_SIZE 512 /* KB */
@@ -179,8 +182,9 @@
#define DEFAULT_REP_PREFIXES 0
#define DEFAULT_REP_QUOTED 1
#define DEFAULT_REP_SEPARATOR " "
#define DEFAULT_REP_LIST_ITEM_SEPARATOR ","
#define DEFAULT_LVS_COLS "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,move_pv,mirror_log,copy_percent,convert_lv"
#define DEFAULT_LVS_COLS "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
#define DEFAULT_VGS_COLS "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
#define DEFAULT_PVS_COLS "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
#define DEFAULT_SEGS_COLS "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"


@@ -30,10 +30,37 @@ struct dm_list *str_list_create(struct dm_pool *mem)
return sl;
}
static int _str_list_add_no_dup_check(struct dm_pool *mem, struct dm_list *sll, const char *str, int as_first)
{
struct dm_str_list *sln;
if (!str)
return_0;
if (!(sln = dm_pool_alloc(mem, sizeof(*sln))))
return_0;
sln->str = str;
if (as_first)
dm_list_add_h(sll, &sln->list);
else
dm_list_add(sll, &sln->list);
return 1;
}
int str_list_add_no_dup_check(struct dm_pool *mem, struct dm_list *sll, const char *str)
{
return _str_list_add_no_dup_check(mem, sll, str, 0);
}
int str_list_add_h_no_dup_check(struct dm_pool *mem, struct dm_list *sll, const char *str)
{
return _str_list_add_no_dup_check(mem, sll, str, 1);
}
int str_list_add(struct dm_pool *mem, struct dm_list *sll, const char *str)
{
struct str_list *sln;
if (!str)
return_0;
@@ -41,13 +68,7 @@ int str_list_add(struct dm_pool *mem, struct dm_list *sll, const char *str)
if (str_list_match_item(sll, str))
return 1;
if (!(sln = dm_pool_alloc(mem, sizeof(*sln))))
return_0;
sln->str = str;
dm_list_add(sll, &sln->list);
return 1;
return str_list_add_no_dup_check(mem, sll, str);
}
void str_list_del(struct dm_list *sll, const char *str)
@@ -55,14 +76,14 @@ void str_list_del(struct dm_list *sll, const char *str)
struct dm_list *slh, *slht;
dm_list_iterate_safe(slh, slht, sll)
if (!strcmp(str, dm_list_item(slh, struct str_list)->str))
if (!strcmp(str, dm_list_item(slh, struct dm_str_list)->str))
dm_list_del(slh);
}
int str_list_dup(struct dm_pool *mem, struct dm_list *sllnew,
const struct dm_list *sllold)
{
struct str_list *sl;
struct dm_str_list *sl;
dm_list_init(sllnew);
@@ -79,7 +100,7 @@ int str_list_dup(struct dm_pool *mem, struct dm_list *sllnew,
*/
int str_list_match_item(const struct dm_list *sll, const char *str)
{
struct str_list *sl;
struct dm_str_list *sl;
dm_list_iterate_items(sl, sll)
if (!strcmp(str, sl->str))
@@ -94,7 +115,7 @@ int str_list_match_item(const struct dm_list *sll, const char *str)
*/
int str_list_match_list(const struct dm_list *sll, const struct dm_list *sll2, const char **tag_matched)
{
struct str_list *sl;
struct dm_str_list *sl;
dm_list_iterate_items(sl, sll)
if (str_list_match_item(sll2, sl->str)) {
@@ -111,7 +132,7 @@ int str_list_match_list(const struct dm_list *sll, const struct dm_list *sll2, c
*/
int str_list_lists_equal(const struct dm_list *sll, const struct dm_list *sll2)
{
struct str_list *sl;
struct dm_str_list *sl;
if (dm_list_size(sll) != dm_list_size(sll2))
return 0;


@@ -21,6 +21,8 @@ struct dm_pool;
struct dm_list *str_list_create(struct dm_pool *mem);
int str_list_add(struct dm_pool *mem, struct dm_list *sll, const char *str);
int str_list_add_no_dup_check(struct dm_pool *mem, struct dm_list *sll, const char *str);
int str_list_add_h_no_dup_check(struct dm_pool *mem, struct dm_list *sll, const char *str);
void str_list_del(struct dm_list *sll, const char *str);
int str_list_match_item(const struct dm_list *sll, const char *str);
int str_list_match_list(const struct dm_list *sll, const struct dm_list *sll2, const char **tag_matched);
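
The new helpers make the duplicate check optional: str_list_add() still scans the list with str_list_match_item() first, while the _no_dup_check variants append (or, for the _h variant, prepend) unconditionally, which suits callers that already know the string is unique or deliberately allow duplicates. A hedged usage sketch (mem is assumed to be an existing struct dm_pool *, and the tag strings are illustrative):

struct dm_list *tags;

if (!(tags = str_list_create(mem)))
	return_0;

if (!str_list_add(mem, tags, "tag1"))			/* scans for duplicates */
	return_0;
if (!str_list_add_no_dup_check(mem, tags, "tag2"))	/* append, no scan */
	return_0;
if (!str_list_add_h_no_dup_check(mem, tags, "tag0"))	/* prepend, no scan */
	return_0;

if (str_list_match_item(tags, "tag1"))
	log_verbose("tag1 is present");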


@@ -14,7 +14,6 @@
*/
#include "lib.h"
#include "lvm-types.h"
#include "btree.h"
#include "config.h"
#include "toolcontext.h"
@@ -70,7 +69,7 @@ static void _dev_init(struct device *dev, int max_error_count)
}
struct device *dev_create_file(const char *filename, struct device *dev,
struct str_list *alias, int use_malloc)
struct dm_str_list *alias, int use_malloc)
{
int allocate = !dev;
@@ -81,7 +80,7 @@ struct device *dev_create_file(const char *filename, struct device *dev,
return NULL;
}
if (!(alias = dm_zalloc(sizeof(*alias)))) {
log_error("struct str_list allocation failed");
log_error("struct dm_str_list allocation failed");
dm_free(dev);
return NULL;
}
@@ -97,7 +96,7 @@ struct device *dev_create_file(const char *filename, struct device *dev,
return NULL;
}
if (!(alias = _zalloc(sizeof(*alias)))) {
log_error("struct str_list allocation failed");
log_error("struct dm_str_list allocation failed");
_free(dev);
return NULL;
}
@@ -133,7 +132,7 @@ static struct device *_dev_create(dev_t d)
return dev;
}
void dev_set_preferred_name(struct str_list *sl, struct device *dev)
void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev)
{
/*
* Don't interfere with ordering specified in config file.
@@ -308,8 +307,8 @@ static int _compare_paths(const char *path0, const char *path1)
static int _add_alias(struct device *dev, const char *path)
{
struct str_list *sl = _zalloc(sizeof(*sl));
struct str_list *strl;
struct dm_str_list *sl = _zalloc(sizeof(*sl));
struct dm_str_list *strl;
const char *oldpath;
int prefer_old = 1;
@@ -327,7 +326,7 @@ static int _add_alias(struct device *dev, const char *path)
sl->str = path;
if (!dm_list_empty(&dev->aliases)) {
oldpath = dm_list_item(dev->aliases.n, struct str_list)->str;
oldpath = dm_list_item(dev->aliases.n, struct dm_str_list)->str;
prefer_old = _compare_paths(path, oldpath);
log_debug_devs("%s: Aliased to %s in device cache%s",
path, oldpath, prefer_old ? "" : " (preferred name)");
@@ -889,7 +888,7 @@ const char *dev_name_confirmed(struct device *dev, int quiet)
return dev_name(dev);
while ((r = stat(name = dm_list_item(dev->aliases.n,
struct str_list)->str, &buf)) ||
struct dm_str_list)->str, &buf)) ||
(buf.st_rdev != dev->dev)) {
if (r < 0) {
if (quiet)
@@ -1010,9 +1009,11 @@ struct dev_iter *dev_iter_create(struct dev_filter *f, int dev_scan)
if (dev_scan && !trust_cache()) {
/* Flag gets reset between each command */
if (!full_scan_done()) {
if (f && f->wipe)
f->wipe(f); /* Calls _full_scan(1) */
else
if (f && f->wipe) {
f->wipe(f); /* might call _full_scan(1) */
if (!full_scan_done())
_full_scan(1);
} else
_full_scan(1);
}
} else
@@ -1073,6 +1074,6 @@ int dev_fd(struct device *dev)
const char *dev_name(const struct device *dev)
{
return (dev && dev->aliases.n) ? dm_list_item(dev->aliases.n, struct str_list)->str :
return (dev && dev->aliases.n) ? dm_list_item(dev->aliases.n, struct dm_str_list)->str :
"unknown device";
}


@@ -54,7 +54,7 @@ struct device *dev_cache_get(const char *name, struct dev_filter *f);
// TODO
struct device *dev_cache_get_by_devt(dev_t device, struct dev_filter *f);
void dev_set_preferred_name(struct str_list *sl, struct device *dev);
void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev);
/*
* Object for iterating through the cache.


@@ -14,7 +14,6 @@
*/
#include "lib.h"
#include "lvm-types.h"
#include "device.h"
#include "metadata.h"
#include "lvmcache.h"
@@ -591,7 +590,7 @@ static void _close(struct device *dev)
log_debug_devs("Closed %s", dev_name(dev));
if (dev->flags & DEV_ALLOCED) {
dm_free((void *) dm_list_item(dev->aliases.n, struct str_list)->
dm_free((void *) dm_list_item(dev->aliases.n, struct dm_str_list)->
str);
dm_free(dev->aliases.n);
dm_free(dev);


@@ -60,7 +60,7 @@ typedef enum {
static uint64_t _v1_sb_offset(uint64_t size, md_minor_version_t minor_version)
{
uint64_t uninitialized_var(sb_offset);
uint64_t sb_offset;
switch(minor_version) {
case MD_MINOR_V0:
@@ -72,6 +72,10 @@ static uint64_t _v1_sb_offset(uint64_t size, md_minor_version_t minor_version)
case MD_MINOR_V2:
sb_offset = 4 * 2;
break;
default:
log_warn(INTERNAL_ERROR "WARNING: Unknown minor version %d.",
minor_version);
return 0;
}
sb_offset <<= SECTOR_SHIFT;


@@ -348,8 +348,8 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
const char *sysfs_dir = dm_sysfs_dir();
int major = (int) MAJOR(dev->dev);
int minor = (int) MINOR(dev->dev);
char path[PATH_MAX+1];
char temp_path[PATH_MAX+1];
char path[PATH_MAX];
char temp_path[PATH_MAX];
char buffer[64];
struct stat info;
FILE *fp = NULL;
@@ -378,7 +378,7 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
*/
/* check if dev is a partition */
if (dm_snprintf(path, PATH_MAX, "%s/dev/block/%d:%d/partition",
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d/partition",
sysfs_dir, major, minor) < 0) {
log_error("dm_snprintf partition failed");
goto out;
@@ -400,14 +400,14 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
* - basename ../../block/md0/md0 = md0
* Parent's 'dev' sysfs attribute = /sys/block/md0/dev
*/
if ((size = readlink(dirname(path), temp_path, PATH_MAX)) < 0) {
if ((size = readlink(dirname(path), temp_path, sizeof(temp_path) - 1)) < 0) {
log_sys_error("readlink", path);
goto out;
}
temp_path[size] = '\0';
if (dm_snprintf(path, PATH_MAX, "%s/block/%s/dev",
if (dm_snprintf(path, sizeof(path), "%s/block/%s/dev",
sysfs_dir, basename(dirname(temp_path))) < 0) {
log_error("dm_snprintf dev failed");
goto out;
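The hunk above bounds readlink() by sizeof(temp_path) - 1 and then terminates the result by hand, because readlink() never appends a NUL byte. A minimal self-contained sketch of the same pattern (read_link_name() is an illustrative name, not an lvm2 helper):

#include <unistd.h>

/* Read a symlink target into a fixed buffer, reserving one byte
 * for the terminator that readlink() does not write. */
static int read_link_name(const char *path, char *buf, size_t buf_size)
{
        ssize_t len;

        if ((len = readlink(path, buf, buf_size - 1)) < 0)
                return 0;               /* errno describes the failure */

        buf[len] = '\0';
        return 1;
}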
@@ -460,9 +460,9 @@ static int _blkid_wipe(blkid_probe probe, struct device *dev, const char *name,
uint32_t types_to_exclude, uint32_t types_no_prompt,
int yes, force_t force)
{
static const char _msg_failed_offset[] = "Failed to get offset of the %s signature on %s.";
static const char _msg_failed_length[] = "Failed to get length of the %s signature on %s.";
static const char _msg_wiping[] = "Wiping %s signature on %s.";
static const char const _msg_failed_offset[] = "Failed to get offset of the %s signature on %s.";
static const char const _msg_failed_length[] = "Failed to get length of the %s signature on %s.";
static const char const _msg_wiping[] = "Wiping %s signature on %s.";
const char *offset = NULL, *type = NULL, *magic = NULL,
*usage = NULL, *label = NULL, *uuid = NULL;
loff_t offset_value;
@@ -495,9 +495,10 @@ static int _blkid_wipe(blkid_probe probe, struct device *dev, const char *name,
offset_value = strtoll(offset, NULL, 10);
if (!usage)
blkid_probe_lookup_value(probe, "USAGE", &usage, NULL);
blkid_probe_lookup_value(probe, "LABEL", &label, NULL);
blkid_probe_lookup_value(probe, "UUID", &uuid, NULL);
(void) blkid_probe_lookup_value(probe, "USAGE", &usage, NULL);
(void) blkid_probe_lookup_value(probe, "LABEL", &label, NULL);
(void) blkid_probe_lookup_value(probe, "UUID", &uuid, NULL);
/* Return values ignored here, in the worst case we print NULL */
log_verbose("Found existing signature on %s at offset %s: LABEL=\"%s\" "
"UUID=\"%s\" TYPE=\"%s\" USAGE=\"%s\"",
@@ -506,8 +507,10 @@ static int _blkid_wipe(blkid_probe probe, struct device *dev, const char *name,
if (!_type_in_flag_list(type, types_no_prompt)) {
if (!yes && (force == PROMPT) &&
yes_no_prompt("WARNING: %s signature detected on %s at offset %s. "
"Wipe it? [y/n] ", type, name, offset) != 'y')
return_0;
"Wipe it? [y/n]: ", type, name, offset) == 'n') {
log_error("Aborted wiping of %s.", type);
return 0;
}
log_print_unless_silent(_msg_wiping, type, name);
} else
log_verbose(_msg_wiping, type, name);
@@ -590,9 +593,11 @@ static int _wipe_signature(struct device *dev, const char *type, const char *nam
/* Specifying --yes => do not ask. */
if (!yes && (force == PROMPT) &&
yes_no_prompt("WARNING: %s detected on %s. Wipe it? [y/n] ",
type, name) != 'y')
return_0;
yes_no_prompt("WARNING: %s detected on %s. Wipe it? [y/n]: ",
type, name) == 'n') {
log_error("Aborted wiping of %s.", type);
return 0;
}
log_print_unless_silent("Wiping %s on %s.", type, name);
if (!dev_set(dev, offset_found, wipe_len, 0)) {
@@ -650,23 +655,25 @@ static int _snprintf_attr(char *buf, size_t buf_size, const char *sysfs_dir,
static unsigned long _dev_topology_attribute(struct dev_types *dt,
const char *attribute,
struct device *dev)
struct device *dev,
unsigned long default_value)
{
const char *sysfs_dir = dm_sysfs_dir();
char path[PATH_MAX], buffer[64];
FILE *fp;
struct stat info;
dev_t uninitialized_var(primary);
unsigned long result = 0UL;
unsigned long result = default_value;
unsigned long value = 0UL;
if (!attribute || !*attribute)
return_0;
goto_out;
if (!sysfs_dir || !*sysfs_dir)
return_0;
goto_out;
if (!_snprintf_attr(path, sizeof(path), sysfs_dir, attribute, dev->dev))
return_0;
goto_out;
/*
* check if the desired sysfs attribute exists
@@ -675,74 +682,80 @@ static unsigned long _dev_topology_attribute(struct dev_types *dt,
*/
if (stat(path, &info) == -1) {
if (errno != ENOENT) {
log_sys_error("stat", path);
return 0;
log_sys_debug("stat", path);
goto out;
}
if (!dev_get_primary_dev(dt, dev, &primary))
return 0;
goto out;
/* get attribute from partition's primary device */
if (!_snprintf_attr(path, sizeof(path), sysfs_dir, attribute, primary))
return_0;
goto_out;
if (stat(path, &info) == -1) {
if (errno != ENOENT)
log_sys_error("stat", path);
return 0;
log_sys_debug("stat", path);
goto out;
}
}
if (!(fp = fopen(path, "r"))) {
log_sys_error("fopen", path);
return 0;
log_sys_debug("fopen", path);
goto out;
}
if (!fgets(buffer, sizeof(buffer), fp)) {
log_sys_error("fgets", path);
goto out;
log_sys_debug("fgets", path);
goto out_close;
}
if (sscanf(buffer, "%lu", &result) != 1) {
log_error("sysfs file %s not in expected format: %s", path,
buffer);
goto out;
if (sscanf(buffer, "%lu", &value) != 1) {
log_warn("sysfs file %s not in expected format: %s", path, buffer);
goto out_close;
}
log_very_verbose("Device %s %s is %lu bytes.",
dev_name(dev), attribute, result);
log_very_verbose("Device %s: %s is %lu%s.",
dev_name(dev), attribute, result, default_value ? "" : " bytes");
result = value >> SECTOR_SHIFT;
out_close:
if (fclose(fp))
log_sys_debug("fclose", path);
out:
if (fclose(fp))
log_sys_error("fclose", path);
return result >> SECTOR_SHIFT;
return result;
}
unsigned long dev_alignment_offset(struct dev_types *dt, struct device *dev)
{
return _dev_topology_attribute(dt, "alignment_offset", dev);
return _dev_topology_attribute(dt, "alignment_offset", dev, 0UL);
}
unsigned long dev_minimum_io_size(struct dev_types *dt, struct device *dev)
{
return _dev_topology_attribute(dt, "queue/minimum_io_size", dev);
return _dev_topology_attribute(dt, "queue/minimum_io_size", dev, 0UL);
}
unsigned long dev_optimal_io_size(struct dev_types *dt, struct device *dev)
{
return _dev_topology_attribute(dt, "queue/optimal_io_size", dev);
return _dev_topology_attribute(dt, "queue/optimal_io_size", dev, 0UL);
}
unsigned long dev_discard_max_bytes(struct dev_types *dt, struct device *dev)
{
return _dev_topology_attribute(dt, "queue/discard_max_bytes", dev);
return _dev_topology_attribute(dt, "queue/discard_max_bytes", dev, 0UL);
}
unsigned long dev_discard_granularity(struct dev_types *dt, struct device *dev)
{
return _dev_topology_attribute(dt, "queue/discard_granularity", dev);
return _dev_topology_attribute(dt, "queue/discard_granularity", dev, 0UL);
}
int dev_is_rotational(struct dev_types *dt, struct device *dev)
{
return (int) _dev_topology_attribute(dt, "queue/rotational", dev, 1UL);
}
#else
int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
@@ -775,4 +788,8 @@ unsigned long dev_discard_granularity(struct dev_types *dt, struct device *dev)
return 0UL;
}
int dev_is_rotational(struct dev_types *dt, struct device *dev)
{
return 1;
}
#endif
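dev_is_rotational() above reuses _dev_topology_attribute() with a default_value of 1, so a device whose sysfs tree lacks queue/rotational is simply treated as rotational rather than as an error, and the non-sysfs build at the bottom returns 1 unconditionally. A hedged standalone sketch of that fallback (the helper name and the fixed /sys prefix are illustrative; the code above resolves the sysfs mount via dm_sysfs_dir() and also falls back to the partition's primary device):

#include <stdio.h>

/* Report a block device as rotational unless sysfs clearly says otherwise. */
static int is_rotational(const char *kname)
{
        char path[256], buf[16];
        unsigned long value = 1;        /* the default when the attribute is missing */
        FILE *fp;
        int n;

        n = snprintf(path, sizeof(path), "/sys/block/%s/queue/rotational", kname);
        if (n < 0 || (size_t) n >= sizeof(path))
                return 1;

        if (!(fp = fopen(path, "r")))
                return 1;               /* attribute absent or unreadable: keep the default */

        if (!fgets(buf, sizeof(buf), fp) || sscanf(buf, "%lu", &value) != 1)
                value = 1;

        fclose(fp);

        return value ? 1 : 0;
}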


@@ -82,4 +82,6 @@ unsigned long dev_optimal_io_size(struct dev_types *dt, struct device *dev);
unsigned long dev_discard_max_bytes(struct dev_types *dt, struct device *dev);
unsigned long dev_discard_granularity(struct dev_types *dt, struct device *dev);
int dev_is_rotational(struct dev_types *dt, struct device *dev);
#endif


@@ -17,7 +17,6 @@
#define _LVM_DEVICE_H
#include "uuid.h"
#include "lvm-types.h"
#include <fcntl.h>
@@ -34,7 +33,7 @@
* pointer comparisons are valid.
*/
struct device {
struct dm_list aliases; /* struct str_list from lvm-types.h */
struct dm_list aliases; /* struct dm_str_list */
dev_t dev;
/* private */
@@ -96,7 +95,7 @@ int dev_set(struct device *dev, uint64_t offset, size_t len, int value);
void dev_flush(struct device *dev);
struct device *dev_create_file(const char *filename, struct device *dev,
struct str_list *alias, int use_malloc);
struct dm_str_list *alias, int use_malloc);
/* Return a valid device name from the alias list; NULL otherwise */
const char *dev_name_confirmed(struct device *dev, int quiet);


@@ -20,8 +20,9 @@
#include "toolcontext.h"
#include "segtype.h"
#include "defaults.h"
#include <math.h> /* fabs() */
#include <float.h> /* DBL_EPSILON */
#include "lvm-signal.h"
#include <stdarg.h>
#define SIZE_BUF 128
@@ -43,103 +44,6 @@ static const struct {
static const int _num_policies = DM_ARRAY_SIZE(_policies);
/* Test if the doubles are close enough to be considered equal */
static int _close_enough(double d1, double d2)
{
return fabs(d1 - d2) < DBL_EPSILON;
}
uint64_t units_to_bytes(const char *units, char *unit_type)
{
char *ptr = NULL;
uint64_t v;
double custom_value = 0;
uint64_t multiplier;
if (isdigit(*units)) {
custom_value = strtod(units, &ptr);
if (ptr == units)
return 0;
v = (uint64_t) strtoull(units, NULL, 10);
if (_close_enough((double) v, custom_value))
custom_value = 0; /* Use integer arithmetic */
units = ptr;
} else
v = 1;
/* Only one units char permitted. */
if (units[0] && units[1])
return 0;
if (v == 1)
*unit_type = *units;
else
*unit_type = 'U';
switch (*units) {
case 'h':
case 'H':
multiplier = v = UINT64_C(1);
*unit_type = *units;
break;
case 'b':
case 'B':
multiplier = UINT64_C(1);
break;
#define KILO UINT64_C(1024)
case 's':
case 'S':
multiplier = (KILO/2);
break;
case 'k':
multiplier = KILO;
break;
case 'm':
multiplier = KILO * KILO;
break;
case 'g':
multiplier = KILO * KILO * KILO;
break;
case 't':
multiplier = KILO * KILO * KILO * KILO;
break;
case 'p':
multiplier = KILO * KILO * KILO * KILO * KILO;
break;
case 'e':
multiplier = KILO * KILO * KILO * KILO * KILO * KILO;
break;
#undef KILO
#define KILO UINT64_C(1000)
case 'K':
multiplier = KILO;
break;
case 'M':
multiplier = KILO * KILO;
break;
case 'G':
multiplier = KILO * KILO * KILO;
break;
case 'T':
multiplier = KILO * KILO * KILO * KILO;
break;
case 'P':
multiplier = KILO * KILO * KILO * KILO * KILO;
break;
case 'E':
multiplier = KILO * KILO * KILO * KILO * KILO * KILO;
break;
#undef KILO
default:
return 0;
}
if (_close_enough(custom_value, 0.))
return v * multiplier; /* Use integer arithmetic */
else
return (uint64_t) (custom_value * multiplier);
}
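For reference, the parser removed here (the hunk header shows the whole body going away) treats lower-case suffixes as powers of 1024, upper-case K/M/G/T/P/E as powers of 1000, 's'/'S' as 512-byte sectors, and 'h'/'H' as a multiplier of 1, with an optional leading count scaling the unit. Illustrative calls, with results read straight off the switch above:

char t;
uint64_t b;

b = units_to_bytes("m", &t);    /* 1048576 (1 MiB), t = 'm'                    */
b = units_to_bytes("16m", &t);  /* 16777216 (16 MiB), t = 'U' (explicit count) */
b = units_to_bytes("xyz", &t);  /* 0: more than one unit character is rejected */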
char alloc_policy_char(alloc_policy_t alloc)
{
int i;
@@ -182,13 +86,19 @@ alloc_policy_t get_alloc_from_string(const char *str)
return ALLOC_INVALID;
}
static const char *_percent_types[7] = { "NONE", "VGS", "FREE", "LVS", "PVS", "ORIGIN" };
static const char *_percent_types[7] = { "NONE", "VG", "FREE", "LV", "PVS", "ORIGIN" };
const char *get_percent_string(percent_type_t def)
{
return _percent_types[def];
}
const char *display_lvname(const struct logical_volume *lv)
{
/* On allocation failure, just return the LV name. */
return lv_fullname_dup(lv->vg->cmd->mem, lv) ? : lv->name;
}
#define BASE_UNKNOWN 0
#define BASE_SHARED 1
#define BASE_1024 8
@@ -203,7 +113,7 @@ static const char *_display_size(const struct cmd_context *cmd,
{
unsigned base = BASE_UNKNOWN;
unsigned s;
int suffix = 1, precision;
int suffix, precision;
uint64_t byte = UINT64_C(0);
uint64_t units = UINT64_C(1024);
char *size_buf = NULL;
@@ -523,11 +433,11 @@ int lvdisplay_full(struct cmd_context *cmd,
struct lv_segment *snap_seg = NULL, *mirror_seg = NULL;
struct lv_segment *seg = NULL;
int lvm1compat;
percent_t snap_percent;
dm_percent_t snap_percent;
int thin_data_active = 0, thin_metadata_active = 0;
percent_t thin_data_percent, thin_metadata_percent;
dm_percent_t thin_data_percent, thin_metadata_percent;
int thin_active = 0;
percent_t thin_percent;
dm_percent_t thin_percent;
if (!id_write_format(&lv->lvid.id[1], uuid, sizeof(uuid)))
return_0;
@@ -573,7 +483,7 @@ int lvdisplay_full(struct cmd_context *cmd,
if (inkernel &&
(snap_active = lv_snapshot_percent(snap_seg->cow,
&snap_percent)))
if (snap_percent == PERCENT_INVALID)
if (snap_percent == DM_PERCENT_INVALID)
snap_active = 0;
if (lvm1compat)
log_print(" %s%s/%s [%s]",
@@ -590,7 +500,7 @@ int lvdisplay_full(struct cmd_context *cmd,
if (inkernel &&
(snap_active = lv_snapshot_percent(snap_seg->cow,
&snap_percent)))
if (snap_percent == PERCENT_INVALID)
if (snap_percent == DM_PERCENT_INVALID)
snap_active = 0;
if (lvm1compat)
@@ -607,7 +517,6 @@ int lvdisplay_full(struct cmd_context *cmd,
if (lv_is_thin_volume(lv)) {
seg = first_seg(lv);
log_print("LV Pool name %s", seg->pool_lv->name);
log_print("LV Thin device ID %u", seg->device_id);
if (seg->origin)
log_print("LV Thin origin name %s",
seg->origin->name);
@@ -623,24 +532,19 @@ int lvdisplay_full(struct cmd_context *cmd,
log_print("LV merged with %s",
find_snapshot(lv)->lv->name);
} else if (lv_is_thin_pool(lv)) {
if (inkernel) {
if (lv_info(cmd, lv, 1, &info, 1, 1) && info.exists) {
thin_data_active = lv_thin_pool_percent(lv, 0, &thin_data_percent);
thin_metadata_active = lv_thin_pool_percent(lv, 1, &thin_metadata_percent);
}
/* FIXME: display thin_pool targets transid for activated LV as well */
seg = first_seg(lv);
log_print("LV Pool transaction ID %" PRIu64, seg->transaction_id);
log_print("LV Pool metadata %s", seg->metadata_lv->name);
log_print("LV Pool data %s", seg_lv(seg, 0)->name);
log_print("LV Pool chunk size %s",
display_size(cmd, seg->chunk_size));
log_print("LV Zero new blocks %s",
seg->zero_new_blocks ? "yes" : "no");
}
if (inkernel && info.suspended)
log_print("LV Status suspended");
else
else if (activation())
log_print("LV Status %savailable",
inkernel ? "" : "NOT ");
@@ -657,15 +561,15 @@ int lvdisplay_full(struct cmd_context *cmd,
if (thin_data_active)
log_print("Allocated pool data %.2f%%",
percent_to_float(thin_data_percent));
dm_percent_to_float(thin_data_percent));
if (thin_metadata_active)
log_print("Allocated metadata %.2f%%",
percent_to_float(thin_metadata_percent));
dm_percent_to_float(thin_metadata_percent));
if (thin_active)
log_print("Mapped size %.2f%%",
percent_to_float(thin_percent));
dm_percent_to_float(thin_percent));
log_print("Current LE %u",
snap_seg ? snap_seg->origin->le_count : lv->le_count);
@@ -677,16 +581,16 @@ int lvdisplay_full(struct cmd_context *cmd,
if (snap_active)
log_print("Allocated to snapshot %.2f%%",
percent_to_float(snap_percent));
dm_percent_to_float(snap_percent));
log_print("Snapshot chunk size %s",
display_size(cmd, (uint64_t) snap_seg->chunk_size));
}
if (lv->status & MIRRORED) {
if (lv_is_mirrored(lv)) {
mirror_seg = first_seg(lv);
log_print("Mirrored volumes %" PRIu32, mirror_seg->area_count);
if (lv->status & CONVERTING)
if (lv_is_converting(lv))
log_print("LV type Mirror undergoing conversion");
}
@@ -759,11 +663,16 @@ int lvdisplay_segments(const struct logical_volume *lv)
log_print("--- Segments ---");
dm_list_iterate_items(seg, &lv->segments) {
log_print("Logical extent %u to %u:",
log_print("%s extents %u to %u:",
lv_is_virtual(lv) ? "Virtual" : "Logical",
seg->le, seg->le + seg->len - 1);
log_print(" Type\t\t%s", seg->segtype->ops->name(seg));
if (seg->segtype->ops->target_monitored)
log_print(" Monitoring\t\t%s",
lvseg_monitor_dup(lv->vg->cmd->mem, seg));
if (seg->segtype->ops->display)
seg->segtype->ops->display(seg);
}
@@ -930,7 +839,7 @@ void display_segtypes(const struct cmd_context *cmd)
void display_tags(const struct cmd_context *cmd)
{
const struct str_list *sl;
const struct dm_str_list *sl;
dm_list_iterate_items(sl, &cmd->tags) {
log_print("%s", sl->str);
@@ -939,30 +848,31 @@ void display_tags(const struct cmd_context *cmd)
void display_name_error(name_error_t name_error)
{
if (name_error != NAME_VALID) {
switch(name_error) {
case NAME_INVALID_EMPTY:
log_error("Name is zero length");
break;
case NAME_INVALID_HYPEN:
log_error("Name cannot start with hyphen");
break;
case NAME_INVALID_DOTS:
log_error("Name starts with . or .. and has no "
"following character(s)");
break;
case NAME_INVALID_CHARSET:
log_error("Name contains invalid character, valid set includes: "
"[a-zA-Z0-9.-_+]");
break;
case NAME_INVALID_LENGTH:
/* Report that name length -1 to accommodate nul*/
log_error("Name length exceeds maximum limit of %d", (NAME_LEN -1));
break;
default:
log_error("Unknown error %d on name validation", name_error);
break;
}
switch(name_error) {
case NAME_VALID:
/* Valid name */
break;
case NAME_INVALID_EMPTY:
log_error("Name is zero length.");
break;
case NAME_INVALID_HYPEN:
log_error("Name cannot start with hyphen.");
break;
case NAME_INVALID_DOTS:
log_error("Name starts with . or .. and has no "
"following character(s).");
break;
case NAME_INVALID_CHARSET:
log_error("Name contains invalid character, valid set includes: "
"[a-zA-Z0-9.-_+].");
break;
case NAME_INVALID_LENGTH:
/* Report that name length - 1 to accommodate nul*/
log_error("Name length exceeds maximum limit of %d.", (NAME_LEN - 1));
break;
default:
log_error(INTERNAL_ERROR "Unknown error %d on name validation.", name_error);
break;
}
}
@@ -973,12 +883,9 @@ void display_name_error(name_error_t name_error)
*/
char yes_no_prompt(const char *prompt, ...)
{
int c = 0, ret = 0;
int c = 0, ret = 0, cb = 0;
va_list ap;
if (silent_mode())
return 'n';
sigint_allow();
do {
if (c == '\n' || !c) {
@@ -986,11 +893,17 @@ char yes_no_prompt(const char *prompt, ...)
vfprintf(stderr, prompt, ap);
va_end(ap);
fflush(stderr);
if (silent_mode()) {
fputc('n', stderr);
ret = 'n';
break;
}
ret = 0;
}
if ((c = getchar()) == EOF) {
ret = 'n'; /* SIGINT */
cb = 1;
break;
}
@@ -1006,8 +919,11 @@ char yes_no_prompt(const char *prompt, ...)
sigint_restore();
if (cb && !sigint_caught())
fputc(ret, stderr);
if (c != '\n')
fprintf(stderr, "\n");
fputc('\n', stderr);
return ret;
}
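The rework above makes yes_no_prompt() answer 'n' by itself in silent mode (echoing the 'n' so the transcript stays readable) and on EOF or interrupt, while the earlier wipe hunks switch their callers from failing on anything that is not 'y' to aborting explicitly on 'n' with a clear error message. A hedged caller-side sketch of that pattern (confirm_wipe() is a hypothetical wrapper; yes_no_prompt() and log_error() are the interfaces shown in this diff):

static int confirm_wipe(const char *type, const char *name)
{
        if (yes_no_prompt("WARNING: %s detected on %s. Wipe it? [y/n]: ",
                          type, name) == 'n') {
                log_error("Aborted wiping of %s.", type);
                return 0;       /* covers a typed 'n', silent mode and EOF */
        }

        return 1;               /* only 'y' reaches here */
}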


@@ -22,7 +22,7 @@
#include <stdint.h>
uint64_t units_to_bytes(const char *units, char *unit_type);
const char *display_lvname(const struct logical_volume *lv);
/* Specify size in KB */
const char *display_size(const struct cmd_context *cmd, uint64_t size);


@@ -63,7 +63,6 @@ static int _errseg_target_present(struct cmd_context *cmd,
_errseg_checked = 1;
return _errseg_present;
}
#endif
static int _errseg_modules_needed(struct dm_pool *mem,
const struct lv_segment *seg __attribute__((unused)),
@@ -76,6 +75,7 @@ static int _errseg_modules_needed(struct dm_pool *mem,
return 1;
}
#endif
static void _errseg_destroy(struct segment_type *segtype)
{
@@ -88,8 +88,8 @@ static struct segtype_handler _error_ops = {
#ifdef DEVMAPPER_SUPPORT
.add_target_line = _errseg_add_target_line,
.target_present = _errseg_target_present,
#endif
.modules_needed = _errseg_modules_needed,
#endif
.destroy = _errseg_destroy,
};


@@ -22,7 +22,7 @@
#define MPATH_PREFIX "mpath-"
static const char *get_sysfs_name(struct device *dev)
static const char *_get_sysfs_name(struct device *dev)
{
const char *name;
@@ -40,7 +40,35 @@ static const char *get_sysfs_name(struct device *dev)
return name;
}
static int get_sysfs_string(const char *path, char *buffer, int max_size)
static const char *_get_sysfs_name_by_devt(const char *sysfs_dir, dev_t devno,
char *buf, size_t buf_size)
{
const char *name;
char path[PATH_MAX];
int size;
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d", sysfs_dir,
(int) MAJOR(devno), (int) MINOR(devno)) < 0) {
log_error("Sysfs path string is too long.");
return NULL;
}
if ((size = readlink(path, buf, buf_size - 1)) < 0) {
log_sys_error("readlink", path);
return NULL;
}
buf[size] = '\0';
if (!(name = strrchr(buf, '/'))) {
log_error("Cannot find device name in sysfs path.");
return NULL;
}
name++;
return name;
}
static int _get_sysfs_string(const char *path, char *buffer, int max_size)
{
FILE *fp;
int r = 0;
@@ -61,7 +89,7 @@ static int get_sysfs_string(const char *path, char *buffer, int max_size)
return r;
}
static int get_sysfs_get_major_minor(const char *sysfs_dir, const char *kname, int *major, int *minor)
static int _get_sysfs_get_major_minor(const char *sysfs_dir, const char *kname, int *major, int *minor)
{
char path[PATH_MAX], buffer[64];
@@ -70,7 +98,7 @@ static int get_sysfs_get_major_minor(const char *sysfs_dir, const char *kname, i
return 0;
}
if (!get_sysfs_string(path, buffer, sizeof(buffer)))
if (!_get_sysfs_string(path, buffer, sizeof(buffer)))
return_0;
if (sscanf(buffer, "%d:%d", major, minor) != 2) {
@@ -81,7 +109,7 @@ static int get_sysfs_get_major_minor(const char *sysfs_dir, const char *kname, i
return 1;
}
static int get_parent_mpath(const char *dir, char *name, int max_size)
static int _get_parent_mpath(const char *dir, char *name, int max_size)
{
struct dirent *d;
DIR *dr;
@@ -113,13 +141,12 @@ static int get_parent_mpath(const char *dir, char *name, int max_size)
return r;
}
static int dev_is_mpath(struct dev_filter *f, struct device *dev)
static int _dev_is_mpath(struct dev_filter *f, struct device *dev)
{
struct dev_types *dt = (struct dev_types *) f->private;
const char *name;
char path[PATH_MAX+1];
char parent_name[PATH_MAX+1];
const char *part_name, *name;
struct stat info;
char path[PATH_MAX], parent_name[PATH_MAX];
const char *sysfs_dir = dm_sysfs_dir();
int major = MAJOR(dev->dev);
int minor = MINOR(dev->dev);
@@ -130,38 +157,24 @@ static int dev_is_mpath(struct dev_filter *f, struct device *dev)
return 0;
switch (dev_get_primary_dev(dt, dev, &primary_dev)) {
case 0:
/* Error. */
log_error("Failed to get primary device for %d:%d.", major, minor);
return 0;
case 1:
/* The dev is already a primary dev. Just continue with the dev. */
break;
case 2:
/* The dev is partition. */
name = dev_name(dev); /* name of original dev for log_debug msg */
/* Get primary dev from cache. */
if (!(dev = dev_cache_get_by_devt(primary_dev, NULL))) {
log_error("dev_is_mpath: failed to get device for %d:%d",
major, minor);
return 0;
}
major = (int) MAJOR(primary_dev);
minor = (int) MINOR(primary_dev);
log_debug_devs("%s: Device is a partition, using primary "
"device %s for mpath component detection",
name, dev_name(dev));
break;
case 2: /* The dev is partition. */
part_name = dev_name(dev); /* name of original dev for log_debug msg */
if (!(name = _get_sysfs_name_by_devt(sysfs_dir, primary_dev, parent_name, sizeof(parent_name))))
return_0;
log_debug_devs("%s: Device is a partition, using primary "
"device %s for mpath component detection",
part_name, name);
break;
case 1: /* The dev is already a primary dev. Just continue with the dev. */
if (!(name = _get_sysfs_name(dev)))
return_0;
break;
default: /* 0, error. */
log_error("Failed to get primary device for %d:%d.", major, minor);
return 0;
}
if (!(name = get_sysfs_name(dev)))
return_0;
if (dm_snprintf(path, PATH_MAX, "%s/block/%s/holders", sysfs_dir, name) < 0) {
if (dm_snprintf(path, sizeof(path), "%s/block/%s/holders", sysfs_dir, name) < 0) {
log_error("Sysfs path to check mpath is too long.");
return 0;
}
@@ -175,10 +188,10 @@ static int dev_is_mpath(struct dev_filter *f, struct device *dev)
return 0;
}
if (!get_parent_mpath(path, parent_name, PATH_MAX))
if (!_get_parent_mpath(path, parent_name, sizeof(parent_name)))
return 0;
if (!get_sysfs_get_major_minor(sysfs_dir, parent_name, &major, &minor))
if (!_get_sysfs_get_major_minor(sysfs_dir, parent_name, &major, &minor))
return_0;
if (major != dt->device_mapper_major)
@@ -189,7 +202,7 @@ static int dev_is_mpath(struct dev_filter *f, struct device *dev)
static int _ignore_mpath(struct dev_filter *f, struct device *dev)
{
if (dev_is_mpath(f, dev) == 1) {
if (_dev_is_mpath(f, dev) == 1) {
log_debug_devs("%s: Skipping mpath component device", dev_name(dev));
return 0;
}
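_get_sysfs_name_by_devt() above turns a major:minor pair into the kernel device name by reading the /sys/dev/block/<major>:<minor> symlink and keeping only the last path component. A self-contained sketch of the same idea (the helper name and the hard-coded /sys prefix are illustrative; the code above obtains the sysfs mount point from dm_sysfs_dir()):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/sysmacros.h>
#include <sys/types.h>

/* e.g. 8:3 usually resolves to a link ending in ".../block/sda/sda3" -> "sda3" */
static const char *kname_from_devno(dev_t devno, char *buf, size_t buf_size)
{
        char path[64];
        const char *name;
        ssize_t len;

        snprintf(path, sizeof(path), "/sys/dev/block/%u:%u",
                 major(devno), minor(devno));

        if ((len = readlink(path, buf, buf_size - 1)) < 0)
                return NULL;
        buf[len] = '\0';

        if (!(name = strrchr(buf, '/')))
                return NULL;

        return name + 1;        /* kernel name is the final path component */
}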


@@ -114,7 +114,7 @@ int persistent_filter_load(struct dev_filter *f, struct dm_config_tree **cft_out
return_0;
}
if (!(cft = config_open(CONFIG_FILE, pf->file, 1)))
if (!(cft = config_open(CONFIG_FILE_SPECIAL, pf->file, 1)))
return_0;
if (!config_file_read(cft))
@@ -272,7 +272,7 @@ static int _lookup_p(struct dev_filter *f, struct device *dev)
{
struct pfilter *pf = (struct pfilter *) f->private;
void *l = dm_hash_lookup(pf->devices, dev_name(dev));
struct str_list *sl;
struct dm_str_list *sl;
/* Cached BAD? */
if (l == PF_BAD_DEVICE) {


@@ -149,7 +149,7 @@ static int _accept_p(struct dev_filter *f, struct device *dev)
{
int m, first = 1, rejected = 0;
struct rfilter *rf = (struct rfilter *) f->private;
struct str_list *sl;
struct dm_str_list *sl;
dm_list_iterate_items(sl, &dev->aliases) {
m = dm_regex_match(rf->engine, sl->str);


@@ -727,6 +727,9 @@ static int _write_all_pvd(const struct format_type *fmt, struct disk_list *data,
{
int r;
if (!data->dev)
return_0;
if (!dev_open(data->dev))
return_0;


@@ -16,7 +16,6 @@
#ifndef DISK_REP_FORMAT1_H
#define DISK_REP_FORMAT1_H
#include "lvm-types.h"
#include "metadata.h"
#include "toolcontext.h"


@@ -381,6 +381,7 @@ static int _format1_pv_setup(const struct format_type *fmt,
struct physical_volume *pv,
struct volume_group *vg)
{
int r;
struct pvcreate_restorable_params rp = {.restorefile = NULL,
.id = {{0}},
.idp = NULL,
@@ -390,7 +391,10 @@ static int _format1_pv_setup(const struct format_type *fmt,
.extent_count = 0,
.extent_size = vg->extent_size};
return _format1_pv_initialise(fmt, -1, 0, 0, &rp, pv);
if ((r = _format1_pv_initialise(fmt, -1, 0, 0, &rp, pv)))
pv->status |= ALLOCATABLE_PV;
return r;
}
static int _format1_lv_setup(struct format_instance *fid, struct logical_volume *lv)


@@ -347,7 +347,7 @@ static void _export_lv(struct lv_disk *lvd, struct volume_group *vg,
snprintf((char *)lvd->lv_name, sizeof(lvd->lv_name), "%s%s/%s",
dev_dir, vg->name, lv->name);
strcpy((char *)lvd->vg_name, vg->name);
(void) dm_strncpy((char *)lvd->vg_name, vg->name, sizeof(lvd->vg_name));
if (lv->status & LVM_READ)
lvd->lv_access |= LV_READ;


@@ -190,9 +190,12 @@ static int _out_with_comment_raw(struct formatter *f,
const char *fmt, va_list ap)
{
int n;
va_list apc;
va_copy(apc, ap);
n = vsnprintf(f->data.buf.start + f->data.buf.used,
f->data.buf.size - f->data.buf.used, fmt, ap);
f->data.buf.size - f->data.buf.used, fmt, apc);
va_end(apc);
/* If metadata doesn't fit, extend buffer */
if (n < 0 || (n + f->data.buf.used + 2 > f->data.buf.size)) {
@@ -396,7 +399,7 @@ static int _print_vg(struct formatter *f, struct volume_group *vg)
outf(f, "seqno = %u", vg->seqno);
if (vg->fid && vg->fid->fmt)
outf(f, "format = \"%s\" # informational", vg->fid->fmt->name);
outfc(f, "# informational", "format = \"%s\"", vg->fid->fmt->name);
if (!_print_flag_config(f, vg->status, VG_FLAGS))
return_0;
@@ -563,8 +566,8 @@ int out_areas(struct formatter *f, const struct lv_segment *seg,
}
/* RAID devices are laid-out in metadata/data pairs */
if (!(seg_lv(seg, s)->status & RAID_IMAGE) ||
!(seg_metalv(seg, s)->status & RAID_META)) {
if (!lv_is_raid_image(seg_lv(seg, s)) ||
!lv_is_raid_metadata(seg_metalv(seg, s))) {
log_error("RAID segment has non-RAID areas");
return 0;
}
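The va_copy() change in the first hunk of this file matters because the formatter can be invoked again with a larger buffer when the first vsnprintf() did not fit, and a va_list that has already been consumed by vsnprintf() must not be reused; working on a copy keeps the caller's list intact. The rule in isolation (format_once() is an illustrative name):

#include <stdarg.h>
#include <stdio.h>

/* Format into buf without consuming the caller's va_list,
 * so the caller can retry after growing the buffer. */
static int format_once(char *buf, size_t size, const char *fmt, va_list ap)
{
        va_list apc;
        int n;

        va_copy(apc, ap);
        n = vsnprintf(buf, size, fmt, apc);
        va_end(apc);

        return n;               /* >= size means the output was truncated */
}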


@@ -67,6 +67,7 @@ static const struct flag _lv_flags[] = {
{RAID, NULL, 0},
{RAID_META, NULL, 0},
{RAID_IMAGE, NULL, 0},
{MIRROR, NULL, 0},
{MIRROR_IMAGE, NULL, 0},
{MIRROR_LOG, NULL, 0},
{MIRRORED, NULL, 0},


@@ -16,7 +16,6 @@
#ifndef _LVM_FORMAT_TEXT_H
#define _LVM_FORMAT_TEXT_H
#include "lvm-types.h"
#include "metadata.h"
#define FMT_TEXT_NAME "lvm2"


@@ -17,7 +17,6 @@
#define _LVM_TEXT_IMPORT_EXPORT_H
#include "config.h"
#include "lvm-types.h"
#include "metadata.h"
#include <stdio.h>


@@ -46,7 +46,7 @@ const char *text_vgname_import(const struct format_type *fmt,
_init_text_import();
if (!(cft = config_open(CONFIG_FILE, NULL, 0)))
if (!(cft = config_open(CONFIG_FILE_SPECIAL, NULL, 0)))
return_NULL;
if ((!dev && !config_file_read(cft)) ||
@@ -92,7 +92,7 @@ struct volume_group *text_vg_import_fd(struct format_instance *fid,
*desc = NULL;
*when = 0;
if (!(cft = config_open(CONFIG_FILE, file, 0)))
if (!(cft = config_open(CONFIG_FILE_SPECIAL, file, 0)))
return_NULL;
if ((!dev && !config_file_read(cft)) ||


@@ -99,7 +99,7 @@ static int _is_converting(struct logical_volume *lv)
{
struct lv_segment *seg;
if (lv->status & MIRRORED) {
if (lv_is_mirrored(lv)) {
seg = first_seg(lv);
/* Can't use is_temporary_mirror() because the metadata for
* seg_lv may not be read in and flags may not be set yet. */
@@ -386,6 +386,9 @@ static int _read_segment(struct logical_volume *lv, const struct dm_config_node
*/
_insert_segment(lv, seg);
if (seg_is_mirror(seg))
lv->status |= MIRROR;
if (seg_is_mirrored(seg))
lv->status |= MIRRORED;
@@ -576,7 +579,7 @@ static int _read_lvnames(struct format_instance *fid __attribute__((unused)),
if (dm_config_get_str(lvn, "profile", &str)) {
log_debug_metadata("Adding profile configuration %s for LV %s/%s.",
str, vg->name, lv->name);
lv->profile = add_profile(vg->cmd, str);
lv->profile = add_profile(vg->cmd, str, CONFIG_PROFILE_METADATA);
if (!lv->profile) {
log_error("Failed to add configuration profile %s for LV %s/%s",
str, vg->name, lv->name);
@@ -814,7 +817,7 @@ static struct volume_group *_read_vg(struct format_instance *fid,
if (dm_config_get_str(vgn, "profile", &str)) {
log_debug_metadata("Adding profile configuration %s for VG %s.", str, vg->name);
vg->profile = add_profile(vg->cmd, str);
vg->profile = add_profile(vg->cmd, str, CONFIG_PROFILE_METADATA);
if (!vg->profile) {
log_error("Failed to add configuration profile %s for VG %s", str, vg->name);
goto bad;


@@ -17,7 +17,6 @@
#define _LVM_TEXT_LAYOUT_H
#include "config.h"
#include "lvm-types.h"
#include "metadata.h"
#include "uuid.h"


@@ -21,7 +21,7 @@
char *alloc_printed_tags(struct dm_list *tagsl)
{
struct str_list *sl;
struct dm_str_list *sl;
int first = 1;
size_t size = 0;
char *buffer, *buf;


@@ -419,12 +419,14 @@ static int _lock_resource(struct cmd_context *cmd, const char *resource,
char lockname[PATH_MAX];
int clvmd_cmd = 0;
const char *lock_scope;
const char *lock_type = "";
const char *lock_type;
assert(strlen(resource) < sizeof(lockname));
assert(resource);
switch (flags & LCK_SCOPE_MASK) {
case LCK_ACTIVATION:
return 1;
case LCK_VG:
if (!strcmp(resource, VG_SYNC_NAMES)) {
log_very_verbose("Requesting sync names.");


@@ -19,7 +19,6 @@
#include "sharedlib.h"
#include "toolcontext.h"
#include "activate.h"
#include "locking.h"
static void *_locking_lib = NULL;
static void (*_reset_fn) (void) = NULL;


@@ -19,248 +19,26 @@
#include "activate.h"
#include "config.h"
#include "defaults.h"
#include "lvm-file.h"
#include "lvm-string.h"
#include "lvm-flock.h"
#include "lvmcache.h"
#include <limits.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/file.h>
#include <fcntl.h>
#include <signal.h>
struct lock_list {
struct dm_list list;
int lf;
char *res;
};
static struct dm_list _lock_list;
static char _lock_dir[PATH_MAX];
static int _prioritise_write_locks;
static sig_t _oldhandler;
static sigset_t _fullsigset, _intsigset;
static volatile sig_atomic_t _handler_installed;
/* Drop lock known to be shared with another file descriptor. */
static void _drop_shared_flock(const char *file, int fd)
{
log_debug_locking("_drop_shared_flock %s.", file);
if (close(fd) < 0)
log_sys_debug("close", file);
}
static void _undo_flock(const char *file, int fd)
{
struct stat buf1, buf2;
log_debug_locking("_undo_flock %s", file);
if (!flock(fd, LOCK_NB | LOCK_EX) &&
!stat(file, &buf1) &&
!fstat(fd, &buf2) &&
is_same_inode(buf1, buf2))
if (unlink(file))
log_sys_debug("unlink", file);
if (close(fd) < 0)
log_sys_debug("close", file);
}
static int _release_lock(const char *file, int unlock)
{
struct lock_list *ll;
struct dm_list *llh, *llt;
dm_list_iterate_safe(llh, llt, &_lock_list) {
ll = dm_list_item(llh, struct lock_list);
if (!file || !strcmp(ll->res, file)) {
dm_list_del(llh);
if (unlock) {
log_very_verbose("Unlocking %s", ll->res);
if (flock(ll->lf, LOCK_NB | LOCK_UN))
log_sys_debug("flock", ll->res);
_undo_flock(ll->res, ll->lf);
} else
_drop_shared_flock(ll->res, ll->lf);
dm_free(ll->res);
dm_free(llh);
if (file)
return 1;
}
}
return 0;
}
static void _fin_file_locking(void)
{
_release_lock(NULL, 1);
release_flocks(1);
}
static void _reset_file_locking(void)
{
_release_lock(NULL, 0);
}
static void _remove_ctrl_c_handler(void)
{
siginterrupt(SIGINT, 0);
if (!_handler_installed)
return;
_handler_installed = 0;
sigprocmask(SIG_SETMASK, &_fullsigset, NULL);
if (signal(SIGINT, _oldhandler) == SIG_ERR)
log_sys_error("signal", "_remove_ctrl_c_handler");
}
static void _trap_ctrl_c(int sig __attribute__((unused)))
{
_remove_ctrl_c_handler();
log_error("CTRL-c detected: giving up waiting for lock");
}
static void _install_ctrl_c_handler(void)
{
_handler_installed = 1;
if ((_oldhandler = signal(SIGINT, _trap_ctrl_c)) == SIG_ERR) {
_handler_installed = 0;
return;
}
sigprocmask(SIG_SETMASK, &_intsigset, NULL);
siginterrupt(SIGINT, 1);
}
static int _do_flock(const char *file, int *fd, int operation, uint32_t nonblock)
{
int r = 1;
int old_errno;
struct stat buf1, buf2;
log_debug_locking("_do_flock %s %c%c", file,
operation == LOCK_EX ? 'W' : 'R', nonblock ? ' ' : 'B');
do {
if ((*fd > -1) && close(*fd))
log_sys_debug("close", file);
if ((*fd = open(file, O_CREAT | O_APPEND | O_RDWR, 0777)) < 0) {
log_sys_error("open", file);
return 0;
}
if (nonblock)
operation |= LOCK_NB;
else
_install_ctrl_c_handler();
r = flock(*fd, operation);
old_errno = errno;
if (!nonblock)
_remove_ctrl_c_handler();
if (r) {
errno = old_errno;
log_sys_error("flock", file);
if (close(*fd))
log_sys_debug("close", file);
*fd = -1;
return 0;
}
if (!stat(file, &buf1) && !fstat(*fd, &buf2) &&
is_same_inode(buf1, buf2))
return 1;
} while (!nonblock);
return_0;
}
#define AUX_LOCK_SUFFIX ":aux"
static int _do_write_priority_flock(const char *file, int *fd, int operation, uint32_t nonblock)
{
int r, fd_aux = -1;
char *file_aux = alloca(strlen(file) + sizeof(AUX_LOCK_SUFFIX));
strcpy(file_aux, file);
strcat(file_aux, AUX_LOCK_SUFFIX);
if ((r = _do_flock(file_aux, &fd_aux, LOCK_EX, 0))) {
if (operation == LOCK_EX) {
r = _do_flock(file, fd, operation, nonblock);
_undo_flock(file_aux, fd_aux);
} else {
_undo_flock(file_aux, fd_aux);
r = _do_flock(file, fd, operation, nonblock);
}
}
return r;
}
static int _lock_file(const char *file, uint32_t flags)
{
int operation;
uint32_t nonblock = flags & LCK_NONBLOCK;
int r;
struct lock_list *ll;
char state;
switch (flags & LCK_TYPE_MASK) {
case LCK_READ:
operation = LOCK_SH;
state = 'R';
break;
case LCK_WRITE:
operation = LOCK_EX;
state = 'W';
break;
case LCK_UNLOCK:
return _release_lock(file, 1);
default:
log_error("Unrecognised lock type: %d", flags & LCK_TYPE_MASK);
return 0;
}
if (!(ll = dm_malloc(sizeof(struct lock_list))))
return_0;
if (!(ll->res = dm_strdup(file))) {
dm_free(ll);
return_0;
}
ll->lf = -1;
log_very_verbose("Locking %s %c%c", ll->res, state,
nonblock ? ' ' : 'B');
(void) dm_prepare_selinux_context(file, S_IFREG);
if (_prioritise_write_locks)
r = _do_write_priority_flock(file, &ll->lf, operation, nonblock);
else
r = _do_flock(file, &ll->lf, operation, nonblock);
(void) dm_prepare_selinux_context(NULL, 0);
if (r)
dm_list_add(&_lock_list, &ll->list);
else {
dm_free(ll->res);
dm_free(ll);
stack;
}
return r;
release_flocks(0);
}
static int _file_lock_resource(struct cmd_context *cmd, const char *resource,
@@ -271,6 +49,16 @@ static int _file_lock_resource(struct cmd_context *cmd, const char *resource,
unsigned revert = (flags & LCK_REVERT) ? 1 : 0;
switch (flags & LCK_SCOPE_MASK) {
case LCK_ACTIVATION:
if (dm_snprintf(lockfile, sizeof(lockfile),
"%s/A_%s", _lock_dir, resource + 1) < 0) {
log_error("Too long locking filename %s/A_%s.", _lock_dir, resource + 1);
return 0;
}
if (!lock_file(lockfile, flags))
return_0;
break;
case LCK_VG:
/* Skip cache refresh for VG_GLOBAL - the caller handles it */
if (strcmp(resource, VG_GLOBAL))
@@ -298,7 +86,7 @@ static int _file_lock_resource(struct cmd_context *cmd, const char *resource,
return 0;
}
if (!_lock_file(lockfile, flags))
if (!lock_file(lockfile, flags))
return_0;
break;
case LCK_LV:
@@ -352,6 +140,8 @@ int init_file_locking(struct locking_type *locking, struct cmd_context *cmd,
int r;
const char *locking_dir;
init_flock(cmd);
locking->lock_resource = _file_lock_resource;
locking->reset_locking = _reset_file_locking;
locking->fin_locking = _fin_file_locking;
@@ -366,9 +156,6 @@ int init_file_locking(struct locking_type *locking, struct cmd_context *cmd,
strcpy(_lock_dir, locking_dir);
_prioritise_write_locks =
find_config_tree_bool(cmd, global_prioritise_write_locks_CFG, NULL);
(void) dm_prepare_selinux_context(_lock_dir, S_IFDIR);
r = dm_create_dir(_lock_dir);
(void) dm_prepare_selinux_context(NULL, 0);
@@ -380,19 +167,5 @@ int init_file_locking(struct locking_type *locking, struct cmd_context *cmd,
if ((access(_lock_dir, R_OK | W_OK | X_OK) == -1) && (errno == EROFS))
return 0;
dm_list_init(&_lock_list);
if (sigfillset(&_intsigset) || sigfillset(&_fullsigset)) {
log_sys_error_suppress(suppress_messages, "sigfillset",
"init_file_locking");
return 0;
}
if (sigdelset(&_intsigset, SIGINT)) {
log_sys_error_suppress(suppress_messages, "sigdelset",
"init_file_locking");
return 0;
}
return 1;
}
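The block removed above (file locking now goes through the shared helpers pulled in by the new "lvm-flock.h" include and the lock_file()/init_flock()/release_flocks() calls) included a write-priority variant of flock: every caller first takes an auxiliary "<file>:aux" lock exclusively, a writer keeps it until the real lock is held, and a reader drops it before blocking, so waiting writers are not starved by a steady stream of readers. A compressed sketch of that ordering (do_flock() and undo_flock() are stand-ins for the helpers shown in the removed code):

#include <limits.h>
#include <stdio.h>
#include <sys/file.h>

/* Stand-ins for the _do_flock()/_undo_flock() helpers shown above. */
static int do_flock(const char *file, int *fd, int operation, int nonblock);
static void undo_flock(const char *file, int fd);

static int write_priority_flock(const char *file, int *fd, int op, int nonblock)
{
        int r, fd_aux = -1;
        char file_aux[PATH_MAX + sizeof(":aux")];

        snprintf(file_aux, sizeof(file_aux), "%s:aux", file);

        if (!(r = do_flock(file_aux, &fd_aux, LOCK_EX, 0)))
                return 0;

        if (op == LOCK_EX) {            /* writer: hold aux until the real lock is taken */
                r = do_flock(file, fd, op, nonblock);
                undo_flock(file_aux, fd_aux);
        } else {                        /* reader: give the aux lock back first */
                undo_flock(file_aux, fd_aux);
                r = do_flock(file, fd, op, nonblock);
        }

        return r;
}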


@@ -22,148 +22,25 @@
#include "memlock.h"
#include "defaults.h"
#include "lvmcache.h"
#include "lvm-signal.h"
#include <assert.h>
#include <signal.h>
#include <sys/stat.h>
#include <limits.h>
#include <unistd.h>
static struct locking_type _locking;
static sigset_t _oldset;
static int _vg_lock_count = 0; /* Number of locks held */
static int _vg_write_lock_held = 0; /* VG write lock held? */
static int _signals_blocked = 0;
static int _blocking_supported = 0;
static volatile sig_atomic_t _sigint_caught = 0;
static volatile sig_atomic_t _handler_installed;
static struct sigaction _oldhandler;
static int _oldmasked;
typedef enum {
LV_NOOP,
LV_SUSPEND,
LV_RESUME
} lv_operation_t;
static void _catch_sigint(int unused __attribute__((unused)))
{
_sigint_caught = 1;
}
int sigint_caught(void) {
if (_sigint_caught)
log_error("Interrupted...");
return _sigint_caught;
}
void sigint_clear(void)
{
_sigint_caught = 0;
}
/*
* Temporarily allow keyboard interrupts to be intercepted and noted;
* saves interrupt handler state for sigint_restore(). Users should
* use the sigint_caught() predicate to check whether interrupt was
* requested and act appropriately. Interrupt flags are never
* cleared automatically by this code, but the tools clear the flag
* before running each command in lvm_run_command(). All other places
* where the flag needs to be cleared need to call sigint_clear().
*/
void sigint_allow(void)
{
struct sigaction handler;
sigset_t sigs;
/*
* Do not overwrite the backed-up handler data -
* just increase nesting count.
*/
if (_handler_installed) {
_handler_installed++;
return;
}
/* Grab old sigaction for SIGINT: shall not fail. */
sigaction(SIGINT, NULL, &handler);
handler.sa_flags &= ~SA_RESTART; /* Clear restart flag */
handler.sa_handler = _catch_sigint;
_handler_installed = 1;
/* Override the signal handler: shall not fail. */
sigaction(SIGINT, &handler, &_oldhandler);
/* Unmask SIGINT. Remember to mask it again on restore. */
sigprocmask(0, NULL, &sigs);
if ((_oldmasked = sigismember(&sigs, SIGINT))) {
sigdelset(&sigs, SIGINT);
sigprocmask(SIG_SETMASK, &sigs, NULL);
}
}
void sigint_restore(void)
{
if (!_handler_installed)
return;
if (_handler_installed > 1) {
_handler_installed--;
return;
}
/* Nesting count went down to 0. */
_handler_installed = 0;
if (_oldmasked) {
sigset_t sigs;
sigprocmask(0, NULL, &sigs);
sigaddset(&sigs, SIGINT);
sigprocmask(SIG_SETMASK, &sigs, NULL);
}
sigaction(SIGINT, &_oldhandler, NULL);
}
static void _block_signals(uint32_t flags __attribute__((unused)))
{
sigset_t set;
if (_signals_blocked)
return;
if (sigfillset(&set)) {
log_sys_error("sigfillset", "_block_signals");
return;
}
if (sigprocmask(SIG_SETMASK, &set, &_oldset)) {
log_sys_error("sigprocmask", "_block_signals");
return;
}
_signals_blocked = 1;
}
static void _unblock_signals(void)
{
/* Don't unblock signals while any locks are held */
if (!_signals_blocked || _vg_lock_count)
return;
if (sigprocmask(SIG_SETMASK, &_oldset, NULL)) {
log_sys_error("sigprocmask", "_block_signals");
return;
}
_signals_blocked = 0;
}
static void _lock_memory(struct cmd_context *cmd, lv_operation_t lv_op)
{
if (!(_locking.flags & LCK_PRE_MEMLOCK))
@@ -182,6 +59,13 @@ static void _unlock_memory(struct cmd_context *cmd, lv_operation_t lv_op)
critical_section_dec(cmd, "unlocking on resume");
}
static void _unblock_signals(void)
{
/* Don't unblock signals while any locks are held */
if (!_vg_lock_count)
unblock_signals();
}
void reset_locking(void)
{
int was_locked = _vg_lock_count;
@@ -284,6 +168,11 @@ int init_locking(int type, struct cmd_context *cmd, int suppress_messages)
break;
return 1;
case 5:
init_dummy_locking(&_locking, cmd, suppress_messages);
log_verbose("Locking disabled for read-only access.");
return 1;
default:
log_error("Unknown locking type requested.");
return 0;
@@ -361,7 +250,7 @@ static int _lock_vol(struct cmd_context *cmd, const char *resource,
uint32_t lck_scope = flags & LCK_SCOPE_MASK;
int ret = 0;
_block_signals(flags);
block_signals(flags);
_lock_memory(cmd, lv_op);
assert(resource);
@@ -429,6 +318,8 @@ int lock_vol(struct cmd_context *cmd, const char *vol, uint32_t flags, struct lo
}
switch (flags & LCK_SCOPE_MASK) {
case LCK_ACTIVATION:
break;
case LCK_VG:
if (!_blocking_supported)
flags |= LCK_NONBLOCK;


@@ -86,9 +86,10 @@ int check_lvm1_vg_inactive(struct cmd_context *cmd, const char *vgname);
/*
* Lock scope
*/
#define LCK_SCOPE_MASK 0x00000008U
#define LCK_VG 0x00000000U
#define LCK_LV 0x00000008U
#define LCK_SCOPE_MASK 0x00001008U
#define LCK_VG 0x00000000U /* Volume Group */
#define LCK_LV 0x00000008U /* Logical Volume */
#define LCK_ACTIVATION 0x00001000U /* Activation */
/*
* Lock bits.
@@ -131,6 +132,9 @@ int check_lvm1_vg_inactive(struct cmd_context *cmd, const char *vgname);
*/
#define LCK_NONE (LCK_VG | LCK_NULL)
#define LCK_ACTIVATE_LOCK (LCK_ACTIVATION | LCK_WRITE | LCK_HOLD)
#define LCK_ACTIVATE_UNLOCK (LCK_ACTIVATION | LCK_UNLOCK)
#define LCK_VG_READ (LCK_VG | LCK_READ | LCK_HOLD)
#define LCK_VG_WRITE (LCK_VG | LCK_WRITE | LCK_HOLD)
#define LCK_VG_UNLOCK (LCK_VG | LCK_UNLOCK)
@@ -161,6 +165,33 @@ int check_lvm1_vg_inactive(struct cmd_context *cmd, const char *vgname);
lock_vol(cmd, (lv)->lvid.s, flags | LCK_LV_CLUSTERED(lv), lv) : \
0)
/*
* Activation locks are wrapped around activation commands that have to
* be processed atomically one-at-a-time.
* If a VG WRITE lock is held, an activation lock is redundant.
*
* FIXME Test and support this for thin and cache types.
* FIXME Add cluster support.
*/
#define lv_supports_activation_locking(lv) (!vg_is_clustered((lv)->vg) && !lv_is_thin_type(lv) && !lv_is_cache_type(lv))
#define lock_activation(cmd, lv) (vg_write_lock_held() && lv_supports_activation_locking(lv) ? 1 : lock_vol(cmd, (lv)->lvid.s, LCK_ACTIVATE_LOCK, lv))
#define unlock_activation(cmd, lv) (vg_write_lock_held() && lv_supports_activation_locking(lv) ? 1 : lock_vol(cmd, (lv)->lvid.s, LCK_ACTIVATE_UNLOCK, lv))
/*
* Place temporary exclusive 'activation' lock around an LV locking operation
* to serialise it.
*/
#define lock_lv_vol_serially(cmd, lv, flags) \
({ \
int rr = 0; \
\
if (lock_activation((cmd), (lv))) { \
rr = lock_lv_vol((cmd), (lv), (flags)); \
unlock_activation((cmd), (lv)); \
} \
rr; \
})
#define unlock_vg(cmd, vol) \
do { \
if (is_real_vg(vol)) \
@@ -173,16 +204,28 @@ int check_lvm1_vg_inactive(struct cmd_context *cmd, const char *vgname);
release_vg(vg); \
} while (0)
#define resume_lv(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_RESUME)
#define resume_lv(cmd, lv) \
({ \
int rr = lock_lv_vol((cmd), (lv), LCK_LV_RESUME); \
unlock_activation((cmd), (lv)); \
rr; \
})
#define resume_lv_origin(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_RESUME | LCK_ORIGIN_ONLY)
#define revert_lv(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_RESUME | LCK_REVERT)
#define suspend_lv(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_SUSPEND | LCK_HOLD)
#define revert_lv(cmd, lv) \
({ \
int rr = lock_lv_vol((cmd), (lv), LCK_LV_RESUME | LCK_REVERT); \
\
unlock_activation((cmd), (lv)); \
rr; \
})
#define suspend_lv(cmd, lv) \
(lock_activation((cmd), (lv)) ? lock_lv_vol((cmd), (lv), LCK_LV_SUSPEND | LCK_HOLD) : 0)
#define suspend_lv_origin(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_SUSPEND | LCK_HOLD | LCK_ORIGIN_ONLY)
#define deactivate_lv(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_DEACTIVATE)
#define deactivate_lv(cmd, lv) lock_lv_vol_serially(cmd, lv, LCK_LV_DEACTIVATE)
#define activate_lv(cmd, lv) lock_lv_vol(cmd, lv, LCK_LV_ACTIVATE | LCK_HOLD)
#define activate_lv(cmd, lv) lock_lv_vol_serially(cmd, lv, LCK_LV_ACTIVATE | LCK_HOLD)
#define activate_lv_excl_local(cmd, lv) \
lock_lv_vol(cmd, lv, LCK_LV_EXCLUSIVE | LCK_HOLD | LCK_LOCAL)
lock_lv_vol_serially(cmd, lv, LCK_LV_EXCLUSIVE | LCK_HOLD | LCK_LOCAL)
#define activate_lv_excl_remote(cmd, lv) \
lock_lv_vol(cmd, lv, LCK_LV_EXCLUSIVE | LCK_HOLD | LCK_REMOTE)
@@ -190,9 +233,9 @@ struct logical_volume;
int activate_lv_excl(struct cmd_context *cmd, struct logical_volume *lv);
#define activate_lv_local(cmd, lv) \
lock_lv_vol(cmd, lv, LCK_LV_ACTIVATE | LCK_HOLD | LCK_LOCAL)
lock_lv_vol_serially(cmd, lv, LCK_LV_ACTIVATE | LCK_HOLD | LCK_LOCAL)
#define deactivate_lv_local(cmd, lv) \
lock_lv_vol(cmd, lv, LCK_LV_DEACTIVATE | LCK_LOCAL)
lock_lv_vol_serially(cmd, lv, LCK_LV_DEACTIVATE | LCK_LOCAL)
#define drop_cached_metadata(vg) \
lock_vol((vg)->cmd, (vg)->name, LCK_VG_DROP_CACHE, NULL)
#define remote_commit_cached_metadata(vg) \
@@ -213,10 +256,4 @@ int resume_lvs(struct cmd_context *cmd, struct dm_list *lvs);
int revert_lvs(struct cmd_context *cmd, struct dm_list *lvs);
int activate_lvs(struct cmd_context *cmd, struct dm_list *lvs, unsigned exclusive);
/* Interrupt handling */
void sigint_clear(void);
void sigint_allow(void);
void sigint_restore(void);
int sigint_caught(void);
#endif
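The additions above boil down to a small bracket: LCK_ACTIVATION is a new lock scope (note LCK_SCOPE_MASK growing to 0x00001008 so the 0x1000 bit is part of the scope), and lock_lv_vol_serially() takes a short-lived exclusive activation lock around an LV locking call, skipping it when the VG write lock already serialises the command. The same bracket written as a plain function, purely for illustration (serialized_lv_op() and the callback are hypothetical names; the real callers use the macros above):

typedef int (*lv_op_fn)(struct cmd_context *cmd, struct logical_volume *lv);

/* Serialise one LV operation behind the per-LV activation lock. */
static int serialized_lv_op(struct cmd_context *cmd, struct logical_volume *lv,
                            lv_op_fn op)
{
        int r;

        if (!lock_activation(cmd, lv))  /* a no-op returning 1 under a VG write lock */
                return 0;

        r = op(cmd, lv);

        unlock_activation(cmd, lv);

        return r;
}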


@@ -38,8 +38,11 @@ struct locking_type {
/*
* Locking types
*/
int init_no_locking(struct locking_type *locking, struct cmd_context *cmd,
int suppress_messages);
void init_no_locking(struct locking_type *locking, struct cmd_context *cmd,
int suppress_messages);
void init_dummy_locking(struct locking_type *locking, struct cmd_context *cmd,
int suppress_messages);
int init_readonly_locking(struct locking_type *locking, struct cmd_context *cmd,
int suppress_messages);


@@ -37,6 +37,8 @@ static int _no_lock_resource(struct cmd_context *cmd, const char *resource,
uint32_t flags, struct logical_volume *lv)
{
switch (flags & LCK_SCOPE_MASK) {
case LCK_ACTIVATION:
break;
case LCK_VG:
if (!strcmp(resource, VG_SYNC_NAMES))
fs_unlock();
@@ -91,7 +93,7 @@ static int _readonly_lock_resource(struct cmd_context *cmd,
return _no_lock_resource(cmd, resource, flags, lv);
}
int init_no_locking(struct locking_type *locking, struct cmd_context *cmd __attribute__((unused)),
void init_no_locking(struct locking_type *locking, struct cmd_context *cmd __attribute__((unused)),
int suppress_messages)
{
locking->lock_resource = _no_lock_resource;
@@ -99,8 +101,6 @@ int init_no_locking(struct locking_type *locking, struct cmd_context *cmd __attr
locking->reset_locking = _no_reset_locking;
locking->fin_locking = _no_fin_locking;
locking->flags = LCK_CLUSTERED;
return 1;
}
int init_readonly_locking(struct locking_type *locking, struct cmd_context *cmd __attribute__((unused)),
@@ -114,3 +114,13 @@ int init_readonly_locking(struct locking_type *locking, struct cmd_context *cmd
return 1;
}
void init_dummy_locking(struct locking_type *locking, struct cmd_context *cmd __attribute__((unused)),
int suppress_messages)
{
locking->lock_resource = _readonly_lock_resource;
locking->query_resource = _no_query_resource;
locking->reset_locking = _no_reset_locking;
locking->fin_locking = _no_fin_locking;
locking->flags = LCK_CLUSTERED;
}


@@ -23,7 +23,7 @@
static FILE *_log_file;
static struct device _log_dev;
static struct str_list _log_dev_alias;
static struct dm_str_list _log_dev_alias;
static int _syslog = 0;
static int _log_to_file = 0;


@@ -15,28 +15,23 @@
#include "lib.h"
#include "metadata.h"
#include "locking.h"
#include "pv_map.h"
#include "lvm-string.h"
#include "toolcontext.h"
#include "lv_alloc.h"
#include "pv_alloc.h"
#include "display.h"
#include "segtype.h"
#include "archiver.h"
#include "activate.h"
#include "str_list.h"
#include "defaults.h"
#include "lvm-exec.h"
int update_cache_pool_params(struct volume_group *vg, unsigned attr,
int passed_args,
uint32_t data_extents, uint32_t extent_size,
int *chunk_size_calc_method, uint32_t *chunk_size,
thin_discards_t *discards,
uint64_t *pool_metadata_size, int *zero)
int passed_args, uint32_t data_extents,
uint64_t *pool_metadata_size,
int *chunk_size_calc_method, uint32_t *chunk_size)
{
uint64_t min_meta_size;
if (!(passed_args & PASS_ARG_CHUNK_SIZE))
*chunk_size = DEFAULT_CACHE_POOL_CHUNK_SIZE * 2;
if ((*chunk_size < DM_CACHE_MIN_DATA_BLOCK_SIZE) ||
(*chunk_size > DM_CACHE_MAX_DATA_BLOCK_SIZE)) {
log_error("Chunk size must be in the range %s to %s.",
@@ -46,8 +41,8 @@ int update_cache_pool_params(struct volume_group *vg, unsigned attr,
}
if (*chunk_size & (DM_CACHE_MIN_DATA_BLOCK_SIZE - 1)) {
log_error("Chunk size must be a multiple of %u sectors.",
DM_CACHE_MIN_DATA_BLOCK_SIZE);
log_error("Chunk size must be a multiple of %s.",
display_size(vg->cmd, DM_CACHE_MIN_DATA_BLOCK_SIZE));
return 0;
}
@@ -57,7 +52,7 @@ int update_cache_pool_params(struct volume_group *vg, unsigned attr,
* ... plus a good amount of padding (2x) to cover any
* policy hint data that may be added in the future.
*/
min_meta_size = 16 * (data_extents * vg->extent_size);
min_meta_size = (uint64_t)data_extents * vg->extent_size * 16;
min_meta_size /= *chunk_size; /* # of Bytes we need */
min_meta_size *= 2; /* plus some padding */
min_meta_size /= 512; /* in sectors */
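To make the sizing concrete: with 1 GiB of cache data (2097152 sectors) and a 64 KiB chunk size (128 sectors) the pool holds 16384 chunks; 16 bytes of metadata per chunk gives 262144 bytes, doubled for future policy hint data to 524288 bytes, i.e. a minimum metadata size of 1024 sectors (512 KiB). The added (uint64_t) cast is the functional part of this hunk: data_extents and extent_size are 32-bit, so the old 16 * (data_extents * extent_size) form overflows 32-bit arithmetic once the cached data exceeds 128 GiB.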
@@ -66,15 +61,18 @@ int update_cache_pool_params(struct volume_group *vg, unsigned attr,
if (!*pool_metadata_size)
*pool_metadata_size = min_meta_size;
if (*pool_metadata_size < min_meta_size) {
*pool_metadata_size = min_meta_size;
log_print("Increasing metadata device size to %"
PRIu64 " sectors", *pool_metadata_size);
}
if (*pool_metadata_size > (2 * DEFAULT_CACHE_POOL_MAX_METADATA_SIZE)) {
*pool_metadata_size = 2 * DEFAULT_CACHE_POOL_MAX_METADATA_SIZE;
log_print("Reducing metadata device size to %" PRIu64 " sectors",
*pool_metadata_size);
if (passed_args & PASS_ARG_POOL_METADATA_SIZE)
log_warn("WARNING: Maximum supported pool metadata size is %s.",
display_size(vg->cmd, *pool_metadata_size));
} else if (*pool_metadata_size < min_meta_size) {
if (passed_args & PASS_ARG_POOL_METADATA_SIZE)
log_warn("WARNING: Minimum supported pool metadata size is %s "
"(needs extra %s).",
display_size(vg->cmd, min_meta_size),
display_size(vg->cmd, min_meta_size - *pool_metadata_size));
*pool_metadata_size = min_meta_size;
}
return 1;
@@ -119,9 +117,8 @@ struct logical_volume *lv_cache_create(struct logical_volume *pool,
* The origin under the origin would become *_corig_corig
* before renaming the origin above to *_corig.
*/
log_error(INTERNAL_ERROR
"The origin, %s, cannot be of cache type",
origin->name);
log_error("Creating a cache LV from an existing cache LV is"
"not yet supported.");
return NULL;
}
@@ -178,7 +175,6 @@ static int _cleanup_orphan_lv(struct logical_volume *lv)
*/
int lv_cache_remove(struct logical_volume *cache_lv)
{
struct cmd_context *cmd = cache_lv->vg->cmd;
const char *policy_name;
uint64_t dirty_blocks;
struct lv_segment *cache_seg = first_seg(cache_lv);
@@ -228,14 +224,8 @@ int lv_cache_remove(struct logical_volume *cache_lv)
cache_seg->policy_argv = NULL;
/* update the kernel to put the cleaner policy in place */
if (!vg_write(cache_lv->vg))
return_0;
if (!suspend_lv(cmd, cache_lv))
return_0;
if (!vg_commit(cache_lv->vg))
return_0;
if (!resume_lv(cmd, cache_lv))
return_0;
if (lv_update_and_reload(cache_lv))
return_0;
}
//FIXME: use polling to do this...
@@ -259,7 +249,7 @@ int lv_cache_remove(struct logical_volume *cache_lv)
if (!remove_layer_from_lv(cache_lv, corigin_lv))
return_0;
if (!vg_write(cache_lv->vg))
if (!lv_update_and_reload(cache_lv))
return_0;
/*
@@ -267,20 +257,12 @@ int lv_cache_remove(struct logical_volume *cache_lv)
* - the top-level cache LV
* - the origin
* - the cache_pool _cdata and _cmeta
*/
if (!suspend_lv(cmd, cache_lv))
return_0;
if (!vg_commit(cache_lv->vg))
return_0;
/* resume_lv on this (former) cache LV will resume all */
/*
*
* resume_lv on this (former) cache LV will resume all
*
* FIXME: currently we can't easily avoid execution of
* blkid on resumed error device
*/
if (!resume_lv(cmd, cache_lv))
return_0;
/*
* cleanup orphan devices
@@ -302,3 +284,30 @@ int lv_cache_remove(struct logical_volume *cache_lv)
return 1;
}
int get_cache_mode(const char *str, uint32_t *flags)
{
if (!strcmp(str, "writethrough"))
*flags |= DM_CACHE_FEATURE_WRITETHROUGH;
else if (!strcmp(str, "writeback"))
*flags |= DM_CACHE_FEATURE_WRITEBACK;
else {
log_error("Cache pool cachemode \"%s\" is unknown.", str);
return 0;
}
return 1;
}
int lv_is_cache_origin(const struct logical_volume *lv)
{
struct lv_segment *seg;
/* Make sure there's exactly one segment in segs_using_this_lv! */
if (dm_list_empty(&lv->segs_using_this_lv) ||
(dm_list_size(&lv->segs_using_this_lv) > 1))
return 0;
seg = get_only_segment_using_this_lv(lv);
return seg && lv_is_cache(seg->lv) && (seg_lv(seg, 0) == lv);
}
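Two hunks above replace the open-coded metadata update in lv_cache_remove() - vg_write(), suspend_lv(), vg_commit(), resume_lv(), each with its own error exit - by a single lv_update_and_reload() call. The sequence being consolidated, restated as a sketch straight from the removed lines (the helper is assumed to perform the equivalent steps plus its own error unwinding, which this sketch does not reproduce):

static int update_and_reload_sketch(struct cmd_context *cmd, struct logical_volume *lv)
{
        if (!vg_write(lv->vg))
                return_0;

        if (!suspend_lv(cmd, lv))
                return_0;

        if (!vg_commit(lv->vg))
                return_0;

        if (!resume_lv(cmd, lv))
                return_0;

        return 1;
}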


@@ -146,15 +146,15 @@ char *lvseg_monitor_dup(struct dm_pool *mem, const struct lv_segment *seg)
segm = first_seg(seg->log_lv);
// log_debug("Query LV:%s mon:%s segm:%s tgtm:%p segmon:%d statusm:%d", seg->lv->name, segm->lv->name, segm->segtype->name, segm->segtype->ops->target_monitored, seg_monitored(segm), (int)(segm->status & PVMOVE));
if (!segm->segtype->ops->target_monitored)
if ((dmeventd_monitor_mode() != 1) ||
!segm->segtype->ops->target_monitored)
/* Nothing to do, monitoring not supported */;
else if (lv_is_cow_covering_origin(seg->lv))
/* Nothing to do, snapshot already covers origin */;
else if (!seg_monitored(segm) || (segm->status & PVMOVE))
s = "not monitored";
else if (lv_info(seg->lv->vg->cmd, seg->lv, 1, &info, 0, 0) && info.exists) {
monitored = segm->segtype->ops->
target_monitored((struct lv_segment*)segm, &pending);
monitored = segm->segtype->ops->target_monitored(segm, &pending);
if (pending)
s = "pending";
else
@@ -170,7 +170,7 @@ uint64_t lvseg_chunksize(const struct lv_segment *seg)
if (lv_is_cow(seg->lv))
size = (uint64_t) find_snapshot(seg->lv)->chunk_size;
else if (seg_is_thin_pool(seg) || seg_is_cache_pool(seg))
else if (seg_is_pool(seg))
size = (uint64_t) seg->chunk_size;
else
size = UINT64_C(0);
@@ -219,6 +219,43 @@ char *lv_name_dup(struct dm_pool *mem, const struct logical_volume *lv)
return dm_pool_strdup(mem, lv->name);
}
char *lv_fullname_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
char lvfullname[NAME_LEN * 2 + 2];
if (dm_snprintf(lvfullname, sizeof(lvfullname), "%s/%s", lv->vg->name, lv->name) < 0) {
log_error("lvfullname snprintf failed");
return NULL;
}
return dm_pool_strdup(mem, lvfullname);
}
struct logical_volume *lv_parent(const struct logical_volume *lv)
{
struct logical_volume *parent_lv = NULL;
if (lv_is_visible(lv))
;
else if (lv_is_mirror_image(lv) || lv_is_mirror_log(lv))
parent_lv = get_only_segment_using_this_lv(lv)->lv;
else if (lv_is_raid_image(lv) || lv_is_raid_metadata(lv))
parent_lv = get_only_segment_using_this_lv(lv)->lv;
else if (lv_is_cache_pool_data(lv) || lv_is_cache_pool_metadata(lv))
parent_lv = get_only_segment_using_this_lv(lv)->lv;
else if (lv_is_thin_pool_data(lv) || lv_is_thin_pool_metadata(lv))
parent_lv = get_only_segment_using_this_lv(lv)->lv;
return parent_lv;
}
char *lv_parent_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
struct logical_volume *parent_lv = lv_parent(lv);
return dm_pool_strdup(mem, parent_lv ? parent_lv->name : "");
}
char *lv_modules_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
struct dm_list *modules;
@@ -303,7 +340,7 @@ char *lv_convert_lv_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
struct lv_segment *seg;
if (lv->status & (CONVERTING|MIRRORED)) {
if (lv_is_converting(lv) || lv_is_mirrored(lv)) {
seg = first_seg(lv);
/* Temporary mirror is always area_num == 0 */
@@ -316,11 +353,26 @@ char *lv_convert_lv_dup(struct dm_pool *mem, const struct logical_volume *lv)
char *lv_move_pv_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
struct logical_volume *mimage0_lv;
struct lv_segment *seg;
const struct device *dev;
dm_list_iterate_items(seg, &lv->segments)
if (seg->status & PVMOVE)
return dm_pool_strdup(mem, dev_name(seg_dev(seg, 0)));
dm_list_iterate_items(seg, &lv->segments) {
if (seg->status & PVMOVE) {
if (seg_type(seg, 0) == AREA_LV) { /* atomic pvmove */
mimage0_lv = seg_lv(seg, 0);
if (!lv_is_mirror_image(mimage0_lv)) {
log_error(INTERNAL_ERROR
"Bad pvmove structure");
return NULL;
}
dev = seg_dev(first_seg(mimage0_lv), 0);
} else /* Segment pvmove */
dev = seg_dev(seg, 0);
return dm_pool_strdup(mem, dev_name(dev));
}
}
return NULL;
}
@@ -355,7 +407,8 @@ char *lv_path_dup(struct dm_pool *mem, const struct logical_volume *lv)
char *repstr;
size_t len;
if (!*lv->vg->name)
/* Only for visible devices that get a link from /dev/vg */
if (!*lv->vg->name || !lv_is_visible(lv) || lv_is_thin_pool(lv))
return dm_pool_strdup(mem, "");
len = strlen(lv->vg->cmd->dev_dir) + strlen(lv->vg->name) +
@@ -363,13 +416,42 @@ char *lv_path_dup(struct dm_pool *mem, const struct logical_volume *lv)
if (!(repstr = dm_pool_zalloc(mem, len))) {
log_error("dm_pool_alloc failed");
return 0;
return NULL;
}
if (dm_snprintf(repstr, len, "%s%s/%s",
lv->vg->cmd->dev_dir, lv->vg->name, lv->name) < 0) {
log_error("lvpath snprintf failed");
return 0;
return NULL;
}
return repstr;
}
char *lv_dmpath_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
char *name;
char *repstr;
size_t len;
if (!*lv->vg->name)
return dm_pool_strdup(mem, "");
if (!(name = dm_build_dm_name(mem, lv->vg->name, lv->name, NULL))) {
log_error("dm_build_dm_name failed");
return NULL;
}
len = strlen(dm_dir()) + strlen(name) + 2;
if (!(repstr = dm_pool_zalloc(mem, len))) {
log_error("dm_pool_alloc failed");
return NULL;
}
if (dm_snprintf(repstr, len, "%s/%s", dm_dir(), name) < 0) {
log_error("lv_dmpath snprintf failed");
return NULL;
}
return repstr;
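
The new lv_dmpath_dup() composes the /dev/mapper path from the VG and LV names via dm_build_dm_name(). The stand-alone sketch below illustrates the usual device-mapper name-mangling convention (VG and LV joined by '-', with hyphens inside either name doubled); it is an illustration of the convention under that assumption, not the libdevmapper implementation.

#include <stdio.h>
#include <string.h>

/* Append src to dst, doubling any '-' so it cannot be mistaken
 * for the VG/LV separator. */
static void append_escaped(char *dst, const char *src)
{
        while (*src) {
                if (*src == '-')
                        strncat(dst, "-", 2);
                strncat(dst, src, 1);
                src++;
        }
}

int main(void)
{
        char name[256] = "/dev/mapper/";

        append_escaped(name, "vg-a");   /* VG name containing a hyphen */
        strcat(name, "-");              /* separator between VG and LV */
        append_escaped(name, "lv0");

        printf("%s\n", name);           /* /dev/mapper/vg--a-lv0 */
        return 0;
}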
@@ -390,9 +472,9 @@ uint64_t lv_size(const struct logical_volume *lv)
return lv->size;
}
static int _lv_mimage_in_sync(const struct logical_volume *lv)
int lv_mirror_image_in_sync(const struct logical_volume *lv)
{
percent_t percent;
dm_percent_t percent;
struct lv_segment *seg = first_seg(lv);
struct lv_segment *mirror_seg;
@@ -406,13 +488,13 @@ static int _lv_mimage_in_sync(const struct logical_volume *lv)
NULL))
return_0;
return (percent == PERCENT_100) ? 1 : 0;
return (percent == DM_PERCENT_100) ? 1 : 0;
}
static int _lv_raid_image_in_sync(const struct logical_volume *lv)
int lv_raid_image_in_sync(const struct logical_volume *lv)
{
unsigned s;
percent_t percent;
dm_percent_t percent;
char *raid_health;
struct lv_segment *seg, *raid_seg = NULL;
@@ -423,7 +505,7 @@ static int _lv_raid_image_in_sync(const struct logical_volume *lv)
if (!lv_is_active_locally(lv))
return 0; /* Assume not in-sync */
if (!(lv->status & RAID_IMAGE)) {
if (!lv_is_raid_image(lv)) {
log_error(INTERNAL_ERROR "%s is not a RAID image", lv->name);
return 0;
}
@@ -444,7 +526,7 @@ static int _lv_raid_image_in_sync(const struct logical_volume *lv)
if (!lv_raid_percent(raid_seg->lv, &percent))
return_0;
if (percent == PERCENT_100)
if (percent == DM_PERCENT_100)
return 1;
/* Find out which sub-LV this is. */
@@ -473,7 +555,7 @@ static int _lv_raid_image_in_sync(const struct logical_volume *lv)
*
* Returns: 1 if healthy, 0 if device is not health
*/
static int _lv_raid_healthy(const struct logical_volume *lv)
int lv_raid_healthy(const struct logical_volume *lv)
{
unsigned s;
char *raid_health;
@@ -491,7 +573,7 @@ static int _lv_raid_healthy(const struct logical_volume *lv)
return 0;
}
if (lv->status & RAID)
if (lv_is_raid(lv))
raid_seg = first_seg(lv);
else if ((seg = first_seg(lv)))
raid_seg = get_only_segment_using_this_lv(seg->lv);
@@ -510,7 +592,7 @@ static int _lv_raid_healthy(const struct logical_volume *lv)
if (!lv_raid_dev_health(raid_seg->lv, &raid_health))
return_0;
if (lv->status & RAID) {
if (lv_is_raid(lv)) {
if (strchr(raid_health, 'D'))
return 0;
else
@@ -519,8 +601,8 @@ static int _lv_raid_healthy(const struct logical_volume *lv)
/* Find out which sub-LV this is. */
for (s = 0; s < raid_seg->area_count; s++)
if (((lv->status & RAID_IMAGE) && (seg_lv(raid_seg, s) == lv)) ||
((lv->status & RAID_META) && (seg_metalv(raid_seg,s) == lv)))
if ((lv_is_raid_image(lv) && (seg_lv(raid_seg, s) == lv)) ||
(lv_is_raid_metadata(lv) && (seg_metalv(raid_seg,s) == lv)))
break;
if (s == raid_seg->area_count) {
log_error(INTERNAL_ERROR
@@ -537,7 +619,7 @@ static int _lv_raid_healthy(const struct logical_volume *lv)
char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
percent_t snap_percent;
dm_percent_t snap_percent;
struct lvinfo info;
struct lv_segment *seg;
char *repstr;
@@ -551,52 +633,52 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
if (!*lv->name)
goto out;
if (lv->status & PVMOVE)
if (lv_is_pvmove(lv))
repstr[0] = 'p';
else if (lv->status & CONVERTING)
repstr[0] = 'c';
/* Origin takes precedence over mirror and thin volume */
else if (lv_is_origin(lv) || lv_is_external_origin(lv))
repstr[0] = (lv_is_merging_origin(lv)) ? 'O' : 'o';
else if (lv_is_cache_pool_metadata(lv))
else if (lv_is_pool_metadata(lv) ||
lv_is_pool_metadata_spare(lv) ||
lv_is_raid_metadata(lv))
repstr[0] = 'e';
else if (lv_is_cache_type(lv))
repstr[0] = 'C';
else if (lv_is_thin_pool_metadata(lv) ||
lv_is_pool_metadata_spare(lv) ||
(lv->status & RAID_META))
repstr[0] = 'e';
else if (lv->status & RAID)
else if (lv_is_raid(lv))
repstr[0] = (lv->status & LV_NOTSYNCED) ? 'R' : 'r';
else if (lv->status & MIRRORED)
else if (lv_is_mirror(lv))
repstr[0] = (lv->status & LV_NOTSYNCED) ? 'M' : 'm';
else if (lv_is_thin_volume(lv))
repstr[0] = lv_is_merging_origin(lv) ?
'O' : (lv_is_merging_thin_snapshot(lv) ? 'S' : 'V');
else if (lv->status & VIRTUAL)
else if (lv_is_virtual(lv))
repstr[0] = 'v';
else if (lv_is_thin_pool(lv))
repstr[0] = 't';
else if (lv_is_thin_pool_data(lv))
repstr[0] = 'T';
else if (lv->status & MIRROR_IMAGE)
repstr[0] = (_lv_mimage_in_sync(lv)) ? 'i' : 'I';
else if (lv->status & RAID_IMAGE)
else if (lv_is_mirror_image(lv))
repstr[0] = (lv_mirror_image_in_sync(lv)) ? 'i' : 'I';
else if (lv_is_raid_image(lv))
/*
* Visible RAID_IMAGES are sub-LVs that have been exposed for
* top-level use by being split from the RAID array with
* '--splitmirrors 1 --trackchanges'. They always report 'I'.
*/
repstr[0] = (!lv_is_visible(lv) && _lv_raid_image_in_sync(lv)) ?
repstr[0] = (!lv_is_visible(lv) && lv_raid_image_in_sync(lv)) ?
'i' : 'I';
else if (lv->status & MIRROR_LOG)
else if (lv_is_mirror_log(lv))
repstr[0] = 'l';
else if (lv_is_cow(lv))
repstr[0] = (lv_is_merging_cow(lv)) ? 'S' : 's';
else if (lv_is_cache_origin(lv))
repstr[0] = 'o';
else
repstr[0] = '-';
if (lv->status & PVMOVE)
if (lv_is_pvmove(lv))
repstr[1] = '-';
else if (lv->status & LVM_WRITE)
repstr[1] = 'w';
@@ -607,12 +689,15 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
repstr[2] = alloc_policy_char(lv->alloc);
if (lv->status & LOCKED)
if (lv_is_locked(lv))
repstr[2] = toupper(repstr[2]);
repstr[3] = (lv->status & FIXED_MINOR) ? 'm' : '-';
if (lv_info(lv->vg->cmd, lv, 0, &info, 1, 0) && info.exists) {
if (!activation() || !lv_info(lv->vg->cmd, lv, 0, &info, 1, 0)) {
repstr[4] = 'X'; /* Unknown */
repstr[5] = 'X'; /* Unknown */
} else if (info.exists) {
if (info.suspended)
repstr[4] = 's'; /* Suspended */
else if (info.live_table)
@@ -625,13 +710,13 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
/* Snapshot dropped? */
if (info.live_table && lv_is_cow(lv)) {
if (!lv_snapshot_percent(lv, &snap_percent) ||
snap_percent == PERCENT_INVALID) {
snap_percent == DM_PERCENT_INVALID) {
if (info.suspended)
repstr[4] = 'S'; /* Susp Inv snapshot */
else
repstr[4] = 'I'; /* Invalid snapshot */
}
else if (snap_percent == PERCENT_MERGE_FAILED) {
else if (snap_percent == LVM_PERCENT_MERGE_FAILED) {
if (info.suspended)
repstr[4] = 'M'; /* Susp snapshot merge failed */
else
@@ -654,11 +739,11 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
if (lv_is_thin_pool(lv) || lv_is_thin_volume(lv))
repstr[6] = 't';
else if (lv_is_cache_type(lv))
else if (lv_is_cache_pool(lv) || lv_is_cache(lv) || lv_is_cache_origin(lv))
repstr[6] = 'C';
else if (lv_is_raid_type(lv))
repstr[6] = 'r';
else if (lv_is_mirror_type(lv))
else if (lv_is_mirror_type(lv) || lv_is_pvmove(lv))
repstr[6] = 'm';
else if (lv_is_cow(lv) || lv_is_origin(lv))
repstr[6] = 's';
@@ -681,9 +766,11 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv)
repstr[8] = 'p';
else if (lv_is_raid_type(lv)) {
uint64_t n;
if (!_lv_raid_healthy(lv))
if (!activation())
repstr[8] = 'X'; /* Unknown */
else if (!lv_raid_healthy(lv))
repstr[8] = 'r'; /* RAID needs 'r'efresh */
else if (lv->status & RAID) {
else if (lv_is_raid(lv)) {
if (lv_raid_mismatch_count(lv, &n) && n)
repstr[8] = 'm'; /* RAID has 'm'ismatches */
} else if (lv->status & LV_WRITEMOSTLY)
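
Several attribute positions above now fall back to 'X' when activation is disabled or the kernel state cannot be queried. The toy function below sketches that fallback pattern in isolation; the function name and the 'a'/'s' result characters are illustrative, not the full lv_attr_dup() logic.

#include <stdio.h>

static char state_char(int activation_enabled, int info_ok, int suspended)
{
        if (!activation_enabled || !info_ok)
                return 'X';             /* Unknown: kernel state cannot be queried */
        return suspended ? 's' : 'a';   /* Suspended or active */
}

int main(void)
{
        printf("%c %c\n", state_char(0, 0, 0), state_char(1, 1, 1)); /* prints: X s */
        return 0;
}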
@@ -756,8 +843,8 @@ char *lv_host_dup(struct dm_pool *mem, const struct logical_volume *lv)
static int _lv_is_exclusive(struct logical_volume *lv)
{
/* Some devices require exlusivness */
return seg_is_raid(first_seg(lv)) ||
/* Some devices require exlusiveness */
return lv_is_raid(lv) ||
lv_is_origin(lv) ||
lv_is_thin_type(lv) ||
lv_is_cache_type(lv);
@@ -774,7 +861,7 @@ deactivate:
return_0;
break;
case CHANGE_ALN:
if (_lv_is_exclusive(lv)) {
if (vg_is_clustered(lv->vg) && _lv_is_exclusive(lv)) {
if (!lv_is_active_locally(lv)) {
log_error("Cannot deactivate remotely exclusive device locally.");
return 0;
@@ -823,6 +910,11 @@ char *lv_active_dup(struct dm_pool *mem, const struct logical_volume *lv)
{
const char *s;
if (!activation()) {
s = "unknown";
goto out;
}
if (vg_is_clustered(lv->vg)) {
//const struct logical_volume *lvo = lv;
lv = lv_lock_holder(lv);
@@ -840,7 +932,7 @@ char *lv_active_dup(struct dm_pool *mem, const struct logical_volume *lv)
else /* locally active */
s = lv_is_active_but_not_locally(lv) ?
"remotely" : "locally";
out:
return dm_pool_strdup(mem, s);
}


@@ -60,6 +60,7 @@ char *lv_attr_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_uuid_dup(const struct logical_volume *lv);
char *lv_tags_dup(const struct logical_volume *lv);
char *lv_path_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_dmpath_dup(struct dm_pool *mem, const struct logical_volume *lv);
uint64_t lv_origin_size(const struct logical_volume *lv);
char *lv_move_pv_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_convert_lv_dup(struct dm_pool *mem, const struct logical_volume *lv);
@@ -71,6 +72,9 @@ char *lv_metadata_lv_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_pool_lv_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_modules_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_name_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_fullname_dup(struct dm_pool *mem, const struct logical_volume *lv);
struct logical_volume *lv_parent(const struct logical_volume *lv);
char *lv_parent_dup(struct dm_pool *mem, const struct logical_volume *lv);
char *lv_origin_dup(struct dm_pool *mem, const struct logical_volume *lv);
uint32_t lv_kernel_read_ahead(const struct logical_volume *lv);
uint64_t lvseg_start(const struct lv_segment *seg);
@@ -94,4 +98,7 @@ const struct logical_volume *lv_lock_holder(const struct logical_volume *lv);
struct logical_volume *lv_ondisk(struct logical_volume *lv);
struct profile *lv_config_profile(const struct logical_volume *lv);
char *lv_profile_dup(struct dm_pool *mem, const struct logical_volume *lv);
int lv_mirror_image_in_sync(const struct logical_volume *lv);
int lv_raid_image_in_sync(const struct logical_volume *lv);
int lv_raid_healthy(const struct logical_volume *lv);
#endif /* _LVM_LV_H */


@@ -68,6 +68,9 @@ int lv_add_segment(struct alloc_handle *ah,
int lv_add_mirror_areas(struct alloc_handle *ah,
struct logical_volume *lv, uint32_t le,
uint32_t region_size);
int lv_add_segmented_mirror_image(struct alloc_handle *ah,
struct logical_volume *lv, uint32_t le,
uint32_t region_size);
int lv_add_mirror_lvs(struct logical_volume *lv,
struct logical_volume **sub_lvs,
uint32_t num_extra_areas,
@@ -83,6 +86,7 @@ int lv_add_virtual_segment(struct logical_volume *lv, uint64_t status,
void alloc_destroy(struct alloc_handle *ah);
struct dm_list *build_parallel_areas_from_lv(struct logical_volume *lv,
unsigned use_pvmove_parent_lv);
unsigned use_pvmove_parent_lv,
unsigned create_single_list);
#endif

File diff suppressed because it is too large


@@ -38,9 +38,19 @@ static int _merge(struct lv_segment *first, struct lv_segment *second)
int lv_merge_segments(struct logical_volume *lv)
{
struct dm_list *segh, *t;
struct lv_segment *current, *prev = NULL;
struct lv_segment *seg, *current, *prev = NULL;
if (lv->status & LOCKED || lv->status & PVMOVE)
/*
* Don't interfere with pvmoves as they rely upon two LVs
* having a matching segment structure.
*/
if (lv_is_locked(lv) || lv_is_pvmove(lv))
return 1;
if (lv_is_mirror_image(lv) &&
(seg = get_only_segment_using_this_lv(lv)) &&
(lv_is_locked(seg->lv) || lv_is_pvmove(seg->lv)))
return 1;
dm_list_iterate_safe(segh, t, &lv->segments) {
@@ -149,7 +159,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
* Check mirror log - which is attached to the mirrored seg
*/
if (complete_vg && seg->log_lv && seg_is_mirrored(seg)) {
if (!(seg->log_lv->status & MIRROR_LOG)) {
if (!lv_is_mirror_log(seg->log_lv)) {
log_error("LV %s: segment %u log LV %s is not "
"a mirror log",
lv->name, seg_count, seg->log_lv->name);
@@ -191,38 +201,26 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
}
}
if (seg_is_thin_pool(seg) || seg_is_cache_pool(seg)) {
if (seg_is_pool(seg)) {
if (seg->area_count != 1 ||
seg_type(seg, 0) != AREA_LV) {
log_error("LV %s: %spool segment %u is missing a pool data LV",
lv->name,
seg_is_thin_pool(seg) ?
"thin " : "cache",
seg_count);
log_error("LV %s: %s segment %u is missing a pool data LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg_lv(seg, 0))) || find_pool_seg(seg2) != seg) {
log_error("LV %s: %spool segment %u data LV does not refer back to pool LV",
lv->name,
seg_is_thin_pool(seg) ?
"thin " : "cache",
seg_count);
log_error("LV %s: %s segment %u data LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
if (!seg->metadata_lv) {
log_error("LV %s: %spool segment %u is missing a pool metadata LV",
lv->name,
seg_is_thin_pool(seg) ?
"thin " : "cache",
seg_count);
log_error("LV %s: %s segment %u is missing a pool metadata LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
} else if (!(seg2 = first_seg(seg->metadata_lv)) ||
find_pool_seg(seg2) != seg) {
log_error("LV %s: %spool segment %u metadata LV does not refer back to pool LV",
lv->name,
seg_is_thin_pool(seg) ?
"thin " : "cache",
seg_count);
log_error("LV %s: %s segment %u metadata LV does not refer back to pool LV",
lv->name, seg->segtype->name, seg_count);
inc_error_count;
}
@@ -232,11 +230,8 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
(seg_is_cache_pool(seg) &&
((seg->chunk_size < DM_CACHE_MIN_DATA_BLOCK_SIZE) ||
(seg->chunk_size > DM_CACHE_MAX_DATA_BLOCK_SIZE)))) {
log_error("LV %s: %spool segment %u has chunk size %u out of range.",
lv->name,
seg_is_thin_pool(seg) ?
"thin " : "cache",
seg_count, seg->chunk_size);
log_error("LV %s: %s segment %u has chunk size %u out of range.",
lv->name, seg->segtype->name, seg_count, seg->chunk_size);
inc_error_count;
}
} else {
@@ -351,7 +346,7 @@ int check_lv_segments(struct logical_volume *lv, int complete_vg)
}
if (complete_vg && seg_lv(seg, s) &&
(seg_lv(seg, s)->status & MIRROR_IMAGE) &&
lv_is_mirror_image(seg_lv(seg, s)) &&
(!(seg2 = find_seg_by_le(seg_lv(seg, s),
seg_le(seg, s))) ||
find_mirror_seg(seg2) != seg)) {


@@ -42,62 +42,63 @@
/* Various flags */
/* Note that the bits no longer necessarily correspond to LVM1 disk format */
#define PARTIAL_VG UINT64_C(0x00000001) /* VG */
#define EXPORTED_VG UINT64_C(0x00000002) /* VG PV */
#define RESIZEABLE_VG UINT64_C(0x00000004) /* VG */
#define PARTIAL_VG UINT64_C(0x0000000000000001) /* VG */
#define EXPORTED_VG UINT64_C(0x0000000000000002) /* VG PV */
#define RESIZEABLE_VG UINT64_C(0x0000000000000004) /* VG */
/* May any free extents on this PV be used or must they be left free? */
#define ALLOCATABLE_PV UINT64_C(0x00000008) /* PV */
#define ALLOCATABLE_PV UINT64_C(0x0000000000000008) /* PV */
#define ARCHIVED_VG ALLOCATABLE_PV /* VG, reuse same bit */
//#define SPINDOWN_LV UINT64_C(0x00000010) /* LV */
//#define BADBLOCK_ON UINT64_C(0x00000020) /* LV */
#define VISIBLE_LV UINT64_C(0x00000040) /* LV */
#define FIXED_MINOR UINT64_C(0x00000080) /* LV */
//#define SPINDOWN_LV UINT64_C(0x0000000000000010) /* LV */
//#define BADBLOCK_ON UINT64_C(0x0000000000000020) /* LV */
#define VISIBLE_LV UINT64_C(0x0000000000000040) /* LV */
#define FIXED_MINOR UINT64_C(0x0000000000000080) /* LV */
#define LVM_READ UINT64_C(0x00000100) /* LV, VG */
#define LVM_WRITE UINT64_C(0x00000200) /* LV, VG */
#define LVM_READ UINT64_C(0x0000000000000100) /* LV, VG */
#define LVM_WRITE UINT64_C(0x0000000000000200) /* LV, VG */
#define CLUSTERED UINT64_C(0x00000400) /* VG */
//#define SHARED UINT64_C(0x00000800) /* VG */
#define CLUSTERED UINT64_C(0x0000000000000400) /* VG */
//#define SHARED UINT64_C(0x0000000000000800) /* VG */
/* FIXME Remove when metadata restructuring is completed */
#define SNAPSHOT UINT64_C(0x00001000) /* LV - internal use only */
#define PVMOVE UINT64_C(0x00002000) /* VG LV SEG */
#define LOCKED UINT64_C(0x00004000) /* LV */
#define MIRRORED UINT64_C(0x00008000) /* LV - internal use only */
//#define VIRTUAL UINT64_C(0x00010000) /* LV - internal use only */
#define MIRROR_LOG UINT64_C(0x00020000) /* LV */
#define MIRROR_IMAGE UINT64_C(0x00040000) /* LV */
#define SNAPSHOT UINT64_C(0x0000000000001000) /* LV - internal use only */
#define PVMOVE UINT64_C(0x0000000000002000) /* VG LV SEG */
#define LOCKED UINT64_C(0x0000000000004000) /* LV */
#define MIRRORED UINT64_C(0x0000000000008000) /* LV - internal use only */
//#define VIRTUAL UINT64_C(0x0000000000010000) /* LV - internal use only */
#define MIRROR UINT64_C(0x0002000000000000) /* LV - Internal use only */
#define MIRROR_LOG UINT64_C(0x0000000000020000) /* LV - Internal use only */
#define MIRROR_IMAGE UINT64_C(0x0000000000040000) /* LV - Internal use only */
#define LV_NOTSYNCED UINT64_C(0x00080000) /* LV */
#define LV_REBUILD UINT64_C(0x00100000) /* LV */
//#define PRECOMMITTED UINT64_C(0x00200000) /* VG - internal use only */
#define CONVERTING UINT64_C(0x00400000) /* LV */
#define LV_NOTSYNCED UINT64_C(0x0000000000080000) /* LV */
#define LV_REBUILD UINT64_C(0x0000000000100000) /* LV */
//#define PRECOMMITTED UINT64_C(0x0000000000200000) /* VG - internal use only */
#define CONVERTING UINT64_C(0x0000000000400000) /* LV */
#define MISSING_PV UINT64_C(0x00800000) /* PV */
#define PARTIAL_LV UINT64_C(0x01000000) /* LV - derived flag, not
#define MISSING_PV UINT64_C(0x0000000000800000) /* PV */
#define PARTIAL_LV UINT64_C(0x0000000001000000) /* LV - derived flag, not
written out in metadata*/
//#define POSTORDER_FLAG UINT64_C(0x02000000) /* Not real flags, reserved for
//#define POSTORDER_OPEN_FLAG UINT64_C(0x04000000) temporary use inside vg_read_internal. */
//#define VIRTUAL_ORIGIN UINT64_C(0x08000000) /* LV - internal use only */
//#define POSTORDER_FLAG UINT64_C(0x0000000002000000) /* Not real flags, reserved for
//#define POSTORDER_OPEN_FLAG UINT64_C(0x0000000004000000) temporary use inside vg_read_internal. */
//#define VIRTUAL_ORIGIN UINT64_C(0x0000000008000000) /* LV - internal use only */
#define MERGING UINT64_C(0x10000000) /* LV SEG */
#define MERGING UINT64_C(0x0000000010000000) /* LV SEG */
#define REPLICATOR UINT64_C(0x20000000) /* LV -internal use only for replicator */
#define REPLICATOR_LOG UINT64_C(0x40000000) /* LV -internal use only for replicator-dev */
#define UNLABELLED_PV UINT64_C(0x80000000) /* PV -this PV had no label written yet */
#define REPLICATOR UINT64_C(0x0000000020000000) /* LV -internal use only for replicator */
#define REPLICATOR_LOG UINT64_C(0x0000000040000000) /* LV -internal use only for replicator-dev */
#define UNLABELLED_PV UINT64_C(0x0000000080000000) /* PV -this PV had no label written yet */
#define RAID UINT64_C(0x0000000100000000) /* LV */
#define RAID_META UINT64_C(0x0000000200000000) /* LV */
#define RAID_IMAGE UINT64_C(0x0000000400000000) /* LV */
#define RAID UINT64_C(0x0000000100000000) /* LV - Internal use only */
#define RAID_META UINT64_C(0x0000000200000000) /* LV - Internal use only */
#define RAID_IMAGE UINT64_C(0x0000000400000000) /* LV - Internal use only */
#define THIN_VOLUME UINT64_C(0x0000001000000000) /* LV */
#define THIN_POOL UINT64_C(0x0000002000000000) /* LV */
#define THIN_POOL_DATA UINT64_C(0x0000004000000000) /* LV */
#define THIN_POOL_METADATA UINT64_C(0x0000008000000000) /* LV */
#define POOL_METADATA_SPARE UINT64_C(0x0000010000000000) /* LV internal */
#define THIN_VOLUME UINT64_C(0x0000001000000000) /* LV - Internal use only */
#define THIN_POOL UINT64_C(0x0000002000000000) /* LV - Internal use only */
#define THIN_POOL_DATA UINT64_C(0x0000004000000000) /* LV - Internal use only */
#define THIN_POOL_METADATA UINT64_C(0x0000008000000000) /* LV - Internal use only */
#define POOL_METADATA_SPARE UINT64_C(0x0000010000000000) /* LV - Internal use only */
#define LV_WRITEMOSTLY UINT64_C(0x0000020000000000) /* LV (RAID1) */
@@ -110,10 +111,12 @@
this flag dropped during single
LVM command execution. */
#define CACHE_POOL UINT64_C(0x0000200000000000) /* LV */
#define CACHE_POOL_DATA UINT64_C(0x0000400000000000) /* LV */
#define CACHE_POOL_METADATA UINT64_C(0x0000800000000000) /* LV */
#define CACHE UINT64_C(0x0001000000000000) /* LV */
#define CACHE_POOL UINT64_C(0x0000200000000000) /* LV - Internal use only */
#define CACHE_POOL_DATA UINT64_C(0x0000400000000000) /* LV - Internal use only */
#define CACHE_POOL_METADATA UINT64_C(0x0000800000000000) /* LV - Internal use only */
#define CACHE UINT64_C(0x0001000000000000) /* LV - Internal use only */
/* Next unused flag: UINT64_C(0x0004000000000000) */
/* Format features flags */
#define FMT_SEGMENTS 0x00000001U /* Arbitrary segment params? */
@@ -133,6 +136,8 @@
/* Mirror conversion type flags */
#define MIRROR_BY_SEG 0x00000001U /* segment-by-segment mirror */
#define MIRROR_BY_LV 0x00000002U /* mirror using whole mimage LVs */
#define MIRROR_BY_SEGMENTED_LV 0x00000004U /* mirror using whole mimage LVs that
* preserve the segment structure */
#define MIRROR_SKIP_INIT_SYNC 0x00000010U /* skip initial sync */
/* vg_read and vg_read_for_update flags */
@@ -160,29 +165,48 @@
#define vg_is_archived(vg) (((vg)->status & ARCHIVED_VG) ? 1 : 0)
#define lv_is_locked(lv) (((lv)->status & LOCKED) ? 1 : 0)
#define lv_is_virtual(lv) (((lv)->status & VIRTUAL) ? 1 : 0)
#define lv_is_merging(lv) (((lv)->status & MERGING) ? 1 : 0)
#define lv_is_converting(lv) (((lv)->status & CONVERTING) ? 1 : 0)
#define lv_is_external_origin(lv) (((lv)->external_count > 0) ? 1 : 0)
#define lv_is_thin_volume(lv) ((lv)->status & THIN_VOLUME ? 1 : 0)
#define lv_is_thin_pool(lv) ((lv)->status & THIN_POOL ? 1 : 0)
#define lv_is_used_thin_pool(lv) (lv_is_thin_pool(lv) && !dm_list_empty(&(lv)->segs_using_this_lv))
#define lv_is_thin_pool_data(lv) ((lv)->status & THIN_POOL_DATA ? 1 : 0)
#define lv_is_thin_pool_metadata(lv) ((lv)->status & THIN_POOL_METADATA ? 1 : 0)
#define lv_is_mirrored(lv) ((lv)->status & MIRRORED ? 1 : 0)
#define lv_is_rlog(lv) ((lv)->status & REPLICATOR_LOG ? 1 : 0)
#define lv_is_thin_type(lv) ((lv)->status & (THIN_POOL | THIN_VOLUME | THIN_POOL_DATA | THIN_POOL_METADATA) ? 1 : 0)
#define lv_is_mirror_type(lv) ((lv)->status & (MIRROR_LOG | MIRROR_IMAGE | MIRRORED | PVMOVE) ? 1 : 0)
#define lv_is_raid(lv) (((lv)->status & (RAID)) ? 1 : 0)
#define lv_is_thin_volume(lv) (((lv)->status & THIN_VOLUME) ? 1 : 0)
#define lv_is_thin_pool(lv) (((lv)->status & THIN_POOL) ? 1 : 0)
#define lv_is_used_thin_pool(lv) (lv_is_thin_pool(lv) && !dm_list_empty(&(lv)->segs_using_this_lv))
#define lv_is_thin_pool_data(lv) (((lv)->status & THIN_POOL_DATA) ? 1 : 0)
#define lv_is_thin_pool_metadata(lv) (((lv)->status & THIN_POOL_METADATA) ? 1 : 0)
#define lv_is_thin_type(lv) (((lv)->status & (THIN_POOL | THIN_VOLUME | THIN_POOL_DATA | THIN_POOL_METADATA)) ? 1 : 0)
#define lv_is_mirrored(lv) (((lv)->status & MIRRORED) ? 1 : 0)
#define lv_is_mirror_image(lv) (((lv)->status & MIRROR_IMAGE) ? 1 : 0)
#define lv_is_mirror_log(lv) (((lv)->status & MIRROR_LOG) ? 1 : 0)
#define lv_is_mirror(lv) (((lv)->status & MIRROR) ? 1 : 0)
#define lv_is_mirror_type(lv) (((lv)->status & (MIRROR | MIRROR_LOG | MIRROR_IMAGE)) ? 1 : 0)
#define lv_is_pvmove(lv) (((lv)->status & PVMOVE) ? 1 : 0)
#define lv_is_raid(lv) (((lv)->status & RAID) ? 1 : 0)
#define lv_is_raid_image(lv) (((lv)->status & RAID_IMAGE) ? 1 : 0)
#define lv_is_raid_metadata(lv) (((lv)->status & RAID_META) ? 1 : 0)
#define lv_is_raid_type(lv) (((lv)->status & (RAID | RAID_IMAGE | RAID_META)) ? 1 : 0)
#define lv_is_cache(lv) (((lv)->status & (CACHE)) ? 1 : 0)
#define lv_is_cache_pool(lv) (((lv)->status & (CACHE_POOL)) ? 1 : 0)
#define lv_is_cache_pool_data(lv) (((lv)->status & (CACHE_POOL_DATA)) ? 1 : 0)
#define lv_is_cache_pool_metadata(lv) (((lv)->status & (CACHE_POOL_METADATA)) ? 1 : 0)
#define lv_is_cache(lv) (((lv)->status & CACHE) ? 1 : 0)
#define lv_is_cache_pool(lv) (((lv)->status & CACHE_POOL) ? 1 : 0)
#define lv_is_cache_pool_data(lv) (((lv)->status & CACHE_POOL_DATA) ? 1 : 0)
#define lv_is_cache_pool_metadata(lv) (((lv)->status & CACHE_POOL_METADATA) ? 1 : 0)
#define lv_is_cache_type(lv) (((lv)->status & (CACHE | CACHE_POOL | CACHE_POOL_DATA | CACHE_POOL_METADATA)) ? 1 : 0)
#define lv_is_virtual(lv) (((lv)->status & VIRTUAL) ? 1 : 0)
#define lv_is_pool(lv) (((lv)->status & (CACHE_POOL | THIN_POOL)) ? 1 : 0)
#define lv_is_pool_metadata(lv) (((lv)->status & (CACHE_POOL_METADATA | THIN_POOL_METADATA)) ? 1 : 0)
#define lv_is_pool_metadata_spare(lv) (((lv)->status & POOL_METADATA_SPARE) ? 1 : 0)
#define lv_is_rlog(lv) (((lv)->status & REPLICATOR_LOG) ? 1 : 0)
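
All of these predicates mask a 64-bit status word and normalise the result with "? 1 : 0". One practical reason for that form (my assumption, not stated in the commit) is that returning the raw bitwise AND and letting it convert to int would drop flags held in the upper 32 bits, such as CACHE. A self-contained sketch with a toy flag value:

#include <stdint.h>
#include <stdio.h>

#define TOY_CACHE UINT64_C(0x0001000000000000)  /* illustrative high bit, like CACHE */

/* Raw AND: the interesting bit lives above bit 31, so converting the result
 * to int keeps only the low 32 bits on common platforms and yields 0. */
static int is_cache_raw(uint64_t status)  { return status & TOY_CACHE; }

/* Normalised form used by the lv_is_*() macros: always exactly 0 or 1. */
static int is_cache_norm(uint64_t status) { return (status & TOY_CACHE) ? 1 : 0; }

int main(void)
{
        uint64_t status = TOY_CACHE;

        printf("raw=%d normalised=%d\n", is_cache_raw(status), is_cache_norm(status));
        return 0;
}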
int lv_layout_and_role(struct dm_pool *mem, const struct logical_volume *lv,
struct dm_list **layout, struct dm_list **role);
/* Ordered list - see lv_manip.c */
typedef enum {
AREA_UNASSIGNED,
@@ -190,13 +214,11 @@ typedef enum {
AREA_LV
} area_type_t;
/*
* Whether or not to force an operation.
*/
/* Whether or not to force an operation */
typedef enum {
PROMPT = 0, /* Issue yes/no prompt to confirm operation */
DONT_PROMPT = 1, /* Skip yes/no prompt */
DONT_PROMPT_OVERRIDE = 2 /* Skip prompt + override a second condition */
DONT_PROMPT = 1, /* Add more prompts */
DONT_PROMPT_OVERRIDE = 2 /* Add even more dangerous prompts */
} force_t;
typedef enum {
@@ -467,7 +489,7 @@ struct pvcreate_params {
};
struct lvresize_params {
const char *vg_name;
const char *vg_name; /* only-used when VG is not yet opened (in /tools) */
const char *lv_name;
uint32_t stripes;
@@ -485,6 +507,7 @@ struct lvresize_params {
sign_t poolmetadatasign;
uint32_t poolmetadataextents;
int approx_alloc;
int extents_are_pes; /* Is 'extents' counting PEs or LEs? */
percent_type_t percent;
enum {
@@ -690,6 +713,10 @@ int lv_rename(struct cmd_context *cmd, struct logical_volume *lv,
int lv_rename_update(struct cmd_context *cmd, struct logical_volume *lv,
const char *new_name, int update_mda);
/* Updates and reloads metadata for given lv */
int lv_update_and_reload(struct logical_volume *lv);
int lv_update_and_reload_origin(struct logical_volume *lv);
uint64_t extents_from_size(struct cmd_context *cmd, uint64_t size,
uint32_t extent_size);
@@ -697,17 +724,25 @@ struct logical_volume *find_pool_lv(const struct logical_volume *lv);
int pool_is_active(const struct logical_volume *pool_lv);
int pool_supports_external_origin(const struct lv_segment *pool_seg, const struct logical_volume *external_lv);
int thin_pool_feature_supported(const struct logical_volume *pool_lv, int feature);
int recalculate_pool_chunk_size_with_dev_hints(struct logical_volume *pool_lv,
int passed_args,
int chunk_size_calc_policy);
int update_pool_lv(struct logical_volume *lv, int activate);
int update_pool_params(const struct segment_type *segtype,
struct volume_group *vg, unsigned target_attr,
int passed_args, uint32_t data_extents,
uint64_t *pool_metadata_size,
int *chunk_size_calc_policy, uint32_t *chunk_size,
thin_discards_t *discards, int *zero);
int update_profilable_pool_params(struct cmd_context *cmd, struct profile *profile,
int passed_args, int *chunk_size_calc_method,
uint32_t *chunk_size, thin_discards_t *discards,
int *zero);
int update_thin_pool_params(struct volume_group *vg, unsigned attr,
int passed_args,
uint32_t data_extents, uint32_t extent_size,
int passed_args, uint32_t data_extents,
uint64_t *pool_metadata_size,
int *chunk_size_calc_method, uint32_t *chunk_size,
thin_discards_t *discards,
uint64_t *pool_metadata_size, int *zero);
thin_discards_t *discards, int *zero);
int get_pool_discards(const char *str, thin_discards_t *discards);
const char *get_pool_discards_name(thin_discards_t discards);
struct logical_volume *alloc_pool_metadata(struct logical_volume *pool_lv,
@@ -776,7 +811,7 @@ struct lvcreate_params {
const char *origin; /* snap */
const char *pool; /* thin */
const char *vg_name; /* all */
const char *vg_name; /* only-used when VG is not yet opened (in /tools) */
const char *lv_name; /* all */
/* Keep args given by the user on command line */
@@ -893,12 +928,13 @@ int get_pv_list_for_lv(struct dm_pool *mem,
struct lv_segment *first_seg(const struct logical_volume *lv);
struct lv_segment *last_seg(const struct logical_volume *lv);
/*
* Useful functions for managing snapshots.
*/
int lv_is_origin(const struct logical_volume *lv);
int lv_is_virtual_origin(const struct logical_volume *lv);
int lv_is_thin_origin(const struct logical_volume *lv, unsigned *snapshot_count);
int lv_is_cache_origin(const struct logical_volume *lv);
int lv_is_cow(const struct logical_volume *lv);
int lv_is_merging_origin(const struct logical_volume *origin);
int lv_is_merging_cow(const struct logical_volume *snapshot);
@@ -1024,18 +1060,18 @@ int lv_raid_reshape(struct logical_volume *lv,
int lv_raid_replace(struct logical_volume *lv, struct dm_list *remove_pvs,
struct dm_list *allocate_pvs);
int lv_raid_remove_missing(struct logical_volume *lv);
int partial_raid_lv_supports_degraded_activation(struct logical_volume *lv);
/* -- metadata/raid_manip.c */
/* ++ metadata/cache_manip.c */
int update_cache_pool_params(struct volume_group *vg, unsigned attr,
int passed_args,
uint32_t data_extents, uint32_t extent_size,
int *chunk_size_calc_method, uint32_t *chunk_size,
thin_discards_t *discards,
uint64_t *pool_metadata_size, int *zero);
int passed_args, uint32_t data_extents,
uint64_t *pool_metadata_size,
int *chunk_size_calc_method, uint32_t *chunk_size);
struct logical_volume *lv_cache_create(struct logical_volume *pool,
struct logical_volume *origin);
int lv_cache_remove(struct logical_volume *cache_lv);
int get_cache_mode(const char *str, uint32_t *flags);
/* -- metadata/cache_manip.c */
struct cmd_vg *cmd_vg_add(struct dm_pool *mem, struct dm_list *cmd_vgs,
@@ -1065,7 +1101,7 @@ struct dm_list *lvs_using_lv(struct cmd_context *cmd, struct volume_group *vg,
struct logical_volume *lv);
uint32_t find_free_lvnum(struct logical_volume *lv);
percent_t copy_percent(const struct logical_volume *lv_mirr);
dm_percent_t copy_percent(const struct logical_volume *lv_mirr);
char *generate_lv_name(struct volume_group *vg, const char *format,
char *buffer, size_t len);


@@ -19,6 +19,7 @@
#include "toolcontext.h"
#include "lvm-string.h"
#include "lvm-file.h"
#include "lvm-signal.h"
#include "lvmcache.h"
#include "lvmetad.h"
#include "memlock.h"
@@ -360,14 +361,23 @@ out:
return r;
}
int move_pv(struct volume_group *vg_from, struct volume_group *vg_to,
const char *pv_name)
static int _move_pv(struct volume_group *vg_from, struct volume_group *vg_to,
const char *pv_name, int enforce_pv_from_source)
{
struct physical_volume *pv;
struct pv_list *pvl;
/* FIXME: handle tags */
if (!(pvl = find_pv_in_vg(vg_from, pv_name))) {
if (!enforce_pv_from_source &&
find_pv_in_vg(vg_to, pv_name))
/*
* PV has already been moved. This can happen if an
* LV is being moved that has multiple sub-LVs on the
* same PV.
*/
return 1;
log_error("Physical volume %s not in volume group %s",
pv_name, vg_from->name);
return 0;
@@ -391,6 +401,12 @@ int move_pv(struct volume_group *vg_from, struct volume_group *vg_to,
return 1;
}
int move_pv(struct volume_group *vg_from, struct volume_group *vg_to,
const char *pv_name)
{
return _move_pv(vg_from, vg_to, pv_name, 1);
}
int move_pvs_used_by_lv(struct volume_group *vg_from,
struct volume_group *vg_to,
const char *lv_name)
@@ -418,8 +434,8 @@ int move_pvs_used_by_lv(struct volume_group *vg_from,
return_0;
for (s = 0; s < lvseg->area_count; s++) {
if (seg_type(lvseg, s) == AREA_PV) {
if (!move_pv(vg_from, vg_to,
pv_dev_name(seg_pv(lvseg, s))))
if (!_move_pv(vg_from, vg_to,
pv_dev_name(seg_pv(lvseg, s)), 0))
return_0;
} else if (seg_type(lvseg, s) == AREA_LV) {
lv = seg_lv(lvseg, s);
@@ -515,7 +531,7 @@ int remove_lvs_in_vg(struct cmd_context *cmd,
while ((lst = dm_list_first(&vg->lvs))) {
lvl = dm_list_item(lst, struct lv_list);
if (!lv_remove_with_dependencies(cmd, lvl->lv, force, 0))
return 0;
return_0;
}
return 1;
@@ -586,7 +602,7 @@ int vg_remove(struct volume_group *vg)
log_verbose("Removing physical volume \"%s\" from "
"volume group \"%s\"", pv_dev_name(pv), vg->name);
pv->vg_name = vg->fid->fmt->orphan_vg_name;
pv->status = ALLOCATABLE_PV;
pv->status &= ~ALLOCATABLE_PV;
if (!dev_get_size(pv_dev(pv), &pv->size)) {
log_error("%s: Couldn't get size.", pv_dev_name(pv));
@@ -669,8 +685,8 @@ static int vg_extend_single_pv(struct volume_group *vg, char *pv_name,
{
struct physical_volume *pv;
if (!(pv = find_pv_by_name(vg->cmd, pv_name, 1, 1)))
stack;
pv = find_pv_by_name(vg->cmd, pv_name, 1, 1);
if (!pv && !pp) {
log_error("%s not identified as an existing "
"physical volume", pv_name);
@@ -753,7 +769,7 @@ int vg_reduce(struct volume_group *vg, const char *pv_name)
}
log_error("Unable to remove physical volume '%s' from "
"volume group '%s'.", pv_name, vg->name);
"volume group '%s'.", pv_name, vg->name);
return 0;
}
@@ -1313,11 +1329,14 @@ int vg_split_mdas(struct cmd_context *cmd __attribute__((unused)),
* See if we may pvcreate on this device.
* 0 indicates we may not.
*/
static int pvcreate_check(struct cmd_context *cmd, const char *name,
struct pvcreate_params *pp)
static int _pvcreate_check(struct cmd_context *cmd, const char *name,
struct pvcreate_params *pp)
{
struct physical_volume *pv;
struct device *dev;
int r = 0;
int scan_needed = 0;
int filter_refresh_needed = 0;
/* FIXME Check partition type is LVM unless --force is given */
@@ -1329,7 +1348,7 @@ static int pvcreate_check(struct cmd_context *cmd, const char *name,
if (pv && !is_orphan(pv) && pp->force != DONT_PROMPT_OVERRIDE) {
log_error("Can't initialize physical volume \"%s\" of "
"volume group \"%s\" without -ff", name, pv_vg_name(pv));
goto bad;
goto out;
}
/* prompt */
@@ -1337,28 +1356,29 @@ static int pvcreate_check(struct cmd_context *cmd, const char *name,
yes_no_prompt("Really INITIALIZE physical volume \"%s\" of volume group \"%s\" [y/n]? ",
name, pv_vg_name(pv)) == 'n') {
log_error("%s: physical volume not initialized", name);
goto bad;
goto out;
}
if (sigint_caught())
goto_bad;
goto_out;
dev = dev_cache_get(name, cmd->filter);
/* Is there an md superblock here? */
/* FIXME: still possible issues here - rescan cache? */
if (!dev && md_filtering()) {
if (!refresh_filters(cmd))
goto_bad;
goto_out;
init_md_filtering(0);
dev = dev_cache_get(name, cmd->filter);
init_md_filtering(1);
scan_needed = 1;
}
if (!dev) {
log_error("Device %s not found (or ignored by filtering).", name);
goto bad;
goto out;
}
/*
@@ -1368,33 +1388,45 @@ static int pvcreate_check(struct cmd_context *cmd, const char *name,
/* FIXME Detect whether device-mapper itself is still using it */
log_error("Can't open %s exclusively. Mounted filesystem?",
name);
goto bad;
goto out;
}
if (!wipe_known_signatures(cmd, dev, name,
TYPE_LVM1_MEMBER | TYPE_LVM2_MEMBER,
0, pp->yes, pp->force)) {
log_error("Aborting pvcreate on %s.", name);
goto bad;
}
goto out;
} else
filter_refresh_needed = scan_needed = 1;
if (sigint_caught())
goto_bad;
goto_out;
if (pv && !is_orphan(pv) && pp->force) {
if (pv && !is_orphan(pv) && pp->force)
log_warn("WARNING: Forcing physical volume creation on "
"%s%s%s%s", name,
!is_orphan(pv) ? " of volume group \"" : "",
pv_vg_name(pv),
!is_orphan(pv) ? "\"" : "");
}
r = 1;
out:
if (filter_refresh_needed)
if (!refresh_filters(cmd)) {
stack;
r = 0;
}
if (scan_needed)
if (!lvmcache_label_scan(cmd, 2)) {
stack;
r = 0;
}
free_pv_fid(pv);
return 1;
bad:
free_pv_fid(pv);
return 0;
return r;
}
void pvcreate_params_set_defaults(struct pvcreate_params *pp)
@@ -1526,7 +1558,7 @@ struct physical_volume *pvcreate_vol(struct cmd_context *cmd, const char *pv_nam
}
}
if (!pvcreate_check(cmd, pv_name, pp))
if (!_pvcreate_check(cmd, pv_name, pp))
goto_bad;
if (sigint_caught())
@@ -1576,7 +1608,6 @@ static struct physical_volume *_alloc_pv(struct dm_pool *mem, struct device *dev
}
pv->dev = dev;
pv->status = ALLOCATABLE_PV;
dm_list_init(&pv->tags);
dm_list_init(&pv->segments);
@@ -1836,7 +1867,8 @@ struct physical_volume *find_pv_by_name(struct cmd_context *cmd,
lvmcache_seed_infos_from_lvmetad(cmd);
if (!(dev = dev_cache_get(pv_name, cmd->filter))) {
log_error("Physical volume %s not found", pv_name);
if (!allow_unformatted)
log_error("Physical volume %s not found", pv_name);
return_NULL;
}
@@ -2306,8 +2338,9 @@ int vg_validate(struct volume_group *vg)
struct pv_list *pvl;
struct lv_list *lvl;
struct lv_segment *seg;
struct str_list *sl;
struct dm_str_list *sl;
char uuid[64] __attribute__((aligned(8)));
char uuid2[64] __attribute__((aligned(8)));
int r = 1;
unsigned hidden_lv_count = 0, lv_count = 0, lv_visible_count = 0;
unsigned pv_count = 0;
@@ -2404,6 +2437,17 @@ int vg_validate(struct volume_group *vg)
r = 0;
}
if (!id_equal(&lvl->lv->lvid.id[0], &lvl->lv->vg->id)) {
if (!id_write_format(&lvl->lv->lvid.id[0], uuid,
sizeof(uuid)))
stack;
if (!id_write_format(&lvl->lv->vg->id, uuid2,
sizeof(uuid2)))
stack;
log_error(INTERNAL_ERROR "LV %s has VG UUID %s but its VG %s has UUID %s",
lvl->lv->name, uuid, lvl->lv->vg->name, uuid2);
r = 0;
}
if (lv_is_cow(lvl->lv))
num_snapshots++;
@@ -2542,7 +2586,7 @@ int vg_validate(struct volume_group *vg)
}
dm_list_iterate_items(lvl, &vg->lvs) {
if (!(lvl->lv->status & PVMOVE))
if (!lv_is_pvmove(lvl->lv))
continue;
dm_list_iterate_items(seg, &lvl->lv->segments) {
if (seg_is_mirrored(seg)) {
@@ -2597,6 +2641,7 @@ int vg_write(struct volume_group *vg)
struct dm_list *mdah;
struct pv_to_create *pv_to_create;
struct metadata_area *mda;
int revert = 0, wrote = 0;
if (!vg_validate(vg))
return_0;
@@ -2649,39 +2694,45 @@ int vg_write(struct volume_group *vg)
if (!mda->ops->vg_write) {
log_error("Format does not support writing volume"
"group metadata areas");
/* Revert */
dm_list_uniterate(mdah, &vg->fid->metadata_areas_in_use, &mda->list) {
mda = dm_list_item(mdah, struct metadata_area);
if (mda->ops->vg_revert &&
!mda->ops->vg_revert(vg->fid, vg, mda)) {
stack;
}
}
return 0;
revert = 1;
break;
}
if (!mda->ops->vg_write(vg->fid, vg, mda)) {
stack;
/* Revert */
dm_list_uniterate(mdah, &vg->fid->metadata_areas_in_use, &mda->list) {
mda = dm_list_item(mdah, struct metadata_area);
if (mda->ops->vg_revert &&
!mda->ops->vg_revert(vg->fid, vg, mda)) {
stack;
}
if (vg->cmd->handles_missing_pvs) {
log_warn("WARNING: Failed to write an MDA of VG %s.", vg->name);
mda->status |= MDA_FAILED;
} else {
stack;
revert = 1;
break;
}
} else
++ wrote;
}
if (revert || !wrote) {
dm_list_uniterate(mdah, &vg->fid->metadata_areas_in_use, &mda->list) {
mda = dm_list_item(mdah, struct metadata_area);
if (mda->ops->vg_revert &&
!mda->ops->vg_revert(vg->fid, vg, mda)) {
stack;
}
return 0;
}
return 0;
}
/* Now pre-commit each copy of the new metadata */
dm_list_iterate_items(mda, &vg->fid->metadata_areas_in_use) {
if (mda->status & MDA_FAILED)
continue;
if (mda->ops->vg_precommit &&
!mda->ops->vg_precommit(vg->fid, vg, mda)) {
stack;
/* Revert */
dm_list_iterate_items(mda, &vg->fid->metadata_areas_in_use) {
if (mda->status & MDA_FAILED)
continue;
if (mda->ops->vg_revert &&
!mda->ops->vg_revert(vg->fid, vg, mda)) {
stack;
@@ -2722,6 +2773,8 @@ static int _vg_commit_mdas(struct volume_group *vg)
/* Commit to each copy of the metadata area */
dm_list_iterate_items(mda, &vg->fid->metadata_areas_in_use) {
if (mda->status & MDA_FAILED)
continue;
failed = 0;
if (mda->ops->vg_commit &&
!mda->ops->vg_commit(vg->fid, vg, mda)) {
@@ -3546,7 +3599,7 @@ static struct volume_group *_vg_read_by_vgid(struct cmd_context *cmd,
const char *vgname;
struct dm_list *vgnames;
struct volume_group *vg;
struct str_list *strl;
struct dm_str_list *strl;
int consistent = 0;
/* Is corresponding vgname already cached? */
@@ -3777,7 +3830,7 @@ struct dm_list *get_vgids(struct cmd_context *cmd, int include_internal)
static int _get_pvs(struct cmd_context *cmd, int warnings,
struct dm_list *pvslist, struct dm_list *vgslist)
{
struct str_list *strl;
struct dm_str_list *strl;
const char *vgname, *vgid;
struct pv_list *pvl, *pvl_copy;
struct dm_list *vgids;
@@ -4105,6 +4158,9 @@ int vg_check_status(const struct volume_group *vg, uint64_t status)
return !_vg_bad_status_bits(vg, status);
}
/*
* VG is left unlocked on failure
*/
static struct volume_group *_recover_vg(struct cmd_context *cmd,
const char *vg_name, const char *vgid)
{
@@ -4118,11 +4174,14 @@ static struct volume_group *_recover_vg(struct cmd_context *cmd,
if (!lock_vol(cmd, vg_name, LCK_VG_WRITE, NULL))
return_NULL;
if (!(vg = vg_read_internal(cmd, vg_name, vgid, 1, &consistent)))
if (!(vg = vg_read_internal(cmd, vg_name, vgid, 1, &consistent))) {
unlock_vg(cmd, vg_name);
return_NULL;
}
if (!consistent) {
release_vg(vg);
unlock_vg(cmd, vg_name);
return_NULL;
}
@@ -4202,7 +4261,7 @@ static struct volume_group *_vg_lock_and_read(struct cmd_context *cmd, const cha
log_error("Recovery of volume group \"%s\" failed.",
vg_name);
failure |= FAILED_INCONSISTENT;
goto bad;
goto bad_no_unlock;
}
}
@@ -4237,6 +4296,7 @@ bad:
if (!already_locked && !(misc_flags & READ_WITHOUT_LOCK))
unlock_vg(cmd, vg_name);
bad_no_unlock:
return _vg_make_handle(cmd, vg, failure);
}
@@ -4680,7 +4740,7 @@ int pv_change_metadataignore(struct physical_volume *pv, uint32_t mda_ignored)
char *tags_format_and_copy(struct dm_pool *mem, const struct dm_list *tagsl)
{
struct str_list *sl;
struct dm_str_list *sl;
if (!dm_pool_begin_object(mem, 256)) {
log_error("dm_pool_begin_object failed");


@@ -154,6 +154,7 @@ struct metadata_area_ops {
#define MDA_IGNORED 0x00000001
#define MDA_INCONSISTENT 0x00000002
#define MDA_FAILED 0x00000004
struct metadata_area {
struct dm_list list;
@@ -427,7 +428,7 @@ int lv_split_segment(struct logical_volume *lv, uint32_t le);
*/
int add_seg_to_segs_using_this_lv(struct logical_volume *lv, struct lv_segment *seg);
int remove_seg_from_segs_using_this_lv(struct logical_volume *lv, struct lv_segment *seg);
struct lv_segment *get_only_segment_using_this_lv(struct logical_volume *lv);
struct lv_segment *get_only_segment_using_this_lv(const struct logical_volume *lv);
int for_each_sub_lv(struct logical_volume *lv,
int (*fn)(struct logical_volume *lv, void *data),


@@ -42,9 +42,7 @@
*/
int is_temporary_mirror_layer(const struct logical_volume *lv)
{
if (lv->status & MIRROR_IMAGE
&& lv->status & MIRRORED
&& !(lv->status & LOCKED))
if (lv_is_mirror_image(lv) && lv_is_mirrored(lv) && !lv_is_locked(lv))
return 1;
return 0;
@@ -58,7 +56,7 @@ struct logical_volume *find_temporary_mirror(const struct logical_volume *lv)
{
struct lv_segment *seg;
if (!(lv->status & MIRRORED))
if (!lv_is_mirrored(lv))
return NULL;
seg = first_seg(lv);
@@ -109,7 +107,7 @@ uint32_t lv_mirror_count(const struct logical_volume *lv)
struct lv_segment *seg;
uint32_t s, mirrors;
if (!(lv->status & MIRRORED))
if (!lv_is_mirrored(lv))
return 1;
seg = first_seg(lv);
@@ -118,7 +116,7 @@ uint32_t lv_mirror_count(const struct logical_volume *lv)
if (!strcmp(seg->segtype->name, "raid10"))
return 2;
if (lv->status & PVMOVE)
if (lv_is_pvmove(lv))
return seg->area_count;
mirrors = 0;
@@ -281,7 +279,7 @@ static int _init_mirror_log(struct cmd_context *cmd,
struct logical_volume *log_lv, int in_sync,
struct dm_list *tagsl, int remove_on_failure)
{
struct str_list *sl;
struct dm_str_list *sl;
uint64_t orig_status = log_lv->status;
int was_active = 0;
@@ -420,7 +418,7 @@ static int _activate_lv_like_model(struct logical_volume *model,
static int _delete_lv(struct logical_volume *mirror_lv, struct logical_volume *lv)
{
struct cmd_context *cmd = mirror_lv->vg->cmd;
struct str_list *sl;
struct dm_str_list *sl;
/* Inherit tags - maybe needed for activation */
if (!str_list_match_list(&mirror_lv->tags, &lv->tags, NULL)) {
@@ -575,7 +573,7 @@ static int _move_removable_mimages_to_end(struct logical_volume *lv,
static int _mirrored_lv_in_sync(struct logical_volume *lv)
{
percent_t sync_percent;
dm_percent_t sync_percent;
if (!lv_mirror_percent(lv->vg->cmd, lv, 0, &sync_percent,
NULL)) {
@@ -590,7 +588,7 @@ static int _mirrored_lv_in_sync(struct logical_volume *lv)
return 0;
}
return (sync_percent == PERCENT_100) ? 1 : 0;
return (sync_percent == DM_PERCENT_100) ? 1 : 0;
}
/*
@@ -612,7 +610,7 @@ static int _split_mirror_images(struct logical_volume *lv,
struct lv_list *lvl;
struct cmd_context *cmd = lv->vg->cmd;
if (!(lv->status & MIRRORED)) {
if (!lv_is_mirrored(lv)) {
log_error("Unable to split non-mirrored LV, %s",
lv->name);
return 0;
@@ -744,6 +742,7 @@ static int _split_mirror_images(struct logical_volume *lv,
detached_log_lv = detach_mirror_log(mirrored_seg);
if (!remove_layer_from_lv(lv, sub_lv))
return_0;
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
}
@@ -842,7 +841,7 @@ static int _remove_mirror_images(struct logical_volume *lv,
struct logical_volume *sub_lv;
struct logical_volume *detached_log_lv = NULL;
struct logical_volume *temp_layer_lv = NULL;
struct lv_segment *mirrored_seg = first_seg(lv);
struct lv_segment *pvmove_seg, *mirrored_seg = first_seg(lv);
uint32_t old_area_count = mirrored_seg->area_count;
uint32_t new_area_count = mirrored_seg->area_count;
struct lv_list *lvl;
@@ -943,18 +942,24 @@ static int _remove_mirror_images(struct logical_volume *lv,
* mirror. Fix up the flags if we only have one image left.
*/
if (lv_mirror_count(lv) == 1) {
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
}
mirrored_seg = first_seg(lv);
if (remove_log && !detached_log_lv)
detached_log_lv = detach_mirror_log(mirrored_seg);
if (lv_is_pvmove(lv))
dm_list_iterate_items(pvmove_seg, &lv->segments)
pvmove_seg->status |= PVMOVE;
} else if (new_area_count == 0) {
log_very_verbose("All mimages of %s are gone", lv->name);
/* All mirror images are gone.
* It can happen for vgreduce --removemissing. */
detached_log_lv = detach_mirror_log(mirrored_seg);
lv->status &= ~MIRROR;
lv->status &= ~MIRRORED;
lv->status &= ~LV_NOTSYNCED;
if (!replace_lv_with_error_segment(lv))
@@ -1500,9 +1505,10 @@ const char *get_pvmove_pvname_from_lv_mirr(struct logical_volume *lv_mirr)
dm_list_iterate_items(seg, &lv_mirr->segments) {
if (!seg_is_mirrored(seg))
continue;
if (seg_type(seg, 0) != AREA_PV)
continue;
return dev_name(seg_dev(seg, 0));
if (seg_type(seg, 0) == AREA_PV)
return dev_name(seg_dev(seg, 0));
if (seg_type(seg, 0) == AREA_LV)
return dev_name(seg_dev(first_seg(seg_lv(seg, 0)), 0));
}
return NULL;
@@ -1520,7 +1526,7 @@ struct logical_volume *find_pvmove_lv_in_lv(struct logical_volume *lv)
for (s = 0; s < seg->area_count; s++) {
if (seg_type(seg, s) != AREA_LV)
continue;
if (seg_lv(seg, s)->status & PVMOVE)
if (lv_is_pvmove(seg_lv(seg, s)))
return seg_lv(seg, s);
}
}
@@ -1555,12 +1561,29 @@ struct logical_volume *find_pvmove_lv(struct volume_group *vg,
if (!(lv->status & lv_type))
continue;
/* Check segment origins point to pvname */
/*
* If this is an atomic pvmove, the first
* segment will be a mirror containing
* mimages (i.e. AREA_LVs)
*/
if (seg_type(first_seg(lv), 0) == AREA_LV) {
seg = first_seg(lv); /* the mirror segment */
seg = first_seg(seg_lv(seg, 0)); /* mimage_0 segment0 */
if (seg_dev(seg, 0) != dev)
continue;
return lv;
}
/*
* If this is a normal pvmove, check all the segments'
* first areas for the requested device
*/
dm_list_iterate_items(seg, &lv->segments) {
if (seg_type(seg, 0) != AREA_PV)
continue;
if (seg_dev(seg, 0) != dev)
continue;
return lv;
}
}
@@ -1652,20 +1675,21 @@ int fixup_imported_mirrors(struct volume_group *vg)
return 1;
}
/*
* Add mirrors to "linear" or "mirror" segments
*/
int add_mirrors_to_segments(struct cmd_context *cmd, struct logical_volume *lv,
uint32_t mirrors, uint32_t region_size,
struct dm_list *allocatable_pvs, alloc_policy_t alloc)
static int _add_mirrors_that_preserve_segments(struct logical_volume *lv,
uint32_t flags,
uint32_t mirrors,
uint32_t region_size,
struct dm_list *allocatable_pvs,
alloc_policy_t alloc)
{
struct cmd_context *cmd = lv->vg->cmd;
struct alloc_handle *ah;
const struct segment_type *segtype;
struct dm_list *parallel_areas;
uint32_t adjusted_region_size;
int r = 1;
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 1)))
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 1, 0)))
return_0;
if (!(segtype = get_segtype_from_string(cmd, "mirror")))
@@ -1682,15 +1706,37 @@ int add_mirrors_to_segments(struct cmd_context *cmd, struct logical_volume *lv,
return 0;
}
if (!lv_add_mirror_areas(ah, lv, 0, adjusted_region_size)) {
log_error("Failed to add mirror areas to %s", lv->name);
if (flags & MIRROR_BY_SEG) {
if (!lv_add_mirror_areas(ah, lv, 0, adjusted_region_size)) {
log_error("Failed to add mirror areas to %s", lv->name);
r = 0;
}
} else if (flags & MIRROR_BY_SEGMENTED_LV) {
if (!lv_add_segmented_mirror_image(ah, lv, 0,
adjusted_region_size)) {
log_error("Failed to add mirror areas to %s", lv->name);
r = 0;
}
} else {
log_error(INTERNAL_ERROR "Unknown mirror flag");
r = 0;
}
alloc_destroy(ah);
return r;
}
/*
* Add mirrors to "linear" or "mirror" segments
*/
int add_mirrors_to_segments(struct cmd_context *cmd, struct logical_volume *lv,
uint32_t mirrors, uint32_t region_size,
struct dm_list *allocatable_pvs, alloc_policy_t alloc)
{
return _add_mirrors_that_preserve_segments(lv, MIRROR_BY_SEG,
mirrors, region_size,
allocatable_pvs, alloc);
}
/*
* Convert mirror log
*
@@ -1701,7 +1747,7 @@ int remove_mirror_log(struct cmd_context *cmd,
struct dm_list *removable_pvs,
int force)
{
percent_t sync_percent;
dm_percent_t sync_percent;
struct volume_group *vg = lv->vg;
/* Unimplemented features */
@@ -1734,7 +1780,7 @@ int remove_mirror_log(struct cmd_context *cmd,
return 0;
}
if (sync_percent == PERCENT_100)
if (sync_percent == DM_PERCENT_100)
init_mirror_in_sync(1);
else {
/* A full resync will take place */
@@ -1896,7 +1942,7 @@ int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
struct alloc_handle *ah;
const struct segment_type *segtype;
struct dm_list *parallel_areas;
percent_t sync_percent;
dm_percent_t sync_percent;
int in_sync;
struct logical_volume *log_lv;
unsigned old_log_count;
@@ -1927,7 +1973,7 @@ int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
return 1;
}
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0)))
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0, 0)))
return_0;
if (!(segtype = get_segtype_from_string(cmd, "mirror")))
@@ -1964,7 +2010,7 @@ int add_mirror_log(struct cmd_context *cmd, struct logical_volume *lv,
/* check sync status */
if (mirror_in_sync() ||
(lv_mirror_percent(cmd, lv, 0, &sync_percent, NULL) &&
(sync_percent == PERCENT_100)))
(sync_percent == DM_PERCENT_100)))
in_sync = 1;
else
in_sync = 0;
@@ -2000,7 +2046,7 @@ int add_mirror_images(struct cmd_context *cmd, struct logical_volume *lv,
* allocate destination extents
*/
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0)))
if (!(parallel_areas = build_parallel_areas_from_lv(lv, 0, 0)))
return_0;
if (!(segtype = get_segtype_from_string(cmd, "mirror")))
@@ -2072,7 +2118,7 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
if (vg_is_clustered(lv->vg)) {
/* FIXME: move this test out of this function */
/* Skip test for pvmove mirrors, it can use local mirror */
if (!(lv->status & (PVMOVE | LOCKED)) &&
if (!lv_is_pvmove(lv) && !lv_is_locked(lv) &&
lv_is_active(lv) &&
!lv_is_active_exclusive_locally(lv) && /* lv_is_active_remotely */
!_cluster_mirror_is_available(lv)) {
@@ -2112,8 +2158,19 @@ int lv_add_mirrors(struct cmd_context *cmd, struct logical_volume *lv,
return 0;
}
return add_mirrors_to_segments(cmd, lv, mirrors,
region_size, pvs, alloc);
return _add_mirrors_that_preserve_segments(lv, MIRROR_BY_SEG,
mirrors, region_size,
pvs, alloc);
} else if (flags & MIRROR_BY_SEGMENTED_LV) {
if (stripes > 1) {
log_error("Striped-mirroring is not supported on "
"segment-by-segment mirroring");
return 0;
}
return _add_mirrors_that_preserve_segments(lv, MIRROR_BY_SEGMENTED_LV,
mirrors, region_size,
pvs, alloc);
} else if (flags & MIRROR_BY_LV) {
if (!mirrors)
return add_mirror_log(cmd, lv, log_count,
@@ -2156,7 +2213,7 @@ int lv_split_mirror_images(struct logical_volume *lv, const char *split_name,
*/
r = _split_mirror_images(lv, split_name, split_count, removable_pvs);
if (!r)
return 0;
return_0;
return 1;
}
@@ -2196,7 +2253,7 @@ int lv_remove_mirrors(struct cmd_context *cmd __attribute__((unused)),
/* MIRROR_BY_LV */
if (seg_type(seg, 0) == AREA_LV &&
seg_lv(seg, 0)->status & MIRROR_IMAGE)
lv_is_mirror_image(seg_lv(seg, 0)))
return remove_mirror_images(lv, new_mirrors + 1,
is_removable, removable_baton,
log_count ? 1U : 0);


@@ -23,11 +23,14 @@
#include "segtype.h"
#include "lv_alloc.h"
#include "defaults.h"
#include "dev-type.h"
#include "display.h"
#include "toolcontext.h"
int attach_pool_metadata_lv(struct lv_segment *pool_seg,
struct logical_volume *metadata_lv)
{
if (!seg_is_thin_pool(pool_seg) && !seg_is_cache_pool(pool_seg)) {
if (!seg_is_pool(pool_seg)) {
log_error(INTERNAL_ERROR
"Unable to attach pool metadata LV to %s segtype.",
pool_seg->segtype->ops->name(pool_seg));
@@ -41,10 +44,30 @@ int attach_pool_metadata_lv(struct lv_segment *pool_seg,
return add_seg_to_segs_using_this_lv(metadata_lv, pool_seg);
}
int detach_pool_metadata_lv(struct lv_segment *pool_seg, struct logical_volume **metadata_lv)
{
struct logical_volume *lv = pool_seg->metadata_lv;
if (!lv ||
!lv_is_pool_metadata(lv) ||
!remove_seg_from_segs_using_this_lv(lv, pool_seg)) {
log_error(INTERNAL_ERROR "Logical volume %s is not valid pool.",
display_lvname(pool_seg->lv));
return 0;
}
lv_set_visible(lv);
lv->status &= ~(THIN_POOL_METADATA | CACHE_POOL_METADATA);
*metadata_lv = lv;
pool_seg->metadata_lv = NULL;
return 1;
}
int attach_pool_data_lv(struct lv_segment *pool_seg,
struct logical_volume *pool_data_lv)
{
if (!seg_is_thin_pool(pool_seg) && !seg_is_cache_pool(pool_seg)) {
if (!seg_is_pool(pool_seg)) {
log_error(INTERNAL_ERROR
"Unable to attach pool data LV to %s segtype.",
pool_seg->segtype->ops->name(pool_seg));
@@ -199,8 +222,7 @@ struct lv_segment *find_pool_seg(const struct lv_segment *seg)
return NULL;
}
if ((lv_is_thin_type(seg->lv) && !seg_is_thin_pool(pool_seg)) &&
!seg_is_cache_pool(pool_seg)) {
if ((lv_is_thin_type(seg->lv) && !seg_is_pool(pool_seg))) {
log_error("%s on %s is not a %s pool segment",
pool_seg->lv->name, seg->lv->name,
lv_is_thin_type(seg->lv) ? "thin" : "cache");
@@ -210,6 +232,120 @@ struct lv_segment *find_pool_seg(const struct lv_segment *seg)
return pool_seg;
}
/* Greatest common divisor */
static unsigned long _gcd(unsigned long n1, unsigned long n2)
{
unsigned long remainder;
do {
remainder = n1 % n2;
n1 = n2;
n2 = remainder;
} while (n2);
return n1;
}
/* Least common multiple */
static unsigned long _lcm(unsigned long n1, unsigned long n2)
{
if (!n1 || !n2)
return 0;
return (n1 * n2) / _gcd(n1, n2);
}
int recalculate_pool_chunk_size_with_dev_hints(struct logical_volume *pool_lv,
int passed_args,
int chunk_size_calc_policy)
{
struct logical_volume *pool_data_lv;
struct lv_segment *seg;
struct physical_volume *pv;
struct cmd_context *cmd = pool_lv->vg->cmd;
unsigned long previous_hint = 0, hint = 0;
uint32_t default_chunk_size;
uint32_t min_chunk_size, max_chunk_size;
if (passed_args & PASS_ARG_CHUNK_SIZE)
return 1;
if (lv_is_thin_pool(pool_lv)) {
if (find_config_tree_int(cmd, allocation_thin_pool_chunk_size_CFG, NULL))
return 1;
min_chunk_size = DM_THIN_MIN_DATA_BLOCK_SIZE;
max_chunk_size = DM_THIN_MAX_DATA_BLOCK_SIZE;
default_chunk_size = get_default_allocation_thin_pool_chunk_size_CFG(cmd, NULL);
} else if (lv_is_cache_pool(pool_lv)) {
if (find_config_tree_int(cmd, allocation_cache_pool_chunk_size_CFG, NULL))
return 1;
min_chunk_size = DM_CACHE_MIN_DATA_BLOCK_SIZE;
max_chunk_size = DM_CACHE_MAX_DATA_BLOCK_SIZE;
default_chunk_size = get_default_allocation_cache_pool_chunk_size_CFG(cmd, NULL);
} else {
log_error(INTERNAL_ERROR "%s is not a pool logical volume.", display_lvname(pool_lv));
return 0;
}
pool_data_lv = seg_lv(first_seg(pool_lv), 0);
dm_list_iterate_items(seg, &pool_data_lv->segments) {
pv = seg_pv(seg, 0);
if (chunk_size_calc_policy == THIN_CHUNK_SIZE_CALC_METHOD_PERFORMANCE)
hint = dev_optimal_io_size(cmd->dev_types, pv_dev(pv));
else
hint = dev_minimum_io_size(cmd->dev_types, pv_dev(pv));
if (!hint)
continue;
if (previous_hint)
hint = _lcm(previous_hint, hint);
previous_hint = hint;
}
if (!hint)
log_debug_alloc("No usable device hint found while recalculating "
"pool chunk size for %s.", display_lvname(pool_lv));
else if ((hint < min_chunk_size) || (hint > max_chunk_size))
log_debug_alloc("Calculated chunk size %s for pool %s "
"is out of allowed range (%s-%s).",
display_size(cmd, hint), display_lvname(pool_lv),
display_size(cmd, min_chunk_size),
display_size(cmd, max_chunk_size));
else
first_seg(pool_lv)->chunk_size =
(hint >= default_chunk_size) ? hint : default_chunk_size;
return 1;
}
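
recalculate_pool_chunk_size_with_dev_hints() above folds each PV's optimal or minimum I/O size hint into a running least common multiple, so the resulting chunk size is aligned for every underlying device. A stand-alone sketch of that folding, with made-up hint values:

#include <stdio.h>

static unsigned long gcd(unsigned long a, unsigned long b)
{
        while (b) {
                unsigned long r = a % b;
                a = b;
                b = r;
        }
        return a;
}

static unsigned long lcm(unsigned long a, unsigned long b)
{
        return (a && b) ? (a / gcd(a, b)) * b : 0;
}

int main(void)
{
        /* Hypothetical per-device I/O size hints (units are illustrative). */
        unsigned long hints[] = { 65536, 98304 };
        unsigned long acc = 0;
        unsigned i;

        for (i = 0; i < sizeof(hints) / sizeof(hints[0]); i++)
                acc = acc ? lcm(acc, hints[i]) : hints[i];

        printf("combined hint: %lu\n", acc);    /* 196608: aligned for both devices */
        return 0;
}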
int update_pool_params(const struct segment_type *segtype,
struct volume_group *vg, unsigned target_attr,
int passed_args, uint32_t data_extents,
uint64_t *pool_metadata_size,
int *chunk_size_calc_policy, uint32_t *chunk_size,
thin_discards_t *discards, int *zero)
{
if (segtype_is_cache_pool(segtype) || segtype_is_cache(segtype)) {
if (!update_cache_pool_params(vg, target_attr, passed_args,
data_extents, pool_metadata_size,
chunk_size_calc_policy, chunk_size))
return_0;
} else if (!update_thin_pool_params(vg, target_attr, passed_args,
data_extents, pool_metadata_size,
chunk_size_calc_policy, chunk_size,
discards, zero)) /* thin-pool */
return_0;
if ((uint64_t) *chunk_size > (uint64_t) data_extents * vg->extent_size) {
log_error("Chunk size %s is bigger then pool data size.",
display_size(vg->cmd, *chunk_size));
return 0;
}
log_verbose("Using pool metadata size %s.",
display_size(vg->cmd, *pool_metadata_size));
return 1;
}
int create_pool(struct logical_volume *pool_lv,
const struct segment_type *segtype,
struct alloc_handle *ah, uint32_t stripes, uint32_t stripe_size)
@@ -314,7 +450,8 @@ int create_pool(struct logical_volume *pool_lv,
bad:
if (activation()) {
-if (deactivate_lv_local(pool_lv->vg->cmd, pool_lv)) {
+if (lv_is_active_locally(pool_lv) &&
+    deactivate_lv_local(pool_lv->vg->cmd, pool_lv)) {
log_error("Aborting. Could not deactivate pool %s.",
pool_lv->name);
return 0;
@@ -327,3 +464,215 @@ bad:
return 0;
}
struct logical_volume *alloc_pool_metadata(struct logical_volume *pool_lv,
const char *name, uint32_t read_ahead,
uint32_t stripes, uint32_t stripe_size,
uint64_t size, alloc_policy_t alloc,
struct dm_list *pvh)
{
struct logical_volume *metadata_lv;
/* FIXME: Make lvm2api usable */
struct lvcreate_params lvc = {
.activate = CHANGE_ALY,
.alloc = alloc,
.major = -1,
.minor = -1,
.permission = LVM_READ | LVM_WRITE,
.pvh = pvh,
.read_ahead = read_ahead,
.stripe_size = stripe_size,
.stripes = stripes,
.zero = 1,
};
dm_list_init(&lvc.tags);
if (!(lvc.extents = extents_from_size(pool_lv->vg->cmd, size,
pool_lv->vg->extent_size)))
return_0;
if (!(lvc.segtype = get_segtype_from_string(pool_lv->vg->cmd, "striped")))
return_0;
/* FIXME: properly allocate space for metadata_lv */
if (!(metadata_lv = lv_create_single(pool_lv->vg, &lvc)))
return_0;
if (!lv_rename_update(pool_lv->vg->cmd, metadata_lv, name, 0))
return_0;
return metadata_lv;
}
static struct logical_volume *_alloc_pool_metadata_spare(struct volume_group *vg,
uint32_t extents,
struct dm_list *pvh)
{
struct logical_volume *lv;
/* FIXME: Make lvm2api usable */
struct lvcreate_params lp = {
.activate = CHANGE_ALY,
.alloc = ALLOC_INHERIT,
.extents = extents,
.major = -1,
.minor = -1,
.permission = LVM_READ | LVM_WRITE,
.pvh = pvh ? : &vg->pvs,
.read_ahead = DM_READ_AHEAD_AUTO,
.stripes = 1,
.zero = 1,
.temporary = 1,
};
dm_list_init(&lp.tags);
if (!(lp.segtype = get_segtype_from_string(vg->cmd, "striped")))
return_0;
/* FIXME: Maybe use silent mode? */
if (!(lv = lv_create_single(vg, &lp)))
return_0;
/* Spare LV should not be active */
if (!deactivate_lv_local(vg->cmd, lv)) {
log_error("Unable to deactivate pool metadata spare LV. "
"Manual intervention required.");
return 0;
}
if (!vg_set_pool_metadata_spare(lv))
return_0;
return lv;
}
/*
* Create/resize pool metadata spare LV
* The caller is responsible for vg_write() and vg_commit() along with the pool creation.
* If extents is 0, the maximum size is determined from the existing pool metadata LVs.
*/
int handle_pool_metadata_spare(struct volume_group *vg, uint32_t extents,
struct dm_list *pvh, int poolmetadataspare)
{
struct logical_volume *lv = vg->pool_metadata_spare_lv;
uint32_t seg_mirrors;
struct lv_segment *seg;
const struct lv_list *lvl;
if (!extents)
/* Find maximal size of metadata LV */
dm_list_iterate_items(lvl, &vg->lvs)
if (lv_is_pool_metadata(lvl->lv) &&
(lvl->lv->le_count > extents))
extents = lvl->lv->le_count;
if (!poolmetadataspare) {
/* TODO: Should this warning be suppressed when lvm.conf explicitly sets 'n'? */
if (DEFAULT_POOL_METADATA_SPARE && extents)
/* Warn only if some pool would actually use the spare */
log_warn("WARNING: recovery of pools without pool "
"metadata spare LV is not automated.");
return 1;
}
if (!lv) {
if (!_alloc_pool_metadata_spare(vg, extents, pvh))
return_0;
return 1;
}
seg = last_seg(lv);
seg_mirrors = lv_mirror_count(lv);
/* Check spare LV is big enough and preserve segtype */
if ((lv->le_count < extents) && seg &&
!lv_extend(lv, seg->segtype,
seg->area_count / seg_mirrors,
seg->stripe_size,
seg_mirrors,
seg->region_size,
extents - lv->le_count, NULL,
pvh, lv->alloc, 0))
return_0;
return 1;
}
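
The comment above handle_pool_metadata_spare() leaves the metadata commit to the caller. Below is a minimal sketch of the implied calling sequence, assuming lvm2's internal headers and the existing vg_write()/vg_commit() entry points; example_update_spare() itself is hypothetical and not taken from lvm2:

/*
 * Hedged sketch, not code from lvm2: the caller creates or resizes the
 * pool metadata spare and then commits the VG metadata itself, as the
 * comment above handle_pool_metadata_spare() requires.  Assumes lvm2's
 * internal headers; error handling is reduced to the bare minimum.
 */
static int example_update_spare(struct volume_group *vg)
{
	/* extents = 0: size the spare after the largest pool metadata LV. */
	if (!handle_pool_metadata_spare(vg, 0, &vg->pvs, 1))
		return_0;

	/* The commit is the caller's job, not handle_pool_metadata_spare()'s. */
	if (!vg_write(vg) || !vg_commit(vg))
		return_0;

	return 1;
}
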
int vg_set_pool_metadata_spare(struct logical_volume *lv)
{
char new_name[NAME_LEN];
struct volume_group *vg = lv->vg;
if (vg->pool_metadata_spare_lv) {
if (vg->pool_metadata_spare_lv == lv)
return 1;
if (!vg_remove_pool_metadata_spare(vg))
return_0;
}
if (dm_snprintf(new_name, sizeof(new_name), "%s_pmspare", lv->name) < 0) {
log_error("Can't create pool metadata spare. Name of pool LV "
"%s is too long.", lv->name);
return 0;
}
if (!lv_rename_update(vg->cmd, lv, new_name, 0))
return_0;
lv_set_hidden(lv);
lv->status |= POOL_METADATA_SPARE;
vg->pool_metadata_spare_lv = lv;
return 1;
}
int vg_remove_pool_metadata_spare(struct volume_group *vg)
{
char new_name[NAME_LEN];
char *c;
struct logical_volume *lv = vg->pool_metadata_spare_lv;
if (!(lv->status & POOL_METADATA_SPARE)) {
log_error(INTERNAL_ERROR "LV %s is not pool metadata spare.",
lv->name);
return 0;
}
vg->pool_metadata_spare_lv = NULL;
lv->status &= ~POOL_METADATA_SPARE;
lv_set_visible(lv);
/* Cut off suffix _pmspare */
(void) dm_strncpy(new_name, lv->name, sizeof(new_name));
if (!(c = strchr(new_name, '_'))) {
log_error(INTERNAL_ERROR "LV %s has no suffix for pool metadata spare.",
new_name);
return 0;
}
*c = 0;
/* If the name is in use, generate new lvol%d */
if (find_lv_in_vg(vg, new_name) &&
!generate_lv_name(vg, "lvol%d", new_name, sizeof(new_name))) {
log_error("Failed to generate unique name for "
"pool metadata spare logical volume.");
return 0;
}
log_print_unless_silent("Renaming existing pool metadata spare "
"logical volume \"%s/%s\" to \"%s/%s\".",
vg->name, lv->name, vg->name, new_name);
if (!lv_rename_update(vg->cmd, lv, new_name, 0))
return_0;
/* To display default warning */
(void) handle_pool_metadata_spare(vg, 0, 0, 0);
return 1;
}


@@ -246,8 +246,65 @@ int discard_pv_segment(struct pv_segment *peg, uint32_t discard_area_reduction)
return 1;
}
static int _merge_free_pv_segment(struct pv_segment *peg)
{
struct dm_list *l;
struct pv_segment *merge_peg;
if (peg->lvseg) {
log_error(INTERNAL_ERROR
"_merge_free_pv_segment called on a"
" segment that is not free.");
return 0;
}
/*
* FIXME:
* Should we free the list element once it is deleted
* from the list? I think not. It is likely part of
* a mempool.
*/
/* Attempt to merge with Free space before */
if ((l = dm_list_prev(&peg->pv->segments, &peg->list))) {
merge_peg = dm_list_item(l, struct pv_segment);
if (!merge_peg->lvseg) {
merge_peg->len += peg->len;
dm_list_del(&peg->list);
peg = merge_peg;
}
}
/* Attempt to merge with Free space after */
if ((l = dm_list_next(&peg->pv->segments, &peg->list))) {
merge_peg = dm_list_item(l, struct pv_segment);
if (!merge_peg->lvseg) {
peg->len += merge_peg->len;
dm_list_del(&merge_peg->list);
}
}
return 1;
}
/*
* release_pv_segment
* @peg
* @area_reduction
*
* WARNING: When release_pv_segment is called, the freed space may be
* merged into the 'pv_segment's before and after it in the
* list if they are also free. Thus, any iterators of the
* 'pv->segments' list that call this function must be aware
* that the list can change in a way that is unsafe even for
* *_safe iterators. Restart the iterator in these cases.
*
* Returns: 1 on success, 0 on failure
*/
int release_pv_segment(struct pv_segment *peg, uint32_t area_reduction)
{
struct dm_list *l;
struct pv_segment *merge_peg;
if (!peg->lvseg) {
log_error("release_pv_segment with unallocated segment: "
"%s PE %" PRIu32, pv_dev_name(peg->pv), peg->pe);
@@ -261,9 +318,7 @@ int release_pv_segment(struct pv_segment *peg, uint32_t area_reduction)
peg->lvseg = NULL;
peg->lv_area = 0;
-/* FIXME merge free space */
-return 1;
+return _merge_free_pv_segment(peg);
}
if (!pv_split_segment(peg->lvseg->lv->vg->vgmem,
@@ -271,6 +326,12 @@ int release_pv_segment(struct pv_segment *peg, uint32_t area_reduction)
area_reduction, NULL))
return_0;
/* The segment after 'peg' now holds free space, try to merge it */
if ((l = dm_list_next(&peg->pv->segments, &peg->list))) {
merge_peg = dm_list_item(l, struct pv_segment);
return _merge_free_pv_segment(merge_peg);
}
return 1;
}
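
The warning above release_pv_segment() means that even a *_safe iterator is not enough once neighbouring free segments have been merged. The sketch below shows the restart pattern it asks for, assuming lvm2's internal headers; example_drop_segments() and its must_drop() predicate are hypothetical and only illustrate the iteration discipline:

/*
 * Hedged sketch, not lvm2 code: the "restart the iterator" pattern the
 * WARNING above asks for.  After a successful release the list layout
 * may have changed (free neighbours merged away), so the walk over
 * pv->segments starts again from the head of the list.
 */
static int example_drop_segments(struct physical_volume *pv,
				 int (*must_drop)(struct pv_segment *peg))
{
	struct pv_segment *peg;
	int changed;

	do {
		changed = 0;
		dm_list_iterate_items(peg, &pv->segments) {
			if (!peg->lvseg || !must_drop(peg))
				continue;	/* free or not selected */
			/* Release the whole area; neighbours may merge. */
			if (!release_pv_segment(peg, peg->lvseg->area_len))
				return_0;
			changed = 1;
			break;	/* restart: iterator may now be stale */
		}
	} while (changed);

	return 1;
}
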
@@ -733,6 +794,7 @@ int pvremove_single(struct cmd_context *cmd, const char *pv_name,
unsigned prompt)
{
struct device *dev;
struct lvmcache_info *info;
int r = 0;
if (!lock_vol(cmd, VG_ORPHANS, LCK_VG_WRITE, NULL)) {
@@ -749,6 +811,8 @@ int pvremove_single(struct cmd_context *cmd, const char *pv_name,
goto out;
}
info = lvmcache_info_from_pvid(dev->pvid, 1);
if (!dev_test_excl(dev)) {
/* FIXME Detect whether device-mapper is still using the device */
log_error("Can't open %s exclusively - not removing. "
@@ -762,6 +826,9 @@ int pvremove_single(struct cmd_context *cmd, const char *pv_name,
goto out;
}
if (info)
lvmcache_del(info);
if (!lvmetad_pv_gone_by_dev(dev, NULL))
goto_out;

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.