Mirror of git://sourceware.org/git/lvm2.git
Commit Graph

5823 Commits

Author SHA1 Message Date
Alasdair G Kergon
9b830791ea label: Move _set_label_read_result call into _find_labeller.
Move responsibility for setting the label_read() result parameter down
into _find_labeller().
2018-01-02 15:30:58 +00:00
Alasdair G Kergon
4f4ddb806d label: Move setting result of label_read into separate fn. 2018-01-02 14:19:20 +00:00
Alasdair G Kergon
e6b4b41881 label: Add mempool. 2018-01-02 13:37:12 +00:00
Zdenek Kabelac
3a841515af lvm-string: add function to detect component LV suffix
Add the is_component_lvname() function to recognize component LV names.
2017-12-19 15:28:07 +01:00
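A minimal sketch of how such a suffix check might be structured, assuming a static suffix table - the suffix list, the function name and the signature below are illustrative, not the committed code:

#include <stdbool.h>
#include <string.h>

/* Illustrative only: recognize an internal component LV by its name suffix. */
static bool _is_component_lvname_sketch(const char *name)
{
        /* Assumed suffix list - the real set lives in lvm-string.c. */
        static const char *_suffixes[] = {
                "_tdata", "_tmeta", "_cdata", "_cmeta", "_mlog", "_pmspare", NULL
        };
        size_t len = strlen(name), slen;
        unsigned i;

        for (i = 0; _suffixes[i]; i++) {
                slen = strlen(_suffixes[i]);
                if (len > slen && !strcmp(name + len - slen, _suffixes[i]))
                        return true;
        }

        /* Numbered suffixes such as "_rimage_0" or "_mimage_12" would need a
         * prefix-plus-digits match; omitted here for brevity. */
        return false;
}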
Alasdair G Kergon
17649d4ac8 device: Move dev_read memory allocation into device layer.
Rename dev_read() to dev_read_buf() - the function that reads data
into a supplied buffer.

Introduce a new dev_read() that allocates the buffer it returns and
switch the important users over to this.  No caller may change the
returned data.  (For now, callers are responsible for freeing it after
use, but later the device layer will take full ownership.)

dev_read_buf() should only be used for tiny buffers or unimportant code
(such as the old disk formats).
2017-12-19 01:31:50 +00:00
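A hedged sketch of the resulting split between the two call styles; the prototypes below are inferred from the description above and are not necessarily the exact ones in the device layer headers:

#include <stdint.h>
#include <stddef.h>

struct device;  /* opaque handle owned by the device layer */

/* Reads into a caller-supplied buffer; intended only for tiny buffers or
 * unimportant code such as the old disk formats. */
int dev_read_buf(struct device *dev, uint64_t offset, size_t len, void *buffer);

/* Allocates the buffer it returns; callers must not change the returned
 * data and, for now, are responsible for freeing it after use. */
const char *dev_read(struct device *dev, uint64_t offset, size_t len);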
David Teigland
3f9ae846b8 lvmlockd: clear coverity complaint
Follow-up to the previous coverity fix; coverity is never happy.
2017-12-18 15:19:17 -06:00
Alasdair G Kergon
5f45cb90a7 format_text: Transfer circular buf alloc to device layer.
Instead of the caller passing dev_read_circular() a buffer to fill with
data, the device layer itself now allocates it.
2017-12-15 22:34:26 +00:00
Alasdair G Kergon
beee9940a5 format_text: Separate out code paths for buffer wraparound
The creation of wrapped-around metadata - where the start of the metadata is
written up to the end of the buffer and the remainder wraps back to the
start of the buffer - is now restricted to cases where the metadata wouldn't
fit if written in one piece.  This shouldn't happen in 'normal' usage, so
let's begin treating the code for this as a special case that can be ignored
when optimising the 'normal' cases.
2017-12-15 21:12:19 +00:00
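For context, a simplified sketch of the two paths the commit above separates - a single-piece write for the common case and a split write for the wraparound case.  The helper dev_write_at(), the header constant and the argument layout are assumptions made for illustration:

#include <stdint.h>

struct device;

#define MDA_HEADER_SIZE 512UL  /* space reserved for the mda header (illustrative) */

/* Hypothetical writer: 'len' bytes from 'buf' at an absolute disk offset. */
int dev_write_at(struct device *dev, uint64_t offset, uint64_t len, const char *buf);

static int _write_circular(struct device *dev, uint64_t area_start,
                           uint64_t area_size, uint64_t write_offset,
                           const char *buf, uint64_t len)
{
        uint64_t space_to_end = area_size - write_offset;

        if (len <= space_to_end)  /* fits in one piece - the 'normal' case */
                return dev_write_at(dev, area_start + write_offset, len, buf);

        /* Wraparound special case: the remainder follows back at the start
         * of the buffer, just after the mda header. */
        if (!dev_write_at(dev, area_start + write_offset, space_to_end, buf))
                return 0;

        return dev_write_at(dev, area_start + MDA_HEADER_SIZE,
                            len - space_to_end, buf + space_to_end);
}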
Alasdair G Kergon
145ded10c2 format_text: Supply mempool directly to raw_read_mda_header. 2017-12-15 14:57:05 +00:00
Alasdair G Kergon
3edc25dbdf format_text: Round size written up to multiple of 4096.
Zero-fill metadata up to the next 4096 boundary then write out a
multiple of 4096 bytes to avoid triggering a read-modify-write.
2017-12-12 22:52:22 +00:00
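The round-up itself is the usual power-of-two calculation; a small self-contained sketch, with names chosen purely for illustration:

#include <stdint.h>
#include <string.h>

#define MDA_ALIGNMENT 4096UL  /* illustrative name for the 4096-byte boundary */

/* Round 'len' up to the next multiple of a power-of-two alignment. */
static uint64_t _round_up(uint64_t len, uint64_t align)
{
        return (len + align - 1) & ~(align - 1);
}

/* Zero-fill the tail of 'buf' so a whole multiple of 4096 bytes gets written,
 * avoiding a read-modify-write on 4K-sector devices.  'buf' must be large
 * enough to hold the padded size. */
static uint64_t _pad_metadata(char *buf, uint64_t len)
{
        uint64_t padded = _round_up(len, MDA_ALIGNMENT);

        memset(buf + len, 0, padded - len);
        return padded;
}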
Alasdair G Kergon
78ffa44fc5 format_text: Change metadata alignment from 512 to 4096.
If there is sufficient space in the metadata area, align the next
metadata to a disk offset that is a multiple of 4096 bytes and
don't write it circularly.  If it doesn't all fit at the end
of the metadata area, go back to the start and write it all there
contiguously.

If there is insufficient space to use the new stricter rules, revert to
the original behaviour, aligning on 512-byte boundaries wrapping around
the circular buffer as required.
2017-12-12 20:57:36 +00:00
Alasdair G Kergon
643df602c7 format_text: More refactoring of metadata offset calcs 2017-12-12 18:51:32 +00:00
Alasdair G Kergon
4002f5e206 format_text: Refactor and document metadata offset calculation. 2017-12-12 18:36:54 +00:00
Alasdair G Kergon
e932c5da50 device: Fix an unpaired device close.
dev_open_flags contains an unpaired dev_close_immediate so increment
open_count before calling it.
2017-12-12 17:56:58 +00:00
Alasdair G Kergon
b96862ee11 metadata: Consistently skip metadata areas that failed.
Even after writing some metadata has encountered problems, some commands
continue (rightly or wrongly) and attempt to make further changes.

Once an mda is marked MDA_FAILED, don't try to use it again.
This also applies when reverting, where one loop already skips
failed mdas but the other doesn't.

This fixes some device open_count warnings on relevant failure paths.
2017-12-12 17:52:45 +00:00
Alasdair G Kergon
c5ef76bf27 device: Internal error if writing 0 bytes to dev. 2017-12-12 12:57:25 +00:00
Alasdair G Kergon
b76c6951aa format_text: Adjust metadata alignment calculation.
Use new ALIGN_ABSOLUTE macro when calculating the start location
of new metadata and adjust the end of buffer detection so that
there is no longer an imposed gap between old and new metadata.
2017-12-11 20:25:03 +00:00
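A plausible form of an ALIGN_ABSOLUTE macro consistent with that description - advance an offset inside a metadata area so that the absolute disk position lands on an alignment boundary.  This is a reconstruction for illustration, not necessarily the committed definition:

/* 'start' is the absolute disk offset of the metadata area and 'offset' a
 * position within it; evaluates to the smallest offset >= 'offset' for which
 * (start + offset) is a multiple of 'alignment'. */
#define ALIGN_ABSOLUTE(offset, start, alignment) \
        ((offset) + (((alignment) - ((start) + (offset)) % (alignment)) % (alignment)))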
Alasdair G Kergon
053d35de47 format_text: Use absolute alignment to calculate metadata usage
Currently both start and offset should always be divisible by alignment,
so this should have no effect, but a later patch will increase alignment
so these variables can no longer be optimised out.
2017-12-11 17:14:38 +00:00
Alasdair G Kergon
2db67a8ea0 format_text: Move metadata size checking into separate fn.
Move checks into _metadata_fits_into_buffer() and add macro for alignment.
2017-12-11 17:08:29 +00:00
Alasdair G Kergon
46393bfca0 format_text: Log additional circular buffer information. 2017-12-11 16:07:34 +00:00
Alasdair G Kergon
49d486319f format_text: Replace PRI with FMT. 2017-12-11 15:39:25 +00:00
Zdenek Kabelac
71485ebfc7 thin: regression fix for metadata checking
Fix regression from commit f173274fe4
and restore support for 'disabled' checking via lvm.conf.
2017-12-08 13:21:15 +01:00
Zdenek Kabelac
455b26b8db activation: keep priority till memlock_unlock
Although it doesn't look like a measurable problem, flipping priorities
outside the activation window costs some time.

So, just as with memory locking, preserve the raised priority until
memlock_unlock() is called.

(Addition to commit c086dfadc3.)
2017-12-08 13:21:15 +01:00
Alasdair G Kergon
14b1e5270d format_text: Use explicit alignment in wrapping calc.
Expand out the metadata wrapping calculations to prepare
to support a larger alignment.

The current alignment is 512 bytes so
(mdac_area_start + rlocn->offset) % alignment is zero.
2017-12-08 01:18:46 +00:00
Zdenek Kabelac
f173274fe4 cleanup: reorder calling of pool checking tools
Test for a zero header before even starting to create the argument list for
execution of the thin/cache_check tool.
2017-12-07 21:00:39 +01:00
Alasdair G Kergon
2166d7be72 lvmetad: drop stray underscore 2017-12-07 16:24:14 +00:00
Alasdair G Kergon
d591d04103 device: Tag I/O for each mda on a device separately in log messages.
Mark the first metadata area on each text format PV as MDA_PRIMARY.
Pass this information down to the device layer so that when
there are two metadata areas on a block device, we can easily
distinguish two independent streams of I/O.
2017-12-07 03:48:11 +00:00
David Teigland
54154dc6f1 lvmlockd: clear coverity complaint 2017-12-06 10:49:31 -06:00
David Teigland
b910c34f09 lvmlockd: use pool lock for tmeta access
When a command is run on a named tmeta LV, use
the lock on the pool.
2017-12-05 14:31:03 -06:00
David Teigland
b9e4198500 lvmlockd: fix log print
Fix a log print from the previous commit.
2017-12-05 13:48:30 -06:00
David Teigland
5d5807b238 lvmlockd: improve error message for VG lock conflict
When there is significant VG lock contention which retries
have not been able to mask, print a better error message.
2017-12-05 11:53:03 -06:00
Heinz Mauelshagen
94632eb155 deactivate_lvs: deactivate any missing RaidLV legs
In case of failed legs, raid replaces those with
e.g. "vg-lv_rimage_0-missing_0_0" mapped to an error target.

Those erroneously remain on deactivation.

Fix by removing them on deactivation/removal of the RaidLV.
2017-12-05 18:48:06 +01:00
Alasdair G Kergon
7195df5aca device: Skip read-modify-write if replacing whole block. 2017-12-05 01:00:38 +00:00
Alasdair G Kergon
e4805e4883 device: categorise block i/o
Introduce enum dev_io_reason to categorise block device I/O
in debug messages so it's obvious what it is for.

DEV_IO_SIGNATURES   /* Scanning device signatures */
DEV_IO_LABEL        /* LVM PV disk label */
DEV_IO_MDA_HEADER   /* Text format metadata area header */
DEV_IO_MDA_CONTENT  /* Text format metadata area content */
DEV_IO_FMT1         /* Original LVM1 metadata format */
DEV_IO_POOL         /* Pool metadata format */
DEV_IO_LV           /* Content written to an LV */
DEV_IO_LOG          /* Logging messages */
2017-12-04 23:45:26 +00:00
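Written out as a C declaration, the categories above would look roughly like this (the enumerator names and comments come from the commit message; the typedef name is an assumption):

/* Reason an I/O is performed, used to annotate debug messages. */
typedef enum dev_io_reason {
        DEV_IO_SIGNATURES,   /* Scanning device signatures */
        DEV_IO_LABEL,        /* LVM PV disk label */
        DEV_IO_MDA_HEADER,   /* Text format metadata area header */
        DEV_IO_MDA_CONTENT,  /* Text format metadata area content */
        DEV_IO_FMT1,         /* Original LVM1 metadata format */
        DEV_IO_POOL,         /* Pool metadata format */
        DEV_IO_LV,           /* Content written to an LV */
        DEV_IO_LOG           /* Logging messages */
} dev_io_reason_t;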
Zdenek Kabelac
698483b5a1 activation: also lock memory for clustered locking
Commit c086dfadc3 failed to lock memory for the clustered suspend part,
since it uses a different locking reason.
2017-12-04 23:33:02 +01:00
Zdenek Kabelac
110dac870c cleanup: use existing define with prefix 2017-12-04 15:38:50 +01:00
Zdenek Kabelac
2a22576b2d cleanup: drop unused header
DM_UUID_LEN is no longer needed.
2017-12-04 15:38:50 +01:00
Heinz Mauelshagen
4daad1cf11 lv_manip: allow extension on --nosync raid lv
If the recovery of the replaced leg(s) of a RaidLV created without
initial resynchronization (i.e. "lvcreate --nosync ...") is
interrupted, the LV can't be extended because its sync rate is below 100%.
2017-12-01 18:38:18 +01:00
Heinz Mauelshagen
d3d18e637c raid: ignore --stripesize on raid4/5 conversion to 1 stripe
If the caller passes in a changed stripe size when reshaping raid4/5
to 1 stripe, aiming to convert to raid1 and optionally to linear,
ignore it to prevent data corruption.
2017-12-01 15:00:09 +01:00
Zdenek Kabelac
a42c3a0e90 cleanup: remove debug code 2017-12-01 12:19:09 +01:00
Zdenek Kabelac
4dc8184803 suspend: optimize generated list
Avoid adding the same LV to the list multiple times.
This just saves a couple of extra calls and ioctls and makes the log shorter.
2017-12-01 12:19:09 +01:00
Zdenek Kabelac
7e794b7748 activation: avoid rechecking pvmove node
Use a new third state, trace_pvmove_deps == 2.
In this state we know we have already seen the node and can skip further
testing.  The remaining value 1 signals that we want to track, and value 0
disables tracking, although the node is still checked in that case.

Reduces a large amount of duplicate ioctl queries.
2017-12-01 12:19:09 +01:00
Zdenek Kabelac
e4db42e476 activation: extend resume validation
Also check all snapshots when resume is requested:
the origin volume is already resumed, but possibly
some subLV or snapshot LV could still be suspended if
we are still in critical_section.
2017-12-01 12:19:09 +01:00
Zdenek Kabelac
c086dfadc3 activation: split priority from memory locking
When entering any critical section, lvm2 used to lock process memory
and raise task priority to avoid problems with page swapping and to minimize
the time non-resumed devices stay in the table.

With this patch, memory locking, which is expensive, is only used when
entering the 'suspending' section, as only in this section is there a risk
that lvm could be suspending a device which is later needed for paging.

Raised priority is still kept for all section entrances, as this is a
low-cost operation and may accelerate table resumes - although the real
impact can still be assessed later.
2017-12-01 12:19:09 +01:00
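On Linux the two halves map naturally onto setpriority() and mlockall(); a simplified sketch assuming the real code goes through lvm2's own memlock layer rather than calling these directly, and with an illustrative priority value:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

/* Cheap: done for every critical-section entrance. */
static void _raise_priority(void)
{
        if (setpriority(PRIO_PROCESS, 0, -18) == -1)  /* value is illustrative */
                perror("setpriority");
}

/* Expensive: memory is pinned only for the 'suspending' section, since only
 * there could lvm suspend a device that is later needed for paging. */
static int _enter_critical_section(int suspending)
{
        _raise_priority();

        if (suspending && mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
                perror("mlockall");
                return 0;
        }

        return 1;
}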
Zdenek Kabelac
c489dd2e17 pvmove: add missing segment merging
When pvmove finished and the metadata were updated, the code failed
to merge adjacent mergeable segments - so add an explicit merging
call after the pvmoved volumes are unlocked.

This avoids odd results where e.g. lvs could report
non-matching segments: lvs merges segments silently when reading
metadata, while the dm table left after pvmove still preserved the
non-merged segments.
2017-12-01 12:19:09 +01:00
Zdenek Kabelac
fbd8b456db pvmove: move code from tools to lib
Move the code that manipulates locking flags into the /lib part of lvm.
2017-12-01 12:18:32 +01:00
Alasdair G Kergon
a9812ec9d3 label: Remove unused verify functions.
label_verify has never been used so remove it.
2017-11-28 01:36:55 +00:00
Zdenek Kabelac
02e934c444 cleanup: reuse existing macro
Use existing macro to detect striped raid segment.
2017-11-27 10:34:30 +01:00
Zdenek Kabelac
f70404addb pvmove: enhance delayed_resume logic
At the moment we want to support delayed resume purely in the pvmove case.
So keep the logic internal to libdm so it can recognize the difference
between pvmove and other targets that do use delayed resume.

This fixes a problem introduced with commit aa68b898ff
for the mirror-on-mirror and snapshot-on-mirror cases.

TODO: likely add a new API call and let the libdm user select
delayed nodes explicitly.
2017-11-26 00:36:48 +01:00
Zdenek Kabelac
8c6fd0933f activation: enhance holders detection
Use code which detects holders in a more
backward-compatible way.

Replace reading the 'sysfs' uuid entry with a dm ioctl call.

Use the /sys/block/dm-X/holders path instead of
the newer /sys/dev/block/major:minor/holders path.

TODO:
There are a few more occurrences of this logic around the code,
so some abstract interface should be considered.
2017-11-26 00:31:26 +01:00
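A minimal sketch of reading that backward-compatible holders path for a given dm minor number; the function name and the printing are for illustration only:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>

/* List the holders of dm device 'minor' via /sys/block/dm-X/holders,
 * which is available on older kernels as well. */
static int _print_holders(unsigned minor)
{
        char path[PATH_MAX];
        DIR *d;
        struct dirent *de;

        snprintf(path, sizeof(path), "/sys/block/dm-%u/holders", minor);

        if (!(d = opendir(path)))
                return 0;

        while ((de = readdir(d)))
                if (de->d_name[0] != '.')
                        printf("holder: %s\n", de->d_name);

        closedir(d);

        return 1;
}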