Add a filter which tries to check whether a scanned device is a component
of an active multipath device.
First, only devices with a SCSI major number are handled by the filter.
It then checks whether the device has exactly one holder (in sysfs),
whether that holder is a device-mapper device, and whether its DM UUID
is prefixed by "MPATH-".
If so, the device is filtered out (see the sketch below).
The whole filter can be switched off by setting
mpath_component_detection in lvm.conf.
https://bugzilla.redhat.com/show_bug.cgi?id=597010
Signed-off-by: Milan Broz <mbroz@redhat.com>
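A minimal, self-contained sketch of the holder check described above, assuming
the usual /sys/block sysfs layout; the function name is hypothetical and this
is not the actual lvm2 filter code (which additionally restricts itself to
SCSI major numbers and honours the lvm.conf switch):

    /* Illustrative only: check whether a block device (e.g. "sda") has
     * exactly one sysfs holder and that holder is a device-mapper device
     * whose DM UUID carries the multipath prefix. */
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    static int is_mpath_component(const char *devname)
    {
        char path[PATH_MAX], holder[NAME_MAX + 1] = "", uuid[128] = "";
        struct dirent *de;
        DIR *d;
        FILE *f;
        int holders = 0;

        snprintf(path, sizeof(path), "/sys/block/%s/holders", devname);
        if (!(d = opendir(path)))
            return 0;
        while ((de = readdir(d)))
            if (de->d_name[0] != '.') {
                holders++;
                strncpy(holder, de->d_name, NAME_MAX);
            }
        closedir(d);

        /* Multipath claims its path devices exclusively. */
        if (holders != 1)
            return 0;

        /* Only device-mapper holders expose a dm/uuid attribute. */
        snprintf(path, sizeof(path), "/sys/block/%s/dm/uuid", holder);
        if (!(f = fopen(path, "r")))
            return 0;
        if (!fgets(uuid, sizeof(uuid), f))
            uuid[0] = '\0';
        fclose(f);

        /* Kernel exposes the prefix in lower case; match either form. */
        return !strncasecmp(uuid, "mpath-", 6);
    }

    int main(int argc, char **argv)
    {
        const char *dev = argc > 1 ? argv[1] : "sda";
        printf("%s %s a multipath component\n", dev,
               is_mpath_component(dev) ? "looks like" : "does not look like");
        return 0;
    }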
If the extent_size is smaller than the chunk_size we may try
to find a better alignment (wasting less space).
I.e. using a 4KB extent_size with a 64KB chunk size will
lead to the creation of a 64KB-aligned thin volume.
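A rough, self-contained illustration of the rounding involved (not the lvm2
allocation code; the helper name align_up is made up for the example):

    /* Illustrative only: round an extent-aligned offset up to the thin
     * chunk size when the extent size is smaller than the chunk size. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t align_up(uint64_t value, uint64_t alignment)
    {
        return (value + alignment - 1) / alignment * alignment;
    }

    int main(void)
    {
        const uint64_t extent_size = 4 * 1024;    /* 4KB extents */
        const uint64_t chunk_size = 64 * 1024;    /* 64KB thin chunks */
        uint64_t start = 5 * extent_size;         /* some extent-aligned offset */

        if (extent_size < chunk_size)
            start = align_up(start, chunk_size);  /* now 64KB aligned */

        printf("aligned start: %" PRIu64 " bytes\n", start);
        return 0;
    }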
Since we finally recognize thin creation only after
_determine_snapshot_type(), move _read_activation_params()
after it, so we can support lvcreate -an thin snapshots.
Always make sure the table gets reloaded.
For now, activate and deactivate the pool volume if it's not active.
FIXME: we could do this only if we are sure some thin volume is alive.
that we can also test clustered volume groups (vgcreate -c y) in the test
suite. Unfortunately we can't make this the testing default since cluster
mirrors require further infrastructure, and snapshots probably don't work at
all. I'll eventually add a few test cases that create clustered VGs
specifically.
Support allocation of metadata from the same PV if the VG
is built from only one PV.
As thinp is not a mirror, we do not require 2 PVs
for basic thin usage; the user only loses performance.
Since activation of the pool is now independent of thin activation,
the user may do whatever he needs; preferably the thin volume should stay alive,
but if it is found inactive, update_pool will bring the pool up.
Replace detach_pool_messages with update_pool_lv.
Move the creation code from the 'if' conditions into one place.
Ensure creation has finished all previous message operations
(see the sketch below).
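A hedged, self-contained sketch of the intended update_pool behaviour; the
struct and helper names below are illustrative stand-ins, not the real lvm2
internals:

    #include <stdbool.h>
    #include <stdio.h>

    struct pool_lv {
        const char *name;
        bool active;
        int queued_messages;
    };

    /* Hypothetical helpers standing in for lvm2 activation calls. */
    static bool activate(struct pool_lv *p)   { p->active = true;  return true; }
    static bool deactivate(struct pool_lv *p) { p->active = false; return true; }

    /* "update_pool": bring the pool up if needed, flush queued thin
     * messages, then restore the original activation state if the pool
     * was only activated here. */
    static bool update_pool_lv(struct pool_lv *pool, bool may_activate)
    {
        bool was_active = pool->active;

        if (!was_active && (!may_activate || !activate(pool)))
            return false;

        /* Ensure all previously queued message operations have finished. */
        printf("flushing %d queued message(s) for %s\n",
               pool->queued_messages, pool->name);
        pool->queued_messages = 0;

        if (!was_active)
            deactivate(pool);

        return true;
    }

    int main(void)
    {
        struct pool_lv pool = { "vg/pool", false, 2 };
        return update_pool_lv(&pool, true) ? 0 : 1;
    }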
Let's put the overlay device over the real thin pool device,
so we can get proper locking on a cluster.
Otherwise the pool LV would be activated implicitly in one case
and explicitly in the other, confusing the locking mechanism.
This patch makes the activation of the pool LV independent of the
activation of the thin LV, since they will both implicitly use the
real thin pool device.
A little code shuffling and added support for
DM_THIN_ERROR_DEVICE_ID, which might eventually be used
for activation of a thin volume which is going to be deleted.
For now we do not need it in lvm.
Normally, restart simply means "stop and start" for systemd. However, if
we're installing new versions of the dmeventd binary/libdevmapper, we need
to restart dmeventd. This fails if we have some devices monitored - we need
to call "dmeventd -R" instead.
The "ExecReload" did not work quite well in some old versions of systemd,
systemd assumed that only the configuration is reloaded on "ExecReload",
not the whole binary itself so it lost track of dmeventd daemon (it lost new
dmeventd PID). This is fixed and seems to be working fine now with recent
versions of dmeventd.
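An illustrative unit-file fragment for the arrangement discussed above; the
paths and the Type setting are assumptions, not the shipped dm-event.service:

    [Service]
    Type=forking
    ExecStart=/usr/sbin/dmeventd
    # Re-exec the running daemon in place ("dmeventd -R") so monitoring
    # state survives the binary upgrade; "systemctl reload" maps to this.
    ExecReload=/usr/sbin/dmeventd -R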
All thins are created with the next activation and the VG is updated
without messages. Only some basic commands work
(i.e. lvcreate -an -V10 -T mvg/pool).
There can be some combinations that confuse this system.
This functionality for snapshots is going to be interesting.
Add a new node flag send_messages that is used to simplify the
test of when to call _node_send_messages().
Add a call to _node_send_messages() when the pool is deeper in the tree
(see the sketch below).
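A hedged, self-contained sketch of the idea: the flag marks pool nodes that
still have queued messages so the activation walk can trivially decide when
to flush them, even when the pool sits deeper in the tree. The types below
are illustrative, not the libdevmapper dm_tree API.

    #include <stddef.h>
    #include <stdio.h>

    struct node {
        const char *name;
        int send_messages;        /* set on thin-pool nodes with queued messages */
        struct node **children;
        size_t num_children;
    };

    static void _node_send_messages(struct node *n)
    {
        printf("flushing queued thin messages for %s\n", n->name);
        n->send_messages = 0;
    }

    /* Depth-first activation walk: visit children first, then check the
     * flag, so a pool sitting deeper in the tree still gets its messages
     * sent at the right point. */
    static void _activate_subtree(struct node *n)
    {
        for (size_t i = 0; i < n->num_children; i++)
            _activate_subtree(n->children[i]);

        if (n->send_messages)
            _node_send_messages(n);
    }

    int main(void)
    {
        struct node pool = { "pool", 1, NULL, 0 };
        struct node *kids[] = { &pool };
        struct node thin = { "thin", 0, kids, 1 };

        _activate_subtree(&thin);   /* pool is below thin in this toy tree */
        return 0;
    }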