Creating a snapshot of an exclusively activated LV results in clvmd deadlock.
When a logical volume is activated exclusively in a cluster, the
local (non-cluster-aware) target is used. However, when creating
a snapshot on the exclusive LV, the resulting suspend/resume fails
to load the appropriate device-mapper table - instead loading the
cluster-aware target.
This patch adds an 'exclusive' parameter to the pertinent resume
functions to allow the right target type to be loaded.
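A minimal sketch of the idea, with illustrative names rather than the
exact lvm2 signatures - the 'exclusive' flag is threaded down through
resume so the table reload can pick the local target:

    #include <stdbool.h>
    #include <stdio.h>

    struct logical_volume;

    /* Illustrative only: choose the target type for the table reload
     * based on whether the LV is exclusively activated. */
    static bool _load_table(const struct logical_volume *lv, bool exclusive)
    {
            (void) lv;
            printf("loading %s target\n",
                   exclusive ? "local (non-cluster-aware)" : "cluster-aware");
            return true;
    }

    static bool _resume_lv(struct logical_volume *lv, bool exclusive)
    {
            /* suspend/resume now carries the flag down to the table load */
            return _load_table(lv, exclusive);
    }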
As functions compiled under this define are apparently still part of the
public API (though lvm2 code never uses them unless this define is enabled
at compile time), keep the functions available in the code for now -> revert.
Fix regression from the 2.02.75 speedup - currently crc32 is a little bit
more complicated on big-endian CPUs, as the uint32_t needs to be shifted
here.
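For reference, a minimal byte-at-a-time CRC32 sketch (not the lvm2
implementation); the endian issue only arises once the computation goes
through a uint32_t accumulator, which, as noted above, then needs extra
shifting on big-endian CPUs:

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t _crc_table[256];

    /* Build the standard reflected CRC-32 lookup table. */
    static void _init_crc_table(void)
    {
            for (uint32_t i = 0; i < 256; i++) {
                    uint32_t c = i;
                    for (int k = 0; k < 8; k++)
                            c = (c & 1) ? 0xedb88320 ^ (c >> 1) : c >> 1;
                    _crc_table[i] = c;
            }
    }

    /* Byte-at-a-time update: endian-neutral, since each input byte is
     * folded into the accumulator individually. */
    static uint32_t _calc_crc(uint32_t crc, const uint8_t *buf, size_t size)
    {
            while (size--)
                    crc = _crc_table[(crc ^ *buf++) & 0xff] ^ (crc >> 8);

            return crc;
    }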
Make the default behaviour for device node creation configurable.
On a udev system the natural option is 'resume'.
For older systems where the user expects the node to be in /dev/mapper
immediately after dmsetup create --notable - use 'create'.
FIXME:
Code needs fixing to pass this flag through the udev cookie.
Allow a snapshot to be created on an LV that is exclusively activated.
In order to achieve this, we need to be able to query whether
the origin is active exclusively (a condition of being able to
add an exclusive snapshot).
Once we are able to query the exclusive activation of an LV, we
can safely create/activate the snapshot.
A change to 'hold_lock' was also made so that a request to acquire
a WRITE lock does not replace an EX lock, which is already a form
of write lock.
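A minimal sketch of that rule, with illustrative types rather than the
clvmd internals:

    enum lock_mode { LCK_READ, LCK_WRITE, LCK_EXCL };

    struct hold {
            enum lock_mode mode;
    };

    /* A WRITE request must not downgrade an existing EX lock, since EX
     * already implies write access. */
    static int hold_lock(struct hold *h, enum lock_mode requested)
    {
            if (requested == LCK_WRITE && h->mode == LCK_EXCL)
                    return 0;       /* keep the stronger EX lock */

            h->mode = requested;    /* otherwise convert as requested */
            return 0;
    }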
Add a new function dm_task_set_add_node() to select between 2 types
of node creation in the device directory.
DM_ADD_NODE_ON_RESUME is now the default and ensures the node is created
on resume. The old behaviour is available with DM_ADD_NODE_ON_CREATE;
in this case the node is created during dmsetup create --notable.
For the user, 2 new options for dmsetup create are added:
[{--addnodeonresume | --addnodeoncreate }]
Properly working node creation on resume is needed for proper operation
stacking and the ability to correctly check which state the device should
be in after the whole udev transaction.
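Usage from libdevmapper looks roughly like this (a sketch with error
handling trimmed; the device name is just an example):

    #include <libdevmapper.h>

    static int create_dev_node_immediately(const char *name)
    {
            struct dm_task *dmt;
            int r = 0;

            if (!(dmt = dm_task_create(DM_DEVICE_CREATE)))
                    return 0;

            if (!dm_task_set_name(dmt, name))
                    goto out;

            /* DM_ADD_NODE_ON_RESUME is the new default; request the old
             * 'node appears already on create --notable' semantics. */
            if (!dm_task_set_add_node(dmt, DM_ADD_NODE_ON_CREATE))
                    goto out;

            r = dm_task_run(dmt);
    out:
            dm_task_destroy(dmt);
            return r;
    }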
Remove the temporarily added fs_unlock() calls that fixed clvmd usability.
Now that message passing works properly, they are no longer needed.
Simplify the no_locking check for VG unlock, as the message is always sent
for all targets - clustered & non-clustered.
Thanks to the CLVMD_CMD_SYNC_NAMES propagation fix, message passing started
to work, so a message is now sent before the VG is unlocked.
The implicit sync in VG unlock is also removed from clvmd, as the message
is now delivered and processed in do_command().
Also add support for this new message to the external locking code
and mask this event from further processing.
With the ability to stack many operations in one udev transaction,
in some cases we are adding and removing the same device at the same time
(i.e. a deactivate followed by an activate).
This leads to a problem when checking stacked operations,
e.g. remove /dev/node1 followed by create /dev/node1.
If node creation is handled by udev, there is a problem: the stacked
operation warns about the existing node1 and tries to remove it,
while the next operation needs to recreate it.
The current code removes all previously stacked operations if the fs op is
FS_DEL - this patch adds similar behaviour for FS_ADD: it tries to
remove any 'delete' operation if udev is in use.
The FS_RENAME operation seems to be more complex, but as we always
stack FS_READ_AHEAD after an FS_ADD operation, it should be safe to
remove all previous operations on the node when udev is running.
The code does the same checking for stacked libdm and liblvm operations.
As a very simple optimization, counters were added for each stacked op
type to avoid unneeded list scans if no operation of that type exists in
the list (see the sketch below).
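A sketch of the FS_ADD rule and the counter optimization, using
illustrative structures rather than the actual fs-op code:

    #include <stdlib.h>
    #include <string.h>

    enum fs_op { FS_ADD, FS_DEL, FS_RENAME, FS_READ_AHEAD, NUM_FS_OPS };

    struct stacked_op {
            enum fs_op op;
            char name[64];
            struct stacked_op *next;
    };

    static unsigned _op_count[NUM_FS_OPS];

    static void _stack_fs_add(struct stacked_op **head, const char *name,
                              int udev_is_running)
    {
            struct stacked_op **opp = head, *op;

            /* The counter avoids walking the list when no DEL is stacked. */
            if (udev_is_running && _op_count[FS_DEL])
                    while ((op = *opp)) {
                            if (op->op == FS_DEL && !strcmp(op->name, name)) {
                                    *opp = op->next;  /* drop pending delete */
                                    _op_count[FS_DEL]--;
                                    free(op);
                                    continue;
                            }
                            opp = &op->next;
                    }

            /* Stack the FS_ADD op and bump its counter. */
            if (!(op = calloc(1, sizeof(*op))))
                    return;
            op->op = FS_ADD;
            strncpy(op->name, name, sizeof(op->name) - 1);
            op->next = *head;
            *head = op;
            _op_count[FS_ADD]++;
    }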
Enable skipping of fs_unlock() (udev sync) if only DEL operations are
stacked, as we do not use lv_info for already deleted nodes.
This is a better way to fix clustered synchronization with udev.
As the code for message passing still needs fixing, for now put
fs_unlock() after every activate/deactivate command in clvmd to
ensure nodes are created in time.
It would be most useful to add "dmsetup ls --tree" to the commands run.
This command helps in answering the question "which devices are actually
underneath a given LV?"
Although the info is available with other existing dmsetup commands,
adding this command gives a much clearer summary of complex setups.
Here's an example of an LVM mirror, with mirror images on partitions
created on top of multipath devices. The multipath devices are on
simple block devices. The stacking is easy to see in the
"dmsetup ls --tree" output:
vgmpathtest-lvmpathmir (253:14)
├─vgmpathtest-lvmpathmir_mimage_1 (253:13)
│ └─mpath5p1 (253:5)
│ └─mpath5 (253:2)
│ ├─ (8:16)
│ └─ (8:0)
├─vgmpathtest-lvmpathmir_mimage_0 (253:12)
│ └─mpath6p1 (253:6)
│ └─mpath6 (253:3)
│ ├─ (8:48)
│ └─ (8:32)
└─vgmpathtest-lvmpathmir_mlog (253:11)
└─mpath7 (253:4)
├─ (8:80)
└─ (8:64)
VolGroup00-LogVol01 (253:1)
└─ (202:2)
vgtest-lvmir (253:10)
├─vgtest-lvmir_mimage_1 (253:9)
│ └─ (7:1)
├─vgtest-lvmir_mimage_0 (253:8)
│ └─ (7:0)
└─vgtest-lvmir_mlog (253:7)
└─ (7:3)
VolGroup00-LogVol00 (253:0)
└─ (202:2)
But it is much harder to see the stacking with only the commands available
today ("dmsetup info", "dmsetup status", and "dmsetup table"). We could
piece together the stacking from "dmsetup table", but it requires
further processing (take the output from "dmsetup info" to map
names to major/minor, then parse "dmsetup table", etc).
Having this wrong had no effect yet, because the tree of callers
only ever cared about the answer to the first condition (!response),
which determines whether a lock is held or not. Correct responses,
however, will be needed soon.
Instead of implicitly syncing udev operations in the clustered and
file locking code, call synchronization directly in lock_vol() when
the operation unlocks the VG.
The problem is the missing implicit fs_unlock() in the no_locking code.
This is used with --sysinit when the locking dir is on a read-only
filesystem. In this case vgchange -ay could exit before all udev nodes
are properly synchronised and may cause problems when accessing such a
node right after the vgchange --sysinit command has finished.
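A rough sketch of where the synchronization now happens (illustrative
names, stubs, and flag handling - not the real lock_vol()):

    #define LCK_UNLOCK    0x00
    #define LCK_TYPE_MASK 0xff

    static void fs_unlock(void)
    {
            /* wait here for all stacked udev node create/remove ops */
    }

    static int _lock_resource(const char *resource, int flags)
    {
            (void) resource;
            (void) flags;
            return 1;       /* backend-specific locking would go here */
    }

    static int lock_vol(const char *vg_name, int flags)
    {
            int r = _lock_resource(vg_name, flags);

            /* Sync with udev exactly once, on VG unlock, regardless of
             * which locking backend (clustered, file, none) is in use. */
            if (r && (flags & LCK_TYPE_MASK) == LCK_UNLOCK)
                    fs_unlock();

            return r;
    }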
Add test case for vgchange --sysinit.
As the option 'set -e -o pipefail' is very sensitive to pipes breaking,
stop using '-q' for grep commands.
Otherwise this command (with a large enough table) would fail:
dmsetup table | egrep -q
with exit code 141 (128 + SIGPIPE).
As Peter suggested, he prefers to keep '-o pipefail' - so make sure all
piped commands read the whole output and do not exit too early.
This is to avoid any scanning and processing of DM devices while they are
in a suspended state (e.g. a rename while the device is suspended - a
CHANGE event is generated!). Otherwise, any scanning in the rules could
end up blocking the calling process until the device is resumed, so we
would not receive a notification about udev rules completion until then
(and that effectively locks out the process awaiting the notification!).
However, we still keep 'disk' and any 'subsystem' related udev rules
running. We trust these; they should check themselves whether a device
is suspended and avoid running any scanning if it is.
strncpy() (which checks each byte for '\0') is not needed, as we always
copy a known length - so using memcpy() is a bit cheaper.
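A minimal sketch of the substitution, assuming the caller already knows
the exact copy length (helper name is illustrative):

    #include <string.h>

    static void copy_fixed(char *dst, const char *src, size_t len)
    {
            /* was: strncpy(dst, src, len); strncpy checks every byte
             * for '\0' and zero-pads the remainder - unnecessary when
             * exactly len bytes are always copied. */
            memcpy(dst, src, len);
            dst[len] = '\0';        /* dst must have room for len + 1 */
    }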
Add missing log_error message for failed allocation.