This is a better way to fix clustered synchronization with udev.
As the message-passing code still needs fixing, for now put
fs_unlock() after every activate/deactivate command in clvmd to
ensure nodes are properly created in time.
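For illustration, a minimal sketch of the idea in C, assuming a simplified
clvmd dispatch (the function names below are stand-ins, not the real clvmd
code):

static void fs_unlock(void)
{
        /* the real code waits here until udev has created/removed the nodes */
}

static int do_activate_lv(const char *resource)
{
        return 0;       /* stand-in for the real activation call */
}

static int process_activation(const char *resource)
{
        int r = do_activate_lv(resource);

        /* Flush pending udev operations after every activate/deactivate,
         * so the /dev nodes exist before we reply to the caller. */
        fs_unlock();

        return r;
}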
It would be most useful to add "dmsetup ls --tree" to the commands run.
This command helps in answering the question "which devices are actually
underneath a given LV?"
Although the info is available with other existing dmsetup commands,
adding this command gives a much clearer summary of complex setups.
Here's an example of an LVM mirror, with mirror images on partitions
created on top of multipath devices. The multipath devices are on
simple block devices. The stacking is easy to see in the
"dmsetup ls --tree" output:
vgmpathtest-lvmpathmir (253:14)
├─vgmpathtest-lvmpathmir_mimage_1 (253:13)
│ └─mpath5p1 (253:5)
│ └─mpath5 (253:2)
│ ├─ (8:16)
│ └─ (8:0)
├─vgmpathtest-lvmpathmir_mimage_0 (253:12)
│ └─mpath6p1 (253:6)
│ └─mpath6 (253:3)
│ ├─ (8:48)
│ └─ (8:32)
└─vgmpathtest-lvmpathmir_mlog (253:11)
└─mpath7 (253:4)
├─ (8:80)
└─ (8:64)
VolGroup00-LogVol01 (253:1)
└─ (202:2)
vgtest-lvmir (253:10)
├─vgtest-lvmir_mimage_1 (253:9)
│ └─ (7:1)
├─vgtest-lvmir_mimage_0 (253:8)
│ └─ (7:0)
└─vgtest-lvmir_mlog (253:7)
└─ (7:3)
VolGroup00-LogVol00 (253:0)
└─ (202:2)
But it is much harder to see the stacking with only the commands available
today ("dmsetup info", "dmsetup status", and "dmsetup table"). We could
piece together the stacking from "dmsetup table", but it requires
further processing (take the output of "dmsetup info" to map
names to major/minor numbers, then parse "dmsetup table", etc.).
Having this wrong has had no effect so far, because the
tree of callers only ever cared about the answer to the first
condition (!response), which determines whether a lock is
held or not. Correct responses, however, will be needed soon.
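As a hedged illustration of why the bug stayed invisible (names are
hypothetical, not the actual clvmd query code):

static int lock_held(const char *response)
{
        if (!response)          /* the only check callers relied on so far */
                return 0;       /* no lock held */

        /* The content of a non-NULL response was never examined, so a
         * wrong response string had no visible effect - yet. */
        return 1;
}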
Instead of implicitly syncing udev operations in the clustered and
file locking code, call synchronization directly in lock_vol() when
the operation unlocks a VG.
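A hedged sketch of the change, with illustrative flag values (the macro
names mirror LVM2's, but the values and surrounding code are simplified):

#define LCK_TYPE_MASK   0x0000000FU
#define LCK_UNLOCK      0x00000000U

static void fs_unlock(void) { /* wait until udev node operations finish */ }
static int _lock_vol(const char *vol, unsigned flags) { return 1; }

static int lock_vol(const char *vol, unsigned flags)
{
        int r = _lock_vol(vol, flags);

        /* Sync with udev in one place - when a VG gets unlocked -
         * instead of implicitly inside each locking backend. */
        if (r && (flags & LCK_TYPE_MASK) == LCK_UNLOCK)
                fs_unlock();

        return r;
}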
The problem is a missing implicit fs_unlock() in the no_locking code,
which is used with --sysinit when the locking directory is on a read-only
filesystem. In this case vgchange -ay could exit before all udev nodes are
properly synchronised, which may cause problems when such a node is accessed
right after the vgchange --sysinit command finishes.
Add test case for vgchange --sysinit.
As the option 'set -e -o pipefail' is very sensitive to broken pipes,
stop using '-q' for grep commands.
Otherwise this command (with a large enough table) would fail:
dmsetup table | egrep -q
with exit code 141 (128 + SIGPIPE).
As Peter prefers to keep '-o pipefail', make sure all piped commands
read the whole output and do not exit early - e.g. use
'grep pattern >/dev/null' instead of 'grep -q pattern', since grep
without '-q' consumes all of its input.
This is to avoid any scanning and processing of DM devices while they are in
a suspended state (e.g. a rename while the device is suspended generates a
CHANGE event!). Otherwise, any scanning in the rules could end up blocking
the calling process until the device is resumed, so we would not receive the
notification about udev rule completion until then (and that effectively
locks out the process awaiting the notification!).
However, we still keep 'disk' and any 'subsystem' related udev rules running.
We trust these; they should check for themselves whether a device is
suspended and not attempt any scanning if it is.
strncpy() (which checks each byte for \0) is not needed, as we always copy
a known length - so using memcpy() is a bit cheaper.
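For illustration (a generic sketch, not the actual call site):

#include <string.h>

/* When the number of bytes to copy is already known, memcpy() suffices;
 * strncpy() would additionally test every byte for '\0' and zero-pad. */
static void copy_len(char *dst, const char *src, size_t len)
{
        memcpy(dst, src, len);  /* copy exactly len bytes */
        dst[len] = '\0';        /* terminate explicitly; dst holds len + 1 bytes */
}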
Add missing log_error message for failed allocation.
Fix an assert abort in dmsetup (when compiled with pool debug), triggered by:
dmsetup splitname --nameprefixes --noheadings --rows gvg-a2
Move the pool begin into the inner loop - otherwise it would be using
an already 'ended' pool object.
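A hedged sketch of the pairing rule using libdevmapper's dm_pool calls (the
real splitname reporting code differs): every dm_pool_begin_object() must be
matched by dm_pool_end_object() within the same iteration.

#include <string.h>
#include "libdevmapper.h"

static int build_fields(struct dm_pool *mem, const char **fields,
                        unsigned num_fields, void **result)
{
        unsigned i;

        for (i = 0; i < num_fields; i++) {
                /* begin inside the loop, not once before it */
                if (!dm_pool_begin_object(mem, 64))
                        return 0;

                if (!dm_pool_grow_object(mem, fields[i], strlen(fields[i])) ||
                    !dm_pool_grow_object(mem, "", 1))   /* trailing '\0' */
                        return 0;

                result[i] = dm_pool_end_object(mem);
        }

        return 1;
}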
dlm_create_lockspace() returns a lockspace reference (even if the lockspace
already exists) and thus increases the DLM lockspace use count. This means
that after a clvmd restart the lockspace is still in use.
(The only way to clean up the environment to enable a clean cluster
shutdown is to call "dlm_tool leave clvmd" several times.)
Because only one clvmd can run at a time, we can use simpler logic:
try to open the lockspace with dlm_open_lockspace() and only if that fails
try to create a new one. This way the lockspace reference count does not
increase.
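A hedged sketch of the resulting logic using libdlm (error handling trimmed):

#include <libdlm.h>

static dlm_lshandle_t open_clvmd_lockspace(void)
{
        dlm_lshandle_t ls;

        /* Opening an existing lockspace does not take a new reference... */
        if ((ls = dlm_open_lockspace("clvmd")))
                return ls;

        /* ...so create the lockspace only when it does not exist yet. */
        return dlm_create_lockspace("clvmd", 0600);
}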
This is very easily reproducible with the "clvmd -S" command.
The patch also fixes the return code when clvmd_restart fails, and a
double free if the debug option was specified during restart.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=612862
Improve the condition within lock_vol() so we do not call an extra unlock
if the volume has just been deactivated.
The patch uses lck_type and replaces a negated 'and' condition with a more
readable 'or' condition.
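A hypothetical sketch of the rewrite (illustrative names and values, not the
exact lock_vol() code); De Morgan's law turns the negated 'and' into an
equivalent but more readable 'or':

#define LCK_UNLOCK 0x0U         /* illustrative value */

static int extra_unlock_needed(unsigned lck_type, int lv_just_deactivated)
{
        /* before: return !(lck_type == LCK_UNLOCK && lv_just_deactivated); */
        return lck_type != LCK_UNLOCK || !lv_just_deactivated;
}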
A few missing stack traces added.