Move free_vg() to vg.c, replace free_vg with release_vg
and make _free_vg internal.
Patch is needed for sharing VG in vginfo cache so the release_vg function name
is a better fit here.
Systemd preloads file descriptors for us and passes them to the
newly spawned daemon when using on-demand FIFO (or socket)
based activation.
This patch adds checks for file descriptors preloaded by
systemd and uses them instead of opening the FIFOs again
to properly support on-demand FIFO-based activation.
(We'll change FIFOs to sockets soon - but still this
part of the code will stay almost the same.)
The filename to adjust the oom score was changed in 2.6.36.
We should use oom_score_adj instead of oom_adj (which is still
there under /proc, but it's scheduled for removal in August 2012).
New oom_score_adj uses a range from -1000 (OOM_SCORE_ADJ_MIN,
disable oom killing) to 1000 (OOM_SCORE_ADJ_MAX).
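For reference, a minimal sketch of the adjustment described above, assuming the daemon writes the value for its own process (the helper name and error handling are illustrative, not the actual dmeventd code):

    #include <stdio.h>

    #define OOM_SCORE_ADJ_MIN (-1000)   /* 2.6.36+ interface: disables OOM killing */
    #define OOM_DISABLE (-17)           /* legacy oom_adj value with the same meaning */

    /* Prefer the new oom_score_adj file; fall back to the old oom_adj one. */
    static int _protect_self_from_oom_killer(void)
    {
            FILE *fp;

            if ((fp = fopen("/proc/self/oom_score_adj", "w"))) {
                    fprintf(fp, "%d", OOM_SCORE_ADJ_MIN);
                    return fclose(fp) ? 0 : 1;
            }

            if (!(fp = fopen("/proc/self/oom_adj", "w")))
                    return 0;

            fprintf(fp, "%d", OOM_DISABLE);
            return fclose(fp) ? 0 : 1;
    }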
very reasonable amount of parallel access, although the hash tables may become
a point of contention under heavy loads. Nevertheless, there should be orders
of magnitude less contention on the hash table locks than we currently have on
block device scanning.
are affected by the move. (Currently it's possible for I/O to become
trapped between suspended devices, amongst other problems.)
The current fix was selected so as to minimise the testing surface. I
hope eventually to replace it with a cleaner one that extends the
deptree code.
Some lvconvert scenarios still suffer from related problems.
allocates these buffers in such a way that it adds a memory page for each such
buffer, and the size checked when unlocking memory will mismatch by 1 or 2 pages.
This happens when we print or read lines without '\n', so these buffers get
used. To avoid this extra allocation, use setvbuf to set these buffers up ahead of time.
Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com>
Reviewed-by: Milan Broz <mbroz@redhat.com>
Reviewed-by: Petr Rockai <prockai@redhat.com>
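A small sketch of the setvbuf() approach, assuming static buffers are assigned before any I/O is done on the streams (buffer names and sizes are illustrative):

    #include <stdio.h>

    static char _stdin_buf[8192];
    static char _stdout_buf[8192];

    /* Assign stdio buffers up front, before any read/write on the streams,
     * so glibc does not allocate an extra page per stream later and upset
     * the locked-memory size check. */
    static int _preallocate_stdio_buffers(void)
    {
            if (setvbuf(stdin, _stdin_buf, _IOLBF, sizeof(_stdin_buf)))
                    return 0;

            if (setvbuf(stdout, _stdout_buf, _IOLBF, sizeof(_stdout_buf)))
                    return 0;

            return 1;
    }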
'a small step' towards a cleaner shutdown sequence.
Normally clvmd doesn't care about unreleased memory on exit -
but for valgrind testing it's better to have it all cleaned up.
So - a few things are left on the exit path - this patch starts to remove
just some of them.
1. lvm_thread_fs is made a thread which can be joined on exit().
2. Memory allocated to the local_client_head list is released.
(This part is somewhat more complex if the proper reaction is
needed - and as it requires some heavier code moving - it will
be resolved later.)
Fix 2 more functions sending cluster messages to avoid passing uninitialised bytes
and to compensate for the 1 extra byte attached to the message by the clvm_header.args[1]
member variable.
Fixing a few issues:
struct clvm_header contains 'char args[1]' - so adding '+ 1' here
to the message length calculation is 1 byte off.
A message with its last byte uninitialised is then passed to the write function.
Also update the related arglen.
Initialise xid and clientid to 0.
Memory allocation is checked for NULL
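An illustrative sketch of the length arithmetic in question (the struct and payload variable are hypothetical; only the args[1] trick mirrors the real struct clvm_header):

    struct clvm_header_example {
            /* ... fixed header fields ... */
            char args[1];   /* first byte of the variable-length payload */
    };

    /* sizeof() already includes one payload byte via args[1], so the total
     * message size is sizeof(header) + payload_len - 1. Adding '+ 1' instead
     * makes the buffer one byte too long, and that last byte reaches write()
     * uninitialised. */
    size_t msg_len = sizeof(struct clvm_header_example) + payload_len - 1;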
- returned char does not need to be explicitly const
- warn if pipe() fails in clvmd (more fixes here needed for error paths...)
- assign (and ignore) read() output in drain buffer
Fixing some const warnings - with an API change in:
int vg_extend(struct volume_group *vg, int pv_count, const char *const *pv_names,
The change is needed - as lvm2api expects const behaviour here.
So vg_extend() does a local strdup for unescaping.
skip_dev_dir returns const char* from const char* vg_name.
The rest of the patch is cleanup of related warnings.
Also use the dm_report_field_string() API change to simplify
casting in _string_disp and _lvname_disp.
New strategy for memory locking to decrease the number of calls to
lock/unlock memory when processing critical lvm functions.
Introducing functions for critical section.
Inside the critical section - memory is always locked.
When leaving the critical section, the memory stays locked
until memlock_unlock() is called - this happens with
sync_local_dev_names() and sync_dev_names() function call.
memlock_reset() is needed to reset locking numbers after fork
(polldaemon).
The patch itself is mostly rename:
memlock_inc -> critical_section_inc
memlock_dec -> critical_section_dec
memlock -> critical_section
Daemons (clvmd, dmeventd) are using memlock_daemon_inc&dec
(mlockall()), thus they will never release or relock memory they've
already locked.
sync_local_dev_names() and sync_dev_names() are now functions rather than macros.
That's better for debugging - and also we do not need to add memlock.h
to the locking.h header (for the memlock_unlock() prototype).
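A rough sketch of the counting scheme the rename reflects (the function bodies and the _lock_mem_if_needed() helper are illustrative, not the real memlock.c code):

    static int _critical_section = 0;

    void critical_section_inc(struct cmd_context *cmd)
    {
            /* Entering a critical section: make sure memory is locked. */
            if (!_critical_section++)
                    _lock_mem_if_needed(cmd);       /* hypothetical helper */
    }

    void critical_section_dec(struct cmd_context *cmd)
    {
            /* Leaving a critical section: memory deliberately stays locked
             * until memlock_unlock() is called via sync_local_dev_names()
             * or sync_dev_names(). */
            _critical_section--;
    }

    int critical_section(void)
    {
            return _critical_section;
    }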
results in clvmd deadlock
When a logical volume is activated exclusively in a cluster, the
local (non-cluster-aware) target is used. However, when creating
a snapshot on the exclusive LV, the resulting suspend/resume fails
to load the appropriate device-mapper table - instead loading the
cluster-aware target.
This patch adds an 'exclusive' parameter to the pertinent resume
functions to allow for the right target type to be loaded.
activated.
In order to achieve this, we need to be able to query whether
the origin is active exclusively (a condition of being able to
add an exclusive snapshot).
Once we are able to query the exclusive activation of an LV, we
can safely create/activate the snapshot.
A change to 'hold_lock' was also made so that a request to acquire
a WRITE lock did not replace an EX lock, which is already a form
of write lock.
Remove temporarily added fs_unlock() calls to fix clvmd usability.
Now that message passing is working properly - they are no longer needed.
Simplify the no_locking check for VG unlock - as the message is always sent
for all targets - clustered & non-clustered.
Thanks to the CLVMD_CMD_SYNC_NAMES propagation fix, message passing started
to work. So start sending a message before the VG is unlocked.
Also remove the implicit sync in VG unlock from clvmd, as now the message
is delivered and processed in do_command().
Also add support for this new message in external locking
and mask this event from further processing.
This is a better way to fix clustered synchronization with udev.
As the code for message passing still needs fixing - for now put
fs_unlock() after every activate/deactivate command in clvmd to
ensure nodes are properly created in time.
Instead of implicitly syncing the udev operation in clustered and
file locking code - call synchronization directly in lock_vol() when
the operation unlocks the VG.
The problem is a missing implicit fs_unlock() in the no_locking code.
This is used with --sysinit on a read-only filesystem locking dir.
In this case vgchange -ay could exit before all udev nodes are properly
synchronised, and may cause problems with accessing such a node right after
the vgchange --sysinit command has finished.
Add test case for vgchange --sysinit.
return a lockspace reference (even if the lockspace already exists)
and thus increases the DLM lockspace count. It means that after
a clvmd restart the lockspace is still in use.
(The only way to clean up the environment to enable a clean cluster
shutdown is to call "dlm_tool leave clvmd" several times.)
Because only one clvmd can run at a time, we can use simpler logic:
try to open the lockspace with dlm_open_lockspace() and only if that fails
try to create a new one. This way the lockspace reference does not
increase.
Very easily reproducible with the "clvmd -S" command.
The patch also fixes the return code when clvmd_restart fails and fixes a
double free if the debug option was specified during restart.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=612862
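A hedged sketch of the open-before-create logic described above, using the two libdlm calls named in the message (lockspace name and mode are illustrative, error handling trimmed):

    #include <libdlm.h>

    static dlm_lshandle_t _get_clvmd_lockspace(void)
    {
            dlm_lshandle_t ls;

            /* Re-use an existing lockspace so its DLM reference count
             * does not grow across clvmd restarts. */
            if ((ls = dlm_open_lockspace("clvmd")))
                    return ls;

            /* Only create a new lockspace when none exists yet. */
            return dlm_create_lockspace("clvmd", 0600);
    }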
Stop calling fs_unlock() from lv_de/activate().
Start using internal lvm fs cookie for dm_tree.
Stop directly calling dm_udev_wait() and
dm_tree_set/get_cookie() from activate code -
it's now called through fs_unlock() function.
Add lvm_do_fs_unlock()
Call fs_unlock() when unlocking the VG, where the implicit unlock solves the
problem also for a cluster - thus no extra command for the clustering
environment is required - only the lvm_do_fs_unlock() function is added
to call lvm's fs_unlock() while holding the lvm_lock mutex in clvmd.
Add fs_unlock() also to set_lv() so the command waits until devices
are ready for regular open (i.e. wiping their beginning).
Move fs_unlock() prototype to activation.h to keep fs.h private
in lib/activate dir and not expose other functions from this header.
The variable 'ret' assigned from _do_event() was actually not used and was
overwritten by the next assignment without the returned value ever being read.
The code is reformatted - so the error path is put in the if() branch and the
normal code is put after the 'if', together with a FIXME comment.
FIXME lowprio: logging needs to be fixed in this code,
- multiple log_errors are printed, stacks are missing...
A call to pthread_join() does not set errno even though its return value
looks like an errno. For now assign errno from the return value and still use
strerror() to print some error message, as this seems to be commonly used.
Also add a log_sys_error() message for a failed close of the local pipe.
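The pattern in question, roughly (a sketch; the thread variable and message are illustrative):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* pthread_join() reports failure through its return value, not errno. */
    int ret = pthread_join(thread, NULL);
    if (ret) {
            errno = ret;    /* keep strerror()-based logging working */
            fprintf(stderr, "Failed to join thread: %s\n", strerror(errno));
    }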
As the ternary operator has lower precedence than addition, this check
was not doing what seemed to be expected.
So enclose the test in parentheses and check for NULL in *buf.
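The pitfall, illustrated with hypothetical names:

    /* Intended: add either len_a or len_b depending on the flag. */
    size_t bad  = base + flag ? len_a : len_b;      /* parsed as (base + flag) ? len_a : len_b */
    size_t good = base + (flag ? len_a : len_b);    /* parentheses give the intended result */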
We need to be sure that /var/run and /var/lock are always there.
(E.g. these two directories could be on tmpfs, which then loses
all its content after a reboot.)
Detect existence of new SELinux selabel interface during configure.
Use new dm_prepare_selinux_context instead of dm_set_selinux_context.
We should set the SELinux context before the actual file system object creation.
The new dm_prepare_selinux_context function sets this using the selabel_lookup
fn in conjunction with the setfscreatecon fn. If the selinux/label.h interface
(which should be a part of the selinux library) is not found during configure,
we fall back to the original matchpathcon function instead.
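Roughly, the new behaviour combines the two libselinux calls like this (a simplified sketch, not the actual dm_prepare_selinux_context implementation):

    #include <sys/stat.h>
    #include <selinux/selinux.h>
    #include <selinux/label.h>

    static struct selabel_handle *_selabel_handle;

    /* Look up the context the policy expects for 'path' (with file type 'mode')
     * and set it as the fs creation context, so the object is created with the
     * correct label from the start. */
    static int _prepare_selinux_context(const char *path, mode_t mode)
    {
            char *context = NULL;

            if (!_selabel_handle &&
                !(_selabel_handle = selabel_open(SELABEL_CTX_FILE, NULL, 0)))
                    return 0;

            if (selabel_lookup(_selabel_handle, &context, path, mode) < 0)
                    return 0;

            if (setfscreatecon(context) < 0) {
                    freecon(context);
                    return 0;
            }

            freecon(context);
            return 1;
    }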
LCK_CACHE is defined as 0x100 so it cannot be passed through an
unsigned char parameter - remove it from the sprintf code.
If LCK_CLUSTER should be printed here - a lot of code needs
to be reworked - so add a FIXME comment.
The management threads (main_loop, the socket thread) could close a single fd
twice in a row sometimes. At least one other thread can be running at the same
time as the threads doing the double close. That one running thread also
happens to do some IO (namely, open /proc/devices, read from it, close it). If
there was enough "demand" for the local socket, this could happen:
- a connection to clvmd is about to finish, let's say the fd is 13 (it often
happens to be in my test script, don't ask why)
- the local_sock thread calls close(13)
- the lvm thread calls open("/proc/devices"...) and gets 13
- the main_loop thread calls close(13) [OOPS!]
- new connection arrives, and is accept'd by a (new) local_sock thread
- the accept gives an fd of 13 (since it's the lowest free fd at this point)
- the lvm thread gets around to reading from its /proc/devices handle... 13,
again
- the lvm thread hangs forever trying to read from the socket instead of
/proc/devices
Signed-off-by: Petr Rockai <prockai@redhat.com>
Reviewed-by: Milan Broz <mbroz@redhat.com>
Function pull_stateo() checks for a NULL 'buf' - but the return for this error
path was missing. The cmirror code never calls this function with a NULL 'buf',
so this fix has no effect on the current code base, but it makes clang happier.
It's quite a new feature which is not supported by older compilers.
So until some better macros are introduced into the LVM code - hotfix the current
compilation problems and compile this code only for compilers defining __clang__.
We cast (char*) to (uint32_t*), which changes alignment requirements.
In our case the code has been correct, as alloca() returns a properly
aligned buffer; however this patch makes it cleaner and more readable
and avoids generating a warning.
The signalling code (pthread_cond_signal/pthread_cond_wait) in the
pre_and_post_thread was using the wait mutex (see man pthread_cond_wait)
incorrectly, and this could cause clvmd to deadlock when timing was
right. Detailed explanation of the problem follows.
There is a single mutex around (L for Lock, U for Unlock), a signal (S) and a
wait (W). C for pthread_create. Time flows from left to right, each arrow is a
thread.
So first the "naive" scenario, with no mutex (PPT = pre_and_post_thread, MCT =
main clvmd thread; well actually the thread that does read_from_local_sock). I
will also use X, for a moment when MCT actually waits for something to happen
that PPT was supposed to do.
MCT -----C ------S--X-----S----X----------------------S------XXXXXXXXX
| everything OK up to this --> <-- point...
PPT -----WWW-----WWWW------------------------------WWWWWWWWWWWWW
Ok, so pthread API actually does not let you use W/S like that. It goes out of
its way to tell you that you need a mutex to protect the W so that the above
cannot happen. *But* if you are creative and just lock around the W's and S's,
this happens:
MCT ----C-----LSU----X-----LSU----X------------LSU------XXXXXXX
|
PPT ---LWWWU-------LWWWWU-----------------------LWWWWWWWWW
Ooops. Nothing changed (the above is what actually was done by clvmd before
this patch). So let's do it differently, holding L locked *all* the time in
PPT, unless we are actually in W (this is something that the pthread API does
itself, see the man page).
MCT ----C-----LSU------X---LSU---X-----LLLLLLLSU----X----
| (and they live happily ever after)
PPT L---WWWWW---------WWWW----------------W----------
So W actually ensures that L is unlocked *atomically* together with entering
the wait. That means that unless PPT is actually waiting, it cannot be
signalled by MCT. So if MCT happens to signal it too soon (it wasn't waiting
yet), it (MCT) will be blocked on the mutex (L), until PPT is actually ready to
do something.
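In code, the working arrangement corresponds to the standard condition-variable pattern below (a generic sketch, not the actual pre_and_post_thread code):

    #include <pthread.h>

    static pthread_mutex_t _lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t _cond = PTHREAD_COND_INITIALIZER;
    static int _work_ready = 0;

    /* PPT: hold the mutex the whole time; pthread_cond_wait() releases it
     * atomically while waiting and re-acquires it before returning. */
    static void *_ppt(void *arg)
    {
            (void) arg;
            pthread_mutex_lock(&_lock);
            for (;;) {
                    while (!_work_ready)
                            pthread_cond_wait(&_cond, &_lock);
                    _work_ready = 0;
                    /* ... do the requested work, mutex held ... */
            }
            return NULL;
    }

    /* MCT: if PPT is not waiting yet, this blocks on the mutex instead of
     * delivering a signal that would otherwise be lost. */
    static void _mct_signal(void)
    {
            pthread_mutex_lock(&_lock);
            _work_ready = 1;
            pthread_cond_signal(&_cond);
            pthread_mutex_unlock(&_lock);
    }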
to lvm.conf in the activation section: 'snapshot_autoextend_threshold' and
'snapshot_autoextend_percent', that define how to handle automatic snapshot
extension. The former defines when the snapshot should be extended: when its
space usage exceeds this many percent. The latter defines how much extra space
should be allocated for the snapshot, in percent of its current size.
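For example, an lvm.conf fragment along these lines (the threshold and percentage values are illustrative) would grow a snapshot by 20% of its current size whenever it becomes more than 70% full:

    activation {
        snapshot_autoextend_threshold = 70
        snapshot_autoextend_percent = 20
    }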
can be opprobriously slow if created with '--nosync'.
One of the ways cluster mirrors coordinate I/O and recovery
among the different machines is by the use of the log
function 'is_remote_recovering()' which lets nodes know if
a region they wish to perform a write on is currently being
recovered on another node. If the region is being recovered,
the I/O is delayed.
The 'is_remote_recovering' routine has been optimized to
avoid the deluge of requests that would be issued to the
userspace log server by maintaining a marker of how far
the recovery has gotten. It can then immediately return
'not recovering' if the region being inquired about is
less than this mark. Additionally, if the region of
concern is greater than the mark, the function will
limit the number of transmissions to userspace by assuming
the region /is/ being recovered when skipping the
transmission. This limits the amount of processing
and updates the mark in 1/4 sec time steps.
This patch fixes a problem where 'the mark' is not being
updated because of faulty logic in the userspace log
daemon. When '--nosync' is used to create a cluster
mirror, the userspace log daemon never has a chance
to update the mark in the normal way. The fix is to set
the mark to "complete" if the mirror was created with
the --nosync flag.
In all top vg read functions only LCK_VG_READ/WRITE can be used.
All other vg lock definitions are low-level backend machinery.
Moreover, LCK_WRITE cannot be tested through bitmask.
This patch fixes these mistakes.
For _recover_vg() we do not need lock_flags; it can only be one of the
two above, and we are always upgrading to a LCK_VG_WRITE lock there.
(N.B. that code is racy.)
There is no functional change in the code (despite the wrong masking
it produces correct bits :-)
The lvm repair issues I believe are the superficial symptoms of this
bug - there are worse issues that are not as clearly seen. From my
inline comments:
* If the mirror was successfully recovered, we want to always
* force every machine to write to all devices - otherwise,
* corruption will occur. Here's how:
* Node1 suffers a failure and marks a region out-of-sync
* Node2 attempts a write, gets by is_remote_recovering,
* and queries the sync status of the region - finding
* it out-of-sync.
* Node2 thinks the write should be a nosync write, but it
* hasn't suffered the drive failure that Node1 has yet.
* It then issues a generic_make_request directly to
* the primary image only - which is exactly the device
* that has suffered the failure.
* Node2 suffers a lost write - which completely bypasses the
* mirror layer because it had gone through generic_m_r.
* The file system will likely explode at this point due to
* I/O errors. If it wasn't the primary that failed, it is
* easily possible in this case to issue writes to just one
* of the remaining images - also leaving the mirror inconsistent.
*
* We let in_sync() return 1 in a cluster regardless of what is
* in the bitmap once recovery has successfully completed on a
* mirror. This ensures the mirroring code will continue to
* attempt to write to all mirror images. The worst that can
* happen for reads is that additional read attempts may be
* taken.
Ignore snapshots when performing mirror recovery beneath an origin.
Pass LCK_ORIGIN_ONLY flag around cluster.
Add suspend_lv_origin and resume_lv_origin using LCK_ORIGIN_ONLY.
corruption bug in cmirror. 'dm_bit' is only ever used as a boolean operation
within LVM, but it can return a range of values. If the bit is set, a power of
2 is returned. If the bit is unset, 0 is returned.
'log_test_bit' (a function in the cluster mirror log daemon code) has switched
to using the dm bit operations in rhel6. There are two places in the daemon
code where 'log_test_bit' is not used merely as a boolean, but rather the
return value is used as the return value for the log functions 'is_clean' and
'in_sync' - having assumed that 'dm_bit' was returning 0 or 1 only.
One place the 'in_sync' function is utilized is in 'dm_rh_get_state' - a
function that informs the mirroring code how to treat I/O and which devices to
read/write from. 'dm_rh_get_state' was checking if the return value of
'in_sync' was 1 to determine if the region was DM_RH_CLEAN. Since 'dm_bit'
(and by extension 'log_test_bit' and 'in_sync') was returning powers of 2,
DM_RH_CLEAN was rarely being reported as it should have been. Thinking the
region was out-of-sync, the mirroring code would write only to the primary
device. When the primary device was failed, all of those writes were lost -
leaving the entire mirror corrupted.
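The essence of the fix is to normalise the dm_bit() result to 0/1 before it is used as a log-function return value (a simplified sketch of the daemon-side helper, not the exact cmirrord code):

    #include "libdevmapper.h"

    /* dm_bit() returns 0 or a non-zero power of two, so callers that compare
     * against 1 (such as dm_rh_get_state via 'in_sync') must never see the
     * raw value. */
    static int log_test_bit(dm_bitset_t bits, int bit)
    {
            return dm_bit(bits, bit) ? 1 : 0;
    }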
Switch dmeventd to use dm_create_lockfile and drop duplicate code.
Allow clvmd pidfile to be configurable.
Switch cmirrord and clvmd to use dm_create_lockfile.
Moreover, in the current mirror handling, when it calls activate
on a removed but suspended detached log, this counter drops below zero
and confuses the debug log.
When a mirror is being downconverted in a cluster, a series of suspends and
resumes is executed.
With the change to using UUIDs in dev_manager instead of names, the behaviour
has changed with regards to including an _mlog in the deptree of a logical
volume. In the old (pre-UUID-enabled) code, the _mlog would appear in a deptree
of any volume purely based on a name match: a linear volume foo would include
foo_mlog in its dependencies if that happened to exist. This behaviour was
fixed and the mlog is now only included for mirrors.
By a coincidence, this mlog bug had been hiding a different bug in clvmd. When
a mirror is being dismantled (and converted to a linear volume), it is first
suspended as a whole, then later resumed in parts. Nevertheless, the overall
memlock balance is maintained in this operation. The problem kicks in, because
even though the mirror log was suspended as part of the mirror, when the
dismantled mirror is resumed again, it is no longer a mirror and therefore the
mirror log stays suspended. This would not be a problem in itself, since
_delete_lv (from metadata/mirror.c) is called on it subsequently, which does an
activate/deactivate cycle and removes the LV. The activate/deactivate cycle
correctly prompts clvmd to resume the device: however, in doing this, it will
issue an unpaired resume operation (the suspend that caused the mirror log to
be suspended is paired with resuming the dismantled mirror later). We have
concluded that the path in clvmd should never affect memlock_count, since there
should never be an unmatched explicit suspend preceding this resume.
Because execve stops the command loop,
we never receive a response (only a socket close) for clvmd -S,
so waiting for a response here makes no sense.
But if the calling process (clvmd -S) exits too early, the connection
is closed from the client side, clvmd takes this as an error and
never runs the restart code.
Ugly hack(TM).
linux/kdev_t.h even though it wasn't needed. Strangely, it seems
to be causing problems on various architectures (i686) in the
function daemons/cmirrord/functions.c:disk_status_info()->sprintf.
I'm not sure why this is a problem since none of the macros in
kdev_t.h are used in that code, but it certainly doesn't hurt to
pull an unnecessary header and it seems to fix the problem.
Code is mixing up internal DLM and LVM definitions of lock
modes and flags.
OpenAIS and singlenode locking do not depend on DLM but
the code currently cannot be compiled without libdlm.h!
The LCK_* flags are an LVM abstraction, used throughout the code.
Only the low-level backend (clvmd-cman etc.) should use DLM definitions,
and this code should do all the needed conversions.
Because there are two DLM flags used in generic code
(NOQUEUE, CONVERT), we define them in a similar way to the lock modes.
(So all needed binary-compatible flags are in one place in locking.h.)
(Further code cleaning still needed, though :-)
- allocate environment dynamically (still missing some limit?)
- try to recover, if destroy failed (do not destroy lvm here) and free memory
- check strdup() return codes
- report failure to log
- do not print NULL in exclusive lock loop
Target install_dm_plugin installs files to libdir/device-mapper.
Target install_lvm2_plugin installs files to libdir/lvm2.
Both targets create relative links to libdir to keep the code
compatible with the current dlopen handling.
Once we are able to read plugins from a subdirectory, the links
can be removed.
Internally, we used DM names instead of UUIDs while processing event
handlers. This caused problems while trying to vgrename a VG with active LVs,
where the names are being changed and so the devices could not be found.
The patch also contains a little bit of refactoring, moving "build_dlid" code
found in dev_manager.c to "build_dm_uuid", now in lvm-string.c (so we have
build_dm_uuid and build_dm_name at one place).
Patch is inspired by Debian's extra patch.
- removes OWNER & GROUP make vars; they are part of the INSTALL command.
- adds INSTALL_PROGRAM for executables, uses $(INSTALL)
- adds INSTALL_DATA for non-executable data, uses $(INSTALL)
- adds INSTALL_WDATA for writable non-executable data, uses $(INSTALL)
- adds configure option --enable-write_install - to support
  installation of writable files used by distributions
- replaces usage of ifeq @LIB_SUFFIX@ with $(LIB_SUFFIX)
- installs .a files from static builds without executable flag
- installs .a files to $(usrlibdir) instead of $(libdir)
- installs all static binaries to $(staticdir)
- create .so links for devel package in $(usrlibdir) instead of
$(libdir)
- makes .so and .so.LIB_VERSION files within builddir
- removes VERSIONED_SHLIB and creates versioned LIB_SHARED automagically
- install LIB_SHARED via install_lib_shared target
- install plugins via install_lib_shared_plugin target
- prints whole 'install' command during installation instead of less
informative "Installing $(something) $(somewhere)"
- install multiple man pages with one INSTALL command
- use DISTCLEAN_TARGETS instead of creating multiple distclean targets
Usage of VPATH causes trouble when used within $(builddir).
Not only source files are found through VPATH,
but targets as well. (make --debug=v)
Thus if a user builds the code in $(srcdir) and also in some $(builddir),
they get mangled results, as some generated files (i.e. .export.sym)
are 'reused' from $(srcdir) instead of $(builddir).
This patch switches to using vpath, where we can explicitly name the
suffixes that should be looked up via vpath - we must take care that
we do not generate files with these suffixes:
.c, .in, .po, .exported_symbols
If dmeventd runs with the -d flag, it doesn't fork into the background.
The kill(getppid(), SIGTERM) call attempts to kill the parent dmeventd
process; however, if there is no parent, it kills whatever process spawned
dmeventd. When debugging with gdb, the parent is gdb, thus
kill(getppid(), SIGTERM) kills the debugger.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
If the error path of _register_for_event() calls _free_thread_status(),
the _lib_put() call is missing.
To make things simpler, move this _lib_put() into the common error path code.
As the header file <sys/mman.h> was not included in dmeventd.c,
the definition of MCL_CURRENT was missing, so this patch only makes
it obvious that we were not locking memory here.
This patch has no functional change.
A later part of this patch set handles mlockall() via memlock_inc_daemon().
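For reference, the call this code path nominally intended is the standard one below (a sketch; the later patches route the locking through memlock_inc_daemon() instead):

    #include <sys/mman.h>
    #include <stdio.h>

    /* Lock all current (and future) pages of the daemon into RAM. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
            perror("mlockall");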
in clvmd, dmeventd, man, tests.
Don't include dependency files for the cflow and cscope.out targets.
Improve dependency tracking for the dmeventd and liblvm2cmd sources.
to obtain sources. Create a make.tmpl target for
simpler generation of cflow files with the help of
CFLOW_LIST, CFLOW_LIST_TARGET, CFLOW_TARGET.
Still, cflow usage is not perfect.
Move the daemons/ and lib/ subtargets to their own Makefiles so we don't get
a double cleanup error during execution of the distclean target.
Instead of duplicating the clean target inside the distclean target,
just use it as a subtarget and avoid duplicating code.
This check-in enables the 'mirrored' log type. It can be specified
by using the '--mirrorlog' option as follows:
#> lvcreate -m1 --mirrorlog mirrored -L 5G -n lv vg
I've also included a couple of updates to the testsuite. These updates
include tests for the new log type, and some fixes to some of the
*lvconvert* tests.
clvmd's do_lock_lv() already properly controls dmeventd monitoring based
on LCK_DMEVENTD_MONITOR_MODE in lock_flags -- though one small fix was
needed for this to work: _lock_for_cluster() must treat
dmeventd_monitor_mode()'s return as a tri-state value.
Also cleanup do_lock_lv() to:
- explicitly init_dmeventd_monitor() based on LCK_DMEVENTD_MONITOR_MODE
- no longer reset init_dmeventd_monitor() to default at the end of
do_lock_lv() -- it is unnecessary
. Add "monitoring" option to "activation" section of lvm.conf
. Have clvmd consult the lvm.conf "activation/monitoring" too.
. Introduce toollib.c:get_activation_monitoring_mode().
. Error out when both --monitor and --ignoremonitoring are provided.
. Add --monitor and --ignoremonitoring support to lvcreate. Update
lvcreate man page accordingly.
. Clarify that '--monitor' controls the start and stop of monitoring in
the {vg,lv}change man pages.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
For static builds the dependency on the SELinux libs is not handled by 'ar'.
Until a better solution is found, STATIC_LIBS is used for static builds.
The patch updates the SELinux detection to use the 3rd & 4th parameters for Success/Fail.
It also removes the detection of pthread from this check, as we know which
version of libdevmapper we are going to link with lvm after the merge.
The SELinux header check is moved into the SELinux test code.
Create a new substituted variable PTHREAD_LIBS and link this library
only with the tools/libs which really need it - i.e. dmeventd.
Check for libpthread only for builds with clvmd or dmeventd.
Remove the variable LIB_PTHREAD.
*_safe. This had the effect of segfaulting the log daemon when
converting a mirror from one log type to another.
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
For mirror repair (and similar tasks) it can happen that a full
device rescan is issued from clvmd.
Because the code can be in the middle of a repair (calling suspend),
clvmd should never try to scan suspended devices
(otherwise it causes a deadlock).
Also the code must not change the ignore_suspended_device flag when
doing refresh_filters (called from the lvmcache scan code).
bitmap tracking was switched from the e2fsprogs implementation to
the device-mapper implementation (dm_bitset_t). The latter has a
leading uint32_t field designed to hold the number of bits that are
being tracked. The code was not properly handling this change in
all places. Specifically, when getting the bitmap to/from disk.
Endian adjustments will likely need to be made on the accounting
field as well, since bitmaps are passed between machines on
start-up.
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
The logic was that lvconvert repairs volumes, marking
PVs as MISSING, and a following vgreduce --removemissing
removes these missing devices.
Previously dmeventd mirror DSO removed all LV and PV
from VG by simply relying on
vgreduce --removemissing --force.
Now, there are two subsequent calls:
lvconvert --repair --use-policies
vgreduce --removemissing
So the VG is locked twice, opening space for all races
between other running lvm processes. If the PV reappears
with old metadata on it (so the winner performs autorepair,
if locking VG for update) the situation is even worse.
The patch simply adds removemissing PV functionality into
lvconvert, BUT ONLY if running with --repair and --use-policies,
and removing only those empty missing PVs which are
involved in the repair.
(This combination is expected to run only from dmeventd.)
This patch tries to correctly track changes in lvmcache related to commit/revert.
For vg_commit: if there is cached precommitted metadata, after a successful commit
this metadata must be tracked as committed.
For vg_revert: remote nodes must drop the precommitted metadata and its flag in lvmcache.
(N.B. The patch does not touch LV locks here in any way.)
All this machinery is needed to properly solve the remote node cache invalidation
which caused several recently observed problems.
The lock mode is an int masked by LCK_TYPE_MASK, always.
The patch also removes unnecessary masking of the lock flag on the sender side;
if masking is needed, it is done on the client side already.
- Add drop_precommitted flag to force drop precommitted metadata
- add lvmcache_commit_metadata() which upgrades precommitted metadata in cache
No functional change in this patch - just preparation for following change.
And decode flags in human-readable form in the client.
And clean up some trailing whitespace.
No functional change in this patch (only debugging messages changed).
The LV locks make sense only for clustered LVs.
Properly check the cluster flag and never issue a cluster lock here.
There are several places in the code where it is already checked; this
patch adds the check to all the calls that need it.
In the previous code the lock behaviour was inconsistent;
for example, the pre/post callback could take a lock even for a local volume,
but the deactivate call did not release this lock and it remained held forever.
The local LV lock request now just lets the underlying activation code run
on the local node, the same process as in local locking.
(Again, this is important for new mirror repair calls, here for local
mirrors but with cluster locking enabled.)
This is an unnoticed regression from commit 31672ff60e.
The pre/post callback needs to convert the lock always; the local node
is going to modify metadata in this case, and if the conversion fails,
the call is ignored.
It also fixes a bug when the lock is not yet held: we cannot set LKF_CONVERT
in this case, as it would fail because the lock does not exist.
Note that the automatic conversion is still disabled in the activate
call, so the original fix (reactivation of an exclusive LV) should
still be in place.
(The code already does not fail if unlocking a resource that is not locked.)
This is needed in the pre/post lock_lv call, where we can
request the same lock on the local node because of the suspend call.
- do_command and lock_vg expect flags (no change here)
Bug fixes:
- lock_vg should check for NONBLOCK on lock_cmd; flags have this bit masked out
- in do_pre/post_command the flag is not masked at all, which causes the
  code inside to never be run! (see the following patches; these functions
  expect the plain command without flags)
is granted at one mode and an attempt is made to convert it without the LCK_CONVERT
flag set, then it will return errno=EBUSY.
This fixes a pretty bad bug in which an LV could be activated exclusively on
one node and lvchange -ay on another would convert it to shared!
It might break some things in other areas, but I doubt it.
When clogd was renamed to cmirrord, somehow git got the remove of the old
files but not the add of the new files. This patch adds the new files.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
interface it should be using, it can still be overridden with -I.
If corosync isn't running or there is no information then the usual
checking will happen.
This code only builds if corosync is available.
This check-in includes the touch-ups, make file changes, copyrights,
and other necessities to include the cluster log daemon into the
build system.
[autoconf still needs to be run to generate the 'configure' and
'Makefile' files.]
Eliminate dependency on outside library, since the same functionality
exists in our tree.
[It is important that the bitops work in the same way, as the bitmaps
must remain backwards compatible. I haven't tested every architecture,
but the x86* archs work. My test involved using the old ext2fsprogs
bitops, memcpy'ing the bits over to the LVM bitset array and ensuring
that only the bits set via the old methods were set.]
The changes to remove LCK_NONBLOCK from the LVM locks broke clvmd because the
code was clearly wrong but working anyway! The constant was being masked rather
than the variable that was supposed to match against it.
A patch to the kernel, adding the 'luid' field to dm_ulog_request,
will allow us to properly identify log instances. We will now
be able to definitively identify which logs are to be removed/
suspended/resumed. This replaces the old faulty behavior of
assuming the logs were the same if they had the same UUID and
incrementing/decrementing a reference count.
made to compensate for the changes in the kernel-side component
that recently went upstream. (Things like: renamed structures,
removal of structure fields, and changes to arguments passed
between userspace and kernel.)
Use lvconvert --repair in dmeventd mirror DSO.
for now.
It replaces bad behaviour in one set of circumstances with bad behaviour
in a different set. We think the behaviour needs to be more configurable.
Currently, when the code needs to ensure that a volume is not
active on a remote node, it needs to try to activate the volume
exclusively.
The patch adds a simple clvmd command which queries all nodes
for the lock on a given resource.
The lock type is returned in the reply as text.
(But the code currently uses CR and EX modes only.)
This means two things:
1) Non-mirrored LVs will be no longer affected by mirror monitoring. (Before,
if you had a LV that went partially missing on a VG where a mirror leg failed,
this LV would be removed automatically by dmeventd... Probably not an
unrecoverable dataloss bug, but still quite unpleasant.)
2) If enough parallel PV space is available at the time of the mirror failure,
the failed devices will be automatically replaced using this spare space. Which
(and whether) free space may be used is still not configurable, but is a
planned feature. Since it is relatively easy to undo the action by converting
the mirror manually, I don't consider this to be a showstopper. In fact, I
think the compromise is much better than what we have now.
The build system supports a device-mapper-only install,
but the generic install target includes both dm+lvm2.
For distributions which use the separate install_device-mapper,
there is no way to install lvm2 only
(so after installing lvm2 for packaging purposes
the build system must remove the installed device-mapper files).
Fix it by allowing an install_lvm2 target, similarly to
install_cluster for clvmd.
(install = install_device-mapper + install_lvm2)
Run the backup of metadata on remote nodes in the
same place as on the local node - when calling backup().
Introduce backup_locally(), which calls only the
local backup if needed.
Remote backup is now triggered by the LCK_VG_BACKUP flag
combination (a special VG lock).
This lock type will call check_current_backup()
(including the backup_locally() call) and update
metadata on all nodes.
(The patch fixes the non-functional remote backup;
the current call during VG lock never triggers.)
- Rename unlock_all to destroy_lvhash; this function is called on
  cluster shutdown, unlocks everything and cleans up allocated info space.
- Tidy lv_hash_lock use.
Except for adding free(lvi) in the lv_hash destructor
there is no functional change.
(for example when CLVMD_CMD_LOCK_VG is called during vgscan).
If clvmd takes an LV lock, it calls
/* clean the pool for another command */
dm_pool_empty(cmd->mem);
to clean up the memory pool after the command.
Unfortunately, do_refresh_cache() does not call this,
and because it allocates some memory during its operation,
the pool grows.
Also do_refresh_cache should use lvm_lock
(it manipulates lvm internal data).
The same applies to the lvm_backup command.
Signed-off-by: Milan Broz <mbroz@redhat.com>
There is a rudimentary make file in place so people can build by hand
from 'LVM2/daemons/clogd'. It is not hooked into the main build system
yet. I am checking this in to provide people better access to the
source code.
There is still work to be done to make better use of existing code in
the LVM repository. (list.h could be removed in favor of existing list
implementations, for example. Logging might also be removed in favor
of what is already in the tree.)
I will probably defer updating WHATS_NEW_DM until this code is linked
into the main build system (unless otherwise instructed).
Very simple / crude method of removing 'is_static' from initialization.
Why should we require an application to tell us whether it is linked
statically or dynamically to libLVM? If the application is linked
statically, but libraries exist and dlopen() calls succeed, why
do we care if it's statically linked?
This allows us to remove one argument from create_toolcontext() and
moves it closer to a generic library init function.
In the arg_*() functions, we just use _the_args() directly.
For now we leave the first parameter to these
arg_*() functions (struct cmd_context *) because
of the number of files involved in removing the
parameter.
Very similar argument to removal of init_debug() and other calls.
create_toolcontext() calls _process_config() which sets
cmd->default_settings.activation, then calls
set_activation(cmd->default_settings.activation). Later, create_toolcontext()
sets cmd->current_settings = cmd->default_settings. So these calls
set_activation(cmd->current_settings.activation) are redundant.
Identical argument to previous patch which removed archive_enable() calls.
We add a new parameter to backup_init() which sets the enable value based
on the cmd->default_settings.backup value. This value was used to set
cmd->current_settings.backup, used in the removed backup_enable() call.
_init_backup() calls archive_init(), which originally set 'enabled' to
a hardcoded '1' value. This seems incorrect based on my reading of other
areas of the code, so here we add an 'enabled' parameter to archive_init().
We pass in cmd->default_settings.archive, which is obtained from the
config tree. Later in create_toolcontext, cmd->current_settings is
set to cmd->default_settings. The archive_enable() call we remove
here was using cmd->current_settings to set the 'archive' enable
value. The final value of cmd->archive_params->enabled should thus
be equivalent to the original code.
The rationale for removing init_verbose() call is very similar to removing
init_debug() call. create_toolcontext() calls _init_logging() which
makes these calls:
/* Verbose level for tty output */
cmd->default_settings.verbose =
find_config_tree_int(cmd, "log/verbose", DEFAULT_VERBOSE);
init_verbose(cmd->default_settings.verbose + VERBOSE_BASE_LEVEL);
And being that create_toolcontext() copies default_settings into
current_settings at the bottom, the init_verbose() call we are removing:
init_verbose(cmd->current_settings.verbose + VERBOSE_BASE_LEVEL);
is redundant.
We can safely remove because create_toolcontext() calls _init_logging(),
which makes these calls:
/* Debug level for log file output */
cmd->default_settings.debug =
find_config_tree_int(cmd, "log/level", DEFAULT_LOGLEVEL);
init_debug(cmd->default_settings.debug);
Then at the bottom of create_toolcontext() we do this:
cmd->current_settings = cmd->default_settings;
So the call we are removing from init_lvm() functions (clvmd and lvmcmdline):
init_debug(cmd->current_settings.debug);
Just sets the value of debug based on 'cmd->current_settings.debug'.
Since cmd->current_settings is equivalent to cmd->default_settings, and
init_debug() was called with cmd->default_settings, the call we remove is
redundant.
snapshot DSO unregistered itself when snapshot changed state to invalid.
This can cause a race (and several timeouts), when for example another snapshot
is added, and in the middle of an operation (suspend/resume) the monitoring thread
unregisters itself.
Fix it by keeping the snapshot monitored after invalidation - just reset the
threshold so it does not actually print any messages to syslog.
failed to link against liblvm2cmd.
Dmeventd DSOs *require* lvm2cmd to be linked in.
For the future:
1) AC_SUBST does not create Makefile variables, only @foo@-style substitutions
2) When using `test', whitespace around `=' is essential:
test a=b is true, as is test a=a
* configure.in (LVM2CMD_LIB): Define if --enable-cmdlib.
* dmeventd/mirror/Makefile.in (CLDFLAGS): Use $(LVM2CMD_LIB) rather
than hard-coding -llvm2cmd.
* dmeventd/snapshot/Makefile.in (CLDFLAGS): Likewise.
which is not used since the switch away from async version saLck
. num_nodes should equal member_list_entries, i.e.
joined_list_entries is 0 when a node leaves the group.
Thanks to Xinwei Hu for the patch.
It does 2 things.
1. The cpg_deliver_callback compares target_nodeid and our_nodeid.
It turns out openais sets target_nodeid to 0 sometimes (for broadcasting?).
I change the behaviour so that lvm will process_remote also on target_nodeid == 0.
2. The joined_list passed to cpg_confchg_callback doesn't include the nodes already
in the group, which leads to an incomplete node_hash. I simply add all other nodes
in member_list to node_hash as well.
Thanks to Xinwei Hu for this patch.
* dmeventd/dmeventd.c (_set_oom_adj): When writing to /proc/self/oom_adj,
detect failure even if it's hidden behind ferror. [Using dm_fclose's
extra ferror test here is probably not needed, since the amount written
is nowhere near BUFSIZ, but use it regardless, for consistency. ]
* lib/fs/libdevmapper.c (do_suspend): Detect fclose failure when
writing to suspend.