If the PV is already part of the VG (so pv->fid == vg->fid), it makes no
sense to attach the mdas information from the PV to the VG. Instead, we read new
PV metadata information from the cache and attach it to the VG fid.
This function also sets a reference to a new VG format instance for all PVs
that are part of the VG so the PV-VG interconnection is consistent after the
change.
Add supporting functions to work with the format instance and the metadata area
structures stored within it. Add support for simple indexing of metadata areas
using the PV id and mda order (for on-disk PVs only for now; we can extend the
indexing to other mdas if needed - we only need to define a proper key for the
index).
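As a rough illustration of the indexing idea, a key could be derived from the
PV id and the mda order. The helper below is only a sketch with assumed names
(_mda_index_key is not the actual function); it uses libdevmapper's dm_snprintf:

    #include <libdevmapper.h>

    /* Sketch only: build a lookup key for an on-disk metadata area from
     * the PV id and the mda order, e.g. "<pvid>_0" for the first mda. */
    static int _mda_index_key(char *buf, size_t len,
                              const char *pvid, unsigned mda_order)
    {
            if (dm_snprintf(buf, len, "%s_%u", pvid, mda_order) < 0)
                    return 0;
            return 1;
    }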
As the kernel seems to be doing weird things during
mlock -> munlock, allow a one-page locking difference without
a warning and log just a debug message for a one-page difference.
The allocation happens outside the critical section, probably during
log_warn printing.
This should make the tests pass for now.
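A standalone sketch of the intended tolerance check (function name and messages
are illustrative, not lvm's actual memlock code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Tolerate a difference of up to one page between the expected and
     * the actually locked memory size; larger differences still warn. */
    static void check_locked_size(size_t locked, size_t expected)
    {
            long page = sysconf(_SC_PAGESIZE);
            long diff = labs((long) locked - (long) expected);

            if (!diff)
                    return;
            if (diff <= page)
                    fprintf(stderr, "debug: locked size differs by one page\n");
            else
                    fprintf(stderr, "WARNING: locked %zu != expected %zu\n",
                            locked, expected);
    }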
Fixing some const warnings - with API change in:
int vg_extend(struct volume_group *vg, int pv_count, const char *const *pv_names,
The change is needed as lvm2api expects const behaviour here,
so vg_extend() does a local strdup for unescaping.
skip_dev_dir() now returns const char* from a const char* vg_name.
The rest of the patch is a cleanup of related warnings.
Also use the dm_report_field_string() API change to simplify
the casting in _string_disp and _lvname_disp.
New strategy for memory locking to decrease the number of calls
to un/lock memory when processing critical lvm functions.
Introducing functions for critical section.
Inside the critical section - memory is always locked.
When leaving the critical section, the memory stays locked
until memlock_unlock() is called - this happens with
sync_local_dev_names() and sync_dev_names() function call.
memlock_reset() is needed to reset locking numbers after fork
(polldaemon).
The patch itself is mostly rename:
memlock_inc -> critical_section_inc
memlock_dec -> critical_section_dec
memlock -> critical_section
Daemons (clvmd, dmeventd) use memlock_daemon_inc&dec (mlockall()),
thus they never release or relock memory they have
already locked.
Macros sync_local_dev_names() and sync_dev_names() are now functions.
It's better for debugging - and also we do not need to add memlock.h
to the locking.h header (for the memlock_unlock() prototype).
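A compilable sketch of the new semantics - the function names follow the patch,
but the bodies are simplified stand-ins for lvm's page-based locking:

    #include <sys/mman.h>

    static int _critical_section;
    static int _mem_locked;

    /* Entering a critical section locks memory (once). */
    void critical_section_inc(void)
    {
            if (!_mem_locked) {
                    mlockall(MCL_CURRENT);  /* stand-in for lvm's locking */
                    _mem_locked = 1;
            }
            _critical_section++;
    }

    /* Leaving a critical section keeps memory locked. */
    void critical_section_dec(void)
    {
            _critical_section--;
    }

    /* Called via sync_local_dev_names()/sync_dev_names(). */
    void memlock_unlock(void)
    {
            if (_mem_locked && !_critical_section) {
                    munlockall();
                    _mem_locked = 0;
            }
    }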
Add a configurable option to define the minimal size of
a block device usable as a PV.
pv_min_size() is added to lvm-globals and is initialized
through _process_config.
The macro PV_MIN_SIZE is unused and removed.
A new define, DEFAULT_PV_MIN_SIZE_KB, is added to lvm-globals
and, unlike PV_MIN_SIZE, it uses KB units.
Should help users with various slow devices attached to the system,
which cannot be easily filtered out (like FDD on /dev/sdX):
https://bugzilla.redhat.com/show_bug.cgi?id=644578
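For illustration only (the helper name and the 512-byte sector assumption are
mine, not the patch's), a filter-side check could look like this:

    #include <stdint.h>

    /* Reject devices smaller than the configured minimum PV size.
     * dev_size is in 512-byte sectors, pv_min_size_kb in kilobytes. */
    static int _passes_pv_min_size(uint64_t dev_size, uint64_t pv_min_size_kb)
    {
            return dev_size >= pv_min_size_kb * 2;
    }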
Creating a snapshot of an exclusively activated LV in a cluster
results in clvmd deadlock.
When a logical volume is activated exclusively in a cluster, the
local (non-cluster-aware) target is used. However, when creating
a snapshot on the exclusive LV, the resulting suspend/resume fails
to load the appropriate device-mapper table - instead loading the
cluster-aware target.
This patch adds an 'exclusive' parameter to the pertinent resume
functions to allow for the right target type to be loaded.
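The decision itself is simple; the helper below is purely illustrative and not
lvm's actual code:

    /* An exclusively activated LV in a clustered VG behaves like a local
     * one, so the non-cluster-aware target can be loaded on resume. */
    static int _use_local_target(int vg_is_clustered, int lv_active_exclusive)
    {
            if (!vg_is_clustered)
                    return 1;
            return lv_active_exclusive;
    }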
Fix a regression from the 2.02.75 speedup - crc32 is now a little bit
more complicated on big-endian CPUs, as the uint32_t needs to be shifted
there.
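As a generic illustration of the endianness issue (this is not lvm's crc code),
assembling the value with explicit shifts gives the same result on either host
byte order, while reading a uint32_t directly does not:

    #include <stdint.h>

    /* Assemble a little-endian 32-bit value independently of host order. */
    static uint32_t load_le32(const uint8_t *p)
    {
            return (uint32_t) p[0] | ((uint32_t) p[1] << 8) |
                   ((uint32_t) p[2] << 16) | ((uint32_t) p[3] << 24);
    }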
Make the default behaviour for dealing with device node creation configurable.
With udev, the natural option should be 'resume'.
For older systems where the user expects the node to be in /dev/mapper immediately
after 'dmsetup create --notable', use 'create'.
FIXME:
The code needs fixing to pass this flag through the udev cookie.
Allow snapshots to be created on an origin LV that is exclusively
activated in a cluster.
In order to achieve this, we need to be able to query whether
the origin is active exclusively (a condition of being able to
add an exclusive snapshot).
Once we are able to query the exclusive activation of an LV, we
can safely create/activate the snapshot.
A change to 'hold_lock' was also made so that a request to acquire
a WRITE lock does not replace an EX lock, which is already a form
of write lock.
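A sketch of the hold_lock rule - the lock mode names are stand-ins, not clvmd's
real constants:

    enum lock_mode { MODE_READ, MODE_WRITE, MODE_EX };

    /* A WRITE request must not replace an already held EX lock,
     * since EX is itself a (stronger) form of write lock. */
    static enum lock_mode _effective_mode(enum lock_mode held,
                                          enum lock_mode requested)
    {
            if (held == MODE_EX && requested == MODE_WRITE)
                    return MODE_EX;
            return requested;
    }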
Remove temporarily added fs_unlock() calls to fix clvmd usability.
Now that the message passing works properly, they are no longer needed.
Simplify the no_locking check for VG unlock, as the message is always sent
to all targets - clustered & non-clustered.
Thanks to the CLVMD_CMD_SYNC_NAMES propagation fix, the message passing started
to work, so a message is now sent before the VG is unlocked.
Also remove the implicit sync in VG unlock from clvmd, as the message
is now delivered and processed in do_command().
Also add support for this new message into external locking
and mask this event from further processing.
With the ability to stack many operations in one udev transaction,
in some cases we are adding and removing the same device at the same time
(i.e. deactivate followed by activate).
This leads to a problem when checking stacked operations:
i.e. remove /dev/node1 followed by create /dev/node1
If the node creation is handled by udev, there is a problem: the
stacked operation warns about the existing node1 and tries to
remove it, while the next operation needs to recreate it.
The current code removes all previously stacked operations if the fs op is
FS_DEL - the patch adds similar behavior for FS_ADD - it tries to
remove any 'delete' operation if udev is in use.
The FS_RENAME operation seems to be more complex. But as we
always stack FS_READ_AHEAD after an FS_ADD operation, it
should be safe to remove all previous operations on the node
when udev is running.
The code does the same checking when stacking libdm and liblvm operations.
As a very simple optimization, counters were added for each stacked op
type to avoid unneeded list scans if some operation type is not present in
the list (see the sketch below).
Enable skipping of fs_unlock() (udev sync) if only DEL operations are stacked,
as we do not use lv_info for already deleted nodes.
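The counting idea can be sketched as follows; the op type names follow the text
above, everything else is illustrative:

    enum fs_op_type { FS_ADD, FS_DEL, FS_RENAME, FS_READ_AHEAD, NUM_FS_OPS };

    static unsigned _op_count[NUM_FS_OPS];

    /* Return 1 if any operation other than 'skip' is stacked; with the
     * per-type counters no list scan is needed for this test.  fs_unlock()
     * (udev sync) can be skipped when _other_op_stacked(FS_DEL) is 0,
     * i.e. only 'delete' operations are queued. */
    static int _other_op_stacked(enum fs_op_type skip)
    {
            unsigned i;

            for (i = 0; i < NUM_FS_OPS; i++)
                    if (i != (unsigned) skip && _op_count[i])
                            return 1;
            return 0;
    }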
This is a better way to fix clustered synchronization with udev.
As the message-passing code still needs fixing, fs_unlock() is for now
placed after every activate/deactivate command in clvmd to
ensure nodes are properly created in time.
There was no effect from having this wrong yet, because the
tree of callers only ever cared about the answer to the first
condition (!response), which determines whether a lock is
held or not. Correct responses, however, are needed soon.
Instead of implicitly syncing udev operations in the clustered and
file locking code, call the synchronization directly in lock_vol() when
the operation unlocks the VG.
The problem is a missing implicit fs_unlock() in the no_locking code.
This is used with --sysinit on a read-only filesystem locking dir.
In this case vgchange -ay could exit before all udev nodes are properly
synchronised and may cause problems with accessing such a node right after
the vgchange --sysinit command has finished.
Add test case for vgchange --sysinit.
strncpy (which checks each byte for \0) is not needed as we always copy
a known length - so using memcpy is a bit cheaper.
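A trivial illustration in standard C (not the patched lvm code):

    #include <string.h>

    /* The length is already known, so memcpy() avoids strncpy()'s
     * per-byte scan for the terminating '\0'. */
    static void copy_name(char *dst, const char *src, size_t len)
    {
            memcpy(dst, src, len);
            dst[len] = '\0';        /* caller guarantees room for len + 1 */
    }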
Add missing log_error message for failed allocation.
Improve the condition within lock_vol so we do not call an extra unlock
if the volume has just been deactivated.
The patch uses lck_type and replaces the negative 'and' condition with a more
readable 'or' condition.
A few missing stack traces added.
As sync_local_dev_names() cannot be called within the activation context,
add a new parameter which allows selecting whether the sync call is needed
before executing the new command.
Stop calling fs_unlock() from lv_de/activate().
Start using the internal lvm fs cookie for dm_tree.
Stop directly calling dm_udev_wait() and
dm_tree_set/get_cookie() from the activation code -
they are now called through the fs_unlock() function.
Add lvm_do_fs_unlock().
Call fs_unlock() when unlocking a VG, where the implicit unlock solves the
problem also for clusters - thus no extra command for the clustered
environment is required - only the lvm_do_fs_unlock() function is added
to call lvm's fs_unlock() while holding the lvm_lock mutex in clvmd
(see the sketch below).
Also add fs_unlock() to set_lv() so the command waits until devices
are ready for a regular open (i.e. wiping their beginning).
Move the fs_unlock() prototype to activation.h to keep fs.h private
in the lib/activate dir and not expose other functions from this header.
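A self-contained sketch of the clvmd side - the mutex and function names follow
the text above, and the fs_unlock() body here is only a stub:

    #include <pthread.h>

    static pthread_mutex_t lvm_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stub: the real fs_unlock() processes stacked fs operations and
     * waits on the outstanding udev cookie. */
    static void fs_unlock(void)
    {
    }

    /* Run lvm's fs_unlock() while holding the lvm_lock mutex in clvmd. */
    static void lvm_do_fs_unlock(void)
    {
            pthread_mutex_lock(&lvm_lock);
            fs_unlock();
            pthread_mutex_unlock(&lvm_lock);
    }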
Change the function import_vg_from_buffer() to import_vg_from_config_tree().
Instead of creating the config tree inside the function, allow the config tree
to be passed as a parameter - usable later for caching.
If some allocation for the persistent filter failed, its memory reference
was lost; fix it by calling the filter's destructor.
Fix log_error messages for failing allocations.
Checking for vg being != NULL in this place is not needed.
The vg pointer is already dereferenced in this function above this code line.
Also, the internal function _read_pv is always called with a valid 'vg' pointer.
Add checks for the cloning allocation and fail out when something is
not allocated correctly.
Also move the variable declarations to the beginning of the function
and fix the log_error messages.
As const segment_type and const format_type are never released,
use their non-const versions and remove the const downcast from dm_free calls.
This change fixes many gcc warnings we were getting from them.
Change the API interface to accept even completely const array patterns.
This should present no change for libdm users and allows passing
pattern arrays without a cast to const char **.
To have better control over where the config tree could be modified, use more
const pointers and very carefully downcast them back to non-const
(for the config tree merge).
Detect the existence of the new SELinux selabel interface during configure.
Use the new dm_prepare_selinux_context instead of dm_set_selinux_context.
We should set the SELinux context before the actual file system object is created.
The new dm_prepare_selinux_context function sets this using the selabel_lookup
fn in conjunction with the setfscreatecon fn. If the selinux/label.h interface
(which should be a part of the selinux library) is not found during configure,
we fall back to the original matchpathcon function instead.
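A sketch of the intended call order when creating a node. The path handling is
an example, and it assumes that calling dm_prepare_selinux_context with a NULL
path resets the creation context:

    #include <libdevmapper.h>
    #include <sys/stat.h>

    /* Set the SELinux file creation context before mknod(), then reset it. */
    static int _create_dm_node(const char *path, dev_t dev)
    {
            if (!dm_prepare_selinux_context(path, S_IFBLK))
                    return 0;
            if (mknod(path, S_IFBLK | 0600, dev) < 0) {
                    dm_prepare_selinux_context(NULL, 0);
                    return 0;
            }
            return dm_prepare_selinux_context(NULL, 0);
    }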
Set cmd->independent_metadata_areas if metadata/dirs or disk_areas are in use.
- Identify and record this state.
Don't skip full scan when independent mdas are present even if memlock is set.
- Clusters and OOM aren't supported, so no problem doing the proper scans.
Avoid revalidating the label cache immediately after scanning.
- A simple optimisation.
Support scanning for a single VG in independent mdas.
- Not used by the fix but I left it in anyway as later patches might use it.
This reset of the vgmem pointer causes access to already released memory.
(_vg_make_handle allocates vg from the vgmem pool itself - which is a bit tricky.)
Interestingly, this memory fault was missed by our test suite.
Add a log_error message for lv_info failure and exit from further
processing.
Replace the 'leg' occurrence in the debug message with 'image', which
is used in other messages.
Add a test for NULL from dm_pool_create().
Reorder dm_pool_destroy() before the file close and add an out: label.
Avoid leaking the file descriptor if the allocation fails.
Nicely hidden memory leak in the outf macro error path.
This macro uses out_text() and does an automagical return_0,
which would leak the memory allocated for tag_buffer.
As there was the same code for tags output, create an _out_tags() function.
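The fix can be illustrated with plain standard C (simplified, not the actual
_out_tags() code): free the buffer on both paths instead of returning early
through a macro that hides the leak.

    #include <stdio.h>
    #include <stdlib.h>

    static int out_tags(FILE *f, char *tag_buffer)
    {
            int r = 1;

            if (fprintf(f, "tags = [%s]\n", tag_buffer) < 0)
                    r = 0;              /* still fall through to the free */
            free(tag_buffer);
            return r;
    }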
Set vg to NULL after releasing it, as the following memlock() test may
lead to a goto and a second call of vg_release() with the already
released vg pointer.
Similar to the 'get' property internal functions, add a specific 'set'
function for vg_mda_copies.
Signed-off-by: Dave Wysochanski <wysochanski@pobox.com>
Reviewed-by: Petr Rockai <prockai@redhat.com>
The patch updates exec_cmd() and adds a 3rd parameter with a pointer to a
status value, so the caller can examine the returned status code.
If the passed pointer is NULL, the behaviour is unmodified.
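A sketch of the extended interface - the signature and implementation are
illustrative, not lvm's real exec_cmd():

    #include <stdlib.h>
    #include <sys/wait.h>

    /* Optional 'status' lets the caller examine the child's exit code;
     * passing NULL keeps the old pass/fail behaviour. */
    static int exec_cmd_status(const char *cmdline, int *status)
    {
            int ret = system(cmdline);      /* stand-in for fork/exec */

            if (ret == -1 || !WIFEXITED(ret))
                    return 0;
            if (status)
                    *status = WEXITSTATUS(ret);
            return !WEXITSTATUS(ret);
    }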
The patch allows lvresize to continue if the failure from the fsadm check is
caused by a mounted filesystem, as many filesystem resize tools support
online filesystem resize. (Originally the user had to use the '-n' flag to
bypass this filesystem check.)
A merged snapshot's DM device is made to use the "error" target as part
of lvm's transaction to merge a snapshot. This snapshot merge use-case
aside, any device using the error target shouldn't be scanned.