This would be the case if the pool segment was not found:
LVM2.2.02.112/lib/metadata/pool_manip.c:238:36: warning: Access to field 'segtype' results in a dereference of a null pointer (loaded from variable 'pool_seg')
LVM2.2.02.112/lib/metadata/cache_manip.c:73: overflow_before_widen: Potentially overflowing expression "*pool_metadata_extents *vg->extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
LVM2.2.02.112/lib/activate/dev_manager.c:217: overflow_before_widen: Potentially overflowing expression "seg_status->seg->len * extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
LVM2.2.02.112/lib/activate/dev_manager.c:217: overflow_before_widen: Potentially overflowing expression "seg_status->seg->le * extent_size" with type "unsigned int" (32 bits, unsigned) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type "uint64_t" (64 bits, unsigned).
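The usual fix for this class of warning is to widen one operand before the
multiplication; a minimal sketch with illustrative names (not the actual
lvm2 code):

#include <stdint.h>

/* Illustrative only: both operands are 32-bit, so without the cast the
 * product is computed in 32-bit arithmetic and may overflow before it
 * is widened to uint64_t. */
uint64_t _extents_to_sectors(uint32_t extents, uint32_t extent_size)
{
        return (uint64_t) extents * extent_size;
}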
LVM2.2.02.112/lib/activate/dev_manager.c:196:5: warning: 'dmtask' may be used uninitialized in this function [-Wmaybe-uninitialized]
In _info_run fn:
switch (type) {
case INFO:
        ...
case STATUS:
        ...
case MKNODES:
        ...
}
The "type" is enum and currently only those three types are supported,
but if we added a new type in the future, this would end up with a bug
(if we forgot to add the new "case" in that "switch"). So let's make
sure proper internal error is printed:
default:
        log_error(INTERNAL_ERROR "_info_run: unhandled info type");
        return 0;
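A self-contained illustration of the same defensive pattern (hypothetical
names and task values, not the dev_manager.c code); note that the default
branch also guarantees the task variable is never read uninitialized, which
ties in with the warning above:

#include <stdio.h>

enum info_type { INFO, STATUS, MKNODES };

static int _run(enum info_type type)
{
        int task;

        switch (type) {
        case INFO:
                task = 0;
                break;
        case STATUS:
                task = 1;
                break;
        case MKNODES:
                task = 2;
                break;
        default:
                /* A newly added enum value that is not handled above
                 * lands here instead of being silently ignored. */
                fprintf(stderr, "Internal error: unhandled info type\n");
                return 0;
        }

        printf("running task %d\n", task);
        return 1;
}

int main(void)
{
        return _run(STATUS) ? 0 : 1;
}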
LVM2.2.02.112/tools/toollib.c:1991: leaked_storage: Variable "iter" going out of scope leaks the storage it points to.
LVM2.2.02.112/lib/filters/filter-usable.c:89: leaked_storage: Variable "f" going out of scope leaks the storage it points to.
LVM2.2.02.112/lib/activate/dev_manager.c:1874: leaked_handle: Handle variable "fd" going out of scope leaks the handle.
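The typical shape of the fix for the leaked handle case is to route every
exit path through the close(); a hedged, self-contained sketch
(illustrative names, not the actual dev_manager.c code):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

/* Illustrative only: after a successful open(), every return path must
 * go through the close(), otherwise the descriptor leaks. */
static int _device_get_size(const char *path, uint64_t *size)
{
        int fd, r = 0;

        if ((fd = open(path, O_RDONLY)) < 0)
                return 0;

        if (ioctl(fd, BLKGETSIZE64, size) < 0)
                goto out;   /* an early "return 0" here would leak fd */

        r = 1;
out:
        if (close(fd) < 0)
                perror("close");

        return r;
}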
When getting status for LV segment types, we need to be sure
that the proper segment is selected for the status ioctl.
When reporting fields that require status ioctl,
the "_choose_lv_segment_for_status_report" fn in tools/reporter.c
must be completed properly to choose the proper segment for all
the LV types (at the moment, it just takes the first LV segment
by default).
This works fine with cache LVs; the other segment types
need more auditing. We use this status ioctl only for cache status
fields at the moment, so restrict it to the cache only.
Once _choose_lv_segment_for_status_report is completed
properly, release the restriction in _get_segment_status_from_target_params.
Similar to LVSINFO type which gathers LV + its DM_DEVICE_INFO, the
new LVSSTATUS/SEGSSTATUS report type will gather LV/segment + its
DM_DEVICE_STATUS.
Since we can report status only for a certain segment, in the case
of LVSSTATUS we need to choose which segment related to the LV
should be processed to represent the "LV status". In the case of
the SEGSSTATUS type it's clear - the status is reported for the
segment just processed.
The former struct lv_with_info is renamed to lv_with_info_and_seg_status, as it can
hold more than just "info"; there's the LV's segment status now in addition:
struct lv_with_info_and_seg_status {
        struct logical_volume *lv;
        struct lvinfo *info;
        struct lv_seg_status *seg_status;
};
Where struct lv_seg_status is:
struct lv_seg_status {
        struct dm_pool *mem;
        struct lv_segment *lv_seg;
        lv_seg_status_type_t type;
        void *status; /* struct dm_status_* */
};
Where lv_seg points to lv's segment that is being reported or
processed in general.
New struct lv_seg_status keeps the information about segment status -
the status retrieved via DM_DEVICE_STATUS ioctl. This information will
be used for reporting dm device target status for the LV segment
specified.
So this patch introduces a third level of LV information that is
kept for reuse while reporting fields within one reporting line,
causing only one DM_DEVICE_STATUS ioctl call per reported LV segment
line (otherwise we'd need to call DM_DEVICE_STATUS for each
segment status field in one LV segment/reporting line, which is not
efficient).
This follows exactly the same principle as already introduced
by commit ecb2be5d16.
So currently we have three levels of information that can be used
to report an LV/LV segment:
- LV metadata itself (struct logical_volume *lv)
- LV's DM_DEVICE_INFO ioctl result (struct lvinfo *info)
- LV's segment DM_DEVICE_STATUS ioctl result (this status must be
bound to a segment, not the whole LV as the whole LV may be
composed of several segments of course)
(this is the new struct lv_seg_status *seg_status)
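For orientation, a hedged sketch of what a single DM_DEVICE_STATUS call
looks like at the libdevmapper level (the function
_first_target_status_params and its first-target-only handling are
illustrative simplifications, not the dev_manager.c code):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <libdevmapper.h>

/* Illustrative only: run one DM_DEVICE_STATUS ioctl and copy out the raw
 * params of the first target. In the real code the parsed result is kept
 * in struct lv_seg_status and reused by all status fields on the line. */
static char *_first_target_status_params(const char *dm_name)
{
        struct dm_task *dmt;
        uint64_t start, length;
        char *target_type = NULL, *params = NULL, *result = NULL;

        if (!(dmt = dm_task_create(DM_DEVICE_STATUS)))
                return NULL;

        if (!dm_task_set_name(dmt, dm_name) || !dm_task_run(dmt))
                goto out;

        (void) dm_get_next_target(dmt, NULL, &start, &length,
                                  &target_type, &params);
        if (params)
                result = strdup(params);
out:
        dm_task_destroy(dmt);
        return result;
}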
Do not use 'any' policy name as a value in the config tree - so we stick
with 'policy_settings' and an extra 'policy_name' for the libdm params.
Update the lvm2 API as well.
Example of supported metadata:
policy = "mq"
policy_settings {
        migration_threshold = 2048
        sequential_threshold = 512
        random_threshold = 4
        read_promote_adjustment = 10
}
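As an aside, a hedged sketch of reading such a section back with the libdm
config API (a standalone illustration of the 'policy' string plus
'policy_settings' section split; not the lvm2 code):

#include <stdio.h>
#include <libdevmapper.h>

int main(void)
{
        const char *md = "policy = \"mq\"\n"
                         "policy_settings {\n"
                         "    migration_threshold = 2048\n"
                         "    sequential_threshold = 512\n"
                         "}\n";
        struct dm_config_tree *cft;

        if (!(cft = dm_config_from_string(md)))
                return 1;

        /* Policy name and its settings live in separate nodes. */
        printf("policy: %s\n",
               dm_config_find_str(cft->root, "policy", "none"));
        printf("migration_threshold: %d\n",
               dm_config_find_int(cft->root,
                                  "policy_settings/migration_threshold", 0));

        dm_config_destroy(cft);
        return 0;
}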
Support new PASSTHROUGH 'feature' flag.
Add dm_config_node to pass in policy args.
Really use origin_uuid instead of an extra call
to pass seg_areas.
Switch to a 64-bit feature flag bit set so there is
enough space for new bits in the future...
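For illustration only, a 64-bit flag set could look like this (the names
below are made up for the sketch; the real feature flag definitions live
in libdevmapper.h):

#include <stdint.h>

/* Illustrative only: with a 64-bit field there is plenty of room left
 * for future feature bits such as PASSTHROUGH. */
#define EX_CACHE_FEATURE_WRITEBACK    UINT64_C(0x0000000000000001)
#define EX_CACHE_FEATURE_WRITETHROUGH UINT64_C(0x0000000000000002)
#define EX_CACHE_FEATURE_PASSTHROUGH  UINT64_C(0x0000000000000004)

struct ex_cache_params {
        uint64_t feature_flags;   /* 64-bit bit set */
};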
More efficient spare volume creation. Save 1 extra commit
and properly activate this volume according to our cluster
activation rules (using lv_active_change() for this).
Since we 'layer' the cache origin and we support dropping the
cache layer, we need to restore the origin name in case
the origin LV is a more complex target, e.g. raid.
Drop _corig from the name.
Cleanup and rename parent -> parent_lv.
Revert part of commit 51a29e6056;
it's probably a bad idea to continue with any recovery when
vg_write() or vg_commit() fails - so it's better to leave it as it is.
When deactivating the origin, we may possibly have left the table in a broken state,
where the origin is not active but a snapshot volume is still present.
Let's ensure that deactivation of the origin also checks that all associated
snapshots are inactive - otherwise do not skip deactivation
(so e.g. 'vgchange -an' would detect errors).
Let's use this function for more activations in the code.
'needs_exclusive' will enforce the exclusive type for any given LV.
We may want to activate an LV in exclusive mode even when we know
the LV (as is) supports non-exclusive activation as well
(see the sketch after the table below):
lvcreate -ay -> exclusive & local
lvcreate -aay -> exclusive & local
lvcreate -aly -> exclusive & local
lvcreate -aey -> exclusive (might be on any node).
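A self-contained sketch of the mapping in the table above (hypothetical
enum and function names, not the actual lv_active_change() code):

#include <stdio.h>

enum act_request { ACT_AY, ACT_AAY, ACT_ALY, ACT_AEY };

/* Illustrative only: with needs_exclusive set, every request above ends
 * up exclusive; -ay/-aay/-aly additionally stay local, while -aey may
 * land on any node. */
static const char *_resolve_activation(enum act_request req, int needs_exclusive)
{
        switch (req) {
        case ACT_AEY:
                return "exclusive";
        case ACT_AY:
        case ACT_AAY:
        case ACT_ALY:
                return needs_exclusive ? "exclusive & local" : "local";
        }
        return "unknown";
}

int main(void)
{
        printf("lvcreate -ay  -> %s\n", _resolve_activation(ACT_AY, 1));
        printf("lvcreate -aey -> %s\n", _resolve_activation(ACT_AEY, 1));
        return 0;
}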
Activation of a new/unused/empty thin pool volume skips
the 'overlay' part and directly provides a 'visible' thin-pool LV to the user.
Such a thin pool still gets the 'private' -tpool UUID suffix for easier
udev detection of protected lvm2 devices, and also gets udev flags to
avoid any scan.
Such a pool device is a 'public' LV with a regular /dev/vgname/poolname link,
but it's still a 'udev'-hidden device for any other use.
To display the proper active state we need to do a few explicit tests
for this condition.
Before it's used for any lvm2 thin volume, deactivation is
now needed to avoid any 'race' with external usage.
Call check_new_thin_pool() to detect an in-use thin-pool.
Save an extra reactivation of the thin-pool when the thin pool is not active.
(It's now a bit more expensive to invoke thin_check for new pools.)
For new pools:
We now activate the thin-pool locally and exclusively as a 'public' LV.
Validate that transaction_id is still 0.
Deactivate.
Prepare the create message for the thin-pool and exclusively activate the pool.
Activate the new thin LV.
And deactivate the thin pool if it used to be inactive.
Allowing 'external' use of thin-pools requires validating even
so-far 'unused' new thin pools.
Later we may have a 'smarter' way to resolve which thin-pools are
owned by lvm2 and which are external.