This patch fixes a problem reported here:
https://www.redhat.com/archives/dm-devel/2013-January/msg00311.html
The fix separates out the function that duplicates a string token
(a rough sketch of the idea follows the quoted report below).
---
When /etc/lvm/lvm.conf is truncated at the first '"' of a line, all LVM
utilities crash with a segfault.
The segfault only seems to occur if the last character is the first '"'
(double quote) of a line. If the file is truncated at any other point,
lvm detects the error and reports a parse error.
The truncated lvm.conf ends like this:
$ hexdump -C lvm.conf
....
69 72 20 3d 20 22 2f 64 65 76 22 0a 0a 0a 20 20 |ir = "/dev"... |
20 20 23 20 41 6e 20 61 72 72 61 79 20 6f 66 20 | # An array of |
64 69 72 65 63 74 6f 72 69 65 73 20 74 68 61 74 |directories that|
20 63 6f 6e 74 61 69 6e 20 74 68 65 20 64 65 76 | contain the dev|
69 63 65 20 6e 6f 64 65 73 20 79 6f 75 20 77 69 |ice nodes you wi|
73 68 0a 20 20 20 20 23 20 74 6f 20 75 73 65 20 |sh. # to use |
77 69 74 68 20 4c 56 4d 32 2e 0a 20 20 20 20 73 |with LVM2.. s|
63 61 6e 20 3d 20 5b 20 22 2f 78 22 2c 0a 20 20 |can = [ "/x",. |
20 20 20 20 20 20 20 20 20 20 20 22 | "|
...
Reported-by: dongmao zhang <dmzhang suse com>
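A rough sketch of the idea (hypothetical code, not the actual lvm2 parser):
when the file ends right after an opening '"', the token end pointer can
end up before the token begin pointer, and an unchecked length calculation
then underflows.

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Duplicate a string token delimited by [tb, te).  If the config file is
 * truncated right after the opening '"', te may precede tb; without the
 * check below the length underflows and the copy runs off the buffer. */
static char *dup_string_tok(const char *tb, const char *te)
{
	ptrdiff_t len;
	char *str;

	if (te < tb)
		return NULL;	/* unterminated string token */

	len = te - tb;
	if (!(str = malloc(len + 1)))
		return NULL;

	memcpy(str, tb, len);
	str[len] = '\0';

	return str;
}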
Similar to the way thin* accesses its kernel status, we add a method
for RAID to grab the various values in its status output without the
higher levels (LVM) having to understand how to parse the output.
Added functions include:
- lib/activate/dev_manager.c:dev_manager_raid_status()
  Pulls the status line from the kernel
- libdm/libdm-deptree.c:dm_get_status_raid()
  Parses the status line and puts the components into a dm_status_raid struct
- lib/activate/activate.c:lv_raid_dev_health()
  Accesses dm_status_raid to deliver the raid dev_health string
The new structure and functions can provide a more unified way to access
status information. ('lv_raid_percent' could switch to using these
functions, for example.)
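A rough sketch of how a caller might consume the new call (dev_health is
named in the description above; the dev_count field name is an assumption;
error handling is abbreviated):

#include <stdio.h>
#include <libdevmapper.h>

/* Hypothetical caller: parse a raid status line previously fetched from
 * the kernel (e.g. by dev_manager_raid_status()) and report its health. */
static int print_raid_health(struct dm_pool *mem, const char *params)
{
	struct dm_status_raid *status;

	if (!dm_get_status_raid(mem, params, &status))
		return 0;

	/* dev_health holds one character per raid image, e.g. "AA" when
	 * both devices of a 2-way raid1 are in sync and healthy. */
	printf("%u devices, health: %s\n",
	       (unsigned) status->dev_count, status->dev_health);

	return 1;
}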
Add log/debug_classes to lvm.conf to allow debug messages to be
classified and filtered at runtime.
The dm_errno field is only used by log_error(), so I've redefined it
for log_debug() messages to hold the message class.
By default, all existing messages appear, but we can now add classes for
messages that generate high volumes of data, such as all traffic to/from
lvmetad.
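A configuration sketch (the class names listed are examples; the lvm.conf
shipped with the tree documents the full set):

log {
    # Only debug messages belonging to the listed classes are logged.
    debug_classes = [ "memory", "devices", "activation", "lvmetad" ]
}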
If the resume of a preloaded node fails, do not leave such a node in the
table, since it may not be easy to detach it later, e.g. when the node is
internal. For example, a failed activation of a thin pool with a
mismatching chunk size could leave the -tpool device in the table, from
where it could then be removed only with a dmsetup command.
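For illustration, the manual cleanup that used to be needed looked roughly
like this (the device name is illustrative):

$ dmsetup remove vg-pool-tpool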
$ export DM_DISABLE_UDEV=1
$ dmsetup create test --table "0 1 zero"
Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
$ lvchange -ay vg/lvol0
Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will manage logical volume symlinks in device directory.
Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, LVM will obtain device list by scanning device directory.
Udev is running and DM_DISABLE_UDEV environment variable is set. Bypassing udev, device-mapper library will manage device nodes in device directory.
Setting this environment variable will cause a full fallback
to the old direct node and symlink management in libdevmapper and lvm2.
It means:
- disabling udev synchronization
(--noudevsync in dmsetup and --noudevsync + activation/udev_sync=0
lvm2 config)
- disabling dm and any subsystem related udev rules
(--noudevrules in dmsetup and activation/udev_rules=0 lvm2 config)
- management of nodes/symlinks under /dev directly by libdevmapper/lvm2
(--verifyudev in dmsetup and activation/verify_udev_operations=1
lvm2 config)
- not obtaining any device list from udev database
(devices/obtain_device_list_from_udev=0 lvm2 config)
Note: we could set all of these before - there's no functional change!
However, the DM_DISABLE_UDEV environment variable is a nice shortcut
for libdevmapper users: it switches off all of the udev management in
one go directly on the command line, without a need to modify any source
or add any extra switches.
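For dmsetup, the variable is roughly equivalent to spelling the switches
out by hand (the lvm2-only setting obtain_device_list_from_udev has no
dmsetup counterpart):

$ DM_DISABLE_UDEV=1 dmsetup create test --table "0 1 zero"
$ dmsetup create test --table "0 1 zero" --noudevsync --noudevrules --verifyudev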
The cookie_set variable found in struct dm_task should always be
set to 1 after the dm_task_set_cookie call, even if udev_sync is disabled,
as the cookie itself carries synchronization information *as well as*
extra flags to control other aspects of udev support.
For example, one could disable the synchronization itself, but still
direct the libdm code to disable the library fallback via the
DM_UDEV_DISABLE_LIBRARY_FALLBACK flag. These extra flags still need
to be carried over!
A concrete example:
$ dmsetup create test --table "0 1 zero" --noudevsync
This disables synchronization with udev. As the --verifyudev option is
not used, we don't want to do any corrections. In other words, we need
the DM_UDEV_DISABLE_LIBRARY_FALLBACK flag to be applied. However,
with --noudevsync this was not the case - the flag was ignored!
This patch fixes the case where noudevsync is used but there are still
some extra flags passed within the flags part of the cookie. The
synchronization part of the cookie stays zero (which is fine, as a
dm_udev_wait call on such a cookie is simply a no-op).
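A minimal sketch of such a libdevmapper caller (a hypothetical example
using standard libdm calls; error handling is abbreviated):

#include <libdevmapper.h>

/* Hypothetical: create a device without udev synchronization, yet still
 * pass DM_UDEV_DISABLE_LIBRARY_FALLBACK through the cookie so that libdm
 * does not fall back to creating the nodes itself. */
static int create_dev_nosync(const char *name)
{
	struct dm_task *dmt;
	uint32_t cookie = 0;
	int r = 0;

	dm_udev_set_sync_support(0);	/* the library-level --noudevsync */

	if (!(dmt = dm_task_create(DM_DEVICE_CREATE)))
		return 0;

	if (!dm_task_set_name(dmt, name) ||
	    !dm_task_add_target(dmt, 0, 1, "zero", ""))
		goto out;

	/* Even with sync disabled, the cookie still carries the extra flags;
	 * this is exactly what the fix ensures gets through. */
	if (!dm_task_set_cookie(dmt, &cookie, DM_UDEV_DISABLE_LIBRARY_FALLBACK))
		goto out;

	r = dm_task_run(dmt);

	(void) dm_udev_wait(cookie);	/* effectively a no-op here */
out:
	dm_task_destroy(dmt);
	return r;
}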
Use log_warn to print non-fatal warning messages.
Use of log_error would confuse the checker that tests
whether a proper error has been reported for a real error.
On each ioctl return, the device UUID is decoded from \xNN format.
If the UUID of the device being *removed* is malformed (e.g. it
hasn't been corrected before), just remove the device without reporting
any error, as the UUID is not needed anymore - the device is gone anyway.
Otherwise a misleading error message would be issued just after
the removal:
# dmsetup remove test
The UUID "a b" should be mangled but it contains blacklisted characters.
Command failed
Just as we already have mangling support for device-mapper
names, we need exactly the same for device-mapper UUIDs, as their
character whitelist is wider than what udev supports.
In case udev is used to create entries in /dev based on UUIDs
and these UUIDs contain characters not supported by udev,
we'll end up with incorrect /dev content for such devices.
So we need to mangle them to a form that is supported by udev.
The mangling used for UUIDs follows the mangling used for names
(which is already supported and used throughout). That means setting
the name mangling mode via dm_set_name_mangling_mode affects the
mangling used for UUIDs in exactly the same manner. It would be
pointless to add a new and separate dm_set_uuid_mangling_mode fn;
we'll reuse the existing interface.
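To illustrate, with 'auto' or 'hex' mangling in effect a UUID containing
a character udev does not support ends up stored in its \xNN form
(values are examples only):

original UUID:  LVM-a b
mangled UUID:   LVM-a\x20b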
(un)mangle_name -> (un)mangle_string
check_multiple_mangled_name_allowed -> check_multiple_mangled_string_allowed
Just for clarity as the same functions will be reused to (un)mangle dm UUIDs.
This patch clears the flag if a thin pool is stacked over a mirror.
Since a thin pool may be stacked over mirrors, such mirrors need to be
resumed properly, i.e. mirrors with a core log, which are otherwise
unconditionally skipped (for pvmove functionality).
If we were defining a section (which is a node without a value) and
the value was created automatically on the dm_config_create_node call,
we were wasting resources, as the next step after creating the config
node itself was assigning NULL to the node's value.
The dm_config_create_node + dm_config_create_value sequence should be
used for settings, and dm_config_create_node alone for sections.
The majority of the code already used this sequence, but with the
dm_config_create_node fn creating the value as well, pool memory was
being wasted this way.
This patch removes the node value initialization from the
dm_config_create_node fn and keeps it for the direct
dm_config_create_value fn call.
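A small sketch of the two intended usage patterns (field names follow the
public dm_config structures; linking the nodes into the tree is omitted):

#include <libdevmapper.h>

static int build_config(struct dm_config_tree *cft)
{
	struct dm_config_node *section, *setting;

	/* a section is just a node without a value */
	if (!(section = dm_config_create_node(cft, "global")))
		return 0;

	/* a setting gets its value created explicitly */
	if (!(setting = dm_config_create_node(cft, "use_lvmetad")) ||
	    !(setting->v = dm_config_create_value(cft)))
		return 0;

	setting->v->type = DM_CFG_INT;
	setting->v->v.i = 1;

	return 1;
}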
This patch adds support for RAID10. It is not the default at this
stage. The user needs to specify '--type raid10' if they would like
RAID10 instead of stacked mirror over stripe.
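Example usage (names and size are illustrative):

$ lvcreate --type raid10 -m 1 -i 2 -L 1G -n lv0 vg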
Add a couple of INTERNAL_ERROR reports for unwanted parameters:
Ensure the 'top' metadata node cannot be NULL for lvmetad.
Make obvious vginfo2 cannot be NULL.
Report an internal error if handler and vg are undefined.
Check for handle in poll_vg().
Ensure seg is not NULL in dev_manager_transient().
Report missing read_ahead for _lv_read_ahead_single().
Check for report handler in dm_report_object().
Check missing VG in _vgreduce_single().
A regression introduced in 2.02.89 (11e520256b)
caused lvm dumpconfig <node> to print out
the node as well as its subsequent siblings.
The information about "only_one" mode got lost.
Before this patch (just an example node):
# lvm dumpconfig global/use_lvmetad
use_lvmetad=1
thin_check_executable="/usr/sbin/thin_check"
thin_check_options="-q"
(...all nodes to the end of the section)
With this patch applied:
# lvm dumpconfig global/use_lvmetad
use_lvmetad=1
With the latest changes in udev, some deprecated functions were removed
from libudev, amongst them the "udev_get_dev_path" function we used to
compare the device directory used in udev with the directory set in
libdevmapper. The "/dev" path is hardcoded in udev now (udev version >= 183).
Amongst other changes, and from a packager's point of view, it's also
important to note that the libudev development library ("libudev-devel")
could now be a part of the systemd development library ("systemd-devel")
because of the udev + systemd merge.
Auto mode can't deal with multiple mangled names. We can handle that
while working in hex mode, but in auto mode it would lead to device name
ambiguity. Be more strict when unmangling names on ioctl return: require
the name to be properly mangled in 'auto' and 'hex' mode. There really
should not be any blacklisted characters left, since the names should
already have been renamed (either directly or by running 'dmsetup mangle'
for an automatic rename).
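For example, a device whose name was created with mangling switched off
can later be brought into the mangled form (commands are illustrative):

$ dmsetup create "a b" --table "0 8 zero" --manglename none
$ dmsetup mangle

After the rename the kernel holds the mangled name (a\x20b), which auto
mode can then unmangle cleanly on every ioctl return.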