mirror of git://sourceware.org/git/lvm2.git

Compare commits


3 Commits

Author SHA1 Message Date
David Teigland
90bf285941 use vgck updatemetadata in tests 2019-01-31 11:32:30 -06:00
David Teigland
61dc563bc8 vgck updatemetadata command
A new command that simply calls vg_read, vg_write, vg_commit
which will fix or clean up various simple metadata issues:

. some mdas have old versions of the metadata; it will update
  them with the latest version

. wipe outdated PVs: a PV that was removed from the VG
  while its device was missing; when the device reappears,
  the VG metadata needs to be cleared from that PV

. fix pv_header PV_EXT_USED flag and/or update the
  header version

. strip historical lvs

. clear the MISSING_PV flag in the metadata for PVs that
  were flagged as MISSING but have no PEs allocated
  on them.
2019-01-30 16:27:09 -06:00
David Teigland
6c446d272e improve reading and repairing vg metadata
The fact that vg repair is implemented as a part of vg read
has led to a very poor implementation of vg_read.  This splits
read and repair apart.

Summary
-------

- take all kinds of various repairs out of vg_read
- vg_read no longer writes anything
- vg_read now simply reads and returns vg metadata
- vg_read can proceed with a single good copy of metadata
- vg_read should ignore bad or old copies of metadata
- improve error checks and handling when reading
- keep track of bad (corrupt) copies of metadata in lvmcache
- keep track of old (seqno) copies of metadata in lvmcache
- keep track of outdated PVs in lvmcache
- wipe outdated PVs in vg_write instead of vg_read
- fix PV headers in vg_write instead of vg_read
- update old metadata in vg_write instead of vg_read
- do not conflate bad/old metadata with missing devs
- separate commands for other vg repairs will follow

Reading bad/old metadata
------------------------

- "bad metadata" is a copy of the metadata that has been corrupted,
  or can't be read, or has some invalid data that can't be parsed or
  understood by lvm.  It's often reported as a checksum error, but
  not always.  Bad metadata should be replaced with a copy of good
  metadata from another PV (or from a good copy on the same PV.)

- "old metadata" is a copy of the metadata that has a smaller seqno
  than other copies of the metadata.  It could happen if the device
  failed, or io failed, or lvm failed while committing new metadata
  to all the metadata areas.  Old metadata on a PV that is still in
  the VG should be replaced with a copy of good metadata from another
  PV (or from a good copy on the same PV).  Old metadata on a PV that
  has been removed from the VG should be erased.

When a VG has some PVs with bad/old metadata, lvm can simply ignore
the bad/old copies, and use a good copy.  This is why there are
multiple copies of the metadata -- so it's available even when some
of the copies cannot be used.  The bad/old copies do not have to be
repaired before the VG can be used (the repair can happen later.)

A PV with no good copies of the metadata simply falls back to being
treated like a PV with no mdas, a common and harmless configuration.
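
To make the selection rule concrete, here is a minimal sketch of it in C.
This is not LVM's code; the struct and function names are invented for
illustration.  It only captures the rule described above: skip unreadable
(bad) copies, and among the readable ones use the highest seqno.

#include <stddef.h>

struct mda_copy {
	int readable;      /* 0 for a "bad" copy: corrupt or unparsable */
	unsigned seqno;    /* sequence number carried by this copy */
};

/* Return the usable copy with the largest seqno, or NULL if none is usable. */
static const struct mda_copy *pick_good_copy(const struct mda_copy *copies, int count)
{
	const struct mda_copy *best = NULL;
	int i;

	for (i = 0; i < count; i++) {
		if (!copies[i].readable)
			continue;                       /* ignore bad copies */
		if (!best || copies[i].seqno > best->seqno)
			best = &copies[i];              /* smaller seqno copies are "old" */
	}
	return best;
}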

When bad/old metadata exists, lvm warns the user about it, and
suggests repairing it using a new metadata repair command.
Bad/old metadata is something that users will often want to
investigate and repair themselves, since it should not generally
happen and may indicate some other problem that needs to be fixed.

PVs with bad/old metadata are not the same as missing devices.
Missing devices will block various kinds of VG modification or
activation, but bad/old metadata will not.

Previously, lvm would attempt to repair bad/old metadata whenever
it was read.  This was unnecessary since lvm does not require every
copy of the metadata to be used.  It would also hide potential
problems that should be investigated by the user.  It was also
dangerous in cases where the VG was on shared storage.  The user
is now allowed to investigate potential problems and decide how
and when to repair them.

Repairing bad/old metadata
--------------------------

When label scan sees bad metadata in an mda, that mda is removed
from the lvmcache info->mdas list.  This means that vg_read will
skip it, and not attempt to read/process it again.  If it was
the only in-use mda on a PV, that PV is treated like a PV with
no mdas.  It also means that vg_write will skip the bad mda,
and not attempt to write new metadata to it.  The only way to
repair bad metadata is with metadata repair (see next commit).
(We may also want to allow pvchange --metadataignore on a PV
with bad metadata.)

When label scan sees old metadata in an mda, that mda is kept
in the lvmcache info->mdas list.  This means that vg_read will
read/process it again, and likely see the same mismatch with
the other copies of the metadata.  Like label_scan,
vg_read will simply ignore the old copy of the metadata and
use the latest copy.  If the command is modifying the vg
(e.g. lvcreate), then vg_write, which writes new metadata to
every mda on info->mdas, will write the new metadata to the
mda that had the problematic old version.  If successful,
this will resolve the old metadata problem (without needing
to run a metadata repair command.)

Outdated PVs
------------

An outdated PV is a PV that has an old copy of VG metadata
that shows it is a member of the VG, but the latest copy of
the VG metadata does not include this PV.  This happens if
the PV is disconnected, vgreduce --removemissing is run to
remove the PV from the VG, then the PV is reconnected.
In this case, the outdated PV needs to have its outdated metadata
removed and the PV used flag needs to be cleared.  This repair
will be done by the subsequent repair command.  It is also done
if vgremove is run on the VG.

MISSING PVs
-----------

When a device is missing, most commands will refuse to modify
the VG.  This is the simple case.  More complicated is when
a command is allowed to modify the VG while it is missing a
device.

When a VG is written while a device is missing for one of its PVs,
the VG metadata includes the MISSING_PV flag on the PV with the
missing device.  When the VG is next used, it needs to be treated
as if this PV with the MISSING flag is still missing, even if the
device has reappeared.

vgreduce --removemissing will remove PVs with missing devices,
or PVs with the MISSING flag where the device has reappeared.

vgextend --restoremissing will clear the MISSING flag on PVs
where the device has reappeared, allowing the VG to be used
normally.  This must be done with caution since the reappeared
device may have old data that is inconsistent with data on other PVs.
2019-01-30 15:59:00 -06:00
465 changed files with 10291 additions and 41444 deletions

.gitignore vendored

@@ -25,29 +25,17 @@ make.tmpl
/autom4te.cache/
/autoscan.log
/build/
/config.cache
/config.log
/config.status
/configure.scan
/cscope.*
/html/
/reports/
/cscope.out
/tags
/tmp/
coverity/coverity_model.xml
# gcov files:
*.gcda
*.gcno
tools/man-generator
tools/man-generator.c
test/.lib-dir-stamp
test/.tests-stamp
test/lib/dmsecuretest
test/lib/lvchange
test/lib/lvconvert
test/lib/lvcreate
@@ -72,7 +60,6 @@ test/lib/pvremove
test/lib/pvresize
test/lib/pvs
test/lib/pvscan
test/lib/securetest
test/lib/vgcfgbackup
test/lib/vgcfgrestore
test/lib/vgchange

README

@@ -1,6 +1,7 @@
This tree contains the LVM2 and device-mapper tools and libraries.
This is development branch, for stable 2.02 release see stable-2.02 branch.
This is development branch, for stable 2.02 release see 2018-06-01-stable
branch.
For more information about LVM2 read the changelog in the WHATS_NEW file.
Installation instructions are in INSTALL.
@@ -9,6 +10,7 @@ There is no warranty - see COPYING and COPYING.LIB.
Tarballs are available from:
ftp://sourceware.org/pub/lvm2/
ftp://sources.redhat.com/pub/lvm2/
https://github.com/lvmteam/lvm2/releases
The source code is stored in git:
@@ -43,9 +45,6 @@ Report upstream bugs at:
or open issues at:
https://github.com/lvmteam/lvm2/issues
The source code repository used until 7th June 2012 is accessible using CVS:
The source code repository used until 7th June 2012 is accessible here:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/LVM2/?cvsroot=lvm2.
cvs -d :pserver:cvs@sourceware.org:/cvs/lvm2 login cvs
cvs -d :pserver:cvs@sourceware.org:/cvs/lvm2 checkout LVM2
The password is cvs.


@@ -1 +1 @@
2.03.12(2)-git (2021-01-08)
2.03.02(2)-git (2018-10-31)


@@ -1 +1 @@
1.02.177-git (2021-01-08)
1.02.155-git (2018-10-31)

WHATS_NEW

@@ -1,178 +1,5 @@
Version 2.03.12 -
Version 2.03.02 -
===================================
Add devices file feature, off by default for now.
Support extension of writecached volumes.
Fix problem with unbound variable usage within fsadm.
Check for presence of VDO target before starting any conversion.
Support metatadata profiles with volume VDO pool conversions.
Support -Zn for conversion of already formated VDO pools.
Avoid removing LVs on error path of lvconvert during creation volumes.
Fix crashing lvdisplay when thin volume was waiting for merge.
Support option --errorwhenfull when converting volume to thin-pool.
Improve thin-performance profile support conversion to thin-pool.
Add workaround to avoid read of internal 'converted' devices.
Prohibit merging snapshot into the read-only thick snapshot origin.
Restore support for flipping rw/r permissions for thin snapshot origin.
Support resize of cached volumes.
Disable autoactivation with global/event_activation=0.
Check if lvcreate passes read_only_volume_list with tags and skips zeroing.
Allocation prints better error when metadata cannot fit on a single PV.
Pvmove can better resolve full thin-pool tree move.
Limit pool metadata spare to 16GiB.
Improves convertsion and allocation of pool metadata.
Support thin pool metadata 15.88GiB, adds 64MiB, thin_pool_crop_metadata=0.
Enhance lvdisplay to report raid availiable/partial.
Support online rename of VDO pools.
Imporove removal of pmspare when last pool is removed.
Fix problem with wiping of converted LVs.
Fix memleak in scanning (2.03.11).
Fix corner case allocation for thin-pools.
Version 2.03.11 - 08th January 2021
===================================
Fix pvck handling MDA at offset different from 4096.
Partial or degraded activation of writecache is not allowed.
Enhance error handling for fsadm and handle correct fsck result.
Dmeventd lvm plugin ignores higher reserved_stack lvm.conf values.
Support using BLKZEROOUT for clearing devices.
Support interruption when wipping LVs.
Support interruption for bcache waiting.
Fix bcache when device has too many failing writes.
Fix bcache waiting for IO completion with failing disks.
Configure use own python path name order to prefer using python3.
Add configure --enable-editline support as an alternative to readline.
Enhance reporting and error handling when creating thin volumes.
Enable vgsplit for VDO volumes.
Lvextend of vdo pool volumes ensure at least 1 new VDO slab is added.
Use revert_lv() on reload error path after vg_revert().
Configure --with-integrity enabled.
Restore lost signal blocking while VG lock is held.
Improve estimation of needed extents when creating thin-pool.
Use extra 1% when resizing thin-pool metadata LV with --use-policy.
Enhance --use-policy percentage rounding.
Configure --with-vdo and --with-writecache as internal segments.
Improving VDO man page examples.
Allow pvmove of writecache origin.
Report integrity fields.
Integrity volumes defaults to journal mode.
Switch code base to use flexible array syntax.
Fix 64bit math when calculation cachevol size.
Preserve uint32_t for seqno handling.
Switch from mmap to plain read when loading regular files.
Update lvmvdo man page and better explain DISCARD usage.
Version 2.03.10 - 09th August 2020
==================================
Add writecache and integrity support to lvmdbusd.
Generate unique cachevol name when default required from lvcreate.
Converting RAID1 volume to one with same number of legs now succeeds with a
warning.
Fix conversion to raid from striped lagging type.
Fix conversion to 'mirrored' mirror log with larger regionsize.
Zero pool metadata on allocation (disable with allocation/zero_metadata=0).
Failure in zeroing or wiping will fail command (bypass with -Zn, -Wn).
Add lvcreate of new cache or writecache lv with single command.
Fix running out of free buffers for async writing for larger writes.
Add integrity with raid capability.
Fix support for lvconvert --repair used by foreign apps (i.e. Docker).
Version 2.03.09 - 26th March 2020
=================================
Fix formating of vdopool (vdo_slab_size_mb was smaller by 2 bits).
Fix showing of a dm kernel error when uncaching a volume with cachevol.
Version 2.03.08 - 11th February 2020
====================================
Prevent problematic snapshots of writecache volumes.
Add error handling for failing allocation in _reserve_area().
Fix memleak in syncing of internal cache.
Fix pvck dump_current_text memleak.
Fix lvmlockd result code on error path for _query_lock_lv().
Update pvck man page and help output.
Reject invalid writecache high/low_watermark setting.
Report writecache status.
Accept more output lines from vdo_format.
Prohibit reshaping of stacked raid LVs.
Avoid running cache input arg validation when creating vdo pool.
Prevent raid reshaping of stacked volumes.
Added VDO lvmdbusd methods for enable/disable compression & dedupe.
Added VDO lvmdbusd method for converting LV to VDO pool.
Version 2.03.07 - 30th November 2019
====================================
Subcommand in vgck for repairing headers and metadata.
Ensure minimum required region size on striped RaidLV creation.
Fix resize of thin-pool with data and metadata of different segtype.
Improve mirror type leg splitting.
Improve error path handling in daemons on shutdown.
Fix activation order when removing merged snapshot.
Experimental VDO support for lvmdbusd.
Version 2.03.06 - 23rd October 2019
===================================
Add _cpool suffix to cache-pool LV name when used by caching LV.
No longer store extra UUID for cmeta and cdata cachevol layer.
Enhance activation of cache devices with cachevols.
Add _cvol in list of protected suffixes and start use it with DM UUID.
Rename LV converted to cachevol to use _cvol suffix.
Use normal LVs for wiping of cachevols.
Reload cleanered cache DM only with cleaner policy.
Fix cmd return when zeroing of cachevol fails.
Extend lvs to show all VDO properties.
Preserve VDO write policy with vdopool.
Increase default vdo bio threads to 4.
Continue report when cache_status fails.
Add support for DM_DEVICE_GET_TARGET_VERSION into device_mapper.
Fix cmirrord usage of header files from device_mapper subdir.
Allow standalone activation of VDO pool just like for thin-pools.
Activate thin-pool layered volume as 'read-only' device.
Ignore crypto devices with UUID signature CRYPT-SUBDEV.
Enhance validation for thin and cache pool conversion and swapping.
Improve internal removal of cached devices.
Synchronize with udev when dropping snapshot.
Add missing device synchronization point before removing pvmove node.
Correctly set read_ahead for LVs when pvmove is finished.
Remove unsupported OPTIONS+="event_timeout" udev rule from 11-dm-lvm.rules.
Prevent creating VGs with PVs with different logical block sizes.
Fix metadata writes from corrupting with large physical block size.
Version 2.03.05 - 15th June 2019
================================
Fix command definition for pvchange -a.
Add vgck --updatemetadata command that will repair metadata problems.
Improve VG reading to work if one good copy of metadata is found.
Report/display/scan commands that read VGs will no longer write/repair.
Move metadata repairs from VG reading to VG writing.
Add config setting md_component_checks to control MD component checks.
Add end of device MD component checks when dev has no udev info.
Version 2.03.04 - 10th June 2019
================================
Remove unused_duplicate_devs from cmd causing segfault in dmeventd.
Version 2.03.03 - 07th June 2019
================================
Report no_discard_passdown for cache LVs with lvs -o+kernel_discards.
Add pvck --dump option to extract metadata.
Fix signal delivery checking race in libdaemon (lvmetad).
Add missing Before=shutdown.target to LVM2 services to fix shutdown ordering.
Skip autoactivation for a PV when PV size does not match device size.
Remove first-pvscan-initialization which should no longer be needed.
Add remote refresh through lvmlockd/dlm for shared LVs after lvextend.
Ignore foreign and shared PVs for pvscan online files.
Add config setting to control fields in debug file and verbose output.
Add command[pid] and timestamp to debug file and verbose output.
Fix missing growth of _pmsmare volume when extending _tmeta volume.
Automatically grow thin metadata, when thin data gets too big.
Add synchronization with udev before removing cached devices.
Add support for caching VDO LVs and VDOPOOL LVs.
Add support for vgsplit with cached devices.
Query mpath device only once per command for its state.
Use device INFO instead of STATUS when checking for mpath device uuid.
Change default io_memory_size from 4 to 8 MiB.
Add config setting io_memory_size to set bcache size.
Fix pvscan autoactivation for concurrent pvscans.
Change scan_lvs default to 0 so LVs are not scanned for PVs.
Thin-pool selects power-of-2 chunk size by default.
Cache selects power-of-2 chunk size by default.
Support reszing for VDOPoolLV and VDOLV.
@@ -187,14 +14,10 @@ Version 2.03.03 - 07th June 2019
Fix missing unlock on lvm2 dmeventd plugin error path initialization.
Improve Makefile dependency tracking.
Move VDO support towards V2 target (6.2) support.
Version 2.03.02 - 18th December 2018
====================================
Fix missing proper initialization of pv_list struct when adding pv.
Fix (de)activation of RaidLVs with visible SubLVs.
Prohibit mirrored 'mirror' log via lvcreate and lvconvert.
Fix (de)activation of RaidLVs with visible SubLVs
Prohibit mirrored 'mirror' log via lvcreate and lvconvert
Use sync io if async io_setup fails, or use_aio=0 is set in config.
Fix more issues reported by coverity scan.
Version 2.03.01 - 31st October 2018
===================================


@@ -1,51 +1,10 @@
Version 1.02.177 -
Version 1.02.155 -
====================================
Add dm_tree_node_add_thin_pool_target_v1 with crop_metadata support.
Version 1.02.175 - 08th January 2021
====================================
Version 1.02.173 - 09th August 2020
===================================
Add support for VDO in blkdeactivate script.
Version 1.02.171 - 26th March 2020
==================================
Try to remove all created devices on dm preload tree error path.
Fix dm_list interators with gcc 10 optimization (-ftree-pta).
Dmeventd handles timer without looping on short intervals.
Version 1.02.169 - 11th February 2020
=====================================
Enhance error messages for device creation.
Version 1.02.167 - 30th November 2019
=====================================
Version 1.02.165 - 23rd October 2019
====================================
Add support for DM_DEVICE_GET_TARGET_VERSION.
Add debug of dmsetup udevcomplete with hexa print DM_COOKIE_COMPLETED.
Fix versioning of dm_stats_create_region and dm_stats_create_region.
Version 1.02.163 - 15th June 2019
=================================
Version 1.02.161 - 10th June 2019
=================================
Version 1.02.159 - 07th June 2019
=================================
Parsing of cache status understand no_discard_passdown.
Ensure migration_threshold for cache is at least 8 chunks.
Version 1.02.155 - 18th December 2018
=====================================
Include correct internal header inside libdm list.c.
Enhance ioctl flattening and add parameters only when needed.
Add DM_DEVICE_ARM_POLL for API completness matching kernel.
Do not add parameters for RESUME with DM_DEVICE_CREATE dm task.
Fix dmstats report printing no output.
Version 1.02.153 - 31st October 2018
====================================

aclocal.m4 vendored

@@ -1,6 +1,6 @@
# generated automatically by aclocal 1.16.2 -*- Autoconf -*-
# generated automatically by aclocal 1.15.1 -*- Autoconf -*-
# Copyright (C) 1996-2020 Free Software Foundation, Inc.
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -413,7 +413,7 @@ AS_IF([test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"],
[AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
])dnl PKG_HAVE_DEFINE_WITH_MODULES
# Copyright (C) 1999-2020 Free Software Foundation, Inc.
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -446,12 +446,10 @@ AC_DEFUN([AM_PATH_PYTHON],
[
dnl Find a Python interpreter. Python versions prior to 2.0 are not
dnl supported. (2.0 was released on October 16, 2000).
dnl FIXME: Remove the need to hard-code Python versions here.
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],
[python python2 python3 dnl
python3.9 python3.8 python3.7 python3.6 python3.5 python3.4 python3.3 dnl
python3.2 python3.1 python3.0 dnl
python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 dnl
python2.0])
[python python2 python3 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0])
AC_ARG_VAR([PYTHON], [the Python interpreter])
@@ -496,14 +494,12 @@ AC_DEFUN([AM_PATH_PYTHON],
m4_default([$3], [AC_MSG_ERROR([no suitable Python interpreter found])])
else
dnl Query Python for its version number. Although site.py simply uses
dnl sys.version[:3], printing that failed with Python 3.10, since the
dnl trailing zero was eliminated. So now we output just the major
dnl and minor version numbers, as numbers. Apparently the tertiary
dnl version is not of interest.
dnl Query Python for its version number. Getting [:3] seems to be
dnl the best way to do this; it's what "site.py" does in the standard
dnl library.
AC_CACHE_CHECK([for $am_display_PYTHON version], [am_cv_python_version],
[am_cv_python_version=`$PYTHON -c "import sys; print('%u.%u' % sys.version_info[[:2]])"`])
[am_cv_python_version=`$PYTHON -c "import sys; sys.stdout.write(sys.version[[:3]])"`])
AC_SUBST([PYTHON_VERSION], [$am_cv_python_version])
dnl Use the values of $prefix and $exec_prefix for the corresponding
@@ -653,7 +649,7 @@ for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[[i]]
sys.exit(sys.hexversion < minverhex)"
AS_IF([AM_RUN_LOG([$1 -c "$prog"])], [$3], [$4])])
# Copyright (C) 2001-2020 Free Software Foundation, Inc.
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,


@@ -22,7 +22,7 @@ struct dm_hash_node {
void *data;
unsigned data_len;
unsigned keylen;
char key[];
char key[0];
};
struct dm_hash_table {
@@ -59,27 +59,26 @@ static unsigned char _nums[] = {
209
};
static struct dm_hash_node *_create_node(const void *key, unsigned len)
static struct dm_hash_node *_create_node(const char *str, unsigned len)
{
struct dm_hash_node *n = malloc(sizeof(*n) + len);
if (n) {
memcpy(n->key, key, len);
memcpy(n->key, str, len);
n->keylen = len;
}
return n;
}
static unsigned long _hash(const void *key, unsigned len)
static unsigned long _hash(const char *str, unsigned len)
{
const unsigned char *str = key;
unsigned long h = 0, g;
unsigned i;
for (i = 0; i < len; i++) {
h <<= 4;
h += _nums[*str++];
h += _nums[(unsigned char) *str++];
g = h & ((unsigned long) 0xf << 16u);
if (g) {
h ^= g >> 16u;
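
The hunks above (struct dm_hash_node and _create_node) also show the two
spellings of the trailing key storage: the GNU zero-length array
"char key[0]" and the standard C99 flexible array member "char key[]".
A small standalone sketch of the allocation pattern both forms rely on;
the struct and function names here are illustrative, not the libdm API:

#include <stdlib.h>
#include <string.h>

struct node {
	unsigned keylen;
	char key[];        /* flexible array member: key bytes are stored after the struct */
};

/* Allocate the header and the key in a single block, as _create_node() does. */
static struct node *node_create(const void *key, unsigned len)
{
	struct node *n = malloc(sizeof(*n) + len);

	if (n) {
		memcpy(n->key, key, len);
		n->keylen = len;
	}
	return n;
}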


@@ -1,8 +1,6 @@
#ifndef BASE_DATA_STRUCT_LIST_H
#define BASE_DATA_STRUCT_LIST_H
#include "base/memory/container_of.h"
//----------------------------------------------------------------
/*
@@ -100,7 +98,7 @@ struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *e
* contained in a structure of type t, return the containing structure.
*/
#define dm_list_struct_base(v, t, head) \
container_of(v, t, head)
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
/*
* Given the address v of an instance of 'struct dm_list list' contained in
@@ -113,7 +111,7 @@ struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *e
* return another element f.
*/
#define dm_struct_field(v, t, e, f) \
(((t *)((uintptr_t)(v) - offsetof(t, e)))->f)
(((t *)((uintptr_t)(v) - (uintptr_t)&((t *) 0)->e))->f)
/*
* Given the address v of a known element e in a known structure of type t,


@@ -47,7 +47,7 @@ struct value_chain {
struct prefix_chain {
struct value child;
unsigned len;
uint8_t prefix[];
uint8_t prefix[0];
};
struct node4 {
@@ -1032,7 +1032,7 @@ void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
{
struct lookup_result lr = _lookup_prefix(&rt->root, kb, ke);
if (lr.kb == ke || _prefix_chain_matches(&lr, ke))
(void) _iterate(lr.v, it);
_iterate(lr.v, it);
}
//----------------------------------------------------------------


@@ -1,4 +1,4 @@
// Copyright (C) 2018 - 2020 Red Hat, Inc. All rights reserved.
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
@@ -13,12 +13,10 @@
#ifndef BASE_MEMORY_CONTAINER_OF_H
#define BASE_MEMORY_CONTAINER_OF_H
#include <stddef.h> // offsetof
//----------------------------------------------------------------
#define container_of(v, t, head) \
((t *)((char *)(v) - offsetof(t, head)))
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
//----------------------------------------------------------------
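
Both variants of the macro in this hunk compute the same thing: given a
pointer to a member embedded in a larger structure, subtract the member's
offset to recover the address of the enclosing structure.  The
dm_list_struct_base and dm_struct_field macros in the list hunk earlier
rely on the same idea.  A self-contained sketch using the offsetof-based
form; the item and link structs are hypothetical examples, not LVM types:

#include <stddef.h>
#include <stdio.h>

#define container_of(v, t, head) \
	((t *)((char *)(v) - offsetof(t, head)))

struct link {
	struct link *next;          /* stand-in for an embedded list member */
};

struct item {
	int value;
	struct link list;           /* the embedded member we will point at */
};

int main(void)
{
	struct item it = { .value = 42 };
	struct link *member = &it.list;     /* only the member address is known here */
	struct item *back = container_of(member, struct item, list);

	printf("%d\n", back->value);        /* prints 42 */
	return 0;
}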


@@ -78,32 +78,16 @@ devices {
# routines to acquire this information. For example, this information
# is used to drive LVM filtering like MD component detection, multipath
# component detection, partition detection and others.
#
#
# Accepted values:
# none
# No external device information source is used.
# udev
# Reuse existing udev database records. Applicable only if LVM is
# compiled with udev support.
#
#
external_device_info_source = "none"
# Configuration option devices/hints.
# Use a local file to remember which devices have PVs on them.
# Some commands will use this as an optimization to reduce device
# scanning, and will only scan the listed PVs. Removing the hint file
# will cause lvm to generate a new one. Disable hints if PVs will
# be copied onto devices using non-lvm commands, like dd.
#
# Accepted values:
# all
# Use all hints.
# none
# Use no hints.
#
# This configuration option has an automatic default value.
# hints = "all"
# Configuration option devices/preferred_names.
# Select which path name to display for a block device.
# If multiple path names exist for a block device, and LVM needs to
@@ -118,10 +102,10 @@ devices {
# Prefer the name with the least number of slashes.
# Prefer a name that is a symlink.
# Prefer the path with least value in lexicographical order.
#
#
# Example
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option devices/filter.
@@ -139,10 +123,10 @@ devices {
# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
# as the combination might produce unexpected results (test changes.)
# Run vgscan after changing the filter to regenerate the cache.
#
#
# Example
# Accept every block device:
# filter = [ "a|.*|" ]
# filter = [ "a|.*/|" ]
# Reject the cdrom drive:
# filter = [ "r|/dev/cdrom|" ]
# Work with just loopback devices, e.g. for testing:
@@ -150,10 +134,10 @@ devices {
# Accept all loop devices and ide drives except hdc:
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# Use anchors to be very specific:
# filter = [ "a|^/dev/hda8$|", "r|.*|" ]
#
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*|" ]
# filter = [ "a|.*/|" ]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
@@ -163,16 +147,16 @@ devices {
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*|" ]
# global_filter = [ "a|.*/|" ]
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
# maximum number of partitions.
#
#
# Example
# types = [ "fd", 16 ]
#
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
@@ -183,52 +167,17 @@ devices {
sysfs_scan = 1
# Configuration option devices/scan_lvs.
# Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.
# When 1, LVM will detect PVs layered on LVs, and caution must be
# taken to avoid a host accessing a layered VG that may not belong
# to it, e.g. from a guest image. This generally requires excluding
# the LVs with device filters. Also, when this setting is enabled,
# every LVM command will scan every active LV on the system (unless
# filtered), which can cause performance problems on systems with
# many active LVs. When this setting is 0, LVM will not detect or
# use PVs that exist on LVs, and will not allow a PV to be created on
# an LV. The LVs are ignored using a built in device filter that
# identifies and excludes LVs.
scan_lvs = 0
# Scan LVM LVs for layered PVs.
scan_lvs = 1
# Configuration option devices/multipath_component_detection.
# Ignore devices that are components of DM multipath devices.
multipath_component_detection = 1
# Configuration option devices/md_component_detection.
# Enable detection and exclusion of MD component devices.
# An MD component device is a block device that MD uses as part
# of a software RAID virtual device. When an LVM PV is created
# on an MD device, LVM must only use the top level MD device as
# the PV, and should ignore the underlying component devices.
# In cases where the MD superblock is located at the end of the
# component devices, it is more difficult for LVM to consistently
# identify an MD component, see the md_component_checks setting.
# Ignore devices that are components of software RAID (md) devices.
md_component_detection = 1
# Configuration option devices/md_component_checks.
# The checks LVM should use to detect MD component devices.
# MD component devices are block devices used by MD software RAID.
#
# Accepted values:
# auto
# LVM will skip scanning the end of devices when it has other
# indications that the device is not an MD component.
# start
# LVM will only scan the start of devices for MD superblocks.
# This does not incur extra I/O by LVM.
# full
# LVM will scan the start and end of devices for MD superblocks.
# This requires an extra read at the end of devices.
#
# This configuration option has an automatic default value.
# md_component_checks = "auto"
# Configuration option devices/fw_raid_component_detection.
# Ignore devices that are components of firmware RAID devices.
# LVM must use an external_device_info_source other than none for this
@@ -347,12 +296,6 @@ devices {
# Enabling this setting allows the VG to be used as usual even with
# uncertain devices.
allow_changes_with_duplicate_pvs = 0
# Configuration option devices/allow_mixed_block_sizes.
# Allow PVs in the same VG with different logical block sizes.
# When allowed, the user is responsible to ensure that an LV is
# using PVs with matching block sizes when necessary.
allow_mixed_block_sizes = 0
}
# Configuration section allocation.
@@ -367,7 +310,7 @@ allocation {
# defined here, it will check whether any of them are attached to the
# PVs concerned and then seek to match those PV tags between existing
# extents and new extents.
#
#
# Example
# Use the special tag "@*" as a wildcard to match any PV tag:
# cling_tag_list = [ "@*" ]
@@ -375,7 +318,7 @@ allocation {
# PVs are tagged with either @site1 or @site2 to indicate where
# they are situated:
# cling_tag_list = [ "@site1", "@site2" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option allocation/maximise_cling.
@@ -429,30 +372,29 @@ allocation {
# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
# Cache pool metadata and data will always use different PVs.
# This configuration option has an automatic default value.
# cache_pool_metadata_require_separate_pvs = 0
cache_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/cache_metadata_format.
# Sets default metadata format for new cache.
#
#
# Accepted values:
# 0 Automatically detected best available format
# 1 Original format
# 2 Improved 2nd. generation format
#
#
# This configuration option has an automatic default value.
# cache_metadata_format = 0
# Configuration option allocation/cache_mode.
# The default cache mode used for new cache.
#
#
# Accepted values:
# writethrough
# Data blocks are immediately written from the cache to disk.
# writeback
# Data blocks are written from the cache back to disk after some
# delay to improve performance.
#
#
# This setting replaces allocation/cache_pool_cachemode.
# This configuration option has an automatic default value.
# cache_mode = "writethrough"
@@ -489,16 +431,8 @@ allocation {
# This configuration option does not have a default value defined.
# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
# Thin pool metadata and data will always use different PVs.
# This configuration option has an automatic default value.
# thin_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/thin_pool_crop_metadata.
# Older version of lvm2 cropped pool's metadata size to 15.81 GiB.
# This is slightly less then the actual maximum 15.88 GiB.
# For compatibility with older version and use of cropped size set to 1.
# This configuration option has an automatic default value.
# thin_pool_crop_metadata = 0
# Thin pool metdata and data will always use different PVs.
thin_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/thin_pool_zero.
# Thin pool data chunks are zeroed before they are first used.
@@ -508,18 +442,18 @@ allocation {
# Configuration option allocation/thin_pool_discards.
# The discards behaviour of thin pool volumes.
#
#
# Accepted values:
# ignore
# nopassdown
# passdown
#
#
# This configuration option has an automatic default value.
# thin_pool_discards = "passdown"
# Configuration option allocation/thin_pool_chunk_size_policy.
# The chunk size calculation policy for thin pool volumes.
#
#
# Accepted values:
# generic
# If thin_pool_chunk_size is defined, use it. Otherwise, calculate
@@ -531,15 +465,10 @@ allocation {
# the chunk size for performance based on device hints exposed in
# sysfs - the optimal_io_size. The chunk size is always at least
# 512KiB.
#
#
# This configuration option has an automatic default value.
# thin_pool_chunk_size_policy = "generic"
# Configuration option allocation/zero_metadata.
# Zero whole metadata area before use with thin or cache pool.
# This configuration option has an automatic default value.
# zero_metadata = 1
# Configuration option allocation/thin_pool_chunk_size.
# The minimal chunk size in KiB for thin pool volumes.
# Larger chunk sizes may improve performance for plain thin volumes,
@@ -635,7 +564,7 @@ allocation {
# Each additional thread after the first will use an additional 18MiB of RAM,
# plus 1.12 MiB of RAM per megabyte of configured read cache size.
# This configuration option has an automatic default value.
# vdo_bio_threads = 4
# vdo_bio_threads = 1
# Configuration option allocation/vdo_bio_rotation.
# Specifies the number of I/O operations to enqueue for each bio-submission
@@ -791,8 +720,7 @@ log {
# Configuration option log/indent.
# Indent messages according to their severity.
# This configuration option has an automatic default value.
# indent = 0
indent = 1
# Configuration option log/command_names.
# Display the command name on each line of output.
@@ -818,20 +746,6 @@ log {
# available: memory, devices, io, activation, allocation,
# metadata, cache, locking, lvmpolld. Use "all" to see everything.
debug_classes = [ "memory", "devices", "io", "activation", "allocation", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
# Configuration option log/debug_file_fields.
# The fields included in debug output written to log file.
# Use "all" to include everything (the default).
# This configuration option is advanced.
# This configuration option has an automatic default value.
# debug_file_fields = [ "time", "command", "fileline", "message" ]
# Configuration option log/debug_output_fields.
# The fields included in debug output written to stderr.
# Use "all" to include everything (the default).
# This configuration option is advanced.
# This configuration option has an automatic default value.
# debug_output_fields = [ "time", "command", "fileline", "message" ]
}
# Configuration section backup.
@@ -919,6 +833,9 @@ global {
# the error messages.
activation = 1
# Configuration option global/segment_libraries.
# This configuration option does not have a default value defined.
# Configuration option global/proc.
# Location of proc filesystem.
# This configuration option is advanced.
@@ -944,7 +861,8 @@ global {
# a volume group's metadata, instead of always granting the read-only
# requests immediately, delay them to allow the read-write requests to
# be serviced. Without this setting, write access may be stalled by a
# high volume of read-only requests. This option only affects file locks.
# high volume of read-only requests. This option only affects
# locking_type 1 viz. local file-based locking.
prioritise_write_locks = 1
# Configuration option global/library_dir.
@@ -968,7 +886,7 @@ global {
# Configuration option global/mirror_segtype_default.
# The segment type used by the short mirroring option -m.
# The --type mirror|raid1 option overrides this setting.
#
#
# Accepted values:
# mirror
# The original RAID1 implementation from LVM/DM. It is
@@ -988,19 +906,18 @@ global {
# handling a failure. This mirror implementation is not
# cluster-aware and cannot be used in a shared (active/active)
# fashion in a cluster.
#
#
mirror_segtype_default = "@DEFAULT_MIRROR_SEGTYPE@"
# Configuration option global/support_mirrored_mirror_log.
# Enable mirrored 'mirror' log type for testing.
#
#
# This type is deprecated to create or convert to but can
# be enabled to test that activation of existing mirrored
# logs and conversion to disk/core works.
#
#
# Not supported for regular operation!
# This configuration option has an automatic default value.
# support_mirrored_mirror_log = 0
support_mirrored_mirror_log = 0
# Configuration option global/raid10_segtype_default.
# The segment type used by the -i -m combination.
@@ -1008,7 +925,7 @@ global {
# The --stripes/-i and --mirrors/-m options can both be specified
# during the creation of a logical volume to use both striping and
# mirroring for the LV. There are two different implementations.
#
#
# Accepted values:
# raid10
# LVM uses MD's RAID10 personality through DM. This is the
@@ -1018,7 +935,7 @@ global {
# is done by creating a mirror LV on top of striped sub-LVs,
# effectively creating a RAID 0+1 array. The layering is suboptimal
# in terms of providing redundancy and performance.
#
#
raid10_segtype_default = "@DEFAULT_RAID10_SEGTYPE@"
# Configuration option global/sparse_segtype_default.
@@ -1026,7 +943,7 @@ global {
# The --type snapshot|thin option overrides this setting.
# The combination of -V and -L options creates a sparse LV. There are
# two different implementations.
#
#
# Accepted values:
# snapshot
# The original snapshot implementation from LVM/DM. It uses an old
@@ -1038,7 +955,7 @@ global {
# bigger minimal chunk size (64KiB) and uses a separate volume for
# metadata. It has better performance, especially when more data
# is used. It also supports full snapshots.
#
#
sparse_segtype_default = "@DEFAULT_SPARSE_SEGTYPE@"
# Configuration option global/lvdisplay_shows_full_device_path.
@@ -1058,8 +975,7 @@ global {
# activated from these events (the default is all.)
# When event_activation is disabled, the system will generally run
# a direct activation command to activate LVs in complete VGs.
# This configuration option has an automatic default value.
# event_activation = 1
event_activation = 1
# Configuration option global/use_aio.
# Use async I/O when reading and writing devices.
@@ -1136,20 +1052,20 @@ global {
# causing problems. Features include: block_size, discards,
# discards_non_power_2, external_origin, metadata_resize,
# external_origin_extend, error_if_no_space.
#
#
# Example
# thin_disabled_features = [ "discards", "block_size" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option global/cache_disabled_features.
# Features to not use in the cache driver.
# This can be helpful for testing, or to avoid using a feature that is
# causing problems. Features include: policy_mq, policy_smq, metadata2.
#
#
# Example
# cache_disabled_features = [ "policy_smq" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option global/cache_check_executable.
@@ -1201,16 +1117,6 @@ global {
# This configuration option has an automatic default value.
# vdo_format_options = [ "" ]
# Configuration option global/vdo_disabled_features.
# Features to not use in the vdo driver.
# This can be helpful for testing, or to avoid using a feature that is
# causing problems. Features include: online_rename
#
# Example
# vdo_disabled_features = [ "online_rename" ]
#
# This configuration option does not have a default value defined.
# Configuration option global/fsadm_executable.
# The full path to the fsadm command.
# LVM uses this command to help with lvresize -r operations.
@@ -1223,7 +1129,7 @@ global {
# or vgimport.) A VG on shared storage devices is accessible only to
# the host with a matching system ID. See 'man lvmsystemid' for
# information on limitations and correct usage.
#
#
# Accepted values:
# none
# The host has no system ID.
@@ -1240,7 +1146,7 @@ global {
# file
# Use the contents of another file (system_id_file) to set the
# system ID.
#
#
system_id_source = "none"
# Configuration option global/system_id_file.
@@ -1268,16 +1174,6 @@ global {
# When enabled, an LVM command that changes PVs, changes VG metadata,
# or changes the activation state of an LV will send a notification.
notify_dbus = 1
# Configuration option global/io_memory_size.
# The amount of memory in KiB that LVM allocates to perform disk io.
# LVM performance may benefit from more io memory when there are many
# disks or VG metadata is large. Increasing this size may be necessary
# when a single copy of VG metadata is larger than the current setting.
# This value should usually not be decreased from the default; setting
# it too low can result in lvm failing to read VGs.
# This configuration option has an automatic default value.
# io_memory_size = 8192
}
# Configuration section activation.
@@ -1313,8 +1209,7 @@ activation {
# This enables additional checks (and if necessary, repairs) on entries
# in the device directory after udev has completed processing its
# events. Useful for diagnosing problems with LVM/udev interactions.
# This configuration option has an automatic default value.
# verify_udev_operations = 0
verify_udev_operations = 0
# Configuration option activation/retry_deactivation.
# Retry failed LV deactivation.
@@ -1339,34 +1234,30 @@ activation {
# When disabled, the striped target is used. The linear target is an
# optimised version of the striped target that only handles a single
# stripe.
# This configuration option has an automatic default value.
# use_linear_target = 1
use_linear_target = 1
# Configuration option activation/reserved_stack.
# Stack size in KiB to reserve for use while devices are suspended.
# Insufficent reserve risks I/O deadlock during device suspension.
# This configuration option has an automatic default value.
# reserved_stack = 64
reserved_stack = 64
# Configuration option activation/reserved_memory.
# Memory size in KiB to reserve for use while devices are suspended.
# Insufficent reserve risks I/O deadlock during device suspension.
# This configuration option has an automatic default value.
# reserved_memory = 8192
reserved_memory = 8192
# Configuration option activation/process_priority.
# Nice value used while devices are suspended.
# Use a high priority so that LVs are suspended
# for the shortest possible time.
# This configuration option has an automatic default value.
# process_priority = -18
process_priority = -18
# Configuration option activation/volume_list.
# Only LVs selected by this list are activated.
# If this list is defined, an LV is only activated if it matches an
# entry in this list. If this list is undefined, it imposes no limits
# on LV activation (all are allowed).
#
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
@@ -1380,10 +1271,10 @@ activation {
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
#
# Example
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option activation/auto_activation_volume_list.
@@ -1403,7 +1294,7 @@ activation {
# commands run directly by a user. A user may also use the 'a' flag
# directly to perform auto-activation. Also see pvscan(8) for more
# information about auto-activation.
#
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
@@ -1417,10 +1308,10 @@ activation {
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
#
# Example
# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option activation/read_only_volume_list.
@@ -1429,7 +1320,7 @@ activation {
# against this list, and if it matches, it is activated in read-only
# mode. This overrides the permission setting stored in the metadata,
# e.g. from --permission rw.
#
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
@@ -1443,10 +1334,10 @@ activation {
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
#
# Example
# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
#
# This configuration option does not have a default value defined.
# Configuration option activation/raid_region_size.
@@ -1469,15 +1360,14 @@ activation {
# Configuration option activation/readahead.
# Setting to use when there is no readahead setting in metadata.
#
#
# Accepted values:
# none
# Disable readahead.
# auto
# Use default value chosen by kernel.
#
# This configuration option has an automatic default value.
# readahead = "auto"
#
readahead = "auto"
# Configuration option activation/raid_fault_policy.
# Defines how a device failure in a RAID LV is handled.
@@ -1487,7 +1377,7 @@ activation {
# performed by dmeventd automatically, and the steps perfomed by the
# manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#
#
# Accepted values:
# warn
# Use the system log to warn the user that a device in the RAID LV
@@ -1498,7 +1388,7 @@ activation {
# allocate
# Attempt to use any extra physical volumes in the VG as spares and
# replace faulty devices.
#
#
raid_fault_policy = "warn"
# Configuration option activation/mirror_image_fault_policy.
@@ -1510,7 +1400,7 @@ activation {
# determines the steps perfomed by dmeventd automatically, and the steps
# performed by the manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#
#
# Accepted values:
# remove
# Simply remove the faulty device and run without it. If the log
@@ -1535,7 +1425,7 @@ activation {
# the redundant nature of the mirror. This policy acts like
# 'remove' if no suitable device and space can be allocated for the
# replacement.
#
#
mirror_image_fault_policy = "remove"
# Configuration option activation/mirror_log_fault_policy.
@@ -1550,26 +1440,26 @@ activation {
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see snapshot_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_threshold = 70
#
#
snapshot_autoextend_threshold = 100
# Configuration option activation/snapshot_autoextend_percent.
# Auto-extending a snapshot adds this percent extra space.
# The amount of additional space added to a snapshot is this
# percent of its current size.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_percent = 20
#
#
snapshot_autoextend_percent = 20
# Configuration option activation/thin_pool_autoextend_threshold.
@@ -1578,26 +1468,26 @@ activation {
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see thin_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_threshold = 70
#
#
thin_pool_autoextend_threshold = 100
# Configuration option activation/thin_pool_autoextend_percent.
# Auto-extending a thin pool adds this percent extra space.
# The amount of additional space added to a thin pool is this
# percent of its current size.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_percent = 20
#
#
thin_pool_autoextend_percent = 20
# Configuration option activation/vdo_pool_autoextend_threshold.
@@ -1606,21 +1496,20 @@ activation {
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see vdo_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# vdo_pool_autoextend_threshold = 70
#
# This configuration option has an automatic default value.
# vdo_pool_autoextend_threshold = 100
#
vdo_pool_autoextend_threshold = 100
# Configuration option activation/vdo_pool_autoextend_percent.
# Auto-extending a VDO pool adds this percent extra space.
# The amount of additional space added to a VDO pool is this
# percent of its current size.
#
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
@@ -1639,10 +1528,10 @@ activation {
# pages corresponding to lines that match are not pinned. On some
# systems, locale-archive was found to make up over 80% of the memory
# used by the process.
#
#
# Example
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
#
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
@@ -1650,8 +1539,7 @@ activation {
# Use the old behavior of mlockall to pin all memory.
# Prior to version 2.02.62, LVM used mlockall() to pin the whole
# process's memory while activating devices.
# This configuration option has an automatic default value.
# use_mlockall = 0
use_mlockall = 0
# Configuration option activation/monitoring.
# Monitor LVs that are activated.
@@ -1666,8 +1554,7 @@ activation {
# intervals of this number of seconds. If this is set to 0 and there
# is only one thing to wait for, there are no progress reports, but
# the process is awoken immediately once the operation is complete.
# This configuration option has an automatic default value.
# polling_interval = 15
polling_interval = 15
# Configuration option activation/auto_set_activation_skip.
# Set the activation skip flag on new thin snapshot LVs.
@@ -1683,7 +1570,7 @@ activation {
# Configuration option activation/activation_mode.
# How LVs with missing devices are activated.
# The --activationmode option overrides this setting.
#
#
# Accepted values:
# complete
# Only allow activation of an LV if all of the Physical Volumes it
@@ -1698,7 +1585,7 @@ activation {
# could cause data loss with a portion of the LV inaccessible.
# This setting should not normally be used, but may sometimes
# assist with data recovery.
#
#
activation_mode = "degraded"
# Configuration option activation/lock_start_list.
@@ -1746,7 +1633,7 @@ activation {
# Configuration option metadata/pvmetadatacopies.
# Number of copies of metadata to store on each PV.
# The --pvmetadatacopies option overrides this setting.
#
#
# Accepted values:
# 2
# Two copies of the VG metadata are stored on the PV, one at the
@@ -1756,7 +1643,7 @@ activation {
# 0
# No copies of VG metadata are stored on the PV. This may be
# useful for VGs containing large numbers of PVs.
#
#
# This configuration option is advanced.
# This configuration option has an automatic default value.
# pvmetadatacopies = 1
@@ -1788,6 +1675,7 @@ activation {
# additional space for VG metadata. The --metadatasize option overrides
# this setting.
# This configuration option does not have a default value defined.
# This configuration option has an automatic default value.
# Configuration option metadata/pvmetadataignore.
# Ignore metadata areas on a new PV.
@@ -1906,7 +1794,7 @@ activation {
# sequences are copied verbatim. Each special character sequence is
# introduced by the '%' character and such sequence is then
# substituted with a value as described below.
#
#
# Accepted values:
# %a
# The abbreviated name of the day of the week according to the
@@ -2029,7 +1917,7 @@ activation {
# The timezone name or abbreviation.
# %%
# A literal '%' character.
#
#
# This configuration option has an automatic default value.
# time_format = "%Y-%m-%d %T %z"
@@ -2223,8 +2111,7 @@ dmeventd {
# failures. It removes failed devices from a volume group and
# reconfigures a mirror as necessary. If no mirror library is
# provided, mirrors are not monitored through dmeventd.
# This configuration option has an automatic default value.
# mirror_library = "libdevmapper-event-lvm2mirror.so"
mirror_library = "libdevmapper-event-lvm2mirror.so"
# Configuration option dmeventd/raid_library.
# This configuration option has an automatic default value.
@@ -2235,16 +2122,14 @@ dmeventd {
# libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the snapshot is filled.
# This configuration option has an automatic default value.
# snapshot_library = "libdevmapper-event-lvm2snapshot.so"
snapshot_library = "libdevmapper-event-lvm2snapshot.so"
# Configuration option dmeventd/thin_library.
# The library dmeventd uses when monitoring a thin device.
# libdevmapper-event-lvm2thin.so monitors the filling of a pool
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the pool is filled.
# This configuration option has an automatic default value.
# thin_library = "libdevmapper-event-lvm2thin.so"
thin_library = "libdevmapper-event-lvm2thin.so"
# Configuration option dmeventd/thin_command.
# The plugin runs command with each 5% increment when thin-pool data volume
@@ -2298,12 +2183,12 @@ dmeventd {
# applied to the local machine as a 'host tag'. If this subsection is
# empty (has no host_list), then the subsection name is always applied
# as a 'host tag'.
#
#
# Example
# The host tag foo is given to all hosts, and the host tag
# bar is given to the hosts named machine1 and machine2.
# tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
#
#
# This configuration section has variable name.
# This configuration section has an automatic default value.
# tag {


@@ -28,13 +28,13 @@ local {
# main configuration file, e.g. lvm.conf. When used, it must be set to
# a unique value among all hosts sharing access to the storage,
# e.g. a host name.
#
#
# Example
# Set no system ID:
# system_id = ""
# Set the system_id to a specific name:
# system_id = "host1"
#
#
# This configuration option has an automatic default value.
# system_id = ""


@@ -742,7 +742,6 @@ CLDNOWHOLEARCHIVE
CLDFLAGS
CACHE
BUILD_DMFILEMAPD
BUILD_LOCKDDLM_CONTROL
BUILD_LOCKDDLM
BUILD_LOCKDSANLOCK
BUILD_LVMLOCKD
@@ -753,8 +752,6 @@ BUILD_CMIRRORD
BLKID_PC
MODPROBE_CMD
MSGFMT
EDITLINE_LIBS
EDITLINE_CFLAGS
PYTHON3_CONFIG
pkgpyexecdir
pyexecdir
@@ -774,8 +771,6 @@ BLKID_LIBS
BLKID_CFLAGS
NOTIFY_DBUS_LIBS
NOTIFY_DBUS_CFLAGS
LOCKD_DLM_CONTROL_LIBS
LOCKD_DLM_CONTROL_CFLAGS
LOCKD_DLM_LIBS
LOCKD_DLM_CFLAGS
LOCKD_SANLOCK_LIBS
@@ -920,9 +915,7 @@ enable_cache_check_needs_check
with_vdo
with_vdo_format
with_writecache
with_integrity
enable_readline
enable_editline
enable_realtime
enable_ocf
with_ocfdir
@@ -939,7 +932,6 @@ enable_devmapper
enable_lvmpolld
enable_lvmlockd_sanlock
enable_lvmlockd_dlm
enable_lvmlockd_dlmcontrol
enable_use_lvmlockd
with_lvmlockd_pidfile
enable_use_lvmpolld
@@ -963,7 +955,6 @@ enable_fsadm
enable_blkdeactivate
enable_dmeventd
enable_selinux
enable_blkzeroout
enable_nls
with_localedir
with_confdir
@@ -1009,8 +1000,6 @@ LOCKD_SANLOCK_CFLAGS
LOCKD_SANLOCK_LIBS
LOCKD_DLM_CFLAGS
LOCKD_DLM_LIBS
LOCKD_DLM_CONTROL_CFLAGS
LOCKD_DLM_CONTROL_LIBS
NOTIFY_DBUS_CFLAGS
NOTIFY_DBUS_LIBS
BLKID_CFLAGS
@@ -1019,9 +1008,7 @@ SYSTEMD_CFLAGS
SYSTEMD_LIBS
UDEV_CFLAGS
UDEV_LIBS
PYTHON
EDITLINE_CFLAGS
EDITLINE_LIBS'
PYTHON'
# Initialize some variables set by options.
@@ -1644,7 +1631,6 @@ Optional Features:
--disable-cache_check_needs_check
required if cache_check version is < 0.5
--disable-readline disable readline support
--enable-editline enable editline support
--disable-realtime disable realtime clock support
--enable-ocf enable Open Cluster Framework (OCF) compliant
resource agents
@@ -1657,8 +1643,6 @@ Optional Features:
--enable-lvmlockd-sanlock
enable the LVM lock daemon using sanlock
--enable-lvmlockd-dlm enable the LVM lock daemon using dlm
--enable-lvmlockd-dlmcontrol
enable lvmlockd remote refresh using libdlmcontrol
--disable-use-lvmlockd disable usage of LVM lock daemon
--disable-use-lvmpolld disable usage of LVM Poll Daemon
--enable-dmfilemapd enable the dmstats filemap daemon
@@ -1685,7 +1669,6 @@ Optional Features:
--disable-blkdeactivate disable blkdeactivate
--enable-dmeventd enable the device-mapper event daemon
--disable-selinux disable selinux support
--disable-blkzeroout do not use BLKZEROOUT for device zeroing
--enable-nls enable Native Language Support
Optional Packages:
@@ -1725,15 +1708,14 @@ Optional Packages:
--with-vdo=TYPE vdo support: internal/none [internal]
--with-vdo-format=PATH vdoformat tool: [autodetect]
--with-writecache=TYPE writecache support: internal/none [internal]
--with-integrity=TYPE integrity support: internal/none [internal]
--with-ocfdir=DIR install OCF files in
[PREFIX/lib/ocf/resource.d/lvm2]
--with-default-pid-dir=PID_DIR
default directory to keep PID files in [autodetect]
Default directory to keep PID files in. [autodetect]
--with-default-dm-run-dir=DM_RUN_DIR
default DM run directory [autodetect]
Default DM run directory. [autodetect]
--with-default-run-dir=RUN_DIR
default LVM run directory [autodetect_run_dir/lvm]
Default LVM run directory. [autodetect_run_dir/lvm]
--with-cmirrord-pidfile=PATH
cmirrord pidfile [PID_DIR/cmirrord.pid]
--with-optimisation=OPT C optimisation flag [OPT=-O2]
@@ -1806,10 +1788,6 @@ Some influential environment variables:
C compiler flags for LOCKD_DLM, overriding pkg-config
LOCKD_DLM_LIBS
linker flags for LOCKD_DLM, overriding pkg-config
LOCKD_DLM_CONTROL_CFLAGS
C compiler flags for LOCKD_DLM_CONTROL, overriding pkg-config
LOCKD_DLM_CONTROL_LIBS
linker flags for LOCKD_DLM_CONTROL, overriding pkg-config
NOTIFY_DBUS_CFLAGS
C compiler flags for NOTIFY_DBUS, overriding pkg-config
NOTIFY_DBUS_LIBS
@@ -1824,10 +1802,6 @@ Some influential environment variables:
UDEV_CFLAGS C compiler flags for UDEV, overriding pkg-config
UDEV_LIBS linker flags for UDEV, overriding pkg-config
PYTHON the Python interpreter
EDITLINE_CFLAGS
C compiler flags for EDITLINE, overriding pkg-config
EDITLINE_LIBS
linker flags for EDITLINE, overriding pkg-config
Use these variables to override the choices made by `configure' or to help
it to find libraries and programs with nonstandard names/locations.
@@ -3091,7 +3065,7 @@ if test -z "$CFLAGS"; then :
fi
case "$host_os" in
linux*)
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"} -Wl,--version-script,.export.sym"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"} -Wl,--version-script,.export.sym"
# equivalent to -rdynamic
ELDFLAGS="-Wl,--export-dynamic"
# FIXME Generate list and use --dynamic-list=.dlopen.sym
@@ -3103,7 +3077,6 @@ case "$host_os" in
BUILD_LVMPOLLD=no
LOCKDSANLOCK=no
LOCKDDLM=no
LOCKDDLM_CONTROL=no
ODIRECT=yes
DM_IOCTLS=yes
SELINUX=yes
@@ -3112,7 +3085,7 @@ case "$host_os" in
;;
darwin*)
CFLAGS="$CFLAGS -no-cpp-precomp -fno-common"
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"}"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"}"
ELDFLAGS=
CLDWHOLEARCHIVE="-all_load"
CLDNOWHOLEARCHIVE=
@@ -3125,7 +3098,7 @@ case "$host_os" in
BLKDEACTIVATE=no
;;
*)
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"}"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"}"
;;
esac
@@ -6661,7 +6634,7 @@ $as_echo "#define _REENTRANT 1" >>confdefs.h
################################################################################
for ac_func in ftruncate gethostname getpagesize gettimeofday localtime_r \
memchr memset mkdir mkfifo munmap nl_langinfo pselect realpath rmdir setenv \
memchr memset mkdir mkfifo munmap nl_langinfo realpath rmdir setenv \
setlocale strcasecmp strchr strcspn strdup strerror strncasecmp strndup \
strrchr strspn strstr strtol strtoul uname
do :
@@ -6677,17 +6650,6 @@ else
fi
done
for ac_func in prlimit
do :
ac_fn_c_check_func "$LINENO" "prlimit" "ac_cv_func_prlimit"
if test "x$ac_cv_func_prlimit" = xyes; then :
cat >>confdefs.h <<_ACEOF
#define HAVE_PRLIMIT 1
_ACEOF
fi
done
# The Ultrix 4.2 mips builtin alloca declared by alloca.h only works
# for constant arguments. Useless!
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for working alloca.h" >&5
@@ -9598,7 +9560,7 @@ $as_echo_n "checking whether to include vdo... " >&6; }
if test "${with_vdo+set}" = set; then :
withval=$with_vdo; VDO=$withval
else
VDO="internal"
VDO="none"
fi
@@ -9758,7 +9720,7 @@ $as_echo_n "checking whether to include writecache... " >&6; }
if test "${with_writecache+set}" = set; then :
withval=$with_writecache; WRITECACHE=$withval
else
WRITECACHE="internal"
WRITECACHE="none"
fi
@@ -9775,31 +9737,6 @@ $as_echo "#define WRITECACHE_INTERNAL 1" >>confdefs.h
*) as_fn_error $? "--with-writecache parameter invalid" "$LINENO" 5 ;;
esac
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to include integrity" >&5
$as_echo_n "checking whether to include integrity... " >&6; }
# Check whether --with-integrity was given.
if test "${with_integrity+set}" = set; then :
withval=$with_integrity; INTEGRITY=$withval
else
INTEGRITY="internal"
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $INTEGRITY" >&5
$as_echo "$INTEGRITY" >&6; }
case "$INTEGRITY" in
none) ;;
internal)
$as_echo "#define INTEGRITY_INTERNAL 1" >>confdefs.h
;;
*) as_fn_error $? "--with-integrity parameter invalid" "$LINENO" 5 ;;
esac
################################################################################
# Check whether --enable-readline was given.
if test "${enable_readline+set}" = set; then :
@@ -9809,15 +9746,6 @@ else
fi
################################################################################
# Check whether --enable-editline was given.
if test "${enable_editline+set}" = set; then :
enableval=$enable_editline; EDITLINE=$enableval
else
EDITLINE=no
fi
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable realtime support" >&5
$as_echo_n "checking whether to enable realtime support... " >&6; }
@@ -11030,97 +10958,6 @@ $as_echo "#define LOCKDDLM_SUPPORT 1" >>confdefs.h
BUILD_LVMLOCKD=yes
fi
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lvmlockddlmcontrol" >&5
$as_echo_n "checking whether to build lvmlockddlmcontrol... " >&6; }
# Check whether --enable-lvmlockd-dlmcontrol was given.
if test "${enable_lvmlockd_dlmcontrol+set}" = set; then :
enableval=$enable_lvmlockd_dlmcontrol; LOCKDDLM_CONTROL=$enableval
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $LOCKDDLM_CONTROL" >&5
$as_echo "$LOCKDDLM_CONTROL" >&6; }
BUILD_LOCKDDLM_CONTROL=$LOCKDDLM_CONTROL
if test "$BUILD_LOCKDDLM_CONTROL" = yes; then
pkg_failed=no
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for LOCKD_DLM_CONTROL" >&5
$as_echo_n "checking for LOCKD_DLM_CONTROL... " >&6; }
if test -n "$LOCKD_DLM_CONTROL_CFLAGS"; then
pkg_cv_LOCKD_DLM_CONTROL_CFLAGS="$LOCKD_DLM_CONTROL_CFLAGS"
elif test -n "$PKG_CONFIG"; then
if test -n "$PKG_CONFIG" && \
{ { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libdlmcontrol >= 3.2\""; } >&5
($PKG_CONFIG --exists --print-errors "libdlmcontrol >= 3.2") 2>&5
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then
pkg_cv_LOCKD_DLM_CONTROL_CFLAGS=`$PKG_CONFIG --cflags "libdlmcontrol >= 3.2" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes
else
pkg_failed=yes
fi
else
pkg_failed=untried
fi
if test -n "$LOCKD_DLM_CONTROL_LIBS"; then
pkg_cv_LOCKD_DLM_CONTROL_LIBS="$LOCKD_DLM_CONTROL_LIBS"
elif test -n "$PKG_CONFIG"; then
if test -n "$PKG_CONFIG" && \
{ { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libdlmcontrol >= 3.2\""; } >&5
($PKG_CONFIG --exists --print-errors "libdlmcontrol >= 3.2") 2>&5
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then
pkg_cv_LOCKD_DLM_CONTROL_LIBS=`$PKG_CONFIG --libs "libdlmcontrol >= 3.2" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes
else
pkg_failed=yes
fi
else
pkg_failed=untried
fi
if test $pkg_failed = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
_pkg_short_errors_supported=yes
else
_pkg_short_errors_supported=no
fi
if test $_pkg_short_errors_supported = yes; then
LOCKD_DLM_CONTROL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libdlmcontrol >= 3.2" 2>&1`
else
LOCKD_DLM_CONTROL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libdlmcontrol >= 3.2" 2>&1`
fi
# Put the nasty error message in config.log where it belongs
echo "$LOCKD_DLM_CONTROL_PKG_ERRORS" >&5
$bailout
elif test $pkg_failed = untried; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
$bailout
else
LOCKD_DLM_CONTROL_CFLAGS=$pkg_cv_LOCKD_DLM_CONTROL_CFLAGS
LOCKD_DLM_CONTROL_LIBS=$pkg_cv_LOCKD_DLM_CONTROL_LIBS
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
HAVE_LOCKD_DLM_CONTROL=yes
fi
$as_echo "#define LOCKDDLM_CONTROL_SUPPORT 1" >>confdefs.h
BUILD_LVMLOCKD=yes
fi
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build lvmlockd" >&5
$as_echo_n "checking whether to build lvmlockd... " >&6; }
@@ -11845,7 +11682,6 @@ if test "$BUILD_LVMDBUSD" = yes; then
if test -n "$PYTHON"; then
# If the user set $PYTHON, use it and don't search something else.
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $PYTHON version is >= 3" >&5
@@ -11881,7 +11717,7 @@ if ${am_cv_pathless_PYTHON+:} false; then :
$as_echo_n "(cached) " >&6
else
for am_cv_pathless_PYTHON in python3 python2 python python3.9 python3.8 python3.7 python3.6 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0 none; do
for am_cv_pathless_PYTHON in python python2 python3 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0 none; do
test "$am_cv_pathless_PYTHON" = none && break
prog="import sys
# split strings by '.' and convert to numeric. Append some zeros
@@ -11962,7 +11798,7 @@ $as_echo_n "checking for $am_display_PYTHON version... " >&6; }
if ${am_cv_python_version+:} false; then :
$as_echo_n "(cached) " >&6
else
am_cv_python_version=`$PYTHON -c "import sys; print('%u.%u' % sys.version_info[:2])"`
am_cv_python_version=`$PYTHON -c "import sys; sys.stdout.write(sys.version[:3])"`
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_python_version" >&5
$as_echo "$am_cv_python_version" >&6; }
@@ -12710,61 +12546,6 @@ fi
fi
################################################################################
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for BLKZEROOUT in sys/ioctl.h." >&5
$as_echo_n "checking for BLKZEROOUT in sys/ioctl.h.... " >&6; }
if ${ac_cv_have_blkzeroout+:} false; then :
$as_echo_n "(cached) " >&6
else
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#include <sys/ioctl.h>
#include <linux/fs.h>
int bar(void) { return ioctl(0, BLKZEROOUT, 0); }
int
main ()
{
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
ac_cv_have_blkzeroout=yes
else
ac_cv_have_blkzeroout=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_have_blkzeroout" >&5
$as_echo "$ac_cv_have_blkzeroout" >&6; }
# Check whether --enable-blkzeroout was given.
if test "${enable_blkzeroout+set}" = set; then :
enableval=$enable_blkzeroout; BLKZEROOUT=$enableval
else
BLKZEROOUT=yes
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to use BLKZEROOUT for device zeroing" >&5
$as_echo_n "checking whether to use BLKZEROOUT for device zeroing... " >&6; }
if test "$BLKZEROOUT" = yes; then
if test $ac_cv_have_blkzeroout = yes; then :
$as_echo "#define HAVE_BLKZEROOUT 1" >>confdefs.h
else
BLKZEROOUT=no
fi
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $BLKZEROOUT" >&5
$as_echo "$BLKZEROOUT" >&6; }
################################################################################
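The block above probes for the BLKZEROOUT ioctl, which lets a caller ask the kernel to zero a byte range of a block device instead of writing zero-filled buffers from userspace. A hedged, illustrative sketch of driving the same ioctl from Python (the helper name is invented; 0x127f is _IO(0x12, 127) as commonly defined for BLKZEROOUT in linux/fs.h):

import fcntl
import struct

BLKZEROOUT = 0x127f  # _IO(0x12, 127) from linux/fs.h

def zero_range(fd, offset, length):
    # The ioctl argument is a pair of 64-bit values: byte offset and byte length.
    fcntl.ioctl(fd, BLKZEROOUT, struct.pack('QQ', offset, length))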
RT_LIBS=
HAVE_REALTIME=no
@@ -12885,86 +12666,6 @@ fi
done
################################################################################
if test "$EDITLINE" == yes; then
pkg_failed=no
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for EDITLINE" >&5
$as_echo_n "checking for EDITLINE... " >&6; }
if test -n "$EDITLINE_CFLAGS"; then
pkg_cv_EDITLINE_CFLAGS="$EDITLINE_CFLAGS"
elif test -n "$PKG_CONFIG"; then
if test -n "$PKG_CONFIG" && \
{ { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libedit\""; } >&5
($PKG_CONFIG --exists --print-errors "libedit") 2>&5
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then
pkg_cv_EDITLINE_CFLAGS=`$PKG_CONFIG --cflags "libedit" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes
else
pkg_failed=yes
fi
else
pkg_failed=untried
fi
if test -n "$EDITLINE_LIBS"; then
pkg_cv_EDITLINE_LIBS="$EDITLINE_LIBS"
elif test -n "$PKG_CONFIG"; then
if test -n "$PKG_CONFIG" && \
{ { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libedit\""; } >&5
($PKG_CONFIG --exists --print-errors "libedit") 2>&5
ac_status=$?
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
test $ac_status = 0; }; then
pkg_cv_EDITLINE_LIBS=`$PKG_CONFIG --libs "libedit" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes
else
pkg_failed=yes
fi
else
pkg_failed=untried
fi
if test $pkg_failed = yes; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
_pkg_short_errors_supported=yes
else
_pkg_short_errors_supported=no
fi
if test $_pkg_short_errors_supported = yes; then
EDITLINE_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libedit" 2>&1`
else
EDITLINE_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libedit" 2>&1`
fi
# Put the nasty error message in config.log where it belongs
echo "$EDITLINE_PKG_ERRORS" >&5
as_fn_error $? "libedit could not be found which is required for the --enable-readline option." "$LINENO" 5
elif test $pkg_failed = untried; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
$as_echo "no" >&6; }
as_fn_error $? "libedit could not be found which is required for the --enable-readline option." "$LINENO" 5
else
EDITLINE_CFLAGS=$pkg_cv_EDITLINE_CFLAGS
EDITLINE_LIBS=$pkg_cv_EDITLINE_LIBS
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
$as_echo "yes" >&6; }
$as_echo "#define EDITLINE_SUPPORT 1" >>confdefs.h
fi
fi
################################################################################
if test "$READLINE" != no; then
lvm_saved_libs=$LIBS
@@ -13403,28 +13104,6 @@ $as_echo_n "checking whether to enable readline... " >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $READLINE" >&5
$as_echo "$READLINE" >&6; }
if test "$EDITLINE" = yes; then
for ac_header in editline/readline.h editline/history.h
do :
as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh`
ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default"
if eval test \"x\$"$as_ac_Header"\" = x"yes"; then :
cat >>confdefs.h <<_ACEOF
#define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1
_ACEOF
else
hard_bailout
fi
done
fi
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable editline" >&5
$as_echo_n "checking whether to enable editline... " >&6; }
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $EDITLINE" >&5
$as_echo "$EDITLINE" >&6; }
if test "$BUILD_CMIRRORD" = yes; then
for ac_func in atexit
do :
@@ -14104,8 +13783,6 @@ _ACEOF
@@ -15479,8 +15156,8 @@ $as_echo "$as_me: WARNING: You should install latest cache_check vsn 0.7.0 to us
fi
if test -n "$VDO_CONFIGURE_WARN"; then :
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!" >&5
$as_echo "$as_me: WARNING: Unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!" >&2;}
{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!" >&5
$as_echo "$as_me: WARNING: unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!" >&2;}
fi


@@ -30,7 +30,7 @@ AC_CANONICAL_TARGET([])
AS_IF([test -z "$CFLAGS"], [COPTIMISE_FLAG="-O2"])
case "$host_os" in
linux*)
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"} -Wl,--version-script,.export.sym"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"} -Wl,--version-script,.export.sym"
# equivalent to -rdynamic
ELDFLAGS="-Wl,--export-dynamic"
# FIXME Generate list and use --dynamic-list=.dlopen.sym
@@ -42,7 +42,6 @@ case "$host_os" in
BUILD_LVMPOLLD=no
LOCKDSANLOCK=no
LOCKDDLM=no
LOCKDDLM_CONTROL=no
ODIRECT=yes
DM_IOCTLS=yes
SELINUX=yes
@@ -51,7 +50,7 @@ case "$host_os" in
;;
darwin*)
CFLAGS="$CFLAGS -no-cpp-precomp -fno-common"
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"}"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"}"
ELDFLAGS=
CLDWHOLEARCHIVE="-all_load"
CLDNOWHOLEARCHIVE=
@@ -64,7 +63,7 @@ case "$host_os" in
BLKDEACTIVATE=no
;;
*)
CLDFLAGS="${CLDFLAGS-"$LDFLAGS"}"
CLDFLAGS="${CLDFLAGS:"$LDFLAGS"}"
;;
esac
@@ -153,10 +152,9 @@ AC_DEFINE([_REENTRANT], 1, [Define to use re-entrant thread safe versions])
################################################################################
dnl -- Check for functions
AC_CHECK_FUNCS([ftruncate gethostname getpagesize gettimeofday localtime_r \
memchr memset mkdir mkfifo munmap nl_langinfo pselect realpath rmdir setenv \
memchr memset mkdir mkfifo munmap nl_langinfo realpath rmdir setenv \
setlocale strcasecmp strchr strcspn strdup strerror strncasecmp strndup \
strrchr strspn strstr strtol strtoul uname], , [AC_MSG_ERROR(bailing out)])
AC_CHECK_FUNCS([prlimit])
AC_FUNC_ALLOCA
AC_FUNC_CLOSEDIR_VOID
AC_FUNC_CHOWN
@@ -607,7 +605,7 @@ AC_MSG_CHECKING(whether to include vdo)
AC_ARG_WITH(vdo,
AC_HELP_STRING([--with-vdo=TYPE],
[vdo support: internal/none [internal]]),
VDO=$withval, VDO="internal")
VDO=$withval, VDO="none")
AC_MSG_RESULT($VDO)
@@ -655,7 +653,7 @@ AC_MSG_CHECKING(whether to include writecache)
AC_ARG_WITH(writecache,
AC_HELP_STRING([--with-writecache=TYPE],
[writecache support: internal/none [internal]]),
WRITECACHE=$withval, WRITECACHE="internal")
WRITECACHE=$withval, WRITECACHE="none")
AC_MSG_RESULT($WRITECACHE)
@@ -667,36 +665,12 @@ case "$WRITECACHE" in
*) AC_MSG_ERROR([--with-writecache parameter invalid]) ;;
esac
################################################################################
dnl -- integrity inclusion type
AC_MSG_CHECKING(whether to include integrity)
AC_ARG_WITH(integrity,
AC_HELP_STRING([--with-integrity=TYPE],
[integrity support: internal/none [internal]]),
INTEGRITY=$withval, INTEGRITY="internal")
AC_MSG_RESULT($INTEGRITY)
case "$INTEGRITY" in
none) ;;
internal)
AC_DEFINE([INTEGRITY_INTERNAL], 1, [Define to 1 to include built-in support for integrity.])
;;
*) AC_MSG_ERROR([--with-integrity parameter invalid]) ;;
esac
################################################################################
dnl -- Disable readline
AC_ARG_ENABLE([readline],
AC_HELP_STRING([--disable-readline], [disable readline support]),
READLINE=$enableval, READLINE=maybe)
################################################################################
dnl -- Disable editline
AC_ARG_ENABLE([editline],
AC_HELP_STRING([--enable-editline], [enable editline support]),
EDITLINE=$enableval, EDITLINE=no)
################################################################################
dnl -- Disable realtime clock support
AC_MSG_CHECKING(whether to enable realtime support)
@@ -740,7 +714,7 @@ dnl -- Set up pidfile and run directory
AH_TEMPLATE(DEFAULT_PID_DIR)
AC_ARG_WITH(default-pid-dir,
AC_HELP_STRING([--with-default-pid-dir=PID_DIR],
[default directory to keep PID files in [autodetect]]),
[Default directory to keep PID files in. [autodetect]]),
DEFAULT_PID_DIR="$withval", DEFAULT_PID_DIR=$RUN_DIR)
AC_DEFINE_UNQUOTED(DEFAULT_PID_DIR, ["$DEFAULT_PID_DIR"],
[Default directory to keep PID files in.])
@@ -748,7 +722,7 @@ AC_DEFINE_UNQUOTED(DEFAULT_PID_DIR, ["$DEFAULT_PID_DIR"],
AH_TEMPLATE(DEFAULT_DM_RUN_DIR, [Name of default DM run directory.])
AC_ARG_WITH(default-dm-run-dir,
AC_HELP_STRING([--with-default-dm-run-dir=DM_RUN_DIR],
[default DM run directory [autodetect]]),
[ Default DM run directory. [autodetect]]),
DEFAULT_DM_RUN_DIR="$withval", DEFAULT_DM_RUN_DIR=$RUN_DIR)
AC_DEFINE_UNQUOTED(DEFAULT_DM_RUN_DIR, ["$DEFAULT_DM_RUN_DIR"],
[Default DM run directory.])
@@ -756,7 +730,7 @@ AC_DEFINE_UNQUOTED(DEFAULT_DM_RUN_DIR, ["$DEFAULT_DM_RUN_DIR"],
AH_TEMPLATE(DEFAULT_RUN_DIR, [Name of default LVM run directory.])
AC_ARG_WITH(default-run-dir,
AC_HELP_STRING([--with-default-run-dir=RUN_DIR],
[default LVM run directory [autodetect_run_dir/lvm]]),
[Default LVM run directory. [autodetect_run_dir/lvm]]),
DEFAULT_RUN_DIR="$withval", DEFAULT_RUN_DIR="$RUN_DIR/lvm")
AC_DEFINE_UNQUOTED(DEFAULT_RUN_DIR, ["$DEFAULT_RUN_DIR"],
[Default LVM run directory.])
@@ -942,24 +916,6 @@ if test "$BUILD_LOCKDDLM" = yes; then
BUILD_LVMLOCKD=yes
fi
################################################################################
dnl -- Build lvmlockddlmcontrol
AC_MSG_CHECKING(whether to build lvmlockddlmcontrol)
AC_ARG_ENABLE(lvmlockd-dlmcontrol,
AC_HELP_STRING([--enable-lvmlockd-dlmcontrol],
[enable lvmlockd remote refresh using libdlmcontrol]),
LOCKDDLM_CONTROL=$enableval)
AC_MSG_RESULT($LOCKDDLM_CONTROL)
BUILD_LOCKDDLM_CONTROL=$LOCKDDLM_CONTROL
dnl -- Look for libdlmcontrol libraries
if test "$BUILD_LOCKDDLM_CONTROL" = yes; then
PKG_CHECK_MODULES(LOCKD_DLM_CONTROL, libdlmcontrol >= 3.2, [HAVE_LOCKD_DLM_CONTROL=yes], $bailout)
AC_DEFINE([LOCKDDLM_CONTROL_SUPPORT], 1, [Define to 1 to include code that uses lvmlockd dlm control option.])
BUILD_LVMLOCKD=yes
fi
################################################################################
dnl -- Build lvmlockd
AC_MSG_CHECKING(whether to build lvmlockd)
@@ -1224,9 +1180,6 @@ if test "$BUILD_LVMDBUSD" = yes; then
unset am_cv_pathless_PYTHON ac_cv_path_PYTHON am_cv_python_platform
unset am_cv_python_pythondir am_cv_python_version am_cv_python_pyexecdir
unset ac_cv_path_PYTHON_CONFIG ac_cv_path_ac_pt_PYTHON_CONFIG
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],[ python3 python2 python dnl
python3.9 python3.8 python3.7 python3.6 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 dnl
python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0 ])
AM_PATH_PYTHON([3])
PYTHON3=$PYTHON
test -z "$PYTHON3" && AC_MSG_ERROR([python3 is required for --enable-python3_bindings or --enable-dbus-service but cannot be found])
@@ -1354,33 +1307,6 @@ if test "$SELINUX" = yes; then
HAVE_SELINUX=no ])
fi
################################################################################
dnl -- Check BLKZEROOUT support
AC_CACHE_CHECK([for BLKZEROOUT in sys/ioctl.h.],
[ac_cv_have_blkzeroout],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include <sys/ioctl.h>
#include <linux/fs.h>
int bar(void) { return ioctl(0, BLKZEROOUT, 0); }]
)], [ac_cv_have_blkzeroout=yes], [ac_cv_have_blkzeroout=no])])
AC_ARG_ENABLE(blkzeroout,
AC_HELP_STRING([--disable-blkzeroout],
[do not use BLKZEROOUT for device zeroing]),
BLKZEROOUT=$enableval, BLKZEROOUT=yes)
AC_MSG_CHECKING(whether to use BLKZEROOUT for device zeroing)
if test "$BLKZEROOUT" = yes; then
AC_IF_YES(ac_cv_have_blkzeroout,
AC_DEFINE(HAVE_BLKZEROOUT, 1,
[Define if ioctl BLKZEROOUT can be used for device zeroing.]),
BLKZEROOUT=no)
fi
AC_MSG_RESULT($BLKZEROOUT)
################################################################################
dnl -- Check for realtime clock support
RT_LIBS=
@@ -1414,16 +1340,6 @@ AC_IF_YES(ac_cv_stat_st_ctim,
dnl -- Check for getopt
AC_CHECK_HEADERS(getopt.h, AC_DEFINE([HAVE_GETOPTLONG], 1, [Define to 1 if getopt_long is available.]))
################################################################################
dnl -- Check for editline
if test "$EDITLINE" == yes; then
PKG_CHECK_MODULES([EDITLINE], [libedit], [
AC_DEFINE([EDITLINE_SUPPORT], 1,
[Define to 1 to include the LVM editline shell.])], AC_MSG_ERROR(
[libedit could not be found which is required for the --enable-readline option.])
)
fi
################################################################################
dnl -- Check for readline (Shamelessly copied from parted 1.4.17)
if test "$READLINE" != no; then
@@ -1556,12 +1472,6 @@ fi
AC_MSG_CHECKING(whether to enable readline)
AC_MSG_RESULT($READLINE)
if test "$EDITLINE" = yes; then
AC_CHECK_HEADERS(editline/readline.h editline/history.h,,hard_bailout)
fi
AC_MSG_CHECKING(whether to enable editline)
AC_MSG_RESULT($EDITLINE)
if test "$BUILD_CMIRRORD" = yes; then
AC_CHECK_FUNCS(atexit,,hard_bailout)
fi
@@ -1732,7 +1642,6 @@ AC_SUBST(BUILD_LVMPOLLD)
AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
AC_SUBST(BUILD_LOCKDDLM)
AC_SUBST(BUILD_LOCKDDLM_CONTROL)
AC_SUBST(BUILD_DMFILEMAPD)
AC_SUBST(CACHE)
AC_SUBST(CFLAGS)
@@ -1817,7 +1726,6 @@ AC_SUBST(QUORUM_CFLAGS)
AC_SUBST(QUORUM_LIBS)
AC_SUBST(RT_LIBS)
AC_SUBST(READLINE_LIBS)
AC_SUBST(EDITLINE_LIBS)
AC_SUBST(REPLICATORS)
AC_SUBST(SACKPT_CFLAGS)
AC_SUBST(SACKPT_LIBS)
@@ -1953,7 +1861,7 @@ AS_IF([test -n "$CACHE_CHECK_VERSION_WARN"],
[AC_MSG_WARN([You should install latest cache_check vsn 0.7.0 to use lvm2 cache metadata format 2])])
AS_IF([test -n "$VDO_CONFIGURE_WARN"],
[AC_MSG_WARN([Unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!])])
[AC_MSG_WARN([unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!])])
AS_IF([test "$ODIRECT" != yes],


@@ -46,7 +46,6 @@ const char *find_config_tree_str(struct cmd_context *cmd, int id, struct profile
return "STRING";
}
/*
struct logical_volume *origin_from_cow(const struct logical_volume *lv)
{
if (lv)
@@ -54,7 +53,6 @@ struct logical_volume *origin_from_cow(const struct logical_volume *lv)
__coverity_panic__();
}
*/
/* simple_memccpy() from glibc */
void *memccpy(void *dest, const void *src, int c, size_t n)


@@ -12,8 +12,8 @@
#ifndef _LVM_CLOG_CLUSTER_H
#define _LVM_CLOG_CLUSTER_H
#include "libdm/libdevmapper.h"
#include "libdm/misc/dm-log-userspace.h"
#include "libdm/libdevmapper.h"
#define DM_ULOG_RESPONSE 0x1000U /* in last byte of 32-bit value */
#define DM_ULOG_CHECKPOINT_READY 21


@@ -12,8 +12,7 @@
#ifndef _LVM_CLOG_FUNCTIONS_H
#define _LVM_CLOG_FUNCTIONS_H
#include "libdm/libdevmapper.h"
#include "libdm/misc/dm-log-userspace.h"
#include "device_mapper/misc/dm-log-userspace.h"
#include "cluster.h"
#define LOG_RESUMED 1


@@ -752,7 +752,7 @@ static void _exit_timeout(void *unused __attribute__((unused)))
static void *_timeout_thread(void *unused __attribute__((unused)))
{
struct thread_status *thread;
struct timespec timeout, real_time;
struct timespec timeout;
time_t curr_time;
int ret;
@@ -763,16 +763,7 @@ static void *_timeout_thread(void *unused __attribute__((unused)))
while (!dm_list_empty(&_timeout_registry)) {
timeout.tv_sec = 0;
timeout.tv_nsec = 0;
#ifndef HAVE_REALTIME
curr_time = time(NULL);
#else
if (clock_gettime(CLOCK_REALTIME, &real_time)) {
log_error("Failed to read clock_gettime().");
break;
}
/* 10ms back to the future */
curr_time = real_time.tv_sec + ((real_time.tv_nsec > (1000000000 - 10000000)) ? 1 : 0);
#endif
dm_list_iterate_items_gen(thread, &_timeout_registry, timeout_list) {
if (thread->next_time <= curr_time) {
@@ -1494,34 +1485,37 @@ static int _client_read(struct dm_event_fifos *fifos,
t.tv_usec = 0;
ret = select(fifos->client + 1, &fds, NULL, NULL, &t);
if (!ret && bytes)
continue; /* trying to finish read */
if (!ret && !bytes) /* nothing to read */
return 0;
if (ret <= 0) /* nothing to read */
goto bad;
if (!ret) /* trying to finish read */
continue;
if (ret < 0) /* error */
return 0;
ret = read(fifos->client, buf + bytes, size - bytes);
bytes += ret > 0 ? ret : 0;
if (!msg->data && (bytes == 2 * sizeof(uint32_t))) {
if (header && (bytes == 2 * sizeof(uint32_t))) {
msg->cmd = ntohl(header[0]);
size = msg->size = ntohl(header[1]);
bytes = 0;
if (!(size = msg->size = ntohl(header[1])))
break;
if (!(buf = msg->data = malloc(msg->size)))
goto bad;
if (!size)
break; /* No data -> error */
buf = msg->data = malloc(msg->size);
if (!buf)
break; /* No mem -> error */
header = 0;
}
}
if (bytes == size)
return 1;
if (bytes != size) {
free(msg->data);
msg->data = NULL;
return 0;
}
bad:
free(msg->data);
msg->data = NULL;
return 0;
return 1;
}
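Both versions of _client_read above follow the same wire format: an 8-byte header of two network-order 32-bit fields (command and payload size, decoded by the ntohl() calls), followed by msg->size bytes of payload that are only allocated once the header is complete. An illustrative sketch of that framing in Python (not part of the tree):

import struct

def pack_dm_event_msg(cmd, payload=b''):
    # Header: command and payload length as network-order uint32s, then the payload.
    return struct.pack('!II', cmd, len(payload)) + payload

def unpack_dm_event_header(header):
    # Mirrors the ntohl(header[0]) / ntohl(header[1]) decode in the C code.
    cmd, size = struct.unpack('!II', header[:8])
    return cmd, size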
/*
@@ -1751,8 +1745,7 @@ static void _init_thread_signals(void)
sigdelset(&my_sigset, SIGHUP);
sigdelset(&my_sigset, SIGQUIT);
if (pthread_sigmask(SIG_BLOCK, &my_sigset, NULL))
log_sys_error("pthread_sigmask", "SIG_BLOCK");
pthread_sigmask(SIG_BLOCK, &my_sigset, NULL);
}
/*
@@ -2028,8 +2021,8 @@ static int _reinstate_registrations(struct dm_event_fifos *fifos)
static void _restart_dmeventd(void)
{
struct dm_event_fifos fifos = {
.client = -1,
.server = -1,
.client = -1,
/* FIXME Make these either configurable or depend directly on dmeventd_path */
.client_path = DM_EVENT_FIFO_CLIENT,
.server_path = DM_EVENT_FIFO_SERVER
@@ -2243,8 +2236,7 @@ int main(int argc, char *argv[])
_init_thread_signals();
if (pthread_mutex_init(&_global_mutex, NULL))
exit(EXIT_FAILURE);
pthread_mutex_init(&_global_mutex, NULL);
if (!_systemd_activation && !_open_fifos(&fifos))
exit(EXIT_FIFO_FAILURE);


@@ -237,16 +237,16 @@ static int _daemon_read(struct dm_event_fifos *fifos,
ret = select(fifos->server + 1, &fds, NULL, NULL, &tval);
if (ret < 0 && errno != EINTR) {
log_error("Unable to read from event server.");
goto bad;
return 0;
}
if ((ret == 0) && (i > 4) && !bytes) {
log_error("No input from event server.");
goto bad;
return 0;
}
}
if (ret < 1) {
log_error("Unable to read from event server.");
goto bad;
return 0;
}
ret = read(fifos->server, buf + bytes, size);
@@ -255,32 +255,25 @@ static int _daemon_read(struct dm_event_fifos *fifos,
continue;
log_error("Unable to read from event server.");
goto bad;
return 0;
}
bytes += ret;
if (!msg->data && (bytes == 2 * sizeof(uint32_t))) {
if (header && (bytes == 2 * sizeof(uint32_t))) {
msg->cmd = ntohl(header[0]);
msg->size = ntohl(header[1]);
buf = msg->data = malloc(msg->size);
size = msg->size;
bytes = 0;
if (!(size = msg->size = ntohl(header[1])))
break;
if (!(buf = msg->data = malloc(msg->size))) {
log_error("Unable to allocate message data.");
return 0;
}
header = 0;
}
}
if (bytes == size)
return 1;
bad:
free(msg->data);
msg->data = NULL;
return 0;
if (bytes != size) {
free(msg->data);
msg->data = NULL;
}
return bytes == size;
}
/* Write message to daemon. */
@@ -615,8 +608,8 @@ static int _do_event(int cmd, char *dmeventd_path, struct dm_event_daemon_messag
{
int ret;
struct dm_event_fifos fifos = {
.client = -1,
.server = -1,
.client = -1,
/* FIXME Make these either configurable or depend directly on dmeventd_path */
.client_path = DM_EVENT_FIFO_CLIENT,
.server_path = DM_EVENT_FIFO_SERVER


@@ -71,7 +71,7 @@ int dmeventd_lvm2_init(void)
if (!_lvm_handle) {
lvm2_log_fn(_lvm2_print_log);
if (!(_lvm_handle = lvm2_init_threaded()))
if (!(_lvm_handle = lvm2_init()))
goto out;
/*


@@ -76,17 +76,14 @@ static int _process_raid_event(struct dso_state *state, char *params, const char
}
if (dead) {
/*
* Use the first event to run a repair, ignoring any additional ones.
*
* We presume lvconvert to do pre-repair
* checks to avoid bloat in this plugin.
*/
if (!state->warned && status->insync_regions < status->total_regions) {
state->warned = 1;
log_warn("WARNING: waiting for resynchronization to finish "
"before initiating repair on RAID device %s.", device);
/* Fall through to allow lvconvert to run. */
if (status->insync_regions < status->total_regions) {
if (!state->warned) {
state->warned = 1;
log_warn("WARNING: waiting for resynchronization to finish "
"before initiating repair on RAID device %s.", device);
}
goto out; /* Not yet done syncing with accessible devices */
}
if (state->failed)


@@ -16,12 +16,7 @@
#include "daemons/dmeventd/plugins/lvm2/dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
/*
* Use parser from new device_mapper library.
* Although during compilation we can see dm_vdo_status_parse()
* at runtime we are linked against the system's 'older' libdm library
* which does not provide this symbol and plugin fails to load
*/
/* Use parser from new device_mapper library */
#include "device_mapper/vdo/status.c"
#include <sys/wait.h>


@@ -47,11 +47,9 @@ BUS_NAME = os.getenv('LVM_DBUS_NAME', 'com.redhat.lvmdbus1')
BASE_INTERFACE = 'com.redhat.lvmdbus1'
PV_INTERFACE = BASE_INTERFACE + '.Pv'
VG_INTERFACE = BASE_INTERFACE + '.Vg'
VG_VDO_INTERFACE = BASE_INTERFACE + '.VgVdo'
LV_INTERFACE = BASE_INTERFACE + '.Lv'
LV_COMMON_INTERFACE = BASE_INTERFACE + '.LvCommon'
THIN_POOL_INTERFACE = BASE_INTERFACE + '.ThinPool'
VDO_POOL_INTERFACE = BASE_INTERFACE + '.VdoPool'
CACHE_POOL_INTERFACE = BASE_INTERFACE + '.CachePool'
LV_CACHED = BASE_INTERFACE + '.CachedLv'
SNAPSHOT_INTERFACE = BASE_INTERFACE + '.Snapshot'
@@ -63,7 +61,6 @@ PV_OBJ_PATH = BASE_OBJ_PATH + '/Pv'
VG_OBJ_PATH = BASE_OBJ_PATH + '/Vg'
LV_OBJ_PATH = BASE_OBJ_PATH + '/Lv'
THIN_POOL_PATH = BASE_OBJ_PATH + "/ThinPool"
VDO_POOL_PATH = BASE_OBJ_PATH + "/VdoPool"
CACHE_POOL_PATH = BASE_OBJ_PATH + "/CachePool"
HIDDEN_LV_PATH = BASE_OBJ_PATH + "/HiddenLv"
MANAGER_OBJ_PATH = BASE_OBJ_PATH + '/Manager'
@@ -74,7 +71,6 @@ pv_id = itertools.count()
vg_id = itertools.count()
lv_id = itertools.count()
thin_id = itertools.count()
vdo_id = itertools.count()
cache_pool_id = itertools.count()
job_id = itertools.count()
hidden_lv = itertools.count()
@@ -83,9 +79,6 @@ hidden_lv = itertools.count()
load = None
event = None
# Boolean to denote if lvm supports VDO integration
vdo_support = False
# Global cached state
db = None


@@ -217,10 +217,7 @@ def options_to_cli_args(options):
else:
rc.append("--%s" % k)
if v != "":
if isinstance(v, int):
rc.append(str(int(v)))
else:
rc.append(str(v))
rc.append(str(v))
return rc
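The two bodies of this hunk differ in whether integer-typed option values pass through int() before being stringified or go straight to str(). For ordinary integers the result is identical; the int() round-trip only changes int subclasses such as bool. A tiny standalone comparison (values invented):

val = True

str(val)               # 'True' -- plain str() keeps the Python spelling
str(int(val))          # '1'    -- the int() round-trip normalizes bool to 0/1
str(7) == str(int(7))  # True   -- ordinary integers are unaffected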
@@ -283,7 +280,7 @@ def vg_remove(vg_name, remove_options):
def vg_lv_create(vg_name, create_options, name, size_bytes, pv_dests):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--size', '%dB' % size_bytes])
cmd.extend(['--size', str(size_bytes) + 'B'])
cmd.extend(['--name', name, vg_name, '--yes'])
pv_dest_ranges(cmd, pv_dests)
return call(cmd)
@@ -295,7 +292,7 @@ def vg_lv_snapshot(vg_name, snapshot_options, name, size_bytes):
cmd.extend(["-s"])
if size_bytes != 0:
cmd.extend(['--size', '%dB' % size_bytes])
cmd.extend(['--size', str(size_bytes) + 'B'])
cmd.extend(['--name', name, vg_name])
return call(cmd)
@@ -306,9 +303,9 @@ def _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool):
cmd.extend(options_to_cli_args(create_options))
if not thin_pool:
cmd.extend(['--size', '%dB' % size_bytes])
cmd.extend(['--size', str(size_bytes) + 'B'])
else:
cmd.extend(['--thin', '--size', '%dB' % size_bytes])
cmd.extend(['--thin', '--size', str(size_bytes) + 'B'])
cmd.extend(['--yes'])
return cmd
@@ -323,10 +320,10 @@ def vg_lv_create_linear(vg_name, create_options, name, size_bytes, thin_pool):
def vg_lv_create_striped(vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool):
cmd = _vg_lv_create_common_cmd(create_options, size_bytes, thin_pool)
cmd.extend(['--stripes', str(int(num_stripes))])
cmd.extend(['--stripes', str(num_stripes)])
if stripe_size_kb != 0:
cmd.extend(['--stripesize', str(int(stripe_size_kb))])
cmd.extend(['--stripesize', str(stripe_size_kb)])
cmd.extend(['--name', name, vg_name])
return call(cmd)
@@ -339,13 +336,13 @@ def _vg_lv_create_raid(vg_name, create_options, name, raid_type, size_bytes,
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--type', raid_type])
cmd.extend(['--size', '%dB' % size_bytes])
cmd.extend(['--size', str(size_bytes) + 'B'])
if num_stripes != 0:
cmd.extend(['--stripes', str(int(num_stripes))])
cmd.extend(['--stripes', str(num_stripes)])
if stripe_size_kb != 0:
cmd.extend(['--stripesize', str(int(stripe_size_kb))])
cmd.extend(['--stripesize', str(stripe_size_kb)])
cmd.extend(['--name', name, vg_name, '--yes'])
return call(cmd)
@@ -366,8 +363,8 @@ def vg_lv_create_mirror(
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--type', 'mirror'])
cmd.extend(['--mirrors', str(int(num_copies))])
cmd.extend(['--size', '%dB' % size_bytes])
cmd.extend(['--mirrors', str(num_copies)])
cmd.extend(['--size', str(size_bytes) + 'B'])
cmd.extend(['--name', name, vg_name, '--yes'])
return call(cmd)
@@ -388,24 +385,6 @@ def vg_create_thin_pool(md_full_name, data_full_name, create_options):
return call(cmd)
def vg_create_vdo_pool_lv_and_lv(vg_name, pool_name, lv_name, data_size,
virtual_size, create_options):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['-y', '--type', 'vdo', '-n', lv_name,
'-L', '%dB' % data_size, '-V', '%dB' % virtual_size,
"%s/%s" % (vg_name, pool_name)])
return call(cmd)
def vg_create_vdo_pool(pool_full_name, lv_name, virtual_size, create_options):
cmd = ['lvconvert']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--type', 'vdo-pool', '-n', lv_name, '--force', '-y',
'-V', '%dB' % virtual_size, pool_full_name])
return call(cmd)
def lv_remove(lv_path, remove_options):
cmd = ['lvremove']
cmd.extend(options_to_cli_args(remove_options))
@@ -439,7 +418,7 @@ def lv_resize(lv_full_name, size_change, pv_dests,
def lv_lv_create(lv_full_name, create_options, name, size_bytes):
cmd = ['lvcreate']
cmd.extend(options_to_cli_args(create_options))
cmd.extend(['--virtualsize', '%dB' % size_bytes, '-T'])
cmd.extend(['--virtualsize', str(size_bytes) + 'B', '-T'])
cmd.extend(['--name', name, lv_full_name, '--yes'])
return call(cmd)
@@ -453,15 +432,6 @@ def lv_cache_lv(cache_pool_full_name, lv_full_name, cache_options):
return call(cmd)
def lv_writecache_lv(cache_lv_full_name, lv_full_name, cache_options):
# lvconvert --type writecache --cachevol VG/CacheLV VG/OriginLV
cmd = ['lvconvert']
cmd.extend(options_to_cli_args(cache_options))
cmd.extend(['-y', '--type', 'writecache', '--cachevol',
cache_lv_full_name, lv_full_name])
return call(cmd)
def lv_detach_cache(lv_full_name, detach_options, destroy_cache):
cmd = ['lvconvert']
if destroy_cache:
@@ -477,28 +447,6 @@ def lv_detach_cache(lv_full_name, detach_options, destroy_cache):
return call(cmd)
def lv_vdo_compression(lv_path, enable, comp_options):
cmd = ['lvchange', '--compression']
if enable:
cmd.append('y')
else:
cmd.append('n')
cmd.extend(options_to_cli_args(comp_options))
cmd.append(lv_path)
return call(cmd)
def lv_vdo_deduplication(lv_path, enable, dedup_options):
cmd = ['lvchange', '--deduplication']
if enable:
cmd.append('y')
else:
cmd.append('n')
cmd.extend(options_to_cli_args(dedup_options))
cmd.append(lv_path)
return call(cmd)
def supports_json():
cmd = ['help']
rc, out, err = call(cmd)
@@ -511,16 +459,6 @@ def supports_json():
return False
def supports_vdo():
cmd = ['segtypes']
rc, out, err = call(cmd)
if rc == 0:
if "vdo" in out:
log_debug("We have VDO support")
return True
return False
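supports_json() and supports_vdo() share one probe pattern: run a cheap lvm command, scan its output for a keyword, and cache the answer (cfg.vdo_support) so later code can gate optional columns and dbus interfaces on it. A minimal standalone sketch of that gating, mirroring how lvm_full_report_json() extends its column list (column names taken from this tree; the function itself is invented):

def report_columns(vdo_support):
    # Always request the base LV columns; add VDO columns only when the
    # one-time 'lvm segtypes' probe reported VDO support.
    cols = ['lv_uuid', 'lv_name', 'lv_size']
    if vdo_support:
        cols += ['vdo_operating_mode', 'vdo_compression_state']
    return cols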
def lvm_full_report_json():
pv_columns = ['pv_name', 'pv_uuid', 'pv_fmt', 'pv_size', 'pv_free',
'pv_used', 'dev_size', 'pv_mda_size', 'pv_mda_free',
@@ -548,22 +486,6 @@ def lvm_full_report_json():
lv_seg_columns = ['seg_pe_ranges', 'segtype', 'lv_uuid']
if cfg.vdo_support:
lv_columns.extend(
['vdo_operating_mode', 'vdo_compression_state', 'vdo_index_state',
'vdo_used_size', 'vdo_saving_percent']
)
lv_seg_columns.extend(
['vdo_compression', 'vdo_deduplication',
'vdo_use_metadata_hints', 'vdo_minimum_io_size',
'vdo_block_map_cache_size', 'vdo_block_map_era_length',
'vdo_use_sparse_index', 'vdo_index_memory_size',
'vdo_slab_size', 'vdo_ack_threads', 'vdo_bio_threads',
'vdo_bio_rotation', 'vdo_cpu_threads', 'vdo_hash_zone_threads',
'vdo_logical_threads', 'vdo_physical_threads',
'vdo_max_discard', 'vdo_write_policy', 'vdo_header_size'])
cmd = _dc('fullreport', [
'-a', # Need hidden too
'--configreport', 'pv', '-o', ','.join(pv_columns),
@@ -634,7 +556,7 @@ def pv_resize(device, size_bytes, create_options):
cmd.extend(options_to_cli_args(create_options))
if size_bytes != 0:
cmd.extend(['--yes', '--setphysicalvolumesize', '%dB' % size_bytes])
cmd.extend(['--yes', '--setphysicalvolumesize', str(size_bytes) + 'B'])
cmd.extend([device])
return call(cmd)
@@ -730,12 +652,12 @@ def vg_allocation_policy(vg_name, policy, policy_options):
def vg_max_pv(vg_name, number, max_options):
return _vg_value_set(vg_name, ['--maxphysicalvolumes', str(int(number))],
return _vg_value_set(vg_name, ['--maxphysicalvolumes', str(number)],
max_options)
def vg_max_lv(vg_name, number, max_options):
return _vg_value_set(vg_name, ['-l', str(int(number))], max_options)
return _vg_value_set(vg_name, ['-l', str(number)], max_options)
def vg_uuid_gen(vg_name, ignore, options):
@@ -777,7 +699,6 @@ def activate_deactivate(op, name, activate, control_flags, options):
op += 'n'
cmd.append(op)
cmd.append("-y")
cmd.append(name)
return call(cmd)


@@ -29,26 +29,11 @@ def _main_thread_load(refresh=True, emit_signal=True):
refresh=refresh,
emit_signal=emit_signal,
cache_refresh=False)[1]
lv_changes = load_lvs(
num_total_changes += load_lvs(
refresh=refresh,
emit_signal=emit_signal,
cache_refresh=False)[1]
num_total_changes += lv_changes
# When the LVs change it can cause another change in the VGs which is
# missed if we don't scan through the VGs again. We could achieve this
# the other way and re-scan the LVs, but in general there are more LVs than
# VGs, thus this should be more efficient. This happens when a LV interface
# changes causing the dbus object representing it to be removed and
# recreated.
if refresh and lv_changes > 0:
num_total_changes += load_vgs(
refresh=refresh,
emit_signal=emit_signal,
cache_refresh=False)[1]
return num_total_changes


@@ -10,14 +10,14 @@
from .automatedproperties import AutomatedProperties
from . import utils
from .utils import vg_obj_path_generate, log_error, _handle_execute
from .utils import vg_obj_path_generate, log_error
import dbus
from . import cmdhandler
from . import cfg
from .cfg import LV_INTERFACE, THIN_POOL_INTERFACE, SNAPSHOT_INTERFACE, \
LV_COMMON_INTERFACE, CACHE_POOL_INTERFACE, LV_CACHED, VDO_POOL_INTERFACE
LV_COMMON_INTERFACE, CACHE_POOL_INTERFACE, LV_CACHED
from .request import RequestEntry
from .utils import n, n32, d
from .utils import n, n32
from .loader import common
from .state import State
from . import background
@@ -74,66 +74,23 @@ def lvs_state_retrieve(selection, cache_refresh=True):
lvs = sorted(cfg.db.fetch_lvs(selection), key=get_key)
for l in lvs:
if cfg.vdo_support:
rc.append(LvStateVdo(
l['lv_uuid'], l['lv_name'],
l['lv_path'], n(l['lv_size']),
l['vg_name'],
l['vg_uuid'], l['pool_lv_uuid'],
l['pool_lv'], l['origin_uuid'], l['origin'],
n32(l['data_percent']), l['lv_attr'],
l['lv_tags'], l['lv_active'], l['data_lv'],
l['metadata_lv'], l['segtype'], l['lv_role'],
l['lv_layout'],
n32(l['snap_percent']),
n32(l['metadata_percent']),
n32(l['copy_percent']),
n32(l['sync_percent']),
n(l['lv_metadata_size']),
l['move_pv'],
l['move_pv_uuid'],
l['vdo_operating_mode'],
l['vdo_compression_state'],
l['vdo_index_state'],
n(l['vdo_used_size']),
d(l['vdo_saving_percent']),
l['vdo_compression'],
l['vdo_deduplication'],
l['vdo_use_metadata_hints'],
n32(l['vdo_minimum_io_size']),
n(l['vdo_block_map_cache_size']),
n32(l['vdo_block_map_era_length']),
l['vdo_use_sparse_index'],
n(l['vdo_index_memory_size']),
n(l['vdo_slab_size']),
n32(l['vdo_ack_threads']),
n32(l['vdo_bio_threads']),
n32(l['vdo_bio_rotation']),
n32(l['vdo_cpu_threads']),
n32(l['vdo_hash_zone_threads']),
n32(l['vdo_logical_threads']),
n32(l['vdo_physical_threads']),
n32(l['vdo_max_discard']),
l['vdo_write_policy'],
n32(l['vdo_header_size'])))
else:
rc.append(LvState(
l['lv_uuid'], l['lv_name'],
l['lv_path'], n(l['lv_size']),
l['vg_name'],
l['vg_uuid'], l['pool_lv_uuid'],
l['pool_lv'], l['origin_uuid'], l['origin'],
n32(l['data_percent']), l['lv_attr'],
l['lv_tags'], l['lv_active'], l['data_lv'],
l['metadata_lv'], l['segtype'], l['lv_role'],
l['lv_layout'],
n32(l['snap_percent']),
n32(l['metadata_percent']),
n32(l['copy_percent']),
n32(l['sync_percent']),
n(l['lv_metadata_size']),
l['move_pv'],
l['move_pv_uuid']))
rc.append(LvState(
l['lv_uuid'], l['lv_name'],
l['lv_path'], n(l['lv_size']),
l['vg_name'],
l['vg_uuid'], l['pool_lv_uuid'],
l['pool_lv'], l['origin_uuid'], l['origin'],
n32(l['data_percent']), l['lv_attr'],
l['lv_tags'], l['lv_active'], l['data_lv'],
l['metadata_lv'], l['segtype'], l['lv_role'],
l['lv_layout'],
n32(l['snap_percent']),
n32(l['metadata_percent']),
n32(l['copy_percent']),
n32(l['sync_percent']),
n(l['lv_metadata_size']),
l['move_pv'],
l['move_pv_uuid']))
return rc
@@ -237,8 +194,6 @@ class LvState(State):
def _object_type_create(self):
if self.Attr[0] == 't':
return LvThinPool
elif self.Attr[0] == 'd':
return LvVdoPool
elif self.Attr[0] == 'C':
if 'pool' in self.layout:
return LvCachePool
@@ -265,34 +220,6 @@ class LvState(State):
return (klass, path_method)
class LvStateVdo(LvState):
def __init__(self, Uuid, Name, Path, SizeBytes,
vg_name, vg_uuid, pool_lv_uuid, PoolLv,
origin_uuid, OriginLv, DataPercent, Attr, Tags, active,
data_lv, metadata_lv, segtypes, role, layout, SnapPercent,
MetaDataPercent, CopyPercent, SyncPercent,
MetaDataSizeBytes, move_pv, move_pv_uuid,
vdo_operating_mode, vdo_compression_state, vdo_index_state,
vdo_used_size,vdo_saving_percent,vdo_compression,
vdo_deduplication,vdo_use_metadata_hints,
vdo_minimum_io_size,vdo_block_map_cache_size,
vdo_block_map_era_length,vdo_use_sparse_index,
vdo_index_memory_size,vdo_slab_size,vdo_ack_threads,
vdo_bio_threads,vdo_bio_rotation,vdo_cpu_threads,
vdo_hash_zone_threads,vdo_logical_threads,
vdo_physical_threads,vdo_max_discard,
vdo_write_policy,vdo_header_size):
super(LvStateVdo, self).__init__(Uuid, Name, Path, SizeBytes,
vg_name, vg_uuid, pool_lv_uuid, PoolLv,
origin_uuid, OriginLv, DataPercent, Attr, Tags, active,
data_lv, metadata_lv, segtypes, role, layout, SnapPercent,
MetaDataPercent, CopyPercent, SyncPercent,
MetaDataSizeBytes, move_pv, move_pv_uuid)
utils.init_class_from_arguments(self, "vdo_", snake_to_pascal=True)
# noinspection PyPep8Naming
@utils.dbus_property(LV_COMMON_INTERFACE, 'Uuid', 's')
@utils.dbus_property(LV_COMMON_INTERFACE, 'Name', 's')
@@ -348,7 +275,13 @@ class LvCommon(AutomatedProperties):
@staticmethod
def handle_execute(rc, out, err):
_handle_execute(rc, out, err, LV_INTERFACE)
if rc == 0:
cfg.load()
else:
# Need to work on error handling, need consistent
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(lv_uuid, lv_name):
@@ -388,7 +321,6 @@ class LvCommon(AutomatedProperties):
'l': 'mirror log device', 'c': 'under conversion',
'V': 'thin Volume', 't': 'thin pool', 'T': 'Thin pool data',
'e': 'raid or pool metadata or pool metadata spare',
'd': 'vdo pool', 'D': 'vdo pool data', 'g': 'integrity',
'-': 'Unspecified'}
return self.attr_struct(0, type_map)
@@ -524,7 +456,8 @@ class Lv(LvCommon):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Remove the LV, if successful then remove from the model
LvCommon.handle_execute(*cmdhandler.lv_remove(lv_name, remove_options))
rc, out, err = cmdhandler.lv_remove(lv_name, remove_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -544,8 +477,9 @@ class Lv(LvCommon):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Rename the logical volume
LvCommon.handle_execute(*cmdhandler.lv_rename(lv_name, new_name,
rename_options))
rc, out, err = cmdhandler.lv_rename(lv_name, new_name,
rename_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -594,11 +528,13 @@ class Lv(LvCommon):
remainder = space % 512
optional_size = space + 512 - remainder
LvCommon.handle_execute(*cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options,name, optional_size))
rc, out, err = cmdhandler.vg_lv_snapshot(
lv_name, snapshot_options, name, optional_size)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)
@dbus.service.method(
dbus_interface=LV_INTERFACE,
in_signature='stia{sv}',
@@ -634,8 +570,9 @@ class Lv(LvCommon):
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
size_change = new_size_bytes - dbo.SizeBytes
LvCommon.handle_execute(*cmdhandler.lv_resize(
dbo.lvm_id, size_change,pv_dests, resize_options))
rc, out, err = cmdhandler.lv_resize(dbo.lvm_id, size_change,
pv_dests, resize_options)
LvCommon.handle_execute(rc, out, err)
return "/"
@dbus.service.method(
@@ -670,8 +607,9 @@ class Lv(LvCommon):
options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(uuid, lv_name)
LvCommon.handle_execute(*cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options))
rc, out, err = cmdhandler.activate_deactivate(
'lvchange', lv_name, activate, control_flags, options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -705,8 +643,9 @@ class Lv(LvCommon):
def _add_rm_tags(uuid, lv_name, tags_add, tags_del, tag_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(uuid, lv_name)
LvCommon.handle_execute(*cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options))
rc, out, err = cmdhandler.lv_tag(
lv_name, tags_add, tags_del, tag_options)
LvCommon.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -743,152 +682,6 @@ class Lv(LvCommon):
cb, cbe, return_tuple=False)
cfg.worker_q.put(r)
@staticmethod
def _writecache_lv(lv_uuid, lv_name, lv_object_path, cache_options):
# Make sure we have a dbus object representing it
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
# Make sure we have dbus object representing lv to cache
lv_to_cache = cfg.om.get_object_by_path(lv_object_path)
if lv_to_cache:
fcn = lv_to_cache.lv_full_name()
rc, out, err = cmdhandler.lv_writecache_lv(
dbo.lv_full_name(), fcn, cache_options)
if rc == 0:
# When we cache an LV, the cache pool and the lv that is getting
# cached need to be removed from the object manager and
# re-created as their interfaces have changed!
mt_remove_dbus_objects((dbo, lv_to_cache))
cfg.load()
lv_converted = cfg.om.get_object_path_by_lvm_id(fcn)
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
else:
raise dbus.exceptions.DBusException(
LV_INTERFACE, 'LV to cache with object path %s not present!' %
lv_object_path)
return lv_converted
@dbus.service.method(
dbus_interface=LV_INTERFACE,
in_signature='oia{sv}',
out_signature='(oo)',
async_callbacks=('cb', 'cbe'))
def WriteCacheLv(self, lv_object, tmo, cache_options, cb, cbe):
r = RequestEntry(
tmo, Lv._writecache_lv,
(self.Uuid, self.lvm_id, lv_object,
cache_options), cb, cbe)
cfg.worker_q.put(r)
# noinspection PyPep8Naming
@utils.dbus_property(VDO_POOL_INTERFACE, 'OperatingMode', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'CompressionState', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'IndexState', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'UsedSize', 't')
@utils.dbus_property(VDO_POOL_INTERFACE, 'SavingPercent', 'd')
@utils.dbus_property(VDO_POOL_INTERFACE, 'Compression', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'Deduplication', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'UseMetadataHints', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'MinimumIoSize', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'BlockMapCacheSize', "t")
@utils.dbus_property(VDO_POOL_INTERFACE, 'BlockMapEraLength', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'UseSparseIndex', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'IndexMemorySize', 't')
@utils.dbus_property(VDO_POOL_INTERFACE, 'SlabSize', 't')
@utils.dbus_property(VDO_POOL_INTERFACE, 'AckThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'BioThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'BioRotation', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'CpuThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'HashZoneThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'LogicalThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'PhysicalThreads', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'MaxDiscard', 'u')
@utils.dbus_property(VDO_POOL_INTERFACE, 'WritePolicy', 's')
@utils.dbus_property(VDO_POOL_INTERFACE, 'HeaderSize', 'u')
class LvVdoPool(Lv):
_DataLv_meta = ("o", VDO_POOL_INTERFACE)
def __init__(self, object_path, object_state):
super(LvVdoPool, self).__init__(object_path, object_state)
self.set_interface(VDO_POOL_INTERFACE)
self._data_lv, _ = self._get_data_meta()
@property
def DataLv(self):
return dbus.ObjectPath(self._data_lv)
@staticmethod
def _enable_disable_compression(pool_uuid, pool_name, enable, comp_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(pool_uuid, pool_name)
# Rename the logical volume
LvCommon.handle_execute(*cmdhandler.lv_vdo_compression(
pool_name, enable, comp_options))
return '/'
@dbus.service.method(
dbus_interface=VDO_POOL_INTERFACE,
in_signature='ia{sv}',
out_signature='o',
async_callbacks=('cb', 'cbe'))
def EnableCompression(self, tmo, comp_options, cb, cbe):
r = RequestEntry(
tmo, LvVdoPool._enable_disable_compression,
(self.Uuid, self.lvm_id, True, comp_options),
cb, cbe, False)
cfg.worker_q.put(r)
@dbus.service.method(
dbus_interface=VDO_POOL_INTERFACE,
in_signature='ia{sv}',
out_signature='o',
async_callbacks=('cb', 'cbe'))
def DisableCompression(self, tmo, comp_options, cb, cbe):
r = RequestEntry(
tmo, LvVdoPool._enable_disable_compression,
(self.Uuid, self.lvm_id, False, comp_options),
cb, cbe, False)
cfg.worker_q.put(r)
@staticmethod
def _enable_disable_deduplication(pool_uuid, pool_name, enable, dedup_options):
# Make sure we have a dbus object representing it
LvCommon.validate_dbus_object(pool_uuid, pool_name)
# Rename the logical volume
LvCommon.handle_execute(*cmdhandler.lv_vdo_deduplication(
pool_name, enable, dedup_options))
return '/'
@dbus.service.method(
dbus_interface=VDO_POOL_INTERFACE,
in_signature='ia{sv}',
out_signature='o',
async_callbacks=('cb', 'cbe'))
def EnableDeduplication(self, tmo, dedup_options, cb, cbe):
r = RequestEntry(
tmo, LvVdoPool._enable_disable_deduplication,
(self.Uuid, self.lvm_id, True, dedup_options),
cb, cbe, False)
cfg.worker_q.put(r)
@dbus.service.method(
dbus_interface=VDO_POOL_INTERFACE,
in_signature='ia{sv}',
out_signature='o',
async_callbacks=('cb', 'cbe'))
def DisableDeduplication(self, tmo, dedup_options, cb, cbe):
r = RequestEntry(
tmo, LvVdoPool._enable_disable_deduplication,
(self.Uuid, self.lvm_id, False, dedup_options),
cb, cbe, False)
cfg.worker_q.put(r)
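The four compression and deduplication methods above all follow the same pattern: a D-Bus method with in_signature 'ia{sv}' (an integer timeout plus an options dict) that queues a RequestEntry on cfg.worker_q and reports an object path back through the async callbacks. A minimal client-side sketch with dbus-python follows; the well-known bus name, the interface string, and the object path are assumptions for illustration only and are not taken from this diff.
import dbus
# Assumed names, for illustration only; inspect the running daemon
# (e.g. with busctl) for the real bus name, interface and object paths.
BUS_NAME = 'com.redhat.lvmdbus1'
VDO_POOL_IFACE = 'com.redhat.lvmdbus1.VdoPool'   # assumed value of VDO_POOL_INTERFACE
POOL_PATH = '/com/redhat/lvmdbus1/VdoPool/0'     # hypothetical object path
bus = dbus.SystemBus()
pool = dbus.Interface(bus.get_object(BUS_NAME, POOL_PATH), VDO_POOL_IFACE)
# Matches in_signature='ia{sv}': an int32 timeout and an a{sv} options dict.
# The method returns an object path ('o'); the hedged expectation is the
# affected LV, or a Job object to poll if the timeout expires first.
result = pool.EnableCompression(dbus.Int32(10), dbus.Dictionary({}, signature='sv'))
print(result)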
# noinspection PyPep8Naming
class LvThinPool(Lv):
@@ -912,8 +705,10 @@ class LvThinPool(Lv):
def _lv_create(lv_uuid, lv_name, name, size_bytes, create_options):
# Make sure we have a dbus object representing it
dbo = LvCommon.validate_dbus_object(lv_uuid, lv_name)
LvCommon.handle_execute(*cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes))
rc, out, err = cmdhandler.lv_lv_create(
lv_name, create_options, name, size_bytes)
LvCommon.handle_execute(rc, out, err)
full_name = "%s/%s" % (dbo.vg_name_lookup(), name)
return cfg.om.get_object_path_by_lvm_id(full_name)

View File

@@ -20,7 +20,7 @@ from lvmdbusd.utils import log_debug, log_error
class DataStore(object):
def __init__(self, usejson=True, vdo_support=False):
def __init__(self, usejson=True):
self.pvs = {}
self.vgs = {}
self.lvs = {}
@@ -43,8 +43,6 @@ class DataStore(object):
else:
self.json = usejson
self.vdo_support = vdo_support
@staticmethod
def _insert_record(table, key, record, allowed_multiple):
if key in table:
@@ -243,7 +241,8 @@ class DataStore(object):
return DataStore._parse_lvs_common(c_lvs, c_lv_full_lookup)
def _parse_lvs_json(self, _all):
@staticmethod
def _parse_lvs_json(_all):
c_lvs = OrderedDict()
c_lv_full_lookup = {}
@@ -263,13 +262,8 @@ class DataStore(object):
if 'seg' in r:
for s in r['seg']:
r = c_lvs[s['lv_uuid']]
r.setdefault('seg_pe_ranges', []).\
append(s['seg_pe_ranges'])
r.setdefault('seg_pe_ranges', []).append(s['seg_pe_ranges'])
r.setdefault('segtype', []).append(s['segtype'])
if self.vdo_support:
for seg_key, seg_val in s.items():
if seg_key.startswith("vdo_"):
r[seg_key] = seg_val
return DataStore._parse_lvs_common(c_lvs, c_lv_full_lookup)

View File

@@ -29,7 +29,7 @@ from .utils import log_debug, log_error
import argparse
import os
import sys
from .cmdhandler import LvmFlightRecorder, supports_vdo
from .cmdhandler import LvmFlightRecorder
from .request import RequestEntry
@@ -44,10 +44,10 @@ def process_request():
try:
req = cfg.worker_q.get(True, 5)
log_debug(
"Method start: %s with args %s (callback = %s)" %
(str(req.method), str(req.arguments), str(req.cb)))
"Running method: %s with args %s" %
(str(req.method), str(req.arguments)))
req.run_cmd()
log_debug("Method complete: %s" % str(req.method))
log_debug("Method complete ")
except queue.Empty:
pass
except Exception:
@@ -127,14 +127,6 @@ def main():
log_error("You cannot specify --lvmshell and --nojson")
sys.exit(1)
# We will dynamically add interfaces which support vdo if it
# exists.
cfg.vdo_support = supports_vdo()
if cfg.vdo_support and not cfg.args.use_json:
log_error("You cannot specify --nojson when lvm has VDO support")
sys.exit(1)
# List of threads that we start up
thread_list = []
@@ -155,12 +147,12 @@ def main():
cfg.om = Lvm(BASE_OBJ_PATH)
cfg.om.register_object(Manager(MANAGER_OBJ_PATH))
cfg.db = lvmdb.DataStore(cfg.args.use_json, cfg.vdo_support)
cfg.db = lvmdb.DataStore(cfg.args.use_json)
# Use a thread to process requests so that we cannot hang the dbus library
# thread that is handling the dbus interface
thread_list.append(
threading.Thread(target=process_request, name='process_request'))
thread_list.append(threading.Thread(target=process_request,
name='process_request'))
# Have a single thread handling updating lvm and the dbus model so we
# don't have multiple threads doing this at the same time

View File

@@ -27,7 +27,7 @@ class Manager(AutomatedProperties):
@property
def Version(self):
return dbus.String('1.1.0')
return dbus.String('1.0.0')
@staticmethod
def handle_execute(rc, out, err):
@@ -107,10 +107,10 @@ class Manager(AutomatedProperties):
rc = cfg.load(log=False)
if rc != 0:
utils.log_debug('Manager.Refresh - exit %d %d' % (rc, lc),
utils.log_debug('Manager.Refresh - exit %d' % (rc),
'bg_black', 'fg_light_red')
else:
utils.log_debug('Manager.Refresh - exit %d %d' % (rc, lc))
utils.log_debug('Manager.Refresh - exit %d' % (rc))
return rc + lc
@dbus.service.method(

View File

@@ -14,7 +14,7 @@ import dbus
from .cfg import PV_INTERFACE
from . import cmdhandler
from .utils import vg_obj_path_generate, n, pv_obj_path_generate, \
lv_object_path_method, _handle_execute
lv_object_path_method
from .loader import common
from .request import RequestEntry
from .state import State
@@ -138,12 +138,19 @@ class Pv(AutomatedProperties):
# Remove the PV, if successful then remove from the model
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
Pv.handle_execute(*cmdhandler.pv_remove(pv_name, remove_options))
rc, out, err = cmdhandler.pv_remove(pv_name, remove_options)
Pv.handle_execute(rc, out, err)
return '/'
@staticmethod
def handle_execute(rc, out, err):
return _handle_execute(rc, out, err, PV_INTERFACE)
if rc == 0:
cfg.load()
else:
# Need to work on error handling; need a consistent approach
raise dbus.exceptions.DBusException(
PV_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(pv_uuid, pv_name):
@@ -171,8 +178,10 @@ class Pv(AutomatedProperties):
def _resize(pv_uuid, pv_name, new_size_bytes, resize_options):
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
Pv.handle_execute(*cmdhandler.pv_resize(pv_name, new_size_bytes,
resize_options))
rc, out, err = cmdhandler.pv_resize(pv_name, new_size_bytes,
resize_options)
Pv.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -191,8 +200,9 @@ class Pv(AutomatedProperties):
def _allocation_enabled(pv_uuid, pv_name, yes_no, allocation_options):
# Make sure we have a dbus object representing it
Pv.validate_dbus_object(pv_uuid, pv_name)
Pv.handle_execute(*cmdhandler.pv_allocatable(pv_name, yes_no,
allocation_options))
rc, out, err = cmdhandler.pv_allocatable(
pv_name, yes_no, allocation_options)
Pv.handle_execute(rc, out, err)
return '/'
@dbus.service.method(

View File

@@ -26,15 +26,6 @@ import signal
STDOUT_TTY = os.isatty(sys.stdout.fileno())
def _handle_execute(rc, out, err, interface):
if rc == 0:
cfg.load()
else:
# Need to work on error handling; need a consistent approach
raise dbus.exceptions.DBusException(
interface, 'Exit code %s, stderr = %s' % (str(rc), err))
def rtype(dbus_type):
"""
Decorator making sure that the decorated function returns a value of
@@ -66,20 +57,8 @@ def n32(v):
return int(float(v))
@rtype(dbus.Double)
def d(v):
if not v:
return 0.0
return float(v)
def _snake_to_pascal(s):
return ''.join(x.title() for x in s.split('_'))
# noinspection PyProtectedMember
def init_class_from_arguments(
obj_instance, begin_suffix=None, snake_to_pascal=False):
def init_class_from_arguments(obj_instance):
for k, v in list(sys._getframe(1).f_locals.items()):
if k != 'self':
nt = k
@@ -90,17 +69,8 @@ def init_class_from_arguments(
cur = getattr(obj_instance, nt, v)
# print 'Init class %s = %s' % (nt, str(v))
if not (cur and len(str(cur)) and (v is None or len(str(v))) == 0)\
and (begin_suffix is None or nt.startswith(begin_suffix)):
if begin_suffix and nt.startswith(begin_suffix):
name = nt[len(begin_suffix):]
if snake_to_pascal:
name = _snake_to_pascal(name)
setattr(obj_instance, name, v)
else:
setattr(obj_instance, nt, v)
if not (cur and len(str(cur)) and (v is None or len(str(v))) == 0):
setattr(obj_instance, nt, v)
def get_properties(f):
@@ -368,8 +338,6 @@ def lv_object_path_method(name, meta):
return _hidden_lv_obj_path_generate
elif meta[0][0] == 't':
return _thin_pool_obj_path_generate
elif meta[0][0] == 'd':
return _vdo_pool_object_path_generate
elif meta[0][0] == 'C' and 'pool' in meta[1]:
return _cache_pool_obj_path_generate
@@ -387,10 +355,6 @@ def _thin_pool_obj_path_generate():
return cfg.THIN_POOL_PATH + "/%d" % next(cfg.thin_id)
def _vdo_pool_object_path_generate():
return cfg.VDO_POOL_PATH + "/%d" % next(cfg.vdo_id)
def _cache_pool_obj_path_generate():
return cfg.CACHE_POOL_PATH + "/%d" % next(cfg.cache_pool_id)
@@ -482,7 +446,7 @@ _ALLOWABLE_CH_SET = set(_ALLOWABLE_CH)
_ALLOWABLE_VG_LV_CH = string.ascii_letters + string.digits + '.-_+'
_ALLOWABLE_VG_LV_CH_SET = set(_ALLOWABLE_VG_LV_CH)
_LV_NAME_RESERVED = ("_cdata", "_cmeta", "_corig", "_mimage", "_mlog",
"_pmspare", "_rimage", "_rmeta", "_tdata", "_tmeta", "_vorigin", "_vdata")
"_pmspare", "_rimage", "_rmeta", "_tdata", "_tmeta", "_vorigin")
# Tags can have the characters, based on the code
# a-zA-Z0-9._-+/=!:&#

View File

@@ -10,11 +10,10 @@
from .automatedproperties import AutomatedProperties
from . import utils
from .utils import pv_obj_path_generate, vg_obj_path_generate, n, \
_handle_execute
from .utils import pv_obj_path_generate, vg_obj_path_generate, n
import dbus
from . import cfg
from .cfg import VG_INTERFACE, VG_VDO_INTERFACE
from .cfg import VG_INTERFACE
from . import cmdhandler
from .request import RequestEntry
from .loader import common
@@ -47,7 +46,7 @@ def vgs_state_retrieve(selection, cache_refresh=True):
def load_vgs(vg_specific=None, object_path=None, refresh=False,
emit_signal=False, cache_refresh=True):
return common(vgs_state_retrieve, (Vg, VgVdo, ), vg_specific, object_path, refresh,
return common(vgs_state_retrieve, (Vg,), vg_specific, object_path, refresh,
emit_signal, cache_refresh)
@@ -99,11 +98,7 @@ class VgState(State):
if not path:
path = cfg.om.get_object_path_by_uuid_lvm_id(
self.Uuid, self.internal_name, vg_obj_path_generate)
if cfg.vdo_support:
return VgVdo(path, self)
else:
return Vg(path, self)
return Vg(path, self)
# noinspection PyMethodMayBeStatic
def creation_signature(self):
@@ -159,7 +154,13 @@ class Vg(AutomatedProperties):
@staticmethod
def handle_execute(rc, out, err):
return _handle_execute(rc, out, err, VG_INTERFACE)
if rc == 0:
cfg.load()
else:
# Need to work on error handling; need a consistent approach
raise dbus.exceptions.DBusException(
VG_INTERFACE,
'Exit code %s, stderr = %s' % (str(rc), err))
@staticmethod
def validate_dbus_object(vg_uuid, vg_name):
@@ -175,8 +176,9 @@ class Vg(AutomatedProperties):
def _rename(uuid, vg_name, new_name, rename_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_rename(
uuid, new_name, rename_options))
rc, out, err = cmdhandler.vg_rename(
uuid, new_name, rename_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -195,7 +197,8 @@ class Vg(AutomatedProperties):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
# Remove the VG, if successful then remove from the model
Vg.handle_execute(*cmdhandler.vg_remove(vg_name, remove_options))
rc, out, err = cmdhandler.vg_remove(vg_name, remove_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -211,7 +214,8 @@ class Vg(AutomatedProperties):
@staticmethod
def _change(uuid, vg_name, change_options):
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_change(change_options, vg_name))
rc, out, err = cmdhandler.vg_change(change_options, vg_name)
Vg.handle_execute(rc, out, err)
return '/'
# TODO: This should be broken into a number of different methods
@@ -247,8 +251,9 @@ class Vg(AutomatedProperties):
VG_INTERFACE,
'PV Object path not found = %s!' % pv_op)
Vg.handle_execute(*cmdhandler.vg_reduce(
vg_name, missing, pv_devices, reduce_options))
rc, out, err = cmdhandler.vg_reduce(vg_name, missing, pv_devices,
reduce_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -278,8 +283,9 @@ class Vg(AutomatedProperties):
VG_INTERFACE, 'PV Object path not found = %s!' % i)
if len(extend_devices):
Vg.handle_execute(*cmdhandler.vg_extend(
vg_name, extend_devices, extend_options))
rc, out, err = cmdhandler.vg_extend(vg_name, extend_devices,
extend_options)
Vg.handle_execute(rc, out, err)
else:
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'No pv_object_paths provided!')
@@ -333,8 +339,10 @@ class Vg(AutomatedProperties):
pv_dests.append((pv_dbus_obj.lvm_id, pr[1], pr[2]))
Vg.handle_execute(*cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests))
rc, out, err = cmdhandler.vg_lv_create(
vg_name, create_options, name, size_bytes, pv_dests)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
@@ -372,8 +380,11 @@ class Vg(AutomatedProperties):
thin_pool, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool))
rc, out, err = cmdhandler.vg_lv_create_linear(
vg_name, create_options, name, size_bytes, thin_pool)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
@@ -395,9 +406,10 @@ class Vg(AutomatedProperties):
stripe_size_kb, thin_pool, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_lv_create_striped(
rc, out, err = cmdhandler.vg_lv_create_striped(
vg_name, create_options, name, size_bytes,
num_stripes, stripe_size_kb, thin_pool))
num_stripes, stripe_size_kb, thin_pool)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
@@ -422,8 +434,9 @@ class Vg(AutomatedProperties):
num_copies, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies))
rc, out, err = cmdhandler.vg_lv_create_mirror(
vg_name, create_options, name, size_bytes, num_copies)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
@@ -446,9 +459,10 @@ class Vg(AutomatedProperties):
num_stripes, stripe_size_kb, create_options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_lv_create_raid(
rc, out, err = cmdhandler.vg_lv_create_raid(
vg_name, create_options, name, raid_type, size_bytes,
num_stripes, stripe_size_kb))
num_stripes, stripe_size_kb)
Vg.handle_execute(rc, out, err)
return Vg.fetch_new_lv(vg_name, name)
@dbus.service.method(
@@ -546,8 +560,9 @@ class Vg(AutomatedProperties):
raise dbus.exceptions.DBusException(
VG_INTERFACE, 'PV object path = %s not found' % p)
Vg.handle_execute(*cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options))
rc, out, err = cmdhandler.pv_tag(
pv_devices, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -588,8 +603,9 @@ class Vg(AutomatedProperties):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options))
rc, out, err = cmdhandler.vg_tag(
vg_name, tags_add, tags_del, tag_options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -628,7 +644,8 @@ class Vg(AutomatedProperties):
def _vg_change_set(uuid, vg_name, method, value, options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*method(vg_name, value, options))
rc, out, err = method(vg_name, value, options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -688,8 +705,9 @@ class Vg(AutomatedProperties):
options):
# Make sure we have a dbus object representing it
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options))
rc, out, err = cmdhandler.activate_deactivate(
'vgchange', vg_name, activate, control_flags, options)
Vg.handle_execute(rc, out, err)
return '/'
@dbus.service.method(
@@ -777,71 +795,3 @@ class Vg(AutomatedProperties):
@property
def Clustered(self):
return self._attribute(5, 'c')
class VgVdo(Vg):
# noinspection PyUnusedLocal,PyPep8Naming
def __init__(self, object_path, object_state):
super(VgVdo, self).__init__(object_path, vgs_state_retrieve)
self.set_interface(VG_VDO_INTERFACE)
self._object_path = object_path
self.state = object_state
@staticmethod
def _lv_vdo_pool_create_with_lv(uuid, vg_name, pool_name, lv_name,
data_size, virtual_size, create_options):
Vg.validate_dbus_object(uuid, vg_name)
Vg.handle_execute(*cmdhandler.vg_create_vdo_pool_lv_and_lv(
vg_name, pool_name, lv_name, data_size, virtual_size,
create_options))
return Vg.fetch_new_lv(vg_name, pool_name)
@dbus.service.method(
dbus_interface=VG_VDO_INTERFACE,
in_signature='ssttia{sv}',
out_signature='(oo)',
async_callbacks=('cb', 'cbe'))
def CreateVdoPoolandLv(self, pool_name, lv_name, data_size, virtual_size,
tmo, create_options, cb, cbe):
utils.validate_lv_name(VG_VDO_INTERFACE, self.Name, pool_name)
utils.validate_lv_name(VG_VDO_INTERFACE, self.Name, lv_name)
r = RequestEntry(tmo, VgVdo._lv_vdo_pool_create_with_lv,
(self.state.Uuid, self.state.lvm_id,
pool_name, lv_name, round_size(data_size),
round_size(virtual_size),
create_options), cb, cbe)
cfg.worker_q.put(r)
@staticmethod
def _vdo_pool_create(uuid, vg_name, pool_lv, name, virtual_size, create_options):
Vg.validate_dbus_object(uuid, vg_name)
# Retrieve the full name of the pool lv
pool = cfg.om.get_object_by_path(pool_lv)
if not pool:
msg = 'LV with object path %s not present!' % \
(pool_lv)
raise dbus.exceptions.DBusException(VG_VDO_INTERFACE, msg)
Vg.handle_execute(*cmdhandler.vg_create_vdo_pool(
pool.lv_full_name(), name, virtual_size,
create_options))
return Vg.fetch_new_lv(vg_name, pool.Name)
@dbus.service.method(
dbus_interface=VG_VDO_INTERFACE,
in_signature='ostia{sv}',
out_signature='(oo)',
async_callbacks=('cb', 'cbe'))
def CreateVdoPool(self, pool_lv, name, virtual_size,
tmo, create_options, cb, cbe):
utils.validate_lv_name(VG_VDO_INTERFACE, self.Name, name)
r = RequestEntry(tmo, VgVdo._vdo_pool_create,
(self.state.Uuid, self.state.lvm_id,
pool_lv, name,
round_size(virtual_size),
create_options), cb, cbe)
cfg.worker_q.put(r)

View File

@@ -27,7 +27,6 @@ endif
ifeq ("@BUILD_LOCKDDLM@", "yes")
SOURCES += lvmlockd-dlm.c
LOCK_LIBS += -ldlm_lt
LOCK_LIBS += -ldlmcontrol
endif
SOURCES2 = lvmlockctl.c
@@ -38,25 +37,18 @@ TARGETS = lvmlockd lvmlockctl
include $(top_builddir)/make.tmpl
CFLAGS += $(EXTRA_EXEC_CFLAGS)
CFLAGS += $(EXTRA_EXEC_CFLAGS) $(SYSTEMD_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) $(PTHREAD_LIBS)
LDFLAGS += $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(PTHREAD_LIBS) $(SYSTEMD_LIBS)
ifeq ($(USE_SD_NOTIFY),yes)
CFLAGS += $(shell pkg-config --cflags libsystemd) -DUSE_SD_NOTIFY
LIBS += $(shell pkg-config --libs libsystemd)
endif
lvmlockd: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
lvmlockd: $(OBJECTS) $(top_builddir)/libdaemon/server/libdaemonserver.a $(INTERNAL_LIBS)
@echo " [CC] $@"
$(Q) $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LOCK_LIBS) -ldaemonserver $(INTERNAL_LIBS) $(LIBS)
$(Q) $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $+ $(LOCK_LIBS) $(LIBS)
lvmlockctl: lvmlockctl.o $(top_builddir)/libdaemon/client/libdaemonclient.a
lvmlockctl: lvmlockctl.o $(INTERNAL_LIBS)
@echo " [CC] $@"
$(Q) $(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmlockctl.o $(INTERNAL_LIBS) $(LIBS)
$(Q) $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $+ $(LIBS)
install_lvmlockd: lvmlockd
@echo " [INSTALL] $<"

View File

@@ -280,12 +280,13 @@ static void format_info_line(char *line, char *r_name, char *r_type)
static void format_info(void)
{
char line[MAX_LINE] = { 0 };
char r_name[MAX_NAME+1] = { 0 };
char r_type[MAX_NAME+1] = { 0 };
char line[MAX_LINE];
char r_name[MAX_NAME+1];
char r_type[MAX_NAME+1];
int i, j;
j = 0;
memset(line, 0, sizeof(line));
for (i = 0; i < dump_len; i++) {
line[j++] = dump_buf[i];
@@ -325,8 +326,6 @@ static int _lvmlockd_result(daemon_reply reply, int *result)
{
int reply_result;
*result = NO_LOCKD_RESULT;
if (reply.error) {
log_error("lvmlockd_result reply error %d", reply.error);
return 0;
@@ -338,7 +337,7 @@ static int _lvmlockd_result(daemon_reply reply, int *result)
}
reply_result = daemon_reply_int(reply, "op_result", NO_LOCKD_RESULT);
if (reply_result == NO_LOCKD_RESULT) {
if (reply_result == -1000) {
log_error("lvmlockd_result no op_result");
return 0;
}

View File

@@ -14,7 +14,6 @@
#include "libdaemon/client/daemon-client.h"
#define LVMLOCKD_SOCKET DEFAULT_RUN_DIR "/lvmlockd.socket"
#define LVMLOCKD_ADOPT_FILE DEFAULT_RUN_DIR "/lvmlockd.adopt"
/* Wrappers to open/close connection */
@@ -23,9 +22,9 @@ static inline daemon_handle lvmlockd_open(const char *sock)
daemon_info lvmlockd_info = {
.path = "lvmlockd",
.socket = sock ?: LVMLOCKD_SOCKET,
.autostart = 0,
.protocol = "lvmlockd",
.protocol_version = 1,
.autostart = 0
};
return daemon_open(lvmlockd_info);
@@ -33,7 +32,7 @@ static inline daemon_handle lvmlockd_open(const char *sock)
static inline void lvmlockd_close(daemon_handle h)
{
daemon_close(h);
return daemon_close(h);
}
/*

View File

@@ -31,15 +31,13 @@
#include <sys/utsname.h>
#include <sys/un.h>
#ifdef USE_SD_NOTIFY
#ifdef NOTIFYDBUS_SUPPORT
#include <systemd/sd-daemon.h>
#endif
#define EXTERN
#include "lvmlockd-internal.h"
static int str_to_mode(const char *str);
/*
* Basic operation of lvmlockd
*
@@ -144,8 +142,6 @@ static const char *lvmlockd_protocol = "lvmlockd";
static const int lvmlockd_protocol_version = 1;
static int daemon_quit;
static int adopt_opt;
static uint32_t adopt_update_count;
static const char *adopt_file;
/*
* We use a separate socket for dumping daemon info.
@@ -506,10 +502,6 @@ static struct lock *alloc_lock(void)
static void free_action(struct action *act)
{
if (act->path) {
free(act->path);
act->path = NULL;
}
pthread_mutex_lock(&unused_struct_mutex);
if (unused_action_count >= MAX_UNUSED_ACTION) {
free(act);
@@ -733,8 +725,6 @@ static const char *op_str(int x)
return "rename_final";
case LD_OP_RUNNING_LM:
return "running_lm";
case LD_OP_QUERY_LOCK:
return "query_lock";
case LD_OP_FIND_FREE_LOCK:
return "find_free_lock";
case LD_OP_KILL_VG:
@@ -747,8 +737,6 @@ static const char *op_str(int x)
return "dump_info";
case LD_OP_BUSY:
return "busy";
case LD_OP_REFRESH_LV:
return "refresh_lv";
default:
return "op_unknown";
};
@@ -815,146 +803,6 @@ int version_from_args(char *args, unsigned int *major, unsigned int *minor, unsi
return 0;
}
/*
* Write new info when a command exits if that command has acquired a new LV
* lock. If the command has released an LV lock we don't bother updating the
* info. When adopting, we eliminate any LV lock adoptions if there is no dm
* device for that LV. If lvmlockd is terminated after acquiring but before
* writing this file, those LV locks would not be adopted on restart.
*/
#define ADOPT_VERSION_MAJOR 1
#define ADOPT_VERSION_MINOR 0
static void write_adopt_file(void)
{
struct lockspace *ls;
struct resource *r;
struct lock *lk;
time_t t;
FILE *fp;
if (!(fp = fopen(adopt_file, "w")))
return;
adopt_update_count++;
t = time(NULL);
fprintf(fp, "lvmlockd adopt_version %u.%u pid %d updates %u %s",
ADOPT_VERSION_MAJOR, ADOPT_VERSION_MINOR, getpid(), adopt_update_count, ctime(&t));
pthread_mutex_lock(&lockspaces_mutex);
list_for_each_entry(ls, &lockspaces, list) {
if (ls->lm_type == LD_LM_DLM && !strcmp(ls->name, gl_lsname_dlm))
continue;
fprintf(fp, "VG: %38s %s %s %s\n",
ls->vg_uuid, ls->vg_name, lm_str(ls->lm_type), ls->vg_args);
list_for_each_entry(r, &ls->resources, list) {
if (r->type != LD_RT_LV)
continue;
if ((r->mode != LD_LK_EX) && (r->mode != LD_LK_SH))
continue;
list_for_each_entry(lk, &r->locks, list) {
fprintf(fp, "LV: %38s %s %s %s %u\n",
ls->vg_uuid, r->name, r->lv_args, mode_str(r->mode), r->version);
}
}
}
pthread_mutex_unlock(&lockspaces_mutex);
fflush(fp);
fclose(fp);
}
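For reference, the fprintf format strings in write_adopt_file above imply one header line, one VG: line per lockd lockspace, and one LV: line per held LV lock. A hypothetical example of the resulting adopt file (uuids, names, pid, update count and timestamp are invented; only the field layout comes from the code):
lvmlockd adopt_version 1.0 pid 12345 updates 2 Thu Jan 31 11:32:30 2019
VG: AAAAAA-1111-2222-3333-4444-5555-666666 vg0 sanlock 1.0.0:lvmlock
LV: AAAAAA-1111-2222-3333-4444-5555-666666 BBBBBB-7777-8888-9999-aaaa-bbbb-cccccc 1.0.0:70254592 ex 0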
static int read_adopt_file(struct list_head *vg_lockd)
{
char adopt_line[512];
char vg_uuid[72];
char lm_type_str[16];
char mode[8];
struct lockspace *ls = NULL, *ls2;
struct resource *r;
FILE *fp;
if (MAX_ARGS != 64 || MAX_NAME != 64)
return -1;
if (!(fp = fopen(adopt_file, "r")))
return 0;
while (fgets(adopt_line, sizeof(adopt_line), fp)) {
if (adopt_line[0] == '#')
continue;
else if (!strncmp(adopt_line, "lvmlockd", 8)) {
unsigned int v_major = 0, v_minor = 0;
if ((sscanf(adopt_line, "lvmlockd adopt_version %u.%u", &v_major, &v_minor) != 2) ||
(v_major != ADOPT_VERSION_MAJOR))
goto fail;
} else if (!strncmp(adopt_line, "VG:", 3)) {
if (!(ls = alloc_lockspace()))
goto fail;
memset(vg_uuid, 0, sizeof(vg_uuid));
memset(lm_type_str, 0, sizeof(lm_type_str));
if (sscanf(adopt_line, "VG: %63s %64s %15s %64s",
vg_uuid, ls->vg_name, lm_type_str, ls->vg_args) != 4) {
goto fail;
}
memcpy(ls->vg_uuid, vg_uuid, 64);
if ((ls->lm_type = str_to_lm(lm_type_str)) < 0)
goto fail;
list_add(&ls->list, vg_lockd);
} else if (!strncmp(adopt_line, "LV:", 3)) {
if (!(r = alloc_resource()))
goto fail;
r->type = LD_RT_LV;
memset(vg_uuid, 0, sizeof(vg_uuid));
memset(mode, 0, sizeof(mode));
if (sscanf(adopt_line, "LV: %64s %64s %s %7s %u",
vg_uuid, r->name, r->lv_args, mode, &r->version) != 5) {
goto fail;
}
if ((r->adopt_mode = str_to_mode(mode)) == LD_LK_IV)
goto fail;
if (ls && !memcmp(ls->vg_uuid, vg_uuid, 64)) {
list_add(&r->list, &ls->resources);
r = NULL;
} else {
list_for_each_entry(ls2, vg_lockd, list) {
if (memcmp(ls2->vg_uuid, vg_uuid, 64))
continue;
list_add(&r->list, &ls2->resources);
r = NULL;
break;
}
}
if (r) {
log_error("No lockspace found for resource %s vg_uuid %s", r->name, vg_uuid);
goto fail;
}
}
}
fclose(fp);
return 0;
fail:
fclose(fp);
return -1;
}
/*
* These are few enough that arrays of function pointers can
* be avoided.
@@ -1966,9 +1814,9 @@ static void res_process(struct lockspace *ls, struct resource *r,
add_client_result(act);
} else {
/* persistent lock is sh, transient request is ex */
/* FIXME: can we remove this case? do a convert here? */
log_debug("res_process %s existing persistent lock new transient", r->name);
r->last_client_id = act->client_id;
act->flags |= LD_AF_SH_EXISTS;
act->result = -EEXIST;
list_del(&act->list);
add_client_result(act);
@@ -2348,7 +2196,6 @@ static int process_op_during_kill(struct action *act)
case LD_OP_UPDATE:
case LD_OP_RENAME_BEFORE:
case LD_OP_RENAME_FINAL:
case LD_OP_QUERY_LOCK:
case LD_OP_FIND_FREE_LOCK:
return 0;
};
@@ -2374,7 +2221,7 @@ static void *lockspace_thread_main(void *arg_in)
struct action *act_op_free = NULL;
struct list_head tmp_act;
struct list_head act_close;
char tmp_name[MAX_NAME+5];
char tmp_name[MAX_NAME+1];
int free_vg = 0;
int drop_vg = 0;
int error = 0;
@@ -2573,19 +2420,6 @@ static void *lockspace_thread_main(void *arg_in)
break;
}
if (act->op == LD_OP_QUERY_LOCK) {
r = find_resource_act(ls, act, 0);
if (!r)
act->result = -ENOENT;
else {
act->result = 0;
act->mode = r->mode;
}
list_del(&act->list);
add_client_result(act);
continue;
}
if (act->op == LD_OP_FIND_FREE_LOCK && act->rt == LD_RT_VG) {
uint64_t free_offset = 0;
int sector_size = 0;
@@ -2768,10 +2602,8 @@ out_act:
* blank or fill it with garbage, but instead set it to REM:<name>
* to make it easier to follow the progress of freeing via log_debug.
*/
memset(tmp_name, 0, sizeof(tmp_name));
memcpy(tmp_name, "REM:", 4);
strncpy(tmp_name+4, ls->name, sizeof(tmp_name)-4);
memcpy(ls->name, tmp_name, sizeof(ls->name));
dm_strncpy(tmp_name, ls->name, sizeof(tmp_name));
snprintf(ls->name, sizeof(ls->name), "REM:%s", tmp_name);
pthread_mutex_unlock(&lockspaces_mutex);
/* worker_thread will join this thread, and free the ls */
@@ -3573,15 +3405,6 @@ static void *worker_thread_main(void *arg_in)
else
list_add(&act->list, &delayed_list);
} else if (act->op == LD_OP_REFRESH_LV) {
log_debug("work refresh_lv %s %s", act->lv_uuid, act->path);
rv = lm_refresh_lv_start_dlm(act);
if (rv < 0) {
act->result = rv;
add_client_result(act);
} else
list_add(&act->list, &delayed_list);
} else {
log_error("work unknown op %d", act->op);
act->result = -EINVAL;
@@ -3617,19 +3440,6 @@ static void *worker_thread_main(void *arg_in)
act->result = 0;
add_client_result(act);
}
} else if (act->op == LD_OP_REFRESH_LV) {
log_debug("work delayed refresh_lv");
rv = lm_refresh_lv_check_dlm(act);
if (!rv) {
list_del(&act->list);
act->result = 0;
add_client_result(act);
} else if ((rv < 0) && (rv != -EAGAIN)) {
list_del(&act->list);
act->result = rv;
add_client_result(act);
}
}
}
@@ -3835,9 +3645,6 @@ static int client_send_result(struct client *cl, struct action *act)
if ((act->flags & LD_AF_WARN_GL_REMOVED) || gl_vg_removed)
strcat(result_flags, "WARN_GL_REMOVED,");
if (act->flags & LD_AF_SH_EXISTS)
strcat(result_flags, "SH_EXISTS,");
if (act->op == LD_OP_INIT) {
/*
* init is a special case where lock args need
@@ -3866,20 +3673,6 @@ static int client_send_result(struct client *cl, struct action *act)
"result_flags = %s", result_flags[0] ? result_flags : "none",
NULL);
} else if (act->op == LD_OP_QUERY_LOCK) {
log_debug("send %s[%d] cl %u %s %s rv %d mode %d",
cl->name[0] ? cl->name : "client", cl->pid, cl->id,
op_str(act->op), rt_str(act->rt),
act->result, act->mode);
res = daemon_reply_simple("OK",
"op = " FMTd64, (int64_t)act->op,
"op_result = " FMTd64, (int64_t) act->result,
"lock_type = %s", lm_str(act->lm_type),
"mode = %s", mode_str(act->mode),
NULL);
} else if (act->op == LD_OP_DUMP_LOG || act->op == LD_OP_DUMP_INFO) {
/*
* lvmlockctl creates the unix socket then asks us to write to it.
@@ -4210,16 +4003,6 @@ static int str_to_op_rt(const char *req_name, int *op, int *rt)
*rt = 0;
return 0;
}
if (!strcmp(req_name, "query_lock_vg")) {
*op = LD_OP_QUERY_LOCK;
*rt = LD_RT_VG;
return 0;
}
if (!strcmp(req_name, "query_lock_lv")) {
*op = LD_OP_QUERY_LOCK;
*rt = LD_RT_LV;
return 0;
}
if (!strcmp(req_name, "find_free_lock")) {
*op = LD_OP_FIND_FREE_LOCK;
*rt = LD_RT_VG;
@@ -4235,11 +4018,6 @@ static int str_to_op_rt(const char *req_name, int *op, int *rt)
*rt = LD_RT_VG;
return 0;
}
if (!strcmp(req_name, "refresh_lv")) {
*op = LD_OP_REFRESH_LV;
*rt = 0;
return 0;
}
out:
return -1;
}
@@ -4601,7 +4379,6 @@ static void client_recv_action(struct client *cl)
const char *vg_name;
const char *vg_uuid;
const char *vg_sysid;
const char *path;
const char *str;
int64_t val;
uint32_t opts = 0;
@@ -4688,7 +4465,6 @@ static void client_recv_action(struct client *cl)
opts = str_to_opts(str);
str = daemon_request_str(req, "vg_lock_type", NULL);
lm = str_to_lm(str);
path = daemon_request_str(req, "path", NULL);
if (cl_pid && cl_pid != cl->pid)
log_error("client recv bad message pid %d client %d", cl_pid, cl->pid);
@@ -4721,9 +4497,6 @@ static void client_recv_action(struct client *cl)
act->flags = opts;
act->lm_type = lm;
if (path)
act->path = strdup(path);
if (vg_name && strcmp(vg_name, "none"))
strncpy(act->vg_name, vg_name, MAX_NAME);
@@ -4800,7 +4573,6 @@ static void client_recv_action(struct client *cl)
case LD_OP_STOP_ALL:
case LD_OP_RENAME_FINAL:
case LD_OP_RUNNING_LM:
case LD_OP_REFRESH_LV:
add_work_action(act);
rv = 0;
break;
@@ -4810,7 +4582,6 @@ static void client_recv_action(struct client *cl)
case LD_OP_DISABLE:
case LD_OP_FREE:
case LD_OP_RENAME_BEFORE:
case LD_OP_QUERY_LOCK:
case LD_OP_FIND_FREE_LOCK:
case LD_OP_KILL_VG:
case LD_OP_DROP_VG:
@@ -4833,7 +4604,6 @@ static void *client_thread_main(void *arg_in)
struct client *cl;
struct action *act;
struct action *act_un;
uint32_t lock_acquire_count = 0, lock_acquire_written = 0;
int rv;
while (1) {
@@ -4865,9 +4635,6 @@ static void *client_thread_main(void *arg_in)
rv = -1;
}
if (act->flags & LD_AF_LV_LOCK)
lock_acquire_count++;
/*
* The client failed after we acquired an LV lock for
* it, but before getting this reply saying it's done.
@@ -4889,11 +4656,6 @@ static void *client_thread_main(void *arg_in)
continue;
}
if (adopt_opt && (lock_acquire_count > lock_acquire_written)) {
lock_acquire_written = lock_acquire_count;
write_adopt_file();
}
/*
* Queue incoming actions for lockspace threads
*/
@@ -4967,8 +4729,6 @@ static void *client_thread_main(void *arg_in)
pthread_mutex_unlock(&client_mutex);
}
out:
if (adopt_opt && lock_acquire_written)
unlink(adopt_file);
return NULL;
}
@@ -5001,6 +4761,180 @@ static void close_client_thread(void)
log_error("pthread_join client_thread error %d", perrno);
}
/*
* Get a list of all VGs with a lockd type (sanlock|dlm).
* We'll match this list against a list of existing lockspaces that are
* found in the lock manager.
*
* For each of these VGs, also create a struct resource on ls->resources to
* represent each LV in the VG that uses a lock. For each of these LVs
* that are active, we'll attempt to adopt a lock.
*/
static int get_lockd_vgs(struct list_head *vg_lockd)
{
/* FIXME: get VGs some other way */
return -1;
#if 0
struct list_head update_vgs;
daemon_reply reply;
struct dm_config_node *cn;
struct dm_config_node *metadata;
struct dm_config_node *md_cn;
struct dm_config_node *lv_cn;
struct lockspace *ls, *safe;
struct resource *r;
const char *vg_name;
const char *vg_uuid;
const char *lv_uuid;
const char *lock_type;
const char *lock_args;
char find_str_path[PATH_MAX];
int rv = 0;
INIT_LIST_HEAD(&update_vgs);
reply = send_lvmetad("vg_list", "token = %s", "skip", NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
log_error("vg_list from lvmetad failed %d", reply.error);
rv = -EINVAL;
goto destroy;
}
if (!(cn = dm_config_find_node(reply.cft->root, "volume_groups"))) {
log_error("get_lockd_vgs no vgs");
rv = -EINVAL;
goto destroy;
}
/* create an update_vgs list of all vg uuids */
for (cn = cn->child; cn; cn = cn->sib) {
vg_uuid = cn->key;
if (!(ls = alloc_lockspace())) {
rv = -ENOMEM;
break;
}
strncpy(ls->vg_uuid, vg_uuid, 64);
list_add_tail(&ls->list, &update_vgs);
log_debug("get_lockd_vgs %s", vg_uuid);
}
destroy:
daemon_reply_destroy(reply);
if (rv < 0)
goto out;
/* get vg_name and lock_type for each vg uuid entry in update_vgs */
list_for_each_entry(ls, &update_vgs, list) {
reply = send_lvmetad("vg_lookup",
"token = %s", "skip",
"uuid = %s", ls->vg_uuid,
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
log_error("vg_lookup from lvmetad failed %d", reply.error);
rv = -EINVAL;
goto next;
}
vg_name = daemon_reply_str(reply, "name", NULL);
if (!vg_name) {
log_error("get_lockd_vgs %s no name", ls->vg_uuid);
rv = -EINVAL;
goto next;
}
strncpy(ls->vg_name, vg_name, MAX_NAME);
metadata = dm_config_find_node(reply.cft->root, "metadata");
if (!metadata) {
log_error("get_lockd_vgs %s name %s no metadata",
ls->vg_uuid, ls->vg_name);
rv = -EINVAL;
goto next;
}
lock_type = dm_config_find_str(metadata, "metadata/lock_type", NULL);
ls->lm_type = str_to_lm(lock_type);
if ((ls->lm_type != LD_LM_SANLOCK) && (ls->lm_type != LD_LM_DLM)) {
log_debug("get_lockd_vgs %s not lockd type", ls->vg_name);
continue;
}
lock_args = dm_config_find_str(metadata, "metadata/lock_args", NULL);
if (lock_args)
strncpy(ls->vg_args, lock_args, MAX_ARGS);
log_debug("get_lockd_vgs %s lock_type %s lock_args %s",
ls->vg_name, lock_type, lock_args ?: "none");
/*
* Make a record (struct resource) of each lv that uses a lock.
* For any lv that uses a lock, we'll check if the lv is active
* and if so try to adopt a lock for it.
*/
for (md_cn = metadata->child; md_cn; md_cn = md_cn->sib) {
if (strcmp(md_cn->key, "logical_volumes"))
continue;
for (lv_cn = md_cn->child; lv_cn; lv_cn = lv_cn->sib) {
snprintf(find_str_path, PATH_MAX, "%s/lock_args", lv_cn->key);
lock_args = dm_config_find_str(lv_cn, find_str_path, NULL);
if (!lock_args)
continue;
snprintf(find_str_path, PATH_MAX, "%s/id", lv_cn->key);
lv_uuid = dm_config_find_str(lv_cn, find_str_path, NULL);
if (!lv_uuid) {
log_error("get_lock_vgs no lv id for name %s", lv_cn->key);
continue;
}
if (!(r = alloc_resource())) {
rv = -ENOMEM;
goto next;
}
r->use_vb = 0;
r->type = LD_RT_LV;
strncpy(r->name, lv_uuid, MAX_NAME);
if (lock_args)
strncpy(r->lv_args, lock_args, MAX_ARGS);
list_add_tail(&r->list, &ls->resources);
log_debug("get_lockd_vgs %s lv %s %s (name %s)",
ls->vg_name, r->name, lock_args ? lock_args : "", lv_cn->key);
}
}
next:
daemon_reply_destroy(reply);
if (rv < 0)
break;
}
out:
/* Return lockd VG's on the vg_lockd list. */
list_for_each_entry_safe(ls, safe, &update_vgs, list) {
list_del(&ls->list);
if ((ls->lm_type == LD_LM_SANLOCK) || (ls->lm_type == LD_LM_DLM))
list_add_tail(&ls->list, vg_lockd);
else
free(ls);
}
return rv;
#endif
}
static char _dm_uuid[DM_UUID_LEN];
static char *get_dm_uuid(char *dm_name)
@@ -5217,9 +5151,9 @@ static void adopt_locks(void)
INIT_LIST_HEAD(&to_unlock);
/*
* Get list of lockspaces from currently running lock managers.
* Get list of shared VGs from file written by prior lvmlockd.
* Get list of active LVs (in the shared VGs) from the file.
* Get list of lockspaces from lock managers.
* Get list of VGs from lvmetad with a lockd type.
* Get list of active lockd type LVs from /dev.
*/
if (lm_support_dlm() && lm_is_running_dlm()) {
@@ -5243,17 +5177,12 @@ static void adopt_locks(void)
* Adds a struct lockspace to vg_lockd for each lockd VG.
* Adds a struct resource to ls->resources for each LV.
*/
rv = read_adopt_file(&vg_lockd);
rv = get_lockd_vgs(&vg_lockd);
if (rv < 0) {
log_error("adopt_locks read_adopt_file failed");
log_error("adopt_locks get_lockd_vgs failed");
goto fail;
}
if (list_empty(&vg_lockd)) {
log_debug("No lockspaces in adopt file");
return;
}
/*
* For each resource on each lockspace, check if the
* corresponding LV is active. If so, leave the
@@ -5492,7 +5421,7 @@ static void adopt_locks(void)
goto fail;
act->op = LD_OP_LOCK;
act->rt = LD_RT_LV;
act->mode = r->adopt_mode;
act->mode = LD_LK_EX;
act->flags = (LD_AF_ADOPT | LD_AF_PERSISTENT);
act->client_id = INTERNAL_CLIENT_ID;
act->lm_type = ls->lm_type;
@@ -5590,9 +5519,8 @@ static void adopt_locks(void)
* Adopt failed because the orphan has a different mode
* than initially requested. Repeat the lock-adopt operation
* with the other mode. N.B. this logic depends on first
* trying sh then ex for GL/VG locks; for LV locks the mode
* from the adopt file is tried first, then the alternate
* (in case the mode in the adopt file was wrong somehow).
* trying sh then ex for GL/VG locks, and ex then sh for
* LV locks.
*/
if ((act->rt != LD_RT_LV) && (act->mode == LD_LK_SH)) {
@@ -5600,12 +5528,9 @@ static void adopt_locks(void)
act->mode = LD_LK_EX;
rv = add_lock_action(act);
} else if (act->rt == LD_RT_LV) {
/* LV locks: attempt to adopt the other mode. */
if (act->mode == LD_LK_EX)
act->mode = LD_LK_SH;
else if (act->mode == LD_LK_SH)
act->mode = LD_LK_EX;
} else if ((act->rt == LD_RT_LV) && (act->mode == LD_LK_EX)) {
/* LV locks: attempt to adopt sh after ex failed. */
act->mode = LD_LK_SH;
rv = add_lock_action(act);
} else {
@@ -5740,13 +5665,10 @@ static void adopt_locks(void)
if (count_start_fail || count_adopt_fail)
goto fail;
unlink(adopt_file);
write_adopt_file();
log_debug("adopt_locks done");
return;
fail:
unlink(adopt_file);
log_error("adopt_locks failed, reset host");
}
@@ -6021,8 +5943,6 @@ static void usage(char *prog, FILE *file)
fprintf(file, " Set path to the pid file. [%s]\n", LVMLOCKD_PIDFILE);
fprintf(file, " --socket-path | -s <path>\n");
fprintf(file, " Set path to the socket to listen on. [%s]\n", LVMLOCKD_SOCKET);
fprintf(file, " --adopt-file <path>\n");
fprintf(file, " Set path to the adopt file. [%s]\n", LVMLOCKD_ADOPT_FILE);
fprintf(file, " --syslog-priority | -S err|warning|debug\n");
fprintf(file, " Write log messages from this level up to syslog. [%s]\n", _syslog_num_to_name(LOG_SYSLOG_PRIO));
fprintf(file, " --gl-type | -g <str>\n");
@@ -6040,14 +5960,14 @@ static void usage(char *prog, FILE *file)
int main(int argc, char *argv[])
{
daemon_state ds = {
.name = "lvmlockd",
.daemon_main = main_loop,
.daemon_init = NULL,
.daemon_fini = NULL,
.pidfile = getenv("LVM_LVMLOCKD_PIDFILE"),
.socket_path = getenv("LVM_LVMLOCKD_SOCKET"),
.protocol = lvmlockd_protocol,
.protocol_version = lvmlockd_protocol_version,
.daemon_init = NULL,
.daemon_fini = NULL,
.daemon_main = main_loop,
.name = "lvmlockd",
};
static struct option long_options[] = {
@@ -6058,7 +5978,6 @@ int main(int argc, char *argv[])
{"daemon-debug", no_argument, 0, 'D' },
{"pid-file", required_argument, 0, 'p' },
{"socket-path", required_argument, 0, 's' },
{"adopt-file", required_argument, 0, 128 },
{"gl-type", required_argument, 0, 'g' },
{"host-id", required_argument, 0, 'i' },
{"host-id-file", required_argument, 0, 'F' },
@@ -6081,9 +6000,6 @@ int main(int argc, char *argv[])
switch (c) {
case '0':
break;
case 128:
adopt_file = strdup(optarg);
break;
case 'h':
usage(argv[0], stdout);
exit(EXIT_SUCCESS);
@@ -6145,9 +6061,6 @@ int main(int argc, char *argv[])
if (!ds.socket_path)
ds.socket_path = LVMLOCKD_SOCKET;
if (!adopt_file)
adopt_file = LVMLOCKD_ADOPT_FILE;
/* runs daemon_main/main_loop */
daemon_start(ds);

View File

@@ -24,7 +24,6 @@
* link with non-threaded version of library, libdlm_lt.
*/
#include "libdlm.h"
#include "libdlmcontrol.h"
#include <stddef.h>
#include <poll.h>
@@ -128,18 +127,16 @@ static int read_cluster_name(char *clustername)
return 0;
}
#define MAX_VERSION 16
int lm_init_vg_dlm(char *ls_name, char *vg_name, uint32_t flags, char *vg_args)
{
char clustername[MAX_ARGS+1];
char lock_args_version[MAX_VERSION+1];
char lock_args_version[MAX_ARGS+1];
int rv;
memset(clustername, 0, sizeof(clustername));
memset(lock_args_version, 0, sizeof(lock_args_version));
snprintf(lock_args_version, MAX_VERSION, "%u.%u.%u",
snprintf(lock_args_version, MAX_ARGS, "%u.%u.%u",
VG_LOCK_ARGS_MAJOR, VG_LOCK_ARGS_MINOR, VG_LOCK_ARGS_PATCH);
rv = read_cluster_name(clustername);
@@ -151,9 +148,7 @@ int lm_init_vg_dlm(char *ls_name, char *vg_name, uint32_t flags, char *vg_args)
return -EARGS;
}
rv = snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, clustername);
if (rv >= MAX_ARGS)
log_debug("init_vg_dlm vg_args may be too long %d %s", rv, vg_args);
snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, clustername);
rv = 0;
log_debug("init_vg_dlm done %s vg_args %s", ls_name, vg_args);
@@ -398,18 +393,12 @@ static int lm_adopt_dlm(struct lockspace *ls, struct resource *r, int ld_mode,
(void *)1, (void *)1, (void *)1,
NULL, NULL);
if (rv == -1 && (errno == EAGAIN)) {
if (rv == -1 && errno == -EAGAIN) {
log_debug("S %s R %s adopt_dlm adopt mode %d try other mode",
ls->name, r->name, ld_mode);
rv = -EUCLEAN;
goto fail;
}
if (rv == -1 && (errno == ENOENT)) {
log_debug("S %s R %s adopt_dlm adopt mode %d no lock",
ls->name, r->name, ld_mode);
rv = -ENOENT;
goto fail;
}
if (rv < 0) {
log_debug("S %s R %s adopt_dlm mode %d flags %x error %d errno %d",
ls->name, r->name, mode, flags, rv, errno);
@@ -787,107 +776,3 @@ int lm_is_running_dlm(void)
return 1;
}
#ifdef LOCKDDLM_CONTROL_SUPPORT
int lm_refresh_lv_start_dlm(struct action *act)
{
char path[PATH_MAX];
char command[DLMC_RUN_COMMAND_LEN];
char run_uuid[DLMC_RUN_UUID_LEN];
char *p, *vgname, *lvname;
int rv;
/* split /dev/vgname/lvname into vgname and lvname strings */
strncpy(path, act->path, strlen(act->path));
/* skip past dev */
p = strchr(path + 1, '/');
/* skip past slashes */
while (*p == '/')
p++;
/* start of vgname */
vgname = p;
/* skip past vgname */
while (*p != '/')
p++;
/* terminate vgname */
*p = '\0';
p++;
/* skip past slashes */
while (*p == '/')
p++;
lvname = p;
memset(command, 0, sizeof(command));
memset(run_uuid, 0, sizeof(run_uuid));
/* todo: add --readonly */
snprintf(command, DLMC_RUN_COMMAND_LEN,
"lvm lvchange --refresh --partial --nolocking %s/%s",
vgname, lvname);
rv = dlmc_run_start(command, strlen(command), 0,
DLMC_FLAG_RUN_START_NODE_NONE,
run_uuid);
if (rv < 0) {
log_debug("refresh_lv run_start error %d", rv);
return rv;
}
log_debug("refresh_lv run_start %s", run_uuid);
/* Bit of a hack here, we don't need path once started,
but we do need to save the run_uuid somewhere, so just
replace the path with the uuid. */
free(act->path);
act->path = strdup(run_uuid);
return 0;
}
int lm_refresh_lv_check_dlm(struct action *act)
{
uint32_t check_status = 0;
int rv;
/* NB act->path was replaced with run_uuid */
rv = dlmc_run_check(act->path, strlen(act->path), 0,
DLMC_FLAG_RUN_CHECK_CLEAR,
&check_status);
if (rv < 0) {
log_debug("refresh_lv check error %d", rv);
return rv;
}
log_debug("refresh_lv check %s status %x", act->path, check_status);
if (!(check_status & DLMC_RUN_STATUS_DONE))
return -EAGAIN;
if (check_status & DLMC_RUN_STATUS_FAILED)
return -1;
return 0;
}
#else /* LOCKDDLM_CONTROL_SUPPORT */
int lm_refresh_lv_start_dlm(struct action *act)
{
return 0;
}
int lm_refresh_lv_check_dlm(struct action *act)
{
return 0;
}
#endif /* LOCKDDLM_CONTROL_SUPPORT */

View File

@@ -11,8 +11,6 @@
#ifndef _LVM_LVMLOCKD_INTERNAL_H
#define _LVM_LVMLOCKD_INTERNAL_H
#include "base/memory/container_of.h"
#define MAX_NAME 64
#define MAX_ARGS 64
@@ -55,8 +53,6 @@ enum {
LD_OP_KILL_VG,
LD_OP_DROP_VG,
LD_OP_BUSY,
LD_OP_QUERY_LOCK,
LD_OP_REFRESH_LV,
};
/* resource types */
@@ -109,7 +105,6 @@ struct client {
#define LD_AF_WARN_GL_REMOVED 0x00020000
#define LD_AF_LV_LOCK 0x00040000
#define LD_AF_LV_UNLOCK 0x00080000
#define LD_AF_SH_EXISTS 0x00100000
/*
* Number of times to repeat a lock request after
@@ -132,7 +127,6 @@ struct action {
int max_retries;
int result;
int lm_rv; /* return value from lm_ function */
char *path;
char vg_uuid[64];
char vg_name[MAX_NAME+1];
char lv_name[MAX_NAME+1];
@@ -147,7 +141,6 @@ struct resource {
char name[MAX_NAME+1]; /* vg name or lv name */
int8_t type; /* resource type LD_RT_ */
int8_t mode;
int8_t adopt_mode;
unsigned int sh_count; /* number of sh locks on locks list */
uint32_t version;
uint32_t last_client_id; /* last client_id to lock or unlock resource */
@@ -158,7 +151,7 @@ struct resource {
struct list_head locks;
struct list_head actions;
char lv_args[MAX_ARGS+1];
char lm_data[]; /* lock manager specific data */
char lm_data[0]; /* lock manager specific data */
};
#define LD_LF_PERSISTENT 0x00000001
@@ -219,6 +212,10 @@ struct val_blk {
/* lm_unlock flags */
#define LMUF_FREE_VG 0x00000001
#define container_of(ptr, type, member) ({ \
const typeof( ((type *)0)->member ) *__mptr = (ptr); \
(type *)( (char *)__mptr - offsetof(type,member) );})
static inline void INIT_LIST_HEAD(struct list_head *list)
{
list->next = list;
@@ -392,8 +389,6 @@ int lm_get_lockspaces_dlm(struct list_head *ls_rejoin);
int lm_data_size_dlm(void);
int lm_is_running_dlm(void);
int lm_hosts_dlm(struct lockspace *ls, int notify);
int lm_refresh_lv_start_dlm(struct action *act);
int lm_refresh_lv_check_dlm(struct action *act);
static inline int lm_support_dlm(void)
{
@@ -470,16 +465,6 @@ static inline int lm_hosts_dlm(struct lockspace *ls, int notify)
return 0;
}
static inline int lm_refresh_lv_start_dlm(struct action *act)
{
return 0;
}
static inline int lm_refresh_lv_check_dlm(struct action *act)
{
return 0;
}
#endif /* dlm support */
#ifdef LOCKDSANLOCK_SUPPORT

View File

@@ -500,15 +500,13 @@ static int get_sizes_lockspace(char *path, int *sector_size, int *align_size)
* version and lv name, and returns the real lock_args in vg_args.
*/
#define MAX_VERSION 16
int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_args)
{
struct sanlk_lockspace ss;
struct sanlk_resourced rd;
struct sanlk_disk disk;
char lock_lv_name[MAX_ARGS+1];
char lock_args_version[MAX_VERSION+1];
char lock_args_version[MAX_ARGS+1];
const char *gl_name = NULL;
uint32_t daemon_version;
uint32_t daemon_proto;
@@ -528,7 +526,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
return -EARGS;
}
snprintf(lock_args_version, MAX_VERSION, "%u.%u.%u",
snprintf(lock_args_version, MAX_ARGS, "%u.%u.%u",
VG_LOCK_ARGS_MAJOR, VG_LOCK_ARGS_MINOR, VG_LOCK_ARGS_PATCH);
/* see comment above about input vg_args being only lock_lv_name */
@@ -545,9 +543,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if (daemon_test) {
if (!gl_lsname_sanlock[0])
strncpy(gl_lsname_sanlock, ls_name, MAX_NAME);
rv = snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, lock_lv_name);
if (rv >= MAX_ARGS)
log_debug("init_vg_san vg_args may be too long %d %s", rv, vg_args);
snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, lock_lv_name);
return 0;
}
@@ -639,9 +635,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if (!strcmp(gl_name, R_NAME_GL))
strncpy(gl_lsname_sanlock, ls_name, MAX_NAME);
rv = snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, lock_lv_name);
if (rv >= MAX_ARGS)
log_debug("init_vg_san vg_args may be too long %d %s", rv, vg_args);
snprintf(vg_args, MAX_ARGS, "%s:%s", lock_args_version, lock_lv_name);
log_debug("S %s init_vg_san done vg_args %s", ls_name, vg_args);
@@ -698,7 +692,7 @@ int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
{
struct sanlk_resourced rd;
char lock_lv_name[MAX_ARGS+1];
char lock_args_version[MAX_VERSION+1];
char lock_args_version[MAX_ARGS+1];
uint64_t offset;
int rv;
@@ -713,7 +707,7 @@ int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
return rv;
}
snprintf(lock_args_version, MAX_VERSION, "%u.%u.%u",
snprintf(lock_args_version, MAX_ARGS, "%u.%u.%u",
LV_LOCK_ARGS_MAJOR, LV_LOCK_ARGS_MINOR, LV_LOCK_ARGS_PATCH);
if (daemon_test) {

View File

@@ -92,12 +92,6 @@ const char **cmdargv_ctr(const struct lvmpolld_lv *pdlv, const char *lvm_binary,
if (!add_to_cmd_arr(&cmd_argv, "-An", &i))
goto err;
if (pdlv->devicesfile) {
if (!add_to_cmd_arr(&cmd_argv, "--devicesfile", &i) ||
!add_to_cmd_arr(&cmd_argv, pdlv->devicesfile, &i))
goto err;
}
/* terminating NULL */
if (!add_to_cmd_arr(&cmd_argv, NULL, &i))
goto err;

View File

@@ -555,15 +555,14 @@ static struct lvmpolld_lv *construct_pdlv(request req, struct lvmpolld_state *ls
const char *interval, const char *id,
const char *vgname, const char *lvname,
const char *sysdir, enum poll_type type,
unsigned abort_polling, unsigned uinterval,
const char *devicesfile)
unsigned abort_polling, unsigned uinterval)
{
const char **cmdargv, **cmdenvp;
struct lvmpolld_lv *pdlv;
unsigned handle_missing_pvs = daemon_request_int(req, LVMPD_PARM_HANDLE_MISSING_PVS, 0);
pdlv = pdlv_create(ls, id, vgname, lvname, sysdir, type,
interval, uinterval, pdst, devicesfile);
interval, uinterval, pdst);
if (!pdlv) {
ERROR(ls, "%s: %s", PD_LOG_PREFIX, "failed to create internal LV data structure.");
@@ -622,7 +621,6 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
const char *lvname = daemon_request_str(req, LVMPD_PARM_LVNAME, NULL);
const char *vgname = daemon_request_str(req, LVMPD_PARM_VGNAME, NULL);
const char *sysdir = daemon_request_str(req, LVMPD_PARM_SYSDIR, NULL);
const char *devicesfile = daemon_request_str(req, LVMPD_PARM_DEVICESFILE, NULL);
unsigned abort_polling = daemon_request_int(req, LVMPD_PARM_ABORT, 0);
assert(type < POLL_TYPE_MAX);
@@ -682,7 +680,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
pdlv->init_rq_count++; /* safe. protected by store lock */
} else {
pdlv = construct_pdlv(req, ls, pdst, interval, id, vgname,
lvname, sysdir, type, abort_polling, 2 * uinterval, devicesfile);
lvname, sysdir, type, abort_polling, 2 * uinterval);
if (!pdlv) {
pdst_unlock(pdst);
free(id);
@@ -917,7 +915,7 @@ int main(int argc, char *argv[])
int option_index = 0;
int client = 0, server = 0;
unsigned action = ACTION_MAX;
struct timespec timeout;
struct timeval timeout;
daemon_idle di = { .ptimeout = &timeout };
struct lvmpolld_state ls = { .log_config = "" };
daemon_state s = {

View File

@@ -93,13 +93,11 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
const char *vgname, const char *lvname,
const char *sysdir, enum poll_type type,
const char *sinterval, unsigned pdtimeout,
struct lvmpolld_store *pdst,
const char *devicesfile)
struct lvmpolld_store *pdst)
{
char *lvmpolld_id = strdup(id), /* copy */
*full_lvname = _construct_full_lvname(vgname, lvname), /* copy */
*lvm_system_dir_env = _construct_lvm_system_dir_env(sysdir); /* copy */
char *devicesfile_dup = devicesfile ? strdup(devicesfile) : NULL;
struct lvmpolld_lv tmp = {
.ls = ls,
@@ -107,7 +105,6 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
.lvmpolld_id = lvmpolld_id,
.lvid = _get_lvid(lvmpolld_id, sysdir),
.lvname = full_lvname,
.devicesfile = devicesfile_dup,
.lvm_system_dir_env = lvm_system_dir_env,
.sinterval = strdup(sinterval), /* copy */
.pdtimeout = pdtimeout < MIN_POLLING_TIMEOUT ? MIN_POLLING_TIMEOUT : pdtimeout,
@@ -127,7 +124,6 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
return pdlv;
err:
free((void *)devicesfile_dup);
free((void *)full_lvname);
free((void *)lvmpolld_id);
free((void *)lvm_system_dir_env);
@@ -140,7 +136,6 @@ err:
void pdlv_destroy(struct lvmpolld_lv *pdlv)
{
free((void *)pdlv->lvmpolld_id);
free((void *)pdlv->devicesfile);
free((void *)pdlv->lvname);
free((void *)pdlv->sinterval);
free((void *)pdlv->lvm_system_dir_env);

View File

@@ -49,7 +49,6 @@ struct lvmpolld_lv {
const enum poll_type type;
const char *const lvid;
const char *const lvmpolld_id;
const char *const devicesfile;
const char *const lvname; /* full vg/lv name */
const unsigned pdtimeout; /* in seconds */
const char *const sinterval;
@@ -102,8 +101,7 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
const char *vgname, const char *lvname,
const char *sysdir, enum poll_type type,
const char *sinterval, unsigned pdtimeout,
struct lvmpolld_store *pdst,
const char *devicesfile);
struct lvmpolld_store *pdst);
/* only call with appropriate struct lvmpolld_store lock held */
void pdlv_destroy(struct lvmpolld_lv *pdlv);

View File

@@ -35,7 +35,6 @@
#define LVMPD_PARM_SYSDIR "sysdir"
#define LVMPD_PARM_VALUE "value" /* either retcode or signal value */
#define LVMPD_PARM_VGNAME "vgname"
#define LVMPD_PARM_DEVICESFILE "devicesfile"
#define LVMPD_RESP_FAILED "failed"
#define LVMPD_RESP_FINISHED "finished"

View File

@@ -121,9 +121,7 @@ enum {
DM_DEVICE_SET_GEOMETRY,
DM_DEVICE_ARM_POLL,
DM_DEVICE_GET_TARGET_VERSION
DM_DEVICE_ARM_POLL
};
/*
@@ -164,20 +162,20 @@ struct dm_info {
struct dm_deps {
uint32_t count;
uint32_t filler;
uint64_t device[];
uint64_t device[0];
};
struct dm_names {
uint64_t dev;
uint32_t next; /* Offset to next struct from start of this struct */
char name[];
char name[0];
};
struct dm_versions {
uint32_t next; /* Offset to next struct from start of this struct */
uint32_t version[3];
char name[];
char name[0];
};
int dm_get_library_version(char *version, size_t size);
@@ -234,7 +232,6 @@ int dm_task_suppress_identical_reload(struct dm_task *dmt);
int dm_task_secure_data(struct dm_task *dmt);
int dm_task_retry_remove(struct dm_task *dmt);
int dm_task_deferred_remove(struct dm_task *dmt);
void dm_task_skip_reload_params_compare(struct dm_task *dmt);
/*
* Record timestamp immediately after the ioctl returns.
@@ -384,7 +381,7 @@ int dm_get_status_cache(struct dm_pool *mem, const char *params,
struct dm_status_cache **status);
struct dm_status_writecache {
uint64_t error;
uint32_t error;
uint64_t total_blocks;
uint64_t free_blocks;
uint64_t writeback_blocks;
@@ -393,15 +390,6 @@ struct dm_status_writecache {
int dm_get_status_writecache(struct dm_pool *mem, const char *params,
struct dm_status_writecache **status);
struct dm_status_integrity {
uint64_t number_of_mismatches;
uint64_t provided_data_sectors;
uint64_t recalc_sector;
};
int dm_get_status_integrity(struct dm_pool *mem, const char *params,
struct dm_status_integrity **status);
/*
* Parse params from STATUS call for snapshot target
*
@@ -915,7 +903,6 @@ int dm_tree_node_add_raid_target_with_params_v2(struct dm_tree_node *node,
#define DM_CACHE_FEATURE_WRITETHROUGH 0x00000002
#define DM_CACHE_FEATURE_PASSTHROUGH 0x00000004
#define DM_CACHE_FEATURE_METADATA2 0x00000008 /* cache v1.10 */
#define DM_CACHE_FEATURE_NO_DISCARD_PASSDOWN 0x00000010
struct dm_config_node;
/*
@@ -951,8 +938,6 @@ struct writecache_settings {
uint64_t autocommit_time; /* in milliseconds */
uint32_t fua;
uint32_t nofua;
uint32_t cleaner;
uint32_t max_age;
/*
* Allow an unrecognized key and its val to be passed to the kernel for
@@ -972,8 +957,6 @@ struct writecache_settings {
unsigned autocommit_time_set:1;
unsigned fua_set:1;
unsigned nofua_set:1;
unsigned cleaner_set:1;
unsigned max_age_set:1;
};
int dm_tree_node_add_writecache_target(struct dm_tree_node *node,
@@ -984,42 +967,12 @@ int dm_tree_node_add_writecache_target(struct dm_tree_node *node,
uint32_t writecache_block_size,
struct writecache_settings *settings);
struct integrity_settings {
char mode[8];
uint32_t tag_size;
uint32_t block_size; /* optional table param always set by lvm */
const char *internal_hash; /* optional table param always set by lvm */
uint32_t journal_sectors;
uint32_t interleave_sectors;
uint32_t buffer_sectors;
uint32_t journal_watermark;
uint32_t commit_time;
uint32_t bitmap_flush_interval;
uint64_t sectors_per_bit;
unsigned journal_sectors_set:1;
unsigned interleave_sectors_set:1;
unsigned buffer_sectors_set:1;
unsigned journal_watermark_set:1;
unsigned commit_time_set:1;
unsigned bitmap_flush_interval_set:1;
unsigned sectors_per_bit_set:1;
};
int dm_tree_node_add_integrity_target(struct dm_tree_node *node,
uint64_t size,
const char *origin_uuid,
const char *meta_uuid,
struct integrity_settings *settings,
int recalculate);
/*
* VDO target
*/
int dm_tree_node_add_vdo_target(struct dm_tree_node *node,
uint64_t size,
const char *vdo_pool_name,
const char *data_uuid,
uint64_t data_size,
const struct dm_vdo_target_params *param);
@@ -1072,10 +1025,10 @@ int dm_tree_node_add_replicator_dev_target(struct dm_tree_node *node,
#define DM_THIN_MIN_DATA_BLOCK_SIZE (UINT32_C(128))
#define DM_THIN_MAX_DATA_BLOCK_SIZE (UINT32_C(2097152))
/*
* Max supported size for thin pool metadata device (17045913600 bytes)
* Max supported size for thin pool metadata device (17112760320 bytes)
* Limitation is hardcoded into the kernel and bigger device size
* is not accepted.
* drivers/md/dm-thin-metadata.h THIN_METADATA_MAX_SECTORS
* But here DM_THIN_MAX_METADATA_SIZE got defined incorrectly
* Correct size is (UINT64_C(255) * ((1 << 14) - 64) * (4096 / (1 << 9)))
*/
#define DM_THIN_MAX_METADATA_SIZE (UINT64_C(255) * (1 << 14) * (4096 / (1 << 9)) - 256 * 1024)
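As a side note (not part of the diff), the byte figures quoted in the two versions of the comment above can be checked with a few lines of C; the figures below assume 512-byte sectors:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Kernel limit from drivers/md/dm-thin-metadata.h (THIN_METADATA_MAX_SECTORS). */
	uint64_t kernel_sectors = UINT64_C(255) * ((1 << 14) - 64) * (4096 / (1 << 9));
	/* Value of DM_THIN_MAX_METADATA_SIZE as defined above. */
	uint64_t macro_sectors = UINT64_C(255) * (1 << 14) * (4096 / (1 << 9)) - 256 * 1024;

	printf("kernel: %llu sectors = %llu bytes\n",	/* 33292800 = 17045913600 */
	       (unsigned long long)kernel_sectors,
	       (unsigned long long)(kernel_sectors * 512));
	printf("macro:  %llu sectors = %llu bytes\n",	/* 33161216 = 16978542592 */
	       (unsigned long long)macro_sectors,
	       (unsigned long long)(macro_sectors * 512));
	return 0;
}

The older comment's 17112760320 bytes corresponds to 255 * (1 << 14) * 8 = 33423360 sectors, the same formula with neither deduction applied; the macro as defined still evaluates below the kernel limit, even though its formula differs from the kernel's.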
@@ -1088,16 +1041,6 @@ int dm_tree_node_add_thin_pool_target(struct dm_tree_node *node,
uint64_t low_water_mark,
unsigned skip_block_zeroing);
int dm_tree_node_add_thin_pool_target_v1(struct dm_tree_node *node,
uint64_t size,
uint64_t transaction_id,
const char *metadata_uuid,
const char *pool_uuid,
uint32_t data_block_size,
uint64_t low_water_mark,
unsigned skip_block_zeroing,
unsigned crop_metadata);
/* Supported messages for thin provision target */
typedef enum {
DM_THIN_MESSAGE_CREATE_SNAP, /* device_id, origin_id */
@@ -1328,7 +1271,7 @@ int dm_bit_get_next(dm_bitset_t bs, int last_bit);
int dm_bit_get_last(dm_bitset_t bs);
int dm_bit_get_prev(dm_bitset_t bs, int last_bit);
#define DM_BITS_PER_INT ((unsigned)sizeof(int) * CHAR_BIT)
#define DM_BITS_PER_INT (sizeof(int) * CHAR_BIT)
#define dm_bit(bs, i) \
((bs)[((i) / DM_BITS_PER_INT) + 1] & (0x1 << ((i) & (DM_BITS_PER_INT - 1))))
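The "+ 1" in the index comes from the usual libdevmapper bitset layout, where the first array element stores the bit count and the data words start at element 1. A standalone sketch of the index math (not part of the diff, hypothetical bit index):

#include <stdio.h>
#include <limits.h>

/* Mirror of the macro's index arithmetic. */
#define EXAMPLE_BITS_PER_INT ((unsigned)sizeof(int) * CHAR_BIT)

int main(void)
{
	unsigned i = 70;	/* hypothetical bit index */

	/* Element 0 holds the bit count, so bit data begins at element 1. */
	printf("bit %u lives in array element %u, bit position %u\n",
	       i, i / EXAMPLE_BITS_PER_INT + 1, i & (EXAMPLE_BITS_PER_INT - 1));
	return 0;
}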

View File

@@ -119,9 +119,6 @@ static struct cmd_data _cmd_data_v4[] = {
#ifdef DM_DEV_ARM_POLL
{"armpoll", DM_DEV_ARM_POLL, {4, 36, 0}},
#endif
#ifdef DM_GET_TARGET_VERSION
{"target-version", DM_GET_TARGET_VERSION, {4, 41, 0}},
#endif
};
/* *INDENT-ON* */
@@ -205,7 +202,7 @@ static int _get_proc_number(const char *file, const char *name,
}
while (getline(&line, &len, fl) != -1) {
if (sscanf(line, "%u %255s\n", &num, &nm[0]) == 2) {
if (sscanf(line, "%d %255s\n", &num, &nm[0]) == 2) {
if (!strcmp(name, nm)) {
if (number) {
*number = num;
@@ -805,11 +802,6 @@ int dm_task_suppress_identical_reload(struct dm_task *dmt)
return 1;
}
void dm_task_skip_reload_params_compare(struct dm_task *dmt)
{
dmt->skip_reload_params_compare = 1;
}
int dm_task_set_add_node(struct dm_task *dmt, dm_add_node_t add_node)
{
switch (add_node) {
@@ -1580,36 +1572,11 @@ static int _reload_with_suppression_v4(struct dm_task *dmt)
len = strlen(t2->params);
while (len-- > 0 && t2->params[len] == ' ')
t2->params[len] = '\0';
if (t1->start != t2->start) {
log_debug("reload %u:%u diff start %llu %llu type %s %s", task->major, task->minor,
(unsigned long long)t1->start, (unsigned long long)t2->start, t1->type, t2->type);
if ((t1->start != t2->start) ||
(t1->length != t2->length) ||
(strcmp(t1->type, t2->type)) ||
(strcmp(t1->params, t2->params)))
goto no_match;
}
if (t1->length != t2->length) {
log_debug("reload %u:%u diff length %llu %llu type %s %s", task->major, task->minor,
(unsigned long long)t1->length, (unsigned long long)t2->length, t1->type, t2->type);
goto no_match;
}
if (strcmp(t1->type, t2->type)) {
log_debug("reload %u:%u diff type %s %s", task->major, task->minor, t1->type, t2->type);
goto no_match;
}
if (strcmp(t1->params, t2->params)) {
if (dmt->skip_reload_params_compare) {
log_debug("reload %u:%u diff params ignore for type %s",
task->major, task->minor, t1->type);
log_debug("reload params1 %s", t1->params);
log_debug("reload params2 %s", t2->params);
} else {
log_debug("reload %u:%u diff params for type %s",
task->major, task->minor, t1->type);
log_debug("reload params1 %s", t1->params);
log_debug("reload params2 %s", t2->params);
goto no_match;
}
}
t1 = t1->next;
t2 = t2->next;
}

View File

@@ -59,7 +59,6 @@ struct dm_task {
int skip_lockfs;
int query_inactive_table;
int suppress_identical_reload;
int skip_reload_params_compare;
dm_add_node_t add_node;
uint64_t existing_table_size;
int cookie_set;

View File

@@ -512,7 +512,7 @@ int unmangle_string(const char *str, const char *str_name, size_t len,
int strict = mode != DM_STRING_MANGLING_NONE;
char str_rest[DM_NAME_LEN];
size_t i, j;
unsigned int code;
int code;
int r = 0;
if (!str || !buf)
@@ -1445,7 +1445,7 @@ struct node_op_parms {
char *old_name;
int warn_if_udev_failed;
unsigned rely_on_udev;
char names[];
char names[0];
};
static void _store_str(char **pos, char **ptr, const char *str)
@@ -1874,120 +1874,6 @@ bad:
return r;
}
static int _sysfs_get_dev_major_minor(const char *path, uint32_t major, uint32_t minor)
{
FILE *fp;
uint32_t ma, mi;
int r;
if (!(fp = fopen(path, "r")))
return 0;
r = (fscanf(fp, "%" PRIu32 ":%" PRIu32 , &ma, &mi) == 2) &&
(ma == major) && (mi == minor);
// log_debug("Checking %s %u:%u -> %d", path, ma, mi, r);
if (fclose(fp))
log_sys_error("fclose", path);
return r;
}
static int _sysfs_find_kernel_name(uint32_t major, uint32_t minor, char *buf, size_t buf_size)
{
const char *name, *name_dev;
char path[PATH_MAX];
struct dirent *dirent, *dirent_dev;
DIR *d, *d_dev;
struct stat st;
int r = 0, sz;
if (!*_sysfs_dir ||
dm_snprintf(path, sizeof(path), "%s/block/", _sysfs_dir) < 0) {
log_error("Failed to build sysfs_path.");
return 0;
}
if (!(d = opendir(path))) {
log_sys_error("opendir", path);
return 0;
}
while (!r && (dirent = readdir(d))) {
name = dirent->d_name;
if (!strcmp(name, ".") || !strcmp(name, ".."))
continue;
if ((sz = dm_snprintf(path, sizeof(path), "%sblock/%s/dev",
_sysfs_dir, name)) == -1) {
log_warn("Couldn't create path for %s.", name);
continue;
}
if (_sysfs_get_dev_major_minor(path, major, minor)) {
r = dm_strncpy(buf, name, buf_size);
break; /* found */
}
path[sz - 4] = 0; /* strip /dev from end of path string */
if (stat(path, &st))
continue;
if (S_ISDIR(st.st_mode)) {
/* let's assume there is no tree-complex device in past systems */
if (!(d_dev = opendir(path))) {
log_sys_debug("opendir", path);
continue;
}
while ((dirent_dev = readdir(d_dev))) {
name_dev = dirent_dev->d_name;
/* skip known ignorable paths */
if (!strcmp(name_dev, ".") || !strcmp(name_dev, "..") ||
!strcmp(name_dev, "bdi") ||
!strcmp(name_dev, "dev") ||
!strcmp(name_dev, "device") ||
!strcmp(name_dev, "holders") ||
!strcmp(name_dev, "integrity") ||
!strcmp(name_dev, "loop") ||
!strcmp(name_dev, "queueu") ||
!strcmp(name_dev, "md") ||
!strcmp(name_dev, "mq") ||
!strcmp(name_dev, "power") ||
!strcmp(name_dev, "removable") ||
!strcmp(name_dev, "slave") ||
!strcmp(name_dev, "slaves") ||
!strcmp(name_dev, "subsystem") ||
!strcmp(name_dev, "trace") ||
!strcmp(name_dev, "uevent"))
continue;
if (dm_snprintf(path, sizeof(path), "%sblock/%s/%s/dev",
_sysfs_dir, name, name_dev) == -1) {
log_warn("Couldn't create path for %s/%s.", name, name_dev);
continue;
}
if (_sysfs_get_dev_major_minor(path, major, minor)) {
r = dm_strncpy(buf, name_dev, buf_size);
break; /* found */
}
}
if (closedir(d_dev))
log_sys_debug("closedir", name);
}
}
if (closedir(d))
log_sys_debug("closedir", path);
return r;
}
static int _sysfs_get_kernel_name(uint32_t major, uint32_t minor, char *buf, size_t buf_size)
{
char *name, *sysfs_path, *temp_buf = NULL;
@@ -2010,11 +1896,8 @@ static int _sysfs_get_kernel_name(uint32_t major, uint32_t minor, char *buf, siz
if ((size = readlink(sysfs_path, temp_buf, PATH_MAX - 1)) < 0) {
if (errno != ENOENT)
log_sys_error("readlink", sysfs_path);
else {
else
log_sys_debug("readlink", sysfs_path);
r = _sysfs_find_kernel_name(major, minor, buf, buf_size);
goto out;
}
goto bad;
}
temp_buf[size] = '\0';
@@ -2034,7 +1917,6 @@ static int _sysfs_get_kernel_name(uint32_t major, uint32_t minor, char *buf, siz
strcpy(buf, name);
r = 1;
bad:
out:
free(temp_buf);
free(sysfs_path);

View File

@@ -51,8 +51,6 @@ struct parser {
struct dm_pool *mem;
int no_dup_node_check; /* whether to disable dup node checking */
const char *key; /* last obtained key */
unsigned ignored_creation_time;
};
struct config_output {
@@ -178,7 +176,7 @@ static int _do_dm_config_parse(struct dm_config_tree *cft, const char *start, co
/* TODO? if (start == end) return 1; */
struct parser *p;
if (!(p = dm_pool_zalloc(cft->mem, sizeof(*p))))
if (!(p = dm_pool_alloc(cft->mem, sizeof(*p))))
return_0;
p->mem = cft->mem;
@@ -617,7 +615,6 @@ static struct dm_config_node *_section(struct parser *p, struct dm_config_node *
match(TOK_SECTION_E);
} else {
match(TOK_EQ);
p->key = root->key;
if (!(value = _value(p)))
return_NULL;
if (root->v)
@@ -685,17 +682,8 @@ static struct dm_config_value *_type(struct parser *p)
errno = 0;
v->v.i = strtoll(p->tb, NULL, 0); /* FIXME: check error */
if (errno) {
if (errno == ERANGE && p->key &&
strcmp("creation_time", p->key) == 0) {
/* Due to a bug in some older 32bit builds (<2.02.169),
* lvm was able to produce an invalid creation_time string */
v->v.i = 1527120000; /* Pick 2018-05-24 day instead */
if (!p->ignored_creation_time++)
log_warn("WARNING: Invalid creation_time found in metadata (repaired with next metadata update).");
} else {
log_error("Failed to read int token.");
return NULL;
}
log_error("Failed to read int token.");
return NULL;
}
match(TOK_INT);
break;

View File

@@ -38,7 +38,6 @@ enum {
SEG_STRIPED,
SEG_ZERO,
SEG_WRITECACHE,
SEG_INTEGRITY,
SEG_THIN_POOL,
SEG_THIN,
SEG_VDO,
@@ -79,7 +78,6 @@ static const struct {
{ SEG_STRIPED, "striped" },
{ SEG_ZERO, "zero"},
{ SEG_WRITECACHE, "writecache"},
{ SEG_INTEGRITY, "integrity"},
{ SEG_THIN_POOL, "thin-pool"},
{ SEG_THIN, "thin"},
{ SEG_VDO, "vdo" },
@@ -223,11 +221,6 @@ struct load_segment {
int writecache_pmem; /* writecache, 1 if pmem, 0 if ssd */
uint32_t writecache_block_size; /* writecache, in bytes */
struct writecache_settings writecache_settings; /* writecache */
uint64_t integrity_data_sectors; /* integrity (provided_data_sectors) */
struct dm_tree_node *integrity_meta_node; /* integrity */
struct integrity_settings integrity_settings; /* integrity */
int integrity_recalculate; /* integrity */
};
/* Per-device properties */
@@ -274,16 +267,6 @@ struct load_properties {
*/
unsigned delay_resume_if_extended;
/*
* When comparing table lines to decide if a reload is
* needed, ignore any differences between the lvm device
* params and the kernel-reported device params.
* dm-integrity reports many internal parameters on the
* table line when lvm does not explicitly set them,
* causing lvm and the kernel to have differing params.
*/
unsigned skip_reload_params_compare;
/*
* Call node_send_messages(), set to 2 if there are messages
* When != 0, it validates matching transaction id, thus thin-pools
@@ -1589,37 +1572,8 @@ static int _thin_pool_node_message(struct dm_tree_node *dnode, struct thin_messa
}
if (!_node_message(dnode->info.major, dnode->info.minor,
tm->expected_errno, buf)) {
switch (m->type) {
case DM_THIN_MESSAGE_CREATE_SNAP:
case DM_THIN_MESSAGE_CREATE_THIN:
if (errno == EEXIST) {
/*
* ATM errno from ioctl() is preserved through code error path chain
* If this ever changes, another way needs to be used to
* obtain result from failed DM message
*/
log_error("Thin pool %s already contain thin device with device_id %u.",
_node_name(dnode), m->u.m_create_snap.device_id);
/*
* TODO:
*
* Give some useful advice how to solve this problem,
* until lvconvert --repair can handle this automatically
*/
log_error("Manual intervention may be required to remove device dev_id=%u in thin pool metadata.",
m->u.m_create_snap.device_id);
log_error("Optionally new thin volume with device_id=%u can be manually added into a volume group.",
m->u.m_create_snap.device_id);
log_warn("WARNING: When uncertain how to do this, contact support!");
return 0;
}
/* fall through */
default:
return_0;
}
}
tm->expected_errno, buf))
return_0;
return 1;
}
@@ -1666,15 +1620,6 @@ static int _thin_pool_node_send_messages(struct dm_tree_node *dnode,
if (!have_messages || !send)
return 1; /* transaction_id is matching */
if (stp.fail || stp.read_only || stp.needs_check) {
log_error("Cannot send messages to thin pool %s%s%s%s.",
_node_name(dnode),
stp.fail ? " in failed state" : "",
stp.read_only ? " with read only metadata" : "",
stp.needs_check ? " which needs check first" : "");
return 0;
}
dm_list_iterate_items(tmsg, &seg->thin_messages) {
if (!(_thin_pool_node_message(dnode, tmsg)))
return_0;
@@ -2136,7 +2081,7 @@ int dm_tree_activate_children(struct dm_tree_node *dnode,
return r;
}
static int _create_node(struct dm_tree_node *dnode, struct dm_tree_node *parent)
static int _create_node(struct dm_tree_node *dnode)
{
int r = 0;
struct dm_task *dmt;
@@ -2185,15 +2130,38 @@ static int _create_node(struct dm_tree_node *dnode, struct dm_tree_node *parent)
"Unable to get DM task info for %s.",
dnode->name);
}
if (r)
dm_list_add_h(&parent->activated, &dnode->activated_list);
out:
dm_task_destroy(dmt);
return r;
}
/*
* _remove_node
*
* This function is only used to remove a DM device that has failed
* to load any table.
*/
static int _remove_node(struct dm_tree_node *dnode)
{
if (!dnode->info.exists)
return 1;
if (dnode->info.live_table || dnode->info.inactive_table) {
log_error(INTERNAL_ERROR
"_remove_node called on device with loaded table(s).");
return 0;
}
if (!_deactivate_node(dnode->name, dnode->info.major, dnode->info.minor,
&dnode->dtree->cookie, dnode->udev_flags, 0)) {
log_error("Failed to clean-up device with no table: %s.",
_node_name(dnode));
return 0;
}
return 1;
}
static int _build_dev_string(char *devbuf, size_t bufsize, struct dm_tree_node *node)
{
if (!dm_format_dev(devbuf, bufsize, node->info.major, node->info.minor)) {
@@ -2382,7 +2350,7 @@ static int _mirror_emit_segment_line(struct dm_task *dmt, struct load_segment *s
EMIT_PARAMS(pos, " %u ", seg->mirror_area_count);
if (!_emit_areas_line(dmt, seg, params, paramsize, &pos))
if (_emit_areas_line(dmt, seg, params, paramsize, &pos) <= 0)
return_0;
if (handle_errors)
@@ -2584,7 +2552,7 @@ static int _raid_emit_segment_line(struct dm_task *dmt, uint32_t major,
/* Print number of metadata/data device pairs */
EMIT_PARAMS(pos, " %u", area_count);
if (!_emit_areas_line(dmt, seg, params, paramsize, &pos))
if (_emit_areas_line(dmt, seg, params, paramsize, &pos) <= 0)
return_0;
return 1;
@@ -2685,10 +2653,6 @@ static int _writecache_emit_segment_line(struct dm_task *dmt,
count += 1;
if (seg->writecache_settings.nofua_set)
count += 1;
if (seg->writecache_settings.cleaner_set && seg->writecache_settings.cleaner)
count += 1;
if (seg->writecache_settings.max_age_set)
count += 2;
if (seg->writecache_settings.new_key)
count += 2;
@@ -2732,14 +2696,6 @@ static int _writecache_emit_segment_line(struct dm_task *dmt,
EMIT_PARAMS(pos, " nofua");
}
if (seg->writecache_settings.cleaner_set && seg->writecache_settings.cleaner) {
EMIT_PARAMS(pos, " cleaner");
}
if (seg->writecache_settings.max_age_set) {
EMIT_PARAMS(pos, " max_age %u", seg->writecache_settings.max_age);
}
if (seg->writecache_settings.new_key) {
EMIT_PARAMS(pos, " %s %s",
seg->writecache_settings.new_key,
@@ -2749,84 +2705,6 @@ static int _writecache_emit_segment_line(struct dm_task *dmt,
return 1;
}
static int _integrity_emit_segment_line(struct dm_task *dmt,
struct load_segment *seg,
char *params, size_t paramsize)
{
struct integrity_settings *set = &seg->integrity_settings;
int pos = 0;
int count;
char origin_dev[DM_FORMAT_DEV_BUFSIZE];
char meta_dev[DM_FORMAT_DEV_BUFSIZE];
if (!_build_dev_string(origin_dev, sizeof(origin_dev), seg->origin))
return_0;
if (seg->integrity_meta_node &&
!_build_dev_string(meta_dev, sizeof(meta_dev), seg->integrity_meta_node))
return_0;
count = 3; /* block_size, internal_hash, fix_padding options are always passed */
if (seg->integrity_meta_node)
count++;
if (seg->integrity_recalculate)
count++;
if (set->journal_sectors_set)
count++;
if (set->interleave_sectors_set)
count++;
if (set->buffer_sectors_set)
count++;
if (set->journal_watermark_set)
count++;
if (set->commit_time_set)
count++;
if (set->bitmap_flush_interval_set)
count++;
if (set->sectors_per_bit_set)
count++;
EMIT_PARAMS(pos, "%s 0 %u %s %d fix_padding block_size:%u internal_hash:%s",
origin_dev,
set->tag_size,
set->mode,
count,
set->block_size,
set->internal_hash);
if (seg->integrity_meta_node)
EMIT_PARAMS(pos, " meta_device:%s", meta_dev);
if (seg->integrity_recalculate)
EMIT_PARAMS(pos, " recalculate");
if (set->journal_sectors_set)
EMIT_PARAMS(pos, " journal_sectors:%u", set->journal_sectors);
if (set->interleave_sectors_set)
EMIT_PARAMS(pos, " ineterleave_sectors:%u", set->interleave_sectors);
if (set->buffer_sectors_set)
EMIT_PARAMS(pos, " buffer_sectors:%u", set->buffer_sectors);
if (set->journal_watermark_set)
EMIT_PARAMS(pos, " journal_watermark:%u", set->journal_watermark);
if (set->commit_time_set)
EMIT_PARAMS(pos, " commit_time:%u", set->commit_time);
if (set->bitmap_flush_interval_set)
EMIT_PARAMS(pos, " bitmap_flush_interval:%u", set->bitmap_flush_interval);
if (set->sectors_per_bit_set)
EMIT_PARAMS(pos, " sectors_per_bit:%llu", (unsigned long long)set->sectors_per_bit);
return 1;
}
static int _thin_pool_emit_segment_line(struct dm_task *dmt,
struct load_segment *seg,
char *params, size_t paramsize)
@@ -2877,7 +2755,7 @@ static int _vdo_emit_segment_line(struct dm_task *dmt,
"maxDiscard %u ack %u bio %u bioRotationInterval %u cpu %u hash %u logical %u physical %u",
data_dev,
seg->vdo_data_size / 8, // this parameter is in 4K units
seg->vdo_params.minimum_io_size * UINT32_C(512), // sector to byte units
seg->vdo_params.minimum_io_size,
seg->vdo_params.block_map_cache_size_mb * UINT64_C(256), // 1MiB -> 4KiB units
seg->vdo_params.block_map_era_length,
seg->vdo_params.use_metadata_hints ? "on" : "off" ,
@@ -2927,6 +2805,7 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
size_t paramsize)
{
int pos = 0;
int r;
int target_type_is_raid = 0;
char originbuf[DM_FORMAT_DEV_BUFSIZE], cowbuf[DM_FORMAT_DEV_BUFSIZE];
@@ -2937,7 +2816,8 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
break;
case SEG_MIRRORED:
/* Mirrors are pretty complicated - now in separate function */
if (!_mirror_emit_segment_line(dmt, seg, params, paramsize))
r = _mirror_emit_segment_line(dmt, seg, params, paramsize);
if (!r)
return_0;
break;
case SEG_SNAPSHOT:
@@ -2958,7 +2838,7 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
EMIT_PARAMS(pos, "%u %u ", seg->area_count, seg->stripe_size);
break;
case SEG_VDO:
if (!_vdo_emit_segment_line(dmt, seg, params, paramsize))
if (!(r = _vdo_emit_segment_line(dmt, seg, params, paramsize)))
return_0;
break;
case SEG_CRYPT:
@@ -2987,8 +2867,9 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
case SEG_RAID6_LA_6:
case SEG_RAID6_RA_6:
target_type_is_raid = 1;
if (!_raid_emit_segment_line(dmt, major, minor, seg, seg_start,
params, paramsize))
r = _raid_emit_segment_line(dmt, major, minor, seg, seg_start,
params, paramsize);
if (!r)
return_0;
break;
@@ -3008,10 +2889,6 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
if (!_writecache_emit_segment_line(dmt, seg, params, paramsize))
return_0;
break;
case SEG_INTEGRITY:
if (!_integrity_emit_segment_line(dmt, seg, params, paramsize))
return_0;
break;
}
switch(seg->type) {
@@ -3024,14 +2901,14 @@ static int _emit_segment_line(struct dm_task *dmt, uint32_t major,
case SEG_THIN:
case SEG_CACHE:
case SEG_WRITECACHE:
case SEG_INTEGRITY:
break;
case SEG_CRYPT:
case SEG_LINEAR:
case SEG_STRIPED:
if (!_emit_areas_line(dmt, seg, params, paramsize, &pos))
return_0;
if ((r = _emit_areas_line(dmt, seg, params, paramsize, &pos)) <= 0) {
stack;
return r;
}
if (!params[0]) {
log_error("No parameters supplied for %s target "
"%u:%u.", _dm_segtypes[seg->type].target,
@@ -3128,9 +3005,6 @@ static int _load_node(struct dm_tree_node *dnode)
if (!dm_task_suppress_identical_reload(dmt))
log_warn("WARNING: Failed to suppress reload of identical tables.");
if (dnode->props.skip_reload_params_compare)
dm_task_skip_reload_params_compare(dmt);
if ((r = dm_task_run(dmt))) {
r = dm_task_get_info(dmt, &dnode->info);
if (r && !dnode->info.inactive_table)
@@ -3149,8 +3023,8 @@ static int _load_node(struct dm_tree_node *dnode)
if (!existing_table_size && dnode->props.delay_resume_if_new)
dnode->props.size_changed = 0;
log_debug_activation("Table size changed from %" PRIu64 " to %" PRIu64 " for %s.%s",
existing_table_size,
log_debug_activation("Table size changed from %" PRIu64 " to %"
PRIu64 " for %s.%s", existing_table_size,
seg_start, _node_name(dnode),
dnode->props.size_changed ? "" : " (Ignoring.)");
@@ -3202,16 +3076,6 @@ static int _dm_tree_revert_activated(struct dm_tree_node *parent)
return 1;
}
static int _dm_tree_wait_and_revert_activated(struct dm_tree_node *dnode)
{
if (!dm_udev_wait(dm_tree_get_cookie(dnode)))
stack;
dm_tree_set_cookie(dnode, 0);
return _dm_tree_revert_activated(dnode);
}
int dm_tree_preload_children(struct dm_tree_node *dnode,
const char *uuid_prefix,
size_t uuid_prefix_len)
@@ -3241,7 +3105,7 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
return_0;
/* FIXME Cope if name exists with no uuid? */
if (!child->info.exists && !(node_created = _create_node(child, dnode)))
if (!child->info.exists && !(node_created = _create_node(child)))
return_0;
/* Propagate delayed resume from extended child node */
@@ -3251,22 +3115,28 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
if (!child->info.inactive_table &&
child->props.segment_count &&
!_load_node(child)) {
stack;
/*
* If the table load fails, try to remove the device in the kernel
* together with other created and preloaded devices.
* If the table load does not succeed, we remove the
* device in the kernel that would otherwise have an
* empty table. This makes the create + load of the
* device atomic. However, if other dependencies have
* already been created and loaded; this code is
* insufficient to remove those - only the node
* encountering the table load failure is removed.
*/
if (!_dm_tree_wait_and_revert_activated(dnode))
stack;
r = 0;
continue;
if (node_created) {
if (!_remove_node(child))
return_0;
if (!dm_udev_wait(dm_tree_get_cookie(dnode)))
stack;
dm_tree_set_cookie(dnode, 0);
(void) _dm_tree_revert_activated(child);
}
return_0;
}
/* No resume for a device without parents or with unchanged or smaller size */
if (!dm_tree_node_num_children(child, 1))
continue;
if (child->props.size_changed <= 0)
if (!dm_tree_node_num_children(child, 1) || (child->props.size_changed <= 0))
continue;
if (!child->info.inactive_table && !child->info.suspended)
@@ -3277,19 +3147,28 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
&child->info, &child->dtree->cookie, child->udev_flags,
child->info.suspended)) {
log_error("Unable to resume %s.", _node_name(child));
if (!_dm_tree_wait_and_revert_activated(dnode))
stack;
/* If the device was not previously active, we might as well remove this node. */
if (!child->info.live_table &&
!_deactivate_node(child->name, child->info.major, child->info.minor,
&child->dtree->cookie, child->udev_flags, 0))
log_error("Unable to deactivate %s.", _node_name(child));
r = 0;
/* Each child is handled independently */
continue;
}
if (node_created) {
/* Collect newly introduced devices for revert */
dm_list_add_h(&dnode->activated, &child->activated_list);
/* When creating new node also check transaction_id. */
if (child->props.send_messages &&
!_node_send_messages(child, uuid_prefix, uuid_prefix_len, 0)) {
stack;
if (!_dm_tree_wait_and_revert_activated(dnode))
if (!dm_udev_wait(dm_tree_get_cookie(dnode)))
stack;
dm_tree_set_cookie(dnode, 0);
(void) _dm_tree_revert_activated(dnode);
r = 0;
continue;
}
@@ -3859,48 +3738,6 @@ int dm_tree_node_add_writecache_target(struct dm_tree_node *node,
return 1;
}
int dm_tree_node_add_integrity_target(struct dm_tree_node *node,
uint64_t size,
const char *origin_uuid,
const char *meta_uuid,
struct integrity_settings *settings,
int recalculate)
{
struct load_segment *seg;
if (!(seg = _add_segment(node, SEG_INTEGRITY, size)))
return_0;
if (!meta_uuid) {
log_error("No integrity meta uuid.");
return 0;
}
if (!(seg->integrity_meta_node = dm_tree_find_node_by_uuid(node->dtree, meta_uuid))) {
log_error("Missing integrity's meta uuid %s.", meta_uuid);
return 0;
}
if (!_link_tree_nodes(node, seg->integrity_meta_node))
return_0;
if (!(seg->origin = dm_tree_find_node_by_uuid(node->dtree, origin_uuid))) {
log_error("Missing integrity's origin uuid %s.", origin_uuid);
return 0;
}
if (!_link_tree_nodes(node, seg->origin))
return_0;
memcpy(&seg->integrity_settings, settings, sizeof(struct integrity_settings));
seg->integrity_recalculate = recalculate;
node->props.skip_reload_params_compare = 1;
return 1;
}
int dm_tree_node_add_replicator_target(struct dm_tree_node *node,
uint64_t size,
const char *rlog_uuid,
@@ -3974,24 +3811,6 @@ int dm_tree_node_add_thin_pool_target(struct dm_tree_node *node,
uint32_t data_block_size,
uint64_t low_water_mark,
unsigned skip_block_zeroing)
{
return dm_tree_node_add_thin_pool_target_v1(node, size, transaction_id,
metadata_uuid, pool_uuid,
data_block_size,
low_water_mark,
skip_block_zeroing,
1);
}
int dm_tree_node_add_thin_pool_target_v1(struct dm_tree_node *node,
uint64_t size,
uint64_t transaction_id,
const char *metadata_uuid,
const char *pool_uuid,
uint32_t data_block_size,
uint64_t low_water_mark,
unsigned skip_block_zeroing,
unsigned crop_metadata)
{
struct load_segment *seg, *mseg;
uint64_t devsize = 0;
@@ -4019,18 +3838,17 @@ int dm_tree_node_add_thin_pool_target_v1(struct dm_tree_node *node,
if (!_link_tree_nodes(node, seg->metadata))
return_0;
if (crop_metadata)
/* FIXME: more complex target may need more tweaks */
dm_list_iterate_items(mseg, &seg->metadata->props.segs) {
devsize += mseg->size;
if (devsize > DM_THIN_MAX_METADATA_SIZE) {
log_debug_activation("Ignoring %" PRIu64 " of device.",
devsize - DM_THIN_MAX_METADATA_SIZE);
mseg->size -= (devsize - DM_THIN_MAX_METADATA_SIZE);
devsize = DM_THIN_MAX_METADATA_SIZE;
/* FIXME: drop remaining segs */
}
/* FIXME: more complex target may need more tweaks */
dm_list_iterate_items(mseg, &seg->metadata->props.segs) {
devsize += mseg->size;
if (devsize > DM_THIN_MAX_METADATA_SIZE) {
log_debug_activation("Ignoring %" PRIu64 " of device.",
devsize - DM_THIN_MAX_METADATA_SIZE);
mseg->size -= (devsize - DM_THIN_MAX_METADATA_SIZE);
devsize = DM_THIN_MAX_METADATA_SIZE;
/* FIXME: drop remaining segs */
}
}
if (!(seg->pool = dm_tree_find_node_by_uuid(node->dtree, pool_uuid))) {
log_error("Missing pool uuid %s.", pool_uuid);
@@ -4382,7 +4200,6 @@ int dm_tree_node_add_cache_target_base(struct dm_tree_node *node,
int dm_tree_node_add_vdo_target(struct dm_tree_node *node,
uint64_t size,
const char *vdo_pool_name,
const char *data_uuid,
uint64_t data_size,
const struct dm_vdo_target_params *vtp)
@@ -4404,7 +4221,7 @@ int dm_tree_node_add_vdo_target(struct dm_tree_node *node,
return_0;
seg->vdo_params = *vtp;
seg->vdo_name = vdo_pool_name;
seg->vdo_name = node->name;
seg->vdo_data_size = data_size;
node->props.send_messages = 2;

View File

@@ -492,7 +492,7 @@ static int _report_field_string_list(struct dm_report *rh,
delimiter = ",";
delimiter_len = strlen(delimiter);
i = pos = 0;
i = pos = len = 0;
dm_list_iterate_items(sl, data) {
arr[i].str = sl->str;
if (!sort) {
@@ -749,11 +749,10 @@ static void _display_fields_more(struct dm_report *rh,
id_len = strlen(type->prefix) + 3;
for (f = 0; fields[f].report_fn; f++) {
if (!(type = _find_type(rh, fields[f].type))) {
log_debug(INTERNAL_ERROR "Field type undefined.");
continue;
}
desc = (type->desc) ? : " ";
if ((type = _find_type(rh, fields[f].type)) && type->desc)
desc = type->desc;
else
desc = " ";
if (desc != last_desc) {
if (*last_desc)
log_warn(" ");

View File

@@ -296,8 +296,6 @@ int dm_get_status_cache(struct dm_pool *mem, const char *params,
s->feature_flags |= DM_CACHE_FEATURE_PASSTHROUGH;
else if (!strncmp(p, "metadata2 ", 10))
s->feature_flags |= DM_CACHE_FEATURE_METADATA2;
else if (!strncmp(p, "no_discard_passdown ", 20))
s->feature_flags |= DM_CACHE_FEATURE_NO_DISCARD_PASSDOWN;
else
log_error("Unknown feature in status: %s", params);
@@ -366,8 +364,8 @@ int dm_get_status_writecache(struct dm_pool *mem, const char *params,
if (!(s = dm_pool_zalloc(mem, sizeof(struct dm_status_writecache))))
return_0;
if (sscanf(params, "%llu %llu %llu %llu",
(unsigned long long *)&s->error,
if (sscanf(params, "%u %llu %llu %llu",
&s->error,
(unsigned long long *)&s->total_blocks,
(unsigned long long *)&s->free_blocks,
(unsigned long long *)&s->writeback_blocks) != 4) {
@@ -380,33 +378,6 @@ int dm_get_status_writecache(struct dm_pool *mem, const char *params,
return 1;
}
int dm_get_status_integrity(struct dm_pool *mem, const char *params,
struct dm_status_integrity **status)
{
struct dm_status_integrity *s;
char recalc_str[16] = "\0";
if (!(s = dm_pool_zalloc(mem, sizeof(*s))))
return_0;
if (sscanf(params, "%llu %llu %s",
(unsigned long long *)&s->number_of_mismatches,
(unsigned long long *)&s->provided_data_sectors,
recalc_str) != 3) {
log_error("Failed to parse integrity params: %s.", params);
dm_pool_free(mem, s);
return 0;
}
if (recalc_str[0] == '-')
s->recalc_sector = 0;
else
s->recalc_sector = strtoull(recalc_str, NULL, 0);
*status = s;
return 1;
}
int parse_thin_pool_status(const char *params, struct dm_status_thin_pool *s)
{
int pos;

View File

@@ -183,7 +183,7 @@ struct dm_target_spec {
struct dm_target_deps {
uint32_t count; /* Array size */
uint32_t padding; /* unused */
uint64_t dev[]; /* out */
uint64_t dev[0]; /* out */
};
/*
@@ -193,7 +193,7 @@ struct dm_name_list {
uint64_t dev;
uint32_t next; /* offset to the next record from
the _start_ of this */
char name[];
char name[0];
};
/*
@@ -203,7 +203,7 @@ struct dm_target_versions {
uint32_t next;
uint32_t version[3];
char name[];
char name[0];
};
/*
@@ -212,7 +212,7 @@ struct dm_target_versions {
struct dm_target_msg {
uint64_t sector; /* Device sector */
char message[];
char message[0];
};
/*
@@ -244,7 +244,6 @@ enum {
DM_TARGET_MSG_CMD,
DM_DEV_SET_GEOMETRY_CMD,
DM_DEV_ARM_POLL_CMD,
DM_GET_TARGET_VERSION_CMD,
};
#define DM_IOCTL 0xfd
@@ -271,8 +270,6 @@ enum {
#define DM_TARGET_MSG _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_GET_TARGET_VERSION _IOWR(DM_IOCTL, DM_GET_TARGET_VERSION_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
#define DM_VERSION_MINOR 36
#define DM_VERSION_PATCHLEVEL 0

View File

@@ -98,7 +98,7 @@ void dm_pools_check_leaks(void)
p->orig_pool,
p->name, p->stats.bytes);
#else
log_error(" [%p] %s", (void *)p, p->name);
log_error(" [%p] %s", p, p->name);
#endif
}
pthread_mutex_unlock(&_dm_pools_mutex);

View File

@@ -74,7 +74,7 @@ enum dm_vdo_write_policy {
// FIXME: review whether we should use the createParams from the userlib
struct dm_vdo_target_params {
uint32_t minimum_io_size; // in sectors
uint32_t minimum_io_size;
uint32_t block_map_cache_size_mb;
uint32_t block_map_era_length; // format period

View File

@@ -23,9 +23,8 @@ bool dm_vdo_validate_target_params(const struct dm_vdo_target_params *vtp,
{
bool valid = true;
/* 512 or 4096 bytes only ATM */
if ((vtp->minimum_io_size != 1) &&
(vtp->minimum_io_size != 8)) {
if ((vtp->minimum_io_size != 512) &&
(vtp->minimum_io_size != 4096)) {
log_error("VDO minimum io size %u is unsupported.",
vtp->minimum_io_size);
valid = false;
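This hunk, together with the "// in sectors" field comment and the "* UINT32_C(512)  // sector to byte units" change shown earlier for the VDO table-line emit code, reflects a unit change for minimum_io_size: the newer side stores it in 512-byte sectors (1 or 8) and converts when emitting the table line, while the older side carries the byte value (512 or 4096) directly. A minimal sketch of the conversion, with hypothetical values not taken from the diff:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t io_sectors = 8;				/* newer representation: sectors */
	uint32_t io_bytes   = io_sectors * UINT32_C(512);	/* 4096, as emitted on the table line */

	printf("%u sectors -> %u bytes\n", io_sectors, io_bytes);
	return 0;
}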

View File

@@ -126,9 +126,6 @@
/* Library version */
#undef DM_LIB_VERSION
/* Define to 1 to include the LVM editline shell. */
#undef EDITLINE_SUPPORT
/* Path to fsadm binary. */
#undef FSADM_PATH
@@ -154,9 +151,6 @@
/* Define to 1 if you have the `atexit' function. */
#undef HAVE_ATEXIT
/* Define if ioctl BLKZEROOUT can be used for device zeroing. */
#undef HAVE_BLKZEROOUT
/* Define to 1 if canonicalize_file_name is available. */
#undef HAVE_CANONICALIZE_FILE_NAME
@@ -182,12 +176,6 @@
/* Define to 1 if you don't have `vprintf' but do have `_doprnt.' */
#undef HAVE_DOPRNT
/* Define to 1 if you have the <editline/history.h> header file. */
#undef HAVE_EDITLINE_HISTORY_H
/* Define to 1 if you have the <editline/readline.h> header file. */
#undef HAVE_EDITLINE_READLINE_H
/* Define to 1 if you have the <errno.h> header file. */
#undef HAVE_ERRNO_H
@@ -304,12 +292,6 @@
/* Define to 1 if you have the <paths.h> header file. */
#undef HAVE_PATHS_H
/* Define to 1 if you have the `prlimit' function. */
#undef HAVE_PRLIMIT
/* Define to 1 if you have the `pselect' function. */
#undef HAVE_PSELECT
/* Define to 1 if the system has the type `ptrdiff_t'. */
#undef HAVE_PTRDIFF_T
@@ -543,18 +525,12 @@
/* Define to 1 if the system has the `__builtin_clzll' built-in function */
#undef HAVE___BUILTIN_CLZLL
/* Define to 1 to include built-in support for integrity. */
#undef INTEGRITY_INTERNAL
/* Internalization package */
#undef INTL_PACKAGE
/* Locale-dependent data */
#undef LOCALEDIR
/* Define to 1 to include code that uses lvmlockd dlm control option. */
#undef LOCKDDLM_CONTROL_SUPPORT
/* Define to 1 to include code that uses lvmlockd dlm option. */
#undef LOCKDDLM_SUPPORT

View File

@@ -20,7 +20,6 @@ SOURCES =\
activate/activate.c \
cache/lvmcache.c \
writecache/writecache.c \
integrity/integrity.c \
cache_segtype/cache.c \
commands/toolcontext.c \
config/config.c \
@@ -29,7 +28,6 @@ SOURCES =\
device/bcache.c \
device/bcache-utils.c \
device/dev-cache.c \
device/device_id.c \
device/dev-ext.c \
device/dev-io.c \
device/dev-md.c \
@@ -53,7 +51,6 @@ SOURCES =\
filters/filter-usable.c \
filters/filter-internal.c \
filters/filter-signature.c \
filters/filter-deviceid.c \
format_text/archive.c \
format_text/archiver.c \
format_text/export.c \
@@ -69,8 +66,6 @@ SOURCES =\
locking/locking.c \
log/log.c \
metadata/cache_manip.c \
metadata/writecache_manip.c \
metadata/integrity_manip.c \
metadata/lv.c \
metadata/lv_manip.c \
metadata/merge.c \
@@ -78,7 +73,6 @@ SOURCES =\
metadata/mirror.c \
metadata/pool_manip.c \
metadata/pv.c \
metadata/pv_list.c \
metadata/pv_manip.c \
metadata/pv_map.c \
metadata/raid_manip.c \

View File

@@ -185,8 +185,8 @@ void set_activation(int act, int silent)
if (warned || !act)
return;
log_warn("WARNING: Compiled without libdevmapper support. "
"Can't enable activation.");
log_error("Compiled without libdevmapper support. "
"Can't enable activation.");
warned = 1;
}
@@ -221,13 +221,23 @@ int lv_info(struct cmd_context *cmd, const struct logical_volume *lv, int use_la
{
return 0;
}
int lv_info_with_seg_status(struct cmd_context *cmd,
const struct lv_segment *lv_seg,
int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
struct lvinfo *info, int with_open_count, int with_read_ahead)
{
return 0;
}
int lv_info_with_seg_status(struct cmd_context *cmd, const struct logical_volume *lv,
const struct lv_segment *lv_seg, int use_layer,
struct lv_with_info_and_seg_status *status,
int with_open_count, int with_read_ahead)
{
return 0;
}
int lv_status(struct cmd_context *cmd, const struct lv_segment *lv_seg,
int use_layer, struct lv_seg_status *lv_seg_status)
{
return 0;
}
int lv_cache_status(const struct logical_volume *cache_lv,
struct lv_status_cache **status)
{
@@ -274,17 +284,18 @@ int lv_raid_message(const struct logical_volume *lv, const char *msg)
{
return 0;
}
int lv_writecache_message(const struct logical_volume *lv, const char *msg)
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
dm_percent_t *percent)
{
return 0;
}
int lv_thin_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **thin_pool_status)
int lv_thin_percent(const struct logical_volume *lv, int mapped,
dm_percent_t *percent)
{
return 0;
}
int lv_thin_status(const struct logical_volume *lv, int flush,
struct lv_status_thin **thin_status)
int lv_thin_pool_transaction_id(const struct logical_volume *lv,
uint64_t *transaction_id)
{
return 0;
}
@@ -292,15 +303,6 @@ int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id)
{
return 0;
}
int lv_vdo_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_vdo **vdo_status)
{
return 0;
}
int lv_vdo_pool_percent(const struct logical_volume *lv, dm_percent_t *percent)
{
return 0;
}
int lvs_in_vg_activated(const struct volume_group *vg)
{
return 0;
@@ -466,11 +468,6 @@ static int _passes_readonly_filter(struct cmd_context *cmd,
return _lv_passes_volumes_filter(cmd, lv, cn, activation_read_only_volume_list_CFG);
}
int lv_passes_readonly_filter(const struct logical_volume *lv)
{
return _passes_readonly_filter(lv->vg->cmd, lv);
}
int library_version(char *version, size_t size)
{
if (!activation())
@@ -542,7 +539,25 @@ int target_version(const char *target_name, uint32_t *maj,
int lvm_dm_prefix_check(int major, int minor, const char *prefix)
{
return dev_manager_check_prefix_dm_major_minor(major, minor, prefix);
struct dm_task *dmt;
const char *uuid;
int r;
if (!(dmt = dm_task_create(DM_DEVICE_STATUS)))
return_0;
if (!dm_task_set_minor(dmt, minor) ||
!dm_task_set_major(dmt, major) ||
!dm_task_run(dmt) ||
!(uuid = dm_task_get_uuid(dmt))) {
dm_task_destroy(dmt);
return 0;
}
r = strncasecmp(uuid, prefix, strlen(prefix));
dm_task_destroy(dmt);
return r ? 0 : 1;
}
int module_present(struct cmd_context *cmd, const char *target_name)
@@ -623,7 +638,7 @@ static int _lv_info(struct cmd_context *cmd, const struct logical_volume *lv,
int use_layer, struct lvinfo *info,
const struct lv_segment *seg,
struct lv_seg_status *seg_status,
int with_open_count, int with_read_ahead, int with_name_check)
int with_open_count, int with_read_ahead)
{
struct dm_info dminfo;
@@ -641,7 +656,7 @@ static int _lv_info(struct cmd_context *cmd, const struct logical_volume *lv,
/* New thin-pool has no layer, but -tpool suffix needs to be queried */
if (!use_layer && lv_is_new_thin_pool(lv)) {
/* Check if there isn't existing old thin pool mapping in the table */
if (!dev_manager_info(cmd, lv, NULL, 0, 0, 0, &dminfo, NULL, NULL))
if (!dev_manager_info(cmd, lv, NULL, 0, 0, &dminfo, NULL, NULL))
return_0;
if (!dminfo.exists)
use_layer = 1;
@@ -654,9 +669,8 @@ static int _lv_info(struct cmd_context *cmd, const struct logical_volume *lv,
if (!dev_manager_info(cmd, lv,
(use_layer) ? lv_layer(lv) : NULL,
with_open_count, with_read_ahead, with_name_check,
&dminfo,
(info) ? &info->read_ahead : NULL,
with_open_count, with_read_ahead,
&dminfo, (info) ? &info->read_ahead : NULL,
seg_status))
return_0;
@@ -685,16 +699,7 @@ int lv_info(struct cmd_context *cmd, const struct logical_volume *lv, int use_la
if (!activation())
return 0;
return _lv_info(cmd, lv, use_layer, info, NULL, NULL, with_open_count, with_read_ahead, 0);
}
int lv_info_with_name_check(struct cmd_context *cmd, const struct logical_volume *lv,
int use_layer, struct lvinfo *info)
{
if (!activation())
return 0;
return _lv_info(cmd, lv, use_layer, info, NULL, NULL, 0, 0, 1);
return _lv_info(cmd, lv, use_layer, info, NULL, NULL, with_open_count, with_read_ahead);
}
/*
@@ -724,16 +729,16 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
* STATUS is collected from cache LV */
if (!(lv_seg = get_only_segment_using_this_lv(lv)))
return_0;
(void) _lv_info(cmd, lv_seg->lv, 1, NULL, lv_seg, &status->seg_status, 0, 0, 0);
(void) _lv_info(cmd, lv_seg->lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
}
if (lv_is_thin_pool(lv)) {
/* Always collect status for '-tpool' */
if (_lv_info(cmd, lv, 1, &status->info, lv_seg, &status->seg_status, 0, 0, 0) &&
if (_lv_info(cmd, lv, 1, &status->info, lv_seg, &status->seg_status, 0, 0) &&
(status->seg_status.type == SEG_STATUS_THIN_POOL)) {
/* There is -tpool device, but query 'active' state of 'fake' thin-pool */
if (!_lv_info(cmd, lv, 0, NULL, NULL, NULL, 0, 0, 0) &&
if (!_lv_info(cmd, lv, 0, NULL, NULL, NULL, 0, 0) &&
!status->seg_status.thin_pool->needs_check)
status->info.exists = 0; /* So pool LV is not active */
}
@@ -742,10 +747,10 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
if (lv_is_external_origin(lv)) {
if (!_lv_info(cmd, lv, 0, &status->info, NULL, NULL,
with_open_count, with_read_ahead, 0))
with_open_count, with_read_ahead))
return_0;
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0, 0);
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
}
@@ -758,13 +763,13 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
/* Show INFO for actual origin and grab status for merging origin */
if (!_lv_info(cmd, lv, 0, &status->info, lv_seg,
lv_is_merging_origin(lv) ? &status->seg_status : NULL,
with_open_count, with_read_ahead, 0))
with_open_count, with_read_ahead))
return_0;
if (status->info.exists &&
(status->seg_status.type != SEG_STATUS_SNAPSHOT)) /* Not merging */
/* Grab STATUS from layered -real */
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0, 0);
(void) _lv_info(cmd, lv, 1, NULL, lv_seg, &status->seg_status, 0, 0);
return 1;
}
@@ -773,11 +778,10 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
olv = origin_from_cow(lv);
if (!_lv_info(cmd, olv, 0, &status->info, first_seg(olv), &status->seg_status,
with_open_count, with_read_ahead, 0))
with_open_count, with_read_ahead))
return_0;
if (status->seg_status.type == SEG_STATUS_SNAPSHOT ||
(lv_is_thin_volume(olv) && (status->seg_status.type == SEG_STATUS_THIN))) {
if (status->seg_status.type == SEG_STATUS_SNAPSHOT) {
log_debug_activation("Snapshot merge is in progress, querying status of %s instead.",
display_lvname(lv));
/*
@@ -795,33 +799,21 @@ int lv_info_with_seg_status(struct cmd_context *cmd,
if (lv_is_vdo(lv)) {
if (!_lv_info(cmd, lv, 0, &status->info, NULL, NULL,
with_open_count, with_read_ahead, 0))
with_open_count, with_read_ahead))
return_0;
if (status->info.exists) {
/* Status for VDO pool */
(void) _lv_info(cmd, seg_lv(lv_seg, 0), 1, NULL,
first_seg(seg_lv(lv_seg, 0)),
&status->seg_status, 0, 0, 0);
&status->seg_status, 0, 0);
/* Use VDO pool segtype result for VDO segtype */
status->seg_status.seg = lv_seg;
}
return 1;
}
if (lv_is_vdo_pool(lv)) {
/* Always collect status for '-vpool' */
if (_lv_info(cmd, lv, 1, &status->info, lv_seg, &status->seg_status, 0, 0, 0) &&
(status->seg_status.type == SEG_STATUS_VDO_POOL)) {
/* There is -vpool device, but query 'active' state of 'fake' vdo-pool */
if (!_lv_info(cmd, lv, 0, NULL, NULL, NULL, 0, 0, 0))
status->info.exists = 0; /* So VDO pool LV is not active */
}
return 1;
}
return _lv_info(cmd, lv, 0, &status->info, lv_seg, &status->seg_status,
with_open_count, with_read_ahead, 0);
with_open_count, with_read_ahead);
}
#define OPEN_COUNT_CHECK_RETRIES 25
@@ -1251,52 +1243,86 @@ int lv_cache_status(const struct logical_volume *cache_lv,
return 1;
}
int lv_thin_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **thin_pool_status)
/*
* Returns data or metadata percent usage, depends on metadata 0/1.
* Returns 1 if percent set, else 0 on failure.
*/
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
dm_percent_t *percent)
{
int r;
struct dev_manager *dm;
if (!lv_info(lv->vg->cmd, lv, 1, NULL, 0, 0))
return 0;
log_debug_activation("Checking thin pool status for LV %s.",
display_lvname(lv));
log_debug_activation("Checking thin %sdata percent for LV %s.",
(metadata) ? "meta" : "", display_lvname(lv));
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;
if (!dev_manager_thin_pool_status(dm, lv, flush, thin_pool_status)) {
dev_manager_destroy(dm);
return_0;
}
if (!(r = dev_manager_thin_pool_percent(dm, lv, metadata, percent)))
stack;
/* User has to call dm_pool_destroy(thin_pool_status->mem)! */
dev_manager_destroy(dm);
return 1;
return r;
}
int lv_thin_status(const struct logical_volume *lv, int flush,
struct lv_status_thin **thin_status)
/*
* Returns 1 if percent set, else 0 on failure.
*/
int lv_thin_percent(const struct logical_volume *lv,
int mapped, dm_percent_t *percent)
{
int r;
struct dev_manager *dm;
if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;
log_debug_activation("Checking thin status for LV %s.",
log_debug_activation("Checking thin percent for LV %s.",
display_lvname(lv));
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;
if (!dev_manager_thin_status(dm, lv, flush, thin_status)) {
dev_manager_destroy(dm);
if (!(r = dev_manager_thin_percent(dm, lv, mapped, percent)))
stack;
dev_manager_destroy(dm);
return r;
}
/*
* Returns 1 if transaction_id set, else 0 on failure.
*/
int lv_thin_pool_transaction_id(const struct logical_volume *lv,
uint64_t *transaction_id)
{
int r;
struct dev_manager *dm;
struct dm_status_thin_pool *status;
if (!lv_info(lv->vg->cmd, lv, 1, NULL, 0, 0))
return 0;
log_debug_activation("Checking thin-pool transaction id for LV %s.",
display_lvname(lv));
if (!(dm = dev_manager_create(lv->vg->cmd, lv->vg->name, 1)))
return_0;
}
/* User has to call dm_pool_destroy(thin_status->mem)! */
if (!(r = dev_manager_thin_pool_status(dm, lv, &status, 0)))
stack;
else
*transaction_id = status->transaction_id;
return 1;
dev_manager_destroy(dm);
return r;
}
int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id)
@@ -1334,7 +1360,7 @@ int lv_vdo_pool_status(const struct logical_volume *lv, int flush,
int r = 0;
struct dev_manager *dm;
if (!lv_info(lv->vg->cmd, lv, 1, NULL, 0, 0))
if (!lv_info(lv->vg->cmd, lv, 0, NULL, 0, 0))
return 0;
log_debug_activation("Checking VDO pool status for LV %s.",
@@ -1584,8 +1610,6 @@ static char *_build_target_uuid(struct cmd_context *cmd, const struct logical_vo
if (lv_is_thin_pool(lv))
layer = "tpool"; /* Monitor "tpool" for the "thin pool". */
else if (lv_is_vdo_pool(lv))
layer = "vpool"; /* Monitor "vpool" for the "VDO pool". */
else if (lv_is_origin(lv) || lv_is_external_origin(lv))
layer = "real"; /* Monitor "real" for "snapshot-origin". */
else
@@ -2166,6 +2190,8 @@ static int _lv_suspend(struct cmd_context *cmd, const char *lvid_s,
if (laopts->origin_only && lv_is_thin_volume(lv) && lv_is_thin_volume(lv_pre))
lockfs = 1;
critical_section_inc(cmd, "suspending");
if (!lv_is_locked(lv) && lv_is_locked(lv_pre) &&
(pvmove_lv = find_pvmove_lv_in_lv(lv_pre))) {
/*
@@ -2207,23 +2233,16 @@ static int _lv_suspend(struct cmd_context *cmd, const char *lvid_s,
}
dm_list_add(&suspend_lvs, &lvl->list);
}
critical_section_inc(cmd, "suspending");
dm_list_iterate_items(lvl, &suspend_lvs)
if (!_lv_suspend_lv(lvl->lv, laopts, lockfs, 1)) {
critical_section_dec(cmd, "failed suspend");
goto_out; /* FIXME: resume on recovery path? */
}
} else { /* Standard suspend */
critical_section_inc(cmd, "suspending");
} else /* Standard suspend */
if (!_lv_suspend_lv(lv, laopts, lockfs, flush_required)) {
critical_section_dec(cmd, "failed suspend");
goto_out;
}
}
r = 1;
out:
@@ -2243,8 +2262,8 @@ int lv_suspend_if_active(struct cmd_context *cmd, const char *lvid_s, unsigned o
const struct logical_volume *lv, const struct logical_volume *lv_pre)
{
struct lv_activate_opts laopts = {
.exclusive = exclusive,
.origin_only = origin_only
.origin_only = origin_only,
.exclusive = exclusive
};
return _lv_suspend(cmd, lvid_s, &laopts, 0, lv, lv_pre);
@@ -2296,9 +2315,6 @@ static int _lv_resume(struct cmd_context *cmd, const char *lvid_s,
lv_is_thin_volume(lv) ? " thin only" : " without snapshots") : "",
laopts->revert ? " (reverting)" : "");
if (laopts->revert)
goto needs_resume;
if (!lv_info(cmd, lv, laopts->origin_only, &info, 0, 0))
goto_out;
@@ -2358,8 +2374,8 @@ int lv_resume_if_active(struct cmd_context *cmd, const char *lvid_s,
unsigned revert, const struct logical_volume *lv)
{
struct lv_activate_opts laopts = {
.exclusive = exclusive,
.origin_only = origin_only,
.exclusive = exclusive,
.revert = revert
};
@@ -2424,17 +2440,6 @@ int lv_deactivate(struct cmd_context *cmd, const char *lvid_s, const struct logi
}
}
if (lv_is_vdo_pool(lv)) {
/* If someone has removed the 'linear' mapping over the VDO device
* we may still be able to deactivate the rest of the tree
* i.e. in test-suite we simulate this via 'dmsetup remove' */
if (!lv_info(cmd, lv, 1, &info, 1, 0))
goto_out;
if (info.exists && !info.open_count)
r = 0; /* Unused VDO device left in table? */
}
if (r)
goto out;
}
@@ -2512,13 +2517,6 @@ static int _lv_activate(struct cmd_context *cmd, const char *lvid_s,
goto out;
}
if ((cmd->partial_activation || cmd->degraded_activation) &&
lv_is_partial(lv) && lv_is_raid(lv) && lv_raid_has_integrity((struct logical_volume *)lv)) {
cmd->partial_activation = 0;
cmd->degraded_activation = 0;
log_print("No degraded or partial activation for raid with integrity.");
}
if ((!lv->vg->cmd->partial_activation) && lv_is_partial(lv)) {
if (!lv_is_raid_type(lv) || !partial_raid_lv_supports_degraded_activation(lv)) {
log_error("Refusing activation of partial LV %s. "
@@ -2535,14 +2533,6 @@ static int _lv_activate(struct cmd_context *cmd, const char *lvid_s,
}
}
if ((cmd->partial_activation || cmd->degraded_activation) && lv_is_writecache(lv)) {
struct logical_volume *lv_fast = first_seg(lv)->writecache;
if (lv_is_partial(lv) || (lv_fast && lv_is_partial(lv_fast))) {
log_error("Cannot use partial or degraded activation with writecache.");
goto out;
}
}
if (lv_has_unknown_segments(lv)) {
log_error("Refusing activation of LV %s containing "
"an unrecognised segment.", display_lvname(lv));
@@ -2575,7 +2565,7 @@ static int _lv_activate(struct cmd_context *cmd, const char *lvid_s,
laopts->noscan ? " noscan" : "",
laopts->temporary ? " temporary" : "");
if (!lv_info_with_name_check(cmd, lv, 0, &info))
if (!lv_info(cmd, lv, 0, &info, 0, 0))
goto_out;
/*
@@ -2947,7 +2937,8 @@ int revert_lv(struct cmd_context *cmd, const struct logical_volume *lv)
ret = lv_resume_if_active(cmd, NULL, 0, 0, 1, lv_committed(lv));
critical_section_dec(cmd, "unlocking on revert");
critical_section_dec(cmd, "unlocking on resume");
return ret;
}

View File

@@ -39,7 +39,6 @@ typedef enum {
SEG_STATUS_THIN_POOL,
SEG_STATUS_VDO_POOL,
SEG_STATUS_WRITECACHE,
SEG_STATUS_INTEGRITY,
SEG_STATUS_UNKNOWN
} lv_seg_status_type_t;
@@ -54,7 +53,6 @@ struct lv_seg_status {
struct dm_status_thin *thin;
struct dm_status_thin_pool *thin_pool;
struct dm_status_writecache *writecache;
struct dm_status_integrity *integrity;
struct lv_status_vdo vdo_pool;
};
};
@@ -146,8 +144,8 @@ int revert_lv(struct cmd_context *cmd, const struct logical_volume *lv);
*/
int lv_info(struct cmd_context *cmd, const struct logical_volume *lv, int use_layer,
struct lvinfo *info, int with_open_count, int with_read_ahead);
int lv_info_with_name_check(struct cmd_context *cmd, const struct logical_volume *lv,
int use_layer, struct lvinfo *info);
int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
struct lvinfo *info, int with_open_count, int with_read_ahead);
/*
* Returns 1 if lv_info_and_seg_status structure has been populated,
@@ -191,11 +189,13 @@ int lv_raid_message(const struct logical_volume *lv, const char *msg);
int lv_writecache_message(const struct logical_volume *lv, const char *msg);
int lv_cache_status(const struct logical_volume *cache_lv,
struct lv_status_cache **status);
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
dm_percent_t *percent);
int lv_thin_percent(const struct logical_volume *lv, int mapped,
dm_percent_t *percent);
int lv_thin_pool_transaction_id(const struct logical_volume *lv,
uint64_t *transaction_id);
int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id);
int lv_thin_status(const struct logical_volume *lv, int flush,
struct lv_status_thin **status);
int lv_thin_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **status);
int lv_vdo_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_vdo **status);
int lv_vdo_pool_percent(const struct logical_volume *lv, dm_percent_t *percent);
@@ -208,8 +208,6 @@ int lvs_in_vg_opened(const struct volume_group *vg);
int lv_is_active(const struct logical_volume *lv);
int lv_passes_readonly_filter(const struct logical_volume *lv);
/* Check is any component LV is active */
const struct logical_volume *lv_component_is_active(const struct logical_volume *lv);
const struct logical_volume *lv_holder_is_active(const struct logical_volume *lv);
@@ -262,7 +260,6 @@ void fs_unlock(void);
#define TARGET_NAME_CACHE "cache"
#define TARGET_NAME_WRITECACHE "writecache"
#define TARGET_NAME_INTEGRITY "integrity"
#define TARGET_NAME_ERROR "error"
#define TARGET_NAME_ERROR_OLD "erro" /* Truncated in older kernels */
#define TARGET_NAME_LINEAR "linear"
@@ -280,7 +277,6 @@ void fs_unlock(void);
#define MODULE_NAME_CLUSTERED_MIRROR "clog"
#define MODULE_NAME_CACHE TARGET_NAME_CACHE
#define MODULE_NAME_WRITECACHE TARGET_NAME_WRITECACHE
#define MODULE_NAME_INTEGRITY TARGET_NAME_INTEGRITY
#define MODULE_NAME_ERROR TARGET_NAME_ERROR
#define MODULE_NAME_LOG_CLUSTERED "log-clustered"
#define MODULE_NAME_LOG_USERSPACE "log-userspace"

File diff suppressed because it is too large

View File

@@ -47,7 +47,7 @@ void dev_manager_exit(void);
*/
int dev_manager_info(struct cmd_context *cmd, const struct logical_volume *lv,
const char *layer,
int with_open_count, int with_read_ahead, int with_name_check,
int with_open_count, int with_read_ahead,
struct dm_info *dminfo, uint32_t *read_ahead,
struct lv_seg_status *seg_status);
@@ -69,15 +69,19 @@ int dev_manager_writecache_message(struct dev_manager *dm,
int dev_manager_cache_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct lv_status_cache **status);
int dev_manager_thin_status(struct dev_manager *dm,
const struct logical_volume *lv, int flush,
struct lv_status_thin **status);
int dev_manager_thin_pool_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct dm_status_thin_pool **status,
int flush);
int dev_manager_thin_pool_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int metadata, dm_percent_t *percent);
int dev_manager_thin_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int mapped, dm_percent_t *percent);
int dev_manager_thin_device_id(struct dev_manager *dm,
const struct logical_volume *lv,
uint32_t *device_id);
int dev_manager_thin_pool_status(struct dev_manager *dm,
const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **status);
int dev_manager_vdo_pool_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct lv_status_vdo **vdo_status,
@@ -103,6 +107,12 @@ int dev_manager_device_uses_vg(struct device *dev,
int dev_manager_remove_dm_major_minor(uint32_t major, uint32_t minor);
int dev_manager_check_prefix_dm_major_minor(uint32_t major, uint32_t minor, const char *prefix);
int get_cache_single_meta_data(struct cmd_context *cmd,
struct logical_volume *lv,
struct logical_volume *pool_lv,
struct dm_info *info_meta, struct dm_info *info_data);
int remove_cache_single_meta_data(struct cmd_context *cmd,
struct dm_info *info_meta, struct dm_info *info_data);
#endif

View File

@@ -313,7 +313,7 @@ struct fs_op_parms {
char *lv_name;
char *dev;
char *old_lv_name;
char names[];
char names[0];
};
static void _store_str(char **pos, char **ptr, const char *str)
@@ -487,8 +487,7 @@ int fs_rename_lv(const struct logical_volume *lv, const char *dev,
void fs_unlock(void)
{
/* Do not allow syncing device name with suspended devices */
if (!dm_get_suspended_counter()) {
if (!prioritized_section()) {
log_debug_activation("Syncing device names");
/* Wait for all processed udev devices */
if (!dm_udev_wait(_fs_cookie))

lib/cache/lvmcache.c vendored (1747 lines changed)

File diff suppressed because it is too large

lib/cache/lvmcache.h vendored (115 lines changed)
View File

@@ -41,9 +41,14 @@ struct lvmcache_vginfo;
/*
* vgsummary represents a summary of the VG that is read
* without a lock during label scan. It's used to populate
* basic lvmcache vginfo/info during label scan prior to
* vg_read().
* without a lock. The info does not come through vg_read(),
* but through reading mdas. It provides information about
* the VG that is needed to lock the VG and then read it fully
* with vg_read(), after which the VG summary should be checked
* against the full VG metadata to verify it was correct (since
* it was read without a lock.)
*
* Once read, vgsummary information is saved in lvmcache_vginfo.
*/
struct lvmcache_vgsummary {
const char *vgname;
@@ -58,8 +63,6 @@ struct lvmcache_vgsummary {
int mda_num; /* 1 = summary from mda1, 2 = summary from mda2 */
unsigned mda_ignored:1;
unsigned zero_offset:1;
unsigned mismatch:1; /* lvmcache sets if this summary differs from previous values */
struct dm_list pvsummaries;
};
int lvmcache_init(struct cmd_context *cmd);
@@ -68,20 +71,18 @@ void lvmcache_destroy(struct cmd_context *cmd, int retain_orphans, int reset);
int lvmcache_label_scan(struct cmd_context *cmd);
int lvmcache_label_rescan_vg(struct cmd_context *cmd, const char *vgname, const char *vgid);
int lvmcache_label_rescan_vg_rw(struct cmd_context *cmd, const char *vgname, const char *vgid);
int lvmcache_label_reopen_vg_rw(struct cmd_context *cmd, const char *vgname, const char *vgid);
/* Add/delete a device */
struct lvmcache_info *lvmcache_add(struct cmd_context *cmd, struct labeller *labeller, const char *pvid,
struct device *dev, uint64_t label_sector,
const char *vgname, const char *vgid,
uint32_t vgstatus, int *is_duplicate);
int lvmcache_add_orphan_vginfo(struct cmd_context *cmd, const char *vgname, struct format_type *fmt);
struct lvmcache_info *lvmcache_add(struct labeller *labeller, const char *pvid,
struct device *dev, uint64_t label_sector,
const char *vgname, const char *vgid,
uint32_t vgstatus, int *is_duplicate);
int lvmcache_add_orphan_vginfo(const char *vgname, struct format_type *fmt);
void lvmcache_del(struct lvmcache_info *info);
void lvmcache_del_dev(struct device *dev);
/* Update things */
int lvmcache_update_vgname_and_id(struct cmd_context *cmd, struct lvmcache_info *info,
int lvmcache_update_vgname_and_id(struct lvmcache_info *info,
struct lvmcache_vgsummary *vgsummary);
int lvmcache_update_vg_from_read(struct volume_group *vg, unsigned precommitted);
int lvmcache_update_vg_from_write(struct volume_group *vg);
@@ -90,8 +91,12 @@ void lvmcache_lock_vgname(const char *vgname, int read_only);
void lvmcache_unlock_vgname(const char *vgname);
/* Queries */
const struct format_type *lvmcache_fmt_from_vgname(struct cmd_context *cmd, const char *vgname, const char *vgid, unsigned revalidate_labels);
int lvmcache_lookup_mda(struct lvmcache_vgsummary *vgsummary);
/* Decrement and test if there are still vg holders in vginfo. */
int lvmcache_vginfo_holders_dec_and_test_for_zero(struct lvmcache_vginfo *vginfo);
struct lvmcache_vginfo *lvmcache_vginfo_from_vgname(const char *vgname,
const char *vgid);
struct lvmcache_vginfo *lvmcache_vginfo_from_vgid(const char *vgid);
@@ -101,11 +106,14 @@ const char *lvmcache_vgid_from_vgname(struct cmd_context *cmd, const char *vgnam
struct device *lvmcache_device_from_pvid(struct cmd_context *cmd, const struct id *pvid, uint64_t *label_sector);
const char *lvmcache_vgname_from_info(struct lvmcache_info *info);
const struct format_type *lvmcache_fmt_from_info(struct lvmcache_info *info);
int lvmcache_vgs_locked(void);
int lvmcache_get_vgnameids(struct cmd_context *cmd,
struct dm_list *vgnameids,
const char *only_this_vgname,
int include_internal);
int lvmcache_get_vgnameids(struct cmd_context *cmd, int include_internal,
struct dm_list *vgnameids);
/* Returns list of struct dm_str_list containing pool-allocated copy of pvids */
struct dm_list *lvmcache_get_pvids(struct cmd_context *cmd, const char *vgname,
const char *vgid);
void lvmcache_drop_metadata(const char *vgname, int drop_precommitted);
void lvmcache_commit_metadata(const char *vgname);
@@ -159,18 +167,19 @@ int lvmcache_foreach_pv(struct lvmcache_vginfo *vginfo,
uint64_t lvmcache_device_size(struct lvmcache_info *info);
void lvmcache_set_device_size(struct lvmcache_info *info, uint64_t size);
struct device *lvmcache_device(struct lvmcache_info *info);
int lvmcache_is_orphan(struct lvmcache_info *info);
unsigned lvmcache_mda_count(struct lvmcache_info *info);
int lvmcache_vgid_is_cached(const char *vgid);
uint64_t lvmcache_smallest_mda_size(struct lvmcache_info *info);
bool lvmcache_has_duplicate_devs(void);
void lvmcache_del_dev_from_duplicates(struct device *dev);
bool lvmcache_dev_is_unused_duplicate(struct device *dev);
int lvmcache_pvid_in_unused_duplicates(const char *pvid);
int lvmcache_get_unused_duplicates(struct cmd_context *cmd, struct dm_list *head);
int vg_has_duplicate_pvs(struct volume_group *vg);
int lvmcache_found_duplicate_pvs(void);
int lvmcache_found_duplicate_vgnames(void);
bool lvmcache_has_duplicate_local_vgname(const char *vgid, const char *vgname);
void lvmcache_pvscan_duplicate_check(struct cmd_context *cmd);
int lvmcache_get_unused_duplicate_devs(struct cmd_context *cmd, struct dm_list *head);
int vg_has_duplicate_pvs(struct volume_group *vg);
int lvmcache_contains_lock_type_sanlock(struct cmd_context *cmd);
@@ -179,18 +188,37 @@ void lvmcache_get_max_name_lengths(struct cmd_context *cmd,
int lvmcache_vg_is_foreign(struct cmd_context *cmd, const char *vgname, const char *vgid);
bool lvmcache_scan_mismatch(struct cmd_context *cmd, const char *vgname, const char *vgid);
void lvmcache_lock_ordering(int enable);
int lvmcache_dev_is_unchosen_duplicate(struct device *dev);
void lvmcache_remove_unchosen_duplicate(struct device *dev);
int lvmcache_pvid_in_unchosen_duplicates(const char *pvid);
int lvmcache_get_vg_devs(struct cmd_context *cmd,
struct lvmcache_vginfo *vginfo,
struct dm_list *devs);
void lvmcache_set_independent_location(const char *vgname);
int lvmcache_scan_mismatch(struct cmd_context *cmd, const char *vgname, const char *vgid);
int lvmcache_vginfo_has_pvid(struct lvmcache_vginfo *vginfo, char *pvid);
uint64_t lvmcache_max_metadata_size(void);
void lvmcache_save_metadata_size(uint64_t val);
/*
* These are clvmd-specific functions and are not related to lvmcache.
* FIXME: rename these with a clvm_ prefix in place of lvmcache_
*/
void lvmcache_save_vg(struct volume_group *vg, int precommitted);
struct volume_group *lvmcache_get_saved_vg(const char *vgid, int precommitted);
struct volume_group *lvmcache_get_saved_vg_latest(const char *vgid);
void lvmcache_drop_saved_vgid(const char *vgid);
int dev_in_device_list(struct device *dev, struct dm_list *head);
bool lvmcache_has_bad_metadata(struct device *dev);
bool lvmcache_has_old_metadata(struct cmd_context *cmd, const char *vgname, const char *vgid, struct device *dev);
void lvmcache_set_bad_metadata(struct lvmcache_info *info, int mda1_bad, int mda2_bad);
int lvmcache_has_bad_metadata(struct device *dev);
int lvmcache_has_old_metadata(struct cmd_context *cmd, const char *vgname, const char *vgid, struct device *dev);
void lvmcache_get_outdated_devs(struct cmd_context *cmd,
const char *vgname, const char *vgid,
@@ -200,30 +228,11 @@ void lvmcache_get_outdated_mdas(struct cmd_context *cmd,
struct device *dev,
struct dm_list **mdas);
bool lvmcache_is_outdated_dev(struct cmd_context *cmd,
const char *vgname, const char *vgid,
struct device *dev);
int lvmcache_is_outdated_dev(struct cmd_context *cmd,
const char *vgname, const char *vgid,
struct device *dev);
void lvmcache_del_outdated_devs(struct cmd_context *cmd,
const char *vgname, const char *vgid);
void lvmcache_save_bad_mda(struct lvmcache_info *info, struct metadata_area *mda);
void lvmcache_get_bad_mdas(struct cmd_context *cmd,
const char *vgname, const char *vgid,
struct dm_list *bad_mda_list);
void lvmcache_get_mdas(struct cmd_context *cmd,
const char *vgname, const char *vgid,
struct dm_list *mda_list);
const char *dev_filtered_reason(struct device *dev);
const char *devname_error_reason(const char *devname);
struct metadata_area *lvmcache_get_dev_mda(struct device *dev, int mda_num);
void lvmcache_extra_md_component_checks(struct cmd_context *cmd);
unsigned int lvmcache_vg_info_count(void);
#endif


@@ -49,7 +49,7 @@ static void _cache_display(const struct lv_segment *seg)
const struct dm_config_node *n;
const struct lv_segment *setting_seg = NULL;
if (seg_is_cache(seg) && lv_is_cache_vol(seg->pool_lv))
if (seg_is_cache(seg) && lv_is_cache_single(seg->pool_lv))
setting_seg = seg;
else if (seg_is_cache_pool(seg))
@@ -504,6 +504,9 @@ static int _cache_text_import(struct lv_segment *seg,
seg->lv->status |= strstr(seg->lv->name, "_corig") ? LV_PENDING_DELETE : 0;
if (!attach_pool_lv(seg, pool_lv, NULL, NULL, NULL))
return_0;
if (!_settings_text_import(seg, sn))
return_0;
@@ -525,36 +528,24 @@ static int _cache_text_import(struct lv_segment *seg,
if (!dm_config_get_uint64(sn, "data_len", &seg->data_len))
return SEG_LOG_ERROR("Couldn't read data_len in");
/* Will use CVOL ID, when metadata_id is not provided */
if (dm_config_has_node(sn, "metadata_id")) {
if (!(seg->metadata_id = dm_pool_alloc(seg->lv->vg->vgmem, sizeof(*seg->metadata_id))))
return SEG_LOG_ERROR("Couldn't allocate metadata_id in");
if (!dm_config_get_str(sn, "metadata_id", &uuid))
return SEG_LOG_ERROR("Couldn't read metadata_id in");
if (!id_read_format(seg->metadata_id, uuid))
return SEG_LOG_ERROR("Couldn't format metadata_id in");
}
if (!dm_config_get_str(sn, "metadata_id", &uuid))
return SEG_LOG_ERROR("Couldn't read metadata_id in");
/* Will use CVOL ID, when data_id is not provided */
if (dm_config_has_node(sn, "data_id")) {
if (!(seg->data_id = dm_pool_alloc(seg->lv->vg->vgmem, sizeof(*seg->data_id))))
return SEG_LOG_ERROR("Couldn't allocate data_id in");
if (!dm_config_get_str(sn, "data_id", &uuid))
return SEG_LOG_ERROR("Couldn't read data_id in");
if (!id_read_format(seg->data_id, uuid))
return SEG_LOG_ERROR("Couldn't format data_id in");
}
pool_lv->status |= LV_CACHE_VOL; /* Mark as cachevol LV */
if (!id_read_format(&seg->metadata_id, uuid))
return SEG_LOG_ERROR("Couldn't format metadata_id in");
if (!dm_config_get_str(sn, "data_id", &uuid))
return SEG_LOG_ERROR("Couldn't read data_id in");
if (!id_read_format(&seg->data_id, uuid))
return SEG_LOG_ERROR("Couldn't format data_id in");
} else {
/* Do not call this when LV is cache_vol. */
/* Do not call this when LV is cache_single. */
/* load order is unknown, could be cache origin or pool LV, so check for both */
if (!dm_list_empty(&pool_lv->segments))
_fix_missing_defaults(first_seg(pool_lv));
}
if (!attach_pool_lv(seg, pool_lv, NULL, NULL, NULL))
return_0;
return 1;
}
@@ -579,7 +570,7 @@ static int _cache_text_export(const struct lv_segment *seg, struct formatter *f)
if (seg->cleaner_policy)
outf(f, "cleaner = 1");
if (lv_is_cache_vol(seg->pool_lv)) {
if (lv_is_cache_single(seg->pool_lv)) {
outf(f, "metadata_format = " FMTu32, seg->cache_metadata_format);
if (!_settings_text_export(seg, f))
@@ -590,17 +581,13 @@ static int _cache_text_export(const struct lv_segment *seg, struct formatter *f)
outf(f, "data_start = " FMTu64, seg->data_start);
outf(f, "data_len = " FMTu64, seg->data_len);
if (seg->metadata_id) {
if (!id_write_format(seg->metadata_id, buffer, sizeof(buffer)))
return_0;
outf(f, "metadata_id = \"%s\"", buffer);
}
if (!id_write_format(&seg->metadata_id, buffer, sizeof(buffer)))
return_0;
outf(f, "metadata_id = \"%s\"", buffer);
if (seg->data_id) {
if (!id_write_format(seg->data_id, buffer, sizeof(buffer)))
return_0;
outf(f, "data_id = \"%s\"", buffer);
}
if (!id_write_format(&seg->data_id, buffer, sizeof(buffer)))
return_0;
outf(f, "data_id = \"%s\"", buffer);
}
return 1;
@@ -633,7 +620,7 @@ static int _cache_add_target_line(struct dev_manager *dm,
cache_pool_seg = first_seg(seg->pool_lv);
if (lv_is_cache_vol(seg->pool_lv))
if (lv_is_cache_single(seg->pool_lv))
setting_seg = seg;
else
setting_seg = cache_pool_seg;
@@ -681,7 +668,7 @@ static int _cache_add_target_line(struct dev_manager *dm,
if (!(origin_uuid = build_dm_uuid(mem, seg_lv(seg, 0), NULL)))
return_0;
if (!lv_is_cache_vol(seg->pool_lv)) {
if (!lv_is_cache_single(seg->pool_lv)) {
/* We don't use start/len when using separate data/meta devices. */
if (seg->metadata_len || seg->data_len) {
log_error(INTERNAL_ERROR "LV %s using unsupported ranges with cache pool.",
@@ -708,13 +695,13 @@ static int _cache_add_target_line(struct dev_manager *dm,
memset(&metadata_lvid, 0, sizeof(metadata_lvid));
memset(&data_lvid, 0, sizeof(data_lvid));
memcpy(&metadata_lvid.id[0], &seg->lv->vg->id, sizeof(struct id));
memcpy(&metadata_lvid.id[1], (seg->metadata_id) ? : &seg->pool_lv->lvid.id[1], sizeof(struct id));
memcpy(&metadata_lvid.id[1], &seg->metadata_id, sizeof(struct id));
memcpy(&data_lvid.id[0], &seg->lv->vg->id, sizeof(struct id));
memcpy(&data_lvid.id[1], (seg->data_id) ? : &seg->pool_lv->lvid.id[1], sizeof(struct id));
memcpy(&data_lvid.id[1], &seg->data_id, sizeof(struct id));
if (!(metadata_uuid = dm_build_dm_uuid(mem, UUID_PREFIX, (const char *)&metadata_lvid.s, "cmeta")))
if (!(metadata_uuid = dm_build_dm_uuid(mem, UUID_PREFIX, (const char *)&metadata_lvid.s, NULL)))
return_0;
if (!(data_uuid = dm_build_dm_uuid(mem, UUID_PREFIX, (const char *)&data_lvid.s, "cdata")))
if (!(data_uuid = dm_build_dm_uuid(mem, UUID_PREFIX, (const char *)&data_lvid.s, NULL)))
return_0;
}


@@ -32,7 +32,6 @@
#include "lib/cache/lvmcache.h"
#include "lib/format_text/archiver.h"
#include "lib/lvmpolld/lvmpolld-client.h"
#include "lib/device/device_id.h"
#include <locale.h>
#include <sys/stat.h>
@@ -233,45 +232,6 @@ static void _get_sysfs_dir(struct cmd_context *cmd, char *buf, size_t buf_size)
strncpy(buf, sys_mnt, buf_size);
}
static uint32_t _parse_debug_fields(struct cmd_context *cmd, int cfg, const char *cfgname)
{
const struct dm_config_node *cn;
const struct dm_config_value *cv;
uint32_t debug_fields = 0;
if (!(cn = find_config_tree_array(cmd, cfg, NULL))) {
log_error(INTERNAL_ERROR "Unable to find configuration for log/%s.", cfgname);
return 0;
}
for (cv = cn->v; cv; cv = cv->next) {
if (cv->type != DM_CFG_STRING) {
log_verbose("log/%s contains a value which is not a string. Ignoring.", cfgname);
continue;
}
if (!strcasecmp(cv->v.str, "all"))
return 0;
if (!strcasecmp(cv->v.str, "time"))
debug_fields |= LOG_DEBUG_FIELD_TIME;
else if (!strcasecmp(cv->v.str, "command"))
debug_fields |= LOG_DEBUG_FIELD_COMMAND;
else if (!strcasecmp(cv->v.str, "fileline"))
debug_fields |= LOG_DEBUG_FIELD_FILELINE;
else if (!strcasecmp(cv->v.str, "message"))
debug_fields |= LOG_DEBUG_FIELD_MESSAGE;
else
log_verbose("Unrecognised value for log/%s: %s", cfgname, cv->v.str);
}
return debug_fields;
}
static int _parse_debug_classes(struct cmd_context *cmd)
{
const struct dm_config_node *cn;
@@ -360,8 +320,8 @@ static void _init_logging(struct cmd_context *cmd)
cmd->default_settings.msg_prefix = find_config_tree_str_allow_empty(cmd, log_prefix_CFG, NULL);
init_msg_prefix(cmd->default_settings.msg_prefix);
/* so that file and verbose output have a command prefix */
init_log_command(0, 0);
cmd->default_settings.cmd_name = find_config_tree_bool(cmd, log_command_names_CFG, NULL);
init_cmd_name(cmd->default_settings.cmd_name);
/* Test mode */
cmd->default_settings.test =
@@ -375,19 +335,21 @@ static void _init_logging(struct cmd_context *cmd)
log_file = find_config_tree_str(cmd, log_file_CFG, NULL);
if (log_file) {
release_log_memory();
fin_log();
init_log_file(log_file, append);
}
log_file = find_config_tree_str(cmd, log_activate_file_CFG, NULL);
if (log_file)
init_log_direct(log_file, append);
init_log_while_suspended(find_config_tree_bool(cmd, log_activation_CFG, NULL));
cmd->default_settings.debug_classes = _parse_debug_classes(cmd);
log_debug("Setting log debug classes to %d", cmd->default_settings.debug_classes);
init_debug_classes_logged(cmd->default_settings.debug_classes);
init_debug_file_fields(_parse_debug_fields(cmd, log_debug_file_fields_CFG, "debug_file_fields"));
init_debug_output_fields(_parse_debug_fields(cmd, log_debug_output_fields_CFG, "debug_output_fields"));
t = time(NULL);
ctime_r(&t, &timebuf[0]);
timebuf[24] = '\0';
@@ -716,8 +678,6 @@ static int _process_config(struct cmd_context *cmd)
if (!_init_system_id(cmd))
return_0;
init_io_memory_size(find_config_tree_int(cmd, global_io_memory_size_CFG, NULL));
return 1;
}
@@ -1067,7 +1027,7 @@ static int _init_dev_cache(struct cmd_context *cmd)
return 1;
}
#define MAX_FILTERS 11
#define MAX_FILTERS 10
static struct dev_filter *_init_filter_chain(struct cmd_context *cmd)
{
@@ -1086,9 +1046,6 @@ static struct dev_filter *_init_filter_chain(struct cmd_context *cmd)
* sysfs filter. Only available on 2.6 kernels. Non-critical.
* Listed first because it's very efficient at eliminating
* unavailable devices.
*
* TODO: I suspect that using the lvm_type and device_id
* filters before this one may be more efficient.
*/
if (find_config_tree_bool(cmd, devices_sysfs_scan_CFG, NULL)) {
if ((filters[nr_filt] = sysfs_filter_create()))
@@ -1127,13 +1084,6 @@ static struct dev_filter *_init_filter_chain(struct cmd_context *cmd)
}
nr_filt++;
/* filter based on the device_ids saved in the devices file */
if (!(filters[nr_filt] = deviceid_filter_create(cmd))) {
log_error("Failed to create deviceid device filter");
goto bad;
}
nr_filt++;
/* usable device filter. Required. */
if (!(filters[nr_filt] = usable_filter_create(cmd, cmd->dev_types, FILTER_MODE_NO_LVMETAD))) {
log_error("Failed to create usabled device filter");
@@ -1287,7 +1237,7 @@ int init_lvmcache_orphans(struct cmd_context *cmd)
struct format_type *fmt;
dm_list_iterate_items(fmt, &cmd->formats)
if (!lvmcache_add_orphan_vginfo(cmd, fmt->orphan_vg_name, fmt))
if (!lvmcache_add_orphan_vginfo(fmt->orphan_vg_name, fmt))
return_0;
return 1;
@@ -1373,11 +1323,6 @@ static int _init_segtypes(struct cmd_context *cmd)
return 0;
#endif
#ifdef INTEGRITY_INTERNAL
if (!init_integrity_segtypes(cmd, &seglib))
return 0;
#endif
return 1;
}
@@ -1496,7 +1441,6 @@ int init_run_by_dmeventd(struct cmd_context *cmd)
init_dmeventd_monitor(DMEVENTD_MONITOR_IGNORE);
init_ignore_suspended_devices(1);
init_disable_dmeventd_monitoring(1); /* Lock settings */
cmd->run_by_dmeventd = 1;
return 0;
}
@@ -1509,8 +1453,6 @@ void destroy_config_context(struct cmd_context *cmd)
dm_pool_destroy(cmd->mem);
if (cmd->libmem)
dm_pool_destroy(cmd->libmem);
if (cmd->pending_delete_mem)
dm_pool_destroy(cmd->pending_delete_mem);
free(cmd);
}
@@ -1539,9 +1481,6 @@ struct cmd_context *create_config_context(void)
if (!(cmd->mem = dm_pool_create("command", 4 * 1024)))
goto out;
if (!(cmd->pending_delete_mem = dm_pool_create("pending_delete", 1024)))
goto_out;
dm_list_init(&cmd->config_files);
dm_list_init(&cmd->tags);
dm_list_init(&cmd->hints);
@@ -1610,7 +1549,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
dm_list_init(&cmd->formats);
dm_list_init(&cmd->segtypes);
dm_list_init(&cmd->tags);
dm_list_init(&cmd->hints);
dm_list_init(&cmd->config_files);
label_init();
@@ -1690,9 +1628,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
goto out;
}
if (!(cmd->pending_delete_mem = dm_pool_create("pending_delete", 1024)))
goto_out;
if (!_init_lvm_conf(cmd))
goto_out;
@@ -1729,8 +1664,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
if (!_init_dev_cache(cmd))
goto_out;
devices_file_init(cmd);
memlock_init(cmd);
if (!_init_formats(cmd))
@@ -1743,6 +1676,8 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
if (!init_lvmcache_orphans(cmd))
goto_out;
dm_list_init(&cmd->unused_duplicate_devs);
if (!_init_segtypes(cmd))
goto_out;
@@ -1762,8 +1697,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
cmd->current_settings = cmd->default_settings;
cmd->initialized.config = 1;
dm_list_init(&cmd->pending_delete);
out:
if (!cmd->initialized.config) {
destroy_toolcontext(cmd);
@@ -1849,14 +1782,13 @@ int refresh_toolcontext(struct cmd_context *cmd)
*/
activation_release();
hints_exit(cmd);
hints_exit();
lvmcache_destroy(cmd, 0, 0);
label_scan_destroy(cmd);
label_exit();
_destroy_segtypes(&cmd->segtypes);
_destroy_formats(cmd, &cmd->formats);
devices_file_exit(cmd);
if (!dev_cache_exit())
stack;
_destroy_dev_types(cmd);
@@ -1936,8 +1868,6 @@ int refresh_toolcontext(struct cmd_context *cmd)
if (!_init_dev_cache(cmd))
return_0;
devices_file_init(cmd);
if (!_init_formats(cmd))
return_0;
@@ -1955,12 +1885,6 @@ int refresh_toolcontext(struct cmd_context *cmd)
cmd->initialized.config = 1;
if (!dm_list_empty(&cmd->pending_delete)) {
log_debug(INTERNAL_ERROR "Unprocessed pending delete for %d devices.",
dm_list_size(&cmd->pending_delete));
dm_list_init(&cmd->pending_delete);
}
if (cmd->initialized.connections && !init_connections(cmd))
return_0;
@@ -1978,7 +1902,7 @@ void destroy_toolcontext(struct cmd_context *cmd)
archive_exit(cmd);
backup_exit(cmd);
hints_exit(cmd);
hints_exit();
lvmcache_destroy(cmd, 0, 0);
label_scan_destroy(cmd);
label_exit();
@@ -1987,7 +1911,6 @@ void destroy_toolcontext(struct cmd_context *cmd)
_destroy_filters(cmd);
if (cmd->mem)
dm_pool_destroy(cmd->mem);
devices_file_exit(cmd);
dev_cache_exit();
_destroy_dev_types(cmd);
_destroy_tags(cmd);
@@ -2002,8 +1925,6 @@ void destroy_toolcontext(struct cmd_context *cmd)
if (cmd->libmem)
dm_pool_destroy(cmd->libmem);
if (cmd->pending_delete_mem)
dm_pool_destroy(cmd->pending_delete_mem);
#ifndef VALGRIND_POOL
if (cmd->linebuffer) {
/* Reset stream buffering to defaults */
@@ -2032,6 +1953,7 @@ void destroy_toolcontext(struct cmd_context *cmd)
lvmpolld_disconnect();
release_log_memory();
activation_exit();
reset_log_duplicated();
fin_log();


@@ -44,6 +44,7 @@ struct config_info {
const char *fmt_name;
const char *dmeventd_executable;
uint64_t unit_factor;
int cmd_name; /* Show command name? */
mode_t umask;
char unit_type;
char _padding[1];
@@ -148,12 +149,10 @@ struct cmd_context {
unsigned unknown_system_id:1;
unsigned include_historical_lvs:1; /* also process/report/display historical LVs */
unsigned record_historical_lvs:1; /* record historical LVs */
unsigned include_exported_vgs:1;
unsigned include_foreign_vgs:1; /* report/display cmds can reveal foreign VGs */
unsigned include_shared_vgs:1; /* report/display cmds can reveal lockd VGs */
unsigned include_active_foreign_vgs:1; /* cmd should process foreign VGs with active LVs */
unsigned vg_read_print_access_error:1; /* print access errors from vg_read */
unsigned allow_mixed_block_sizes:1;
unsigned force_access_clustered:1;
unsigned lockd_gl_disable:1;
unsigned lockd_vg_disable:1;
@@ -161,10 +160,7 @@ struct cmd_context {
unsigned lockd_gl_removed:1;
unsigned lockd_vg_default_sh:1;
unsigned lockd_vg_enforce_sh:1;
unsigned lockd_lv_sh_for_ex:1;
unsigned lockd_global_ex:1; /* set while global lock held ex (lockd) */
unsigned lockf_global_ex:1; /* set while global lock held ex (flock) */
unsigned nolocking:1;
unsigned lockd_lv_sh:1;
unsigned vg_notify:1;
unsigned lv_notify:1;
unsigned pv_notify:1;
@@ -174,7 +170,6 @@ struct cmd_context {
unsigned pvscan_cache_single:1;
unsigned can_use_one_scan:1;
unsigned is_clvmd:1;
unsigned md_component_detection:1;
unsigned use_full_md_check:1;
unsigned is_activating:1;
unsigned enable_hints:1; /* hints are enabled for cmds in general */
@@ -182,27 +177,12 @@ struct cmd_context {
unsigned pvscan_recreate_hints:1; /* enable special case hint handling for pvscan --cache */
unsigned scan_lvs:1;
unsigned wipe_outdated_pvs:1;
unsigned enable_devices_list:1; /* command is using --devices option */
unsigned enable_devices_file:1; /* command is using devices file */
unsigned pending_devices_file:1; /* command may create and enable devices file */
unsigned create_edit_devices_file:1; /* command expects to create and/or edit devices file */
unsigned edit_devices_file:1; /* command expects to edit devices file */
unsigned filter_deviceid_skip:1; /* don't use filter-deviceid */
unsigned filter_regex_with_devices_file:1; /* use filter-regex even when devices file is enabled */
unsigned filter_nodata_only:1; /* only use filters that do not require data from the dev */
unsigned run_by_dmeventd:1; /* command is being run by dmeventd */
unsigned sysinit:1; /* --sysinit is used */
/*
* Devices and filtering.
*/
struct dev_filter *filter;
struct dm_list hints;
struct dm_list use_devices; /* struct dev_use for each entry in devices file */
const char *md_component_checks;
const char *search_for_devnames; /* config file setting */
const char *devicesfile; /* from --devicesfile option */
struct dm_list deviceslist; /* from --devices option, struct dm_str_list */
/*
* Configuration.
@@ -234,7 +214,6 @@ struct cmd_context {
char system_dir[PATH_MAX];
char dev_dir[PATH_MAX];
char proc_dir[PATH_MAX];
char devices_file_path[PATH_MAX];
/*
* Reporting.
@@ -254,8 +233,7 @@ struct cmd_context {
const char *report_list_item_separator;
const char *time_format;
unsigned rand_seed;
struct dm_list pending_delete; /* list of LVs for removal */
struct dm_pool *pending_delete_mem; /* memory pool for pending deletes */
struct dm_list unused_duplicate_devs; /* save preferences between lvmcache instances */
};
/*


@@ -503,10 +503,10 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
{
char *fb, *fe;
int r = 0;
int sz, use_plain_read = 1;
int use_mmap = 1;
off_t mmap_offset = 0;
char *buf = NULL;
struct config_source *cs = dm_config_get_custom(cft);
size_t rsize;
if (!_is_file_based_config_source(cs->type)) {
log_error(INTERNAL_ERROR "config_file_read_fd: expected file, special file "
@@ -515,28 +515,26 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
return 0;
}
/* Only use plain read with regular files */
/* Only use mmap with regular files */
if (!(dev->flags & DEV_REGULAR) || size2)
use_plain_read = 0;
use_mmap = 0;
if (!(buf = malloc(size + size2))) {
log_error("Failed to allocate circular buffer.");
return 0;
}
if (use_plain_read) {
/* Note: also used for lvm.conf to read all settings */
for (rsize = 0; rsize < size; rsize += sz) {
do {
sz = read(dev_fd(dev), buf + rsize, size - rsize);
} while ((sz < 0) && ((errno == EINTR) || (errno == EAGAIN)));
if (sz < 0) {
log_sys_error("read", dev_name(dev));
goto out;
}
if (use_mmap) {
mmap_offset = offset % lvm_getpagesize();
/* memory map the file */
fb = mmap((caddr_t) 0, size + mmap_offset, PROT_READ,
MAP_PRIVATE, dev_fd(dev), offset - mmap_offset);
if (fb == (caddr_t) (-1)) {
log_sys_error("mmap", dev_name(dev));
goto out;
}
fb = fb + mmap_offset;
} else {
if (!(buf = malloc(size + size2))) {
log_error("Failed to allocate circular buffer.");
return 0;
}
if (!dev_read_bytes(dev, offset, size, buf))
goto out;
@@ -544,9 +542,9 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
if (!dev_read_bytes(dev, offset2, size2, buf + size))
goto out;
}
}
fb = buf;
fb = buf;
}
/*
* The checksum passed in is the checksum from the mda_header
@@ -575,7 +573,15 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
r = 1;
out:
free(buf);
if (!use_mmap)
free(buf);
else {
/* unmap the file */
if (munmap(fb - mmap_offset, size + mmap_offset)) {
log_sys_error("munmap", dev_name(dev));
r = 0;
}
}
return r;
}
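For readers comparing the two I/O paths in the config_file_read_fd hunk above, here is a small standalone sketch of the same two techniques outside lvm2: an mmap of a regular file, and an EINTR/EAGAIN-tolerant read() loop into a malloc'd buffer. The input path is a hypothetical example and error handling is kept minimal.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/etc/hostname";   /* hypothetical input file */
	int fd = open(path, O_RDONLY);
	struct stat st;
	char *map, *buf;
	ssize_t sz;
	size_t rsize;

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* Path 1: mmap suits regular files; the offset must be page
	 * aligned (0 here) and the mapping is released with munmap. */
	map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map != MAP_FAILED) {
		printf("mmap: first byte %c\n", st.st_size ? map[0] : '?');
		munmap(map, st.st_size);
	}

	/* Path 2: plain read() loop, retrying on EINTR/EAGAIN. */
	if (!(buf = malloc(st.st_size + 1))) {
		close(fd);
		return 1;
	}
	for (rsize = 0; rsize < (size_t)st.st_size; rsize += sz) {
		do {
			sz = read(fd, buf + rsize, st.st_size - rsize);
		} while (sz < 0 && (errno == EINTR || errno == EAGAIN));
		if (sz <= 0)
			break;
	}
	buf[rsize] = '\0';
	printf("read: %zu bytes\n", rsize);
	free(buf);
	close(fd);
	return 0;
}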
@@ -710,7 +716,7 @@ static struct dm_config_value *_get_def_array_values(struct cmd_context *cmd,
return array;
}
if (!(token = enc_value = strdup(def_enc_value))) {
if (!(p = token = enc_value = strdup(def_enc_value))) {
log_error("_get_def_array_values: strdup failed");
return NULL;
}
@@ -1708,7 +1714,6 @@ static int _out_prefix_fn(const struct dm_config_node *cn, const char *line, voi
const char *node_type_name = cn->v ? "option" : "section";
char path[CFG_PATH_MAX_LEN];
char commentline[MAX_COMMENT_LINE+1];
int is_deprecated = 0;
if (cn->id <= 0)
return 1;
@@ -1722,14 +1727,13 @@ static int _out_prefix_fn(const struct dm_config_node *cn, const char *line, voi
cfg_def = cfg_def_get_item_p(cn->id);
is_deprecated = _def_node_is_deprecated(cfg_def, out->tree_spec);
if (out->tree_spec->withsummary || out->tree_spec->withcomments) {
_cfg_def_make_path(path, sizeof(path), cfg_def->id, cfg_def, 1);
fprintf(out->fp, "\n");
fprintf(out->fp, "%s# Configuration %s %s.\n", line, node_type_name, path);
if (out->tree_spec->withcomments && is_deprecated && cfg_def->deprecation_comment)
if (out->tree_spec->withcomments &&
_def_node_is_deprecated(cfg_def, out->tree_spec))
fprintf(out->fp, "%s# %s", line, cfg_def->deprecation_comment);
if (cfg_def->comment) {
@@ -1740,14 +1744,14 @@ static int _out_prefix_fn(const struct dm_config_node *cn, const char *line, voi
continue;
commentline[0] = '\0';
}
fprintf(out->fp, "%s#%s%s\n", line, commentline[0] ? " " : "", commentline);
fprintf(out->fp, "%s# %s\n", line, commentline);
/* withsummary prints only the first comment line. */
if (!out->tree_spec->withcomments)
break;
}
}
if (is_deprecated)
if (_def_node_is_deprecated(cfg_def, out->tree_spec))
fprintf(out->fp, "%s# This configuration %s is deprecated.\n", line, node_type_name);
if (cfg_def->flags & CFG_ADVANCED)
@@ -1775,7 +1779,7 @@ static int _out_prefix_fn(const struct dm_config_node *cn, const char *line, voi
return_0;
fprintf(out->fp, "%s# Available since version %s.\n", line, version);
if (is_deprecated) {
if (_def_node_is_deprecated(cfg_def, out->tree_spec)) {
if (!_get_config_node_version(cfg_def->deprecated_since_version, version))
return_0;
fprintf(out->fp, "%s# Deprecated since version %s.\n", line, version);


@@ -205,7 +205,7 @@ cfg_section(local_CFG_SECTION, "local", root_CFG_SECTION, 0, vsn(2, 2, 117), 0,
"# Please take care that each setting only appears once if uncommenting\n" \
"# example settings in this file and never copy this file between hosts.\n\n"
cfg(config_checks_CFG, "checks", config_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 1, vsn(2, 2, 99), NULL, 0, NULL,
cfg(config_checks_CFG, "checks", config_CFG_SECTION, 0, CFG_TYPE_BOOL, 1, vsn(2, 2, 99), NULL, 0, NULL,
"If enabled, any LVM configuration mismatch is reported.\n"
"This implies checking that the configuration key is understood by\n"
"LVM and that the value of the key is the proper type. If disabled,\n"
@@ -213,22 +213,22 @@ cfg(config_checks_CFG, "checks", config_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_
"without any warning (a message about the configuration key not being\n"
"found is issued in verbose mode only).\n")
cfg(config_abort_on_errors_CFG, "abort_on_errors", config_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2,2,99), NULL, 0, NULL,
cfg(config_abort_on_errors_CFG, "abort_on_errors", config_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2,2,99), NULL, 0, NULL,
"Abort the LVM process if a configuration mismatch is found.\n")
cfg_runtime(config_profile_dir_CFG, "profile_dir", config_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_DISALLOW_INTERACTIVE, CFG_TYPE_STRING, vsn(2, 2, 99), 0, NULL,
cfg_runtime(config_profile_dir_CFG, "profile_dir", config_CFG_SECTION, CFG_DISALLOW_INTERACTIVE, CFG_TYPE_STRING, vsn(2, 2, 99), 0, NULL,
"Directory where LVM looks for configuration profiles.\n")
cfg(devices_dir_CFG, "dir", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_DEV_DIR, vsn(1, 0, 0), NULL, 0, NULL,
cfg(devices_dir_CFG, "dir", devices_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_DEV_DIR, vsn(1, 0, 0), NULL, 0, NULL,
"Directory in which to create volume group device nodes.\n"
"Commands also accept this as a prefix on volume group names.\n")
cfg_array(devices_scan_CFG, "scan", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, "#S/dev", vsn(1, 0, 0), NULL, 0, NULL,
cfg_array(devices_scan_CFG, "scan", devices_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_STRING, "#S/dev", vsn(1, 0, 0), NULL, 0, NULL,
"Directories containing device nodes to use with LVM.\n")
cfg_array(devices_loopfiles_CFG, "loopfiles", devices_CFG_SECTION, CFG_DEFAULT_UNDEFINED | CFG_UNSUPPORTED, CFG_TYPE_STRING, NULL, vsn(1, 2, 0), NULL, vsn(2, 3, 0), NULL, NULL)
cfg(devices_obtain_device_list_from_udev_CFG, "obtain_device_list_from_udev", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_OBTAIN_DEVICE_LIST_FROM_UDEV, vsn(2, 2, 85), NULL, 0, NULL,
cfg(devices_obtain_device_list_from_udev_CFG, "obtain_device_list_from_udev", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_OBTAIN_DEVICE_LIST_FROM_UDEV, vsn(2, 2, 85), NULL, 0, NULL,
"Obtain the list of available devices from udev.\n"
"This avoids opening or using any inapplicable non-block devices or\n"
"subdirectories found in the udev directory. Any device node or\n"
@@ -237,7 +237,7 @@ cfg(devices_obtain_device_list_from_udev_CFG, "obtain_device_list_from_udev", de
"directories will be scanned fully. LVM needs to be compiled with\n"
"udev support for this setting to apply.\n")
cfg(devices_external_device_info_source_CFG, "external_device_info_source", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_EXTERNAL_DEVICE_INFO_SOURCE, vsn(2, 2, 116), NULL, 0, NULL,
cfg(devices_external_device_info_source_CFG, "external_device_info_source", devices_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_EXTERNAL_DEVICE_INFO_SOURCE, vsn(2, 2, 116), NULL, 0, NULL,
"Select an external device information source.\n"
"Some information may already be available in the system and LVM can\n"
"use this information to determine the exact type or use of devices it\n"
@@ -255,7 +255,7 @@ cfg(devices_external_device_info_source_CFG, "external_device_info_source", devi
" compiled with udev support.\n"
"#\n")
cfg(devices_hints_CFG, "hints", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_HINTS, vsn(2, 3, 2), NULL, 0, NULL,
cfg(devices_hints_CFG, "hints", devices_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_HINTS, vsn(2, 3, 2), NULL, 0, NULL,
"Use a local file to remember which devices have PVs on them.\n"
"Some commands will use this as an optimization to reduce device\n"
"scanning, and will only scan the listed PVs. Removing the hint file\n"
@@ -288,33 +288,7 @@ cfg_array(devices_preferred_names_CFG, "preferred_names", devices_CFG_SECTION, C
"preferred_names = [ \"^/dev/mpath/\", \"^/dev/mapper/mpath\", \"^/dev/[hs]d\" ]\n"
"#\n")
cfg(devices_use_devicesfile_CFG, "use_devicesfile", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_DEVICES_FILE, vsn(2, 3, 12), NULL, 0, NULL,
"Enable or disable the use of a devices file.\n"
"When enabled, lvm will only use devices that\n"
"are lised in the devices file. A devices file will\n"
"be used, regardless of this setting, when the --devicesfile\n"
"option is set to a specific file name.\n")
cfg(devices_devicesfile_CFG, "devicesfile", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DEVICES_FILE, vsn(2, 3, 12), NULL, 0, NULL,
"The name of the system devices file, listing devices that LVM should use.\n"
"This should not be used to select a non-system devices file.\n"
"The --devicesfile option is intended for alternative devices files.\n")
cfg(devices_search_for_devnames_CFG, "search_for_devnames", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_SEARCH_FOR_DEVNAMES, vsn(2, 3, 12), NULL, 0, NULL,
"Look outside of the devices file for missing devname entries.\n"
"A devname entry is used for a device that does not have a stable\n"
"device id, e.g. wwid, so the unstable device name is used as\n"
"the device id. After reboot, or if the device is reattached,\n"
"the device name may change, in which case lvm will not find\n"
"the expected PV on the device listed in the devices file.\n"
"This setting controls whether lvm will search other devices,\n"
"outside the devices file, to look for the missing PV on a\n"
"renamed device. If \"none\", lvm will not look at other devices,\n"
"and the PV may appear to be missing. If \"auto\", lvm will look\n"
"at other devices, but only those that are likely to have the PV.\n"
"If \"all\", lvm will look at all devices on the system.\n")
cfg_array(devices_filter_CFG, "filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, "#Sa|.*|", vsn(1, 0, 0), NULL, 0, NULL,
cfg_array(devices_filter_CFG, "filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, "#Sa|.*/|", vsn(1, 0, 0), NULL, 0, NULL,
"Limit the block devices that are used by LVM commands.\n"
"This is a list of regular expressions used to accept or reject block\n"
"device path names. Each regex is delimited by a vertical bar '|'\n"
@@ -332,7 +306,7 @@ cfg_array(devices_filter_CFG, "filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENT
"#\n"
"Example\n"
"Accept every block device:\n"
"filter = [ \"a|.*|\" ]\n"
"filter = [ \"a|.*/|\" ]\n"
"Reject the cdrom drive:\n"
"filter = [ \"r|/dev/cdrom|\" ]\n"
"Work with just loopback devices, e.g. for testing:\n"
@@ -340,10 +314,10 @@ cfg_array(devices_filter_CFG, "filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENT
"Accept all loop devices and ide drives except hdc:\n"
"filter = [ \"a|loop|\", \"r|/dev/hdc|\", \"a|/dev/ide|\", \"r|.*|\" ]\n"
"Use anchors to be very specific:\n"
"filter = [ \"a|^/dev/hda8$|\", \"r|.*|\" ]\n"
"filter = [ \"a|^/dev/hda8$|\", \"r|.*/|\" ]\n"
"#\n")
cfg_array(devices_global_filter_CFG, "global_filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, "#Sa|.*|", vsn(2, 2, 98), NULL, 0, NULL,
cfg_array(devices_global_filter_CFG, "global_filter", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, "#Sa|.*/|", vsn(2, 2, 98), NULL, 0, NULL,
"Limit the block devices that are used by LVM system components.\n"
"Because devices/filter may be overridden from the command line, it is\n"
"not suitable for system-wide device filtering, e.g. udev.\n"
@@ -352,16 +326,16 @@ cfg_array(devices_global_filter_CFG, "global_filter", devices_CFG_SECTION, CFG_D
"global_filter are not opened by LVM.\n")
cfg_runtime(devices_cache_CFG, "cache", devices_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(1, 0, 0), vsn(1, 2, 19), NULL,
NULL)
"This setting is no longer used.\n")
cfg_runtime(devices_cache_dir_CFG, "cache_dir", devices_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(1, 2, 19), vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg(devices_cache_file_prefix_CFG, "cache_file_prefix", devices_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, DEFAULT_CACHE_FILE_PREFIX, vsn(1, 2, 19), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg(devices_write_cache_state_CFG, "write_cache_state", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, 1, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg_array(devices_types_CFG, "types", devices_CFG_SECTION, CFG_DEFAULT_UNDEFINED | CFG_ADVANCED, CFG_TYPE_INT | CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL,
"List of additional acceptable block device types.\n"
@@ -372,59 +346,26 @@ cfg_array(devices_types_CFG, "types", devices_CFG_SECTION, CFG_DEFAULT_UNDEFINED
"types = [ \"fd\", 16 ]\n"
"#\n")
cfg(devices_sysfs_scan_CFG, "sysfs_scan", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SYSFS_SCAN, vsn(1, 0, 8), NULL, 0, NULL,
cfg(devices_sysfs_scan_CFG, "sysfs_scan", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SYSFS_SCAN, vsn(1, 0, 8), NULL, 0, NULL,
"Restrict device scanning to block devices appearing in sysfs.\n"
"This is a quick way of filtering out block devices that are not\n"
"present on the system. sysfs must be part of the kernel and mounted.)\n")
cfg(devices_scan_lvs_CFG, "scan_lvs", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SCAN_LVS, vsn(2, 2, 182), NULL, 0, NULL,
"Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.\n"
"When 1, LVM will detect PVs layered on LVs, and caution must be\n"
"taken to avoid a host accessing a layered VG that may not belong\n"
"to it, e.g. from a guest image. This generally requires excluding\n"
"the LVs with device filters. Also, when this setting is enabled,\n"
"every LVM command will scan every active LV on the system (unless\n"
"filtered), which can cause performance problems on systems with\n"
"many active LVs. When this setting is 0, LVM will not detect or\n"
"use PVs that exist on LVs, and will not allow a PV to be created on\n"
"an LV. The LVs are ignored using a built in device filter that\n"
"identifies and excludes LVs.\n")
cfg(devices_scan_lvs_CFG, "scan_lvs", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SCAN_LVS, vsn(2, 2, 182), NULL, 0, NULL,
"Scan LVM LVs for layered PVs.\n")
cfg(devices_multipath_component_detection_CFG, "multipath_component_detection", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_MULTIPATH_COMPONENT_DETECTION, vsn(2, 2, 89), NULL, 0, NULL,
cfg(devices_multipath_component_detection_CFG, "multipath_component_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MULTIPATH_COMPONENT_DETECTION, vsn(2, 2, 89), NULL, 0, NULL,
"Ignore devices that are components of DM multipath devices.\n")
cfg(devices_md_component_detection_CFG, "md_component_detection", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_MD_COMPONENT_DETECTION, vsn(1, 0, 18), NULL, 0, NULL,
"Enable detection and exclusion of MD component devices.\n"
"An MD component device is a block device that MD uses as part\n"
"of a software RAID virtual device. When an LVM PV is created\n"
"on an MD device, LVM must only use the top level MD device as\n"
"the PV, and should ignore the underlying component devices.\n"
"In cases where the MD superblock is located at the end of the\n"
"component devices, it is more difficult for LVM to consistently\n"
"identify an MD component, see the md_component_checks setting.\n")
cfg(devices_md_component_detection_CFG, "md_component_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MD_COMPONENT_DETECTION, vsn(1, 0, 18), NULL, 0, NULL,
"Ignore devices that are components of software RAID (md) devices.\n")
cfg(devices_md_component_checks_CFG, "md_component_checks", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_MD_COMPONENT_CHECKS, vsn(2, 3, 2), NULL, 0, NULL,
"The checks LVM should use to detect MD component devices.\n"
"MD component devices are block devices used by MD software RAID.\n"
"#\n"
"Accepted values:\n"
" auto\n"
" LVM will skip scanning the end of devices when it has other\n"
" indications that the device is not an MD component.\n"
" start\n"
" LVM will only scan the start of devices for MD superblocks.\n"
" This does not incur extra I/O by LVM.\n"
" full\n"
" LVM will scan the start and end of devices for MD superblocks.\n"
" This requires an extra read at the end of devices.\n"
"#\n")
cfg(devices_fw_raid_component_detection_CFG, "fw_raid_component_detection", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_FW_RAID_COMPONENT_DETECTION, vsn(2, 2, 112), NULL, 0, NULL,
cfg(devices_fw_raid_component_detection_CFG, "fw_raid_component_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_FW_RAID_COMPONENT_DETECTION, vsn(2, 2, 112), NULL, 0, NULL,
"Ignore devices that are components of firmware RAID devices.\n"
"LVM must use an external_device_info_source other than none for this\n"
"detection to execute.\n")
cfg(devices_md_chunk_alignment_CFG, "md_chunk_alignment", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_MD_CHUNK_ALIGNMENT, vsn(2, 2, 48), NULL, 0, NULL,
cfg(devices_md_chunk_alignment_CFG, "md_chunk_alignment", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MD_CHUNK_ALIGNMENT, vsn(2, 2, 48), NULL, 0, NULL,
"Align the start of a PV data area with md device's stripe-width.\n"
"This applies if a PV is placed directly on an md device.\n"
"default_data_alignment will be overriden if it is not aligned\n"
@@ -438,7 +379,7 @@ cfg(devices_default_data_alignment_CFG, "default_data_alignment", devices_CFG_SE
"This setting is overriden by data_alignment and the --dataalignment\n"
"option.\n")
cfg(devices_data_alignment_detection_CFG, "data_alignment_detection", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_DATA_ALIGNMENT_DETECTION, vsn(2, 2, 51), NULL, 0, NULL,
cfg(devices_data_alignment_detection_CFG, "data_alignment_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_DATA_ALIGNMENT_DETECTION, vsn(2, 2, 51), NULL, 0, NULL,
"Align the start of a PV data area with sysfs io properties.\n"
"The start of a PV data area will be a multiple of minimum_io_size or\n"
"optimal_io_size exposed in sysfs. minimum_io_size is the smallest\n"
@@ -452,14 +393,14 @@ cfg(devices_data_alignment_detection_CFG, "data_alignment_detection", devices_CF
"This setting is overriden by data_alignment and the --dataalignment\n"
"option.\n")
cfg(devices_data_alignment_CFG, "data_alignment", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(2, 2, 45), NULL, 0, NULL,
cfg(devices_data_alignment_CFG, "data_alignment", devices_CFG_SECTION, 0, CFG_TYPE_INT, 0, vsn(2, 2, 45), NULL, 0, NULL,
"Align the start of a PV data area with this number of KiB.\n"
"When non-zero, this setting overrides default_data_alignment.\n"
"Set to 0 to disable, in which case default_data_alignment\n"
"is used to align the first PE in units of MiB.\n"
"This setting is overriden by the --dataalignment option.\n")
cfg(devices_data_alignment_offset_detection_CFG, "data_alignment_offset_detection", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_DATA_ALIGNMENT_OFFSET_DETECTION, vsn(2, 2, 50), NULL, 0, NULL,
cfg(devices_data_alignment_offset_detection_CFG, "data_alignment_offset_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_DATA_ALIGNMENT_OFFSET_DETECTION, vsn(2, 2, 50), NULL, 0, NULL,
"Shift the start of an aligned PV data area based on sysfs information.\n"
"After a PV data area is aligned, it will be shifted by the\n"
"alignment_offset exposed in sysfs. This offset is often 0, but may\n"
@@ -469,12 +410,12 @@ cfg(devices_data_alignment_offset_detection_CFG, "data_alignment_offset_detectio
"LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).\n"
"This setting is overriden by the --dataalignmentoffset option.\n")
cfg(devices_ignore_suspended_devices_CFG, "ignore_suspended_devices", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_IGNORE_SUSPENDED_DEVICES, vsn(1, 2, 19), NULL, 0, NULL,
cfg(devices_ignore_suspended_devices_CFG, "ignore_suspended_devices", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_IGNORE_SUSPENDED_DEVICES, vsn(1, 2, 19), NULL, 0, NULL,
"Ignore DM devices that have I/O suspended while scanning devices.\n"
"Otherwise, LVM waits for a suspended device to become accessible.\n"
"This should only be needed in recovery situations.\n")
cfg(devices_ignore_lvm_mirrors_CFG, "ignore_lvm_mirrors", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_IGNORE_LVM_MIRRORS, vsn(2, 2, 104), NULL, 0, NULL,
cfg(devices_ignore_lvm_mirrors_CFG, "ignore_lvm_mirrors", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_IGNORE_LVM_MIRRORS, vsn(2, 2, 104), NULL, 0, NULL,
"Do not scan 'mirror' LVs to avoid possible deadlocks.\n"
"This avoids possible deadlocks when using the 'mirror' segment type.\n"
"This setting determines whether LVs using the 'mirror' segment type\n"
@@ -492,19 +433,19 @@ cfg(devices_ignore_lvm_mirrors_CFG, "ignore_lvm_mirrors", devices_CFG_SECTION, C
"apply to LVM RAID types like 'raid1' which handle failures in a\n"
"different way, making them a better choice for VG stacking.\n")
cfg(devices_disable_after_error_count_CFG, "disable_after_error_count", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(2, 2, 75), NULL, vsn(2, 3, 0), NULL,
NULL)
cfg(devices_disable_after_error_count_CFG, "disable_after_error_count", devices_CFG_SECTION, 0, CFG_TYPE_INT, 0, vsn(2, 2, 75), NULL, vsn(2, 3, 0), NULL,
"This setting is no longer used.\n")
cfg(devices_require_restorefile_with_uuid_CFG, "require_restorefile_with_uuid", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_REQUIRE_RESTOREFILE_WITH_UUID, vsn(2, 2, 73), NULL, 0, NULL,
cfg(devices_require_restorefile_with_uuid_CFG, "require_restorefile_with_uuid", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_REQUIRE_RESTOREFILE_WITH_UUID, vsn(2, 2, 73), NULL, 0, NULL,
"Allow use of pvcreate --uuid without requiring --restorefile.\n")
cfg(devices_pv_min_size_CFG, "pv_min_size", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_PV_MIN_SIZE_KB, vsn(2, 2, 85), NULL, 0, NULL,
cfg(devices_pv_min_size_CFG, "pv_min_size", devices_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_PV_MIN_SIZE_KB, vsn(2, 2, 85), NULL, 0, NULL,
"Minimum size in KiB of block devices which can be used as PVs.\n"
"In a clustered environment all nodes must use the same value.\n"
"Any value smaller than 512KiB is ignored. The previous built-in\n"
"value was 512.\n")
cfg(devices_issue_discards_CFG, "issue_discards", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ISSUE_DISCARDS, vsn(2, 2, 85), NULL, 0, NULL,
cfg(devices_issue_discards_CFG, "issue_discards", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ISSUE_DISCARDS, vsn(2, 2, 85), NULL, 0, NULL,
"Issue discards to PVs that are no longer used by an LV.\n"
"Discards are sent to an LV's underlying physical volumes when the LV\n"
"is no longer using the physical volumes' space, e.g. lvremove,\n"
@@ -516,7 +457,7 @@ cfg(devices_issue_discards_CFG, "issue_discards", devices_CFG_SECTION, CFG_DEFAU
"generally do. If enabled, discards will only be issued if both the\n"
"storage and kernel provide support.\n")
cfg(devices_allow_changes_with_duplicate_pvs_CFG, "allow_changes_with_duplicate_pvs", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ALLOW_CHANGES_WITH_DUPLICATE_PVS, vsn(2, 2, 153), NULL, 0, NULL,
cfg(devices_allow_changes_with_duplicate_pvs_CFG, "allow_changes_with_duplicate_pvs", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ALLOW_CHANGES_WITH_DUPLICATE_PVS, vsn(2, 2, 153), NULL, 0, NULL,
"Allow VG modification while a PV appears on multiple devices.\n"
"When a PV appears on multiple devices, LVM attempts to choose the\n"
"best device to use for the PV. If the devices represent the same\n"
@@ -528,11 +469,6 @@ cfg(devices_allow_changes_with_duplicate_pvs_CFG, "allow_changes_with_duplicate_
"Enabling this setting allows the VG to be used as usual even with\n"
"uncertain devices.\n")
cfg(devices_allow_mixed_block_sizes_CFG, "allow_mixed_block_sizes", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2, 3, 6), NULL, 0, NULL,
"Allow PVs in the same VG with different logical block sizes.\n"
"When allowed, the user is responsible to ensure that an LV is\n"
"using PVs with matching block sizes when necessary.\n")
cfg_array(allocation_cling_tag_list_CFG, "cling_tag_list", allocation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 2, 77), NULL, 0, NULL,
"Advise LVM which PVs to use when searching for new space.\n"
"When searching for free space to extend an LV, the 'cling' allocation\n"
@@ -551,14 +487,14 @@ cfg_array(allocation_cling_tag_list_CFG, "cling_tag_list", allocation_CFG_SECTIO
"cling_tag_list = [ \"@site1\", \"@site2\" ]\n"
"#\n")
cfg(allocation_maximise_cling_CFG, "maximise_cling", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_MAXIMISE_CLING, vsn(2, 2, 85), NULL, 0, NULL,
cfg(allocation_maximise_cling_CFG, "maximise_cling", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MAXIMISE_CLING, vsn(2, 2, 85), NULL, 0, NULL,
"Use a previous allocation algorithm.\n"
"Changes made in version 2.02.85 extended the reach of the 'cling'\n"
"policies to detect more situations where data can be grouped onto\n"
"the same disks. This setting can be used to disable the changes\n"
"and revert to the previous algorithm.\n")
cfg(allocation_use_blkid_wiping_CFG, "use_blkid_wiping", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_BLKID_WIPING, vsn(2, 2, 105), "@DEFAULT_USE_BLKID_WIPING@", 0, NULL,
cfg(allocation_use_blkid_wiping_CFG, "use_blkid_wiping", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_BLKID_WIPING, vsn(2, 2, 105), "@DEFAULT_USE_BLKID_WIPING@", 0, NULL,
"Use blkid to detect and erase existing signatures on new PVs and LVs.\n"
"The blkid library can detect more signatures than the native LVM\n"
"detection code, but may take longer. LVM needs to be compiled with\n"
@@ -567,7 +503,7 @@ cfg(allocation_use_blkid_wiping_CFG, "use_blkid_wiping", allocation_CFG_SECTION,
"swap signature, and LUKS signatures. To see the list of signatures\n"
"recognized by blkid, check the output of the 'blkid -k' command.\n")
cfg(allocation_wipe_signatures_when_zeroing_new_lvs_CFG, "wipe_signatures_when_zeroing_new_lvs", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 1, vsn(2, 2, 105), NULL, 0, NULL,
cfg(allocation_wipe_signatures_when_zeroing_new_lvs_CFG, "wipe_signatures_when_zeroing_new_lvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, 1, vsn(2, 2, 105), NULL, 0, NULL,
"Look for and erase any signatures while zeroing a new LV.\n"
"The --wipesignatures option overrides this setting.\n"
"Zeroing is controlled by the -Z/--zero option, and if not specified,\n"
@@ -583,7 +519,7 @@ cfg(allocation_wipe_signatures_when_zeroing_new_lvs_CFG, "wipe_signatures_when_z
"When this setting is disabled, signatures on new LVs are not detected\n"
"or erased unless the --wipesignatures option is used directly.\n")
cfg(allocation_mirror_logs_require_separate_pvs_CFG, "mirror_logs_require_separate_pvs", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_MIRROR_LOGS_REQUIRE_SEPARATE_PVS, vsn(2, 2, 85), NULL, 0, NULL,
cfg(allocation_mirror_logs_require_separate_pvs_CFG, "mirror_logs_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MIRROR_LOGS_REQUIRE_SEPARATE_PVS, vsn(2, 2, 85), NULL, 0, NULL,
"Mirror logs and images will always use different PVs.\n"
"The default setting changed in version 2.02.85.\n")
@@ -594,7 +530,7 @@ cfg(allocation_raid_stripe_all_devices_CFG, "raid_stripe_all_devices", allocatio
"stripes to use.\n"
"This was the default behaviour until release 2.02.162.\n")
cfg(allocation_cache_pool_metadata_require_separate_pvs_CFG, "cache_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 106), NULL, 0, NULL,
cfg(allocation_cache_pool_metadata_require_separate_pvs_CFG, "cache_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_BOOL, DEFAULT_CACHE_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 106), NULL, 0, NULL,
"Cache pool metadata and data will always use different PVs.\n")
cfg(allocation_cache_pool_cachemode_CFG, "cache_pool_cachemode", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_CACHE_MODE, vsn(2, 2, 113), NULL, vsn(2, 2, 128),
@@ -651,13 +587,8 @@ cfg(allocation_cache_pool_max_chunks_CFG, "cache_pool_max_chunks", allocation_CF
"For cache target v1.9 the recommended maximumm is 1000000 chunks.\n"
"Using cache pool with more chunks may degrade cache performance.\n")
cfg(allocation_thin_pool_metadata_require_separate_pvs_CFG, "thin_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 89), NULL, 0, NULL,
"Thin pool metadata and data will always use different PVs.\n")
cfg(allocation_thin_pool_crop_metadata_CFG, "thin_pool_crop_metadata", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_CROP_METADATA, vsn(2, 3, 12), NULL, 0, NULL,
"Older version of lvm2 cropped pool's metadata size to 15.81 GiB.\n"
"This is slightly less then the actual maximum 15.88 GiB.\n"
"For compatibility with older version and use of cropped size set to 1.\n")
cfg(allocation_thin_pool_metadata_require_separate_pvs_CFG, "thin_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 89), NULL, 0, NULL,
"Thin pool metdata and data will always use different PVs.\n")
cfg(allocation_thin_pool_zero_CFG, "thin_pool_zero", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_ZERO, vsn(2, 2, 99), NULL, 0, NULL,
"Thin pool data chunks are zeroed before they are first used.\n"
@@ -688,9 +619,6 @@ cfg(allocation_thin_pool_chunk_size_policy_CFG, "thin_pool_chunk_size_policy", a
" 512KiB.\n"
"#\n")
cfg(allocation_zero_metadata_CFG, "zero_metadata", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ZERO_METADATA, vsn(2, 3, 10), NULL, 0, NULL,
"Zero whole metadata area before use with thin or cache pool.\n")
cfg_runtime(allocation_thin_pool_chunk_size_CFG, "thin_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 99), 0, NULL,
"The minimal chunk size in KiB for thin pool volumes.\n"
"Larger chunk sizes may improve performance for plain thin volumes,\n"
@@ -857,10 +785,10 @@ cfg(log_command_log_selection_CFG, "command_log_selection", log_CFG_SECTION, CFG
"For more information about selection criteria in general, see\n"
"lvm(8) man page.\n")
cfg(log_verbose_CFG, "verbose", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_VERBOSE, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_verbose_CFG, "verbose", log_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_VERBOSE, vsn(1, 0, 0), NULL, 0, NULL,
"Controls the messages sent to stdout or stderr.\n")
cfg(log_silent_CFG, "silent", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SILENT, vsn(2, 2, 98), NULL, 0, NULL,
cfg(log_silent_CFG, "silent", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SILENT, vsn(2, 2, 98), NULL, 0, NULL,
"Suppress all non-essential messages from stdout.\n"
"This has the same effect as -qq. When enabled, the following commands\n"
"still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,\n"
@@ -870,103 +798,95 @@ cfg(log_silent_CFG, "silent", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_B
"Any 'yes' or 'no' questions not overridden by other arguments are\n"
"suppressed and default to 'no'.\n")
cfg(log_syslog_CFG, "syslog", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SYSLOG, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_syslog_CFG, "syslog", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SYSLOG, vsn(1, 0, 0), NULL, 0, NULL,
"Send log messages through syslog.\n")
cfg(log_file_CFG, "file", log_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL,
"Write error and debug log messages to a file specified here.\n")
cfg(log_overwrite_CFG, "overwrite", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_OVERWRITE, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_overwrite_CFG, "overwrite", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_OVERWRITE, vsn(1, 0, 0), NULL, 0, NULL,
"Overwrite the log file each time the program is run.\n")
cfg(log_level_CFG, "level", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_LOGLEVEL, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_level_CFG, "level", log_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_LOGLEVEL, vsn(1, 0, 0), NULL, 0, NULL,
"The level of log messages that are sent to the log file or syslog.\n"
"There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.\n"
"7 is the most verbose (LOG_DEBUG).\n")
cfg(log_indent_CFG, "indent", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_INDENT, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_indent_CFG, "indent", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_INDENT, vsn(1, 0, 0), NULL, 0, NULL,
"Indent messages according to their severity.\n")
cfg(log_command_names_CFG, "command_names", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_CMD_NAME, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_command_names_CFG, "command_names", log_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_CMD_NAME, vsn(1, 0, 0), NULL, 0, NULL,
"Display the command name on each line of output.\n")
cfg(log_prefix_CFG, "prefix", log_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ALLOW_EMPTY, CFG_TYPE_STRING, DEFAULT_MSG_PREFIX, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_prefix_CFG, "prefix", log_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, DEFAULT_MSG_PREFIX, vsn(1, 0, 0), NULL, 0, NULL,
"A prefix to use before the log message text.\n"
"(After the command name, if selected).\n"
"Two spaces allows you to see/grep the severity of each message.\n"
"To make the messages look similar to the original LVM tools use:\n"
"indent = 0, command_names = 1, prefix = \" -- \"\n")
cfg(log_activation_CFG, "activation", log_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(1, 0, 0), NULL, 0, NULL,
cfg(log_activation_CFG, "activation", log_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(1, 0, 0), NULL, 0, NULL,
"Log messages during activation.\n"
"Don't use this in low memory situations (can deadlock).\n")
cfg(log_activate_file_CFG, "activate_file", log_CFG_SECTION, CFG_DEFAULT_UNDEFINED | CFG_UNSUPPORTED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL, NULL)
cfg_array(log_debug_classes_CFG, "debug_classes", log_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ALLOW_EMPTY, CFG_TYPE_STRING, "#Smemory#Sdevices#Sio#Sactivation#Sallocation#Smetadata#Scache#Slocking#Slvmpolld#Sdbus", vsn(2, 2, 99), NULL, 0, NULL,
cfg_array(log_debug_classes_CFG, "debug_classes", log_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, "#Smemory#Sdevices#Sio#Sactivation#Sallocation#Smetadata#Scache#Slocking#Slvmpolld#Sdbus", vsn(2, 2, 99), NULL, 0, NULL,
"Select log messages by class.\n"
"Some debugging messages are assigned to a class and only appear in\n"
"debug output if the class is listed here. Classes currently\n"
"available: memory, devices, io, activation, allocation,\n"
"metadata, cache, locking, lvmpolld. Use \"all\" to see everything.\n")
cfg_array(log_debug_file_fields_CFG, "debug_file_fields", log_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, "#Stime#Scommand#Sfileline#Smessage", vsn(2, 3, 2), NULL, 0, NULL,
"The fields included in debug output written to log file.\n"
"Use \"all\" to include everything (the default).\n")
cfg_array(log_debug_output_fields_CFG, "debug_output_fields", log_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, "#Stime#Scommand#Sfileline#Smessage", vsn(2, 3, 2), NULL, 0, NULL,
"The fields included in debug output written to stderr.\n"
"Use \"all\" to include everything (the default).\n")
cfg(backup_backup_CFG, "backup", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_BACKUP_ENABLED, vsn(1, 0, 0), NULL, 0, NULL,
cfg(backup_backup_CFG, "backup", backup_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_BACKUP_ENABLED, vsn(1, 0, 0), NULL, 0, NULL,
"Maintain a backup of the current metadata configuration.\n"
"Think very hard before turning this off!\n")
cfg_runtime(backup_backup_dir_CFG, "backup_dir", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, vsn(1, 0, 0), 0, NULL,
cfg_runtime(backup_backup_dir_CFG, "backup_dir", backup_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(1, 0, 0), 0, NULL,
"Location of the metadata backup files.\n"
"Remember to back up this directory regularly!\n")
cfg(backup_archive_CFG, "archive", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ARCHIVE_ENABLED, vsn(1, 0, 0), NULL, 0, NULL,
cfg(backup_archive_CFG, "archive", backup_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ARCHIVE_ENABLED, vsn(1, 0, 0), NULL, 0, NULL,
"Maintain an archive of old metadata configurations.\n"
"Think very hard before turning this off.\n")
cfg_runtime(backup_archive_dir_CFG, "archive_dir", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, vsn(1, 0, 0), 0, NULL,
cfg_runtime(backup_archive_dir_CFG, "archive_dir", backup_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(1, 0, 0), 0, NULL,
"Location of the metdata archive files.\n"
"Remember to back up this directory regularly!\n")
cfg(backup_retain_min_CFG, "retain_min", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_ARCHIVE_NUMBER, vsn(1, 0, 0), NULL, 0, NULL,
cfg(backup_retain_min_CFG, "retain_min", backup_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_ARCHIVE_NUMBER, vsn(1, 0, 0), NULL, 0, NULL,
"Minimum number of archives to keep.\n")
cfg(backup_retain_days_CFG, "retain_days", backup_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_ARCHIVE_DAYS, vsn(1, 0, 0), NULL, 0, NULL,
cfg(backup_retain_days_CFG, "retain_days", backup_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_ARCHIVE_DAYS, vsn(1, 0, 0), NULL, 0, NULL,
"Minimum number of days to keep archive files.\n")
cfg(shell_history_size_CFG, "history_size", shell_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_MAX_HISTORY, vsn(1, 0, 0), NULL, 0, NULL,
cfg(shell_history_size_CFG, "history_size", shell_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_MAX_HISTORY, vsn(1, 0, 0), NULL, 0, NULL,
"Number of lines of history to store in ~/.lvm_history.\n")
cfg(global_umask_CFG, "umask", global_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_FORMAT_INT_OCTAL, CFG_TYPE_INT, DEFAULT_UMASK, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_umask_CFG, "umask", global_CFG_SECTION, CFG_FORMAT_INT_OCTAL, CFG_TYPE_INT, DEFAULT_UMASK, vsn(1, 0, 0), NULL, 0, NULL,
"The file creation mask for any files and directories created.\n"
"Interpreted as octal if the first digit is zero.\n")
cfg(global_test_CFG, "test", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_test_CFG, "test", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(1, 0, 0), NULL, 0, NULL,
"No on-disk metadata changes will be made in test mode.\n"
"Equivalent to having the -t option on every command.\n")
cfg(global_units_CFG, "units", global_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_UNITS, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_units_CFG, "units", global_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_STRING, DEFAULT_UNITS, vsn(1, 0, 0), NULL, 0, NULL,
"Default value for --units argument.\n")
cfg(global_si_unit_consistency_CFG, "si_unit_consistency", global_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_SI_UNIT_CONSISTENCY, vsn(2, 2, 54), NULL, 0, NULL,
cfg(global_si_unit_consistency_CFG, "si_unit_consistency", global_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_SI_UNIT_CONSISTENCY, vsn(2, 2, 54), NULL, 0, NULL,
"Distinguish between powers of 1024 and 1000 bytes.\n"
"The LVM commands distinguish between powers of 1024 bytes,\n"
"e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.\n"
"If scripts depend on the old behaviour, disable this setting\n"
"temporarily until they are updated.\n")
cfg(global_suffix_CFG, "suffix", global_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_SUFFIX, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_suffix_CFG, "suffix", global_CFG_SECTION, CFG_PROFILABLE, CFG_TYPE_BOOL, DEFAULT_SUFFIX, vsn(1, 0, 0), NULL, 0, NULL,
"Display unit suffix for sizes.\n"
"This setting has no effect if the units are in human-readable form\n"
"(global/units = \"h\") in which case the suffix is always displayed.\n")
cfg(global_activation_CFG, "activation", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ACTIVATION, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_activation_CFG, "activation", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ACTIVATION, vsn(1, 0, 0), NULL, 0, NULL,
"Enable/disable communication with the kernel device-mapper.\n"
"Disable to use the tools to manipulate LVM metadata without\n"
"activating any logical volumes. If the device-mapper driver\n"
@@ -974,69 +894,70 @@ cfg(global_activation_CFG, "activation", global_CFG_SECTION, CFG_DEFAULT_COMMENT
"the error messages.\n")
cfg(global_fallback_to_lvm1_CFG, "fallback_to_lvm1", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(1, 0, 18), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg(global_format_CFG, "format", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_FORMAT, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg_array(global_format_libraries_CFG, "format_libraries", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.")
cfg_array(global_segment_libraries_CFG, "segment_libraries", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 18), NULL, vsn(2, 3, 3), NULL, NULL)
cfg_array(global_segment_libraries_CFG, "segment_libraries", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 18), NULL, 0, NULL, NULL)
cfg(global_proc_CFG, "proc", global_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_PROC_DIR, vsn(1, 0, 0), NULL, 0, NULL,
cfg(global_proc_CFG, "proc", global_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_PROC_DIR, vsn(1, 0, 0), NULL, 0, NULL,
"Location of proc filesystem.\n")
cfg(global_etc_CFG, "etc", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_ETC_DIR, vsn(2, 2, 117), "@CONFDIR@", 0, NULL,
cfg(global_etc_CFG, "etc", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_ETC_DIR, vsn(2, 2, 117), "@CONFDIR@", 0, NULL,
"Location of /etc system configuration directory.\n")
cfg(global_locking_type_CFG, "locking_type", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 1, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
cfg(global_locking_type_CFG, "locking_type", global_CFG_SECTION, 0, CFG_TYPE_INT, 1, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
"This setting is no longer used.")
cfg(global_wait_for_locks_CFG, "wait_for_locks", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_WAIT_FOR_LOCKS, vsn(2, 2, 50), NULL, 0, NULL,
cfg(global_wait_for_locks_CFG, "wait_for_locks", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_WAIT_FOR_LOCKS, vsn(2, 2, 50), NULL, 0, NULL,
"When disabled, fail if a lock request would block.\n")
cfg(global_fallback_to_clustered_locking_CFG, "fallback_to_clustered_locking", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_FALLBACK_TO_CLUSTERED_LOCKING, vsn(2, 2, 42), NULL, vsn(2, 3, 0), NULL,
NULL)
cfg(global_fallback_to_clustered_locking_CFG, "fallback_to_clustered_locking", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_FALLBACK_TO_CLUSTERED_LOCKING, vsn(2, 2, 42), NULL, vsn(2, 3, 0), NULL,
"This setting is no longer used.\n")
cfg(global_fallback_to_local_locking_CFG, "fallback_to_local_locking", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_FALLBACK_TO_LOCAL_LOCKING, vsn(2, 2, 42), NULL, vsn(2, 3, 0), NULL,
NULL)
cfg(global_fallback_to_local_locking_CFG, "fallback_to_local_locking", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_FALLBACK_TO_LOCAL_LOCKING, vsn(2, 2, 42), NULL, vsn(2, 3, 0), NULL,
"This setting is no longer used.\n")
cfg(global_locking_dir_CFG, "locking_dir", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_LOCK_DIR, vsn(1, 0, 0), "@DEFAULT_LOCK_DIR@", 0, NULL,
cfg(global_locking_dir_CFG, "locking_dir", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_LOCK_DIR, vsn(1, 0, 0), "@DEFAULT_LOCK_DIR@", 0, NULL,
"Directory to use for LVM command file locks.\n"
"Local non-LV directory that holds file-based locks while commands are\n"
"in progress. A directory like /tmp that may get wiped on reboot is OK.\n")
cfg(global_prioritise_write_locks_CFG, "prioritise_write_locks", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_PRIORITISE_WRITE_LOCKS, vsn(2, 2, 52), NULL, 0, NULL,
cfg(global_prioritise_write_locks_CFG, "prioritise_write_locks", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_PRIORITISE_WRITE_LOCKS, vsn(2, 2, 52), NULL, 0, NULL,
"Allow quicker VG write access during high volume read access.\n"
"When there are competing read-only and read-write access requests for\n"
"a volume group's metadata, instead of always granting the read-only\n"
"requests immediately, delay them to allow the read-write requests to\n"
"be serviced. Without this setting, write access may be stalled by a\n"
"high volume of read-only requests. This option only affects file locks.\n")
"high volume of read-only requests. This option only affects\n"
"locking_type 1 viz. local file-based locking.\n")
cfg(global_library_dir_CFG, "library_dir", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL,
"Search this directory first for shared libraries.\n")
cfg(global_locking_library_CFG, "locking_library", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_LOCKING_LIB, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg(global_abort_on_internal_errors_CFG, "abort_on_internal_errors", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ABORT_ON_INTERNAL_ERRORS, vsn(2, 2, 57), NULL, 0, NULL,
cfg(global_abort_on_internal_errors_CFG, "abort_on_internal_errors", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ABORT_ON_INTERNAL_ERRORS, vsn(2, 2, 57), NULL, 0, NULL,
"Abort a command that encounters an internal error.\n"
"Treat any internal errors as fatal errors, aborting the process that\n"
"encountered the internal error. Please only enable for debugging.\n")
cfg(global_detect_internal_vg_cache_corruption_CFG, "detect_internal_vg_cache_corruption", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2, 2, 96), NULL, vsn(2, 2, 174), NULL,
NULL)
cfg(global_detect_internal_vg_cache_corruption_CFG, "detect_internal_vg_cache_corruption", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2, 2, 96), NULL, vsn(2, 2, 174), NULL,
"No longer used.\n")
cfg(global_metadata_read_only_CFG, "metadata_read_only", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_METADATA_READ_ONLY, vsn(2, 2, 75), NULL, 0, NULL,
cfg(global_metadata_read_only_CFG, "metadata_read_only", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_METADATA_READ_ONLY, vsn(2, 2, 75), NULL, 0, NULL,
"No operations that change on-disk metadata are permitted.\n"
"Additionally, read-only commands that encounter metadata in need of\n"
"repair will still be allowed to proceed exactly as if the repair had\n"
"been performed (except for the unchanged vg_seqno). Inappropriate\n"
"use could mess up your system, so seek advice first!\n")
cfg(global_mirror_segtype_default_CFG, "mirror_segtype_default", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_MIRROR_SEGTYPE, vsn(2, 2, 87), "@DEFAULT_MIRROR_SEGTYPE@", 0, NULL,
cfg(global_mirror_segtype_default_CFG, "mirror_segtype_default", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_MIRROR_SEGTYPE, vsn(2, 2, 87), "@DEFAULT_MIRROR_SEGTYPE@", 0, NULL,
"The segment type used by the short mirroring option -m.\n"
"The --type mirror|raid1 option overrides this setting.\n"
"#\n"
@@ -1061,7 +982,7 @@ cfg(global_mirror_segtype_default_CFG, "mirror_segtype_default", global_CFG_SECT
" fashion in a cluster.\n"
"#\n")
cfg(global_support_mirrored_mirror_log_CFG, "support_mirrored_mirror_log", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2, 3, 2), NULL, 0, NULL,
cfg(global_support_mirrored_mirror_log_CFG, "support_mirrored_mirror_log", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2, 3, 2), NULL, 0, NULL,
"Enable mirrored 'mirror' log type for testing.\n"
"#\n"
"This type is deprecated to create or convert to but can\n"
@@ -1071,7 +992,7 @@ cfg(global_support_mirrored_mirror_log_CFG, "support_mirrored_mirror_log", globa
"Not supported for regular operation!\n"
"\n")
cfg(global_raid10_segtype_default_CFG, "raid10_segtype_default", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_RAID10_SEGTYPE, vsn(2, 2, 99), "@DEFAULT_RAID10_SEGTYPE@", 0, NULL,
cfg(global_raid10_segtype_default_CFG, "raid10_segtype_default", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_RAID10_SEGTYPE, vsn(2, 2, 99), "@DEFAULT_RAID10_SEGTYPE@", 0, NULL,
"The segment type used by the -i -m combination.\n"
"The --type raid10|mirror option overrides this setting.\n"
"The --stripes/-i and --mirrors/-m options can both be specified\n"
@@ -1089,7 +1010,7 @@ cfg(global_raid10_segtype_default_CFG, "raid10_segtype_default", global_CFG_SECT
" in terms of providing redundancy and performance.\n"
"#\n")
cfg(global_sparse_segtype_default_CFG, "sparse_segtype_default", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_SPARSE_SEGTYPE, vsn(2, 2, 112), "@DEFAULT_SPARSE_SEGTYPE@", 0, NULL,
cfg(global_sparse_segtype_default_CFG, "sparse_segtype_default", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_SPARSE_SEGTYPE, vsn(2, 2, 112), "@DEFAULT_SPARSE_SEGTYPE@", 0, NULL,
"The segment type used by the -V -L combination.\n"
"The --type snapshot|thin option overrides this setting.\n"
"The combination of -V and -L options creates a sparse LV. There are\n"
@@ -1115,7 +1036,7 @@ cfg(global_lvdisplay_shows_full_device_path_CFG, "lvdisplay_shows_full_device_pa
"Previously this was always shown as /dev/vgname/lvname even when that\n"
"was never a valid path in the /dev filesystem.\n")
cfg(global_event_activation_CFG, "event_activation", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 1, vsn(2, 3, 1), 0, 0, NULL,
cfg(global_event_activation_CFG, "event_activation", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 1, vsn(2, 3, 1), 0, 0, NULL,
"Activate LVs based on system-generated device events.\n"
"When a device appears on the system, a system-generated event runs\n"
"the pvscan command to activate LVs if the new PV completes the VG.\n"
@@ -1124,16 +1045,16 @@ cfg(global_event_activation_CFG, "event_activation", global_CFG_SECTION, CFG_DEF
"When event_activation is disabled, the system will generally run\n"
"a direct activation command to activate LVs in complete VGs.\n")
cfg(global_use_lvmetad_CFG, "use_lvmetad", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2, 2, 93), 0, vsn(2, 3, 0), NULL,
NULL)
cfg(global_use_lvmetad_CFG, "use_lvmetad", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2, 2, 93), 0, vsn(2, 3, 0), NULL,
"This setting is no longer used.\n")
cfg(global_lvmetad_update_wait_time_CFG, "lvmetad_update_wait_time", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(2, 2, 151), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg(global_use_aio_CFG, "use_aio", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_AIO, vsn(2, 2, 183), NULL, 0, NULL,
"Use async I/O when reading and writing devices.\n")
cfg(global_use_lvmlockd_CFG, "use_lvmlockd", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(2, 2, 124), NULL, 0, NULL,
cfg(global_use_lvmlockd_CFG, "use_lvmlockd", global_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2, 2, 124), NULL, 0, NULL,
"Use lvmlockd for locking among hosts using LVM on shared storage.\n"
"Applicable only if LVM is compiled with lockd support in which\n"
"case there is also lvmlockd(8) man page available for more\n"
@@ -1237,20 +1158,11 @@ cfg(global_vdo_format_executable_CFG, "vdo_format_executable", global_CFG_SECTIO
cfg_array(global_vdo_format_options_CFG, "vdo_format_options", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_VDO_FORMAT_OPTIONS_CONFIG, VDO_1ST_VSN, NULL, 0, NULL,
"List of options passed added to standard vdoformat command.\n")
cfg_array(global_vdo_disabled_features_CFG, "vdo_disabled_features", global_CFG_SECTION, CFG_ALLOW_EMPTY | CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 3, 11), NULL, 0, NULL,
"Features to not use in the vdo driver.\n"
"This can be helpful for testing, or to avoid using a feature that is\n"
"causing problems. Features include: online_rename\n"
"#\n"
"Example\n"
"vdo_disabled_features = [ \"online_rename\" ]\n"
"#\n")
cfg(global_fsadm_executable_CFG, "fsadm_executable", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_FSADM_PATH, vsn(2, 2, 170), "@FSADM_PATH@", 0, NULL,
"The full path to the fsadm command.\n"
"LVM uses this command to help with lvresize -r operations.\n")
cfg(global_system_id_source_CFG, "system_id_source", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_SYSTEM_ID_SOURCE, vsn(2, 2, 117), NULL, 0, NULL,
cfg(global_system_id_source_CFG, "system_id_source", global_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_SYSTEM_ID_SOURCE, vsn(2, 2, 117), NULL, 0, NULL,
"The method LVM uses to set the local system ID.\n"
"Volume Groups can also be given a system ID (by vgcreate, vgchange,\n"
"or vgimport.) A VG on shared storage devices is accessible only to\n"
@@ -1280,13 +1192,13 @@ cfg(global_system_id_file_CFG, "system_id_file", global_CFG_SECTION, CFG_DEFAULT
"This is used when system_id_source is set to 'file'.\n"
"Comments starting with the character # are ignored.\n")
cfg(activation_checks_CFG, "checks", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_ACTIVATION_CHECKS, vsn(2, 2, 86), NULL, 0, NULL,
cfg(activation_checks_CFG, "checks", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ACTIVATION_CHECKS, vsn(2, 2, 86), NULL, 0, NULL,
"Perform internal checks of libdevmapper operations.\n"
"Useful for debugging problems with activation. Some of the checks may\n"
"be expensive, so it's best to use this only when there seems to be a\n"
"problem.\n")
cfg(global_use_lvmpolld_CFG, "use_lvmpolld", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_LVMPOLLD, vsn(2, 2, 120), "@DEFAULT_USE_LVMPOLLD@", 0, NULL,
cfg(global_use_lvmpolld_CFG, "use_lvmpolld", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_LVMPOLLD, vsn(2, 2, 120), "@DEFAULT_USE_LVMPOLLD@", 0, NULL,
"Use lvmpolld to supervise long running LVM commands.\n"
"When enabled, control of long running LVM commands is transferred\n"
"from the original LVM command to the lvmpolld daemon. This allows\n"
@@ -1299,20 +1211,12 @@ cfg(global_use_lvmpolld_CFG, "use_lvmpolld", global_CFG_SECTION, CFG_DEFAULT_COM
"commands will supervise long running operations by forking themselves.\n"
"Applicable only if LVM is compiled with lvmpolld support.\n")
cfg(global_notify_dbus_CFG, "notify_dbus", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_NOTIFY_DBUS, vsn(2, 2, 145), NULL, 0, NULL,
cfg(global_notify_dbus_CFG, "notify_dbus", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_NOTIFY_DBUS, vsn(2, 2, 145), NULL, 0, NULL,
"Enable D-Bus notification from LVM commands.\n"
"When enabled, an LVM command that changes PVs, changes VG metadata,\n"
"or changes the activation state of an LV will send a notification.\n")
cfg(global_io_memory_size_CFG, "io_memory_size", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_IO_MEMORY_SIZE_KB, vsn(2, 3, 2), NULL, 0, NULL,
"The amount of memory in KiB that LVM allocates to perform disk io.\n"
"LVM performance may benefit from more io memory when there are many\n"
"disks or VG metadata is large. Increasing this size may be necessary\n"
"when a single copy of VG metadata is larger than the current setting.\n"
"This value should usually not be decreased from the default; setting\n"
"it too low can result in lvm failing to read VGs.\n")
cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_UDEV_SYNC, vsn(2, 2, 51), NULL, 0, NULL,
cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_UDEV_SYNC, vsn(2, 2, 51), NULL, 0, NULL,
"Use udev notifications to synchronize udev and LVM.\n"
"The --nodevsync option overrides this setting.\n"
"When disabled, LVM commands will not wait for notifications from\n"
@@ -1322,25 +1226,25 @@ cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, CFG_DEFAULT_C
"running, and LVM processes are waiting for udev, run the command\n"
"'dmsetup udevcomplete_all' to wake them up.\n")
cfg(activation_udev_rules_CFG, "udev_rules", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_UDEV_RULES, vsn(2, 2, 57), NULL, 0, NULL,
cfg(activation_udev_rules_CFG, "udev_rules", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_UDEV_RULES, vsn(2, 2, 57), NULL, 0, NULL,
"Use udev rules to manage LV device nodes and symlinks.\n"
"When disabled, LVM will manage the device nodes and symlinks for\n"
"active LVs itself. Manual intervention may be required if this\n"
"setting is changed while LVs are active.\n")
cfg(activation_verify_udev_operations_CFG, "verify_udev_operations", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_VERIFY_UDEV_OPERATIONS, vsn(2, 2, 86), NULL, 0, NULL,
cfg(activation_verify_udev_operations_CFG, "verify_udev_operations", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_VERIFY_UDEV_OPERATIONS, vsn(2, 2, 86), NULL, 0, NULL,
"Use extra checks in LVM to verify udev operations.\n"
"This enables additional checks (and if necessary, repairs) on entries\n"
"in the device directory after udev has completed processing its\n"
"events. Useful for diagnosing problems with LVM/udev interactions.\n")
cfg(activation_retry_deactivation_CFG, "retry_deactivation", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_RETRY_DEACTIVATION, vsn(2, 2, 89), NULL, 0, NULL,
cfg(activation_retry_deactivation_CFG, "retry_deactivation", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_RETRY_DEACTIVATION, vsn(2, 2, 89), NULL, 0, NULL,
"Retry failed LV deactivation.\n"
"If LV deactivation fails, LVM will retry for a few seconds before\n"
"failing. This may happen because a process run from a quick udev rule\n"
"temporarily opened the device.\n")
cfg(activation_missing_stripe_filler_CFG, "missing_stripe_filler", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_STRIPE_FILLER, vsn(1, 0, 0), NULL, 0, NULL,
cfg(activation_missing_stripe_filler_CFG, "missing_stripe_filler", activation_CFG_SECTION, CFG_ADVANCED, CFG_TYPE_STRING, DEFAULT_STRIPE_FILLER, vsn(1, 0, 0), NULL, 0, NULL,
"Method to fill missing stripes when activating an incomplete LV.\n"
"Using 'error' will make inaccessible parts of the device return I/O\n"
"errors on access. Using 'zero' will return success (and zero) on I/O\n"
@@ -1349,21 +1253,21 @@ cfg(activation_missing_stripe_filler_CFG, "missing_stripe_filler", activation_CF
"other than 'error' with mirrored or snapshotted volumes is likely to\n"
"result in data corruption.\n")
cfg(activation_use_linear_target_CFG, "use_linear_target", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_LINEAR_TARGET, vsn(2, 2, 89), NULL, 0, NULL,
cfg(activation_use_linear_target_CFG, "use_linear_target", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_LINEAR_TARGET, vsn(2, 2, 89), NULL, 0, NULL,
"Use the linear target to optimize single stripe LVs.\n"
"When disabled, the striped target is used. The linear target is an\n"
"optimised version of the striped target that only handles a single\n"
"stripe.\n")
cfg(activation_reserved_stack_CFG, "reserved_stack", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RESERVED_STACK, vsn(1, 0, 0), NULL, 0, NULL,
cfg(activation_reserved_stack_CFG, "reserved_stack", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RESERVED_STACK, vsn(1, 0, 0), NULL, 0, NULL,
"Stack size in KiB to reserve for use while devices are suspended.\n"
"Insufficent reserve risks I/O deadlock during device suspension.\n")
cfg(activation_reserved_memory_CFG, "reserved_memory", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RESERVED_MEMORY, vsn(1, 0, 0), NULL, 0, NULL,
cfg(activation_reserved_memory_CFG, "reserved_memory", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RESERVED_MEMORY, vsn(1, 0, 0), NULL, 0, NULL,
"Memory size in KiB to reserve for use while devices are suspended.\n"
"Insufficent reserve risks I/O deadlock during device suspension.\n")
cfg(activation_process_priority_CFG, "process_priority", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_PROCESS_PRIORITY, vsn(1, 0, 0), NULL, 0, NULL,
cfg(activation_process_priority_CFG, "process_priority", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_PROCESS_PRIORITY, vsn(1, 0, 0), NULL, 0, NULL,
"Nice value used while devices are suspended.\n"
"Use a high priority so that LVs are suspended\n"
"for the shortest possible time.\n")
@@ -1453,11 +1357,11 @@ cfg_array(activation_read_only_volume_list_CFG, "read_only_volume_list", activat
"read_only_volume_list = [ \"vg1\", \"vg2/lvol1\", \"@tag1\", \"@*\" ]\n"
"#\n")
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
cfg(activation_mirror_region_size_CFG, "mirror_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(1, 0, 0), NULL, vsn(2, 2, 99),
"This has been replaced by the activation/raid_region_size setting.\n",
"Size in KiB of each raid or mirror synchronization region.\n")
cfg(activation_raid_region_size_CFG, "raid_region_size", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(2, 2, 99), NULL, 0, NULL,
cfg(activation_raid_region_size_CFG, "raid_region_size", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_RAID_REGION_SIZE, vsn(2, 2, 99), NULL, 0, NULL,
"Size in KiB of each raid or mirror synchronization region.\n"
"The clean/dirty state of data is tracked for each region.\n"
"The value is rounded down to a power of two if necessary, and\n"
@@ -1472,7 +1376,7 @@ cfg(activation_error_when_full_CFG, "error_when_full", activation_CFG_SECTION, C
"thin pool data space is extended. New thin pools are assigned the\n"
"behavior defined here.\n")
cfg(activation_readahead_CFG, "readahead", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_READ_AHEAD, vsn(1, 0, 23), NULL, 0, NULL,
cfg(activation_readahead_CFG, "readahead", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_READ_AHEAD, vsn(1, 0, 23), NULL, 0, NULL,
"Setting to use when there is no readahead setting in metadata.\n"
"#\n"
"Accepted values:\n"
@@ -1482,7 +1386,7 @@ cfg(activation_readahead_CFG, "readahead", activation_CFG_SECTION, CFG_DEFAULT_C
" Use default value chosen by kernel.\n"
"#\n")
cfg(activation_raid_fault_policy_CFG, "raid_fault_policy", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_RAID_FAULT_POLICY, vsn(2, 2, 89), NULL, 0, NULL,
cfg(activation_raid_fault_policy_CFG, "raid_fault_policy", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_RAID_FAULT_POLICY, vsn(2, 2, 89), NULL, 0, NULL,
"Defines how a device failure in a RAID LV is handled.\n"
"This includes LVs that have the following segment types:\n"
"raid1, raid4, raid5*, and raid6*.\n"
@@ -1503,7 +1407,7 @@ cfg(activation_raid_fault_policy_CFG, "raid_fault_policy", activation_CFG_SECTIO
" replace faulty devices.\n"
"#\n")
cfg_runtime(activation_mirror_image_fault_policy_CFG, "mirror_image_fault_policy", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, vsn(2, 2, 57), 0, NULL,
cfg_runtime(activation_mirror_image_fault_policy_CFG, "mirror_image_fault_policy", activation_CFG_SECTION, 0, CFG_TYPE_STRING, vsn(2, 2, 57), 0, NULL,
"Defines how a device failure in a 'mirror' LV is handled.\n"
"An LV with the 'mirror' segment type is composed of mirror images\n"
"(copies) and a mirror log. A disk log ensures that a mirror LV does\n"
@@ -1539,16 +1443,16 @@ cfg_runtime(activation_mirror_image_fault_policy_CFG, "mirror_image_fault_policy
" replacement.\n"
"#\n")
cfg(activation_mirror_log_fault_policy_CFG, "mirror_log_fault_policy", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_MIRROR_LOG_FAULT_POLICY, vsn(1, 2, 18), NULL, 0, NULL,
cfg(activation_mirror_log_fault_policy_CFG, "mirror_log_fault_policy", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_MIRROR_LOG_FAULT_POLICY, vsn(1, 2, 18), NULL, 0, NULL,
"Defines how a device failure in a 'mirror' log LV is handled.\n"
"The mirror_image_fault_policy description for mirrored LVs also\n"
"applies to mirrored log LVs.\n")
cfg(activation_mirror_device_fault_policy_CFG, "mirror_device_fault_policy", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_MIRROR_DEVICE_FAULT_POLICY, vsn(1, 2, 10), NULL, vsn(2, 2, 57),
cfg(activation_mirror_device_fault_policy_CFG, "mirror_device_fault_policy", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_MIRROR_DEVICE_FAULT_POLICY, vsn(1, 2, 10), NULL, vsn(2, 2, 57),
"This has been replaced by the activation/mirror_image_fault_policy setting.\n",
"Define how a device failure affecting a mirror is handled.\n")
cfg(activation_snapshot_autoextend_threshold_CFG, "snapshot_autoextend_threshold", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_THRESHOLD, vsn(2, 2, 75), NULL, 0, NULL,
cfg(activation_snapshot_autoextend_threshold_CFG, "snapshot_autoextend_threshold", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_THRESHOLD, vsn(2, 2, 75), NULL, 0, NULL,
"Auto-extend a snapshot when its usage exceeds this percent.\n"
"Setting this to 100 disables automatic extension.\n"
"The minimum value is 50 (a smaller value is treated as 50.)\n"
@@ -1562,7 +1466,7 @@ cfg(activation_snapshot_autoextend_threshold_CFG, "snapshot_autoextend_threshold
"snapshot_autoextend_threshold = 70\n"
"#\n")
cfg(activation_snapshot_autoextend_percent_CFG, "snapshot_autoextend_percent", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_PERCENT, vsn(2, 2, 75), NULL, 0, NULL,
cfg(activation_snapshot_autoextend_percent_CFG, "snapshot_autoextend_percent", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_SNAPSHOT_AUTOEXTEND_PERCENT, vsn(2, 2, 75), NULL, 0, NULL,
"Auto-extending a snapshot adds this percent extra space.\n"
"The amount of additional space added to a snapshot is this\n"
"percent of its current size.\n"
@@ -1574,7 +1478,7 @@ cfg(activation_snapshot_autoextend_percent_CFG, "snapshot_autoextend_percent", a
"snapshot_autoextend_percent = 20\n"
"#\n")
cfg(activation_thin_pool_autoextend_threshold_CFG, "thin_pool_autoextend_threshold", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_THRESHOLD, vsn(2, 2, 89), NULL, 0, NULL,
cfg(activation_thin_pool_autoextend_threshold_CFG, "thin_pool_autoextend_threshold", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_THRESHOLD, vsn(2, 2, 89), NULL, 0, NULL,
"Auto-extend a thin pool when its usage exceeds this percent.\n"
"Setting this to 100 disables automatic extension.\n"
"The minimum value is 50 (a smaller value is treated as 50.)\n"
@@ -1588,7 +1492,7 @@ cfg(activation_thin_pool_autoextend_threshold_CFG, "thin_pool_autoextend_thresho
"thin_pool_autoextend_threshold = 70\n"
"#\n")
cfg(activation_thin_pool_autoextend_percent_CFG, "thin_pool_autoextend_percent", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_PERCENT, vsn(2, 2, 89), NULL, 0, NULL,
cfg(activation_thin_pool_autoextend_percent_CFG, "thin_pool_autoextend_percent", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_THIN_POOL_AUTOEXTEND_PERCENT, vsn(2, 2, 89), NULL, 0, NULL,
"Auto-extending a thin pool adds this percent extra space.\n"
"The amount of additional space added to a thin pool is this\n"
"percent of its current size.\n"
@@ -1600,7 +1504,7 @@ cfg(activation_thin_pool_autoextend_percent_CFG, "thin_pool_autoextend_percent",
"thin_pool_autoextend_percent = 20\n"
"#\n")
cfg(activation_vdo_pool_autoextend_threshold_CFG, "vdo_pool_autoextend_threshold", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_VDO_POOL_AUTOEXTEND_THRESHOLD, VDO_1ST_VSN, NULL, 0, NULL,
cfg(activation_vdo_pool_autoextend_threshold_CFG, "vdo_pool_autoextend_threshold", activation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA, CFG_TYPE_INT, DEFAULT_VDO_POOL_AUTOEXTEND_THRESHOLD, VDO_1ST_VSN, NULL, 0, NULL,
"Auto-extend a VDO pool when its usage exceeds this percent.\n"
"Setting this to 100 disables automatic extension.\n"
"The minimum value is 50 (a smaller value is treated as 50.)\n"
@@ -1640,17 +1544,17 @@ cfg_array(activation_mlock_filter_CFG, "mlock_filter", activation_CFG_SECTION, C
"mlock_filter = [ \"locale/locale-archive\", \"gconv/gconv-modules.cache\" ]\n"
"#\n")
cfg(activation_use_mlockall_CFG, "use_mlockall", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_MLOCKALL, vsn(2, 2, 62), NULL, 0, NULL,
cfg(activation_use_mlockall_CFG, "use_mlockall", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_MLOCKALL, vsn(2, 2, 62), NULL, 0, NULL,
"Use the old behavior of mlockall to pin all memory.\n"
"Prior to version 2.02.62, LVM used mlockall() to pin the whole\n"
"process's memory while activating devices.\n")
cfg(activation_monitoring_CFG, "monitoring", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_DMEVENTD_MONITOR, vsn(2, 2, 63), NULL, 0, NULL,
cfg(activation_monitoring_CFG, "monitoring", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_DMEVENTD_MONITOR, vsn(2, 2, 63), NULL, 0, NULL,
"Monitor LVs that are activated.\n"
"The --ignoremonitoring option overrides this setting.\n"
"When enabled, LVM will ask dmeventd to monitor activated LVs.\n")
cfg(activation_polling_interval_CFG, "polling_interval", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_INTERVAL, vsn(2, 2, 63), NULL, 0, NULL,
cfg(activation_polling_interval_CFG, "polling_interval", activation_CFG_SECTION, 0, CFG_TYPE_INT, DEFAULT_INTERVAL, vsn(2, 2, 63), NULL, 0, NULL,
"Check pvmove or lvconvert progress at this interval (seconds).\n"
"When pvmove or lvconvert must wait for the kernel to finish\n"
"synchronising or merging data, they check and report progress at\n"
@@ -1667,7 +1571,7 @@ cfg(activation_auto_set_activation_skip_CFG, "auto_set_activation_skip", activat
"flag set. When this setting is enabled, the activation skip flag is\n"
"set on new thin snapshot LVs.\n")
cfg(activation_mode_CFG, "activation_mode", activation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_ACTIVATION_MODE, vsn(2,2,108), NULL, 0, NULL,
cfg(activation_mode_CFG, "activation_mode", activation_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_ACTIVATION_MODE, vsn(2,2,108), NULL, 0, NULL,
"How LVs with missing devices are activated.\n"
"The --activationmode option overrides this setting.\n"
"#\n"
@@ -1741,7 +1645,7 @@ cfg(metadata_vgmetadatacopies_CFG, "vgmetadatacopies", metadata_CFG_SECTION, CFG
"and allows you to control which metadata areas are used at the\n"
"individual PV level using pvchange --metadataignore y|n.\n")
cfg_runtime(metadata_pvmetadatasize_CFG, "pvmetadatasize", metadata_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(1, 0, 0), 0, NULL,
cfg_runtime(metadata_pvmetadatasize_CFG, "pvmetadatasize", metadata_CFG_SECTION, CFG_DEFAULT_COMMENTED | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(1, 0, 0), 0, NULL,
"The default size of the metadata area in units of 512 byte sectors.\n"
"The metadata area begins at an offset of the page size from the start\n"
"of the device. The first PE is by default at 1 MiB from the start of\n"
@@ -1764,7 +1668,7 @@ cfg(metadata_pvmetadataignore_CFG, "pvmetadataignore", metadata_CFG_SECTION, CFG
cfg(metadata_stripesize_CFG, "stripesize", metadata_CFG_SECTION, CFG_ADVANCED | CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_STRIPESIZE, vsn(1, 0, 0), NULL, 0, NULL, NULL)
cfg_array(metadata_dirs_CFG, "dirs", metadata_CFG_SECTION, CFG_ADVANCED | CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, vsn(2, 3, 0), NULL,
NULL)
"This setting is no longer used.\n")
cfg_section(metadata_disk_areas_CFG_SUBSECTION, "disk_areas", metadata_CFG_SECTION, CFG_UNSUPPORTED | CFG_DEFAULT_COMMENTED, vsn(1, 0, 0), vsn(2, 3, 0), NULL, NULL)
cfg_section(disk_area_CFG_SUBSECTION, "disk_area", metadata_disk_areas_CFG_SUBSECTION, CFG_NAME_VARIABLE | CFG_UNSUPPORTED | CFG_DEFAULT_COMMENTED, vsn(1, 0, 0), vsn(2, 3, 0), NULL, NULL)
@@ -2089,7 +1993,7 @@ cfg(report_two_word_unknown_device_CFG, "two_word_unknown_device", report_CFG_SE
"Use the two words 'unknown device' in place of '[unknown]'.\n"
"This is displayed when the device for a PV is not known.\n")
cfg(dmeventd_mirror_library_CFG, "mirror_library", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_MIRROR_LIB, vsn(1, 2, 3), NULL, 0, NULL,
cfg(dmeventd_mirror_library_CFG, "mirror_library", dmeventd_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_DMEVENTD_MIRROR_LIB, vsn(1, 2, 3), NULL, 0, NULL,
"The library dmeventd uses when monitoring a mirror device.\n"
"libdevmapper-event-lvm2mirror.so attempts to recover from\n"
"failures. It removes failed devices from a volume group and\n"
@@ -2098,13 +2002,13 @@ cfg(dmeventd_mirror_library_CFG, "mirror_library", dmeventd_CFG_SECTION, CFG_DEF
cfg(dmeventd_raid_library_CFG, "raid_library", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_RAID_LIB, vsn(2, 2, 87), NULL, 0, NULL, NULL)
cfg(dmeventd_snapshot_library_CFG, "snapshot_library", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_SNAPSHOT_LIB, vsn(1, 2, 26), NULL, 0, NULL,
cfg(dmeventd_snapshot_library_CFG, "snapshot_library", dmeventd_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_DMEVENTD_SNAPSHOT_LIB, vsn(1, 2, 26), NULL, 0, NULL,
"The library dmeventd uses when monitoring a snapshot device.\n"
"libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots\n"
"and emits a warning through syslog when the usage exceeds 80%. The\n"
"warning is repeated when 85%, 90% and 95% of the snapshot is filled.\n")
cfg(dmeventd_thin_library_CFG, "thin_library", dmeventd_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_DMEVENTD_THIN_LIB, vsn(2, 2, 89), NULL, 0, NULL,
cfg(dmeventd_thin_library_CFG, "thin_library", dmeventd_CFG_SECTION, 0, CFG_TYPE_STRING, DEFAULT_DMEVENTD_THIN_LIB, vsn(2, 2, 89), NULL, 0, NULL,
"The library dmeventd uses when monitoring a thin device.\n"
"libdevmapper-event-lvm2thin.so monitors the filling of a pool\n"
"and emits a warning through syslog when the usage exceeds 80%. The\n"
@@ -2190,4 +2094,4 @@ cfg(local_host_id_CFG, "host_id", local_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_
"This must be unique among all hosts, and must be between 1 and 2000.\n"
"Applicable only if LVM is compiled with lockd support\n")
cfg(CFG_COUNT, NULL, root_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, 0, vsn(0, 0, 0), NULL, 0, NULL, NULL)
cfg(CFG_COUNT, NULL, root_CFG_SECTION, 0, CFG_TYPE_INT, 0, vsn(0, 0, 0), NULL, 0, NULL, NULL)

View File

@@ -118,20 +118,15 @@
#define DEFAULT_THIN_REPAIR_OPTION1 ""
#define DEFAULT_THIN_REPAIR_OPTIONS_CONFIG "#S" DEFAULT_THIN_REPAIR_OPTION1
#define DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS 0
#define DEFAULT_THIN_POOL_CROP_METADATA 0
#define DEFAULT_THIN_POOL_MAX_METADATA_SIZE_V1_KB (UINT64_C(255) * ((1 << 14) - 64) * 4) /* KB */ /* 0x3f8040 blocks */
#define DEFAULT_THIN_POOL_MAX_METADATA_SIZE (DM_THIN_MAX_METADATA_SIZE / 2) /* KB */
#define DEFAULT_THIN_POOL_MIN_METADATA_SIZE 2048 /* KB */
#define DEFAULT_THIN_POOL_OPTIMAL_METADATA_SIZE (128 * 1024) /* KB */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_POLICY "generic"
#define DEFAULT_THIN_POOL_CHUNK_SIZE 64 /* KB */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_PERFORMANCE 512 /* KB */
/* Chunk size big enough that it no longer needs to jump by power-of-2 */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_ALIGNED 1024 /* KB */
#define DEFAULT_THIN_POOL_DISCARDS "passdown"
#define DEFAULT_THIN_POOL_ZERO 1
#define DEFAULT_POOL_METADATA_SPARE 1 /* thin + cache */
#define DEFAULT_ZERO_METADATA 1 /* thin + cache */
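For reference, the DEFAULT_THIN_POOL_MAX_METADATA_SIZE_V1_KB value above works out as follows (illustrative arithmetic, not part of the change):

/* 255 * ((1 << 14) - 64) = 4,161,600 = 0x3F8040 metadata blocks;
 * at 4 KiB per block that is 16,646,400 KiB, roughly 15.88 GiB. */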
#ifdef CACHE_CHECK_NEEDS_CHECK
# define DEFAULT_CACHE_CHECK_OPTION1 "-q"
@@ -167,7 +162,7 @@
#define DEFAULT_VDO_INDEX_MEMORY_SIZE_MB (DM_VDO_INDEX_MEMORY_SIZE_MINIMUM_MB)
#define DEFAULT_VDO_SLAB_SIZE_MB (2 * 1024) // 2GiB ... 19 slabbits
#define DEFAULT_VDO_ACK_THREADS (1)
#define DEFAULT_VDO_BIO_THREADS (4)
#define DEFAULT_VDO_BIO_THREADS (1)
#define DEFAULT_VDO_BIO_ROTATION (64)
#define DEFAULT_VDO_CPU_THREADS (2)
#define DEFAULT_VDO_HASH_ZONE_THREADS (1)
@@ -226,7 +221,7 @@
#define DEFAULT_VERBOSE 0
#define DEFAULT_SILENT 0
#define DEFAULT_LOGLEVEL 0
#define DEFAULT_INDENT 0
#define DEFAULT_INDENT 1
#define DEFAULT_ABORT_ON_INTERNAL_ERRORS 0
#define DEFAULT_UNITS "r"
#define DEFAULT_SUFFIX 1
@@ -315,17 +310,8 @@
#define DEFAULT_VDO_POOL_AUTOEXTEND_THRESHOLD 100
#define DEFAULT_VDO_POOL_AUTOEXTEND_PERCENT 20
#define DEFAULT_SCAN_LVS 0
#define DEFAULT_SCAN_LVS 1
#define DEFAULT_HINTS "all"
#define DEFAULT_IO_MEMORY_SIZE_KB 8192
#define DEFAULT_MD_COMPONENT_CHECKS "auto"
#define DEFAULT_USE_DEVICES_FILE 0
#define DEFAULT_DEVICES_FILE "system.devices"
#define DEFAULT_SEARCH_FOR_DEVNAMES "auto"
#endif /* _LVM_DEFAULTS_H */

View File

@@ -39,32 +39,32 @@ static uint64_t _min(uint64_t lhs, uint64_t rhs)
//----------------------------------------------------------------
void bcache_prefetch_bytes(struct bcache *cache, int di, uint64_t start, size_t len)
void bcache_prefetch_bytes(struct bcache *cache, int fd, uint64_t start, size_t len)
{
block_address bb, be;
byte_range_to_block_range(cache, start, len, &bb, &be);
while (bb < be) {
bcache_prefetch(cache, di, bb);
bcache_prefetch(cache, fd, bb);
bb++;
}
}
//----------------------------------------------------------------
bool bcache_read_bytes(struct bcache *cache, int di, uint64_t start, size_t len, void *data)
bool bcache_read_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data)
{
struct block *b;
block_address bb, be;
uint64_t block_size = bcache_block_sectors(cache) << SECTOR_SHIFT;
uint64_t block_offset = start % block_size;
bcache_prefetch_bytes(cache, di, start, len);
bcache_prefetch_bytes(cache, fd, start, len);
byte_range_to_block_range(cache, start, len, &bb, &be);
for (; bb != be; bb++) {
if (!bcache_get(cache, di, bb, 0, &b))
if (!bcache_get(cache, fd, bb, 0, &b))
return false;
size_t blen = _min(block_size - block_offset, len);
@@ -79,21 +79,6 @@ bool bcache_read_bytes(struct bcache *cache, int di, uint64_t start, size_t len,
return true;
}
bool bcache_invalidate_bytes(struct bcache *cache, int di, uint64_t start, size_t len)
{
block_address bb, be;
bool result = true;
byte_range_to_block_range(cache, start, len, &bb, &be);
for (; bb != be; bb++) {
if (!bcache_invalidate(cache, di, bb))
result = false;
}
return result;
}
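A hypothetical caller of the byte-level helpers above might look like the sketch below; the cache and the device index/descriptor (di) are assumed to have been set up elsewhere, and error handling is reduced to a log message:

	char buf[512];

	/* Read one 512-byte chunk starting at byte offset 4096. */
	if (!bcache_read_bytes(cache, di, 4096, sizeof(buf), buf))
		log_error("Failed to read bytes from device.");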
//----------------------------------------------------------------
// Writing bytes and zeroing bytes are very similar, so we factor out
@@ -101,8 +86,8 @@ bool bcache_invalidate_bytes(struct bcache *cache, int di, uint64_t start, size_
struct updater;
typedef bool (*partial_update_fn)(struct updater *u, int di, block_address bb, uint64_t offset, size_t len);
typedef bool (*whole_update_fn)(struct updater *u, int di, block_address bb, block_address be);
typedef bool (*partial_update_fn)(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len);
typedef bool (*whole_update_fn)(struct updater *u, int fd, block_address bb, block_address be);
struct updater {
struct bcache *cache;
@@ -111,7 +96,7 @@ struct updater {
void *data;
};
static bool _update_bytes(struct updater *u, int di, uint64_t start, size_t len)
static bool _update_bytes(struct updater *u, int fd, uint64_t start, size_t len)
{
struct bcache *cache = u->cache;
block_address bb, be;
@@ -124,12 +109,12 @@ static bool _update_bytes(struct updater *u, int di, uint64_t start, size_t len)
// If the last block is partial, we will require a read, so let's
// prefetch it.
if ((start + len) % block_size)
bcache_prefetch(cache, di, (start + len) / block_size);
bcache_prefetch(cache, fd, (start + len) / block_size);
// First block may be partial
if (block_offset) {
size_t blen = _min(block_size - block_offset, len);
if (!u->partial_fn(u, di, bb, block_offset, blen))
if (!u->partial_fn(u, fd, bb, block_offset, blen))
return false;
len -= blen;
@@ -141,7 +126,7 @@ static bool _update_bytes(struct updater *u, int di, uint64_t start, size_t len)
// Now we write out a set of whole blocks
nr_whole = len / block_size;
if (!u->whole_fn(u, di, bb, bb + nr_whole))
if (!u->whole_fn(u, fd, bb, bb + nr_whole))
return false;
bb += nr_whole;
@@ -151,17 +136,17 @@ static bool _update_bytes(struct updater *u, int di, uint64_t start, size_t len)
return true;
// Finally we write a partial end block
return u->partial_fn(u, di, bb, 0, len);
return u->partial_fn(u, fd, bb, 0, len);
}
//----------------------------------------------------------------
static bool _write_partial(struct updater *u, int di, block_address bb,
static bool _write_partial(struct updater *u, int fd, block_address bb,
uint64_t offset, size_t len)
{
struct block *b;
if (!bcache_get(u->cache, di, bb, GF_DIRTY, &b))
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memcpy(((unsigned char *) b->data) + offset, u->data, len);
@@ -171,7 +156,7 @@ static bool _write_partial(struct updater *u, int di, block_address bb,
return true;
}
static bool _write_whole(struct updater *u, int di, block_address bb, block_address be)
static bool _write_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
uint64_t block_size = bcache_block_sectors(u->cache) << SECTOR_SHIFT;
@@ -179,7 +164,7 @@ static bool _write_whole(struct updater *u, int di, block_address bb, block_addr
for (; bb != be; bb++) {
// We don't need to read the block since we are overwriting
// it completely.
if (!bcache_get(u->cache, di, bb, GF_ZERO, &b))
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
memcpy(b->data, u->data, block_size);
u->data = ((unsigned char *) u->data) + block_size;
@@ -189,7 +174,7 @@ static bool _write_whole(struct updater *u, int di, block_address bb, block_addr
return true;
}
bool bcache_write_bytes(struct bcache *cache, int di, uint64_t start, size_t len, void *data)
bool bcache_write_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data)
{
struct updater u;
@@ -198,16 +183,16 @@ bool bcache_write_bytes(struct bcache *cache, int di, uint64_t start, size_t len
u.whole_fn = _write_whole;
u.data = data;
return _update_bytes(&u, di, start, len);
return _update_bytes(&u, fd, start, len);
}
//----------------------------------------------------------------
static bool _zero_partial(struct updater *u, int di, block_address bb, uint64_t offset, size_t len)
static bool _zero_partial(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len)
{
struct block *b;
if (!bcache_get(u->cache, di, bb, GF_DIRTY, &b))
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memset(((unsigned char *) b->data) + offset, 0, len);
@@ -216,12 +201,12 @@ static bool _zero_partial(struct updater *u, int di, block_address bb, uint64_t
return true;
}
static bool _zero_whole(struct updater *u, int di, block_address bb, block_address be)
static bool _zero_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
for (; bb != be; bb++) {
if (!bcache_get(u->cache, di, bb, GF_ZERO, &b))
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
bcache_put(b);
}
@@ -229,7 +214,7 @@ static bool _zero_whole(struct updater *u, int di, block_address bb, block_addre
return true;
}
bool bcache_zero_bytes(struct bcache *cache, int di, uint64_t start, size_t len)
bool bcache_zero_bytes(struct bcache *cache, int fd, uint64_t start, size_t len)
{
struct updater u;
@@ -238,17 +223,17 @@ bool bcache_zero_bytes(struct bcache *cache, int di, uint64_t start, size_t len)
u.whole_fn = _zero_whole;
u.data = NULL;
return _update_bytes(&u, di, start, len);
return _update_bytes(&u, fd, start, len);
}
//----------------------------------------------------------------
static bool _set_partial(struct updater *u, int di, block_address bb, uint64_t offset, size_t len)
static bool _set_partial(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len)
{
struct block *b;
uint8_t val = *((uint8_t *) u->data);
if (!bcache_get(u->cache, di, bb, GF_DIRTY, &b))
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memset(((unsigned char *) b->data) + offset, val, len);
@@ -257,14 +242,14 @@ static bool _set_partial(struct updater *u, int di, block_address bb, uint64_t o
return true;
}
static bool _set_whole(struct updater *u, int di, block_address bb, block_address be)
static bool _set_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
uint8_t val = *((uint8_t *) u->data);
uint64_t len = bcache_block_sectors(u->cache) * 512;
for (; bb != be; bb++) {
if (!bcache_get(u->cache, di, bb, GF_ZERO, &b))
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
memset((unsigned char *) b->data, val, len);
bcache_put(b);
@@ -273,7 +258,7 @@ static bool _set_whole(struct updater *u, int di, block_address bb, block_addres
return true;
}
bool bcache_set_bytes(struct bcache *cache, int di, uint64_t start, size_t len, uint8_t val)
bool bcache_set_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, uint8_t val)
{
struct updater u;
@@ -282,6 +267,6 @@ bool bcache_set_bytes(struct bcache *cache, int di, uint64_t start, size_t len,
u.whole_fn = _set_whole;
u.data = &val;
return _update_bytes(&u, di, start, len);
return _update_bytes(&u, fd, start, len);
}

View File

@@ -33,16 +33,11 @@
#define SECTOR_SHIFT 9L
#define FD_TABLE_INC 1024
static int _fd_table_size;
static int *_fd_table;
//----------------------------------------------------------------
static void log_sys_warn(const char *call)
{
log_warn("WARNING: %s failed: %s.", call, strerror(errno));
log_warn("%s failed: %s", call, strerror(errno));
}
// Assumes the list is not empty.
@@ -66,17 +61,23 @@ struct control_block {
struct cb_set {
struct dm_list free;
struct dm_list allocated;
struct control_block vec[];
struct control_block *vec;
} control_block_set;
static struct cb_set *_cb_set_create(unsigned nr)
{
unsigned i;
struct cb_set *cbs = malloc(sizeof(*cbs) + nr * sizeof(*cbs->vec));
int i;
struct cb_set *cbs = malloc(sizeof(*cbs));
if (!cbs->vec)
if (!cbs)
return NULL;
cbs->vec = malloc(nr * sizeof(*cbs->vec));
if (!cbs->vec) {
free(cbs);
return NULL;
}
dm_list_init(&cbs->free);
dm_list_init(&cbs->allocated);
@@ -92,10 +93,11 @@ static void _cb_set_destroy(struct cb_set *cbs)
// never be in flight IO.
if (!dm_list_empty(&cbs->allocated)) {
// bail out
log_warn("WARNING: async io still in flight.");
log_error("async io still in flight");
return;
}
free(cbs->vec);
free(cbs);
}
@@ -153,11 +155,11 @@ static void _async_destroy(struct io_engine *ioe)
free(e);
}
static int _last_byte_di;
static int _last_byte_fd;
static uint64_t _last_byte_offset;
static int _last_byte_sector_size;
static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
static bool _async_issue(struct io_engine *ioe, enum dir d, int fd,
sector_t sb, sector_t se, void *data, void *context)
{
int r;
@@ -167,7 +169,6 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
sector_t offset;
sector_t nbytes;
sector_t limit_nbytes;
sector_t orig_nbytes;
sector_t extra_nbytes = 0;
if (((uintptr_t) data) & e->page_mask) {
@@ -181,7 +182,7 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
/*
* If bcache block goes past where lvm wants to write, then clamp it.
*/
if ((d == DIR_WRITE) && _last_byte_offset && (di == _last_byte_di)) {
if ((d == DIR_WRITE) && _last_byte_offset && (fd == _last_byte_fd)) {
if (offset > _last_byte_offset) {
log_error("Limit write at %llu len %llu beyond last byte %llu",
(unsigned long long)offset,
@@ -190,41 +191,11 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
return false;
}
/*
* If the bcache block offset+len goes beyond where lvm is
* intending to write, then reduce the len being written
* (which is the bcache block size) so we don't write past
* the limit set by lvm. If after applying the limit, the
* resulting size is not a multiple of the sector size (512
* or 4096) then extend the reduced size to be a multiple of
* the sector size (we don't want to write partial sectors.)
*/
if (offset + nbytes > _last_byte_offset) {
limit_nbytes = _last_byte_offset - offset;
if (limit_nbytes % _last_byte_sector_size) {
if (limit_nbytes % _last_byte_sector_size)
extra_nbytes = _last_byte_sector_size - (limit_nbytes % _last_byte_sector_size);
/*
* adding extra_nbytes to the reduced nbytes (limit_nbytes)
* should make the final write size a multiple of the
* sector size. This should never result in a final size
* larger than the bcache block size (as long as the bcache
* block size is a multiple of the sector size).
*/
if (limit_nbytes + extra_nbytes > nbytes) {
log_warn("Skip extending write at %llu len %llu limit %llu extra %llu sector_size %llu",
(unsigned long long)offset,
(unsigned long long)nbytes,
(unsigned long long)limit_nbytes,
(unsigned long long)extra_nbytes,
(unsigned long long)_last_byte_sector_size);
extra_nbytes = 0;
}
}
orig_nbytes = nbytes;
if (extra_nbytes) {
log_debug("Limit write at %llu len %llu to len %llu rounded to %llu",
(unsigned long long)offset,
@@ -239,22 +210,6 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
(unsigned long long)limit_nbytes);
nbytes = limit_nbytes;
}
/*
* This shouldn't happen, the reduced+extended
* nbytes value should never be larger than the
* bcache block size.
*/
if (nbytes > orig_nbytes) {
log_error("Invalid adjusted write at %llu len %llu adjusted %llu limit %llu extra %llu sector_size %llu",
(unsigned long long)offset,
(unsigned long long)orig_nbytes,
(unsigned long long)nbytes,
(unsigned long long)limit_nbytes,
(unsigned long long)extra_nbytes,
(unsigned long long)_last_byte_sector_size);
return false;
}
}
}
@@ -266,7 +221,7 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
memset(&cb->cb, 0, sizeof(cb->cb));
cb->cb.aio_fildes = (int) _fd_table[di];
cb->cb.aio_fildes = (int) fd;
cb->cb.u.c.buf = data;
cb->cb.u.c.offset = offset;
cb->cb.u.c.nbytes = nbytes;
@@ -274,15 +229,13 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int di,
#if 0
if (d == DIR_READ) {
log_debug("io R off %llu bytes %llu di %d fd %d",
log_debug("io R off %llu bytes %llu",
(unsigned long long)cb->cb.u.c.offset,
(unsigned long long)cb->cb.u.c.nbytes,
di, _fd_table[di]);
(unsigned long long)cb->cb.u.c.nbytes);
} else {
log_debug("io W off %llu bytes %llu di %d fd %d",
log_debug("io W off %llu bytes %llu",
(unsigned long long)cb->cb.u.c.offset,
(unsigned long long)cb->cb.u.c.nbytes,
di, _fd_table[di]);
(unsigned long long)cb->cb.u.c.nbytes);
}
#endif
@@ -318,7 +271,9 @@ static bool _async_wait(struct io_engine *ioe, io_complete_fn fn)
struct async_engine *e = _to_async(ioe);
memset(&event, 0, sizeof(event));
r = io_getevents(e->aio_context, 1, MAX_EVENT, event, NULL);
do {
r = io_getevents(e->aio_context, 1, MAX_EVENT, event, NULL);
} while (r == -EINTR);
if (r < 0) {
log_sys_warn("io_getevents");
@@ -412,7 +367,7 @@ static void _sync_destroy(struct io_engine *ioe)
free(e);
}
static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
static bool _sync_issue(struct io_engine *ioe, enum dir d, int fd,
sector_t sb, sector_t se, void *data, void *context)
{
int rv;
@@ -428,7 +383,7 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
}
where = sb * 512;
off = lseek(_fd_table[di], where, SEEK_SET);
off = lseek(fd, where, SEEK_SET);
if (off == (off_t) -1) {
log_warn("Device seek error %d for offset %llu", errno, (unsigned long long)where);
free(io);
@@ -443,12 +398,11 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
/*
* If bcache block goes past where lvm wants to write, then clamp it.
*/
if ((d == DIR_WRITE) && _last_byte_offset && (di == _last_byte_di)) {
if ((d == DIR_WRITE) && _last_byte_offset && (fd == _last_byte_fd)) {
uint64_t offset = where;
uint64_t nbytes = len;
sector_t limit_nbytes = 0;
sector_t extra_nbytes = 0;
sector_t orig_nbytes = 0;
if (offset > _last_byte_offset) {
log_error("Limit write at %llu len %llu beyond last byte %llu",
@@ -461,30 +415,9 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
if (offset + nbytes > _last_byte_offset) {
limit_nbytes = _last_byte_offset - offset;
if (limit_nbytes % _last_byte_sector_size) {
if (limit_nbytes % _last_byte_sector_size)
extra_nbytes = _last_byte_sector_size - (limit_nbytes % _last_byte_sector_size);
/*
* adding extra_nbytes to the reduced nbytes (limit_nbytes)
* should make the final write size a multiple of the
* sector size. This should never result in a final size
* larger than the bcache block size (as long as the bcache
* block size is a multiple of the sector size).
*/
if (limit_nbytes + extra_nbytes > nbytes) {
log_warn("Skip extending write at %llu len %llu limit %llu extra %llu sector_size %llu",
(unsigned long long)offset,
(unsigned long long)nbytes,
(unsigned long long)limit_nbytes,
(unsigned long long)extra_nbytes,
(unsigned long long)_last_byte_sector_size);
extra_nbytes = 0;
}
}
orig_nbytes = nbytes;
if (extra_nbytes) {
log_debug("Limit write at %llu len %llu to len %llu rounded to %llu",
(unsigned long long)offset,
@@ -499,23 +432,6 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
(unsigned long long)limit_nbytes);
nbytes = limit_nbytes;
}
/*
* This shouldn't happen, the reduced+extended
* nbytes value should never be larger than the
* bcache block size.
*/
if (nbytes > orig_nbytes) {
log_error("Invalid adjusted write at %llu len %llu adjusted %llu limit %llu extra %llu sector_size %llu",
(unsigned long long)offset,
(unsigned long long)orig_nbytes,
(unsigned long long)nbytes,
(unsigned long long)limit_nbytes,
(unsigned long long)extra_nbytes,
(unsigned long long)_last_byte_sector_size);
free(io);
return false;
}
}
where = offset;
@@ -524,9 +440,9 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int di,
while (pos < len) {
if (d == DIR_READ)
rv = read(_fd_table[di], (char *)data + pos, len - pos);
rv = read(fd, (char *)data + pos, len - pos);
else
rv = write(_fd_table[di], (char *)data + pos, len - pos);
rv = write(fd, (char *)data + pos, len - pos);
if (rv == -1 && errno == EINTR)
continue;
@@ -686,7 +602,7 @@ struct bcache {
//----------------------------------------------------------------
struct key_parts {
uint32_t di;
uint32_t fd;
uint64_t b;
} __attribute__ ((packed));
@@ -695,12 +611,12 @@ union key {
uint8_t bytes[12];
};
static struct block *_block_lookup(struct bcache *cache, int di, uint64_t i)
static struct block *_block_lookup(struct bcache *cache, int fd, uint64_t i)
{
union key k;
union radix_value v;
k.parts.di = di;
k.parts.fd = fd;
k.parts.b = i;
if (radix_tree_lookup(cache->rtree, k.bytes, k.bytes + sizeof(k.bytes), &v))
@@ -714,7 +630,7 @@ static bool _block_insert(struct block *b)
union key k;
union radix_value v;
k.parts.di = b->di;
k.parts.fd = b->fd;
k.parts.b = b->index;
v.ptr = b;
@@ -725,7 +641,7 @@ static void _block_remove(struct block *b)
{
union key k;
k.parts.di = b->di;
k.parts.fd = b->fd;
k.parts.b = b->index;
radix_tree_remove(b->cache->rtree, k.bytes, k.bytes + sizeof(k.bytes));
@@ -867,7 +783,7 @@ static void _issue_low_level(struct block *b, enum dir d)
dm_list_move(&cache->io_pending, &b->list);
if (!cache->engine->issue(cache->engine, d, b->di, sb, se, b->data, b)) {
if (!cache->engine->issue(cache->engine, d, b->fd, sb, se, b->data, b)) {
/* FIXME: if io_submit() set an errno, return that instead of EIO? */
_complete_io(b, -EIO);
return;
@@ -943,26 +859,21 @@ static struct block *_find_unused_clean_block(struct bcache *cache)
return NULL;
}
static struct block *_new_block(struct bcache *cache, int di, block_address i, bool can_wait)
static struct block *_new_block(struct bcache *cache, int fd, block_address i, bool can_wait)
{
struct block *b;
b = _alloc_block(cache);
while (!b) {
while (!b && !dm_list_empty(&cache->clean)) {
b = _find_unused_clean_block(cache);
if (!b) {
if (can_wait) {
if (dm_list_empty(&cache->io_pending))
_writeback(cache, 16); // FIXME: magic number
_wait_all(cache);
if (dm_list_size(&cache->errored) >= cache->max_io) {
log_debug("bcache no new blocks for di %d index %u with >%d errors.",
di, (uint32_t) i, cache->max_io);
return NULL;
}
_wait_io(cache);
} else {
log_debug("bcache no new blocks for di %d index %u",
di, (uint32_t) i);
log_error("bcache no new blocks for fd %d index %u",
fd, (uint32_t) i);
return NULL;
}
}
@@ -971,7 +882,7 @@ static struct block *_new_block(struct bcache *cache, int di, block_address i, b
if (b) {
dm_list_init(&b->list);
b->flags = 0;
b->di = di;
b->fd = fd;
b->index = i;
b->ref_count = 0;
b->error = 0;
@@ -1017,10 +928,10 @@ static void _miss(struct bcache *cache, unsigned flags)
}
static struct block *_lookup_or_read_block(struct bcache *cache,
int di, block_address i,
int fd, block_address i,
unsigned flags)
{
struct block *b = _block_lookup(cache, di, i);
struct block *b = _block_lookup(cache, fd, i);
if (b) {
// FIXME: this is insufficient. We need to also catch a read
@@ -1045,7 +956,7 @@ static struct block *_lookup_or_read_block(struct bcache *cache,
} else {
_miss(cache, flags);
b = _new_block(cache, di, i, true);
b = _new_block(cache, fd, i, true);
if (b) {
if (flags & GF_ZERO)
_zero_block(b);
@@ -1090,7 +1001,6 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
struct bcache *cache;
unsigned max_io = engine->max_io(engine);
long pgsize = sysconf(_SC_PAGESIZE);
int i;
if (pgsize < 0) {
log_warn("WARNING: _SC_PAGESIZE returns negative value.");
@@ -1151,18 +1061,6 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
return NULL;
}
_fd_table_size = FD_TABLE_INC;
if (!(_fd_table = malloc(sizeof(int) * _fd_table_size))) {
cache->engine->destroy(cache->engine);
radix_tree_destroy(cache->rtree);
free(cache);
return NULL;
}
for (i = 0; i < _fd_table_size; i++)
_fd_table[i] = -1;
return cache;
}
@@ -1178,9 +1076,6 @@ void bcache_destroy(struct bcache *cache)
radix_tree_destroy(cache->rtree);
cache->engine->destroy(cache->engine);
free(cache);
free(_fd_table);
_fd_table = NULL;
_fd_table_size = 0;
}
sector_t bcache_block_sectors(struct bcache *cache)
@@ -1198,13 +1093,13 @@ unsigned bcache_max_prefetches(struct bcache *cache)
return cache->max_io;
}
void bcache_prefetch(struct bcache *cache, int di, block_address i)
void bcache_prefetch(struct bcache *cache, int fd, block_address i)
{
struct block *b = _block_lookup(cache, di, i);
struct block *b = _block_lookup(cache, fd, i);
if (!b) {
if (cache->nr_io_pending < cache->max_io) {
b = _new_block(cache, di, i, false);
b = _new_block(cache, fd, i, false);
if (b) {
cache->prefetches++;
_issue_read(b);
@@ -1222,15 +1117,12 @@ static void _recycle_block(struct bcache *cache, struct block *b)
_free_block(b);
}
bool bcache_get(struct bcache *cache, int di, block_address i,
bool bcache_get(struct bcache *cache, int fd, block_address i,
unsigned flags, struct block **result)
{
struct block *b;
if (di >= _fd_table_size)
goto bad;
b = _lookup_or_read_block(cache, di, i, flags);
b = _lookup_or_read_block(cache, fd, i, flags);
if (b) {
if (b->error) {
if (b->io_dir == DIR_READ) {
@@ -1249,10 +1141,10 @@ bool bcache_get(struct bcache *cache, int di, block_address i,
*result = b;
return true;
}
bad:
*result = NULL;
log_error("bcache failed to get block %u di %d", (uint32_t) i, di);
log_error("bcache failed to get block %u fd %d", (uint32_t) i, fd);
return false;
}
@@ -1316,7 +1208,7 @@ static bool _invalidate_block(struct bcache *cache, struct block *b)
if (b->ref_count) {
log_warn("bcache_invalidate: block (%d, %llu) still held",
b->di, (unsigned long long) b->index);
b->fd, (unsigned long long) b->index);
return false;
}
@@ -1333,9 +1225,9 @@ static bool _invalidate_block(struct bcache *cache, struct block *b)
return true;
}
bool bcache_invalidate(struct bcache *cache, int di, block_address i)
bool bcache_invalidate(struct bcache *cache, int fd, block_address i)
{
return _invalidate_block(cache, _block_lookup(cache, di, i));
return _invalidate_block(cache, _block_lookup(cache, fd, i));
}
//----------------------------------------------------------------
@@ -1351,27 +1243,27 @@ static bool _writeback_v(struct radix_tree_iterator *it,
struct block *b = v.ptr;
if (_test_flags(b, BF_DIRTY))
_issue_write(b);
_issue_write(b);
return true;
return true;
}
static bool _invalidate_v(struct radix_tree_iterator *it,
uint8_t *kb, uint8_t *ke, union radix_value v)
{
struct block *b = v.ptr;
struct invalidate_iterator *iit = container_of(it, struct invalidate_iterator, it);
struct invalidate_iterator *iit = container_of(it, struct invalidate_iterator, it);
if (b->error || _test_flags(b, BF_DIRTY)) {
log_warn("WARNING: bcache_invalidate: block (%d, %llu) still dirty.",
b->di, (unsigned long long) b->index);
iit->success = false;
return true;
log_warn("bcache_invalidate: block (%d, %llu) still dirty",
b->fd, (unsigned long long) b->index);
iit->success = false;
return true;
}
if (b->ref_count) {
log_warn("WARNING: bcache_invalidate: block (%d, %llu) still held.",
b->di, (unsigned long long) b->index);
log_warn("bcache_invalidate: block (%d, %llu) still held",
b->fd, (unsigned long long) b->index);
iit->success = false;
return true;
}
@@ -1384,137 +1276,42 @@ static bool _invalidate_v(struct radix_tree_iterator *it,
return true;
}
bool bcache_invalidate_di(struct bcache *cache, int di)
bool bcache_invalidate_fd(struct bcache *cache, int fd)
{
union key k;
union key k;
struct invalidate_iterator it;
k.parts.di = di;
k.parts.fd = fd;
it.it.visit = _writeback_v;
radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.di), &it.it);
radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd), &it.it);
_wait_all(cache);
it.success = true;
it.it.visit = _invalidate_v;
radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.di), &it.it);
if (it.success)
radix_tree_remove_prefix(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.di));
radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd), &it.it);
radix_tree_remove_prefix(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd));
return it.success;
}
//----------------------------------------------------------------
static bool _abort_v(struct radix_tree_iterator *it,
uint8_t *kb, uint8_t *ke, union radix_value v)
void bcache_set_last_byte(struct bcache *cache, int fd, uint64_t offset, int sector_size)
{
struct block *b = v.ptr;
if (b->ref_count) {
log_fatal("bcache_abort: block (%d, %llu) still held",
b->di, (unsigned long long) b->index);
return true;
}
_unlink_block(b);
_free_block(b);
// We can't remove the block from the radix tree yet because
// we're in the middle of an iteration.
return true;
}
void bcache_abort_di(struct bcache *cache, int di)
{
union key k;
struct radix_tree_iterator it;
k.parts.di = di;
it.visit = _abort_v;
radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.di), &it);
radix_tree_remove_prefix(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.di));
}
//----------------------------------------------------------------
void bcache_set_last_byte(struct bcache *cache, int di, uint64_t offset, int sector_size)
{
_last_byte_di = di;
_last_byte_fd = fd;
_last_byte_offset = offset;
_last_byte_sector_size = sector_size;
if (!sector_size)
_last_byte_sector_size = 512;
}
void bcache_unset_last_byte(struct bcache *cache, int di)
void bcache_unset_last_byte(struct bcache *cache, int fd)
{
if (_last_byte_di == di) {
_last_byte_di = 0;
if (_last_byte_fd == fd) {
_last_byte_fd = 0;
_last_byte_offset = 0;
_last_byte_sector_size = 0;
}
}
int bcache_set_fd(int fd)
{
int *new_table = NULL;
int new_size = 0;
int i;
retry:
for (i = 0; i < _fd_table_size; i++) {
if (_fd_table[i] == -1) {
_fd_table[i] = fd;
return i;
}
}
/* already tried once, shouldn't happen */
if (new_size)
return -1;
new_size = _fd_table_size + FD_TABLE_INC;
new_table = realloc(_fd_table, sizeof(int) * new_size);
if (!new_table) {
log_error("Cannot extend bcache fd table");
return -1;
}
for (i = _fd_table_size; i < new_size; i++)
new_table[i] = -1;
_fd_table = new_table;
_fd_table_size = new_size;
goto retry;
}
/*
* Should we check for unflushed or in-progress io on an fd
* prior to doing clear_fd or change_fd? (To catch mistakes;
* the caller should be smart enough to not do that.)
*/
void bcache_clear_fd(int di)
{
if (di >= _fd_table_size)
return;
_fd_table[di] = -1;
}
int bcache_change_fd(int di, int fd)
{
if (di >= _fd_table_size)
return 0;
if (di < 0) {
log_error(INTERNAL_ERROR "Cannot change not opened DI with FD:%d", fd);
return 0;
}
_fd_table[di] = fd;
return 1;
}
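For reference, a standalone sketch of the last-byte clamping done in _async_issue()/_sync_issue() above: when a bcache-block-sized write would run past the point lvm intends to stop writing, the length is cut back to limit_nbytes and then rounded up to a whole sector so no partial sector is written. The function name and sample values below are illustrative only, not code from the patch.
#include <stdint.h>
#include <stdio.h>
/* Sketch of the limit_nbytes/extra_nbytes arithmetic; not part of the patch. */
static uint64_t clamp_write_len(uint64_t offset, uint64_t nbytes,
				uint64_t last_byte, uint64_t sector_size)
{
	uint64_t limit_nbytes, extra_nbytes = 0;
	if (offset + nbytes <= last_byte)
		return nbytes;                          /* nothing to clamp */
	limit_nbytes = last_byte - offset;
	if (limit_nbytes % sector_size)
		extra_nbytes = sector_size - (limit_nbytes % sector_size);
	/* never grow the write beyond the original bcache block size */
	if (limit_nbytes + extra_nbytes > nbytes)
		extra_nbytes = 0;
	return limit_nbytes + extra_nbytes;
}
int main(void)
{
	/* 128K block at offset 0, last byte at 66000, 512-byte sectors -> 66048 */
	printf("%llu\n", (unsigned long long) clamp_write_len(0, 131072, 66000, 512));
	return 0;
}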

View File

@@ -16,12 +16,19 @@
#define BCACHE_H
#include "device_mapper/all.h"
#include "base/memory/container_of.h"
#include <linux/fs.h>
#include <stdint.h>
#include <stdbool.h>
/*----------------------------------------------------------------*/
// FIXME: move somewhere more sensible
#define container_of(v, t, head) \
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
/*----------------------------------------------------------------*/
enum dir {
DIR_READ,
DIR_WRITE
@@ -34,7 +41,7 @@ typedef void io_complete_fn(void *context, int io_error);
struct io_engine {
void (*destroy)(struct io_engine *e);
bool (*issue)(struct io_engine *e, enum dir d, int di,
bool (*issue)(struct io_engine *e, enum dir d, int fd,
sector_t sb, sector_t se, void *data, void *context);
bool (*wait)(struct io_engine *e, io_complete_fn fn);
unsigned (*max_io)(struct io_engine *e);
@@ -48,7 +55,7 @@ struct io_engine *create_sync_io_engine(void);
struct bcache;
struct block {
/* clients may only access these three fields */
int di;
int fd;
uint64_t index;
void *data;
@@ -106,12 +113,12 @@ unsigned bcache_max_prefetches(struct bcache *cache);
* they complete. But we're talking a very small difference, and it's worth it
* to keep callbacks out of this interface.
*/
void bcache_prefetch(struct bcache *cache, int di, block_address index);
void bcache_prefetch(struct bcache *cache, int fd, block_address index);
/*
* Returns true on success.
*/
bool bcache_get(struct bcache *cache, int di, block_address index,
bool bcache_get(struct bcache *cache, int fd, block_address index,
unsigned flags, struct block **result);
void bcache_put(struct block *b);
@@ -129,42 +136,30 @@ bool bcache_flush(struct bcache *cache);
*
* If the block is currently held false will be returned.
*/
bool bcache_invalidate(struct bcache *cache, int di, block_address index);
bool bcache_invalidate(struct bcache *cache, int fd, block_address index);
/*
* Invalidates all blocks on the given descriptor. Call this before closing
* the descriptor to make sure everything is written back.
*/
bool bcache_invalidate_di(struct bcache *cache, int di);
bool bcache_invalidate_fd(struct bcache *cache, int fd);
/*
* Call this function if flush, or invalidate fail and you do not
* wish to retry the writes. This will throw away any dirty data
* not written. If any blocks for di are held, then it will call
* abort().
*/
void bcache_abort_di(struct bcache *cache, int di);
//----------------------------------------------------------------
// The next four functions are utilities written in terms of the above api.
// Prefetches the blocks necessary to satisfy a byte range.
void bcache_prefetch_bytes(struct bcache *cache, int di, uint64_t start, size_t len);
void bcache_prefetch_bytes(struct bcache *cache, int fd, uint64_t start, size_t len);
// Reads, writes and zeroes bytes. Returns false if errors occur.
bool bcache_read_bytes(struct bcache *cache, int di, uint64_t start, size_t len, void *data);
bool bcache_write_bytes(struct bcache *cache, int di, uint64_t start, size_t len, void *data);
bool bcache_zero_bytes(struct bcache *cache, int di, uint64_t start, size_t len);
bool bcache_set_bytes(struct bcache *cache, int di, uint64_t start, size_t len, uint8_t val);
bool bcache_invalidate_bytes(struct bcache *cache, int di, uint64_t start, size_t len);
bool bcache_read_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data);
bool bcache_write_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data);
bool bcache_zero_bytes(struct bcache *cache, int fd, uint64_t start, size_t len);
bool bcache_set_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, uint8_t val);
void bcache_set_last_byte(struct bcache *cache, int di, uint64_t offset, int sector_size);
void bcache_unset_last_byte(struct bcache *cache, int di);
void bcache_set_last_byte(struct bcache *cache, int fd, uint64_t offset, int sector_size);
void bcache_unset_last_byte(struct bcache *cache, int fd);
//----------------------------------------------------------------
int bcache_set_fd(int fd); /* returns di */
void bcache_clear_fd(int di);
int bcache_change_fd(int di, int fd);
#endif
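A short usage sketch of the di-indexed interface declared on one side of this diff: bcache_set_fd() maps an open fd to a di, which the remaining calls take. Error handling is abbreviated and the block/cache sizes are arbitrary; this is illustrative, not code from the patch.
#include <fcntl.h>
#include <unistd.h>
#include "bcache.h"
static bool read_block_zero(const char *path)
{
	struct io_engine *e = create_sync_io_engine();
	struct bcache *cache = bcache_create(8, 1024, e);  /* 8 sectors = 4K blocks */
	struct block *b;
	int fd, di;
	bool ok = false;
	if (!cache)
		return false;
	if ((fd = open(path, O_RDONLY)) < 0)
		goto out;
	if ((di = bcache_set_fd(fd)) < 0) {      /* register fd, get an index */
		close(fd);
		goto out;
	}
	if (bcache_get(cache, di, 0, 0, &b)) {   /* read block 0 */
		/* ... inspect b->data ... */
		bcache_put(b);
		ok = true;
	}
	bcache_invalidate_di(cache, di);         /* drop cached blocks for this di */
	bcache_clear_fd(di);
	close(fd);
out:
	bcache_destroy(cache);
	return ok;
}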

View File

@@ -15,8 +15,6 @@
#include "base/memory/zalloc.h"
#include "lib/misc/lib.h"
#include "lib/device/dev-type.h"
#include "lib/device/device_id.h"
#include "lib/datastruct/btree.h"
#include "lib/config/config.h"
#include "lib/commands/toolcontext.h"
@@ -36,7 +34,7 @@ struct dev_iter {
struct dir_list {
struct dm_list list;
char dir[];
char dir[0];
};
static struct {
@@ -65,17 +63,15 @@ static int _insert(const char *path, const struct stat *info,
/* Setup non-zero members of passed zeroed 'struct device' */
static void _dev_init(struct device *dev)
{
dev->phys_block_size = -1;
dev->block_size = -1;
dev->fd = -1;
dev->bcache_fd = -1;
dev->bcache_di = -1;
dev->read_ahead = -1;
dev->part = -1;
dev->ext.enabled = 0;
dev->ext.src = DEV_EXT_NONE;
dm_list_init(&dev->aliases);
dm_list_init(&dev->ids);
}
void dev_destroy_file(struct device *dev)
@@ -324,38 +320,20 @@ static int _compare_paths(const char *path0, const char *path1)
return 1;
}
enum add_hash {
NO_HASH,
HASH,
REHASH
};
static int _add_alias(struct device *dev, const char *path, enum add_hash hash)
static int _add_alias(struct device *dev, const char *path)
{
struct dm_str_list *sl;
struct dm_str_list *sl = _zalloc(sizeof(*sl));
struct dm_str_list *strl;
const char *oldpath;
int prefer_old = 1;
if (hash == REHASH)
dm_hash_remove(_cache.names, path);
if (!sl)
return_0;
/* Is name already there? */
dm_list_iterate_items(strl, &dev->aliases)
if (!strcmp(strl->str, path)) {
path = strl->str;
goto out;
}
if (!(path = dm_pool_strdup(_cache.mem, path)) ||
!(sl = _zalloc(sizeof(*sl)))) {
log_error("Failed to add alias to dev cache.");
return 0;
}
if (!strncmp(path, "/dev/nvme", 9)) {
log_debug("Found nvme device %s", dev_name(dev));
dev->flags |= DEV_IS_NVME;
dm_list_iterate_items(strl, &dev->aliases) {
if (!strcmp(strl->str, path))
return 1;
}
sl->str = path;
@@ -369,31 +347,23 @@ static int _add_alias(struct device *dev, const char *path, enum add_hash hash)
dm_list_add(&dev->aliases, &sl->list);
else
dm_list_add_h(&dev->aliases, &sl->list);
out:
if ((hash != NO_HASH) &&
!dm_hash_insert(_cache.names, path, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
}
int get_sysfs_value(const char *path, char *buf, size_t buf_size, int error_if_no_value)
static int _get_sysfs_value(const char *path, char *buf, size_t buf_size, int error_if_no_value)
{
FILE *fp;
size_t len;
int r = 0;
if (!(fp = fopen(path, "r"))) {
if (error_if_no_value)
log_sys_error("fopen", path);
log_sys_error("fopen", path);
return 0;
}
if (!fgets(buf, buf_size, fp)) {
if (error_if_no_value)
log_sys_error("fgets", path);
log_sys_error("fgets", path);
goto out;
}
@@ -406,7 +376,7 @@ int get_sysfs_value(const char *path, char *buf, size_t buf_size, int error_if_n
r = 1;
out:
if (fclose(fp))
log_sys_debug("fclose", path);
log_sys_error("fclose", path);
return r;
}
@@ -420,7 +390,7 @@ static int _get_dm_uuid_from_sysfs(char *buf, size_t buf_size, int major, int mi
return 0;
}
return get_sysfs_value(path, buf, buf_size, 0);
return _get_sysfs_value(path, buf, buf_size, 0);
}
static struct dm_list *_get_or_add_list_by_index_key(struct dm_hash_table *idx, const char *key)
@@ -450,6 +420,7 @@ static struct device *_insert_sysfs_dev(dev_t devno, const char *devname)
static struct device _fake_dev = { .flags = DEV_USED_FOR_LV };
struct stat stat0;
char path[PATH_MAX];
char *path_copy;
struct device *dev;
if (dm_snprintf(path, sizeof(path), "%s%s", _cache.dev_dir, devname) < 0) {
@@ -467,9 +438,15 @@ static struct device *_insert_sysfs_dev(dev_t devno, const char *devname)
if (!(dev = _dev_create(devno)))
return_NULL;
if (!_add_alias(dev, path, NO_HASH)) {
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("_insert_sysfs_dev: %s: dm_pool_strdup failed", devname);
return NULL;
}
if (!_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
_free(dev);
return_NULL;
return NULL;
}
if (!btree_insert(_cache.sysfs_only_devices, (uint32_t) devno, dev)) {
@@ -494,7 +471,7 @@ static struct device *_get_device_for_sysfs_dev_name_using_devno(const char *dev
return NULL;
}
if (!get_sysfs_value(path, buf, sizeof(buf), 1))
if (!_get_sysfs_value(path, buf, sizeof(buf), 1))
return_NULL;
if (sscanf(buf, "%d:%d", &major, &minor) != 2) {
@@ -711,6 +688,7 @@ static int _insert_dev(const char *path, dev_t d)
struct device *dev;
struct device *dev_by_devt;
struct device *dev_by_path;
char *path_copy;
dev_by_devt = (struct device *) btree_lookup(_cache.devices, (uint32_t) d);
dev_by_path = (struct device *) dm_hash_lookup(_cache.names, path);
@@ -744,8 +722,20 @@ static int _insert_dev(const char *path, dev_t d)
return 0;
}
if (!_add_alias(dev, path, HASH))
return_0;
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
}
@@ -757,8 +747,20 @@ static int _insert_dev(const char *path, dev_t d)
log_debug_devs("Found dev %d:%d %s - new alias.",
(int)MAJOR(d), (int)MINOR(d), path);
if (!_add_alias(dev, path, HASH))
return_0;
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
}
@@ -784,10 +786,25 @@ static int _insert_dev(const char *path, dev_t d)
return 0;
}
if (!_add_alias(dev, path, REHASH))
return_0;
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
dm_hash_remove(_cache.names, path);
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
}
/*
@@ -799,8 +816,22 @@ static int _insert_dev(const char *path, dev_t d)
(int)MAJOR(d), (int)MINOR(d), path,
(int)MAJOR(dev_by_path->dev), (int)MINOR(dev_by_path->dev));
if (!_add_alias(dev, path, REHASH))
return_0;
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
dm_hash_remove(_cache.names, path);
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
}
@@ -933,12 +964,12 @@ static int _dev_cache_iterate_sysfs_for_index(const char *path)
r = !partial_failure;
if (closedir(d))
log_sys_debug("closedir", path);
log_sys_error("closedir", path);
return r;
}
static int dev_cache_index_devs(void)
int dev_cache_index_devs(void)
{
static int sysfs_has_dev_block = -1;
char path[PATH_MAX];
@@ -959,7 +990,7 @@ static int dev_cache_index_devs(void)
return 1;
}
log_sys_debug("stat", path);
log_sys_error("stat", path);
return 0;
}
} else if (!sysfs_has_dev_block)
@@ -1057,7 +1088,7 @@ out:
static void _insert_dirs(struct dm_list *dirs)
{
struct dir_list *dl;
struct udev *udev = NULL;
struct udev *udev;
int with_udev;
with_udev = obtain_device_list_from_udev() &&
@@ -1124,13 +1155,13 @@ static int _insert(const char *path, const struct stat *info,
}
if (rec && !_insert_dir(path))
return 0;
return_0;
} else { /* add a device */
if (!S_ISBLK(info->st_mode))
return 1;
if (!_insert_dev(path, info->st_rdev))
return 0;
return_0;
}
return 1;
@@ -1287,20 +1318,12 @@ int dev_cache_check_for_open_devices(void)
int dev_cache_exit(void)
{
struct device *dev;
struct dm_hash_node *n;
int num_open = 0;
if (_cache.names) {
if (_cache.names)
if ((num_open = _check_for_open_devices(1)) > 0)
log_error(INTERNAL_ERROR "%d device(s) were left open and have been closed.", num_open);
dm_hash_iterate(n, _cache.names) {
dev = (struct device *) dm_hash_get_data(_cache.names, n);
free_dids(&dev->ids);
}
}
if (_cache.mem)
dm_pool_destroy(_cache.mem);
@@ -1397,156 +1420,72 @@ const char *dev_name_confirmed(struct device *dev, int quiet)
return dev_name(dev);
}
struct device *dev_hash_get(const char *name)
/* Provide a custom reason when a device is ignored */
const char *dev_cache_filtered_reason(const char *name)
{
return (struct device *) dm_hash_lookup(_cache.names, name);
}
const char *reason = "not found";
struct device *d = (struct device *) dm_hash_lookup(_cache.names, name);
static void _remove_alias(struct device *dev, const char *name)
{
struct dm_str_list *strl;
if (d)
/* FIXME Record which filter caused the exclusion */
reason = "excluded by a filter";
dm_list_iterate_items(strl, &dev->aliases) {
if (!strcmp(strl->str, name)) {
dm_list_del(&strl->list);
return;
}
}
}
/*
* Check that paths for this dev still refer to the same dev_t. This is known
* to drop invalid paths in the case where lvm deactivates an LV, which causes
* that LV path to go away, but that LV path is not removed from dev-cache (it
* probably should be). Later a new path to a different LV is added to
* dev-cache, where the new LV has the same major:minor as the previously
* deactivated LV. The new LV will find the existing struct dev, and that
* struct dev will have dev->aliases entries that refer to the name of the old
* deactivated LV. Those old paths are all invalid and are dropped here.
*/
static void _verify_aliases(struct device *dev, const char *newname)
{
struct dm_str_list *strl, *strl2;
struct stat st;
dm_list_iterate_items_safe(strl, strl2, &dev->aliases) {
/* newname was just stat'd and added by caller */
if (newname && !strcmp(strl->str, newname))
continue;
if (stat(strl->str, &st) || (st.st_rdev != dev->dev)) {
log_debug("Drop invalid path %s for %d:%d (new path %s).",
strl->str, (int)MAJOR(dev->dev), (int)MINOR(dev->dev), newname ?: "");
dm_hash_remove(_cache.names, strl->str);
dm_list_del(&strl->list);
}
}
return reason;
}
struct device *dev_cache_get(struct cmd_context *cmd, const char *name, struct dev_filter *f)
{
struct device *dev = (struct device *) dm_hash_lookup(_cache.names, name);
struct stat st;
int ret;
struct stat buf;
struct device *d = (struct device *) dm_hash_lookup(_cache.names, name);
int info_available = 0;
int ret = 1;
/*
* DEV_REGULAR means that "dev" is actually a file, not a device.
* FIXME: I don't think dev-cache is used for files any more and this
* can be dropped?
*/
if (dev && (dev->flags & DEV_REGULAR))
return dev;
/*
* The requested path is invalid, remove any dev-cache
* info for it.
*/
if (stat(name, &st)) {
if (dev) {
log_print("Device path %s is invalid for %d:%d %s.",
name, (int)MAJOR(dev->dev), (int)MINOR(dev->dev), dev_name(dev));
if (d && (d->flags & DEV_REGULAR))
return d;
/* If the entry's wrong, remove it */
if (stat(name, &buf) < 0) {
if (d)
dm_hash_remove(_cache.names, name);
log_sys_very_verbose("stat", name);
d = NULL;
} else
info_available = 1;
_remove_alias(dev, name);
/* Remove any other names in dev->aliases that are incorrect. */
_verify_aliases(dev, NULL);
}
return NULL;
}
if (!S_ISBLK(st.st_mode)) {
log_debug("Not a block device %s.", name);
return NULL;
}
/*
* dev-cache has incorrect info for the requested path.
* Remove incorrect info and then add new dev-cache entry.
*/
if (dev && (st.st_rdev != dev->dev)) {
log_print("Device path %s does not match %d:%d %s.",
name, (int)MAJOR(dev->dev), (int)MINOR(dev->dev), dev_name(dev));
if (d && (buf.st_rdev != d->dev)) {
dm_hash_remove(_cache.names, name);
_remove_alias(dev, name);
/* Remove any other names in dev->aliases that are incorrect. */
_verify_aliases(dev, NULL);
/* Add new dev-cache entry next. */
dev = NULL;
d = NULL;
}
/*
* Either add a new struct dev for st_rdev and name,
* or add name as a new alias for an existing struct dev
* for st_rdev.
*/
if (!dev) {
_insert_dev(name, st.st_rdev);
/* Get the struct dev that was just added. */
dev = (struct device *) dm_hash_lookup(_cache.names, name);
if (!dev) {
log_error("Failed to get device %s", name);
return NULL;
if (!d) {
_insert(name, info_available ? &buf : NULL, 0, obtain_device_list_from_udev());
d = (struct device *) dm_hash_lookup(_cache.names, name);
if (!d) {
dev_cache_scan();
d = (struct device *) dm_hash_lookup(_cache.names, name);
}
_verify_aliases(dev, name);
}
/*
* The caller passed a filter if they only want the dev if it
* passes filters.
*/
if (!f)
return dev;
ret = f->passes_filter(cmd, f, dev, NULL);
/*
* This might happen if this function is called before
* filters can do i/o. I don't think this will happen
* any longer and this EAGAIN case can be removed.
*/
if (ret == -EAGAIN) {
log_debug_devs("dev_cache_get filter deferred %s", dev_name(dev));
dev->flags |= DEV_FILTER_AFTER_SCAN;
ret = 1;
}
if (!ret) {
log_debug_devs("dev_cache_get filter excludes %s", dev_name(dev));
if (!d)
return NULL;
if (d && (d->flags & DEV_REGULAR))
return d;
if (f && !(d->flags & DEV_REGULAR)) {
ret = f->passes_filter(cmd, f, d, NULL);
if (ret == -EAGAIN) {
log_debug_devs("get device by name defer filter %s", dev_name(d));
d->flags |= DEV_FILTER_AFTER_SCAN;
ret = 1;
}
}
return dev;
if (f && !(d->flags & DEV_REGULAR) && !ret)
return NULL;
return d;
}
static struct device *_dev_cache_seek_devt(dev_t dev)
@@ -1566,7 +1505,7 @@ static struct device *_dev_cache_seek_devt(dev_t dev)
* TODO This is very inefficient. We probably want a hash table indexed by
* major:minor for keys to speed up these lookups.
*/
struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct dev_filter *f, int *filtered)
struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct dev_filter *f)
{
char path[PATH_MAX];
const char *sysfs_dir;
@@ -1574,9 +1513,6 @@ struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct
struct device *d = _dev_cache_seek_devt(dev);
int ret;
if (filtered)
*filtered = 0;
if (d && (d->flags & DEV_REGULAR))
return d;
@@ -1584,7 +1520,7 @@ struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct
sysfs_dir = dm_sysfs_dir();
if (sysfs_dir && *sysfs_dir) {
/* First check if dev is sysfs to avoid useless scan */
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d",
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d",
sysfs_dir, (int)MAJOR(dev), (int)MINOR(dev)) < 0) {
log_error("dm_snprintf partition failed.");
return NULL;
@@ -1597,8 +1533,6 @@ struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct
}
}
log_debug_devs("Device num not found in dev_cache repeat dev_cache_scan for %d:%d",
(int)MAJOR(dev), (int)MINOR(dev));
dev_cache_scan();
d = _dev_cache_seek_devt(dev);
}
@@ -1623,8 +1557,6 @@ struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t dev, struct
if (ret)
return d;
if (filtered)
*filtered = 1;
return NULL;
}
@@ -1697,352 +1629,3 @@ const char *dev_name(const struct device *dev)
return (dev && dev->aliases.n) ? dm_list_item(dev->aliases.n, struct dm_str_list)->str :
unknown_device_name();
}
bool dev_cache_has_md_with_end_superblock(struct dev_types *dt)
{
struct btree_iter *iter = btree_first(_cache.devices);
struct device *dev;
while (iter) {
dev = btree_get_data(iter);
if (dev_is_md_with_end_superblock(dt, dev))
return true;
iter = btree_next(iter);
}
return false;
}
static int _setup_devices_list(struct cmd_context *cmd)
{
struct dm_str_list *strl;
struct dev_use *du;
/*
* For each --devices arg, add a du to cmd->use_devices.
* The du's devname is the devices arg value.
*/
dm_list_iterate_items(strl, &cmd->deviceslist) {
if (!(du = zalloc(sizeof(struct dev_use))))
return_0;
if (!(du->devname = strdup(strl->str)))
return_0;
dm_list_add(&cmd->use_devices, &du->list);
}
return 1;
}
static int _setup_devices_file_dmeventd(struct cmd_context *cmd)
{
char path[PATH_MAX];
struct stat st;
/*
* When command is run by dmeventd there is no --devicesfile
* option that can enable/disable the use of a devices file.
*/
if (!find_config_tree_bool(cmd, devices_use_devicesfile_CFG, NULL)) {
cmd->enable_devices_file = 0;
return 1;
}
/*
* If /etc/lvm/devices/dmeventd.devices exists, then use that.
* The optional dmeventd.devices allows the user to control
* which devices dmeventd will look at and use.
* Otherwise, disable the devices file because dmeventd should
* be able to manage LVs in any VG (i.e. LVs in a non-system
* devices file.)
*/
if (dm_snprintf(path, sizeof(path), "%s/devices/dmeventd.devices", cmd->system_dir) < 0) {
log_warn("Failed to copy devices path");
cmd->enable_devices_file = 0;
return 1;
}
if (stat(path, &st)) {
/* No dmeventd.devices, so do not use a devices file. */
cmd->enable_devices_file = 0;
return 1;
}
cmd->enable_devices_file = 1;
(void) dm_strncpy(cmd->devices_file_path, path, sizeof(cmd->devices_file_path));
return 1;
}
int setup_devices_file(struct cmd_context *cmd)
{
char dirpath[PATH_MAX];
const char *filename = NULL;
struct stat st;
int rv;
if (cmd->run_by_dmeventd)
return _setup_devices_file_dmeventd(cmd);
if (cmd->devicesfile) {
/* --devicesfile <filename> or "" has been set which overrides
lvm.conf settings use_devicesfile and devicesfile. */
if (!strlen(cmd->devicesfile))
cmd->enable_devices_file = 0;
else {
cmd->enable_devices_file = 1;
filename = cmd->devicesfile;
}
/* TODO: print a warning if --devicesfile system.devices
while lvm.conf use_devicesfile=0. */
} else {
if (!find_config_tree_bool(cmd, devices_use_devicesfile_CFG, NULL))
cmd->enable_devices_file = 0;
else {
cmd->enable_devices_file = 1;
filename = find_config_tree_str(cmd, devices_devicesfile_CFG, NULL);
if (!validate_name(filename)) {
log_error("Invalid devices file name from config setting \"%s\".", filename);
return 0;
}
}
}
if (!cmd->enable_devices_file)
return 1;
if (dm_snprintf(dirpath, sizeof(dirpath), "%s/devices", cmd->system_dir) < 0) {
log_error("Failed to copy devices dir path");
return 0;
}
if (stat(dirpath, &st)) {
log_debug("Creating %s.", dirpath);
dm_prepare_selinux_context(dirpath, S_IFDIR);
rv = mkdir(dirpath, 0755);
dm_prepare_selinux_context(NULL, 0);
if ((rv < 0) && stat(dirpath, &st)) {
log_error("Failed to create %s %d", dirpath, errno);
return 0;
}
}
if (dm_snprintf(cmd->devices_file_path, sizeof(cmd->devices_file_path),
"%s/devices/%s", cmd->system_dir, filename) < 0) {
log_error("Failed to copy devices file path");
return 0;
}
return 1;
}
/*
* Add all system devices to dev-cache, and attempt to
* match all devices_file entries to dev-cache entries.
*/
int setup_devices(struct cmd_context *cmd)
{
int file_exists;
int lock_mode = 0;
if (cmd->enable_devices_list) {
if (!_setup_devices_list(cmd))
return_0;
goto scan;
}
if (!setup_devices_file(cmd))
return_0;
if (!cmd->enable_devices_file)
goto scan;
file_exists = devices_file_exists(cmd);
/*
* Removing the devices file is another way of disabling the use of
* a devices file, unless the command creates the devices file.
*/
if (!file_exists && !cmd->create_edit_devices_file) {
log_debug("Devices file not found, ignoring.");
cmd->enable_devices_file = 0;
goto scan;
}
/*
* Don't let pvcreate or vgcreate create a new system devices file
* unless it's specified explicitly with --devicesfile. This avoids
* a problem where a system is running with existing PVs, and is
* not using a devices file based on the fact that the system
* devices file doesn't exist. If the user simply uses pvcreate
* to create a new PV, they almost certainly do not want that to
* create a new system devices file containing the new PV and none
* of the existing PVs that the system is already using.
* However, if they use the vgimportdevices or lvmdevices command
* then they are clearly intending to use the devices file, so we
* can create it. Or, if they specify a non-system devices file
* with pvcreate/vgcreate, then they clearly want to use a devices
* file and we can create it (and creating a non-system devices file
* would not cause existing PVs to disappear from the main system.)
*
* An exception is if pvcreate/vgcreate get to device_id_write and
* did not see any existing VGs during label scan. In that case
* they will create a new system devices file, since there will be
* no VGs that the new file would hide.
*/
if (cmd->create_edit_devices_file && !cmd->devicesfile && !file_exists &&
(!strncmp(cmd->name, "pvcreate", 8) || !strncmp(cmd->name, "vgcreate", 8))) {
/* The command will decide in device_ids_write whether to create
a new system devices file. */
cmd->enable_devices_file = 0;
cmd->pending_devices_file = 1;
goto scan;
}
if (!file_exists && cmd->sysinit) {
cmd->enable_devices_file = 0;
goto scan;
}
if (!file_exists) {
/*
* pvcreate/vgcreate/vgimportdevices/lvmdevices-add create
* a new devices file here if it doesn't exist.
* They have the create_edit_devices_file flag set.
* First they create/lock-ex the devices file lockfile.
* Other commands will not use a devices file if none exists.
*/
lock_mode = LOCK_EX;
if (!lock_devices_file(cmd, lock_mode)) {
log_error("Failed to lock the devices file to create.");
return 0;
}
/* The file will be created in device_ids_write() */
if (!devices_file_exists(cmd))
goto scan;
} else {
/*
* Commands that intend to edit the devices file have
* edit_devices_file or create_edit_devices_file set (create if
* they can also create a new devices file) and lock it ex
* here prior to reading. Other commands that intend to just
* read the devices file lock sh.
*/
lock_mode = (cmd->create_edit_devices_file || cmd->edit_devices_file) ? LOCK_EX : LOCK_SH;
if (!lock_devices_file(cmd, lock_mode)) {
log_error("Failed to lock the devices file.");
return 0;
}
}
/*
* Read the list of device ids that lvm can use.
* Adds a struct dev_id to cmd->use_devices for each one.
*/
if (!device_ids_read(cmd)) {
log_error("Failed to read the devices file.");
unlock_devices_file(cmd);
return 0;
}
/*
* When the command is editing the devices file, it acquires
* the ex lock above, will later call device_ids_write(), and
* then unlock the lock after writing the file.
* When the command is just reading the devices file, it's
* locked sh above just before reading the file, and unlocked
* here after reading.
*/
if (lock_mode == LOCK_SH)
unlock_devices_file(cmd);
scan:
/*
* Add a 'struct device' to dev-cache for each device available on the system.
* This will not open or read any devices, but may look at sysfs properties.
* This list of devs comes from looking at /dev entries, or from asking libudev.
*/
dev_cache_scan();
/*
* Match entries from cmd->use_devices with device structs in dev-cache.
*/
device_ids_match(cmd);
return 1;
}
/*
* The alternative to setup_devices() when the command is interested
* in using only one PV.
*
* Add one system device to dev-cache, and attempt to
* match its dev-cache entry to a devices_file entry.
*/
int setup_device(struct cmd_context *cmd, const char *devname)
{
struct stat buf;
struct device *dev;
if (cmd->enable_devices_list) {
if (!_setup_devices_list(cmd))
return_0;
goto scan;
}
if (!setup_devices_file(cmd))
return_0;
if (!cmd->enable_devices_file)
goto scan;
if (!devices_file_exists(cmd)) {
log_debug("Devices file not found, ignoring.");
cmd->enable_devices_file = 0;
goto scan;
}
if (!lock_devices_file(cmd, LOCK_SH)) {
log_error("Failed to lock the devices file to read.");
return 0;
}
if (!device_ids_read(cmd)) {
log_error("Failed to read the devices file.");
unlock_devices_file(cmd);
return 0;
}
unlock_devices_file(cmd);
scan:
if (stat(devname, &buf) < 0) {
log_error("Cannot access device %s.", devname);
return 0;
}
if (!S_ISBLK(buf.st_mode)) {
log_error("Invalid device type %s.", devname);
return 0;
}
if (!_insert_dev(devname, buf.st_rdev))
return_0;
if (!(dev = (struct device *) dm_hash_lookup(_cache.names, devname)))
return_0;
/* Match this device to an entry in devices_file so it will not
be rejected by filter-deviceid. */
if (cmd->enable_devices_file)
device_ids_match_dev(cmd, dev);
return 1;
}
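The _verify_aliases() comment above describes dropping alias paths whose device number no longer matches the struct device they were cached for. A standalone sketch of that check; the function name and example path are made up for illustration.
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
/* Return 1 if 'path' still refers to device number 'expected', else 0. */
static int alias_still_valid(const char *path, dev_t expected)
{
	struct stat st;
	if (stat(path, &st))
		return 0;       /* path vanished, e.g. the LV was deactivated */
	return st.st_rdev == expected;
}
int main(void)
{
	struct stat st;
	if (stat("/dev/sda", &st) == 0)
		printf("/dev/sda still valid: %d\n",
		       alias_still_valid("/dev/sda", st.st_rdev));
	return 0;
}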

View File

@@ -17,7 +17,6 @@
#define _LVM_DEV_CACHE_H
#include "lib/device/device.h"
#include "lib/device/dev-type.h"
#include "lib/misc/lvm-wrappers.h"
struct cmd_context;
@@ -28,12 +27,13 @@ struct cmd_context;
struct dev_filter {
int (*passes_filter) (struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name);
void (*destroy) (struct dev_filter *f);
void (*wipe) (struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name);
void (*wipe) (struct dev_filter *f);
void *private;
unsigned use_count;
const char *name;
};
int dev_cache_index_devs(void);
struct dm_list *dev_cache_get_dev_list_for_vgid(const char *vgid);
struct dm_list *dev_cache_get_dev_list_for_lvid(const char *lvid);
@@ -53,10 +53,10 @@ int dev_cache_has_scanned(void);
int dev_cache_add_dir(const char *path);
struct device *dev_cache_get(struct cmd_context *cmd, const char *name, struct dev_filter *f);
const char *dev_cache_filtered_reason(const char *name);
struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t device, struct dev_filter *f, int *filtered);
struct device *dev_hash_get(const char *name);
// TODO
struct device *dev_cache_get_by_devt(struct cmd_context *cmd, dev_t device, struct dev_filter *f);
void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev);
@@ -72,12 +72,4 @@ void dev_reset_error_count(struct cmd_context *cmd);
void dev_cache_failed_path(struct device *dev, const char *path);
bool dev_cache_has_md_with_end_superblock(struct dev_types *dt);
int get_sysfs_value(const char *path, char *buf, size_t buf_size, int error_if_no_value);
int setup_devices_file(struct cmd_context *cmd);
int setup_devices(struct cmd_context *cmd);
int setup_device(struct cmd_context *cmd, const char *devname);
#endif

View File

@@ -53,6 +53,261 @@
static unsigned _dev_size_seqno = 1;
static const char *_reasons[] = {
"dev signatures",
"PV labels",
"VG metadata header",
"VG metadata content",
"extra VG metadata header",
"extra VG metadata content",
"LVM1 metadata",
"pool metadata",
"LV content",
"logging",
};
static const char *_reason_text(dev_io_reason_t reason)
{
return _reasons[(unsigned) reason];
}
/*-----------------------------------------------------------------
* The standard io loop that keeps submitting an io until it's
* all gone.
*---------------------------------------------------------------*/
static int _io(struct device_area *where, char *buffer, int should_write, dev_io_reason_t reason)
{
int fd = dev_fd(where->dev);
ssize_t n = 0;
size_t total = 0;
if (fd < 0) {
log_error("Attempt to read an unopened device (%s).",
dev_name(where->dev));
return 0;
}
log_debug_io("%s %s:%8" PRIu64 " bytes (sync) at %" PRIu64 "%s (for %s)",
should_write ? "Write" : "Read ", dev_name(where->dev),
where->size, (uint64_t) where->start,
(should_write && test_mode()) ? " (test mode - suppressed)" : "", _reason_text(reason));
/*
* Skip all writes in test mode.
*/
if (should_write && test_mode())
return 1;
if (where->size > SSIZE_MAX) {
log_error("Read size too large: %" PRIu64, where->size);
return 0;
}
if (lseek(fd, (off_t) where->start, SEEK_SET) == (off_t) -1) {
log_error("%s: lseek %" PRIu64 " failed: %s",
dev_name(where->dev), (uint64_t) where->start,
strerror(errno));
return 0;
}
while (total < (size_t) where->size) {
do
n = should_write ?
write(fd, buffer, (size_t) where->size - total) :
read(fd, buffer, (size_t) where->size - total);
while ((n < 0) && ((errno == EINTR) || (errno == EAGAIN)));
if (n < 0)
log_error_once("%s: %s failed after %" PRIu64 " of %" PRIu64
" at %" PRIu64 ": %s", dev_name(where->dev),
should_write ? "write" : "read",
(uint64_t) total,
(uint64_t) where->size,
(uint64_t) where->start, strerror(errno));
if (n <= 0)
break;
total += n;
buffer += n;
}
return (total == (size_t) where->size);
}
/*-----------------------------------------------------------------
* LVM2 uses O_DIRECT when performing metadata io, which requires
* block size aligned accesses. If any io is not aligned we have
* to perform the io via a bounce buffer, obviously this is quite
* inefficient.
*---------------------------------------------------------------*/
/*
* Get the physical and logical block size for a device.
*/
int dev_get_block_size(struct device *dev, unsigned int *physical_block_size, unsigned int *block_size)
{
const char *name = dev_name(dev);
int fd = dev->bcache_fd;
int do_close = 0;
int r = 1;
if ((dev->phys_block_size > 0) && (dev->block_size > 0)) {
*physical_block_size = (unsigned int)dev->phys_block_size;
*block_size = (unsigned int)dev->block_size;
return 1;
}
if (fd <= 0) {
if (!dev->open_count) {
if (!dev_open_readonly(dev))
return_0;
do_close = 1;
}
fd = dev_fd(dev);
}
if (dev->block_size == -1) {
if (ioctl(fd, BLKBSZGET, &dev->block_size) < 0) {
log_sys_error("ioctl BLKBSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Block size is %u bytes", name, dev->block_size);
}
#ifdef BLKPBSZGET
/* BLKPBSZGET is available in kernel >= 2.6.32 only */
if (dev->phys_block_size == -1) {
if (ioctl(fd, BLKPBSZGET, &dev->phys_block_size) < 0) {
log_sys_error("ioctl BLKPBSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Physical block size is %u bytes", name, dev->phys_block_size);
}
#elif defined (BLKSSZGET)
/* if we can't get physical block size, just use logical block size instead */
if (dev->phys_block_size == -1) {
if (ioctl(fd, BLKSSZGET, &dev->phys_block_size) < 0) {
log_sys_error("ioctl BLKSSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Physical block size can't be determined: Using logical block size of %u bytes", name, dev->phys_block_size);
}
#else
/* if even BLKSSZGET is not available, use default 512b */
if (dev->phys_block_size == -1) {
dev->phys_block_size = 512;
log_debug_devs("%s: Physical block size can't be determined: Using block size of %u bytes instead", name, dev->phys_block_size);
}
#endif
*physical_block_size = (unsigned int) dev->phys_block_size;
*block_size = (unsigned int) dev->block_size;
out:
if (do_close && !dev_close_immediate(dev))
stack;
return r;
}
/*
* Widens a region to be an aligned region.
*/
static void _widen_region(unsigned int block_size, struct device_area *region,
struct device_area *result)
{
uint64_t mask = block_size - 1, delta;
memcpy(result, region, sizeof(*result));
/* adjust the start */
delta = result->start & mask;
if (delta) {
result->start -= delta;
result->size += delta;
}
/* adjust the end */
delta = (result->start + result->size) & mask;
if (delta)
result->size += block_size - delta;
}
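/*
 * Worked example (editorial note, not part of the patch) of the rounding
 * above: with block_size = 4096 (mask = 4095), a request of 500 bytes at
 * offset 4094 widens to 8192 bytes at offset 0:
 *
 *   start: delta = 4094 & 4095 = 4094 -> start = 0, size = 500 + 4094 = 4594
 *   end:   delta = 4594 & 4095 = 498  -> size = 4594 + (4096 - 498) = 8192
 *
 * so both ends land on block boundaries before the O_DIRECT io is issued.
 */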
static int _aligned_io(struct device_area *where, char *buffer,
int should_write, dev_io_reason_t reason)
{
char *bounce, *bounce_buf;
unsigned int physical_block_size = 0;
unsigned int block_size = 0;
unsigned buffer_was_widened = 0;
uintptr_t mask;
struct device_area widened;
int r = 0;
if (!(where->dev->flags & DEV_REGULAR) &&
!dev_get_block_size(where->dev, &physical_block_size, &block_size))
return_0;
if (!block_size)
block_size = lvm_getpagesize();
mask = block_size - 1;
_widen_region(block_size, where, &widened);
/* Did we widen the buffer? When writing, this means read-modify-write. */
if (where->size != widened.size || where->start != widened.start) {
buffer_was_widened = 1;
log_debug_io("Widening request for %" PRIu64 " bytes at %" PRIu64 " to %" PRIu64 " bytes at %" PRIu64 " on %s (for %s)",
where->size, (uint64_t) where->start, widened.size, (uint64_t) widened.start, dev_name(where->dev), _reason_text(reason));
} else if (!((uintptr_t) buffer & mask))
/* Perform the I/O directly. */
return _io(where, buffer, should_write, reason);
/* Allocate a bounce buffer with an extra block */
if (!(bounce_buf = bounce = malloc((size_t) widened.size + block_size))) {
log_error("Bounce buffer malloc failed");
return 0;
}
/*
* Realign start of bounce buffer (using the extra sector)
*/
if (((uintptr_t) bounce) & mask)
bounce = (char *) ((((uintptr_t) bounce) + mask) & ~mask);
/* Do we need to read into the bounce buffer? */
if ((!should_write || buffer_was_widened) &&
!_io(&widened, bounce, 0, reason)) {
if (!should_write)
goto_out;
/* FIXME Handle errors properly! */
/* FIXME pre-extend the file */
memset(bounce, '\n', widened.size);
}
if (should_write) {
memcpy(bounce + (where->start - widened.start), buffer,
(size_t) where->size);
/* ... then we write */
if (!(r = _io(&widened, bounce, 1, reason)))
stack;
goto out;
}
memcpy(buffer, bounce + (where->start - widened.start),
(size_t) where->size);
r = 1;
out:
free(bounce_buf);
return r;
}
static int _dev_get_size_file(struct device *dev, uint64_t *size)
{
const char *name = dev_name(dev);
@@ -86,9 +341,6 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
int fd = dev->bcache_fd;
int do_close = 0;
if (dm_list_empty(&dev->aliases))
return 0;
if (dev->size_seqno == _dev_size_seqno) {
log_very_verbose("%s: using cached size %" PRIu64 " sectors",
name, dev->size);
@@ -97,16 +349,16 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
}
if (fd <= 0) {
if (!dev_open_readonly_quiet(dev))
if (!dev_open_readonly(dev))
return_0;
fd = dev_fd(dev);
do_close = 1;
}
if (ioctl(fd, BLKGETSIZE64, size) < 0) {
log_warn("WARNING: %s: ioctl BLKGETSIZE64 %s", name, strerror(errno));
log_sys_error("ioctl BLKGETSIZE64", name);
if (do_close && !dev_close_immediate(dev))
stack;
log_sys_error("close", name);
return 0;
}
@@ -117,7 +369,7 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
log_very_verbose("%s: size is %" PRIu64 " sectors", name, *size);
if (do_close && !dev_close_immediate(dev))
stack;
log_sys_error("close", name);
return 1;
}
@@ -131,14 +383,11 @@ static int _dev_read_ahead_dev(struct device *dev, uint32_t *read_ahead)
return 1;
}
if (!dev_open_readonly_quiet(dev)) {
log_warn("WARNING: Failed to open %s to get readahead %s.",
dev_name(dev), strerror(errno));
return 0;
}
if (!dev_open_readonly(dev))
return_0;
if (ioctl(dev->fd, BLKRAGET, &read_ahead_long) < 0) {
log_warn("WARNING: %s: ioctl BLKRAGET %s.", dev_name(dev), strerror(errno));
log_sys_error("ioctl BLKRAGET", dev_name(dev));
if (!dev_close_immediate(dev))
stack;
return 0;
@@ -171,7 +420,7 @@ static int _dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64
test_mode() ? " (test mode - suppressed)" : "");
if (!test_mode() && ioctl(dev->fd, BLKDISCARD, &discard_range) < 0) {
log_warn("WARNING: %s: ioctl BLKDISCARD at offset %" PRIu64 " size %" PRIu64 " failed: %s.",
log_error("%s: BLKDISCARD ioctl at offset %" PRIu64 " size %" PRIu64 " failed: %s.",
dev_name(dev), offset_bytes, size_bytes, strerror(errno));
if (!dev_close_immediate(dev))
stack;
@@ -185,60 +434,6 @@ static int _dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64
return 1;
}
int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
unsigned int *logical_block_size)
{
int fd = dev->bcache_fd;
int do_close = 0;
unsigned int pbs = 0;
unsigned int lbs = 0;
if (dev->physical_block_size || dev->logical_block_size) {
*physical_block_size = dev->physical_block_size;
*logical_block_size = dev->logical_block_size;
return 1;
}
if (fd <= 0) {
if (!dev_open_readonly_quiet(dev))
return 0;
fd = dev_fd(dev);
do_close = 1;
}
#ifdef BLKPBSZGET /* not defined before kernel version 2.6.32 (e.g. rhel5) */
/*
* BLKPBSZGET from kernel comment for blk_queue_physical_block_size:
* "the lowest possible sector size that the hardware can operate on
* without reverting to read-modify-write operations"
*/
if (ioctl(fd, BLKPBSZGET, &pbs)) {
stack;
pbs = 0;
}
#endif
/*
* BLKSSZGET from kernel comment for blk_queue_logical_block_size:
* "the lowest possible block size that the storage device can address."
*/
if (ioctl(fd, BLKSSZGET, &lbs)) {
stack;
lbs = 0;
}
dev->physical_block_size = pbs;
dev->logical_block_size = lbs;
*physical_block_size = pbs;
*logical_block_size = lbs;
if (do_close && !dev_close_immediate(dev))
stack;
return 1;
}
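A small sketch of the BLKPBSZGET / BLKSSZGET queries performed above, falling back to 0 when an ioctl is unsupported. The device path is hypothetical.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	unsigned int pbs = 0, lbs = 0;
	int fd = open("/dev/sda", O_RDONLY);   /* hypothetical device */

	if (fd < 0) {
		perror("open");
		return 1;
	}
#ifdef BLKPBSZGET
	if (ioctl(fd, BLKPBSZGET, &pbs) < 0)   /* physical block size */
		pbs = 0;
#endif
	if (ioctl(fd, BLKSSZGET, &lbs) < 0)    /* logical (addressable) block size */
		lbs = 0;
	printf("physical %u logical %u\n", pbs, lbs);
	close(fd);
	return 0;
}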
/*-----------------------------------------------------------------
* Public functions
*---------------------------------------------------------------*/
@@ -386,6 +581,7 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
dev->flags |= DEV_O_DIRECT_TESTED;
#endif
dev->open_count++;
dev->flags &= ~DEV_ACCESSED_W;
if (need_rw)
dev->flags |= DEV_OPENED_RW;
@@ -447,11 +643,23 @@ int dev_open_readonly_quiet(struct device *dev)
return dev_open_flags(dev, O_RDONLY, 1, 1);
}
int dev_test_excl(struct device *dev)
{
int flags = 0;
flags |= O_EXCL;
flags |= O_RDWR;
return dev_open_flags(dev, flags, 1, 1);
}
static void _close(struct device *dev)
{
if (close(dev->fd))
log_sys_debug("close", dev_name(dev));
log_sys_error("close", dev_name(dev));
dev->fd = -1;
dev->phys_block_size = -1;
dev->block_size = -1;
log_debug_devs("Closed %s", dev_name(dev));
@@ -467,6 +675,11 @@ static int _dev_close(struct device *dev, int immediate)
return 0;
}
#ifndef O_DIRECT_SUPPORT
if (dev->flags & DEV_ACCESSED_W)
dev_flush(dev);
#endif
if (dev->open_count > 0)
dev->open_count--;
@@ -489,3 +702,131 @@ int dev_close_immediate(struct device *dev)
{
return _dev_close(dev, 1);
}
int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer)
{
struct device_area where;
int ret;
if (!dev->open_count)
return_0;
where.dev = dev;
where.start = offset;
where.size = len;
ret = _aligned_io(&where, buffer, 0, reason);
return ret;
}
/*
* Read from 'dev' into 'buf', possibly in 2 distinct regions, denoted
* by (offset,len) and (offset2,len2). Thus, the total size of
* 'buf' should be len+len2.
*/
int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
uint64_t offset2, size_t len2, dev_io_reason_t reason, char *buf)
{
if (!dev_read(dev, offset, len, reason, buf)) {
log_error("Read from %s failed", dev_name(dev));
return 0;
}
/*
* The second region is optional, and allows for
* a circular buffer on the device.
*/
if (!len2)
return 1;
if (!dev_read(dev, offset2, len2, reason, buf + len)) {
log_error("Circular read from %s failed",
dev_name(dev));
return 0;
}
return 1;
}
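A rough sketch of how a wrapped read splits into the two regions dev_read_circular() expects, assuming the wrap simply restarts at the beginning of the area; the area and record sizes below are hypothetical, not taken from LVM's metadata layout.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t area_start = 4096, area_size = 1048576;  /* hypothetical 1MiB area */
	uint64_t rec_offset = 1040000, rec_size = 20000;  /* record wraps past the end */
	uint64_t len = rec_size, len2 = 0, offset2 = 0;

	if (rec_offset + rec_size > area_size) {
		len = area_size - rec_offset;   /* tail of the area */
		len2 = rec_size - len;          /* wrapped remainder */
		offset2 = area_start;           /* restart at the area start */
	}
	printf("read1: off %llu len %llu, read2: off %llu len %llu\n",
	       (unsigned long long) (area_start + rec_offset),
	       (unsigned long long) len,
	       (unsigned long long) offset2,
	       (unsigned long long) len2);
	return 0;
}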
/* FIXME If O_DIRECT can't extend file, dev_extend first; dev_truncate after.
* But this fails if concurrent processes are writing.
*/
/* FIXME pre-extend the file */
int dev_append(struct device *dev, size_t len, dev_io_reason_t reason, char *buffer)
{
int r;
if (!dev->open_count)
return_0;
r = dev_write(dev, dev->end, len, reason, buffer);
dev->end += (uint64_t) len;
#ifndef O_DIRECT_SUPPORT
dev_flush(dev);
#endif
return r;
}
int dev_write(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer)
{
struct device_area where;
int ret;
if (!dev->open_count)
return_0;
if (!len) {
log_error(INTERNAL_ERROR "Attempted to write 0 bytes to %s at " FMTu64, dev_name(dev), offset);
return 0;
}
where.dev = dev;
where.start = offset;
where.size = len;
dev->flags |= DEV_ACCESSED_W;
ret = _aligned_io(&where, buffer, 1, reason);
return ret;
}
int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, int value)
{
size_t s;
char buffer[4096] __attribute__((aligned(8)));
if (!dev_open(dev))
return_0;
if ((offset % SECTOR_SIZE) || (len % SECTOR_SIZE))
log_debug_devs("Wiping %s at %" PRIu64 " length %" PRIsize_t,
dev_name(dev), offset, len);
else
log_debug_devs("Wiping %s at sector %" PRIu64 " length %" PRIsize_t
" sectors", dev_name(dev), offset >> SECTOR_SHIFT,
len >> SECTOR_SHIFT);
memset(buffer, value, sizeof(buffer));
while (1) {
s = len > sizeof(buffer) ? sizeof(buffer) : len;
if (!dev_write(dev, offset, s, reason, buffer))
break;
len -= s;
if (!len)
break;
offset += s;
}
dev->flags |= DEV_ACCESSED_W;
if (!dev_close(dev))
stack;
return (len == 0);
}
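A sketch of the chunked-write loop dev_set() uses above: fill a fixed-size buffer once, then write it repeatedly until the requested length is covered. The target here is a temporary file rather than a block device, and the path and length are hypothetical.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buffer[4096];
	FILE *fp = fopen("/tmp/wipe-demo", "w");   /* hypothetical target */
	size_t len = 10000, s;                     /* bytes to wipe */

	if (!fp)
		return 1;
	memset(buffer, 0, sizeof(buffer));
	while (len) {
		s = len > sizeof(buffer) ? sizeof(buffer) : len;
		if (fwrite(buffer, 1, s, fp) != s)
			break;
		len -= s;
	}
	fclose(fp);
	return len == 0 ? 0 : 1;
}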


@@ -16,7 +16,6 @@
#include "lib/misc/lib.h"
#include "lib/device/dev-type.h"
#include "lib/mm/xlate.h"
#include "lib/misc/crc.h"
#ifdef UDEV_SYNC_SUPPORT
#include <libudev.h> /* for MD detection using udev db records */
#include "lib/device/dev-ext-udev-constants.h"
@@ -49,116 +48,48 @@ static int _dev_has_md_magic(struct device *dev, uint64_t sb_offset)
return 0;
}
#define IMSM_SIGNATURE "Intel Raid ISM Cfg Sig. "
#define IMSM_SIG_LEN (sizeof(IMSM_SIGNATURE) - 1)
static int _dev_has_imsm_magic(struct device *dev, uint64_t devsize_sectors)
{
char imsm_signature[IMSM_SIG_LEN];
uint64_t off = (devsize_sectors * 512) - 1024;
unsigned int physical_block_size = 0;
unsigned int logical_block_size = 0;
if (!dev_get_direct_block_sizes(dev, &physical_block_size, &logical_block_size))
return_0;
if (logical_block_size == 4096)
off = (devsize_sectors * 512) - 8192;
else
off = (devsize_sectors * 512) - 1024;
if (!dev_read_bytes(dev, off, IMSM_SIG_LEN, imsm_signature))
return_0;
if (!memcmp(imsm_signature, IMSM_SIGNATURE, IMSM_SIG_LEN))
return 1;
return 0;
}
#define DDF_MAGIC 0xDE11DE11
struct ddf_header {
uint32_t magic;
uint32_t crc;
char guid[24];
char revision[8];
char padding[472];
};
static int _dev_has_ddf_magic(struct device *dev, uint64_t devsize_sectors, uint64_t *sb_offset)
{
struct ddf_header hdr;
uint32_t crc, our_crc;
uint64_t off;
uint64_t devsize_bytes = devsize_sectors * 512;
if (devsize_bytes < 0x30000)
return 0;
/* 512 bytes before the end of device (from libblkid) */
off = ((devsize_bytes / 0x200) - 1) * 0x200;
if (!dev_read_bytes(dev, off, 512, &hdr))
return_0;
if ((hdr.magic == cpu_to_be32(DDF_MAGIC)) ||
(hdr.magic == cpu_to_le32(DDF_MAGIC))) {
crc = hdr.crc;
hdr.crc = 0xffffffff;
our_crc = calc_crc(0, (const uint8_t *)&hdr, 512);
if ((cpu_to_be32(our_crc) == crc) ||
(cpu_to_le32(our_crc) == crc)) {
*sb_offset = off;
return 1;
} else {
log_debug_devs("Found md ddf magic at %llu wrong crc %x disk %x %s",
(unsigned long long)off, our_crc, crc, dev_name(dev));
return 0;
}
}
/* 128KB before the end of device (from libblkid) */
off = ((devsize_bytes / 0x200) - 257) * 0x200;
if (!dev_read_bytes(dev, off, 512, &hdr))
return_0;
if ((hdr.magic == cpu_to_be32(DDF_MAGIC)) ||
(hdr.magic == cpu_to_le32(DDF_MAGIC))) {
crc = hdr.crc;
hdr.crc = 0xffffffff;
our_crc = calc_crc(0, (const uint8_t *)&hdr, 512);
if ((cpu_to_be32(our_crc) == crc) ||
(cpu_to_le32(our_crc) == crc)) {
*sb_offset = off;
return 1;
} else {
log_debug_devs("Found md ddf magic at %llu wrong crc %x disk %x %s",
(unsigned long long)off, our_crc, crc, dev_name(dev));
return 0;
}
}
return 0;
}
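A sketch of the "accept either endianness" magic check used above, written with the glibc htobe32()/htole32() helpers instead of LVM's xlate macros; the on-disk value here is hypothetical.

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

#define DDF_MAGIC 0xDE11DE11

int main(void)
{
	uint32_t on_disk = htobe32(DDF_MAGIC);   /* pretend this was read from disk */

	if (on_disk == htobe32(DDF_MAGIC) || on_disk == htole32(DDF_MAGIC))
		printf("ddf magic found\n");
	else
		printf("no ddf magic\n");
	return 0;
}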
/*
* _udev_dev_is_md_component() only works if
* external_device_info_source="udev"
*
* but
*
* udev_dev_is_md_component() in dev-type.c only works if
* obtain_device_list_from_udev=1
*
* and neither of those config settings matches very well
* with what we're doing here.
* Calculate the position of the superblock.
* It is always aligned to a 4K boundary and
* depending on minor_version, it can be:
* 0: At least 8K, but less than 12K, from end of device
* 1: At start of device
* 2: 4K from start of device.
*/
typedef enum {
MD_MINOR_VERSION_MIN,
MD_MINOR_V0 = MD_MINOR_VERSION_MIN,
MD_MINOR_V1,
MD_MINOR_V2,
MD_MINOR_VERSION_MAX = MD_MINOR_V2
} md_minor_version_t;
static uint64_t _v1_sb_offset(uint64_t size, md_minor_version_t minor_version)
{
uint64_t sb_offset;
switch(minor_version) {
case MD_MINOR_V0:
sb_offset = (size - 8 * 2) & ~(4 * 2 - 1ULL);
break;
case MD_MINOR_V1:
sb_offset = 0;
break;
case MD_MINOR_V2:
sb_offset = 4 * 2;
break;
default:
log_warn(INTERNAL_ERROR "WARNING: Unknown minor version %d.",
minor_version);
return 0;
}
sb_offset <<= SECTOR_SHIFT;
return sb_offset;
}
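A sketch of the md v1.x superblock offset arithmetic above: sizes are in 512-byte sectors, the v1.0 superblock sits at least 8K from the end of the device rounded down to a 4K boundary, v1.1 is at the start, and v1.2 is 4K in. The device size used below is hypothetical.

#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT 9

int main(void)
{
	uint64_t size = 2097152;                   /* hypothetical device: 1GiB in sectors */
	uint64_t v1_0 = ((size - 8 * 2) & ~(4 * 2 - 1ULL)) << SECTOR_SHIFT;
	uint64_t v1_1 = 0;                         /* at start of device */
	uint64_t v1_2 = (4 * 2) << SECTOR_SHIFT;   /* 4K from start */

	printf("v1.0 %llu v1.1 %llu v1.2 %llu (bytes)\n",
	       (unsigned long long) v1_0,
	       (unsigned long long) v1_1,
	       (unsigned long long) v1_2);
	return 0;
}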
#ifdef UDEV_SYNC_SUPPORT
static int _udev_dev_is_md_component(struct device *dev)
static int _udev_dev_is_md(struct device *dev)
{
const char *value;
struct dev_ext *ext;
@@ -166,17 +97,14 @@ static int _udev_dev_is_md_component(struct device *dev)
if (!(ext = dev_ext_get(dev)))
return_0;
if (!(value = udev_device_get_property_value((struct udev_device *)ext->handle, DEV_EXT_UDEV_BLKID_TYPE))) {
dev->flags |= DEV_UDEV_INFO_MISSING;
if (!(value = udev_device_get_property_value((struct udev_device *)ext->handle, DEV_EXT_UDEV_BLKID_TYPE)))
return 0;
}
return !strcmp(value, DEV_EXT_UDEV_BLKID_TYPE_SW_RAID);
}
#else
static int _udev_dev_is_md_component(struct device *dev)
static int _udev_dev_is_md(struct device *dev)
{
dev->flags |= DEV_UDEV_INFO_MISSING;
return 0;
}
#endif
@@ -184,9 +112,10 @@ static int _udev_dev_is_md_component(struct device *dev)
/*
* Returns -1 on error
*/
static int _native_dev_is_md_component(struct device *dev, uint64_t *offset_found, int full)
static int _native_dev_is_md(struct device *dev, uint64_t *offset_found, int full)
{
uint64_t size, sb_offset = 0;
md_minor_version_t minor;
uint64_t size, sb_offset;
int ret;
if (!scan_bcache)
@@ -201,9 +130,9 @@ static int _native_dev_is_md_component(struct device *dev, uint64_t *offset_foun
return 0;
/*
* Some md versions locate the magic number at the end of the device.
* Those checks can't be satisfied with the initial scan data, and
* require an extra read i/o at the end of every device. Issuing
* Old md versions locate the magic number at the end of the device.
* Those checks can't be satisfied with the initial bcache data, and
* would require an extra read i/o at the end of every device. Issuing
* an extra read to every device in every command, just to check for
* the old md format is a bad tradeoff.
*
@@ -214,81 +143,42 @@ static int _native_dev_is_md_component(struct device *dev, uint64_t *offset_foun
* and set it for commands that could possibly write to an md dev
* (pvcreate/vgcreate/vgextend).
*/
/*
* md superblock version 1.1 at offset 0 from start
*/
if (_dev_has_md_magic(dev, 0)) {
log_debug_devs("Found md magic number at offset 0 of %s.", dev_name(dev));
ret = 1;
goto out;
}
/*
* md superblock version 1.2 at offset 4KB from start
*/
if (_dev_has_md_magic(dev, 4096)) {
log_debug_devs("Found md magic number at offset 4096 of %s.", dev_name(dev));
ret = 1;
goto out;
}
if (!full) {
sb_offset = 0;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset 0 of %s.", dev_name(dev));
ret = 1;
goto out;
}
sb_offset = 8 << SECTOR_SHIFT;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset %d of %s.", (int)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
ret = 0;
goto out;
}
/*
* Handle superblocks at the end of the device.
*/
/*
* md superblock version 0 at 64KB from end of device
* (after end is aligned to 64KB)
*/
/* Check if it is an md component device. */
/* Version 0.90.0 */
sb_offset = MD_NEW_SIZE_SECTORS(size) << SECTOR_SHIFT;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
/*
* md superblock version 1.0 at 8KB from end of device
*/
sb_offset = ((size - 8 * 2) & ~(4 * 2 - 1ULL)) << SECTOR_SHIFT;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
/*
* md imsm superblock 1K from end of device
*/
if (_dev_has_imsm_magic(dev, size)) {
log_debug_devs("Found md imsm magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
sb_offset = 1024;
ret = 1;
goto out;
}
/*
* md ddf superblock 512 bytes from end, or 128KB from end
*/
if (_dev_has_ddf_magic(dev, size, &sb_offset)) {
log_debug_devs("Found md ddf magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
minor = MD_MINOR_VERSION_MIN;
/* Version 1, try v1.0 -> v1.2 */
do {
sb_offset = _v1_sb_offset(size, minor);
if (_dev_has_md_magic(dev, sb_offset)) {
ret = 1;
goto out;
}
} while (++minor <= MD_MINOR_VERSION_MAX);
ret = 0;
out:
@@ -298,7 +188,7 @@ out:
return ret;
}
int dev_is_md_component(struct device *dev, uint64_t *offset_found, int full)
int dev_is_md(struct device *dev, uint64_t *offset_found, int full)
{
int ret;
@@ -308,25 +198,19 @@ int dev_is_md_component(struct device *dev, uint64_t *offset_found, int full)
* information is not in udev db.
*/
if ((dev->ext.src == DEV_EXT_NONE) || offset_found) {
ret = _native_dev_is_md_component(dev, offset_found, full);
ret = _native_dev_is_md(dev, offset_found, full);
if (!full) {
if (!ret || (ret == -EAGAIN)) {
if (udev_dev_is_md_component(dev))
ret = 1;
return 1;
}
}
if (ret && (ret != -EAGAIN))
dev->flags |= DEV_IS_MD_COMPONENT;
return ret;
}
if (dev->ext.src == DEV_EXT_UDEV) {
ret = _udev_dev_is_md_component(dev);
if (ret && (ret != -EAGAIN))
dev->flags |= DEV_IS_MD_COMPONENT;
return ret;
}
if (dev->ext.src == DEV_EXT_UDEV)
return _udev_dev_is_md(dev);
log_error(INTERNAL_ERROR "Missing hook for MD device recognition "
"using external device info source %s", dev_ext_name(dev));
@@ -396,12 +280,12 @@ static int _md_sysfs_attribute_scanf(struct dev_types *dt,
return ret;
if (!(fp = fopen(path, "r"))) {
log_debug("_md_sysfs_attribute_scanf fopen failed %s", path);
log_sys_error("fopen", path);
return ret;
}
if (!fgets(buffer, sizeof(buffer), fp)) {
log_debug("_md_sysfs_attribute_scanf fgets failed %s", path);
log_sys_error("fgets", path);
goto out;
}
@@ -543,7 +427,7 @@ int dev_is_md_with_end_superblock(struct dev_types *dt, struct device *dev)
if (_md_sysfs_attribute_scanf(dt, dev, attribute,
"%s", &version_string) != 1)
return 0;
return -1;
log_very_verbose("Device %s %s is %s.",
dev_name(dev), attribute, version_string);
@@ -555,7 +439,7 @@ int dev_is_md_with_end_superblock(struct dev_types *dt, struct device *dev)
#else
int dev_is_md_component(struct device *dev __attribute__((unused)),
int dev_is_md(struct device *dev __attribute__((unused)),
uint64_t *sb __attribute__((unused)))
{
return 0;


@@ -21,7 +21,6 @@
#include "lib/metadata/metadata.h"
#include "lib/device/bcache.h"
#include "lib/label/label.h"
#include "lib/commands/toolcontext.h"
#ifdef BLKID_WIPING_SUPPORT
#include <blkid.h>
@@ -36,16 +35,48 @@
#include <ctype.h>
/*
* An nvme device has major number 259 (BLKEXT), minor number <minor>,
* and reading /sys/dev/block/259:<minor>/device/dev shows a character
* device cmajor:cminor where cmajor matches the major number of the
* nvme character device entry in /proc/devices. Checking all of that
* is excessive and unnecessary compared to just comparing /dev/name*.
* dev is pmem if /sys/dev/block/<major>:<minor>/queue/dax is 1
*/
int dev_is_nvme(struct dev_types *dt, struct device *dev)
int dev_is_pmem(struct device *dev)
{
return (dev->flags & DEV_IS_NVME) ? 1 : 0;
FILE *fp;
char path[PATH_MAX];
char buffer[64];
int is_pmem = 0;
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d/queue/dax",
dm_sysfs_dir(),
(int) MAJOR(dev->dev),
(int) MINOR(dev->dev)) < 0) {
log_warn("Sysfs path for %s dax is too long.", dev_name(dev));
return 0;
}
if (!(fp = fopen(path, "r")))
return 0;
if (!fgets(buffer, sizeof(buffer), fp)) {
log_warn("Failed to read %s.", path);
if (fclose(fp))
log_sys_debug("fclose", path);
return 0;
} else if (sscanf(buffer, "%d", &is_pmem) != 1) {
log_warn("Failed to parse %s '%s'.", path, buffer);
if (fclose(fp))
log_sys_debug("fclose", path);
return 0;
}
if (fclose(fp))
log_sys_debug("fclose", path);
if (is_pmem) {
log_debug("%s is pmem", dev_name(dev));
return 1;
}
return 0;
}
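A sketch of the sysfs boolean probe used above: read /sys/dev/block/<major>:<minor>/queue/dax and treat "1" as pmem. The major:minor pair below is hypothetical.

#include <stdio.h>

int main(void)
{
	char path[256], buf[8];
	int major = 259, minor = 0, dax = 0;   /* hypothetical device */
	FILE *fp;

	snprintf(path, sizeof(path), "/sys/dev/block/%d:%d/queue/dax",
		 major, minor);
	if (!(fp = fopen(path, "r")))
		return 0;                      /* no attribute: not pmem */
	if (fgets(buf, sizeof(buf), fp) && sscanf(buf, "%d", &dax) == 1 && dax)
		printf("%s reports dax/pmem\n", path);
	fclose(fp);
	return 0;
}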
int dev_is_lv(struct device *dev)
@@ -53,7 +84,6 @@ int dev_is_lv(struct device *dev)
FILE *fp;
char path[PATH_MAX];
char buffer[64];
int ret = 0;
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d/dm/uuid",
dm_sysfs_dir(),
@@ -66,15 +96,17 @@ int dev_is_lv(struct device *dev)
if (!(fp = fopen(path, "r")))
return 0;
if (!fgets(buffer, sizeof(buffer), fp))
log_debug("Failed to read %s.", path);
else if (!strncmp(buffer, "LVM-", 4))
ret = 1;
if (!fgets(buffer, sizeof(buffer), fp)) {
log_warn("Failed to read %s.", path);
fclose(fp);
return 0;
}
if (fclose(fp))
log_sys_debug("fclose", path);
fclose(fp);
return ret;
if (!strncmp(buffer, "LVM-", 4))
return 1;
return 0;
}
struct dev_types *create_dev_types(const char *proc_dir,
@@ -211,7 +243,7 @@ struct dev_types *create_dev_types(const char *proc_dir,
log_error("Expecting string in devices/types "
"in config file");
if (fclose(pd))
log_sys_debug("fclose", proc_devices);
log_sys_error("fclose", proc_devices);
goto bad;
}
dev_len = strlen(cv->v.str);
@@ -222,7 +254,7 @@ struct dev_types *create_dev_types(const char *proc_dir,
"in devices/types in config file",
name);
if (fclose(pd))
log_sys_debug("fclose", proc_devices);
log_sys_error("fclose", proc_devices);
goto bad;
}
if (!cv->v.i) {
@@ -230,7 +262,7 @@ struct dev_types *create_dev_types(const char *proc_dir,
"%s in devices/types in config file",
name);
if (fclose(pd))
log_sys_debug("fclose", proc_devices);
log_sys_error("fclose", proc_devices);
goto bad;
}
if (dev_len <= strlen(line + i) &&
@@ -283,9 +315,6 @@ int dev_subsystem_part_major(struct dev_types *dt, struct device *dev)
const char *dev_subsystem_name(struct dev_types *dt, struct device *dev)
{
if (dev->flags & DEV_IS_NVME)
return "NVME";
if (MAJOR(dev->dev) == dt->device_mapper_major)
return "DM";
@@ -332,6 +361,7 @@ int major_is_scsi_device(struct dev_types *dt, int major)
return (dt->dev_type_array[major].flags & PARTITION_SCSI_DEVICE) ? 1 : 0;
}
static int _loop_is_with_partscan(struct device *dev)
{
FILE *fp;
@@ -363,45 +393,6 @@ static int _loop_is_with_partscan(struct device *dev)
return partscan;
}
int dev_get_partition_number(struct device *dev, int *num)
{
char path[PATH_MAX];
char buf[8] = { 0 };
dev_t devt = dev->dev;
struct stat sb;
if (dev->part != -1) {
*num = dev->part;
return 1;
}
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d/partition",
dm_sysfs_dir(), (int)MAJOR(devt), (int)MINOR(devt)) < 0) {
log_error("Failed to create sysfs path for %s", dev_name(dev));
return 0;
}
if (stat(path, &sb)) {
dev->part = 0;
*num = 0;
return 1;
}
if (!get_sysfs_value(path, buf, sizeof(buf), 0)) {
log_error("Failed to read sysfs path for %s", dev_name(dev));
return 0;
}
if (!buf[0]) {
log_error("Failed to read sysfs partition value for %s", dev_name(dev));
return 0;
}
dev->part = atoi(buf);
*num = dev->part;
return 1;
}
/* See linux/genhd.h and fs/partitions/msdos */
#define PART_MAGIC 0xAA55
#define PART_MAGIC_OFFSET UINT64_C(0x1FE)
@@ -420,28 +411,6 @@ struct partition {
uint32_t nr_sects;
} __attribute__((packed));
static int _has_sys_partition(struct device *dev)
{
char path[PATH_MAX];
struct stat info;
int major = (int) MAJOR(dev->dev);
int minor = (int) MINOR(dev->dev);
/* check if dev is a partition */
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d/partition",
dm_sysfs_dir(), major, minor) < 0) {
log_warn("WARNING: %s: partition path is too long.", dev_name(dev));
return 0;
}
if (stat(path, &info) == -1) {
if (errno != ENOENT)
log_sys_debug("stat", path);
return 0;
}
return 1;
}
static int _is_partitionable(struct dev_types *dt, struct device *dev)
{
int parts = major_max_partitions(dt, MAJOR(dev->dev));
@@ -458,13 +427,6 @@ static int _is_partitionable(struct dev_types *dt, struct device *dev)
_loop_is_with_partscan(dev))
return 1;
if (dev_is_nvme(dt, dev)) {
/* If this dev is already a partition then it's not partitionable. */
if (_has_sys_partition(dev))
return 0;
return 1;
}
if ((parts <= 1) || (MINOR(dev->dev) % parts))
return 0;
@@ -602,22 +564,16 @@ int dev_is_partitioned(struct dev_types *dt, struct device *dev)
*/
int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
{
const char *sysfs_dir = dm_sysfs_dir();
int major = (int) MAJOR(dev->dev);
int minor = (int) MINOR(dev->dev);
char path[PATH_MAX];
char temp_path[PATH_MAX];
char buffer[64];
struct stat info;
FILE *fp = NULL;
int parts, residue, size, ret = 0;
/*
* /dev/nvme devs don't use the major:minor numbering like
* block dev types that have their own major number, so
* the calculation based on minor number doesn't work.
*/
if (dev_is_nvme(dt, dev))
goto sys_partition;
/*
* Try to get the primary dev out of the
* list of known device types first.
@@ -633,14 +589,23 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
goto out;
}
sys_partition:
/*
* If we can't get the primary dev out of the list of known device
* types, try to look at sysfs directly then. This is more complex
* way and it also requires certain sysfs layout to be present
* which might not be there in old kernels!
*/
if (!_has_sys_partition(dev)) {
/* check if dev is a partition */
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d/partition",
sysfs_dir, major, minor) < 0) {
log_error("dm_snprintf partition failed");
goto out;
}
if (stat(path, &info) == -1) {
if (errno != ENOENT)
log_sys_error("stat", path);
*result = dev->dev;
ret = 1;
goto out; /* dev is not a partition! */
@@ -653,31 +618,25 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
* - basename ../../block/md0/md0 = md0
* Parent's 'dev' sysfs attribute = /sys/block/md0/dev
*/
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d",
dm_sysfs_dir(), major, minor) < 0) {
log_warn("WARNING: %s: major:minor sysfs path is too long.", dev_name(dev));
return 0;
}
if ((size = readlink(path, temp_path, sizeof(temp_path) - 1)) < 0) {
log_warn("WARNING: Readlink of %s failed.", path);
if ((size = readlink(dirname(path), temp_path, sizeof(temp_path) - 1)) < 0) {
log_sys_error("readlink", path);
goto out;
}
temp_path[size] = '\0';
if (dm_snprintf(path, sizeof(path), "%s/block/%s/dev",
dm_sysfs_dir(), basename(dirname(temp_path))) < 0) {
log_warn("WARNING: sysfs path for %s is too long.",
basename(dirname(temp_path)));
sysfs_dir, basename(dirname(temp_path))) < 0) {
log_error("dm_snprintf dev failed");
goto out;
}
/* finally, parse 'dev' attribute and create corresponding dev_t */
if (!(fp = fopen(path, "r"))) {
if (errno == ENOENT)
log_debug("sysfs file %s does not exist.", path);
log_error("sysfs file %s does not exist.", path);
else
log_sys_debug("fopen", path);
log_sys_error("fopen", path);
goto out;
}
@@ -687,7 +646,7 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
}
if (sscanf(buffer, "%d:%d", &major, &minor) != 2) {
log_warn("WARNING: sysfs file %s not in expected MAJ:MIN format: %s",
log_error("sysfs file %s not in expected MAJ:MIN format: %s",
path, buffer);
goto out;
}
@@ -695,36 +654,11 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
ret = 2;
out:
if (fp && fclose(fp))
log_sys_debug("fclose", path);
log_sys_error("fclose", path);
return ret;
}
#ifdef BLKID_WIPING_SUPPORT
int get_fs_block_size(struct device *dev, uint32_t *fs_block_size)
{
char *block_size_str = NULL;
if ((block_size_str = blkid_get_tag_value(NULL, "BLOCK_SIZE", dev_name(dev)))) {
*fs_block_size = (uint32_t)atoi(block_size_str);
free(block_size_str);
log_debug("Found blkid BLOCK_SIZE %u for fs on %s", *fs_block_size, dev_name(dev));
return 1;
} else {
log_debug("No blkid BLOCK_SIZE for fs on %s", dev_name(dev));
*fs_block_size = 0;
return 0;
}
}
#else
int get_fs_block_size(struct device *dev, uint32_t *fs_block_size)
{
log_debug("Disabled blkid BLOCK_SIZE for fs.");
*fs_block_size = 0;
return 0;
}
#endif
#ifdef BLKID_WIPING_SUPPORT
static inline int _type_in_flag_list(const char *type, uint32_t flag_list)
@@ -757,7 +691,7 @@ static int _blkid_wipe(blkid_probe probe, struct device *dev, const char *name,
return 0;
}
log_warn("WARNING: " MSG_FAILED_SIG_OFFSET MSG_WIPING_SKIPPED, type, name);
log_error("WARNING: " MSG_FAILED_SIG_OFFSET MSG_WIPING_SKIPPED, type, name);
return 2;
}
if (blkid_probe_lookup_value(probe, "SBMAGIC", &magic, &len)) {
@@ -932,7 +866,7 @@ static int _wipe_known_signatures_with_lvm(struct device *dev, const char *name,
wiped = &wiped_tmp;
*wiped = 0;
if (!_wipe_signature(dev, "software RAID md superblock", name, 4, yes, force, wiped, dev_is_md_component) ||
if (!_wipe_signature(dev, "software RAID md superblock", name, 4, yes, force, wiped, dev_is_md) ||
!_wipe_signature(dev, "swap signature", name, 10, yes, force, wiped, dev_is_swap) ||
!_wipe_signature(dev, "LUKS signature", name, 8, yes, force, wiped, dev_is_luks))
return 0;
@@ -955,9 +889,9 @@ int wipe_known_signatures(struct cmd_context *cmd, struct device *dev,
yes, force, wiped);
#endif
if (blkid_wiping_enabled) {
log_warn("WARNING: allocation/use_blkid_wiping=1 configuration setting is set "
log_warn("allocation/use_blkid_wiping=1 configuration setting is set "
"while LVM is not compiled with blkid wiping support.");
log_warn("WARNING: Falling back to native LVM signature detection.");
log_warn("Falling back to native LVM signature detection.");
}
return _wipe_known_signatures_with_lvm(dev, name,
types_to_exclude,
@@ -973,23 +907,25 @@ static int _snprintf_attr(char *buf, size_t buf_size, const char *sysfs_dir,
if (dm_snprintf(buf, buf_size, "%s/dev/block/%d:%d/%s", sysfs_dir,
(int)MAJOR(dev), (int)MINOR(dev),
attribute) < 0) {
log_warn("WARNING: sysfs path for %s attribute is too long.", attribute);
log_warn("dm_snprintf %s failed.", attribute);
return 0;
}
return 1;
}
static int _dev_sysfs_block_attribute(struct dev_types *dt,
const char *attribute,
struct device *dev,
unsigned long *value)
static unsigned long _dev_topology_attribute(struct dev_types *dt,
const char *attribute,
struct device *dev,
unsigned long default_value)
{
const char *sysfs_dir = dm_sysfs_dir();
char path[PATH_MAX], buffer[64];
FILE *fp;
dev_t primary = 0;
int ret = 0;
struct stat info;
dev_t uninitialized_var(primary);
unsigned long result = default_value;
unsigned long value = 0UL;
if (!attribute || !*attribute)
goto_out;
@@ -998,16 +934,16 @@ static int _dev_sysfs_block_attribute(struct dev_types *dt,
goto_out;
if (!_snprintf_attr(path, sizeof(path), sysfs_dir, attribute, dev->dev))
goto_out;
goto_out;
/*
* check if the desired sysfs attribute exists
* - if not: either the kernel doesn't have topology support
* or the device could be a partition
*/
if (!(fp = fopen(path, "r"))) {
if (stat(path, &info) == -1) {
if (errno != ENOENT) {
log_sys_debug("fopen", path);
log_sys_debug("stat", path);
goto out;
}
if (!dev_get_primary_dev(dt, dev, &primary))
@@ -1017,54 +953,44 @@ static int _dev_sysfs_block_attribute(struct dev_types *dt,
if (!_snprintf_attr(path, sizeof(path), sysfs_dir, attribute, primary))
goto_out;
if (!(fp = fopen(path, "r"))) {
if (stat(path, &info) == -1) {
if (errno != ENOENT)
log_sys_debug("fopen", path);
log_sys_debug("stat", path);
goto out;
}
}
if (!(fp = fopen(path, "r"))) {
log_sys_debug("fopen", path);
goto out;
}
if (!fgets(buffer, sizeof(buffer), fp)) {
log_sys_debug("fgets", path);
goto out_close;
}
if (sscanf(buffer, "%lu", value) != 1) {
log_warn("WARNING: sysfs file %s not in expected format: %s", path, buffer);
if (sscanf(buffer, "%lu", &value) != 1) {
log_warn("sysfs file %s not in expected format: %s", path, buffer);
goto out_close;
}
ret = 1;
log_very_verbose("Device %s: %s is %lu%s.",
dev_name(dev), attribute, value, default_value ? "" : " bytes");
result = value >> SECTOR_SHIFT;
if (!result && value) {
log_warn("WARNING: Device %s: %s is %lu and is unexpectedly less than sector.",
dev_name(dev), attribute, value);
result = 1;
}
out_close:
if (fclose(fp))
log_sys_debug("fclose", path);
out:
return ret;
}
static unsigned long _dev_topology_attribute(struct dev_types *dt,
const char *attribute,
struct device *dev,
unsigned long default_value)
{
unsigned long result = default_value;
unsigned long value = 0UL;
if (_dev_sysfs_block_attribute(dt, attribute, dev, &value)) {
log_very_verbose("Device %s: %s is %lu%s.",
dev_name(dev), attribute, value, default_value ? "" : " bytes");
result = value >> SECTOR_SHIFT;
if (!result && value) {
log_warn("WARNING: Device %s: %s is %lu and is unexpectedly less than sector.",
dev_name(dev), attribute, value);
result = 1;
}
}
return result;
}
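A sketch of the bytes-to-sectors conversion used above: a topology value read from sysfs (in bytes) is shifted down to sectors, and a non-zero value smaller than one sector is clamped up to 1. The input value is hypothetical.

#include <stdio.h>

#define SECTOR_SHIFT 9

int main(void)
{
	unsigned long value = 262144;               /* e.g. optimal_io_size in bytes */
	unsigned long result = value >> SECTOR_SHIFT;

	if (!result && value)
		result = 1;                         /* less than a sector: round up */
	printf("%lu bytes -> %lu sectors\n", value, result);
	return 0;
}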
@@ -1095,17 +1021,8 @@ unsigned long dev_discard_granularity(struct dev_types *dt, struct device *dev)
int dev_is_rotational(struct dev_types *dt, struct device *dev)
{
unsigned long value;
return _dev_sysfs_block_attribute(dt, "queue/rotational", dev, &value) ? (int) value : 1;
return (int) _dev_topology_attribute(dt, "queue/rotational", dev, 1UL);
}
/* dev is pmem if /sys/dev/block/<major>:<minor>/queue/dax is 1 */
int dev_is_pmem(struct dev_types *dt, struct device *dev)
{
unsigned long value;
return _dev_sysfs_block_attribute(dt, "queue/dax", dev, &value) ? (int) value : 0;
}
#else
int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
@@ -1142,11 +1059,6 @@ int dev_is_rotational(struct dev_types *dt, struct device *dev)
{
return 1;
}
int dev_is_pmem(struct dev_types *dt, struct device *dev)
{
return 0;
}
#endif
#ifdef UDEV_SYNC_SUPPORT
@@ -1206,9 +1118,6 @@ static struct udev_device *_udev_get_dev(struct device *dev)
i + 1, UDEV_DEV_IS_COMPONENT_ITERATION_COUNT,
i * UDEV_DEV_IS_COMPONENT_USLEEP);
if (!udev_sleeping())
break;
usleep(UDEV_DEV_IS_COMPONENT_USLEEP);
i++;
}
@@ -1229,9 +1138,6 @@ int udev_dev_is_mpath_component(struct device *dev)
const char *value;
int ret = 0;
if (!obtain_device_list_from_udev())
return 0;
if (!(udev_device = _udev_get_dev(dev)))
return 0;
@@ -1261,11 +1167,6 @@ int udev_dev_is_md_component(struct device *dev)
const char *value;
int ret = 0;
if (!obtain_device_list_from_udev()) {
dev->flags |= DEV_UDEV_INFO_MISSING;
return 0;
}
if (!(udev_device = _udev_get_dev(dev)))
return 0;
@@ -1273,7 +1174,6 @@ int udev_dev_is_md_component(struct device *dev)
if (value && !strcmp(value, DEV_EXT_UDEV_BLKID_TYPE_SW_RAID)) {
log_debug("Device %s is md raid component based on blkid variable in udev db (%s=\"%s\").",
dev_name(dev), DEV_EXT_UDEV_BLKID_TYPE, value);
dev->flags |= DEV_IS_MD_COMPONENT;
ret = 1;
goto out;
}
@@ -1291,7 +1191,6 @@ int udev_dev_is_mpath_component(struct device *dev)
int udev_dev_is_md_component(struct device *dev)
{
dev->flags |= DEV_UDEV_INFO_MISSING;
return 0;
}


@@ -57,7 +57,7 @@ const char *dev_subsystem_name(struct dev_types *dt, struct device *dev);
int major_is_scsi_device(struct dev_types *dt, int major);
/* Signature/superblock recognition with position returned where found. */
int dev_is_md_component(struct device *dev, uint64_t *sb, int full);
int dev_is_md(struct device *dev, uint64_t *sb, int full);
int dev_is_swap(struct device *dev, uint64_t *signature, int full);
int dev_is_luks(struct device *dev, uint64_t *signature, int full);
int dasd_is_cdl_formatted(struct device *dev);
@@ -83,7 +83,6 @@ int dev_is_md_with_end_superblock(struct dev_types *dt, struct device *dev);
int major_max_partitions(struct dev_types *dt, int major);
int dev_is_partitioned(struct dev_types *dt, struct device *dev);
int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result);
int dev_get_partition_number(struct device *dev, int *num);
/* Various device properties */
unsigned long dev_alignment_offset(struct dev_types *dt, struct device *dev);
@@ -94,12 +93,8 @@ unsigned long dev_discard_granularity(struct dev_types *dt, struct device *dev);
int dev_is_rotational(struct dev_types *dt, struct device *dev);
int dev_is_pmem(struct dev_types *dt, struct device *dev);
int dev_is_nvme(struct dev_types *dt, struct device *dev);
int dev_is_pmem(struct device *dev);
int dev_is_lv(struct device *dev);
int get_fs_block_size(struct device *dev, uint32_t *fs_block_size);
#endif


@@ -20,6 +20,7 @@
#include <fcntl.h>
#define DEV_ACCESSED_W 0x00000001 /* Device written to? */
#define DEV_REGULAR 0x00000002 /* Regular file? */
#define DEV_ALLOCED 0x00000004 /* malloc used */
#define DEV_OPENED_RW 0x00000008 /* Opened RW */
@@ -36,10 +37,6 @@
#define DEV_FILTER_OUT_SCAN 0x00004000 /* filtered out during label scan */
#define DEV_BCACHE_WRITE 0x00008000 /* bcache_fd is open with RDWR */
#define DEV_SCAN_FOUND_LABEL 0x00010000 /* label scan read dev and found label */
#define DEV_IS_MD_COMPONENT 0x00020000 /* device is an md component */
#define DEV_UDEV_INFO_MISSING 0x00040000 /* we have no udev info for this device */
#define DEV_IS_NVME 0x00080000 /* set if dev is nvme */
#define DEV_MATCHED_USE_ID 0x00100000 /* matched an entry from cmd->use_devices */
/*
* Support for external device info.
@@ -58,67 +55,22 @@ struct dev_ext {
void *handle;
};
#define DEV_ID_TYPE_SYS_WWID 0x0001
#define DEV_ID_TYPE_SYS_SERIAL 0x0002
#define DEV_ID_TYPE_MPATH_UUID 0x0003
#define DEV_ID_TYPE_MD_UUID 0x0004
#define DEV_ID_TYPE_LOOP_FILE 0x0005
#define DEV_ID_TYPE_CRYPT_UUID 0x0006
#define DEV_ID_TYPE_LVMLV_UUID 0x0007
#define DEV_ID_TYPE_DEVNAME 0x0008
/*
* A device ID of a certain type for a device.
* A struct device may have multiple dev_id structs on dev->ids.
* One of them will be the one that's used, pointed to by dev->id.
*/
struct dev_id {
struct dm_list list;
struct device *dev;
uint16_t idtype; /* DEV_ID_TYPE_ */
char *idname; /* id string determined by idtype */
};
/*
* A device listed in devices file that lvm should use.
* Each entry in the devices file is represented by a struct dev_use.
* The structs are kept on cmd->use_devices.
* idtype/idname/pvid/part are set when reading the devices file.
* du->dev is set when a struct dev_use is matched to a struct device.
*/
struct dev_use {
struct dm_list list;
struct device *dev;
int part;
uint16_t idtype;
char *idname;
char *devname;
char *pvid;
};
/*
* All devices in LVM will be represented by one of these.
* pointer comparisons are valid.
*/
struct device {
struct dm_list aliases; /* struct dm_str_list */
struct dm_list ids; /* struct dev_id, different entries for different idtypes */
struct dev_id *id; /* points to the ids entry being used for this dev */
dev_t dev;
/* private */
int fd;
int open_count;
int physical_block_size; /* From BLKPBSZGET: lowest possible sector size that the hardware can operate on without reverting to read-modify-write operations */
int logical_block_size; /* From BLKSSZGET: lowest possible block size that the storage device can address */
int phys_block_size;
int block_size;
int read_ahead;
int bcache_fd;
int bcache_di;
int part; /* partition number */
uint32_t flags;
uint32_t filtered_flags;
unsigned size_seqno;
uint64_t size;
uint64_t end;
@@ -178,8 +130,7 @@ void dev_size_seqno_inc(void);
/*
* All io should use these routines.
*/
int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
unsigned int *logical_block_size);
int dev_get_block_size(struct device *dev, unsigned int *phys_block_size, unsigned int *block_size);
int dev_get_size(struct device *dev, uint64_t *size);
int dev_get_read_ahead(struct device *dev, uint32_t *read_ahead);
int dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64_t size_bytes);
@@ -193,10 +144,17 @@ int dev_open_readonly_buffered(struct device *dev);
int dev_open_readonly_quiet(struct device *dev);
int dev_close(struct device *dev);
int dev_close_immediate(struct device *dev);
int dev_test_excl(struct device *dev);
int dev_fd(struct device *dev);
const char *dev_name(const struct device *dev);
int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer);
int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
uint64_t offset2, size_t len2, dev_io_reason_t reason, char *buf);
int dev_write(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer);
int dev_append(struct device *dev, size_t len, dev_io_reason_t reason, char *buffer);
int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, int value);
void dev_flush(struct device *dev);
struct device *dev_create_file(const char *filename, struct device *dev,

File diff suppressed because it is too large.


@@ -1,55 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _LVM_DEVICE_ID_H
#define _LVM_DEVICE_ID_H
void free_du(struct dev_use *du);
void free_dus(struct dm_list *list);
void free_did(struct dev_id *did);
void free_dids(struct dm_list *list);
const char *idtype_to_str(uint16_t idtype);
uint16_t idtype_from_str(const char *str);
const char *dev_idtype_for_metadata(struct cmd_context *cmd, struct device *dev);
const char *dev_idname_for_metadata(struct cmd_context *cmd, struct device *dev);
int device_ids_use_devname(struct cmd_context *cmd);
int device_ids_read(struct cmd_context *cmd);
int device_ids_write(struct cmd_context *cmd);
int device_id_add(struct cmd_context *cmd, struct device *dev, const char *pvid,
const char *idtype_arg, const char *id_arg);
void device_id_pvremove(struct cmd_context *cmd, struct device *dev);
void device_ids_match(struct cmd_context *cmd);
int device_ids_match_dev(struct cmd_context *cmd, struct device *dev);
void device_ids_validate(struct cmd_context *cmd, struct dm_list *scanned_devs, int *device_ids_invalid, int noupdate);
int device_ids_version_unchanged(struct cmd_context *cmd);
void device_ids_find_renamed_devs(struct cmd_context *cmd, struct dm_list *dev_list, int *search_count, int noupdate);
const char *device_id_system_read(struct cmd_context *cmd, struct device *dev, uint16_t idtype);
struct dev_use *get_du_for_dev(struct cmd_context *cmd, struct device *dev);
struct dev_use *get_du_for_pvid(struct cmd_context *cmd, const char *pvid);
char *devices_file_version(void);
int devices_file_exists(struct cmd_context *cmd);
int devices_file_touch(struct cmd_context *cmd);
int lock_devices_file(struct cmd_context *cmd, int mode);
int lock_devices_file_try(struct cmd_context *cmd, int mode, int *held);
void unlock_devices_file(struct cmd_context *cmd);
void devices_file_init(struct cmd_context *cmd);
void devices_file_exit(struct cmd_context *cmd);
void unlink_searched_devnames(struct cmd_context *cmd);
#endif


@@ -399,19 +399,17 @@ int lvdisplay_full(struct cmd_context *cmd,
void *handle __attribute__((unused)))
{
struct lvinfo info;
int inkernel, snap_active = 0, partial = 0, raid_is_avail = 1;
int inkernel, snap_active = 0;
char uuid[64] __attribute__((aligned(8)));
const char *access_str;
struct lv_segment *snap_seg = NULL, *mirror_seg = NULL;
struct lv_segment *seg = NULL;
int lvm1compat;
dm_percent_t snap_percent;
int thin_pool_active = 0;
dm_percent_t thin_data_percent = 0, thin_metadata_percent = 0;
int thin_data_active = 0, thin_metadata_active = 0;
dm_percent_t thin_data_percent, thin_metadata_percent;
int thin_active = 0;
dm_percent_t thin_percent = 0;
struct lv_status_thin *thin_status = NULL;
struct lv_status_thin_pool *thin_pool_status = NULL;
dm_percent_t thin_percent;
struct lv_status_cache *cache_status = NULL;
struct lv_status_vdo *vdo_status = NULL;
@@ -475,7 +473,7 @@ int lvdisplay_full(struct cmd_context *cmd,
snap_active ? "active" : "INACTIVE");
}
snap_seg = NULL;
} else if (lv_is_cow(lv) && (snap_seg = find_snapshot(lv))) {
} else if ((snap_seg = find_snapshot(lv))) {
if (inkernel &&
(snap_active = lv_snapshot_percent(snap_seg->cow,
&snap_percent)))
@@ -505,18 +503,15 @@ int lvdisplay_full(struct cmd_context *cmd,
if (seg->merge_lv)
log_print("LV merging to %s",
seg->merge_lv->name);
if (inkernel && (thin_active = lv_thin_status(lv, 0, &thin_status))) {
thin_percent = thin_status->usage;
dm_pool_destroy(thin_status->mem);
}
if (inkernel)
thin_active = lv_thin_percent(lv, 0, &thin_percent);
if (lv_is_merging_origin(lv))
log_print("LV merged with %s",
find_snapshot(lv)->lv->name);
} else if (lv_is_thin_pool(lv)) {
if ((thin_pool_active = lv_thin_pool_status(lv, 0, &thin_pool_status))) {
thin_data_percent = thin_pool_status->data_usage;
thin_metadata_percent = thin_pool_status->metadata_usage;
dm_pool_destroy(thin_pool_status->mem);
if (lv_info(cmd, lv, 1, &info, 1, 1) && info.exists) {
thin_data_active = lv_thin_pool_percent(lv, 0, &thin_data_percent);
thin_metadata_active = lv_thin_pool_percent(lv, 1, &thin_metadata_percent);
}
/* FIXME: display thin_pool targets transid for activated LV as well */
seg = first_seg(lv);
@@ -527,10 +522,10 @@ int lvdisplay_full(struct cmd_context *cmd,
log_print("LV origin of Cache LV %s", seg->lv->name);
} else if (lv_is_cache(lv)) {
seg = first_seg(lv);
if (inkernel && lv_cache_status(lv, &cache_status)) {
log_print("LV Cache pool name %s", seg->pool_lv->name);
log_print("LV Cache origin name %s", seg_lv(seg, 0)->name);
}
if (inkernel && !lv_cache_status(lv, &cache_status))
return_0;
log_print("LV Cache pool name %s", seg->pool_lv->name);
log_print("LV Cache origin name %s", seg_lv(seg, 0)->name);
} else if (lv_is_cache_pool(lv)) {
seg = first_seg(lv);
log_print("LV Pool metadata %s", seg->metadata_lv->name);
@@ -538,7 +533,7 @@ int lvdisplay_full(struct cmd_context *cmd,
} else if (lv_is_vdo_pool(lv)) {
seg = first_seg(lv);
log_print("LV VDO Pool data %s", seg_lv(seg, 0)->name);
if (lv_vdo_pool_status(lv, 0, &vdo_status)) { /* FIXME: flush option? */
if (inkernel && lv_vdo_pool_status(lv, 0, &vdo_status)) { /* FIXME: flush option? */
log_print("LV VDO Pool usage %s%%",
display_percent(cmd, vdo_status->usage));
log_print("LV VDO Pool saving %s%%",
@@ -558,18 +553,11 @@ int lvdisplay_full(struct cmd_context *cmd,
log_print("LV VDO Pool name %s", seg_lv(seg, 0)->name);
}
if (lv_is_partial(lv))
partial = 1;
if (lv_is_raid(lv))
raid_is_avail = raid_is_available(lv) ? 1 : 0;
if (inkernel && info.suspended)
log_print("LV Status suspended");
else if (activation())
log_print("LV Status %savailable%s",
(inkernel && raid_is_avail) ? "" : "NOT ",
partial ? " (partial)" : "");
log_print("LV Status %savailable",
inkernel ? "" : "NOT ");
/********* FIXME lv_number
log_print("LV # %u", lv->lv_number + 1);
@@ -603,12 +591,13 @@ int lvdisplay_full(struct cmd_context *cmd,
dm_pool_destroy(cache_status->mem);
}
if (thin_pool_active) {
if (thin_data_active)
log_print("Allocated pool data %s%%",
display_percent(cmd, thin_data_percent));
if (thin_metadata_active)
log_print("Allocated metadata %s%%",
display_percent(cmd, thin_metadata_percent));
}
if (thin_active)
log_print("Mapped size %s%%",


@@ -60,16 +60,13 @@ static void _composite_destroy(struct dev_filter *f)
free(f);
}
static void _wipe(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)
static void _wipe(struct dev_filter *f)
{
struct dev_filter **filters;
for (filters = (struct dev_filter **) f->private; *filters; ++filters) {
if (use_filter_name && strcmp((*filters)->name, use_filter_name))
continue;
for (filters = (struct dev_filter **) f->private; *filters; ++filters)
if ((*filters)->wipe)
(*filters)->wipe(cmd, *filters, dev, use_filter_name);
}
(*filters)->wipe(*filters);
}
struct dev_filter *composite_filter_create(int n, int use_dev_ext_info, struct dev_filter **filters)


@@ -1,69 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "base/memory/zalloc.h"
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/commands/toolcontext.h"
static int _passes_deviceid_filter(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)
{
dev->filtered_flags &= ~DEV_FILTERED_DEVICES_FILE;
dev->filtered_flags &= ~DEV_FILTERED_DEVICES_LIST;
if (!cmd->enable_devices_file && !cmd->enable_devices_list)
return 1;
if (cmd->filter_deviceid_skip)
return 1;
if (dev->flags & DEV_MATCHED_USE_ID)
return 1;
if (cmd->enable_devices_file)
dev->filtered_flags |= DEV_FILTERED_DEVICES_FILE;
else if (cmd->enable_devices_list)
dev->filtered_flags |= DEV_FILTERED_DEVICES_LIST;
log_debug_devs("%s: Skipping (deviceid)", dev_name(dev));
return 0;
}
static void _destroy_deviceid_filter(struct dev_filter *f)
{
if (f->use_count)
log_error(INTERNAL_ERROR "Destroying deviceid filter while in use %u times.", f->use_count);
free(f);
}
struct dev_filter *deviceid_filter_create(struct cmd_context *cmd)
{
struct dev_filter *f;
if (!(f = zalloc(sizeof(struct dev_filter)))) {
log_error("deviceid filter allocation failed");
return NULL;
}
f->passes_filter = _passes_deviceid_filter;
f->destroy = _destroy_deviceid_filter;
f->use_count = 0;
f->name = "deviceid";
log_debug_devs("deviceid filter initialised.");
return f;
}


@@ -15,7 +15,6 @@
#include "base/memory/zalloc.h"
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/commands/toolcontext.h"
#ifdef UDEV_SYNC_SUPPORT
#include <libudev.h>
@@ -70,11 +69,6 @@ static int _ignore_fwraid(struct cmd_context *cmd, struct dev_filter *f __attrib
{
int ret;
if (cmd->filter_nodata_only)
return 1;
dev->filtered_flags &= ~DEV_FILTERED_FWRAID;
if (!fwraid_filtering())
return 1;
@@ -86,14 +80,12 @@ static int _ignore_fwraid(struct cmd_context *cmd, struct dev_filter *f __attrib
else
log_debug_devs(MSG_SKIPPING " [%s:%p]", dev_name(dev),
dev_ext_name(dev), dev->ext.handle);
dev->filtered_flags |= DEV_FILTERED_FWRAID;
return 0;
}
if (ret < 0) {
log_debug_devs("%s: Skipping: error in firmware RAID component detection",
dev_name(dev));
dev->filtered_flags |= DEV_FILTERED_FWRAID;
return 0;
}


@@ -42,8 +42,6 @@ static int _passes_internal(struct cmd_context *cmd, struct dev_filter *f __attr
{
struct device_list *devl;
dev->filtered_flags &= ~DEV_FILTERED_INTERNAL;
if (!internal_filtering())
return 1;
@@ -52,7 +50,6 @@ static int _passes_internal(struct cmd_context *cmd, struct dev_filter *f __attr
return 1;
}
dev->filtered_flags |= DEV_FILTERED_INTERNAL;
log_debug_devs("%s: Skipping for internal filtering.", dev_name(dev));
return 0;
}


@@ -86,11 +86,6 @@ static int _passes_md_filter(struct cmd_context *cmd, struct dev_filter *f __att
{
int ret;
if (cmd->filter_nodata_only)
return 1;
dev->filtered_flags &= ~DEV_FILTERED_MD_COMPONENT;
/*
* When md_component_dectection=0, don't even try to skip md
* components.
@@ -98,7 +93,7 @@ static int _passes_md_filter(struct cmd_context *cmd, struct dev_filter *f __att
if (!md_filtering())
return 1;
ret = dev_is_md_component(dev, NULL, cmd->use_full_md_check);
ret = dev_is_md(dev, NULL, cmd->use_full_md_check);
if (ret == -EAGAIN) {
/* let pass, call again after scan */
@@ -117,14 +112,12 @@ static int _passes_md_filter(struct cmd_context *cmd, struct dev_filter *f __att
else
log_debug_devs(MSG_SKIPPING " [%s:%p]", dev_name(dev),
dev_ext_name(dev), dev->ext.handle);
dev->filtered_flags |= DEV_FILTERED_MD_COMPONENT;
return 0;
}
if (ret < 0) {
log_debug_devs("%s: Skipping: error in md component detection",
dev_name(dev));
dev->filtered_flags |= DEV_FILTERED_MD_COMPONENT;
return 0;
}


@@ -16,7 +16,6 @@
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/activate/activate.h"
#include "lib/commands/toolcontext.h"
#ifdef UDEV_SYNC_SUPPORT
#include <libudev.h>
#include "lib/device/dev-ext-udev-constants.h"
@@ -28,16 +27,6 @@
#define MPATH_PREFIX "mpath-"
struct mpath_priv {
struct dm_pool *mem;
struct dev_filter f;
struct dev_types *dt;
struct dm_hash_table *hash;
};
/*
* given "/dev/foo" return "foo"
*/
static const char *_get_sysfs_name(struct device *dev)
{
const char *name;
@@ -56,11 +45,6 @@ static const char *_get_sysfs_name(struct device *dev)
return name;
}
/*
* given major:minor
* readlink translates /sys/dev/block/major:minor to /sys/.../foo
* from /sys/.../foo return "foo"
*/
static const char *_get_sysfs_name_by_devt(const char *sysfs_dir, dev_t devno,
char *buf, size_t buf_size)
{
@@ -68,7 +52,7 @@ static const char *_get_sysfs_name_by_devt(const char *sysfs_dir, dev_t devno,
char path[PATH_MAX];
int size;
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d", sysfs_dir,
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d", sysfs_dir,
(int) MAJOR(devno), (int) MINOR(devno)) < 0) {
log_error("Sysfs path string is too long.");
return NULL;
@@ -110,28 +94,27 @@ static int _get_sysfs_string(const char *path, char *buffer, int max_size)
return r;
}
static int _get_sysfs_dm_mpath(struct dev_types *dt, const char *sysfs_dir, const char *holder_name)
static int _get_sysfs_get_major_minor(const char *sysfs_dir, const char *kname, int *major, int *minor)
{
char path[PATH_MAX];
char buffer[128];
char path[PATH_MAX], buffer[64];
if (dm_snprintf(path, sizeof(path), "%sblock/%s/dm/uuid", sysfs_dir, holder_name) < 0) {
if (dm_snprintf(path, sizeof(path), "%s/block/%s/dev", sysfs_dir, kname) < 0) {
log_error("Sysfs path string is too long.");
return 0;
}
buffer[0] = '\0';
if (!_get_sysfs_string(path, buffer, sizeof(buffer)))
return_0;
if (!strncmp(buffer, MPATH_PREFIX, 6))
return 1;
if (sscanf(buffer, "%d:%d", major, minor) != 2) {
log_error("Failed to parse major minor from %s", buffer);
return 0;
}
return 0;
return 1;
}
static int _get_holder_name(const char *dir, char *name, int max_size)
static int _get_parent_mpath(const char *dir, char *name, int max_size)
{
struct dirent *d;
DIR *dr;
@@ -164,7 +147,7 @@ static int _get_holder_name(const char *dir, char *name, int max_size)
}
#ifdef UDEV_SYNC_SUPPORT
static int _udev_dev_is_mpath_component(struct device *dev)
static int _udev_dev_is_mpath(struct device *dev)
{
const char *value;
struct dev_ext *ext;
@@ -183,148 +166,78 @@ static int _udev_dev_is_mpath_component(struct device *dev)
return 0;
}
#else
static int _udev_dev_is_mpath_component(struct device *dev)
static int _udev_dev_is_mpath(struct device *dev)
{
return 0;
}
#endif
static int _native_dev_is_mpath_component(struct cmd_context *cmd, struct dev_filter *f, struct device *dev)
static int _native_dev_is_mpath(struct dev_filter *f, struct device *dev)
{
struct mpath_priv *mp = (struct mpath_priv *) f->private;
struct dev_types *dt = mp->dt;
const char *part_name;
const char *name; /* e.g. "sda" for "/dev/sda" */
char link_path[PATH_MAX]; /* some obscure, unpredictable sysfs path */
char holders_path[PATH_MAX]; /* e.g. "/sys/block/sda/holders/" */
char dm_dev_path[PATH_MAX]; /* e.g. "/dev/dm-1" */
char holder_name[128] = { 0 }; /* e.g. "dm-1" */
const char *sysfs_dir = dm_sysfs_dir();
int dev_major = MAJOR(dev->dev);
int dev_minor = MINOR(dev->dev);
int dm_dev_major;
int dm_dev_minor;
struct dev_types *dt = (struct dev_types *) f->private;
const char *part_name, *name;
struct stat info;
char path[PATH_MAX], parent_name[PATH_MAX];
const char *sysfs_dir = dm_sysfs_dir();
int major = MAJOR(dev->dev);
int minor = MINOR(dev->dev);
dev_t primary_dev;
long look;
/* Limit this filter to SCSI or NVME devices */
if (!major_is_scsi_device(dt, dev_major) && !dev_is_nvme(dt, dev))
/* Limit this filter only to SCSI devices */
if (!major_is_scsi_device(dt, MAJOR(dev->dev)))
return 0;
switch (dev_get_primary_dev(dt, dev, &primary_dev)) {
case 2: /* The dev is partition. */
part_name = dev_name(dev); /* name of original dev for log_debug msg */
/* gets "foo" for "/dev/foo" where "/dev/foo" comes from major:minor */
if (!(name = _get_sysfs_name_by_devt(sysfs_dir, primary_dev, link_path, sizeof(link_path))))
if (!(name = _get_sysfs_name_by_devt(sysfs_dir, primary_dev, parent_name, sizeof(parent_name))))
return_0;
log_debug_devs("%s: Device is a partition, using primary "
"device %s for mpath component detection",
part_name, name);
break;
case 1: /* The dev is already a primary dev. Just continue with the dev. */
/* gets "foo" for "/dev/foo" */
if (!(name = _get_sysfs_name(dev)))
return_0;
break;
default: /* 0, error. */
log_warn("Failed to get primary device for %d:%d.", dev_major, dev_minor);
log_error("Failed to get primary device for %d:%d.", major, minor);
return 0;
}
if (dm_snprintf(holders_path, sizeof(holders_path), "%sblock/%s/holders", sysfs_dir, name) < 0) {
log_warn("Sysfs path to check mpath is too long.");
if (dm_snprintf(path, sizeof(path), "%s/block/%s/holders", sysfs_dir, name) < 0) {
log_error("Sysfs path to check mpath is too long.");
return 0;
}
/* also will filter out partitions */
if (stat(holders_path, &info))
if (stat(path, &info))
return 0;
if (!S_ISDIR(info.st_mode)) {
log_warn("Path %s is not a directory.", holders_path);
log_error("Path %s is not a directory.", path);
return 0;
}
/*
* If holders dir contains an entry such as "dm-1", then this sets
* holder_name to "dm-1".
*
* If holders dir is empty, return 0 (this is generally where
* devs that are not mpath components return.)
*/
if (!_get_holder_name(holders_path, holder_name, sizeof(holder_name)))
if (!_get_parent_mpath(path, parent_name, sizeof(parent_name)))
return 0;
if (dm_snprintf(dm_dev_path, sizeof(dm_dev_path), "%s/%s", cmd->dev_dir, holder_name) < 0) {
log_warn("dm device path to check mpath is too long.");
if (!_get_sysfs_get_major_minor(sysfs_dir, parent_name, &major, &minor))
return_0;
if (major != dt->device_mapper_major)
return 0;
}
/*
* stat "/dev/dm-1" which is the holder of the dev we're checking
* dm_dev_major:dm_dev_minor come from stat("/dev/dm-1")
*/
if (stat(dm_dev_path, &info)) {
log_debug("filter-mpath %s holder %s stat result %d",
dev_name(dev), dm_dev_path, errno);
return 0;
}
dm_dev_major = (int)MAJOR(info.st_rdev);
dm_dev_minor = (int)MINOR(info.st_rdev);
if (dm_dev_major != dt->device_mapper_major) {
log_debug_devs("filter-mpath %s holder %s %d:%d does not have dm major",
dev_name(dev), dm_dev_path, dm_dev_major, dm_dev_minor);
return 0;
}
/*
* Save the result of checking that "/dev/dm-1" is an mpath device
* to avoid repeating it for each path component.
* The minor number of "/dev/dm-1" is added to the hash table with
* const value 2 meaning that dm minor 1 (for /dev/dm-1) is a multipath dev
* and const value 1 meaning that dm minor 1 is not a multipath dev.
*/
look = (long) dm_hash_lookup_binary(mp->hash, &dm_dev_minor, sizeof(dm_dev_minor));
if (look > 0) {
log_debug_devs("filter-mpath %s holder %s %u:%u already checked as %sbeing mpath.",
dev_name(dev), holder_name, dm_dev_major, dm_dev_minor, (look > 1) ? "" : "not ");
return (look > 1) ? 1 : 0;
}
/*
* Returns 1 if /sys/block/<holder_name>/dm/uuid indicates that
* <holder_name> is a dm device with dm uuid prefix mpath-.
* When true, <holder_name> will be something like "dm-1".
*
* (Is a hash table worth it to avoid reading one sysfs file?)
*/
if (_get_sysfs_dm_mpath(dt, sysfs_dir, holder_name)) {
log_debug_devs("filter-mpath %s holder %s %u:%u ignore mpath component",
dev_name(dev), holder_name, dm_dev_major, dm_dev_minor);
(void) dm_hash_insert_binary(mp->hash, &dm_dev_minor, sizeof(dm_dev_minor), (void*)2);
return 1;
}
(void) dm_hash_insert_binary(mp->hash, &dm_dev_minor, sizeof(dm_dev_minor), (void*)1);
return 0;
return lvm_dm_prefix_check(major, minor, MPATH_PREFIX);
}
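A sketch of the tri-state caching idea described in the comments above: remember, per dm minor number, whether the holder was already identified as multipath (2), not multipath (1), or not yet checked (0), so the expensive sysfs check runs once per holder rather than once per path. The table size, minors, and placeholder check below are hypothetical.

#include <stdio.h>

static int cache[1024];                 /* indexed by dm minor, 0 = unknown */

static int is_mpath_minor(int minor)
{
	int look = cache[minor];

	if (look > 0)
		return look > 1;            /* already checked: 2 = mpath, 1 = not */

	/* the real check (reading dm/uuid from sysfs) would go here */
	{
		int found_mpath_prefix = (minor == 3);   /* hypothetical result */
		cache[minor] = found_mpath_prefix ? 2 : 1;
		return found_mpath_prefix;
	}
}

int main(void)
{
	printf("minor 3: %d (first)\n", is_mpath_minor(3));
	printf("minor 3: %d (cached)\n", is_mpath_minor(3));
	printf("minor 5: %d\n", is_mpath_minor(5));
	return 0;
}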
static int _dev_is_mpath_component(struct cmd_context *cmd, struct dev_filter *f, struct device *dev)
static int _dev_is_mpath(struct dev_filter *f, struct device *dev)
{
if (dev->ext.src == DEV_EXT_NONE)
return _native_dev_is_mpath_component(cmd, f, dev);
return _native_dev_is_mpath(f, dev);
if (dev->ext.src == DEV_EXT_UDEV)
return _udev_dev_is_mpath_component(dev);
return _udev_dev_is_mpath(dev);
log_error(INTERNAL_ERROR "Missing hook for mpath recognition "
"using external device info source %s", dev_ext_name(dev));
@@ -334,17 +247,14 @@ static int _dev_is_mpath_component(struct cmd_context *cmd, struct dev_filter *f
#define MSG_SKIPPING "%s: Skipping mpath component device"
static int _ignore_mpath_component(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)
static int _ignore_mpath(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)
{
dev->filtered_flags &= ~DEV_FILTERED_MPATH_COMPONENT;
if (_dev_is_mpath_component(cmd, f, dev) == 1) {
if (_dev_is_mpath(f, dev) == 1) {
if (dev->ext.src == DEV_EXT_NONE)
log_debug_devs(MSG_SKIPPING, dev_name(dev));
else
log_debug_devs(MSG_SKIPPING " [%s:%p]", dev_name(dev),
dev_ext_name(dev), dev->ext.handle);
dev->filtered_flags |= DEV_FILTERED_MPATH_COMPONENT;
return 0;
}
@@ -353,59 +263,36 @@ static int _ignore_mpath_component(struct cmd_context *cmd, struct dev_filter *f
static void _destroy(struct dev_filter *f)
{
struct mpath_priv *mp = (struct mpath_priv*) f->private;
if (f->use_count)
log_error(INTERNAL_ERROR "Destroying mpath filter while in use %u times.", f->use_count);
dm_hash_destroy(mp->hash);
dm_pool_destroy(mp->mem);
free(f);
}
struct dev_filter *mpath_filter_create(struct dev_types *dt)
{
const char *sysfs_dir = dm_sysfs_dir();
struct mpath_priv *mp;
struct dm_pool *mem;
struct dm_hash_table *hash;
struct dev_filter *f;
if (!*sysfs_dir) {
log_verbose("No proc filesystem found: skipping multipath filter");
return NULL;
}
if (!(hash = dm_hash_create(128))) {
log_error("mpath hash table creation failed.");
if (!(f = zalloc(sizeof(*f)))) {
log_error("mpath filter allocation failed");
return NULL;
}
if (!(mem = dm_pool_create("mpath", 256))) {
log_error("mpath pool creation failed.");
dm_hash_destroy(hash);
return NULL;
}
if (!(mp = dm_pool_zalloc(mem, sizeof(*mp)))) {
log_error("mpath filter allocation failed.");
goto bad;
}
mp->f.passes_filter = _ignore_mpath_component;
mp->f.destroy = _destroy;
mp->f.use_count = 0;
mp->f.private = mp;
mp->f.name = "mpath";
mp->dt = dt;
mp->mem = mem;
mp->hash = hash;
f->passes_filter = _ignore_mpath;
f->destroy = _destroy;
f->use_count = 0;
f->private = dt;
f->name = "mpath";
log_debug_devs("mpath filter initialised.");
return &mp->f;
bad:
dm_pool_destroy(mem);
dm_hash_destroy(hash);
return NULL;
return f;
}
#else
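
One side of this hunk keeps a dm_hash_table in the filter's private state; the lookup/insert calls earlier in the file use it to cache, per dm minor, whether a holder has already been identified as multipath. A small sketch of just that cache, assuming only the libdevmapper dm_hash_* calls that already appear in the diff (2 = mpath, 1 = not mpath, so a NULL lookup still means "not checked yet"):

#include <libdevmapper.h>

/* Sketch only: cache the per-dm-minor verdict so the sysfs uuid file is
 * read once per holder rather than once per path component. */
static struct dm_hash_table *_minor_cache;

static int minor_cache_init(void)
{
        return (_minor_cache = dm_hash_create(128)) ? 1 : 0;
}

static void minor_cache_exit(void)
{
        dm_hash_destroy(_minor_cache);
}

/* remember whether dm minor <dm_minor> is a multipath device */
static int minor_cache_store(int dm_minor, int is_mpath)
{
        return dm_hash_insert_binary(_minor_cache, &dm_minor, sizeof(dm_minor),
                                     (void *) (long) (is_mpath ? 2 : 1));
}

/* -1 = not checked yet, 0 = checked and not mpath, 1 = checked and mpath */
static int minor_cache_lookup(int dm_minor)
{
        long look = (long) dm_hash_lookup_binary(_minor_cache, &dm_minor, sizeof(dm_minor));

        if (!look)
                return -1;
        return (look > 1) ? 1 : 0;
}

Reserving 0/NULL for "absent" is the reason the stored values are 1 and 2 rather than 0 and 1.
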


@@ -16,7 +16,6 @@
#include "base/memory/zalloc.h"
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/commands/toolcontext.h"
#define MSG_SKIPPING "%s: Skipping: Partition table signature found"
@@ -25,11 +24,6 @@ static int _passes_partitioned_filter(struct cmd_context *cmd, struct dev_filter
struct dev_types *dt = (struct dev_types *) f->private;
int ret;
if (cmd->filter_nodata_only)
return 1;
dev->filtered_flags &= ~DEV_FILTERED_PARTITIONED;
ret = dev_is_partitioned(dt, dev);
if (ret == -EAGAIN) {
@@ -45,7 +39,6 @@ static int _passes_partitioned_filter(struct cmd_context *cmd, struct dev_filter
else
log_debug_devs(MSG_SKIPPING " [%s:%p]", dev_name(dev),
dev_ext_name(dev), dev->ext.handle);
dev->filtered_flags |= DEV_FILTERED_PARTITIONED;
return 0;
}


@@ -64,17 +64,11 @@ static int _init_hash(struct pfilter *pf)
return 1;
}
static void _persistent_filter_wipe(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)
static void _persistent_filter_wipe(struct dev_filter *f)
{
struct pfilter *pf = (struct pfilter *) f->private;
struct dm_str_list *sl;
if (!dev) {
dm_hash_wipe(pf->devices);
} else {
dm_list_iterate_items(sl, &dev->aliases)
dm_hash_remove(pf->devices, sl->str);
}
dm_hash_wipe(pf->devices);
}
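
The two versions of this function differ only in scope: one clears the whole persistent-filter cache, the other drops just the entries for a single device's aliases. A minimal sketch of both operations, assuming the libdevmapper hash and string-list types (dm_hash_table, dm_str_list) already used in the hunk:

#include <libdevmapper.h>

/* Sketch only: forget cached filter results. */

/* wipe the whole cache */
static void cache_wipe_all(struct dm_hash_table *devices)
{
        dm_hash_wipe(devices);
}

/* forget a single device: drop the entry for each of its path aliases
 * ("/dev/sda", "/dev/disk/by-id/...", ...) */
static void cache_wipe_dev(struct dm_hash_table *devices, struct dm_list *aliases)
{
        struct dm_str_list *sl;

        dm_list_iterate_items(sl, aliases)
                dm_hash_remove(devices, sl->str);
}
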
static int _lookup_p(struct cmd_context *cmd, struct dev_filter *f, struct device *dev, const char *use_filter_name)


@@ -15,7 +15,6 @@
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/commands/toolcontext.h"
struct rfilter {
struct dm_pool *mem;
@@ -152,14 +151,6 @@ static int _accept_p(struct cmd_context *cmd, struct dev_filter *f, struct devic
struct rfilter *rf = (struct rfilter *) f->private;
struct dm_str_list *sl;
dev->filtered_flags &= ~DEV_FILTERED_REGEX;
if (cmd->enable_devices_list)
return 1;
if (cmd->enable_devices_file && !cmd->filter_regex_with_devices_file)
return 1;
dm_list_iterate_items(sl, &dev->aliases) {
m = dm_regex_match(rf->engine, sl->str);
@@ -177,10 +168,8 @@ static int _accept_p(struct cmd_context *cmd, struct dev_filter *f, struct devic
first = 0;
}
if (rejected) {
dev->filtered_flags |= DEV_FILTERED_REGEX;
if (rejected)
log_debug_devs("%s: Skipping (regex)", dev_name(dev));
}
/*
* pass everything that doesn't match


@@ -16,7 +16,6 @@
#include "base/memory/zalloc.h"
#include "lib/misc/lib.h"
#include "lib/filters/filter.h"
#include "lib/commands/toolcontext.h"
#ifdef __linux__
@@ -28,11 +27,6 @@ static int _ignore_signature(struct cmd_context *cmd, struct dev_filter *f __att
char buf[BUFSIZE];
int ret = 0;
if (cmd->filter_nodata_only)
return 1;
dev->filtered_flags &= ~DEV_FILTERED_SIGNATURE;
if (!scan_bcache) {
/* let pass, call again after scan */
log_debug_devs("filter signature deferred %s", dev_name(dev));
@@ -46,21 +40,18 @@ static int _ignore_signature(struct cmd_context *cmd, struct dev_filter *f __att
log_debug_devs("%s: Skipping: error in signature detection",
dev_name(dev));
ret = 0;
dev->filtered_flags |= DEV_FILTERED_SIGNATURE;
goto out;
}
if (dev_is_lvm1(dev, buf, BUFSIZE)) {
log_debug_devs("%s: Skipping lvm1 device", dev_name(dev));
ret = 0;
dev->filtered_flags |= DEV_FILTERED_SIGNATURE;
goto out;
}
if (dev_is_pool(dev, buf, BUFSIZE)) {
log_debug_devs("%s: Skipping gfs-pool device", dev_name(dev));
ret = 0;
dev->filtered_flags |= DEV_FILTERED_SIGNATURE;
goto out;
}
ret = 1;


@@ -264,8 +264,6 @@ static int _accept_p(struct cmd_context *cmd, struct dev_filter *f, struct devic
{
struct dev_set *ds = (struct dev_set *) f->private;
dev->filtered_flags &= ~DEV_FILTERED_SYSFS;
if (!ds->initialised)
_init_devs(ds);
@@ -275,7 +273,6 @@ static int _accept_p(struct cmd_context *cmd, struct dev_filter *f, struct devic
if (!_set_lookup(ds, dev->dev)) {
log_debug_devs("%s: Skipping (sysfs)", dev_name(dev));
dev->filtered_flags |= DEV_FILTERED_SYSFS;
return 0;
}


@@ -22,13 +22,10 @@ static int _passes_lvm_type_device_filter(struct cmd_context *cmd, struct dev_fi
struct dev_types *dt = (struct dev_types *) f->private;
const char *name = dev_name(dev);
dev->filtered_flags &= ~DEV_FILTERED_DEVTYPE;
/* Is this a recognised device type? */
if (!dt->dev_type_array[MAJOR(dev->dev)].max_partitions) {
log_debug_devs("%s: Skipping: Unrecognised LVM device type %"
PRIu64, name, (uint64_t) MAJOR(dev->dev));
dev->filtered_flags |= DEV_FILTERED_DEVTYPE;
return 0;
}


@@ -113,9 +113,6 @@ static int _passes_usable_filter(struct cmd_context *cmd, struct dev_filter *f,
struct dev_usable_check_params ucp = {0};
int r = 1;
dev->filtered_flags &= ~DEV_FILTERED_MINSIZE;
dev->filtered_flags &= ~DEV_FILTERED_UNUSABLE;
/* further checks are done on dm devices only */
if (dm_is_dm_major(MAJOR(dev->dev))) {
switch (mode) {
@@ -145,10 +142,8 @@ static int _passes_usable_filter(struct cmd_context *cmd, struct dev_filter *f,
break;
}
if (!(r = device_is_usable(dev, ucp))) {
dev->filtered_flags |= DEV_FILTERED_UNUSABLE;
if (!(r = device_is_usable(dev, ucp)))
log_debug_devs("%s: Skipping unusable device.", dev_name(dev));
}
}
if (r) {
@@ -158,8 +153,6 @@ static int _passes_usable_filter(struct cmd_context *cmd, struct dev_filter *f,
/* fall through */
case FILTER_MODE_PRE_LVMETAD:
r = _check_pv_min_size(dev);
if (!r)
dev->filtered_flags |= DEV_FILTERED_MINSIZE;
break;
case FILTER_MODE_POST_LVMETAD:
/* nothing to do here */


@@ -30,7 +30,6 @@ struct dev_filter *partitioned_filter_create(struct dev_types *dt);
struct dev_filter *persistent_filter_create(struct dev_types *dt, struct dev_filter *f);
struct dev_filter *sysfs_filter_create(void);
struct dev_filter *signature_filter_create(struct dev_types *dt);
struct dev_filter *deviceid_filter_create(struct cmd_context *cmd);
struct dev_filter *internal_filter_create(void);
int internal_filter_allow(struct dm_pool *mem, struct device *dev);
@@ -53,18 +52,4 @@ typedef enum {
} filter_mode_t;
struct dev_filter *usable_filter_create(struct cmd_context *cmd, struct dev_types *dt, filter_mode_t mode);
#define DEV_FILTERED_FWRAID 0x00000001
#define DEV_FILTERED_INTERNAL 0x00000002
#define DEV_FILTERED_MD_COMPONENT 0x00000004
#define DEV_FILTERED_MPATH_COMPONENT 0x00000008
#define DEV_FILTERED_PARTITIONED 0x00000010
#define DEV_FILTERED_REGEX 0x00000020
#define DEV_FILTERED_SIGNATURE 0x00000040
#define DEV_FILTERED_SYSFS 0x00000080
#define DEV_FILTERED_DEVTYPE 0x00000100
#define DEV_FILTERED_MINSIZE 0x00000200
#define DEV_FILTERED_UNUSABLE 0x00000400
#define DEV_FILTERED_DEVICES_FILE 0x00000800
#define DEV_FILTERED_DEVICES_LIST 0x00001000
#endif /* _LVM_FILTER_H */
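
Every passes_filter callback touched by this diff follows the same convention with these bits: clear your own flag on entry and set it only when rejecting the device, so that after a filter-chain run dev->filtered_flags records exactly which filter rejected it. A stand-alone sketch of that convention follows; the struct and flag name are stand-ins, not the lvm2 definitions.

/* Sketch only: the clear-then-set filtered_flags convention. */
#define DEV_FILTERED_EXAMPLE 0x00000020         /* stand-in for one of the bits above */

struct device_sketch {
        unsigned filtered_flags;
};

/* returns 1 if the device passes, 0 if this filter rejects it */
static int example_passes_filter(struct device_sketch *dev, int reject)
{
        dev->filtered_flags &= ~DEV_FILTERED_EXAMPLE;   /* drop any verdict from an earlier run */

        if (reject) {
                dev->filtered_flags |= DEV_FILTERED_EXAMPLE;    /* record who rejected the device */
                return 0;
        }

        return 1;
}
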


@@ -315,7 +315,7 @@ struct volume_group *backup_read_vg(struct cmd_context *cmd,
}
dm_list_iterate_items(mda, &tf->metadata_areas_in_use) {
if (!(vg = mda->ops->vg_read(cmd, tf, vg_name, mda, NULL, NULL)))
if (!(vg = mda->ops->vg_read(tf, vg_name, mda, NULL, NULL)))
stack;
break;
}

Some files were not shown because too many files have changed in this diff.