mirror of git://sourceware.org/git/lvm2.git
synced 2024-12-21 13:34:40 +03:00

Merge branch 'master' into 2018-05-11-fork-libdm

This commit is contained in: commit dbba1e9b93

52  .gitignore (vendored)
@@ -79,5 +79,55 @@ test/lib/vgrename
test/lib/vgs
test/lib/vgscan
test/lib/vgsplit

test/api/lvtest.t
test/api/pe_start.t
test/api/percent.t
test/api/python_lvm_unit.py
test/api/test
test/api/thin_percent.t
test/api/vglist.t
test/api/vgtest.t
test/lib/aux
test/lib/check
test/lib/clvmd
test/lib/dm-version-expected
test/lib/dmeventd
test/lib/dmsetup
test/lib/dmstats
test/lib/fail
test/lib/flavour-ndev-cluster
test/lib/flavour-ndev-cluster-lvmpolld
test/lib/flavour-ndev-lvmetad
test/lib/flavour-ndev-lvmetad-lvmpolld
test/lib/flavour-ndev-lvmpolld
test/lib/flavour-ndev-vanilla
test/lib/flavour-udev-cluster
test/lib/flavour-udev-cluster-lvmpolld
test/lib/flavour-udev-lvmetad
test/lib/flavour-udev-lvmetad-lvmpolld
test/lib/flavour-udev-lvmlockd-dlm
test/lib/flavour-udev-lvmlockd-sanlock
test/lib/flavour-udev-lvmlockd-test
test/lib/flavour-udev-lvmpolld
test/lib/flavour-udev-vanilla
test/lib/fsadm
test/lib/get
test/lib/inittest
test/lib/invalid
test/lib/lvm
test/lib/lvm-wrapper
test/lib/lvmchange
test/lib/lvmdbusd.profile
test/lib/lvmetad
test/lib/lvmpolld
test/lib/not
test/lib/paths
test/lib/paths-common
test/lib/runner
test/lib/should
test/lib/test
test/lib/thin-performance.profile
test/lib/utils
test/lib/version-expected
test/unit/dmraid_t.c
test/unit/unit-test
@@ -1 +1 @@
1.02.147-git (2017-12-18)
1.02.147-git (2018-05-24)

25  WHATS_NEW
@@ -1,16 +1,22 @@
Version 2.02.178 -
=====================================
====================================
Use versionsort to fix archive file expiry beyond 100000 files.

Version 2.02.178-rc1 - 24th May 2018
====================================
Add libaio dependency for build.
Remove lvm1 and pool format handling and add filter to ignore them.
Move some filter checks to after disks are read.
Rework disk scanning and when it is used.
Add new io layer and shift code to using it.
lvconvert: don't return success on degraded -m raid1 conversion
Fix lvconvert's return code on degraded -m raid1 conversion.
--enable-testing switch for ./configure has been removed.
--with-snapshots switch for ./configure has been removed.
--with-mirrors switch for ./configure has been removed.
--with-raid switch for ./configure has been removed.
--with-thin switch for ./configure has been removed.
--with-cache switch for ./configure has been removed.
Include new unit-test framework and unit tests.
Extend validation of region_size for mirror segment.
Reload whole device stack when reinitilizing mirror log.
Mirrors without monitoring are WARNING and not blocking on error.
@@ -18,7 +24,7 @@ Version 2.02.178 -
Fix evaluation of maximal region size for mirror log.
Enhance mirror log size estimation and use smaller size when possible.
Fix incorrect mirror log size calculation on 32bit arch.
Enhnace preloading tree creating.
Enhance preloading tree creating.
Fix regression on acceptance of any LV on lvconvert.
Restore usability of thin LV to be again external origin for another thin.
Keep systemd vars on change event in 69-dm-lvm-metad.rules for systemd reload.
@@ -34,8 +40,8 @@ Version 2.02.178 -
Enhance mirror log initialization for old mirror target.
Skip private crypto and stratis devices.
Skip frozen raid devices from scanning.
Activate RAID SubLVs on read_only_volume_list readwrite
Offer convenience type raid5_n converting to raid10
Activate RAID SubLVs on read_only_volume_list readwrite.
Offer convenience type raid5_n converting to raid10.
Automatically avoid reading invalid snapshots during device scan.
Ensure COW device is writable even for read-only thick snapshots.
Support activation of component LVs in read-only mode.
@@ -53,20 +59,13 @@ Version 2.02.178 -
Improve validation of created strings in vgimportclone.
Add missing initialisation of mem pool in systemd generator.
Do not reopen output streams for multithreaded users of liblvm.
Use versionsort to fix archive file expiry beyond 100000 files.
Add devices/use_aio, aio_max, aio_memory to configure AIO limits.
Support asynchronous I/O when scanning devices.
Detect asynchronous I/O capability in configure or accept --disable-aio.
Add AIO_SUPPORTED_CODE_PATH to indicate whether AIO may be used.
Configure ensures /usr/bin dir is checked for dmpd tools.
Restore pvmove support for wide-clustered active volumes (2.02.177).
Avoid non-exclusive activation of exclusive segment types.
Fix trimming sibling PVs when doing a pvmove of raid subLVs.
Preserve exclusive activation during thin snaphost merge.
Suppress some repeated reads of the same disk data at the device layer.
Avoid exceeding array bounds in allocation tag processing.
Refactor metadata reading code to use callback functions.
Move memory allocation for the key dev_reads into the device layer.
Add --lockopt to common options and add option to skip selected locks.

Version 2.02.177 - 18th December 2017
=====================================
@@ -1,5 +1,8 @@
Version 1.02.147 -
=====================================
====================================

Version 1.02.147-rc1 - 24th May 2018
====================================
Reuse uname() result for mirror target.
Recognize also mounted btrfs through dm_device_has_mounted_fs().
Add missing log_error() into dm_stats_populate() returning 0.
@@ -68,31 +68,43 @@ struct node48 {
};

struct node256 {
    uint32_t nr_entries;
    struct value values[256];
};

struct radix_tree {
    unsigned nr_entries;
    struct value root;
    radix_value_dtr dtr;
    void *dtr_context;
};

//----------------------------------------------------------------

struct radix_tree *radix_tree_create(void)
struct radix_tree *radix_tree_create(radix_value_dtr dtr, void *dtr_context)
{
    struct radix_tree *rt = malloc(sizeof(*rt));

    if (rt) {
        rt->nr_entries = 0;
        rt->root.type = UNSET;
        rt->dtr = dtr;
        rt->dtr_context = dtr_context;
    }

    return rt;
}

static void _free_node(struct value v, radix_value_dtr dtr, void *context)
static inline void _dtr(struct radix_tree *rt, union radix_value v)
{
    unsigned i;
    if (rt->dtr)
        rt->dtr(rt->dtr_context, v);
}

// Returns the number of values removed
static unsigned _free_node(struct radix_tree *rt, struct value v)
{
    unsigned i, nr = 0;
    struct value_chain *vc;
    struct prefix_chain *pc;
    struct node4 *n4;
@@ -105,63 +117,69 @@ static void _free_node(struct value v, radix_value_dtr dtr, void *context)
        break;

    case VALUE:
        if (dtr)
            dtr(context, v.value);
        _dtr(rt, v.value);
        nr = 1;
        break;

    case VALUE_CHAIN:
        vc = v.value.ptr;
        if (dtr)
            dtr(context, vc->value);
        _free_node(vc->child, dtr, context);
        _dtr(rt, vc->value);
        nr = 1 + _free_node(rt, vc->child);
        free(vc);
        break;

    case PREFIX_CHAIN:
        pc = v.value.ptr;
        _free_node(pc->child, dtr, context);
        nr = _free_node(rt, pc->child);
        free(pc);
        break;

    case NODE4:
        n4 = (struct node4 *) v.value.ptr;
        for (i = 0; i < n4->nr_entries; i++)
            _free_node(n4->values[i], dtr, context);
            nr += _free_node(rt, n4->values[i]);
        free(n4);
        break;

    case NODE16:
        n16 = (struct node16 *) v.value.ptr;
        for (i = 0; i < n16->nr_entries; i++)
            _free_node(n16->values[i], dtr, context);
            nr += _free_node(rt, n16->values[i]);
        free(n16);
        break;

    case NODE48:
        n48 = (struct node48 *) v.value.ptr;
        for (i = 0; i < n48->nr_entries; i++)
            _free_node(n48->values[i], dtr, context);
            nr += _free_node(rt, n48->values[i]);
        free(n48);
        break;

    case NODE256:
        n256 = (struct node256 *) v.value.ptr;
        for (i = 0; i < 256; i++)
            _free_node(n256->values[i], dtr, context);
            nr += _free_node(rt, n256->values[i]);
        free(n256);
        break;
    }

    return nr;
}

void radix_tree_destroy(struct radix_tree *rt, radix_value_dtr dtr, void *context)
void radix_tree_destroy(struct radix_tree *rt)
{
    _free_node(rt->root, dtr, context);
    _free_node(rt, rt->root);
    free(rt);
}

static bool _insert(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv);
unsigned radix_tree_size(struct radix_tree *rt)
{
    return rt->nr_entries;
}

static bool _insert_unset(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv);

static bool _insert_unset(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    unsigned len = ke - kb;

@@ -169,6 +187,7 @@ static bool _insert_unset(struct value *v, uint8_t *kb, uint8_t *ke, union radix
        // value
        v->type = VALUE;
        v->value = rv;
        rt->nr_entries++;
    } else {
        // prefix -> value
        struct prefix_chain *pc = zalloc(sizeof(*pc) + len);
@@ -181,12 +200,13 @@ static bool _insert_unset(struct value *v, uint8_t *kb, uint8_t *ke, union radix
        memcpy(pc->prefix, kb, len);
        v->type = PREFIX_CHAIN;
        v->value.ptr = pc;
        rt->nr_entries++;
    }

    return true;
}

static bool _insert_value(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_value(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    unsigned len = ke - kb;

@@ -201,7 +221,7 @@ static bool _insert_value(struct value *v, uint8_t *kb, uint8_t *ke, union radix
        return false;

    vc->value = v->value;
    if (!_insert(&vc->child, kb, ke, rv)) {
    if (!_insert(rt, &vc->child, kb, ke, rv)) {
        free(vc);
        return false;
    }
@@ -213,10 +233,10 @@ static bool _insert_value(struct value *v, uint8_t *kb, uint8_t *ke, union radix
    return true;
}

static bool _insert_value_chain(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_value_chain(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct value_chain *vc = v->value.ptr;
    return _insert(&vc->child, kb, ke, rv);
    return _insert(rt, &vc->child, kb, ke, rv);
}

static unsigned min(unsigned lhs, unsigned rhs)
@@ -227,7 +247,7 @@ static unsigned min(unsigned lhs, unsigned rhs)
    return rhs;
}

static bool _insert_prefix_chain(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_prefix_chain(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct prefix_chain *pc = v->value.ptr;

@@ -251,7 +271,7 @@ static bool _insert_prefix_chain(struct value *v, uint8_t *kb, uint8_t *ke, unio
    pc->child.value.ptr = pc2;
    pc->len = i;

    if (!_insert(&pc->child, kb + i, ke, rv)) {
    if (!_insert(rt, &pc->child, kb + i, ke, rv)) {
        free(pc2);
        return false;
    }
@@ -263,7 +283,7 @@ static bool _insert_prefix_chain(struct value *v, uint8_t *kb, uint8_t *ke, unio
        return false;

    n4->keys[0] = *kb;
    if (!_insert(n4->values, kb + 1, ke, rv)) {
    if (!_insert(rt, n4->values, kb + 1, ke, rv)) {
        free(n4);
        return false;
    }
@@ -289,7 +309,7 @@ static bool _insert_prefix_chain(struct value *v, uint8_t *kb, uint8_t *ke, unio
    return true;
}

static bool _insert_node4(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_node4(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct node4 *n4 = v->value.ptr;
    if (n4->nr_entries == 4) {
@@ -302,7 +322,7 @@ static bool _insert_node4(struct value *v, uint8_t *kb, uint8_t *ke, union radix
        memcpy(n16->values, n4->values, sizeof(n4->values));

        n16->keys[4] = *kb;
        if (!_insert(n16->values + 4, kb + 1, ke, rv)) {
        if (!_insert(rt, n16->values + 4, kb + 1, ke, rv)) {
            free(n16);
            return false;
        }
@@ -311,7 +331,7 @@ static bool _insert_node4(struct value *v, uint8_t *kb, uint8_t *ke, union radix
        v->value.ptr = n16;
    } else {
        n4 = v->value.ptr;
        if (!_insert(n4->values + n4->nr_entries, kb + 1, ke, rv))
        if (!_insert(rt, n4->values + n4->nr_entries, kb + 1, ke, rv))
            return false;

        n4->keys[n4->nr_entries] = *kb;
@@ -320,7 +340,7 @@ static bool _insert_node4(struct value *v, uint8_t *kb, uint8_t *ke, union radix
    return true;
}

static bool _insert_node16(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_node16(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct node16 *n16 = v->value.ptr;

@@ -340,7 +360,7 @@ static bool _insert_node16(struct value *v, uint8_t *kb, uint8_t *ke, union radi
    }

    n48->keys[*kb] = 16;
    if (!_insert(n48->values + 16, kb + 1, ke, rv)) {
    if (!_insert(rt, n48->values + 16, kb + 1, ke, rv)) {
        free(n48);
        return false;
    }
@@ -349,7 +369,7 @@ static bool _insert_node16(struct value *v, uint8_t *kb, uint8_t *ke, union radi
        v->type = NODE48;
        v->value.ptr = n48;
    } else {
        if (!_insert(n16->values + n16->nr_entries, kb + 1, ke, rv))
        if (!_insert(rt, n16->values + n16->nr_entries, kb + 1, ke, rv))
            return false;
        n16->keys[n16->nr_entries] = *kb;
        n16->nr_entries++;
@@ -358,7 +378,7 @@ static bool _insert_node16(struct value *v, uint8_t *kb, uint8_t *ke, union radi
    return true;
}

static bool _insert_node48(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_node48(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct node48 *n48 = v->value.ptr;
    if (n48->nr_entries == 48) {
@@ -374,7 +394,7 @@ static bool _insert_node48(struct value *v, uint8_t *kb, uint8_t *ke, union radi
        n256->values[i] = n48->values[n48->keys[i]];
    }

    if (!_insert(n256->values + *kb, kb + 1, ke, rv)) {
    if (!_insert(rt, n256->values + *kb, kb + 1, ke, rv)) {
        free(n256);
        return false;
    }
@@ -384,7 +404,7 @@ static bool _insert_node48(struct value *v, uint8_t *kb, uint8_t *ke, union radi
        v->value.ptr = n256;

    } else {
        if (!_insert(n48->values + n48->nr_entries, kb + 1, ke, rv))
        if (!_insert(rt, n48->values + n48->nr_entries, kb + 1, ke, rv))
            return false;

        n48->keys[*kb] = n48->nr_entries;
@@ -394,24 +414,28 @@ static bool _insert_node48(struct value *v, uint8_t *kb, uint8_t *ke, union radi
    return true;
}

static bool _insert_node256(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
static bool _insert_node256(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
{
    struct node256 *n256 = v->value.ptr;
    if (!_insert(n256->values + *kb, kb + 1, ke, rv)) {
        n256->values[*kb].type = UNSET;
    bool was_unset = n256->values[*kb].type == UNSET;

    if (!_insert(rt, n256->values + *kb, kb + 1, ke, rv))
        return false;
    }

    if (was_unset)
        n256->nr_entries++;

    return true;
}

|
||||
static bool _insert(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
|
||||
static bool _insert(struct radix_tree *rt, struct value *v, uint8_t *kb, uint8_t *ke, union radix_value rv)
|
||||
{
|
||||
if (kb == ke) {
|
||||
if (v->type == UNSET) {
|
||||
v->type = VALUE;
|
||||
v->value = rv;
|
||||
rt->nr_entries++;
|
||||
|
||||
} else if (v->type == VALUE) {
|
||||
v->value = rv;
|
||||
@ -425,34 +449,35 @@ static bool _insert(struct value *v, uint8_t *kb, uint8_t *ke, union radix_value
|
||||
vc->child = *v;
|
||||
v->type = VALUE_CHAIN;
|
||||
v->value.ptr = vc;
|
||||
rt->nr_entries++;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
switch (v->type) {
|
||||
case UNSET:
|
||||
return _insert_unset(v, kb, ke, rv);
|
||||
return _insert_unset(rt, v, kb, ke, rv);
|
||||
|
||||
case VALUE:
|
||||
return _insert_value(v, kb, ke, rv);
|
||||
return _insert_value(rt, v, kb, ke, rv);
|
||||
|
||||
case VALUE_CHAIN:
|
||||
return _insert_value_chain(v, kb, ke, rv);
|
||||
return _insert_value_chain(rt, v, kb, ke, rv);
|
||||
|
||||
case PREFIX_CHAIN:
|
||||
return _insert_prefix_chain(v, kb, ke, rv);
|
||||
return _insert_prefix_chain(rt, v, kb, ke, rv);
|
||||
|
||||
case NODE4:
|
||||
return _insert_node4(v, kb, ke, rv);
|
||||
return _insert_node4(rt, v, kb, ke, rv);
|
||||
|
||||
case NODE16:
|
||||
return _insert_node16(v, kb, ke, rv);
|
||||
return _insert_node16(rt, v, kb, ke, rv);
|
||||
|
||||
case NODE48:
|
||||
return _insert_node48(v, kb, ke, rv);
|
||||
return _insert_node48(rt, v, kb, ke, rv);
|
||||
|
||||
case NODE256:
|
||||
return _insert_node256(v, kb, ke, rv);
|
||||
return _insert_node256(rt, v, kb, ke, rv);
|
||||
}
|
||||
|
||||
// can't get here
|
||||
@ -530,17 +555,216 @@ static struct lookup_result _lookup_prefix(struct value *v, uint8_t *kb, uint8_t
|
||||
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value rv)
|
||||
{
|
||||
struct lookup_result lr = _lookup_prefix(&rt->root, kb, ke);
|
||||
if (_insert(lr.v, lr.kb, ke, rv)) {
|
||||
rt->nr_entries++;
|
||||
return _insert(rt, lr.v, lr.kb, ke, rv);
|
||||
}
|
||||
|
||||
// Note the degrade functions also free the original node.
static void _degrade_to_n4(struct node16 *n16, struct value *result)
{
    struct node4 *n4 = zalloc(sizeof(*n4));

    n4->nr_entries = n16->nr_entries;
    memcpy(n4->keys, n16->keys, n16->nr_entries * sizeof(*n4->keys));
    memcpy(n4->values, n16->values, n16->nr_entries * sizeof(*n4->values));
    free(n16);

    result->type = NODE4;
    result->value.ptr = n4;
}

static void _degrade_to_n16(struct node48 *n48, struct value *result)
{
    struct node4 *n16 = zalloc(sizeof(*n16));

    n16->nr_entries = n48->nr_entries;
    memcpy(n16->keys, n48->keys, n48->nr_entries * sizeof(*n16->keys));
    memcpy(n16->values, n48->values, n48->nr_entries * sizeof(*n16->values));
    free(n48);

    result->type = NODE16;
    result->value.ptr = n16;
}

static void _degrade_to_n48(struct node256 *n256, struct value *result)
{
    unsigned i, count = 0;
    struct node4 *n48 = zalloc(sizeof(*n48));

    n48->nr_entries = n256->nr_entries;
    for (i = 0; i < 256; i++) {
        if (n256->values[i].type == UNSET)
            continue;

        n48->keys[count] = i;
        n48->values[count] = n256->values[i];
        count++;
    }
    free(n256);

    result->type = NODE48;
    result->value.ptr = n48;
}

static bool _remove(struct radix_tree *rt, struct value *root, uint8_t *kb, uint8_t *ke)
{
    bool r;
    unsigned i;
    struct value_chain *vc;
    struct prefix_chain *pc;
    struct node4 *n4;
    struct node16 *n16;
    struct node48 *n48;
    struct node256 *n256;

    if (kb == ke) {
        if (root->type == VALUE) {
            root->type = UNSET;
            _dtr(rt, root->value);
            return true;

        } else if (root->type == VALUE_CHAIN) {
            vc = root->value.ptr;
            _dtr(rt, vc->value);
            memcpy(root, &vc->child, sizeof(*root));
            free(vc);
            return true;

        } else
            return false;
    }

    switch (root->type) {
    case UNSET:
    case VALUE:
        // this is a value for a prefix of the key
        return false;

    case VALUE_CHAIN:
        vc = root->value.ptr;
        r = _remove(rt, &vc->child, kb, ke);
        if (r && (vc->child.type == UNSET)) {
            memcpy(root, &vc->child, sizeof(*root));
            free(vc);
        }
        return r;

    case PREFIX_CHAIN:
        pc = root->value.ptr;
        if (ke - kb < pc->len)
            return false;

        for (i = 0; i < pc->len; i++)
            if (kb[i] != pc->prefix[i])
                return false;

        return _remove(rt, &pc->child, kb + pc->len, ke);

    case NODE4:
        n4 = root->value.ptr;
        for (i = 0; i < n4->nr_entries; i++) {
            if (n4->keys[i] == *kb) {
                r = _remove(rt, n4->values + i, kb + 1, ke);
                if (r && n4->values[i].type == UNSET) {
                    n4->nr_entries--;
                    if (i < n4->nr_entries)
                        // slide the entries down
                        memmove(n4->keys + i, n4->keys + i + 1,
                                sizeof(*n4->keys) * (n4->nr_entries - i));
                    if (!n4->nr_entries)
                        root->type = UNSET;
                }
                return r;
            }
        }
        return false;

    case NODE16:
        n16 = root->value.ptr;
        for (i = 0; i < n16->nr_entries; i++) {
            if (n16->keys[i] == *kb) {
                r = _remove(rt, n16->values + i, kb + 1, ke);
                if (r && n16->values[i].type == UNSET) {
                    n16->nr_entries--;
                    if (i < n16->nr_entries)
                        // slide the entries down
                        memmove(n16->keys + i, n16->keys + i + 1,
                                sizeof(*n16->keys) * (n16->nr_entries - i));
                    if (n16->nr_entries <= 4)
                        _degrade_to_n4(n16, root);
                }
                return r;
            }
        }
        return false;

    case NODE48:
        n48 = root->value.ptr;
        i = n48->keys[*kb];
        if (i < 48) {
            r = _remove(rt, n48->values + i, kb + 1, ke);
            if (r && n48->values[i].type == UNSET) {
                n48->keys[*kb] = 48;
                n48->nr_entries--;
                if (n48->nr_entries <= 16)
                    _degrade_to_n16(n48, root);
            }
            return r;
        }
        return false;

    case NODE256:
        n256 = root->value.ptr;
        r = _remove(rt, n256->values + (*kb), kb + 1, ke);
        if (r && n256->values[*kb].type == UNSET) {
            n256->nr_entries--;
            if (n256->nr_entries <= 48)
                _degrade_to_n48(n256, root);
        }
        return r;
    }

    return false;
}

bool radix_tree_remove(struct radix_tree *rt, uint8_t *key_begin, uint8_t *key_end)
{
    if (_remove(rt, &rt->root, key_begin, key_end)) {
        rt->nr_entries--;
        return true;
    }

    return false;
}

void radix_tree_delete(struct radix_tree *rt, uint8_t *key_begin, uint8_t *key_end)
static bool _prefix_chain_matches(struct lookup_result *lr, uint8_t *ke)
{
    assert(0);
    // It's possible the top node is a prefix chain, and
    // the remaining key matches part of it.
    if (lr->v->type == PREFIX_CHAIN) {
        unsigned i, rlen = ke - lr->kb;
        struct prefix_chain *pc = lr->v->value.ptr;
        if (rlen < pc->len) {
            for (i = 0; i < rlen; i++)
                if (pc->prefix[i] != lr->kb[i])
                    return false;
            return true;
        }
    }

    return false;
}

unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *kb, uint8_t *ke)
{
    unsigned count = 0;
    struct lookup_result lr = _lookup_prefix(&rt->root, kb, ke);
    if (lr.kb == ke || _prefix_chain_matches(&lr, ke)) {
        count = _free_node(rt, *lr.v);
        lr.v->type = UNSET;
    }

    rt->nr_entries -= count;
    return count;
}

bool radix_tree_lookup(struct radix_tree *rt,
@@ -567,4 +791,72 @@ bool radix_tree_lookup(struct radix_tree *rt,
    return false;
}

// FIXME: build up the keys too
static bool _iterate(struct value *v, struct radix_tree_iterator *it)
{
    unsigned i;
    struct value_chain *vc;
    struct prefix_chain *pc;
    struct node4 *n4;
    struct node16 *n16;
    struct node48 *n48;
    struct node256 *n256;

    switch (v->type) {
    case UNSET:
        // can't happen
        break;

    case VALUE:
        return it->visit(it, NULL, NULL, v->value);

    case VALUE_CHAIN:
        vc = v->value.ptr;
        return it->visit(it, NULL, NULL, vc->value) && _iterate(&vc->child, it);

    case PREFIX_CHAIN:
        pc = v->value.ptr;
        return _iterate(&pc->child, it);

    case NODE4:
        n4 = (struct node4 *) v->value.ptr;
        for (i = 0; i < n4->nr_entries; i++)
            if (!_iterate(n4->values + i, it))
                return false;
        return true;

    case NODE16:
        n16 = (struct node16 *) v->value.ptr;
        for (i = 0; i < n16->nr_entries; i++)
            if (!_iterate(n16->values + i, it))
                return false;
        return true;

    case NODE48:
        n48 = (struct node48 *) v->value.ptr;
        for (i = 0; i < n48->nr_entries; i++)
            if (!_iterate(n48->values + i, it))
                return false;
        return true;

    case NODE256:
        n256 = (struct node256 *) v->value.ptr;
        for (i = 0; i < 256; i++)
            if (n256->values[i].type != UNSET && !_iterate(n256->values + i, it))
                return false;
        return true;
    }

    // can't get here
    return false;
}

void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
                        struct radix_tree_iterator *it)
{
    struct lookup_result lr = _lookup_prefix(&rt->root, kb, ke);
    if (lr.kb == ke || _prefix_chain_matches(&lr, ke))
        _iterate(lr.v, it);
}

//----------------------------------------------------------------

@@ -25,19 +25,34 @@ union radix_value {
    uint64_t n;
};

struct radix_tree *radix_tree_create(void);

typedef void (*radix_value_dtr)(void *context, union radix_value v);

// dtr may be NULL
void radix_tree_destroy(struct radix_tree *rt, radix_value_dtr dtr, void *context);
// dtr will be called on any deleted entries. dtr may be NULL.
struct radix_tree *radix_tree_create(radix_value_dtr dtr, void *dtr_context);
void radix_tree_destroy(struct radix_tree *rt);

unsigned radix_tree_size(struct radix_tree *rt);
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value v);
void radix_tree_delete(struct radix_tree *rt, uint8_t *kb, uint8_t *ke);
bool radix_tree_remove(struct radix_tree *rt, uint8_t *kb, uint8_t *ke);

// Returns the number of values removed
unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *prefix_b, uint8_t *prefix_e);

bool radix_tree_lookup(struct radix_tree *rt,
                       uint8_t *kb, uint8_t *ke, union radix_value *result);

// The radix tree stores entries in lexicographical order. Which means
// we can iterate entries, in order. Or iterate entries with a particular
// prefix.
struct radix_tree_iterator {
    // Returns false if the iteration should end.
    bool (*visit)(struct radix_tree_iterator *it,
                  uint8_t *kb, uint8_t *ke, union radix_value v);
};

void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
                        struct radix_tree_iterator *it);

//----------------------------------------------------------------

#endif
@@ -832,7 +832,7 @@ void lvm_do_backup(const char *vgname)

    pthread_mutex_lock(&lvm_lock);

    vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, WARN_PV_READ, &consistent);
    vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, 0, WARN_PV_READ, &consistent);

    if (vg && consistent)
        check_current_backup(vg);
@@ -44,6 +44,8 @@ LVMDBUS_BUILDDIR_FILES = \

LVMDBUSD = lvmdbusd

CLEAN_DIRS += __pycache__

include $(top_builddir)/make.tmpl

.PHONY: install_lvmdbusd
@@ -497,7 +497,7 @@ class Lv(LvCommon):

    # it is a thin lv
    if not dbo.IsThinVolume:
        if optional_size == 0:
            space = dbo.SizeBytes / 80
            space = dbo.SizeBytes // 80
            remainder = space % 512
            optional_size = space + 512 - remainder
@@ -1009,6 +1009,8 @@ static void add_work_action(struct action *act)
 	pthread_mutex_unlock(&worker_mutex);
 }
 
+#define ERR_LVMETAD_NOT_RUNNING -200
+
 static daemon_reply send_lvmetad(const char *id, ...)
 {
 	daemon_reply reply;
@@ -1029,9 +1031,9 @@ retry:
 	if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0) {
 		err = lvmetad_handle.error ?: lvmetad_handle.socket_fd;
 		pthread_mutex_unlock(&lvmetad_mutex);
-		log_error("lvmetad_open reconnect error %d", err);
+		log_debug("lvmetad_open reconnect error %d", err);
 		memset(&reply, 0, sizeof(reply));
-		reply.error = err;
+		reply.error = ERR_LVMETAD_NOT_RUNNING;
 		va_end(ap);
 		return reply;
 	} else {
@@ -1265,6 +1267,15 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
 	 * caches, and tell lvmetad to set global invalid to 0.
 	 */
 
+	/*
+	 * lvmetad not running:
+	 * Even if we have not previously found lvmetad running,
+	 * we attempt to connect and invalidate in case it has
+	 * been started while lvmlockd is running.  We don't
+	 * want to allow lvmetad to be used with invalid data if
+	 * it happens to be enabled and started after lvmlockd.
+	 */
+
 	if (inval_meta && (r->type == LD_RT_VG)) {
 		daemon_reply reply;
 		char *uuid;
@@ -1284,8 +1295,10 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
 					   "version = " FMTd64, (int64_t)new_version,
 					   NULL);
 
-		if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
-			log_error("set_vg_info in lvmetad failed %d", reply.error);
+		if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
+			if (reply.error != ERR_LVMETAD_NOT_RUNNING)
+				log_error("set_vg_info in lvmetad failed %d", reply.error);
+		}
 		daemon_reply_destroy(reply);
 	}
 
@@ -1300,8 +1313,10 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
 					   "global_invalid = " FMTd64, INT64_C(1),
 					   NULL);
 
-		if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
-			log_error("set_global_info in lvmetad failed %d", reply.error);
+		if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
+			if (reply.error != ERR_LVMETAD_NOT_RUNNING)
+				log_error("set_global_info in lvmetad failed %d", reply.error);
+		}
 		daemon_reply_destroy(reply);
 	}
 
@@ -5848,7 +5863,7 @@ static int main_loop(daemon_state *ds_arg)
 	pthread_mutex_init(&lvmetad_mutex, NULL);
 	lvmetad_handle = lvmetad_open(NULL);
 	if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0)
-		log_error("lvmetad_open error %d", lvmetad_handle.error);
+		log_debug("lvmetad_open error %d", lvmetad_handle.error);
 	else
 		lvmetad_connected = 1;
 
@@ -5856,8 +5871,13 @@ static int main_loop(daemon_state *ds_arg)
 	 * Attempt to rejoin lockspaces and adopt locks from a previous
 	 * instance of lvmlockd that left behind lockspaces/locks.
 	 */
-	if (adopt_opt)
-		adopt_locks();
+	if (adopt_opt) {
+		/* FIXME: implement this without lvmetad */
+		if (!lvmetad_connected)
+			log_error("Cannot adopt locks without lvmetad running.");
+		else
+			adopt_locks();
+	}
 
 	while (1) {
 		rv = poll(pollfd, pollfd_maxi + 1, -1);
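The lvmlockd change above replaces a raw connect error with the `ERR_LVMETAD_NOT_RUNNING` sentinel, so callers can tell "lvmetad simply isn't running" apart from real failures and stay quiet about the former. A hedged sketch of that pattern (the `send_lvmetad`/`fail` names here are illustrative, not the C API):

```python
ERR_LVMETAD_NOT_RUNNING = -200  # same sentinel value as the diff

def send_lvmetad(connect):
    # Collapse every "daemon not reachable" failure into one sentinel
    # code so callers can ignore it without emitting error logs.
    try:
        conn = connect()
    except ConnectionError:
        return {"error": ERR_LVMETAD_NOT_RUNNING}
    return {"error": 0, "conn": conn}

def fail():
    raise ConnectionError("lvmetad socket not available")

reply = send_lvmetad(fail)
# Callers only report errors that are not the sentinel:
should_log = bool(reply["error"]) and reply["error"] != ERR_LVMETAD_NOT_RUNNING
```

This is why `res_lock()` can unconditionally try to invalidate lvmetad state: a missing daemon degrades to a silently ignored sentinel rather than log noise.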
53 doc/release-notes/2.02.178 Normal file
@@ -0,0 +1,53 @@
Version 2.02.178
================

There are going to be some large changes to the lvm2 codebase
over the next year or so, starting with this release. These
changes should be internal rather than having a big effect on
the command line. Inevitably these changes will increase the
chance of bugs, so please be on the alert.


Remove support for obsolete metadata formats
--------------------------------------------

Support for the GFS pool format, and the format used by the
original 1990s version of LVM1, has been removed.

Use asynchronous IO
-------------------

Almost all IO now uses libaio.

Rewrite label scanning
----------------------

Dave Teigland has reworked the label scanning and metadata reading
logic to minimise the number of IOs issued. Combined with the aio changes
this can greatly improve scanning speed on some systems.

./configure options
-------------------

We're going to try to remove as many options from ./configure as we
can. Each option multiplies the number of possible configurations
that we should test (and this testing is currently not occurring).

The first batch to be removed are:

  --enable-testing
  --with-snapshots
  --with-mirrors
  --with-raid
  --with-thin
  --with-cache

Stable targets that are in the upstream kernel will just be supported.

In future, optional target flags will be given in two situations:

1) The target is experimental, or not upstream at all (e.g. vdo).
2) The target is deprecated and support will be removed at some future date.

This decision could well be contentious, so distro maintainers should feel
free to comment.
@@ -21,6 +21,7 @@ ifeq ("@CLUSTER@", "shared")
 endif
 
 SOURCES =\
+	../base/data-struct/radix-tree.c \
 	activate/activate.c \
 	cache/lvmcache.c \
 	cache_segtype/cache.c \
123 lib/cache/lvmcache.c vendored
@@ -981,11 +981,25 @@ int lvmcache_dev_is_unchosen_duplicate(struct device *dev)
  * The actual filters are evaluated too early, before a complete
  * picture of all PVs is available, to eliminate these duplicates.
  *
- * By removing the filtered duplicates from unused_duplicate_devs, we remove
+ * By removing some duplicates from unused_duplicate_devs here, we remove
  * the restrictions that are placed on using duplicate devs or VGs with
  * duplicate devs.
  *
- * There may other kinds of duplicates that we want to ignore.
+ * In cases where we know that two duplicates refer to the same underlying
+ * storage, and we know which dev path to use, it's best for us to just
+ * use that one preferred device path and ignore the others.  It is the cases
+ * where we are unsure whether dups refer to the same underlying storage where
+ * we need to keep the unused duplicate referenced in the
+ * unused_duplicate_devs list, and restrict what we allow done with it.
+ *
+ * In the case of md components, we usually filter these out in filter-md,
+ * but in the special case of md superblocks <= 1.0 where the superblock
+ * is at the end of the device, filter-md doesn't always eliminate them
+ * first, so we eliminate them here.
+ *
+ * There may other kinds of duplicates that we want to eliminate at
+ * this point (using the knowledge from the scan) that we couldn't
+ * eliminate in the filters prior to the scan.
  */
 
 static void _filter_duplicate_devs(struct cmd_context *cmd)
@@ -1004,6 +1018,34 @@ static void _filter_duplicate_devs(struct cmd_context *cmd)
 			dm_free(devl);
 		}
 	}
+
+	if (dm_list_empty(&_unused_duplicate_devs))
+		_found_duplicate_pvs = 0;
+}
+
+static void _warn_duplicate_devs(struct cmd_context *cmd)
+{
+	char uuid[64] __attribute__((aligned(8)));
+	struct lvmcache_info *info;
+	struct device_list *devl, *devl2;
+
+	dm_list_iterate_items_safe(devl, devl2, &_unused_duplicate_devs) {
+		if (!id_write_format((const struct id *)devl->dev->pvid, uuid, sizeof(uuid)))
+			stack;
+
+		log_warn("WARNING: Not using device %s for PV %s.", dev_name(devl->dev), uuid);
+	}
+
+	dm_list_iterate_items_safe(devl, devl2, &_unused_duplicate_devs) {
+		/* info for the preferred device that we're actually using */
+		info = lvmcache_info_from_pvid(devl->dev->pvid, NULL, 0);
+
+		if (!id_write_format((const struct id *)info->dev->pvid, uuid, sizeof(uuid)))
+			stack;
+
+		log_warn("WARNING: PV %s prefers device %s because %s.",
+			 uuid, dev_name(info->dev), info->dev->duplicate_prefer_reason);
+	}
 }
 
 /*
@@ -1028,7 +1070,6 @@ static void _choose_preferred_devs(struct cmd_context *cmd,
 				   struct dm_list *del_cache_devs,
 				   struct dm_list *add_cache_devs)
 {
-	char uuid[64] __attribute__((aligned(8)));
 	const char *reason;
 	struct dm_list altdevs;
 	struct dm_list new_unused;
@@ -1229,9 +1270,7 @@ next:
 		alt = devl;
 	}
 
-	if (!id_write_format((const struct id *)dev1->pvid, uuid, sizeof(uuid)))
-		stack;
-	log_warn("WARNING: PV %s prefers device %s because %s.", uuid, dev_name(dev1), reason);
+	dev1->duplicate_prefer_reason = reason;
 	}
 
 	if (dev1 != info->dev) {
@@ -1480,11 +1519,21 @@ int lvmcache_label_scan(struct cmd_context *cmd)
 		dm_list_splice(&_unused_duplicate_devs, &del_cache_devs);
 
 		/*
-		 * We might want to move the duplicate device warnings until
-		 * after this filtering so that we can skip warning about
-		 * duplicates that we are filtering out.
+		 * This may remove some entries from the unused_duplicates list for
+		 * devs that we know are the same underlying dev.
 		 */
 		_filter_duplicate_devs(cmd);
 
+		/*
+		 * Warn about remaining duplicates that may actually be separate copies of
+		 * the same device.
+		 */
+		_warn_duplicate_devs(cmd);
+
 		if (!_found_duplicate_pvs && lvmetad_used()) {
 			log_warn("WARNING: Disabling lvmetad cache which does not support duplicate PVs.");
 			lvmetad_set_disabled(cmd, LVMETAD_DISABLE_REASON_DUPLICATES);
 		}
 	}
 
 	/* Perform any format-specific scanning e.g. text files */
@@ -1509,6 +1558,53 @@ int lvmcache_label_scan(struct cmd_context *cmd)
 	return r;
 }
 
+/*
+ * When not using lvmetad, lvmcache_label_scan() detects duplicates in
+ * the basic label_scan(), then filters out some dups, and chooses
+ * preferred duplicates to use.
+ *
+ * When using lvmetad, pvscan --cache does not use lvmcache_label_scan(),
+ * only label_scan() which detects the duplicates.  This function is used
+ * after pvscan's label_scan() to filter out some dups, print any warnings,
+ * and disable lvmetad if any dups are left.
+ */
+
+void lvmcache_pvscan_duplicate_check(struct cmd_context *cmd)
+{
+	struct device_list *devl;
+
+	/* Check if label_scan() detected any dups. */
+	if (!_found_duplicate_pvs)
+		return;
+
+	/*
+	 * Once all the dups are identified, they are moved from the
+	 * "found" list to the "unused" list to sort out.
+	 */
+	dm_list_splice(&_unused_duplicate_devs, &_found_duplicate_devs);
+
+	/*
+	 * Remove items from the dups list that we know are the same
+	 * underlying dev, e.g. md components, that we want to just ignore.
+	 */
+	_filter_duplicate_devs(cmd);
+
+	/*
+	 * If no more dups after ignoring some, then we can use lvmetad.
+	 */
+	if (!_found_duplicate_pvs)
+		return;
+
+	/* Duplicates are found where we would have to pick one, so disable lvmetad. */
+
+	dm_list_iterate_items(devl, &_unused_duplicate_devs)
+		log_warn("WARNING: found device with duplicate %s", dev_name(devl->dev));
+
+	log_warn("WARNING: Disabling lvmetad cache which does not support duplicate PVs.");
+	lvmetad_set_disabled(cmd, LVMETAD_DISABLE_REASON_DUPLICATES);
+	lvmetad_make_unused(cmd);
+}
+
 int lvmcache_get_vgnameids(struct cmd_context *cmd, int include_internal,
 			   struct dm_list *vgnameids)
 {
@@ -2303,15 +2399,9 @@ struct lvmcache_info *lvmcache_add(struct labeller *labeller,
 	 */
 	if (!created) {
 		if (info->dev != dev) {
-			log_warn("WARNING: PV %s on %s was already found on %s.",
+			log_debug_cache("PV %s on %s was already found on %s.",
 					uuid, dev_name(dev), dev_name(info->dev));
 
-			if (!_found_duplicate_pvs && lvmetad_used()) {
-				log_warn("WARNING: Disabling lvmetad cache which does not support duplicate PVs.");
-				lvmetad_set_disabled(labeller->fmt->cmd, LVMETAD_DISABLE_REASON_DUPLICATES);
-			}
-			_found_duplicate_pvs = 1;
-
 			strncpy(dev->pvid, pvid_s, sizeof(dev->pvid));
 
 			/*
@@ -2328,6 +2418,7 @@ struct lvmcache_info *lvmcache_add(struct labeller *labeller,
 			devl->dev = dev;
 
 			dm_list_add(&_found_duplicate_devs, &devl->list);
+			_found_duplicate_pvs = 1;
 
 			return NULL;
 		}
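The lvmcache changes above move duplicate handling into a single flow: dups detected during `label_scan()` go onto an "unused" list, known-same-storage entries (such as md components) are filtered out, and only genuinely ambiguous duplicates remain to trigger warnings and lvmetad disabling. A hedged, list-based sketch of that flow (function and parameter names are ours, not the C API):

```python
def pvscan_duplicate_check(found_dups, unused_dups, same_underlying):
    # Move everything detected during scanning to the 'unused' list.
    unused_dups.extend(found_dups)
    found_dups.clear()
    # Drop entries known to alias the same underlying storage
    # (e.g. md components with a trailing superblock).
    unused_dups[:] = [d for d in unused_dups if not same_underlying(d)]
    # Remaining entries are ambiguous duplicates; in the real code
    # these produce warnings and disable lvmetad.
    return len(unused_dups) > 0
```

Usage: with `found = ["md-component", "real-dup"]` and a predicate matching only the md component, the check returns True and leaves `["real-dup"]` on the unused list.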
2 lib/cache/lvmcache.h vendored
@@ -188,6 +188,8 @@ uint64_t lvmcache_smallest_mda_size(struct lvmcache_info *info);
 
 int lvmcache_found_duplicate_pvs(void);
 
+void lvmcache_pvscan_duplicate_check(struct cmd_context *cmd);
+
 int lvmcache_get_unused_duplicate_devs(struct cmd_context *cmd, struct dm_list *head);
 
 int vg_has_duplicate_pvs(struct volume_group *vg);
2 lib/cache/lvmetad.c vendored
@@ -2350,6 +2350,8 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait)
 
 	label_scan(cmd);
 
+	lvmcache_pvscan_duplicate_check(cmd);
+
 	if (lvmcache_found_duplicate_pvs()) {
 		log_warn("WARNING: Scan found duplicate PVs.");
 		return 0;
@@ -15,6 +15,8 @@
 #define _GNU_SOURCE
 
 #include "lib/device/bcache.h"
+
+#include "base/data-struct/radix-tree.h"
 #include "lib/log/lvm-logging.h"
 #include "lib/log/log.h"
 
@@ -133,6 +135,7 @@ struct async_engine {
 	struct io_engine e;
 	io_context_t aio_context;
 	struct cb_set *cbs;
+	unsigned page_mask;
 };
 
 static struct async_engine *_to_async(struct io_engine *e)
@@ -163,7 +166,7 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int fd,
 	struct control_block *cb;
 	struct async_engine *e = _to_async(ioe);
 
-	if (((uintptr_t) data) & (PAGE_SIZE - 1)) {
+	if (((uintptr_t) data) & e->page_mask) {
 		log_warn("misaligned data buffer");
 		return false;
 	}
@@ -275,6 +278,8 @@ struct io_engine *create_async_io_engine(void)
 		return NULL;
 	}
 
+	e->page_mask = sysconf(_SC_PAGESIZE) - 1;
+
 	return &e->e;
 }
 
@@ -450,12 +455,7 @@ struct bcache {
 	struct dm_list clean;
 	struct dm_list io_pending;
 
-	/*
-	 * Hash table.
-	 */
-	unsigned nr_buckets;
-	unsigned hash_mask;
-	struct dm_list *buckets;
+	struct radix_tree *rtree;
 
 	/*
 	 * Statistics
@@ -470,85 +470,60 @@ struct bcache {
 
 //----------------------------------------------------------------
 
-/* 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
-#define GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL
+struct key_parts {
+	uint32_t fd;
+	uint64_t b;
+} __attribute__ ((packed));
 
-static unsigned _hash(struct bcache *cache, int fd, uint64_t i)
+union key {
+	struct key_parts parts;
+	uint8_t bytes[12];
+};
+
+static struct block *_block_lookup(struct bcache *cache, int fd, uint64_t i)
 {
-	uint64_t h = (i << 10) & fd;
-	h *= GOLDEN_RATIO_PRIME_64;
-	return h & cache->hash_mask;
-}
+	union key k;
+	union radix_value v;
 
-static struct block *_hash_lookup(struct bcache *cache, int fd, uint64_t i)
-{
-	struct block *b;
-	unsigned h = _hash(cache, fd, i);
+	k.parts.fd = fd;
+	k.parts.b = i;
 
-	dm_list_iterate_items_gen (b, cache->buckets + h, hash)
-		if (b->fd == fd && b->index == i)
-			return b;
+	if (radix_tree_lookup(cache->rtree, k.bytes, k.bytes + sizeof(k.bytes), &v))
+		return v.ptr;
 
 	return NULL;
 }
 
-static void _hash_insert(struct block *b)
+static bool _block_insert(struct block *b)
 {
-	unsigned h = _hash(b->cache, b->fd, b->index);
-	dm_list_add_h(b->cache->buckets + h, &b->hash);
+	union key k;
+	union radix_value v;
+
+	k.parts.fd = b->fd;
+	k.parts.b = b->index;
+	v.ptr = b;
+
+	return radix_tree_insert(b->cache->rtree, k.bytes, k.bytes + sizeof(k.bytes), v);
 }
 
-static inline void _hash_remove(struct block *b)
+static void _block_remove(struct block *b)
 {
-	dm_list_del(&b->hash);
-}
+	union key k;
 
-/*
- * Must return a power of 2.
- */
-static unsigned _calc_nr_buckets(unsigned nr_blocks)
-{
-	unsigned r = 8;
-	unsigned n = nr_blocks / 4;
+	k.parts.fd = b->fd;
+	k.parts.b = b->index;
 
-	if (n < 8)
-		n = 8;
-
-	while (r < n)
-		r <<= 1;
-
-	return r;
-}
-
-static bool _hash_table_init(struct bcache *cache, unsigned nr_entries)
-{
-	unsigned i;
-
-	cache->nr_buckets = _calc_nr_buckets(nr_entries);
-	cache->hash_mask = cache->nr_buckets - 1;
-	cache->buckets = dm_malloc(cache->nr_buckets * sizeof(*cache->buckets));
-	if (!cache->buckets)
-		return false;
-
-	for (i = 0; i < cache->nr_buckets; i++)
-		dm_list_init(cache->buckets + i);
-
-	return true;
-}
-
-static void _hash_table_exit(struct bcache *cache)
-{
-	dm_free(cache->buckets);
+	radix_tree_remove(b->cache->rtree, k.bytes, k.bytes + sizeof(k.bytes));
 }
 
 //----------------------------------------------------------------
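The new bcache index replaces the per-bucket hash table with a radix tree keyed on a packed 12-byte `(fd, block)` key. Because keys for one fd share a 4-byte prefix, all blocks for a descriptor can later be iterated or removed as a group. A hedged sketch of the key layout using `struct.pack` (byte order here is an assumption for illustration; the C union simply reuses the in-memory layout):

```python
import struct

def block_key(fd, index):
    # Packed 32-bit fd followed by 64-bit block index: 12 bytes,
    # mirroring the packed 'union key' in the diff.
    return struct.pack("<IQ", fd, index)

def fd_prefix(fd):
    # All keys for one fd share this 4-byte prefix, which is what
    # lets bcache_invalidate_fd() walk just that fd's blocks.
    return struct.pack("<I", fd)
```

Usage: `block_key(3, 7)` is 12 bytes and starts with `fd_prefix(3)`, while keys for other fds do not share that prefix.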
-static bool _init_free_list(struct bcache *cache, unsigned count)
+static bool _init_free_list(struct bcache *cache, unsigned count, unsigned pgsize)
 {
 	unsigned i;
 	size_t block_size = cache->block_sectors << SECTOR_SHIFT;
 	unsigned char *data =
-		(unsigned char *) _alloc_aligned(count * block_size, PAGE_SIZE);
+		(unsigned char *) _alloc_aligned(count * block_size, pgsize);
 
 	/* Allocate the data for each block.  We page align the data. */
 	if (!data)
@@ -584,6 +559,11 @@ static struct block *_alloc_block(struct bcache *cache)
 	return dm_list_struct_base(_list_pop(&cache->free), struct block, list);
 }
 
+static void _free_block(struct block *b)
+{
+	dm_list_add(&b->cache->free, &b->list);
+}
+
 /*----------------------------------------------------------------
  * Clean/dirty list management.
  * Always use these methods to ensure nr_dirty_ is correct.
@@ -739,7 +719,7 @@ static struct block *_find_unused_clean_block(struct bcache *cache)
 	dm_list_iterate_items (b, &cache->clean) {
 		if (!b->ref_count) {
 			_unlink_block(b);
-			_hash_remove(b);
+			_block_remove(b);
 			return b;
 		}
 	}
@@ -776,22 +756,12 @@ static struct block *_new_block(struct bcache *cache, int fd, block_address i, b
 	b->ref_count = 0;
 	b->error = 0;
 
-	_hash_insert(b);
+	if (!_block_insert(b)) {
+		log_error("bcache unable to insert block in radix tree (OOM?)");
+		_free_block(b);
+		return NULL;
+	}
 
-#if 0
-	if (!b) {
-		log_error("bcache no new blocks for fd %d index %u "
-			  "clean %u free %u dirty %u pending %u nr_data_blocks %u nr_cache_blocks %u",
-			  fd, (uint32_t) i,
-			  dm_list_size(&cache->clean),
-			  dm_list_size(&cache->free),
-			  dm_list_size(&cache->dirty),
-			  dm_list_size(&cache->io_pending),
-			  (uint32_t)cache->nr_data_blocks,
-			  (uint32_t)cache->nr_cache_blocks);
-	}
-#endif
-
 	return b;
 }
@@ -830,7 +800,7 @@ static struct block *_lookup_or_read_block(struct bcache *cache,
 					   int fd, block_address i,
 					   unsigned flags)
 {
-	struct block *b = _hash_lookup(cache, fd, i);
+	struct block *b = _block_lookup(cache, fd, i);
 
 	if (b) {
 		// FIXME: this is insufficient.  We need to also catch a read
@@ -899,6 +869,7 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
 {
 	struct bcache *cache;
 	unsigned max_io = engine->max_io(engine);
+	long pgsize = sysconf(_SC_PAGESIZE);
 
 	if (!nr_cache_blocks) {
 		log_warn("bcache must have at least one cache block");
@@ -910,7 +881,7 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
 		return NULL;
 	}
 
-	if (block_sectors & ((PAGE_SIZE >> SECTOR_SHIFT) - 1)) {
+	if (block_sectors & ((pgsize >> SECTOR_SHIFT) - 1)) {
 		log_warn("bcache block size must be a multiple of page size");
 		return NULL;
 	}
@@ -933,7 +904,8 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
 	dm_list_init(&cache->clean);
 	dm_list_init(&cache->io_pending);
 
-	if (!_hash_table_init(cache, nr_cache_blocks)) {
+	cache->rtree = radix_tree_create(NULL, NULL);
+	if (!cache->rtree) {
 		cache->engine->destroy(cache->engine);
 		dm_free(cache);
 		return NULL;
@@ -946,9 +918,9 @@ struct bcache *bcache_create(sector_t block_sectors, unsigned nr_cache_blocks,
 	cache->write_misses = 0;
 	cache->prefetches = 0;
 
-	if (!_init_free_list(cache, nr_cache_blocks)) {
+	if (!_init_free_list(cache, nr_cache_blocks, pgsize)) {
 		cache->engine->destroy(cache->engine);
-		_hash_table_exit(cache);
+		radix_tree_destroy(cache->rtree);
 		dm_free(cache);
 		return NULL;
 	}
@@ -964,7 +936,7 @@ void bcache_destroy(struct bcache *cache)
 	bcache_flush(cache);
 	_wait_all(cache);
 	_exit_free_list(cache);
-	_hash_table_exit(cache);
+	radix_tree_destroy(cache->rtree);
 	cache->engine->destroy(cache->engine);
 	dm_free(cache);
 }
@@ -986,7 +958,7 @@ unsigned bcache_max_prefetches(struct bcache *cache)
 
 void bcache_prefetch(struct bcache *cache, int fd, block_address i)
 {
-	struct block *b = _hash_lookup(cache, fd, i);
+	struct block *b = _block_lookup(cache, fd, i);
 
 	if (!b) {
 		if (cache->nr_io_pending < cache->max_io) {
@@ -999,11 +971,13 @@ void bcache_prefetch(struct bcache *cache, int fd, block_address i)
 	}
 }
 
+//----------------------------------------------------------------
+
 static void _recycle_block(struct bcache *cache, struct block *b)
 {
 	_unlink_block(b);
-	_hash_remove(b);
-	dm_list_add(&cache->free, &b->list);
+	_block_remove(b);
+	_free_block(b);
 }
 
 bool bcache_get(struct bcache *cache, int fd, block_address i,
@@ -1037,6 +1011,8 @@ bool bcache_get(struct bcache *cache, int fd, block_address i,
 	return false;
 }
 
+//----------------------------------------------------------------
+
 static void _put_ref(struct block *b)
 {
 	if (!b->ref_count) {
@@ -1057,6 +1033,8 @@ void bcache_put(struct block *b)
 	_preemptive_writeback(b->cache);
 }
 
+//----------------------------------------------------------------
+
 bool bcache_flush(struct bcache *cache)
 {
 	// Only dirty data is on the errored list, since bad read blocks get
@@ -1079,6 +1057,7 @@ bool bcache_flush(struct bcache *cache)
 	return dm_list_empty(&cache->errored);
 }
 
+//----------------------------------------------------------------
 /*
  * You can safely call this with a NULL block.
  */
@@ -1111,29 +1090,72 @@ static bool _invalidate_block(struct bcache *cache, struct block *b)
 
 bool bcache_invalidate(struct bcache *cache, int fd, block_address i)
 {
-	return _invalidate_block(cache, _hash_lookup(cache, fd, i));
-}
-
-// FIXME: switch to a trie, or maybe 1 hash table per fd?  To save iterating
-// through the whole cache.
-bool bcache_invalidate_fd(struct bcache *cache, int fd)
-{
-	struct block *b, *tmp;
-	bool r = true;
-
-	// Start writing back any dirty blocks on this fd.
-	dm_list_iterate_items_safe (b, tmp, &cache->dirty)
-		if (b->fd == fd)
-			_issue_write(b);
-
-	_wait_all(cache);
-
-	// Everything should be in the clean list now.
-	dm_list_iterate_items_safe (b, tmp, &cache->clean)
-		if (b->fd == fd)
-			r = _invalidate_block(cache, b) && r;
-
-	return r;
+	return _invalidate_block(cache, _block_lookup(cache, fd, i));
 }
 
+//----------------------------------------------------------------
+
+struct invalidate_iterator {
+	bool success;
+	struct radix_tree_iterator it;
+};
+
+static bool _writeback_v(struct radix_tree_iterator *it,
+			 uint8_t *kb, uint8_t *ke, union radix_value v)
+{
+	struct block *b = v.ptr;
+
+	if (_test_flags(b, BF_DIRTY))
+		_issue_write(b);
+
+	return true;
+}
+
+static bool _invalidate_v(struct radix_tree_iterator *it,
+			  uint8_t *kb, uint8_t *ke, union radix_value v)
+{
+	struct block *b = v.ptr;
+	struct invalidate_iterator *iit = container_of(it, struct invalidate_iterator, it);
+
+	if (b->error || _test_flags(b, BF_DIRTY)) {
+		log_warn("bcache_invalidate: block (%d, %llu) still dirty",
+			 b->fd, (unsigned long long) b->index);
+		iit->success = false;
+		return true;
+	}
+
+	if (b->ref_count) {
+		log_warn("bcache_invalidate: block (%d, %llu) still held",
+			 b->fd, (unsigned long long) b->index);
+		iit->success = false;
+		return true;
+	}
+
+	_unlink_block(b);
+	_free_block(b);
+
+	// We can't remove the block from the radix tree yet because
+	// we're in the middle of an iteration.
+	return true;
+}
+
+bool bcache_invalidate_fd(struct bcache *cache, int fd)
+{
+	union key k;
+	struct invalidate_iterator it;
+
+	k.parts.fd = fd;
+
+	it.it.visit = _writeback_v;
+	radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd), &it.it);
+
+	_wait_all(cache);
+
+	it.success = true;
+	it.it.visit = _invalidate_v;
+	radix_tree_iterate(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd), &it.it);
+	radix_tree_remove_prefix(cache->rtree, k.bytes, k.bytes + sizeof(k.parts.fd));
+	return it.success;
+}
+
 //----------------------------------------------------------------
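The rewritten `bcache_invalidate_fd()` above is a two-pass walk over one fd's key prefix: first issue writeback for dirty blocks, wait for IO, then verify nothing is still dirty or held before dropping the whole prefix in one operation. A hedged, self-contained sketch of those semantics using a plain list in place of the radix tree:

```python
class Block:
    def __init__(self, fd, dirty=False, ref_count=0):
        self.fd, self.dirty, self.ref_count = fd, dirty, ref_count

    def issue_write(self):
        self.dirty = False  # pretend the IO completes during the wait

def invalidate_fd(blocks, fd):
    # Pass 1: start writeback for every dirty block on this fd.
    for b in blocks:
        if b.fd == fd and b.dirty:
            b.issue_write()
    # (_wait_all() would block here until the IO finished.)
    # Pass 2: fail if anything on this fd is still dirty or referenced,
    # then remove the whole fd "prefix" in one operation.
    ok = all(not (b.dirty or b.ref_count) for b in blocks if b.fd == fd)
    blocks[:] = [b for b in blocks if b.fd != fd]
    return ok
```

The deferred removal in the C code (`radix_tree_remove_prefix` after the iteration) exists because entries cannot be deleted mid-iteration; the sketch sidesteps that by rebuilding the list at the end.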
@@ -333,24 +333,16 @@ static int _add_alias(struct device *dev, const char *path)
 
 	/* Is name already there? */
 	dm_list_iterate_items(strl, &dev->aliases) {
-		if (!strcmp(strl->str, path)) {
-			log_debug_devs("%s: Already in device cache", path);
+		if (!strcmp(strl->str, path))
 			return 1;
-		}
 	}
 
 	sl->str = path;
 
 	if (!dm_list_empty(&dev->aliases)) {
 		oldpath = dm_list_item(dev->aliases.n, struct dm_str_list)->str;
 		prefer_old = _compare_paths(path, oldpath);
-		log_debug_devs("%s: Aliased to %s in device cache%s (%d:%d)",
-			       path, oldpath, prefer_old ? "" : " (preferred name)",
-			       (int) MAJOR(dev->dev), (int) MINOR(dev->dev));
-
-	} else
-		log_debug_devs("%s: Added to device cache (%d:%d)", path,
-			       (int) MAJOR(dev->dev), (int) MINOR(dev->dev));
 	}
 
 	if (prefer_old)
 		dm_list_add(&dev->aliases, &sl->list);
@@ -666,6 +658,29 @@ struct dm_list *dev_cache_get_dev_list_for_lvid(const char *lvid)
 	return dm_hash_lookup(_cache.lvid_index, lvid);
 }
 
+/*
+ * Scanning code calls this when it fails to open a device using
+ * this path.  The path is dropped from dev-cache.  In the next
+ * dev_cache_scan it may be added again, but it could be for a
+ * different device.
+ */
+
+void dev_cache_failed_path(struct device *dev, const char *path)
+{
+	struct device *dev_by_path;
+	struct dm_str_list *strl;
+
+	if ((dev_by_path = (struct device *) dm_hash_lookup(_cache.names, path)))
+		dm_hash_remove(_cache.names, path);
+
+	dm_list_iterate_items(strl, &dev->aliases) {
+		if (!strcmp(strl->str, path)) {
+			dm_list_del(&strl->list);
+			break;
+		}
+	}
+}
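The new `dev_cache_failed_path()` above drops a stale path from both indexes: the global name-to-device hash and the device's own alias list. A minimal sketch of those two removals, modelling the hash as a dict and the alias list as a Python list (names are ours):

```python
def failed_path(names, dev_aliases, path):
    # Drop the name -> device mapping, if present...
    names.pop(path, None)
    # ...and the matching alias on the device itself.
    if path in dev_aliases:
        dev_aliases.remove(path)
```

A later rescan may re-add the same path, possibly now pointing at a different device, which is why both indexes must be cleaned rather than just one.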
 
 /*
  * Either creates a new dev, or adds an alias to
  * an existing dev.
  */
@@ -673,6 +688,8 @@ struct dm_list *dev_cache_get_dev_list_for_lvid(const char *lvid)
 static int _insert_dev(const char *path, dev_t d)
 {
 	struct device *dev;
+	struct device *dev_by_devt;
+	struct device *dev_by_path;
 	static dev_t loopfile_count = 0;
 	int loopfile = 0;
 	char *path_copy;
@@ -685,8 +702,26 @@ static int _insert_dev(const char *path, dev_t d)
 		loopfile = 1;
 	}
 
-	/* is this device already registered ? */
-	if (!(dev = (struct device *) btree_lookup(_cache.devices, (uint32_t) d))) {
+	dev_by_devt = (struct device *) btree_lookup(_cache.devices, (uint32_t) d);
+	dev_by_path = (struct device *) dm_hash_lookup(_cache.names, path);
+	dev = dev_by_devt;
+
+	/*
+	 * Existing device, existing path points to the same device.
+	 */
+	if (dev_by_devt && dev_by_path && (dev_by_devt == dev_by_path)) {
+		log_debug_devs("Found dev %d:%d %s - exists. %.8s",
+			       (int)MAJOR(d), (int)MINOR(d), path, dev->pvid);
+		return 1;
+	}
+
+	/*
+	 * No device or path found, add devt to cache.devices, add name to cache.names.
+	 */
+	if (!dev_by_devt && !dev_by_path) {
+		log_debug_devs("Found dev %d:%d %s - new.",
+			       (int)MAJOR(d), (int)MINOR(d), path);
+
 		if (!(dev = (struct device *) btree_lookup(_cache.sysfs_only_devices, (uint32_t) d))) {
 			/* create new device */
 			if (loopfile) {
@@ -701,13 +736,6 @@ static int _insert_dev(const char *path, dev_t d)
 			_free(dev);
 			return 0;
 		}
 	}
 
-	if (dm_hash_lookup(_cache.names, path) == dev) {
-		/* Hash already has matching entry present */
-		log_debug("%s: Path already cached.", path);
-		return 1;
-	}
-
 	if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
 		log_error("Failed to duplicate path string.");
@@ -725,6 +753,109 @@ static int _insert_dev(const char *path, dev_t d)
 		}
 
 		return 1;
 	}
 
+	/*
+	 * Existing device, path is new, add path as a new alias for the device.
+	 */
+	if (dev_by_devt && !dev_by_path) {
+		log_debug_devs("Found dev %d:%d %s - new alias.",
+			       (int)MAJOR(d), (int)MINOR(d), path);
+
+		if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
+			log_error("Failed to duplicate path string.");
+			return 0;
+		}
+
+		if (!loopfile && !_add_alias(dev, path_copy)) {
+			log_error("Couldn't add alias to dev cache.");
+			return 0;
+		}
+
+		if (!dm_hash_insert(_cache.names, path_copy, dev)) {
+			log_error("Couldn't add name to hash in dev cache.");
+			return 0;
+		}
+
+		return 1;
+	}
+
+	/*
+	 * No existing device, but path exists and previously pointed
+	 * to a different device.
+	 */
+	if (!dev_by_devt && dev_by_path) {
+		log_debug_devs("Found dev %d:%d %s - new device, path was previously %d:%d.",
+			       (int)MAJOR(d), (int)MINOR(d), path,
+			       (int)MAJOR(dev_by_path->dev), (int)MINOR(dev_by_path->dev));
+
+		if (!(dev = (struct device *) btree_lookup(_cache.sysfs_only_devices, (uint32_t) d))) {
+			/* create new device */
+			if (loopfile) {
+				if (!(dev = dev_create_file(path, NULL, NULL, 0)))
+					return_0;
+			} else if (!(dev = _dev_create(d)))
+				return_0;
+		}
+
+		if (!(btree_insert(_cache.devices, (uint32_t) d, dev))) {
+			log_error("Couldn't insert device into binary tree.");
+			_free(dev);
+			return 0;
+		}
+
+		if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
+			log_error("Failed to duplicate path string.");
+			return 0;
+		}
+
+		if (!loopfile && !_add_alias(dev, path_copy)) {
+			log_error("Couldn't add alias to dev cache.");
+			return 0;
+		}
+
+		dm_hash_remove(_cache.names, path);
+
+		if (!dm_hash_insert(_cache.names, path_copy, dev)) {
+			log_error("Couldn't add name to hash in dev cache.");
+			return 0;
+		}
+
+		return 1;
+	}
+
+	/*
+	 * Existing device, and path exists and previously pointed to
+	 * a different device.
+	 */
+	if (dev_by_devt && dev_by_path) {
+		log_debug_devs("Found dev %d:%d %s - existing device, path was previously %d:%d.",
+			       (int)MAJOR(d), (int)MINOR(d), path,
+			       (int)MAJOR(dev_by_path->dev), (int)MINOR(dev_by_path->dev));
+
+		if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
+			log_error("Failed to duplicate path string.");
+			return 0;
+		}
+
+		if (!loopfile && !_add_alias(dev, path_copy)) {
+			log_error("Couldn't add alias to dev cache.");
+			return 0;
+		}
+
+		dm_hash_remove(_cache.names, path);
+
+		if (!dm_hash_insert(_cache.names, path_copy, dev)) {
+			log_error("Couldn't add name to hash in dev cache.");
+			return 0;
+		}
+
+		return 1;
+	}
+
+	log_error("Found dev %d:%d %s - failed to use.", (int)MAJOR(d), (int)MINOR(d), path);
+	return 0;
+}
|
||||
static char *_join(const char *dir, const char *name)
|
||||
@@ -1064,10 +1195,8 @@ static int _insert(const char *path, const struct stat *info,
 		if (rec && !_insert_dir(path))
 			return_0;
 	} else {		/* add a device */
-		if (!S_ISBLK(info->st_mode)) {
-			log_debug_devs("%s: Not a block device", path);
+		if (!S_ISBLK(info->st_mode))
 			return 1;
-		}
 
 		if (!_insert_dev(path, info->st_rdev))
 			return_0;
@@ -1080,6 +1209,8 @@ void dev_cache_scan(void)
 {
 	struct dir_list *dl;
 
+	log_debug_devs("Creating list of system devices.");
+
 	_cache.has_scanned = 1;
 
 	_insert_dirs(&_cache.dirs);
@@ -70,4 +70,6 @@ struct device *dev_iter_get(struct dev_iter *iter);
 
 void dev_reset_error_count(struct cmd_context *cmd);
 
+void dev_cache_failed_path(struct device *dev, const char *path);
+
 #endif
@@ -75,6 +75,7 @@ struct device {
 	uint64_t size;
+	uint64_t end;
 	struct dev_ext ext;
 	const char *duplicate_prefer_reason;
 
 	const char *vgid; /* if device is an LV */
 	const char *lvid; /* if device is an LV */
@@ -20,10 +20,74 @@
 
 #define MSG_SKIPPING "%s: Skipping md component device"
 
-static int _ignore_md(struct device *dev, int full)
+/*
+ * The purpose of these functions is to ignore md component devices,
+ * e.g. if /dev/md0 is a raid1 composed of /dev/loop0 and /dev/loop1,
+ * lvm wants to deal with md0 and ignore loop0 and loop1.  md0 should
+ * pass the filter, and loop0,loop1 should not pass the filter so lvm
+ * will ignore them.
+ *
+ * (This is assuming lvm.conf md_component_detection=1.)
+ *
+ * If lvm does *not* ignore the components, then lvm will read lvm
+ * labels from the md dev and from the component devs, and will see
+ * them all as duplicates of each other.  LVM duplicate resolution
+ * will then kick in and keep the md dev around to use and ignore
+ * the components.
+ *
+ * It is better to exclude the components as early as possible during
+ * lvm processing, ideally before lvm even looks for labels on the
+ * components, so that duplicate resolution can be avoided.  There are
+ * a number of ways that md components can be excluded earlier than
+ * the duplicate resolution phase:
+ *
+ * - When external_device_info_source="udev", lvm discovers a device is
+ *   an md component by asking udev during the initial filtering phase.
+ *   However, lvm's default is to not use udev for this.  The
+ *   alternative is "native" detection in which lvm tries to detect
+ *   md components itself.
+ *
+ * - When using native detection, lvm's md filter looks for the md
+ *   superblock at the start of devices.  It will see the md superblock
+ *   on the components, exclude them in the md filter, and avoid
+ *   handling them later in duplicate resolution.
+ *
+ * - When using native detection, lvm's md filter will not detect
+ *   components when the md device has an older superblock version that
+ *   places the superblock at the end of the device.  This case will
+ *   fall back to duplicate resolution to exclude components.
+ *
+ * A variation of the description above occurs for lvm commands that
+ * intend to create new PVs on devices (pvcreate, vgcreate, vgextend).
+ * For these commands, the native md filter also reads the end of all
+ * devices to check for the odd md superblocks.
+ *
+ * (The reason that external_device_info_source is not set to udev by
+ * default is that there have been issues with udev not being promptly
+ * or reliably updated about md state changes, causing the udev info
+ * that lvm uses to be occasionally wrong.)
+ */
+
+/*
+ * Returns 0 if:
+ * the device is an md component and it should be ignored.
+ *
+ * Returns 1 if:
+ * the device is not md component and should not be ignored.
+ *
+ * The actual md device will pass this filter and should be used,
+ * it is the md component devices that we are trying to exclude
+ * that will not pass.
+ */
+
+static int _passes_md_filter(struct device *dev, int full)
 {
 	int ret;
 
 	/*
 	 * When md_component_detection=0, don't even try to skip md
 	 * components.
 	 */
 	if (!md_filtering())
 		return 1;
@@ -36,6 +100,9 @@ static int _ignore_md(struct device *dev, int full)
 		return 1;
 	}
 
+	if (ret == 0)
+		return 1;
+
 	if (ret == 1) {
 		if (dev->ext.src == DEV_EXT_NONE)
 			log_debug_devs(MSG_SKIPPING, dev_name(dev));
@@ -54,16 +121,16 @@ static int _ignore_md(struct device *dev, int full)
 	return 1;
 }
 
-static int _ignore_md_lite(struct dev_filter *f __attribute__((unused)),
+static int _passes_md_filter_lite(struct dev_filter *f __attribute__((unused)),
 			   struct device *dev)
 {
-	return _ignore_md(dev, 0);
+	return _passes_md_filter(dev, 0);
 }
 
-static int _ignore_md_full(struct dev_filter *f __attribute__((unused)),
+static int _passes_md_filter_full(struct dev_filter *f __attribute__((unused)),
 			   struct device *dev)
 {
-	return _ignore_md(dev, 1);
+	return _passes_md_filter(dev, 1);
 }
 
 static void _destroy(struct dev_filter *f)
@@ -91,9 +158,9 @@ struct dev_filter *md_filter_create(struct cmd_context *cmd, struct dev_types *d
 	 */
 
 	if (cmd->use_full_md_check)
-		f->passes_filter = _ignore_md_full;
+		f->passes_filter = _passes_md_filter_full;
 	else
-		f->passes_filter = _ignore_md_lite;
+		f->passes_filter = _passes_md_filter_lite;
 
 	f->destroy = _destroy;
 	f->use_count = 0;
@@ -286,10 +286,18 @@ out:
 static int _lookup_p(struct dev_filter *f, struct device *dev)
 {
 	struct pfilter *pf = (struct pfilter *) f->private;
-	void *l = dm_hash_lookup(pf->devices, dev_name(dev));
+	void *l;
 	struct dm_str_list *sl;
 	int pass = 1;
 
+	if (dm_list_empty(&dev->aliases)) {
+		log_debug_devs("%d:%d: filter cache skipping (no name)",
+			       (int)MAJOR(dev->dev), (int)MINOR(dev->dev));
+		return 0;
+	}
+
+	l = dm_hash_lookup(pf->devices, dev_name(dev));
+
 	/* Cached bad, skip dev */
 	if (l == PF_BAD_DEVICE) {
 		log_debug_devs("%s: filter cache skipping (cached bad)", dev_name(dev));
@@ -135,8 +135,8 @@ static struct dm_list *_scan_archive(struct dm_pool *mem,
 
 	dm_list_init(results);
 
-	/* Sort fails beyond 5-digit indexes */
-	if ((count = scandir(dir, &dirent, NULL, alphasort)) < 0) {
+	/* Use versionsort to handle numbers beyond 5 digits */
+	if ((count = scandir(dir, &dirent, NULL, versionsort)) < 0) {
 		log_error("Couldn't scan the archive directory (%s).", dir);
 		return 0;
 	}
@@ -114,7 +114,7 @@ int label_remove(struct device *dev)
 
 	log_very_verbose("Scanning for labels to wipe from %s", dev_name(dev));
 
-	if (!label_scan_open(dev)) {
+	if (!label_scan_open_excl(dev)) {
 		log_error("Failed to open device %s", dev_name(dev));
 		return 0;
 	}
@@ -427,7 +427,11 @@ static int _process_block(struct cmd_context *cmd, struct dev_filter *f,
 
 static int _scan_dev_open(struct device *dev)
 {
+	struct dm_list *name_list;
+	struct dm_str_list *name_sl;
 	const char *name;
+	struct stat sbuf;
+	int retried = 0;
 	int flags = 0;
 	int fd;
 
@@ -435,20 +439,30 @@ static int _scan_dev_open(struct device *dev)
 		return 0;
 
 	if (dev->flags & DEV_IN_BCACHE) {
-		log_error("scan_dev_open %s DEV_IN_BCACHE already set", dev_name(dev));
+		/* Shouldn't happen */
+		log_error("Device open %s has DEV_IN_BCACHE already set", dev_name(dev));
 		dev->flags &= ~DEV_IN_BCACHE;
 	}
 
 	if (dev->bcache_fd > 0) {
-		log_error("scan_dev_open %s already open with fd %d",
+		/* Shouldn't happen */
+		log_error("Device open %s already open with fd %d",
 			  dev_name(dev), dev->bcache_fd);
 		return 0;
 	}
 
-	if (!(name = dev_name_confirmed(dev, 1))) {
-		log_error("scan_dev_open %s no name", dev_name(dev));
+	/*
+	 * All the names for this device (major:minor) are kept on
+	 * dev->aliases, the first one is the primary/preferred name.
+	 */
+	if (!(name_list = dm_list_first(&dev->aliases))) {
+		/* Shouldn't happen */
+		log_error("Device open %s %d:%d has no path names.",
+			  dev_name(dev), (int)MAJOR(dev->dev), (int)MINOR(dev->dev));
 		return 0;
 	}
+	name_sl = dm_list_item(name_list, struct dm_str_list);
+	name = name_sl->str;
 
 	flags |= O_RDWR;
 	flags |= O_DIRECT;
@@ -457,6 +471,8 @@ static int _scan_dev_open(struct device *dev)
 	if (dev->flags & DEV_BCACHE_EXCL)
 		flags |= O_EXCL;
 
+retry_open:
+
 	fd = open(name, flags, 0777);
 
 	if (fd < 0) {
@@ -464,7 +480,39 @@ static int _scan_dev_open(struct device *dev)
 		log_error("Can't open %s exclusively.  Mounted filesystem?",
 			  dev_name(dev));
 	} else {
-		log_error("scan_dev_open %s failed errno %d", dev_name(dev), errno);
+		int major, minor;
+
+		/*
+		 * Shouldn't happen, if it does, print stat info to help figure
+		 * out what's wrong.
+		 */
+
+		major = (int)MAJOR(dev->dev);
+		minor = (int)MINOR(dev->dev);
+
+		log_error("Device open %s %d:%d failed errno %d", name, major, minor, errno);
+
+		if (stat(name, &sbuf)) {
+			log_debug_devs("Device open %s %d:%d stat failed errno %d",
+				       name, major, minor, errno);
+		} else if (sbuf.st_rdev != dev->dev) {
+			log_debug_devs("Device open %s %d:%d stat %d:%d does not match.",
+				       name, major, minor,
+				       (int)MAJOR(sbuf.st_rdev), (int)MINOR(sbuf.st_rdev));
+		}
+
+		if (!retried) {
+			/*
+			 * FIXME: remove this, the theory for this retry is that
+			 * there may be a udev race that we can sometimes mask by
+			 * retrying.  This is here until we can figure out if it's
+			 * needed and if so fix the real problem.
+			 */
+			usleep(5000);
+			log_debug_devs("Device open %s retry", dev_name(dev));
+			retried = 1;
+			goto retry_open;
+		}
 	}
 	return 0;
 }
@@ -493,6 +541,37 @@ static int _scan_dev_close(struct device *dev)
 	return 1;
 }
 
+static void _drop_bad_aliases(struct device *dev)
+{
+	struct dm_str_list *strl, *strl2;
+	const char *name;
+	struct stat sbuf;
+	int major = (int)MAJOR(dev->dev);
+	int minor = (int)MINOR(dev->dev);
+	int bad;
+
+	dm_list_iterate_items_safe(strl, strl2, &dev->aliases) {
+		name = strl->str;
+		bad = 0;
+
+		if (stat(name, &sbuf)) {
+			bad = 1;
+			log_debug_devs("Device path check %d:%d %s stat failed errno %d",
+				       major, minor, name, errno);
+		} else if (sbuf.st_rdev != dev->dev) {
+			bad = 1;
+			log_debug_devs("Device path check %d:%d %s stat %d:%d does not match.",
+				       major, minor, name,
+				       (int)MAJOR(sbuf.st_rdev), (int)MINOR(sbuf.st_rdev));
+		}
+
+		if (bad) {
+			log_debug_devs("Device path check %d:%d dropping path %s.", major, minor, name);
+			dev_cache_failed_path(dev, name);
+		}
+	}
+}
+
 /*
  * Read or reread label/metadata from selected devs.
  *
@@ -509,9 +588,10 @@ static int _scan_list(struct cmd_context *cmd, struct dev_filter *f,
 {
 	struct dm_list wait_devs;
 	struct dm_list done_devs;
+	struct dm_list reopen_devs;
 	struct device_list *devl, *devl2;
 	struct block *bb;
-	int scan_open_errors = 0;
+	int retried_open = 0;
 	int scan_read_errors = 0;
 	int scan_process_errors = 0;
 	int scan_failed_count = 0;
@@ -524,6 +604,7 @@ static int _scan_list(struct cmd_context *cmd, struct dev_filter *f,
 
 	dm_list_init(&wait_devs);
 	dm_list_init(&done_devs);
+	dm_list_init(&reopen_devs);
 
 	log_debug_devs("Scanning %d devices for VG info", dm_list_size(devs));
 
@@ -547,9 +628,7 @@ static int _scan_list(struct cmd_context *cmd, struct dev_filter *f,
 		if (!_scan_dev_open(devl->dev)) {
 			log_debug_devs("Scan failed to open %s.", dev_name(devl->dev));
 			dm_list_del(&devl->list);
-			dm_list_add(&done_devs, &devl->list);
-			scan_open_errors++;
-			scan_failed_count++;
+			dm_list_add(&reopen_devs, &devl->list);
 			continue;
 		}
 	}
@@ -578,7 +657,11 @@ static int _scan_list(struct cmd_context *cmd, struct dev_filter *f,
 			scan_failed_count++;
 			lvmcache_del_dev(devl->dev);
 		} else {
-			log_debug_devs("Processing data from device %s fd %d block %p", dev_name(devl->dev), devl->dev->bcache_fd, bb);
+			log_debug_devs("Processing data from device %s %d:%d fd %d block %p",
+				       dev_name(devl->dev),
+				       (int)MAJOR(devl->dev->dev),
+				       (int)MINOR(devl->dev->dev),
+				       devl->dev->bcache_fd, bb);
 
 			ret = _process_block(cmd, f, devl->dev, bb, 0, 0, &is_lvm_device);
 
@@ -612,8 +695,53 @@ static int _scan_list(struct cmd_context *cmd, struct dev_filter *f,
 	if (!dm_list_empty(devs))
 		goto scan_more;
 
-	log_debug_devs("Scanned devices: open errors %d read errors %d process errors %d",
-		       scan_open_errors, scan_read_errors, scan_process_errors);
+	/*
+	 * We're done scanning all the devs.  If we failed to open any of them
+	 * the first time through, refresh device paths and retry.  We failed
+	 * to open the devs on the reopen_devs list.
+	 *
+	 * FIXME: it's not clear if or why this helps.
+	 */
+	if (!dm_list_empty(&reopen_devs)) {
+		if (retried_open) {
+			/* Don't try again. */
+			scan_failed_count += dm_list_size(&reopen_devs);
+			dm_list_splice(&done_devs, &reopen_devs);
+			goto out;
+		}
+		retried_open = 1;
+
+		dm_list_iterate_items_safe(devl, devl2, &reopen_devs) {
+			_drop_bad_aliases(devl->dev);
+
+			if (dm_list_empty(&devl->dev->aliases)) {
+				log_warn("WARNING: Scan ignoring device %d:%d with no paths.",
+					 (int)MAJOR(devl->dev->dev),
+					 (int)MINOR(devl->dev->dev));
+
+				dm_list_del(&devl->list);
+				lvmcache_del_dev(devl->dev);
+				scan_failed_count++;
+			}
+		}
+
+		/*
+		 * This will search the system's /dev for new path names and
+		 * could help us reopen the device if it finds a new preferred
+		 * path name for this dev's major:minor.  It does that by
+		 * inserting a new preferred path name on dev->aliases.  open
+		 * uses the first name from that list.
+		 */
+		log_debug_devs("Scanning refreshing device paths.");
+		dev_cache_scan();
+
+		/* Put devs that failed to open back on the original list to retry. */
+		dm_list_splice(devs, &reopen_devs);
+		goto scan_more;
+	}
+out:
+	log_debug_devs("Scanned devices: read errors %d process errors %d failed %d",
+		       scan_read_errors, scan_process_errors, scan_failed_count);
 
 	if (failed)
 		*failed = scan_failed_count;
@@ -977,6 +1105,12 @@ int label_scan_open(struct device *dev)
 	return 1;
 }
 
+int label_scan_open_excl(struct device *dev)
+{
+	dev->flags |= DEV_BCACHE_EXCL;
+	return label_scan_open(dev);
+}
+
 bool dev_read_bytes(struct device *dev, uint64_t start, size_t len, void *data)
 {
 	if (!scan_bcache) {
@@ -114,6 +114,7 @@ int label_read_sector(struct device *dev, uint64_t scan_sector);
 void label_scan_confirm(struct device *dev);
 int label_scan_setup_bcache(void);
 int label_scan_open(struct device *dev);
+int label_scan_open_excl(struct device *dev);
 
 /*
  * Wrappers around bcache equivalents.
@@ -148,11 +148,11 @@ int init_locking(int type, struct cmd_context *cmd, int suppress_messages)
 	}
 #endif
 
-#ifdef CLUSTER_LOCKING_INTERNAL
-		log_very_verbose("Falling back to internal clustered locking.");
-		/* Fall through */
-
 	case 3:
+#ifdef CLUSTER_LOCKING_INTERNAL
 		log_very_verbose("Cluster locking selected.");
 		if (!init_cluster_locking(&_locking, cmd, suppress_messages)) {
 			log_error_suppress(suppress_messages,
@@ -160,6 +160,20 @@ int init_locking(int type, struct cmd_context *cmd, int suppress_messages)
 			break;
 		}
 		return 1;
+#else
+		log_warn("WARNING: Using locking_type=1, ignoring locking_type=3.");
+		log_warn("WARNING: See lvmlockd(8) for information on using cluster/clvm VGs.");
+		type = 1;
+
+		log_very_verbose("%sFile-based locking selected.",
+				 _blocking_supported ? "" : "Non-blocking ");
+
+		if (!init_file_locking(&_locking, cmd, suppress_messages)) {
+			log_error_suppress(suppress_messages,
+					   "File-based locking initialisation failed.");
+			break;
+		}
+		return 1;
 #endif
 
 	case 4:
@@ -548,6 +548,9 @@ static int _init_vg_dlm(struct cmd_context *cmd, struct volume_group *vg)
 	case -EPROTONOSUPPORT:
 		log_error("VG %s init failed: lock manager dlm is not supported by lvmlockd", vg->name);
 		break;
+	case -EEXIST:
+		log_error("VG %s init failed: a lockspace with the same name exists", vg->name);
+		break;
 	default:
 		log_error("VG %s init failed: %d", vg->name, result);
 	}
@@ -671,6 +674,9 @@ static int _init_vg_sanlock(struct cmd_context *cmd, struct volume_group *vg, in
 	case -EMSGSIZE:
 		log_error("VG %s init failed: no disk space for leases", vg->name);
 		break;
+	case -EEXIST:
+		log_error("VG %s init failed: a lockspace with the same name exists", vg->name);
+		break;
 	default:
 		log_error("VG %s init failed: %d", vg->name, result);
 	}
@@ -1547,6 +1553,16 @@ int lockd_gl(struct cmd_context *cmd, const char *def_mode, uint32_t flags)
 		}
 	}
 
+	if (result == -EALREADY) {
+		/*
+		 * This should generally not happen because commands should be coded
+		 * to avoid reacquiring the global lock.  If there is a case that's
+		 * missed which causes the command to request the gl when it's already
+		 * held, it's not a problem, so let it go.
+		 */
+		log_debug("lockd global mode %s already held.", mode);
+		return 1;
+	}
+
 	if (!strcmp(mode, "un"))
 		return 1;
@@ -2095,8 +2111,9 @@ int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
 
 	if (result == -EEXIST) {
 		/*
-		 * This happens if lvchange tries to modify the LV with an ex
-		 * LV lock when the LV is already active with a sh LV lock.
+		 * This happens if a command like lvchange tries to modify the
+		 * LV with an ex LV lock when the LV is already active with a
+		 * sh LV lock.
 		 */
 		log_error("LV is already locked with incompatible mode: %s/%s", vg->name, lv_name);
 		return 0;
@@ -2405,10 +2422,6 @@ int lockd_init_lv_args(struct cmd_context *cmd, struct volume_group *vg,
  * an LV with no lock_args will do nothing (unless the LV type causes the lock
  * request to be directed to another LV with a lock, e.g. to the thin pool LV
  * for thin LVs.)
- *
- * Current limitations:
- * - cache-type LV's in a lockd VG must be created with lvconvert.
- * - creating a thin pool and thin lv in one command is not allowed.
  */
 
 int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg, struct logical_volume *lv,
@@ -2437,13 +2450,15 @@ int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg, struct logic
 		/* needs_lock_init is set for LVs that need a lockd lock. */
 		return 1;
 
-	} else if (seg_is_cache(lp) || seg_is_cache_pool(lp)) {
+	} else if (seg_is_cache_pool(lp)) {
 		/*
-		 * This should not happen because the command defs are
-		 * checked and excluded for shared VGs early in lvcreate.
+		 * A cache pool does not use a lockd lock because it cannot be
+		 * used by itself.  When a cache pool is attached to an actual
+		 * LV, the lockd lock for that LV covers the LV and the cache
+		 * pool attached to it.
 		 */
-		log_error("Use lvconvert for cache with lock type %s", vg->lock_type);
-		return 0;
+		lv->lock_args = NULL;
+		return 1;
 
 	} else if (!seg_is_thin_volume(lp) && lp->snapshot) {
 		struct logical_volume *origin_lv;
@@ -184,7 +184,7 @@ int update_cache_pool_params(struct cmd_context *cmd,
 		 * keep user informed he might be using things in untintended direction
 		 */
 		log_print_unless_silent("Using %s chunk size instead of default %s, "
-					"so cache pool has less then " FMTu64 " chunks.",
+					"so cache pool has less than " FMTu64 " chunks.",
 					display_size(cmd, min_chunk_size),
 					display_size(cmd, *chunk_size),
 					max_chunks);
@@ -193,7 +193,7 @@ int update_cache_pool_params(struct cmd_context *cmd,
 		log_verbose("Setting chunk size to %s.",
 			    display_size(cmd, *chunk_size));
 	} else if (*chunk_size < min_chunk_size) {
-		log_error("Chunk size %s is less then required minimal chunk size %s "
+		log_error("Chunk size %s is less than required minimal chunk size %s "
 			  "for a cache pool of %s size and limit " FMTu64 " chunks.",
 			  display_size(cmd, *chunk_size),
 			  display_size(cmd, min_chunk_size),
|
@ -7801,10 +7801,20 @@ static struct logical_volume *_lv_create_an_lv(struct volume_group *vg,
|
||||
lv->status |= LV_TEMPORARY;
|
||||
|
||||
if (seg_is_cache(lp)) {
|
||||
if (is_lockd_type(lv->vg->lock_type)) {
|
||||
if (is_change_activating(lp->activate)) {
|
||||
if (!lv_active_change(cmd, lv, CHANGE_AEY, 0)) {
|
||||
log_error("Aborting. Failed to activate LV %s.",
|
||||
display_lvname(lv));
|
||||
goto revert_new_lv;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/* FIXME Support remote exclusive activation? */
|
||||
/* Not yet 'cache' LV, it is stripe volume for wiping */
|
||||
if (is_change_activating(lp->activate) &&
|
||||
!activate_lv_excl_local(cmd, lv)) {
|
||||
|
||||
else if (is_change_activating(lp->activate) && !activate_lv_excl_local(cmd, lv)) {
|
||||
log_error("Aborting. Failed to activate LV %s locally exclusively.",
|
||||
display_lvname(lv));
|
||||
goto revert_new_lv;
|
||||
@ -8000,7 +8010,7 @@ deactivate_and_revert_new_lv:
|
||||
|
||||
revert_new_lv:
|
||||
lockd_lv(cmd, lv, "un", LDLV_PERSISTENT);
|
||||
lockd_free_lv(vg->cmd, vg, lp->lv_name, &lv->lvid.id[1], lv->lock_args);
|
||||
lockd_free_lv(vg->cmd, vg, lv->name, &lv->lvid.id[1], lv->lock_args);
|
||||
|
||||
/* FIXME Better to revert to backup of metadata? */
|
||||
if (!lv_remove(lv) || !vg_write(vg) || !vg_commit(vg))
|
||||
@ -8025,8 +8035,14 @@ struct logical_volume *lv_create_single(struct volume_group *vg,
|
||||
if (!(lp->segtype = get_segtype_from_string(vg->cmd, SEG_TYPE_NAME_THIN_POOL)))
|
||||
return_NULL;
|
||||
|
||||
/* We want a lockd lock for the new thin pool, but not the thin lv. */
|
||||
lp->needs_lockd_init = 1;
|
||||
|
||||
if (!(lv = _lv_create_an_lv(vg, lp, lp->pool_name)))
|
||||
return_NULL;
|
||||
|
||||
lp->needs_lockd_init = 0;
|
||||
|
||||
} else if (seg_is_cache(lp)) {
|
||||
if (!lp->origin_name) {
|
||||
/* Until we have --pooldatasize we are lost */
|
||||
|
@@ -652,7 +652,7 @@ int vg_write(struct volume_group *vg);
 int vg_commit(struct volume_group *vg);
 void vg_revert(struct volume_group *vg);
 struct volume_group *vg_read_internal(struct cmd_context *cmd, const char *vg_name,
-				      const char *vgid, uint32_t warn_flags, int *consistent);
+				      const char *vgid, uint32_t lockd_state, uint32_t warn_flags, int *consistent);
 
 #define get_pvs( cmd ) get_pvs_internal((cmd), NULL, NULL)
 #define get_pvs_perserve_vg( cmd, pv_list, vg_list ) get_pvs_internal((cmd), (pv_list), (vg_list))
@@ -1310,6 +1310,7 @@ int validate_vg_rename_params(struct cmd_context *cmd,
 			      const char *vg_name_new);
 
 int is_lockd_type(const char *lock_type);
+int vg_is_shared(const struct volume_group *vg);
 
 int is_system_id_allowed(struct cmd_context *cmd, const char *system_id);
@@ -227,13 +227,12 @@ static int _pvcreate_check(struct cmd_context *cmd, const char *name,
 	/*
 	 * This test will fail if the device belongs to an MD array.
 	 */
-	if (!dev_test_excl(dev)) {
+	if (!label_scan_open_excl(dev)) {
 		/* FIXME Detect whether device-mapper itself is still using it */
 		log_error("Can't open %s exclusively.  Mounted filesystem?",
 			  name);
 		goto out;
 	}
-	dev_close(dev);
 
 	if (!wipe_known_signatures(cmd, dev, name,
 				   TYPE_LVM1_MEMBER | TYPE_LVM2_MEMBER,
@@ -578,16 +577,6 @@ static int _pvremove_single(struct cmd_context *cmd, const char *pv_name,
 		goto out;
 	}
 
-	// FIXME: why is this called if info is not used?
-	//info = lvmcache_info_from_pvid(dev->pvid, dev, 0);
-
-	if (!dev_test_excl(dev)) {
-		/* FIXME Detect whether device-mapper is still using the device */
-		log_error("Can't open %s exclusively - not removing. "
-			  "Mounted filesystem?", dev_name(dev));
-		goto out;
-	}
-
 	/* Wipe existing label(s) */
 	if (!label_remove(dev)) {
 		log_error("Failed to wipe existing label(s) on %s", pv_name);
@@ -1050,7 +1050,7 @@ uint32_t extents_from_size(struct cmd_context *cmd, uint64_t size,
 
 	if (size > (uint64_t) MAX_EXTENT_COUNT * extent_size) {
 		log_error("Volume too large (%s) for extent size %s. "
-			  "Upper limit is less then %s.",
+			  "Upper limit is less than %s.",
 			  display_size(cmd, size),
 			  display_size(cmd, (uint64_t) extent_size),
 			  display_size(cmd, (uint64_t) MAX_EXTENT_COUNT *
@@ -1413,7 +1413,7 @@ static int _pvcreate_write(struct cmd_context *cmd, struct pv_to_write *pvw)
 	struct device *dev = pv->dev;
 	const char *pv_name = dev_name(dev);
 
-	if (!label_scan_open(dev)) {
+	if (!label_scan_open_excl(dev)) {
 		log_error("%s not opened: device not written", pv_name);
 		return 0;
 	}
@@ -3541,7 +3541,7 @@ static int _is_foreign_vg(struct volume_group *vg)
 	return vg->cmd->system_id && strcmp(vg->system_id, vg->cmd->system_id);
 }
 
-static int _repair_inconsistent_vg(struct volume_group *vg)
+static int _repair_inconsistent_vg(struct volume_group *vg, uint32_t lockd_state)
 {
 	unsigned saved_handles_missing_pvs = vg->cmd->handles_missing_pvs;
 
@@ -3556,9 +3556,8 @@ static int _repair_inconsistent_vg(struct volume_group *vg)
 		return 0;
 	}
 
-	/* FIXME: do this at higher level where lvmlockd lock can be changed. */
-	if (is_lockd_type(vg->lock_type)) {
-		log_verbose("Skip metadata repair for shared VG.");
+	if (is_lockd_type(vg->lock_type) && !(lockd_state & LDST_EX)) {
+		log_verbose("Skip metadata repair for shared VG without exclusive lock.");
 		return 0;
 	}
 
@@ -3581,7 +3580,7 @@ static int _repair_inconsistent_vg(struct volume_group *vg)
 	return 1;
 }
 
-static int _wipe_outdated_pvs(struct cmd_context *cmd, struct volume_group *vg, struct dm_list *to_check)
+static int _wipe_outdated_pvs(struct cmd_context *cmd, struct volume_group *vg, struct dm_list *to_check, uint32_t lockd_state)
 {
 	struct pv_list *pvl, *pvl2;
 	char uuid[64] __attribute__((aligned(8)));
@@ -3603,14 +3602,8 @@ static int _wipe_outdated_pvs(struct cmd_context *cmd, struct volume_group *vg,
 		return 0;
 	}
 
-	/*
-	 * FIXME: do this at higher level where lvmlockd lock can be changed.
-	 * Also if we're reading the VG with the --shared option (not using
-	 * lvmlockd), we can see a VG while it's being written by another
-	 * host, same as the foreign VG case.
-	 */
-	if (is_lockd_type(vg->lock_type)) {
-		log_debug_metadata("Skip wiping outdated PVs for shared VG.");
+	if (is_lockd_type(vg->lock_type) && !(lockd_state & LDST_EX)) {
+		log_verbose("Skip wiping outdated PVs for shared VG without exclusive lock.");
 		return 0;
 	}
@@ -3619,6 +3612,8 @@ static int _wipe_outdated_pvs(struct cmd_context *cmd, struct volume_group *vg,
 			if (pvl->pv->dev == pvl2->pv->dev)
 				goto next_pv;
 		}
+
+
 		if (!id_write_format(&pvl->pv->id, uuid, sizeof(uuid)))
 			return_0;
 		log_warn("WARNING: Removing PV %s (%s) that no longer belongs to VG %s",
@@ -3639,6 +3634,7 @@ next_pv:
 
 static int _check_or_repair_pv_ext(struct cmd_context *cmd,
 				   struct volume_group *vg,
+				   uint32_t lockd_state,
 				   int repair, int *inconsistent_pvs)
 {
 	char uuid[64] __attribute__((aligned(8)));
@@ -3688,10 +3684,7 @@ static int _check_or_repair_pv_ext(struct cmd_context *cmd,
 					 "VG %s but not marked as used.",
 					 pv_dev_name(pvl->pv), vg->name);
 				*inconsistent_pvs = 1;
-			} else if (is_lockd_type(vg->lock_type)) {
-				/*
-				 * FIXME: decide how to handle repair for shared VGs.
-				 */
+			} else if (is_lockd_type(vg->lock_type) && !(lockd_state & LDST_EX)) {
 				log_warn("Skip repair of PV %s that is in shared "
 					 "VG %s but not marked as used.",
 					 pv_dev_name(pvl->pv), vg->name);
@@ -3715,7 +3708,7 @@ static int _check_or_repair_pv_ext(struct cmd_context *cmd,
 
 	r = 1;
 out:
-	if ((pvs_fixed > 0) && !_repair_inconsistent_vg(vg))
+	if ((pvs_fixed > 0) && !_repair_inconsistent_vg(vg, lockd_state))
 		return_0;
 
 	return r;
@@ -3738,6 +3731,7 @@ out:
 static struct volume_group *_vg_read(struct cmd_context *cmd,
 				     const char *vgname,
 				     const char *vgid,
+				     uint32_t lockd_state,
 				     uint32_t warn_flags,
 				     int *consistent, unsigned precommitted)
 {
@@ -3765,13 +3759,9 @@ static struct volume_group *_vg_read(struct cmd_context *cmd,
 	struct cached_vg_fmtdata *vg_fmtdata = NULL;	/* Additional format-specific data about the vg */
 	unsigned use_previous_vg;
 
-	uuid[0] = '\0';
-	if (vgid && !id_write_format((const struct id*)vgid, uuid, sizeof(uuid)))
-		stack;
-
-	log_very_verbose("Reading VG %s %s", vgname ?: "<no name>", vgid ? uuid : "<no vgid>");
-
 	if (is_orphan_vg(vgname)) {
+		log_very_verbose("Reading VG %s", vgname);
+
 		if (use_precommitted) {
 			log_error(INTERNAL_ERROR "vg_read_internal requires vgname "
 				  "with pre-commit.");
@@ -3780,15 +3770,21 @@ static struct volume_group *_vg_read(struct cmd_context *cmd,
 		return _vg_read_orphans(cmd, warn_flags, vgname, consistent);
 	}
 
+	uuid[0] = '\0';
+	if (vgid && !id_write_format((const struct id*)vgid, uuid, sizeof(uuid)))
+		stack;
+
+	log_very_verbose("Reading VG %s %s", vgname ?: "<no name>", vgid ? uuid : "<no vgid>");
+
 	if (lvmetad_used() && !use_precommitted) {
 		if ((correct_vg = lvmetad_vg_lookup(cmd, vgname, vgid))) {
 			dm_list_iterate_items(pvl, &correct_vg->pvs)
 				reappeared += _check_reappeared_pv(correct_vg, pvl->pv, *consistent);
 			if (reappeared && *consistent)
-				*consistent = _repair_inconsistent_vg(correct_vg);
+				*consistent = _repair_inconsistent_vg(correct_vg, lockd_state);
 			else
 				*consistent = !reappeared;
-			if (_wipe_outdated_pvs(cmd, correct_vg, &correct_vg->pvs_outdated)) {
+			if (_wipe_outdated_pvs(cmd, correct_vg, &correct_vg->pvs_outdated, lockd_state)) {
 				/* clear the list */
 				dm_list_init(&correct_vg->pvs_outdated);
 				lvmetad_vg_clear_outdated_pvs(correct_vg);
@@ -4308,13 +4304,13 @@ static struct volume_group *_vg_read(struct cmd_context *cmd,
 			dm_list_iterate_items(pvl, &all_pvs)
 				_check_reappeared_pv(correct_vg, pvl->pv, 1);
 
-			if (!_repair_inconsistent_vg(correct_vg)) {
+			if (!_repair_inconsistent_vg(correct_vg, lockd_state)) {
 				_free_pv_list(&all_pvs);
 				release_vg(correct_vg);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
if (!_wipe_outdated_pvs(cmd, correct_vg, &all_pvs)) {
|
||||
if (!_wipe_outdated_pvs(cmd, correct_vg, &all_pvs, lockd_state)) {
|
||||
_free_pv_list(&all_pvs);
|
||||
release_vg(correct_vg);
|
||||
return_NULL;
|
||||
@ -4338,7 +4334,7 @@ static struct volume_group *_vg_read(struct cmd_context *cmd,
|
||||
}
|
||||
|
||||
/* We have the VG now finally, check if PV ext info is in sync with VG metadata. */
|
||||
if (!cmd->is_clvmd && !_check_or_repair_pv_ext(cmd, correct_vg,
|
||||
if (!cmd->is_clvmd && !_check_or_repair_pv_ext(cmd, correct_vg, lockd_state,
|
||||
skipped_rescan ? 0 : *consistent,
|
||||
&inconsistent_pvs)) {
|
||||
release_vg(correct_vg);
|
||||
@ -4500,13 +4496,15 @@ static int _check_devs_used_correspond_with_vg(struct volume_group *vg)
|
||||
return 1;
|
||||
}
|
||||
|
||||
struct volume_group *vg_read_internal(struct cmd_context *cmd, const char *vgname,
|
||||
const char *vgid, uint32_t warn_flags, int *consistent)
|
||||
struct volume_group *vg_read_internal(struct cmd_context *cmd,
|
||||
const char *vgname, const char *vgid,
|
||||
uint32_t lockd_state, uint32_t warn_flags,
|
||||
int *consistent)
|
||||
{
|
||||
struct volume_group *vg;
|
||||
struct lv_list *lvl;
|
||||
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, warn_flags, consistent, 0)))
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, lockd_state, warn_flags, consistent, 0)))
|
||||
goto_out;
|
||||
|
||||
if (!check_pv_dev_sizes(vg))
|
||||
@ -4614,7 +4612,7 @@ struct volume_group *vg_read_by_vgid(struct cmd_context *cmd,
|
||||
|
||||
label_scan_setup_bcache();
|
||||
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, warn_flags, &consistent, precommitted))) {
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, 0, warn_flags, &consistent, precommitted))) {
|
||||
log_error("Rescan devices to look for missing VG.");
|
||||
goto scan;
|
||||
}
|
||||
@ -4635,7 +4633,7 @@ struct volume_group *vg_read_by_vgid(struct cmd_context *cmd,
|
||||
lvmcache_label_scan(cmd);
|
||||
warn_flags |= SKIP_RESCAN;
|
||||
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, warn_flags, &consistent, precommitted)))
|
||||
if (!(vg = _vg_read(cmd, vgname, vgid, 0, warn_flags, &consistent, precommitted)))
|
||||
goto fail;
|
||||
|
||||
label_scan_destroy(cmd); /* drop bcache to close devs, keep lvmcache */
|
||||
@ -4830,7 +4828,7 @@ static int _get_pvs(struct cmd_context *cmd, uint32_t warn_flags,
|
||||
struct dm_list *pvslist, struct dm_list *vgslist)
|
||||
{
|
||||
struct dm_str_list *strl;
|
||||
const char *vgname, *vgid;
|
||||
const char *vgname, *name, *vgid;
|
||||
struct pv_list *pvl, *pvl_copy;
|
||||
struct dm_list *vgids;
|
||||
struct volume_group *vg;
|
||||
@ -4856,11 +4854,13 @@ static int _get_pvs(struct cmd_context *cmd, uint32_t warn_flags,
|
||||
if (!vgid)
|
||||
continue; /* FIXME Unnecessary? */
|
||||
consistent = 0;
|
||||
if (!(vgname = lvmcache_vgname_from_vgid(NULL, vgid))) {
|
||||
if (!(name = lvmcache_vgname_from_vgid(NULL, vgid))) {
|
||||
stack;
|
||||
continue;
|
||||
}
|
||||
|
||||
vgname = dm_pool_strdup(cmd->mem, name);
|
||||
|
||||
/*
|
||||
* When we are retrieving a list to return toliblvm we need
|
||||
* that list to contain VGs that are modifiable as we are using
|
||||
@ -4872,7 +4872,7 @@ static int _get_pvs(struct cmd_context *cmd, uint32_t warn_flags,
|
||||
|
||||
warn_flags |= WARN_INCONSISTENT;
|
||||
|
||||
if (!(vg = vg_read_internal(cmd, vgname, (!vgslist) ? vgid : NULL, warn_flags, &consistent))) {
|
||||
if (!(vg = vg_read_internal(cmd, vgname, (!vgslist) ? vgid : NULL, 0, warn_flags, &consistent))) {
|
||||
stack;
|
||||
continue;
|
||||
}
|
||||
@ -5185,17 +5185,30 @@ int vg_check_status(const struct volume_group *vg, uint64_t status)
|
||||
* VG is left unlocked on failure
|
||||
*/
|
||||
static struct volume_group *_recover_vg(struct cmd_context *cmd,
|
||||
const char *vg_name, const char *vgid)
|
||||
const char *vg_name, const char *vgid, uint32_t lockd_state)
|
||||
{
|
||||
int consistent = 1;
|
||||
struct volume_group *vg;
|
||||
uint32_t state = 0;
|
||||
|
||||
unlock_vg(cmd, NULL, vg_name);
|
||||
|
||||
if (!lock_vol(cmd, vg_name, LCK_VG_WRITE, NULL))
|
||||
return_NULL;
|
||||
|
||||
if (!(vg = vg_read_internal(cmd, vg_name, vgid, WARN_PV_READ, &consistent))) {
|
||||
/*
|
||||
* Convert vg lock in lvmlockd from sh to ex.
|
||||
*/
|
||||
if (!(lockd_state & LDST_FAIL) && !(lockd_state & LDST_EX)) {
|
||||
log_debug("Upgrade lvmlockd lock to repair vg %s.", vg_name);
|
||||
if (!lockd_vg(cmd, vg_name, "ex", 0, &state)) {
|
||||
log_warn("Skip repair for shared VG without exclusive lock.");
|
||||
return NULL;
|
||||
}
|
||||
lockd_state |= LDST_EX;
|
||||
}
|
||||
|
||||
if (!(vg = vg_read_internal(cmd, vg_name, vgid, lockd_state, WARN_PV_READ, &consistent))) {
|
||||
unlock_vg(cmd, NULL, vg_name);
|
||||
return_NULL;
|
||||
}
|
||||
@ -5469,7 +5482,7 @@ static struct volume_group *_vg_lock_and_read(struct cmd_context *cmd, const cha
|
||||
warn_flags |= WARN_INCONSISTENT;
|
||||
|
||||
/* If consistent == 1, we get NULL here if correction fails. */
|
||||
if (!(vg = vg_read_internal(cmd, vg_name, vgid, warn_flags, &consistent))) {
|
||||
if (!(vg = vg_read_internal(cmd, vg_name, vgid, lockd_state, warn_flags, &consistent))) {
|
||||
if (consistent_in && !consistent) {
|
||||
failure |= FAILED_INCONSISTENT;
|
||||
goto bad;
|
||||
@ -5486,7 +5499,7 @@ static struct volume_group *_vg_lock_and_read(struct cmd_context *cmd, const cha
|
||||
/* consistent == 0 when VG is not found, but failed == FAILED_NOTFOUND */
|
||||
if (!consistent && !failure) {
|
||||
release_vg(vg);
|
||||
if (!(vg = _recover_vg(cmd, vg_name, vgid))) {
|
||||
if (!(vg = _recover_vg(cmd, vg_name, vgid, lockd_state))) {
|
||||
if (is_orphan_vg(vg_name))
|
||||
log_error("Recovery of standalone physical volumes failed.");
|
||||
else
|
||||
@ -6039,6 +6052,11 @@ int is_lockd_type(const char *lock_type)
|
||||
return 0;
|
||||
}
|
||||
|
||||
int vg_is_shared(const struct volume_group *vg)
|
||||
{
|
||||
return (vg->lock_type && is_lockd_type(vg->lock_type));
|
||||
}
|
||||
|
||||
int vg_strip_outdated_historical_lvs(struct volume_group *vg) {
|
||||
struct glv_list *glvl, *tglvl;
|
||||
time_t current_time = time(NULL);
|
||||
|
@@ -424,6 +424,11 @@ revert_new_lv:
static int _activate_lv_like_model(struct logical_volume *model,
struct logical_volume *lv)
{
/* FIXME: run all cases through lv_active_change when clvm variants are gone. */

if (is_lockd_type(lv->vg->lock_type))
return lv_active_change(lv->vg->cmd, lv, CHANGE_AEY, 0);

if (lv_is_active_exclusive(model)) {
if (!activate_lv_excl(lv->vg->cmd, lv))
return_0;
@@ -705,6 +710,9 @@ static int _split_mirror_images(struct logical_volume *lv,
return 0;
}

if (!strcmp(lv->vg->lock_type, "dlm"))
new_lv->lock_args = lv->lock_args;

if (!dm_list_empty(&split_images)) {
/*
* A number of images have been split and
@@ -21,6 +21,7 @@
#include "lib/activate/activate.h"
#include "lib/metadata/lv_alloc.h"
#include "lib/misc/lvm-string.h"
#include "lib/locking/lvmlockd.h"

typedef int (*fn_on_lv_t)(struct logical_volume *lv, void *data);
static int _eliminate_extracted_lvs_optional_write_vg(struct volume_group *vg,
@@ -3315,7 +3316,7 @@ int lv_raid_split(struct logical_volume *lv, int yes, const char *split_name,
dm_list_init(&removal_lvs);
dm_list_init(&data_list);

if (is_lockd_type(lv->vg->lock_type)) {
if (lv->vg->lock_type && !strcmp(lv->vg->lock_type, "sanlock")) {
log_error("Splitting raid image is not allowed with lock_type %s.",
lv->vg->lock_type);
return 0;
@@ -3394,6 +3395,9 @@ int lv_raid_split(struct logical_volume *lv, int yes, const char *split_name,

lvl->lv->name = split_name;

if (!strcmp(lv->vg->lock_type, "dlm"))
lvl->lv->lock_args = lv->lock_args;

if (!vg_write(lv->vg)) {
log_error("Failed to write changes for %s.",
display_lvname(lv));
@@ -3419,7 +3423,13 @@
* the original RAID LV having possibly had sub-LVs that have been
* shifted and renamed.
*/
if (!activate_lv_excl_local(cmd, lvl->lv))

/* FIXME: run all cases through lv_active_change when clvm variants are gone. */

if (is_lockd_type(lvl->lv->vg->lock_type)) {
if (!lv_active_change(lv->vg->cmd, lvl->lv, CHANGE_AEY, 0))
return_0;
} else if (!activate_lv_excl_local(cmd, lvl->lv))
return_0;

dm_list_iterate_items(lvl, &removal_lvs)
@@ -3473,7 +3483,7 @@ int lv_raid_split_and_track(struct logical_volume *lv,
int s;
struct lv_segment *seg = first_seg(lv);

if (is_lockd_type(lv->vg->lock_type)) {
if (lv->vg->lock_type && !strcmp(lv->vg->lock_type, "sanlock")) {
log_error("Splitting raid image is not allowed with lock_type %s.",
lv->vg->lock_type);
return 0;
@@ -3574,6 +3584,10 @@ int lv_raid_merge(struct logical_volume *image_lv)
return 0;
}

/* Ensure primary LV is not active elsewhere. */
if (!lockd_lv(vg->cmd, lvl->lv, "ex", 0))
return_0;

lv = lvl->lv;
seg = first_seg(lv);
for (s = 0; s < seg->area_count; ++s)
@@ -228,7 +228,7 @@ int pool_metadata_min_threshold(const struct lv_segment *pool_seg)
*
* In the metadata LV there should be minimum from either 4MiB of free space
* or at least 25% of free space, which applies when the size of thin pool's
* metadata is less then 16MiB.
* metadata is less than 16MiB.
*/
const dm_percent_t meta_min = DM_PERCENT_1 * 25;
dm_percent_t meta_free = dm_make_percent(((4096 * 1024) >> SECTOR_SHIFT),
@@ -211,6 +211,7 @@ FIELD(VGS, vg, BIN, "Exported", cmd, 10, vgexported, vg_exported, "Set if VG is
FIELD(VGS, vg, BIN, "Partial", cmd, 10, vgpartial, vg_partial, "Set if VG is partial.", 0)
FIELD(VGS, vg, STR, "AllocPol", cmd, 10, vgallocationpolicy, vg_allocation_policy, "VG allocation policy.", 0)
FIELD(VGS, vg, BIN, "Clustered", cmd, 10, vgclustered, vg_clustered, "Set if VG is clustered.", 0)
FIELD(VGS, vg, BIN, "Shared", cmd, 7, vgshared, vg_shared, "Set if VG is shared.", 0)
FIELD(VGS, vg, SIZ, "VSize", cmd, 0, vgsize, vg_size, "Total size of VG in current units.", 0)
FIELD(VGS, vg, SIZ, "VFree", cmd, 0, vgfree, vg_free, "Total amount of free space in current units.", 0)
FIELD(VGS, vg, STR, "SYS ID", cmd, 0, vgsystemid, vg_sysid, "System ID of the VG indicating which host owns it.", 0)

@@ -213,6 +213,8 @@ GET_PV_NUM_PROPERTY_FN(pv_ba_size, SECTOR_SIZE * pv->ba_size)
#define _vg_allocation_policy_get prop_not_implemented_get
#define _vg_clustered_set prop_not_implemented_set
#define _vg_clustered_get prop_not_implemented_get
#define _vg_shared_set prop_not_implemented_set
#define _vg_shared_get prop_not_implemented_get

#define _lv_layout_set prop_not_implemented_set
#define _lv_layout_get prop_not_implemented_get

@@ -3385,6 +3385,14 @@ static int _vgclustered_disp(struct dm_report *rh, struct dm_pool *mem,
return _binary_disp(rh, mem, field, clustered, GET_FIRST_RESERVED_NAME(vg_clustered_y), private);
}

static int _vgshared_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
{
int shared = (vg_is_shared((const struct volume_group *) data)) != 0;
return _binary_disp(rh, mem, field, shared, GET_FIRST_RESERVED_NAME(vg_shared_y), private);
}

static int _lvlayout_disp(struct dm_report *rh, struct dm_pool *mem,
struct dm_report_field *field,
const void *data, void *private)
@@ -60,6 +60,7 @@ FIELD_RESERVED_BINARY_VALUE(vg_extendable, vg_extendable, "", "extendable")
FIELD_RESERVED_BINARY_VALUE(vg_exported, vg_exported, "", "exported")
FIELD_RESERVED_BINARY_VALUE(vg_partial, vg_partial, "", "partial")
FIELD_RESERVED_BINARY_VALUE(vg_clustered, vg_clustered, "", "clustered")
FIELD_RESERVED_BINARY_VALUE(vg_shared, vg_shared, "", "shared")
FIELD_RESERVED_VALUE(NAMED, vg_permissions, vg_permissions_rw, "", "writeable", "writeable", "rw", "read-write")
FIELD_RESERVED_VALUE(NAMED, vg_permissions, vg_permissions_r, "", "read-only", "read-only", "r", "ro")
FIELD_RESERVED_VALUE(NOFLAG, vg_mda_copies, vg_mda_copies_unmanaged, "", &GET_TYPE_RESERVED_VALUE(num_undef_64), "unmanaged")
liblvm/test/vgadd.c (new file, 90 lines)
@@ -0,0 +1,90 @@
/*
 * Copyright (C) 2009 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include <stdio.h>
#include <unistd.h>
#include <inttypes.h>
#include <assert.h>

#include "lvm2app.h"

int main(int argc, char *argv[])
{
	char *vgname = NULL;
	lvm_t handle;
	vg_t vg;
	lv_t lv;
	lvm_str_list_t *sl;
	pv_list_t *pvl;
	lv_list_t *lvl;
	struct dm_list *vgnames;
	struct dm_list *vgids;
	struct dm_list *pvlist;
	struct dm_list *lvlist;
	int added = 0;
	int ret;
	int i;

	vgname = argv[1];

	handle = lvm_init(NULL);
	if (!handle) {
		printf("lvm_init failed\n");
		return -1;
	}

	vg = lvm_vg_create(handle, vgname);

	for (i = 2; i < argc; i++) {
		printf("adding %s to vg\n", argv[i]);
		ret = lvm_vg_extend(vg, argv[i]);

		if (ret) {
			printf("Failed to add %s to vg\n", argv[i]);
			goto out;
		}

		added++;
	}

	if (!added) {
		printf("No PVs added, not writing VG.\n");
		goto out;
	}

	printf("writing vg\n");
	ret = lvm_vg_write(vg);

	lvm_vg_close(vg);

	sleep(1);

	vg = lvm_vg_open(handle, vgname, "w", 0);
	if (!vg) {
		printf("vg open %s failed\n", vgname);
		goto out;
	}

	lv = lvm_vg_create_lv_linear(vg, "lv0", 1024*1024);
	if (!lv) {
		printf("lv create failed\n");
		goto out;
	}

	lvm_vg_close(vg);
out:
	lvm_quit(handle);

	return 0;
}
liblvm/test/vgshow.c (new file, 95 lines)
@@ -0,0 +1,95 @@
/*
 * Copyright (C) 2009 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include <stdio.h>
#include <unistd.h>
#include <inttypes.h>
#include <assert.h>

#include "lvm2app.h"

int main(int argc, char *argv[])
{
	char *vgname = NULL;
	lvm_t handle;
	vg_t vg;
	lvm_str_list_t *sl;
	pv_list_t *pvl;
	lv_list_t *lvl;
	struct dm_list *vgnames;
	struct dm_list *vgids;
	struct dm_list *pvlist;
	struct dm_list *lvlist;
	uint64_t val;

	vgname = argv[1];

	handle = lvm_init(NULL);
	if (!handle) {
		printf("lvm_init failed\n");
		return -1;
	}

	vgnames = lvm_list_vg_names(handle);

	dm_list_iterate_items(sl, vgnames)
		printf("vg name %s\n", sl->str);

	vgids = lvm_list_vg_uuids(handle);

	dm_list_iterate_items(sl, vgids)
		printf("vg uuid %s\n", sl->str);

	if (!vgname) {
		printf("No vg name arg\n");
		goto out;
	}

	vg = lvm_vg_open(handle, vgname, "r", 0);

	if (!vg) {
		printf("vg open %s failed\n", vgname);
		goto out;
	}

	val = lvm_vg_get_seqno(vg);

	printf("vg seqno %llu\n", (unsigned long long)val);

	pvlist = lvm_vg_list_pvs(vg);

	dm_list_iterate_items(pvl, pvlist) {
		printf("vg pv name %s\n", lvm_pv_get_name(pvl->pv));

		val = lvm_pv_get_dev_size(pvl->pv);

		printf("vg pv size %llu\n", (unsigned long long)val);
	}

	lvlist = lvm_vg_list_lvs(vg);

	dm_list_iterate_items(lvl, lvlist) {
		printf("vg lv name %s\n", lvm_lv_get_name(lvl->lv));

		val = lvm_lv_get_size(lvl->lv);

		printf("vg lv size %llu\n", (unsigned long long)val);
	}

	lvm_vg_close(vg);
out:
	lvm_quit(handle);

	return 0;
}
@@ -310,6 +310,7 @@ LIB_VERSION_APP := $(shell $(AWK) -F '[(). ]' '{printf "%s.%s",$$1,$$4}' $(top_s

INCLUDES += -I$(top_srcdir) -I$(srcdir) -I$(top_builddir)/include

DEPS = $(top_builddir)/make.tmpl $(top_srcdir)/VERSION \
$(top_builddir)/Makefile
@@ -123,7 +123,7 @@ Command is executed with environmental variable
in this environment will not try to interact with dmeventd.
To see the fullness of a thin pool command may check these
two environmental variables
\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_DATA\fP.
\fBDMEVENTD_THIN_POOL_DATA\fP and \fBDMEVENTD_THIN_POOL_METADATA\fP.
Command can also read status with tools like \fBlvs\fP(8).
.
.SH ENVIRONMENT VARIABLES
@@ -134,7 +134,7 @@ Variable is set by thin plugin and is available to executed program. Value prese
actual usage of thin pool data volume. Variable is not set when error event
is processed.
.TP
.B DMEVENTD_THIN_POOL_DATA
.B DMEVENTD_THIN_POOL_METADATA
Variable is set by thin plugin and is available to executed program. Value present
actual usage of thin pool metadata volume. Variable is not set when error event
is processed.
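The man-page hunk above fixes the second variable name to DMEVENTD_THIN_POOL_METADATA. A minimal sketch of the decision a plugin-executed command could make from those two variables, assuming (as the man page suggests) that each value is an integer fullness percentage and that both are absent when an error event is processed; the threshold of 80 is an arbitrary example:

```python
def thin_pool_warnings(env, threshold=80):
    """Return warning tags derived from the dmeventd thin plugin env vars.

    `env` is a mapping like os.environ; both variables missing is treated
    as an error event, per the man page above.
    """
    data = env.get("DMEVENTD_THIN_POOL_DATA")
    meta = env.get("DMEVENTD_THIN_POOL_METADATA")
    if data is None and meta is None:
        return ["error-event"]
    warnings = []
    if data is not None and int(data) >= threshold:
        warnings.append("data-over-threshold")
    if meta is not None and int(meta) >= threshold:
        warnings.append("metadata-over-threshold")
    return warnings
```

A real script would act on these tags (resize the pool, notify an admin); this only shows how the two variables are meant to be read.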
@@ -843,25 +843,19 @@ to a lockd VG".
Things that do not yet work in lockd VGs:
.br
\[bu]
creating a new thin pool and a new thin LV in a single command
.br
\[bu]
using lvcreate to create cache pools or cache LVs (use lvconvert)
.br
\[bu]
using external origins for thin LVs
.br
\[bu]
splitting mirrors and snapshots from LVs
splitting snapshots from LVs
.br
\[bu]
splitting mirrors in sanlock VGs
.br
\[bu]
pvmove of entire PVs, or under LVs activated with shared locks
.br
\[bu]
vgsplit
.br
\[bu]
vgmerge
vgsplit and vgmerge (convert to a local VG to do this)

.SS lvmlockd changes from clvmd
@@ -1,8 +1,19 @@
vgexport makes inactive VGs unknown to the system. In this state, all the
PVs in the VG can be moved to a different system, from which
\fBvgimport\fP(8) can then be run.
vgexport changes a VG into the exported state, which ensures that the VG
and its disks are not being used, and cannot be used until the VG is
imported by \fBvgimport\fP(8). Putting a VG into an unusable, offline
state can be useful when doing things like moving a VG's disks to another
system. Exporting a VG provides some protection from its LVs being
accidentally used, or being used by an automated system before it's ready.

Most LVM tools ignore exported VGs.
A VG cannot be exported until all of its LVs are inactive.

LVM commands will ignore an exported VG or report an error if a command
tries to use it.

For an exported VG, the vgs command will display "x" in the third VG
attribute, and the pvs command will display "x" in the second PV
attribute. Both vgs and pvs will display "exported" from the export
report field.

vgexport clears the VG system ID, and vgimport sets the VG system ID to
match the host running vgimport (if the host has a system ID).

@@ -8,11 +8,22 @@ vgexport - Unregister volume group(s) from the system
[ \fIoption_args\fP ]
.br
.SH DESCRIPTION
vgexport makes inactive VGs unknown to the system. In this state, all the
PVs in the VG can be moved to a different system, from which
\fBvgimport\fP(8) can then be run.
vgexport changes a VG into the exported state, which ensures that the VG
and its disks are not being used, and cannot be used until the VG is
imported by \fBvgimport\fP(8). Putting a VG into an unusable, offline
state can be useful when doing things like moving a VG's disks to another
system. Exporting a VG provides some protection from its LVs being
accidentally used, or being used by an automated system before it's ready.

Most LVM tools ignore exported VGs.
A VG cannot be exported until all of its LVs are inactive.

LVM commands will ignore an exported VG or report an error if a command
tries to use it.

For an exported VG, the vgs command will display "x" in the third VG
attribute, and the pvs command will display "x" in the second PV
attribute. Both vgs and pvs will display "exported" from the export
report field.

vgexport clears the VG system ID, and vgimport sets the VG system ID to
match the host running vgimport (if the host has a system ID).
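The new man-page text above describes how the exported flag surfaces in report output: "x" in the third character of the vgs attribute string and the second character of the pvs attribute string. A small sketch of reading those positions; the sample attribute strings in the test are made up for illustration, not real command output:

```python
def vg_is_exported(vg_attr):
    """True if the vgs attribute string marks the VG as exported
    ("x" in the third position, per the man page above)."""
    return len(vg_attr) >= 3 and vg_attr[2] == "x"


def pv_is_exported(pv_attr):
    """True if the pvs attribute string marks the PV as exported
    ("x" in the second position, per the man page above)."""
    return len(pv_attr) >= 2 and pv_attr[1] == "x"
```

In practice the `exported` report field (e.g. `vgs -o exported`) is the more robust way to query this than parsing attribute positions.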
scripts/code-stats.rb (new executable file, 90 lines)
@@ -0,0 +1,90 @@
#! /usr/bin/env ruby

require 'date'
require 'pp'
require 'set'

REGEX = /(\w+)\s+'(.+)'\s+(.*)/

Commit = Struct.new(:hash, :time, :author, :stats)
CommitStats = Struct.new(:files, :nr_added, :nr_deleted)

def calc_stats(diff)
  changed = Set.new
  added = 0
  deleted = 0

  diff.lines.each do |l|
    case l.encode('UTF-8', 'binary', invalid: :replace, undef: :replace, replace: '')
    when /^\+\+\+ (\S+)/
      changed << $1
    when /^\+/
      added = added + 1
    when /^---/
      # do nothing
    when /^\-/
      deleted = deleted + 1
    end
  end

  CommitStats.new(changed, added, deleted)
end

def select_commits(&block)
  commits = []

  input = `git log --format="%h '%aI' %an"`
  input.lines.each do |l|
    m = REGEX.match(l)

    raise "couldn't parse: #{l}" unless m

    hash = m[1]
    time = DateTime.iso8601(m[2])
    author = m[3]

    if block.call(hash, time, author)
      diff = `git log -1 -p #{hash} | filterdiff -X configure`
      commits << Commit.new(hash, time, author, calc_stats(diff))
    end
  end

  commits
end

def since(date)
  lambda do |hash, time, author|
    time >= date
  end
end

def pad(str, col)
  str + (' ' * (col - str.size))
end

def code_delta(s)
  s.nr_added + s.nr_deleted
end

def cmp_stats(lhs, rhs)
  code_delta(rhs) <=> code_delta(lhs)
end

#-----------------------------------

commits = select_commits(&since(DateTime.now - 14))

authors = Hash.new {|hash, key| hash[key] = CommitStats.new(Set.new, 0, 0)}

commits.each do |c|
  author_stats = authors[c.author]
  author_stats.files.merge(c.stats.files)
  author_stats.nr_added = author_stats.nr_added + c.stats.nr_added
  author_stats.nr_deleted = author_stats.nr_deleted + c.stats.nr_deleted
end

puts "#{pad("Author", 20)}\tChanged files\tInsertions\tDeletions"
authors.keys.sort {|a1, a2| cmp_stats(authors[a1], authors[a2])}.each do |k|
  v = authors[k]
  puts "#{pad(k, 20)}\t#{v.files.size}\t\t#{v.nr_added}\t\t#{v.nr_deleted}"
end
|
||||
CXXSOURCES = lib/runner.cpp
|
||||
CXXFLAGS += $(EXTRA_EXEC_CFLAGS)
|
||||
|
||||
CLEAN_DIRS += dbus/__pycache__ $(LVM_TEST_RESULTS)
|
||||
ifneq (.,$(firstword $(srcdir)))
|
||||
CLEAN_TARGETS += $(RUN_BASE) $(addprefix lib/,$(LIB_LVMLOCKD_CONF))
|
||||
endif
|
||||
|
||||
CLEAN_TARGETS += .lib-dir-stamp .tests-stamp $(LIB) $(addprefix lib/,\
|
||||
$(CMDS) clvmd dmeventd dmsetup dmstats lvmetad lvmpolld \
|
||||
harness lvmdbusd.profile thin-performance.profile fsadm \
|
||||
dm-version-expected version-expected \
|
||||
paths-installed paths-installed-t paths-common paths-common-t)
|
||||
|
||||
|
||||
include $(top_builddir)/make.tmpl
|
||||
|
||||
T ?= .
|
||||
@ -83,6 +95,7 @@ help:
|
||||
@echo " check_lvmlockd_sanlock Run tests with lvmlockd and sanlock."
|
||||
@echo " check_lvmlockd_dlm Run tests with lvmlockd and dlm."
|
||||
@echo " check_lvmlockd_test Run tests with lvmlockd --test."
|
||||
@echo " check_lvmlockd_test_lvmetad Run tests with lvmlockd --test and lvmetad."
|
||||
@echo " run-unit-test Run only unit tests (root not needed)."
|
||||
@echo " clean Clean dir."
|
||||
@echo " help Display callable targets."
|
||||
@ -191,6 +204,13 @@ check_lvmlockd_test: .tests-stamp
|
||||
--flavours udev-lvmlockd-test --only $(T) --skip $(S)
|
||||
endif
|
||||
|
||||
ifeq ("@BUILD_LVMLOCKD@", "yes")
|
||||
check_lvmlockd_test_lvmetad: .tests-stamp
|
||||
VERBOSE=$(VERBOSE) ./lib/runner \
|
||||
--testdir . --outdir results \
|
||||
--flavours udev-lvmlockd-test-lvmetad --only $(T) --skip $(S)
|
||||
endif
|
||||
|
||||
run-unit-test unit-test:
|
||||
$(MAKE) -C unit $(@)
|
||||
|
||||
@ -212,6 +232,7 @@ LIB_FLAVOURS = \
|
||||
flavour-udev-lvmlockd-sanlock\
|
||||
flavour-udev-lvmlockd-dlm\
|
||||
flavour-udev-lvmlockd-test\
|
||||
flavour-udev-lvmlockd-test-lvmetad\
|
||||
flavour-udev-vanilla
|
||||
|
||||
LIB_LVMLOCKD_CONF = \
|
||||
@ -352,17 +373,6 @@ LIB = $(addprefix lib/, $(LIB_SHARED) $(LIB_LOCAL) $(LIB_NOT) $(LIB_LINK_NOT) $(
|
||||
$(LN_S) -f $(abs_top_srcdir)/test/lib/$$i lib/; done
|
||||
touch $@
|
||||
|
||||
CLEAN_DIRS += $(LVM_TEST_RESULTS)
|
||||
ifneq (.,$(firstword $(srcdir)))
|
||||
CLEAN_TARGETS += $(RUN_BASE) $(addprefix lib/,$(LIB_LVMLOCKD_CONF))
|
||||
endif
|
||||
|
||||
CLEAN_TARGETS += .lib-dir-stamp .tests-stamp $(LIB) $(addprefix lib/,\
|
||||
$(CMDS) clvmd dmeventd dmsetup dmstats lvmetad lvmpolld \
|
||||
harness lvmdbusd.profile thin-performance.profile fsadm \
|
||||
dm-version-expected version-expected \
|
||||
paths-installed paths-installed-t paths-common paths-common-t)
|
||||
|
||||
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
|
||||
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
|
||||
|
||||
|
@ -38,6 +38,8 @@ SOURCES2 = \
|
||||
|
||||
endif
|
||||
|
||||
PYTEST = python_lvm_unit.py
|
||||
|
||||
include $(top_builddir)/make.tmpl
|
||||
|
||||
DEFS += -D_REENTRANT
|
||||
@ -51,6 +53,9 @@ LIBS += @LVM2APP_LIB@ $(DMEVENT_LIBS)
|
||||
%.t: %.o $(DEPLIBS)
|
||||
$(CC) -o $@ $(<) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) $(LIBS)
|
||||
|
||||
all:
|
||||
test -x $(PYTEST) || chmod 755 $(PYTEST)
|
||||
|
||||
test: $(OBJECTS) $(DEPLIBS)
|
||||
$(CC) -o $@ $(OBJECTS) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) $(LIBS) $(READLINE_LIBS)
|
||||
|
||||
|
@@ -31,7 +31,11 @@ aux prepare_dmeventd

#Locate the python binding library to use.
if [[ -n "${abs_top_builddir+varset}" ]]; then
python_lib=($(find "$abs_top_builddir" -name lvm*.so))
# For python2 look for lvm.so, python3 uses some lengthy names
case "$(head -1 $(which python_lvm_unit.py) )" in
*2) python_lib=($(find "$abs_top_builddir" -name lvm.so)) ;;
*) python_lib=($(find "$abs_top_builddir" -name lvm*gnu.so)) ;;
esac
if [[ ${#python_lib[*]} -ne 1 ]]; then
if [[ ${#python_lib[*]} -gt 1 ]]; then
# Unable to test python bindings if multiple libraries found:
@@ -58,9 +62,9 @@ aux prepare_pvs 6
PY_UNIT_PVS=$(cat DEVICES)
export PY_UNIT_PVS

python_lvm_unit.py -v -f TestLvm.test_lv_persistence
exit
#python_lvm_unit.py -v -f
#When needed to run 1 single individual python test
#python_lvm_unit.py -v -f TestLvm.test_lv_persistence
#exit

# Run individual tests for shorter error trace
for i in \
@@ -112,7 +112,7 @@ class TestLvm(unittest.TestCase):
         for d in device_list:
             vg.extend(d)

-        vg.createLvLinear(name, vg.getSize() / 2)
+        vg.createLvLinear(name, vg.getSize() // 2)
         vg.close()
         vg = None

@@ -124,14 +124,14 @@ class TestLvm(unittest.TestCase):
             vg.extend(d)

         vg.createLvThinpool(
-            pool_name, vg.getSize() / 2, 0, 0, lvm.THIN_DISCARDS_PASSDOWN, 1)
+            pool_name, vg.getSize() // 2, 0, 0, lvm.THIN_DISCARDS_PASSDOWN, 1)
         return vg

     @staticmethod
     def _create_thin_lv(pv_devices, name):
         thin_pool_name = 'thin_vg_pool_' + rs(4)
         vg = TestLvm._create_thin_pool(pv_devices, thin_pool_name)
-        vg.createLvThin(thin_pool_name, name, vg.getSize() / 8)
+        vg.createLvThin(thin_pool_name, name, vg.getSize() // 8)
         vg.close()
         vg = None

@@ -231,7 +231,7 @@ class TestLvm(unittest.TestCase):
         curr_size = pv.getSize()
         dev_size = pv.getDevSize()
         self.assertTrue(curr_size == dev_size)
-        pv.resize(curr_size / 2)
+        pv.resize(curr_size // 2)
         with AllowedPVS() as pvs:
             pv = pvs[0]
             resized_size = pv.getSize()

@@ -718,7 +718,7 @@ class TestLvm(unittest.TestCase):
     def test_percent_to_float(self):
         self.assertEqual(lvm.percentToFloat(0), 0.0)
         self.assertEqual(lvm.percentToFloat(1000000), 1.0)
-        self.assertEqual(lvm.percentToFloat(1000000 / 2), 0.5)
+        self.assertEqual(lvm.percentToFloat(1000000 // 2), 0.5)

     def test_scan(self):
         self.assertEqual(lvm.scan(), None)
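All of the `/` → `//` changes above exist for the same reason. A minimal sketch of the Python 3 division semantics (the size value here is an arbitrary example, not taken from the tests):

```python
# Python 3 "/" is true division and always returns a float;
# "//" is floor division and keeps ints as ints (Python 2 "/" on two
# ints behaved like "//", which is why the old code worked there).
size = 1000000  # arbitrary example value, e.g. a size in bytes

half_true = size / 2    # 500000.0 (float)
half_floor = size // 2  # 500000   (int)

assert half_true == 500000.0 and isinstance(half_true, float)
assert half_floor == 500000 and isinstance(half_floor, int)
```

APIs that expect an integer extent or byte count, as the lvm bindings do, would reject or silently misuse the float, hence the mechanical `//` conversion throughout the test suite.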
@@ -28,7 +28,7 @@ vg_t vg;
 const char *vg_name;
 #define MAX_DEVICES 16
 const char *device[MAX_DEVICES];
-uint64_t size = 1024;
+uint64_t size = 4096;

 #define vg_create(vg_name) \
 	printf("Creating VG %s\n", vg_name); \
@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/python3

 # Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
 #
@@ -1027,7 +1027,7 @@ class TestDbusService(unittest.TestCase):
         vg.Move(
             dbus.ObjectPath(location),
             dbus.Struct((0, 0), signature='tt'),
-            dbus.Array([(dst, pv.PeCount / 2, 0), ], '(ott)'),
+            dbus.Array([(dst, pv.PeCount // 2, 0), ], '(ott)'),
             dbus.Int32(g_tmo),
             EOD))
         self.assertEqual(job, '/')

@@ -1320,7 +1320,7 @@ class TestDbusService(unittest.TestCase):

         original_size = pv.SizeBytes

-        new_size = original_size / 2
+        new_size = original_size // 2

         self.handle_return(
             pv.ReSize(

@@ -1454,7 +1454,7 @@ class TestDbusService(unittest.TestCase):

     @staticmethod
     def _write_some_data(device_path, size):
-        blocks = int(size / 512)
+        blocks = int(size // 512)
         block = bytearray(512)
         for i in range(0, 512):
             block[i] = i % 255

@@ -1481,7 +1481,7 @@ class TestDbusService(unittest.TestCase):
             interfaces=(LV_COMMON_INT, LV_INT, SNAPSHOT_INT, ))

         # Write some data to snapshot so merge takes some time
-        TestDbusService._write_some_data(ss.LvCommon.Path, ss_size / 2)
+        TestDbusService._write_some_data(ss.LvCommon.Path, ss_size // 2)

         job_path = self.handle_return(
             ss.Snapshot.Merge(

@@ -1873,10 +1873,14 @@ class TestDbusService(unittest.TestCase):
         # when run from lvm2 testsuite. See dbustest.sh.
         pv_object_path = self.objs[PV_INT][0].object_path

+        if not pv_object_path.startswith("/dev"):
+            std_err_print('Skipping test not running in /dev')
+            return
+
         for i in range(0, 5):
             pv_object_path = self._create_nested(pv_object_path)

-    def test_pv_symlinks(self):
+    def DISABLED_test_pv_symlinks(self):
         # Lets take one of our test PVs, pvremove it, find a symlink to it
         # and re-create using the symlink to ensure we return an object
         # path to it. Additionally, we will take the symlink and do a lookup
@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/python3

 # Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
 #
@@ -1,4 +1,4 @@
-#!/usr/bin/env python3
+#!/usr/bin/python3

 # Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
 #
@@ -755,11 +755,14 @@ prepare_md_dev() {
 	local coption="--chunk"
 	local maj
 	local mddev
+	local mddir="md/"
+	local mdname
+	local mddevdir

 	maj=$(mdadm --version 2>&1) || skip "mdadm tool is missing!"

 	cleanup_md_dev

-	rm -f debug.log strace.log MD_DEV MD_DEV_PV MD_DEVICES
+	rm -f debug.log strace.log

 	case "$level" in
 	"1") coption="--bitmap-chunk" ;;

@@ -770,9 +773,11 @@ prepare_md_dev() {
 	# - newer mdadm _completely_ defers to udev to create the associated device node
 	maj=${maj##*- v}
 	maj=${maj%%.*}
-	[ "$maj" -ge 3 ] && \
-		mddev=/dev/md/md_lvm_test0 || \
-		mddev=/dev/md_lvm_test0
+	[ "$maj" -ge 3 ] || mddir=""
+
+	mdname="md_lvm_test0"
+	mddev="/dev/${mddir}$mdname"
+	mddevdir="$DM_DEV_DIR/$mddir"

 	mdadm --create --metadata=1.0 "$mddev" --auto=md --level "$level" $with_bitmap "$coption"="$rchunk" --raid-devices="$rdevs" "${@:4}" || {
 		# Some older 'mdadm' version managed to open and close devices internaly

@@ -791,10 +796,11 @@ prepare_md_dev() {

 	# LVM/DM will see this device
 	case "$DM_DEV_DIR" in
-	"/dev") readlink -f "$mddev" ;;
-	*) cp -LR "$mddev" "$DM_DEV_DIR"
-	   echo "$DM_DEV_DIR/md_lvm_test0" ;;
-	esac > MD_DEV_PV
+	"/dev") readlink -f "$mddev" > MD_DEV_PV ;;
+	*) mkdir -p "$mddevdir"
+	   cp -LR "$mddev" "$mddevdir"
+	   echo "${mddevdir}${mdname}" > MD_DEV_PV ;;
+	esac
 	echo "$mddev" > MD_DEV
 	notify_lvmetad "$(< MD_DEV_PV)"
 	printf "%s\n" "${@:4}" > MD_DEVICES

@@ -809,12 +815,14 @@ cleanup_md_dev() {
 	local IFS=$IFS_NL
 	local dev
 	local mddev
+	local mddev_pv

 	mddev=$(< MD_DEV)
+	mddev_pv=$(< MD_DEV_PV)

 	udev_wait
 	mdadm --stop "$mddev" || true
-	test "$DM_DEV_DIR" != "/dev" && rm -f "$DM_DEV_DIR/$(basename "$mddev")"
-	notify_lvmetad "$(< MD_DEV_PV)"
+	notify_lvmetad "$mddev_pv"
+	udev_wait # wait till events are process, not zeroing to early
+	test "$DM_DEV_DIR" != "/dev" && rm -rf "${mddev_pv%/*}"
 	for dev in $(< MD_DEVICES); do
 		mdadm --zero-superblock "$dev" || true
 		notify_lvmetad "$dev"

@@ -843,7 +851,7 @@ prepare_backing_dev() {
 		return 0
 	elif test "${LVM_TEST_PREFER_BRD-1}" = "1" && \
 	     test ! -d /sys/block/ram0 && \
-	     kernel_at_least 4 16 && \
+	     kernel_at_least 4 16 0 && \
 	     test "$size" -lt 16384; then
 		# try to use ramdisk if possible, but for
 		# big allocs (>16G) do not try to use ramdisk

@@ -1153,7 +1161,7 @@ prepare_vg() {
 	teardown_devs

 	prepare_devs "$@"
-	vgcreate -s 512K "$vg" "${DEVICES[@]}"
+	vgcreate $SHARED -s 512K "$vg" "${DEVICES[@]}"
 }

 extend_filter() {

@@ -1167,7 +1175,7 @@ extend_filter() {
 }

 extend_filter_LVMTEST() {
-	extend_filter "a|$DM_DEV_DIR/$PREFIX|"
+	extend_filter "a|$DM_DEV_DIR/$PREFIX|" "$@"
 }

 hide_dev() {
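The two parameter expansions used above to extract the mdadm major version can be sketched standalone. The banner string here is an assumed example of a typical `mdadm --version` first line, not output captured from a real run:

```shell
#!/bin/sh
# Assumed example banner; in the aux script this comes from `mdadm --version 2>&1`.
maj="mdadm - v4.1 - 2018-10-01"

maj=${maj##*- v}   # strip longest prefix through "- v"  -> "4.1 - 2018-10-01"
maj=${maj%%.*}     # strip longest suffix from first "." -> "4"

echo "$maj"        # major version only, as used by the [ "$maj" -ge 3 ] check
```

Using only POSIX parameter expansion keeps the check dependency-free (no `sed`/`awk`), which matters in a test harness that probes for missing tools.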
@@ -1,5 +1,4 @@
 export LVM_TEST_LOCKING=1
-export LVM_TEST_LVMETAD=1
 export LVM_TEST_LVMPOLLD=1
 export LVM_TEST_LVMLOCKD=1
 export LVM_TEST_LOCK_TYPE_DLM=1

@@ -1,5 +1,4 @@
 export LVM_TEST_LOCKING=1
-export LVM_TEST_LVMETAD=1
 export LVM_TEST_LVMPOLLD=1
 export LVM_TEST_LVMLOCKD=1
 export LVM_TEST_LOCK_TYPE_SANLOCK=1
test/lib/flavour-udev-lvmlockd-test-lvmetad.sh (new file, 9 lines)

@@ -0,0 +1,9 @@
+export LVM_TEST_LOCKING=1
+export LVM_TEST_LVMETAD=1
+export LVM_TEST_LVMPOLLD=1
+export LVM_TEST_LVMLOCKD=1
+export LVM_TEST_LVMLOCKD_TEST=1
+export LVM_TEST_DEVDIR=/dev
+
+# FIXME:dct: add option to allow --test with sanlock
+export LVM_TEST_LVMLOCKD_TEST_DLM=1
@@ -1,5 +1,4 @@
 export LVM_TEST_LOCKING=1
-export LVM_TEST_LVMETAD=1
 export LVM_TEST_LVMPOLLD=1
 export LVM_TEST_LVMLOCKD=1
 export LVM_TEST_LVMLOCKD_TEST=1
@@ -59,7 +59,7 @@ test -n "$SKIP_WITH_CLVMD" && test "$LVM_TEST_LOCKING" = 3 && initskip
 test -n "$SKIP_WITHOUT_LVMETAD" && test -z "$LVM_TEST_LVMETAD" && initskip
 test -n "$SKIP_WITH_LVMETAD" && test -n "$LVM_TEST_LVMETAD" && initskip

-test -n "$SKIP_WITH_LVMPOLLD" && test -n "$LVM_TEST_LVMPOLLD" && initskip
+test -n "$SKIP_WITH_LVMPOLLD" && test -n "$LVM_TEST_LVMPOLLD" && test -z "$LVM_TEST_LVMLOCKD" && initskip

 test -n "$SKIP_WITH_LVMLOCKD" && test -n "$LVM_TEST_LVMLOCKD" && initskip

@@ -172,6 +172,8 @@ test -n "$LVM_TEST_LVMPOLLD" && {
 	aux prepare_lvmpolld
 }

+export SHARED=""
+
 if test -n "$LVM_TEST_LVMLOCKD" ; then
 	if test -n "$LVM_TEST_LOCK_TYPE_SANLOCK" ; then
 		aux lvmconf 'local/host_id = 1'
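How the deliberately unquoted `$SHARED` behaves in the many `vgcreate $SHARED ...` calls in this diff can be sketched as follows. The `--shared` assignment is an assumption about what the lvmlockd branch of inittest sets; that assignment is outside the hunks shown above:

```shell
#!/bin/sh
# Empty default (as exported above): the unquoted word expands to nothing
# and simply disappears from the argument list.
SHARED=""
set -- vgcreate $SHARED -s 512K vg /dev/sda
echo "$#"   # 5 words: vgcreate -s 512K vg /dev/sda

# Assumed lvmlockd setting: one extra argument appears in every call.
SHARED="--shared"
set -- vgcreate $SHARED -s 512K vg /dev/sda
echo "$#"   # 6 words: vgcreate --shared -s 512K vg /dev/sda
```

This is why the variable must stay unquoted at the call sites: `"$SHARED"` would pass an empty-string argument when the variable is empty, whereas the bare expansion vanishes entirely.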
@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -17,7 +17,7 @@
 # instead lvconvert --repair them?)
 # - linear LVs with bits missing are not activated

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -17,7 +17,7 @@
 # instead lvconvert --repair them?)
 # - linear LVs with bits missing are not activated

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_CLVMD=1
 SKIP_WITH_LVMPOLLD=1
@@ -12,7 +12,7 @@

 # Exercise usage of metadata2 cache metadata format

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 # Until new version of cache_check tools - no integrity validation

@@ -12,7 +12,7 @@

 # Exercise activation of cache component devices

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # Exercise activation of raid component devices

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # Exercise activation of thin component devices

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -15,7 +15,7 @@
 # to improve code coverage
 #

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -29,7 +29,7 @@ pvcreate --metadatacopies 0 "$dev3"
 # FIXME takes very long time
 #pvck "$dev1"

-vgcreate "$vg" "${DEVICES[@]}"
+vgcreate $SHARED "$vg" "${DEVICES[@]}"

 lvcreate -l 5 -i5 -I256 -n $lv $vg
 lvcreate -aey -l 5 -n $lv1 $vg
@@ -13,7 +13,7 @@
 # test support of thin discards
 #

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 export LVM_TEST_THIN_REPAIR_CMD=${LVM_TEST_THIN_REPAIR_CMD-/bin/false}

@@ -80,10 +80,10 @@ vgremove -ff $vg
 # device below does not support it, the kernel value
 # of discards actually used will be "nopassdown".
 # This is why we have "-o discards" and "-o kernel_discards".
-vgcreate -s 1m "${vg}_1" "${DEVICES[@]}"
+vgcreate $SHARED -s 1m "${vg}_1" "${DEVICES[@]}"
 lvcreate -l 10 -T ${vg}_1/pool --discards ignore
 lvcreate -V 9m -T ${vg}_1/pool -n device_with_ignored_discards
-vgcreate -s 1m ${vg}_2 "$DM_DEV_DIR/${vg}_1/device_with_ignored_discards"
+vgcreate $SHARED -s 1m ${vg}_2 "$DM_DEV_DIR/${vg}_1/device_with_ignored_discards"
 lvcreate -l 1 -T ${vg}_2/pool --discards passdown
 lvcreate -V 1 -T ${vg}_2/pool
 check lv_field ${vg}_1/pool discards "ignore"
@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,6 +12,7 @@

 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

 # Don't attempt to test stats with driver < 4.33.00

@@ -12,6 +12,7 @@

 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

 # Don't attempt to test stats with driver < 4.33.00

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # Basic usage of zero target

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -11,7 +11,7 @@
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 test_description='Exercise fsadm filesystem resize on crypt devices'
 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 # FIXME: cannot use brd (ramdisk) - lsblk is NOT listing it
@@ -11,7 +11,7 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

 test_description='Exercise fsadm operation on renamed device'
 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -11,7 +11,7 @@
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 test_description='Exercise fsadm filesystem resize'
 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -74,7 +74,7 @@ vgremove -f $vg
 pvremove -ff "${DEVICES[@]}"
 pvcreate "${DEVICES[@]}"
 aux backup_dev "$dev2"
-vgcreate $vg "$dev1"
+vgcreate $SHARED $vg "$dev1"
 vgextend $vg "$dev2"
 aux restore_dev "$dev2"
 vgscan $cache

@@ -14,7 +14,6 @@
 # tests functionality of lvs, pvs, vgs, *display tools
 #

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -40,7 +39,7 @@ pvdisplay
 #COMM pvs with segment attributes works even for orphans
 test "$(pvs --noheadings -o seg_all,pv_all,lv_all,vg_all "${DEVICES[@]}" | wc -l)" -eq 5

-vgcreate $vg "${DEVICES[@]}"
+vgcreate $SHARED $vg "${DEVICES[@]}"

 check pv_field "$dev1" pv_uuid BADBEE-BAAD-BAAD-BAAD-BAAD-BAAD-BADBEE

@@ -202,17 +201,17 @@ vgremove -ff $vg
 # all LVs active - VG considered active
 pvcreate "$dev1" "$dev2" "$dev3"

-vgcreate $vg1 "$dev1"
+vgcreate $SHARED $vg1 "$dev1"
 lvcreate -l1 $vg1
 lvcreate -l1 $vg1

 # at least one LV active - VG considered active
-vgcreate $vg2 "$dev2"
+vgcreate $SHARED $vg2 "$dev2"
 lvcreate -l1 $vg2
 lvcreate -l1 -an -Zn $vg2

 # no LVs active - VG considered inactive
-vgcreate $vg3 "$dev3"
+vgcreate $SHARED $vg3 "$dev3"
 lvcreate -l1 -an -Zn $vg3
 lvcreate -l1 -an -Zn $vg3
@@ -11,7 +11,7 @@
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 test_description='test some blocking / non-blocking multi-vg operations'
 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_CLVMD=1
 SKIP_WITH_LVMPOLLD=1

@@ -19,7 +19,7 @@ SKIP_WITH_LVMPOLLD=1

 aux prepare_devs 3
 pvcreate "$dev1" "$dev2"
-vgcreate $vg "$dev1" "$dev2"
+vgcreate $SHARED $vg "$dev1" "$dev2"

 # if wait_for_locks set, vgremove should wait for orphan lock
 # flock process should have exited by the time first vgremove completes

@@ -33,7 +33,7 @@ test ! -f "$TESTDIR/var/lock/lvm/P_orphans"

 # if wait_for_locks not set, vgremove should fail on non-blocking lock
 # we must wait for flock process at the end - vgremove won't wait
-vgcreate $vg "$dev1" "$dev2"
+vgcreate $SHARED $vg "$dev1" "$dev2"
 flock -w 5 "$TESTDIR/var/lock/lvm/P_orphans" sleep 10 &

 while ! test -f "$TESTDIR/var/lock/lvm/P_orphans" ; do sleep .1 ; done

@@ -13,7 +13,7 @@
 # Test parallel use of lvm commands and check locks aren't dropped
 # RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1049296

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # Check how lvm2 handles partitions over losetup -P devices

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -37,7 +37,7 @@ aux extend_filter "a|$LOOP|"

 # creation should fail for 'partitioned' loop device
 not pvcreate -y "$LOOP"
-not vgcreate vg "$LOOP"
+not vgcreate $SHARED vg "$LOOP"

 aux teardown_devs

@@ -61,4 +61,4 @@ aux extend_filter "a|$LOOP|"
 # creation should pass for 'non-partitioned' loop device
 pvcreate -y "$LOOP"

-vgcreate vg "$LOOP"
+vgcreate $SHARED vg "$LOOP"
@@ -12,6 +12,7 @@

 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

 aux have_thin 1 0 0 || skip

@@ -20,7 +21,7 @@ get_devs

 aux lvmconf "metadata/record_lvs_history=1"

-vgcreate -s 64K "$vg" "${DEVICES[@]}"
+vgcreate $SHARED -s 64K "$vg" "${DEVICES[@]}"

 lvcreate -l100%FREE -T ${vg}/pool

@@ -12,7 +12,7 @@

 # Exercise changing of caching mode on both cache pool and cached LV.

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # Exercise usage of older metadata which are missing some new settings

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -24,7 +24,7 @@ aux prepare_vg 5 80

 lvcreate -l 10 --type cache-pool $vg/cpool
-lvcreate -l 20 -H -n $lv1 $vg/cpool
+lvcreate -l 20 -H -n $lv1 --cachepool $vg/cpool $vg

 vgcfgbackup -f backup $vg
@@ -12,7 +12,7 @@

 # test activation race for raid's --syncaction check

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # FIXME RESYNC doesn't work in cluster with exclusive activation
 # seriously broken!
 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_CLVMD=1
 SKIP_WITH_LVMPOLLD=1

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -10,7 +10,7 @@
 # along with this program; if not, write to the Free Software Foundation,
 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1

 . lib/inittest

@@ -12,7 +12,7 @@

 # test activation race for raid's --syncaction check

 SKIP_WITH_LVMLOCKD=1
 SKIP_WITH_LVMPOLLD=1
Some files were not shown because too many files have changed in this diff.