Tagging aims
============
1) Ability to attach an unordered list of tags to LVM metadata objects.
2) Ability to add or remove tags easily.
3) Ability to select LVM objects for processing according to presence/absence
of specific tags.
4) Ability to control through the config file which VGs/LVs are activated
on different machines using names or tags.
5) Ability to overlay settings from different config files, e.g. to override
some settings of a global config file locally.
Clarifications
==============
1) Tag character set: A-Za-z0-9_+.-
Can't start with hyphen & max length is 128 (NAME_LEN).
2) LVM object types that can be tagged:
VG, LV, LV segment
PV - tags are stored in VG metadata so disappear when PV becomes orphaned
Snapshots can't be tagged, but their origin may be.
3) A tag can be used in place of any command line LVM object reference that
accepts (a) a list of objects; or (b) a single object, as long as the
tag expands to a single object. This is not supported everywhere yet.
Duplicate arguments in a list may be removed after argument expansion,
retaining the first copy of each argument.
4) Wherever there may be ambiguity of argument type, a tag must be prefixed
by '@'; elsewhere an '@' prefix is optional.
5) LVM1 objects cannot be tagged, as the disk format doesn't support it.
6) Tags can be added or removed with --addtag or --deltag (see the sketch below).
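A minimal sketch of items 3, 4 and 6; the tag name "easy" and the PV path
are hypothetical:

    vgchange --addtag @easy vg1        # attach tag "easy" to vg1's metadata
    pvchange --addtag @easy /dev/sda1  # PV tag, stored in the owning VG's metadata
    vgchange -ay @easy                 # '@' required: a bare name would be read as a VG
    vgchange --deltag @easy vg1        # detach the tag again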
Config file Extensions
======================
To define host tags in the config file:
    tags {
        # Set a tag with the hostname
        hosttags = 1

        # tag1 has no host_list, so it is set on every host.
        tag1 { }

        # If there is no exact hostname match in host_list, the tag is not set.
        tag2 {
            host_list = [ "hostname", "dbase" ]
        }
    }
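For illustration (assuming the host's hostname is "dbase"), the section
above results in these host tags being set:

    # dbase   - from hosttags = 1
    # tag1    - no host_list, so set unconditionally
    # tag2    - "dbase" appears in its host_list

On a host not named in host_list, only its hostname tag and tag1 would be set.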
Activation config file example
==============================
    activation {
        volume_list = [ "vg1/lvol0", "@database" ]
    }
Matches against vgname, vgname/lvname or @tag set in *metadata*.
"@*" matches if any tag set in the metadata also matches a tag set on the host.
The VG or LV only gets activated if a metadata tag matches an entry in the list.
The default if there is no match is not to activate.
If volume_list is not present and any tags are defined on the host,
then it only activates if a host tag matches a metadata tag.
If volume_list is not present and no tags are defined on the host,
then it always activates.
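A minimal sketch of how an LV comes to match the "@database" entry above
(the LV name vg1/lvol1 is hypothetical):

    lvchange --addtag @database vg1/lvol1   # metadata tag now matches volume_list
    lvchange -ay vg1/lvol1                  # activation now permitted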
Multiple config files
=====================
(a) lvm.conf
(b) lvm_<host_tag>.conf
At startup, load lvm.conf.
Process tag settings.
If any host tags were defined, load lvm_<tag>.conf for each tag, if present.
When searching for a specific config file entry, search order is (b)
then (a), stopping at the first match.
Within (b), search in reverse order of tag setting, so the file for the
most recently set tag is searched first.
New tags set in (b) *do* trigger additional config file loads.
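For example, assuming the default /etc/lvm configuration directory on a host
where tag db1 was set first and tag database second (both tag names
hypothetical), a lookup would consult:

    /etc/lvm/lvm_database.conf   # (b) last tag set, searched first
    /etc/lvm/lvm_db1.conf        # (b) earlier tag, searched next
    /etc/lvm/lvm.conf            # (a) searched last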
Usage Examples
==============
1) Simple activation control via metadata with static config files
lvm.conf: (Identical on every machine - global settings)
    tags {
        hosttags = 1
    }
From any machine in the cluster, add db1 to the list of machines that
activate vg1/lvol2:
    lvchange --addtag @db1 vg1/lvol2
(followed by lvchange -ay to actually activate it)
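A sketch of that final step, run on host db1 (whose host tag "db1" now
matches the metadata tag):

    lvchange -ay vg1/lvol2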
2) Multiple hosts.
Activate vg1 only on the database hosts, db1 and db2.
Activate vg2 only on the fileserver host fs1.
Activate nothing initially on the fileserver backup host fsb1, but be
prepared for it to take over from fs1.
Option (i) - centralised admin, static configuration replicated between hosts
    # Add @database tag to vg1's metadata
    vgchange --addtag @database vg1
    # Add @fileserver tag to vg2's metadata
    vgchange --addtag @fileserver vg2
lvm.conf: (Identical on every machine)
    tags {
        database {
            host_list = [ "db1", "db2" ]
        }
        fileserver {
            host_list = [ "fs1" ]
        }
        fileserverbackup {
            host_list = [ "fsb1" ]
        }
    }
    activation {
        # Only activate if host has a tag that matches a metadata tag
        volume_list = [ "@*" ]
    }
In the event of the fileserver host going down, vg2 can be brought up
on fsb1 by running *on any node* 'vgchange --addtag @fileserverbackup vg2'
followed by 'vgchange -ay vg2'
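A possible failback sketch once fs1 is repaired (this sequence is an
assumption, not prescribed by the setup above):

    vgchange -an vg2                          # on fsb1: release vg2
    vgchange --deltag @fileserverbackup vg2   # on any node: drop the takeover tag
    vgchange -ay vg2                          # on fs1: reactivates via @fileserver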
Option (ii) - localised admin & configuration
(i.e. each host holds *locally* which classes of volumes to activate)
    # Add @database tag to vg1's metadata
    vgchange --addtag @database vg1
    # Add @fileserver tag to vg2's metadata
    vgchange --addtag @fileserver vg2
lvm.conf: (Identical on every machine - global settings)
    tags {
        hosttags = 1
    }
lvm_db1.conf: (only needs to be on db1 - could be symlink to lvm_db.conf)
    activation {
        volume_list = [ "@database" ]
    }
lvm_db2.conf: (only needs to be on db2 - could be symlink to lvm_db.conf)
    activation {
        volume_list = [ "@database" ]
    }
lvm_fs1.conf: (only needs to be on fs1 - could be symlink to lvm_fs.conf)
    activation {
        volume_list = [ "@fileserver" ]
    }
If fileserver goes down, to bring a spare machine fsb1 in as fileserver,
create lvm_fsb1.conf on fsb1 (or symlink to lvm_fs.conf):
    activation {
        volume_list = [ "@fileserver" ]
    }
and run 'vgchange -ay vg2' or 'vgchange -ay @fileserver'.
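As a sketch of the symlink variant (assuming the default /etc/lvm
configuration directory):

    ln -s lvm_fs.conf /etc/lvm/lvm_fsb1.conf   # on fsb1: adopt the fileserver config
    vgchange -ay @fileserver                   # activates vg2 via its metadata tag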