736 Commits

SHA1 Message Date
0791eed33d report: cachevol field for writecache LVs 2018-07-24 17:00:06 -05:00
19637a8d72 Add dm-writecache support
Use dm-writecache to attach a cache volume (CV) to an LV.
The LV type becomes "writecache" (--type writecache can
be optionally specified when attaching.)

  lvconvert --cachevol-attach CV LV
  lvconvert --cachevol-detach CV LV

The cachevol (CV) is a special LV that was created on
cache devices, e.g.

  lvcreate --cachevol -L <size> -n <name> VG

Cache devices are special PVs that are added to the
VG to use for cachevols, e.g.

  vgextend --cachedev VG PV

Example:

  Add persistent memory to a VG as a cache device:
  vgextend --cachedev vg /dev/pmem0

  Create a cache volume from pmem0:
  lvcreate --cachevol -L1G -n cvol0 vg

  Create a second cache volume from pmem0:
  lvcreate --cachevol -L1G -n cvol1 vg

  Attach the cache volumes to LVs:
  lvconvert --cachevol-attach cvol0 vg/lv0
  lvconvert --cachevol-attach cvol1 vg/lv1
2018-07-24 15:46:06 -05:00
4fc2bdb0ba Add cache volumes
A cache volume (CV) is a special LV that can be
attached to a standard LV to do caching.

A cache volume is allocated from cache devices (CDs)
which are special PVs in the VG that are used for
caching.

To create a cache volume, add the --cachevol option
to the lvcreate command.  lvcreate will allocate
extents for the CV from cache devs that have been
added to the VG.

  lvcreate --cachevol -n <name> -L <size> VG [CD...]
2018-07-24 15:46:06 -05:00
7ac01dc7fa Add cache devices
A cache device is a special PV that is added to a VG
to be used for caching, and not for allocating standard LVs.

A cache device (CD) is used to create a cache volume.
A cache volume (CV) is a special LV that can be attached
to a standard LV to do caching.

Terminology:

PV used for caching = cache device = cachedev = CD
LV used for caching = cache volume = cachevol = CV

PV metadata for a cachedev includes the CACHEDEV flag.
LV metadata for a cachevol includes the CACHEVOL flag.

A cachedev is added to a VG with:

  vgextend --cachedev VG PV

The --cachedev option tells lvm that the PV should
be added to the VG as a cache device, not a standard PV.

(cache volumes are added by the following commit)
2018-07-24 15:46:06 -05:00
2f43f0393e test: new lvcreate-raid1-error-read.sh
Test for MD RAID kernel bug in read_balance() preventing
reads of failed sectors to get rescheduled to another leg.
2018-07-24 20:41:26 +02:00
706606627f spec: Fix conditional 2018-07-24 16:22:23 +02:00
5ff18f51b9 build: Update configure 2018-07-24 16:17:42 +02:00
279f3bfdc0 spec: Add vdo files 2018-07-24 15:41:30 +02:00
97506a7e2a build: Remove lvmetad leftovers 2018-07-24 15:02:32 +02:00
7709b70f97 spec: Remove unsupported config options 2018-07-24 15:00:12 +02:00
86c3940537 spec: Remove python bindings 2018-07-24 14:55:32 +02:00
bf4be80669 spec: Remove lvmetad 2018-07-24 14:50:52 +02:00
2214dc12c3 lvconvert: reject conversions of LVs under snapshot
Conversions of LVs under snapshot to thinpool or cachepool
correctly fail but leave them inactive and provide cryptic
error messages like 'Internal error: #LVs (10) != #visible
LVs (2) + #snapshots (1) + #internal LVs (5) in VG VG'.

Reject such conversions and provide a better error message.

Resolves: rhbz1514146
2018-07-23 19:35:34 +02:00
778ce8d808 lvconvert: improve text about splitmirrors
in messages and man page.
2018-07-23 12:28:48 -05:00
8a66c81b9b lvconvert: restrict command matching for no option variant
The 'lvconvert LV' command def has caused multiple problems
for command matching because it matches the required options
of any lvconvert command.  Any lvconvert with incorrect options
ends up matching 'lvconvert LV', which then produces an error
about incorrect options being used for 'lvconvert LV'.  This
prevents suggestions from nearest-command partial command matches.

Add a special case for 'lvconvert LV' so that it won't be used
as a partial match for a command that has options specified.
2018-07-23 11:12:38 -05:00
63ec42f428 tests: remove lvmetad tests 2018-07-11 11:27:54 -05:00
117160b27e Remove lvmetad
Native disk scanning is now both reduced and
async/parallel, which makes it comparable in
performance to (and often faster than) lvm
using lvmetad.

Autoactivation now uses local temp files to record
online PVs, and no longer requires lvmetad.

There should be no apparent command-level change
in behavior.
2018-07-11 11:26:42 -05:00
edf3f86184 tests: fix mkdir pvs_online 2018-07-10 14:19:46 -05:00
06439a2562 tests: autoactivation tests for use without lvmetad
Adjust a few lvmetad pvscan/autoactivation tests to be
used without lvmetad, and add a test to cover some cases
that have not been tested before.
2018-07-10 10:49:34 -05:00
db741e75a2 pvscan: autoactivate without lvmetad
When lvmetad is not used, use temporary files to record
which PVs have appeared.  Use these temp files to determine
when a VG is complete, to trigger autoactivation.

This change allows us to remove lvmetad while keeping the
same autoactivation behavior that lvmetad provides.

The temp files are created in /run/lvm/pvs_online/ and are
named for the PVID of the PV.  The files contain the
major:minor of the device the PV was read from.

e.g. if VG foo has dev1 and dev2, then:

. pvscan --cache -aay dev1
  reads vg metadata from dev1
  creates /run/lvm/pvs_online/<pvid-of-dev1>
  checks if all vg->pvs are online: no

. pvscan --cache -aay dev2
  reads vg metadata from dev2
  creates /run/lvm/pvs_online/<pvid-of-dev2>
  checks if all vg->pvs are online: yes
  autoactivates vg

A 'pvscan --cache dev' (without -aay) still records that
dev is online.

A 'pvscan --cache --major X --minor Y' after a device is
gone will remove the temp file for it.

A 'pvscan --cache [-aay]' (no devs) resets the state of
temp files by removing them all, then scanning all devs
and creating temp files for PVs that are found.

If no online files exist, the first pvscan --cache scans
all devs and creates temp files for any PVs found.

The scope of the temp files is only pvscan, and they are only
used for pvscan-based autoactivation.  No other commands are
concerned with or aware of these temp files.  When lvm creates
or removes PVs, no attempt is made to update the temp files.
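
For illustration only (the PVID and device numbers below are invented), the
state directory might look like this after both PVs of a two-PV VG are scanned:

  # ls /run/lvm/pvs_online/
  4jYcrJ0sDkBcfBfyhwjLKHzEKrJHb3mH  W0JmBB1Ai3nSoEV2egsO5DJlP8ieTDvt
  # cat /run/lvm/pvs_online/4jYcrJ0sDkBcfBfyhwjLKHzEKrJHb3mH
  253:3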
2018-07-09 16:11:24 -05:00
c47655f231 tests: initial vdo tests
Basic functionality of lvcreate, lvchange.
2018-07-09 15:29:16 +02:00
faa126882a dmeventd: lvm vdo support 2018-07-09 15:29:16 +02:00
12213445b5 vgchange: vdo support
Support vgchange usage with VDO segtype.
Also, changing the extent size needs a small update for the vdo virtual extent.

TODO: API needs enhancements so it's not about adding ifs() everywhere.
2018-07-09 15:29:16 +02:00
7b8aa4af57 lvconvert: support to convert lv into vdopool
Support:

lvconvert --type vdo-pool  vg/lv

lvconvert --vdopool  vg/lv   --virtualsize 10G
2018-07-09 15:29:16 +02:00
6206bd0e79 lvchange: vdo support compression deduplication change
Add basic support for changing compression and deduplication state
of a VDO pool volume.

Also allow access to it via the top-level VDO volume.
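
A minimal usage sketch (the VG and LV names are hypothetical):

  lvchange --compression n vg/vpool0
  lvchange --deduplication y vg/vdo_lv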
2018-07-09 15:29:15 +02:00
c58733ca15 lvcreate: vdo support
Supports basic:  'lvcreate --vdo -LXXXG -VYYYG vg/vdoname -n lvname'
Allows creating a basic VDO pool volume and a virtual VDO volume.
2018-07-09 15:29:12 +02:00
6945bbdbc6 lvresize: vdo support
Unsupported ATM.

Wait until the VDO kernel target starts to use the updated resize sequence:
LOAD, SUSPEND, RESUME.
2018-07-09 15:28:35 +02:00
96e9929f2f args: new options for vdo segment
Introduce new options usable with commands supporting VDO:
 --compression, --deduplication, --vdo, --vdopool
2018-07-09 15:28:35 +02:00
a821b88a43 toollib: support new command rules queries
Add: LV_vdo, LV_vdopool, LV_vdopooldata
2018-07-09 15:28:35 +02:00
44c99a8822 vdo: data percentage
Display percentage of used virtual size of vdo-pool volume.
2018-07-09 15:28:35 +02:00
5807993bbf display: basic vdo segment lvdisplay and lvs support
Print some basic info about vdo segment.

'lvdisplay -m' ATM shows the most.
lvs  shows usage percentage.
2018-07-09 15:28:35 +02:00
4f708e8709 dev_manager: add dev_manager_vdo_pool_status 2018-07-09 15:28:35 +02:00
493ffe7a0f lv_manip: layout and role support for vdo segment 2018-07-09 15:28:35 +02:00
00990ed53e check_lv_segment: internal vdo segment validation
Check if settings for vdo segment are correct.
2018-07-09 15:28:35 +02:00
0dafd159a8 vdo_manip: parsing status of VDO device 2018-07-09 15:28:35 +02:00
aa63dfbe39 vdo: support functions to map enums to string names
Translate VDO enums to printable strings.
2018-07-09 15:28:35 +02:00
aff69ecf39 vdo: component activation of VDO data LV
Allow component activation of VDO data LV.
2018-07-09 15:28:35 +02:00
4b7a57c9ed vdo: with created names use vpool
When a user creates a vdo-pool, use a different automatic name.
So unlike traditional LVs, which use lvol0, lvol1,
use vpool0, vpool1...

TODO: apply similar for thin-pool  & cache-pool...
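
A hedged sketch of the naming (VG name and sizes are hypothetical):

  lvcreate --vdo -L10G -V100G vg
  # the vdo-pool LV gets an automatic name such as vpool0 instead of lvol0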
2018-07-09 15:28:35 +02:00
a8f84f7801 vdo: introduce segment types and manip functions
Core functionality introducing lvm VDO support.
2018-07-09 15:28:35 +02:00
c66a960149 build: install VDO small allocation profile
Profile shows all VDO configurables.

Usable with: lvcreate --metadataprofile vdo-small ...
2018-07-09 15:28:35 +02:00
d8a41f22e9 device_mapper: basic support for vdo dm target 2018-07-09 15:28:35 +02:00
0d9a4c6989 lib: new vdo segment configurable options
Configurables for the vdo segment with their default values.
Also specify their ranges with minimum and maximum values.
2018-07-09 15:28:35 +02:00
4a90b0c4c9 build: add vdo configuration option --with-vdo=
Checks whether VDO support is enabled.
Detects presence of the 'vdoformat' tool, which is required to format a VDO pool.

ATM build of VDO is NOT automatically enabled (None is default).
To enable build of LVM with VDO support use:

configure --with-vdo=internal

TODO: Maybe a future version will switch to linking some small VDO library for formatting
(would require linking and a package dependency).
2018-07-09 15:28:35 +02:00
2e05f6018b activate: kvdo modprobe workaround
To support autoloading of the VDO dm target driver, the 'kvdo'
kernel module needs to be loaded - ATM it is not using the 'dm-vdo' name.
So to support this strange name, add a temporary solution to
autoload the kvdo kernel module in this case.
2018-07-09 15:28:35 +02:00
80e6097ea6 dmeventd: base vdo plugin
Introduce VDO plugin for monitoring VDO devices.

This plugin can also be used by other users, as the plugin checks
for the UUID prefix 'LVM-' and runs lvm actions only on those
devices.

Non-LVM devices are only monitored and log warnings
when the usage threshold reaches 80%.
2018-07-09 15:28:32 +02:00
b98846998b build: not yet merged
status.c will get linked with VDO support.
2018-07-09 10:37:39 +02:00
5f3eff8eae tests: update vdo unit test to dm prefix
Update prefix and reindent.
2018-07-09 10:30:34 +02:00
9b6b4f14d8 device_mapper: convert vdo to use dm_ prefix
Keep using DM_/dm_ prefixes for device_mapper code.
2018-07-09 10:30:34 +02:00
4a64bb9573 build: unit test Makefile update
Update makefile to link with more libs since now the whole liblvm-internal.a
is linked in and this library has further dependencies.

Avoid including deps for run-unit-test.

Drop linking separate status.c as it's already linked via internal libs.
2018-07-09 10:30:34 +02:00
5cf0923e18 vdo: fix parsing vdo status
Recent updates rely on zeroed status structure memory (device ptr is
NULL), and dm_strncpy needs to account for the '\0'.
2018-07-09 10:30:34 +02:00
e9d1f676b3 allocation: add check for passing log allocation
Updates previous commit.
2018-07-09 00:59:34 +02:00
333eb8667e tests: check how thin-pool allocation works
Check that thin-pool allocation works on 2 PVs when one is so full
that even the metadata does not fit there (it needs at least 2M,
while 99% of 63MB fills >62MB).
2018-07-09 00:23:35 +02:00
6d1c983122 cleanup: use last_seg
More readable code.
2018-07-09 00:23:35 +02:00
a55d4b6051 build: libdm preload dir is no longer needed
Since we do not build lvm code with libdm, drop preload.
2018-07-09 00:23:32 +02:00
c8b4f9414c dev_io: no discard in testmode
When an lvm2 command is executed in test mode, the discard ioctl is skipped.
Otherwise this could even cause data loss in the case where issuing discards
for released areas was enabled and the user 'tested' lvreduce.
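
A hedged sketch of the scenario (names are invented; issue_discards is the
lvm.conf setting that enables discards for released areas):

  lvreduce --test -L-1G --config 'devices { issue_discards=1 }' vg/lv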
2018-07-09 00:19:30 +02:00
b697aa9646 allocator: fix thin-pool allocation
When allocating a thin-pool with more than 1 device, try to
allocate the 'metadataLV' by reusing the log-type allocation used for mirror LVs.
It should naturally be placed on a different device than the 'dataLV'.

However, due to the somewhat hard-to-follow allocation logic code,
allocation was being rejected in cases where there was not
enough space for data or metadata on a single PV; thus to succeed,
usage of segments was mandatory.

While the user may use:

allocation/thin_pool_metadata_require_separate_pvs=1

to enforce separate meta and data LVs - with default settings this is not
enabled, so segment allocation is meant to work.

NOTE:

As already said, the original intention of this whole 'if()' is unclear,
so try to split this test into multiple simpler tests that are more readable.

TODO: more validation.
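
For reference, a hedged example of enforcing the separate placement mentioned
above (VG name and sizes are hypothetical):

  lvcreate --type thin-pool -L60M -n pool vg \
           --config 'allocation { thin_pool_metadata_require_separate_pvs=1 }'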
2018-07-09 00:19:30 +02:00
c96400b6c7 vdo: enhance status parser
Add support for using mempool for allocations inside status parser.
Convert some string pointers to arrays.
Reindent tabs.
2018-07-02 10:25:35 +02:00
c1a6b10d09 device_mapper: relocate code for sending messages
To be able to send messages for recently resumed devices,
move code into inner loop.
2018-07-02 10:25:35 +02:00
d56e400d44 device_mapper: deactivate new nodes when load fails
When node loading fails, there is not much the caller can do,
since there is an 'unknown' set of devices preloaded.

Only the suspend during preload knows the future precommitted 'metadata',
so it's non-trivial to drop the 'preloaded' entries with any later call.

However, the dm tree tracks newly loaded entries - so in this case it
may simplify the recovery path by dropping the preloaded entries so
they are not leaked in the DM table.
2018-07-02 10:25:35 +02:00
f2b856c994 lv_manip: do not check extents for any virtual target
Allow creation of any virtual segment type with just --virtualsize
specified, without any real extent size given.

TODO: likely --type error,zero might be later enhanced to use -V
(along with -L) - but since those targets do not allocate real
space, supporting -V makes sense with them.
2018-07-02 10:24:23 +02:00
2bb9627d01 lv_manip: add name of failing LV into error message 2018-07-02 10:24:23 +02:00
ed3428b7ed memlock: extend exception list
The number of linked libraries grows.
Most of them we don't need to lock in, since we are not using
them in the locked section, so skip locking them in memory.
2018-07-02 10:24:20 +02:00
0bae9a1bff locking: memory locking ONLY with suspending reason
It's important to lock memory before running the SUSPEND ioctl - but the whole
lvm preload runs in a memory-unlocked environment, as in this phase
memory allocation is allowed and is meant to happen.

Once all targets are preloaded and ready (confirmed from all targets)
we start suspending the tree - and here memory allocation (or e.g.
opening files) is no longer allowed, as it may cause a kernel deadlock.
2018-07-02 10:21:42 +02:00
b55d30956d build: drop some more old files 2018-07-02 10:21:42 +02:00
52b07672f8 build: avoid rebuild deps for top-level makefiles 2018-07-02 10:21:42 +02:00
29b9ccd261 dmsetup: fix error propagation in _display_info_cols()
Commit 3f35146 added a check on the value returned by the
_display_info_cols() function:

  1024         if (!_switches[COLS_ARG])
  1025                 _display_info_long(dmt, &info);
  1026         else
  1027                 r = _display_info_cols(dmt, &info);
  1028
  1029         return r;

This exposes a bug in the dmstats code in _display_info_cols:
the fact that a device has no regions is explicitly not an error
(and is documented as such in the code), but since the return
code is not changed before leaving the function it is now treated
as an error leading to:

  # dmstats list
  Command failed.

When no regions exist.

Set the return code to the correct value before returning.
2018-06-28 14:25:30 +01:00
f96fd9961d Revert "man: fix lvreduce example"
-l -3 is correct, meaning reduce by 3.

This reverts commit d5bcc56eef.
2018-06-27 09:20:21 -05:00
163a30d784 man: fix lvreduce example 2018-06-27 08:59:41 -05:00
a14f21bf1d bcache: Fix null pointer dereferencing 2018-06-26 17:04:18 +02:00
4194fc9bbd device_mapper: add new _dm_task_create_device_status
Introduce new function _dm_task_create_device_status for grabbing
status of device for better code sharing.
2018-06-25 15:07:55 +02:00
739a213d2e device_mapper: split code for sending message
Move message sending from _thin_pool_node_message to
new _node_message for possible better code sharing.
2018-06-25 15:07:55 +02:00
a1c81c009a device_mapper: split _node_send_message
For better code reuse, split _node_send_messages into a common
messaging part and a separate _thin_pool_node_send_messages.

The patch makes it possible to better reuse the common code for messaging
other targets.
2018-06-25 15:07:55 +02:00
19b92ae3f3 tests: update with --yes
vgcfgrestore needs to confirm the restore while LVs from the VG are present.
2018-06-25 15:07:55 +02:00
cea88a9e4e lv_manip: use vgmem pool
Switch to vgmem pool for allocation associated with modification
of particular VG.
2018-06-25 15:07:55 +02:00
357e9f9572 cache: use new api function 2018-06-25 15:07:55 +02:00
9c0d92d957 lv_manip: add new internal api function 2018-06-25 15:07:55 +02:00
8949903fbb cache: set areas count prior using it
Set the correct counter so it does not fail the internal error check.
2018-06-25 15:07:32 +02:00
6b3a4aac09 vgcfgrestore: add prompt with active volumes
Add a check for active devices with names matching the restored VG.
When such devices are present in the dm table, prompt the user
whether they wish to continue.
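
Illustrative usage (VG name hypothetical):

  vgcfgrestore vg         # now prompts when matching active devices are found
  vgcfgrestore --yes vg   # proceeds without prompting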
2018-06-22 23:37:36 +02:00
106ee05ba0 lv_manip: add extra internal error
Catch error early, when trying to store data into non-allocated area.
2018-06-22 23:37:02 +02:00
6c84a36b53 utils: add clzll
Check for __builtin_clzll and add wrapper when missing.
2018-06-22 23:37:02 +02:00
8215e3503d tests: fix rules for mke2fs.conf install 2018-06-22 23:36:54 +02:00
fa58fc3257 build: support --disable-silent-rules
Add support for the standardized option to have verbose builds.
Useful for distro builds where more detail can be useful.
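
Illustrative invocation (standard autoconf-style usage assumed):

  ./configure --disable-silent-rules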
2018-06-22 23:36:19 +02:00
c728d88e11 build: include configure.h
It's important to consistently include configure.h as the 1st header.
It contains #defines influencing the behavior of other included header
files.
2018-06-22 23:11:44 +02:00
086f1ef4a0 build: make generate 2018-06-22 15:36:34 +02:00
254e5c5d11 radix-tree: squash a pointer arithmetic warning 2018-06-21 17:41:56 +01:00
18528180d9 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-06-21 17:12:09 +01:00
72e2e92f4c radix-tree: fix bug when erasing elts in remove_prefix
_erase_elt() now zeroes the last element of the array (ie. sets to
UNSET).  Previously remove() was doing this, but not remove_prefix().
2018-06-21 17:10:05 +01:00
dd7ebec120 filter: use pointers to real addresses
instead of casting values 1 and 2 to pointers
which gcc optimization can have problems with.
2018-06-21 10:54:43 -05:00
15826214f9 Remove code for using files as devices
It appears this has not been used in a long time,
and it seems to have no point since loop devices exist.
2018-06-21 09:33:21 -05:00
e166d2b14c lvmlockd: fix another missing lock_type null check
Same as 347c807f8.
2018-06-21 09:24:51 -05:00
40c1f7889f radix-tree: More debugging of remove
There's now a pretty printer called radix_tree_dump()

n4, n16, and n48 weren't UNSETting the last entry after
sliding values down.
2018-06-21 09:49:43 +01:00
c8cfbfa605 radix_tree: add new test case
Check that value destructors are called by radix_tree_destroy()
2018-06-21 09:49:25 +01:00
20b9746c5d radix-tree: Fix various bugs to do with removal
Add radix_tree_is_well_formed() which does some sanity checking
of the tree.

Call the above a lot in the unit tests.

Fix revealed bugs.
2018-06-21 09:49:08 +01:00
42f7caf1c2 scan: work around udev problems by avoiding open RDWR
udev creates a train wreck of events if we open devices
with RDWR.  Until we can fix/disable/scrap udev, work around
this by opening RDONLY and then closing/reopening RDWR when
a write is needed.  This invalidates the bcache blocks for
the device before writing so it can trigger unnecessary
rereading.
2018-06-20 14:08:12 -05:00
f85a010a6b bcache: remove extraneous error message
An error from io_submit is already recognized by
the caller, like errors during completion.
2018-06-18 12:02:22 -05:00
565df4e732 Print advice about changing clustered VGs to shared 2018-06-18 10:59:11 -05:00
428514a07f Drop --ignoreskippedcluster option
It's no longer needed.  Clustered VGs are now handled in
the same way as foreign VGs, and as shared VGs that
can't be accessed:

- A command processing all VGs sees a clustered VG,
  prints a message ("Skipping clustered VG foo."),
  skips it, and does not fail.

- A command where the clustered VG is explicitly
  named on the command line, prints a message and fails.
  "Cannot access clustered VG foo, see lvmlockd(8)."

The option is listed in the set of ignored options for
the commands that previously accepted it.  (Removing it
entirely would cause commands/scripts to fail if they
set it.)
2018-06-15 15:59:34 -05:00
ccab4a1994 report: show empty lock_type for none
Sometimes lock_type would be displayed as "none"
(after changing it) and sometimes as empty.
Make it consistently empty.
2018-06-15 14:14:39 -05:00
328303d4d4 Remove unused device error counting 2018-06-15 14:04:39 -05:00
54f61e7dcc config: add deprecated version for recently removed settings
assumes that the next version from this branch is 3.0.0
2018-06-15 13:56:26 -05:00
3fd75d1bcd scan: use full md filter when md 1.0 devices are present
The md filter can operate in two native modes:
- normal: reads only the start of each device
- full: reads both the start and end of each device

md 1.0 devices place the superblock at the end of the device,
so components of this version will only be identified and
excluded when lvm uses the full md filter.

Previously, the full md filter was only used in commands
that could write to the device.  Now, the full md filter
is also applied when there is an md 1.0 device present
on the system.  This means the 'pvs' command can avoid
displaying md 1.0 components (at the cost of doubling
the i/o to every device on the system.)

(The md filter can operate in a third mode, using udev,
but this is disabled by default because there have been
problems with reliability of the info returned from udev.)
2018-06-15 12:21:25 -05:00
8eab37593e Add cmd arg to more functions
so that it can be used in the filter code
2018-06-15 11:03:55 -05:00
27c647d6ce rpm: drop no longer present clvmd, lvm2app 2018-06-15 00:47:35 +02:00
2a7f2a3a24 tests: more tolerable makefile 2018-06-15 00:46:54 +02:00
7d8bd97187 scripts: clvmd gone 2018-06-15 00:46:24 +02:00
9d2b9e5bc6 man: stop installing clvmd man page 2018-06-15 00:46:08 +02:00
52e7270e23 man-generator: drop macro redefines 2018-06-14 23:22:42 +02:00
faf3cc8f71 tests: drop some clvmd refs
Do not try to link the clvmd binary.
Ensure the lib is created anew and does not refer to old binaries.
2018-06-14 23:22:42 +02:00
b2cb8f846a build: cmirrord with internal dm lib 2018-06-14 23:14:04 +02:00
b1729dbcdd tests: bigger lv
Although throttling slows things down considerably, a test could still
reach the end before the next test, so use a bigger LV.
2018-06-14 22:02:01 +02:00
f4abbafde7 debug: missing trace 2018-06-14 22:02:01 +02:00
b58160a191 systemd: add conflicting sockets
Since we are using "DefaultDependencies=no" we do not get an automatic STOP
job on socket connection - so automatically refuse connections on
shutdown by adding this Conflicts definition to the socket unit.
2018-06-14 22:02:01 +02:00
a35098b110 vgchange: start polling with activation
Shuffle code for better readability as the set of conditions was
hard to follow.

Make it obvious that the refresh & activate path handles
monitoring and polling on its own.

So only the --monitor and --poll options need explicit care.
As a result of this patch, option --monitor without option --poll
will NOT start polling.

So the command 'vgchange --monitor n' is no longer a polling starter.
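
A hedged sketch of the resulting distinction (VG name hypothetical):

  vgchange --monitor y vg   # starts monitoring only, no polling
  vgchange --poll y vg      # explicitly starts polling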
2018-06-14 22:02:01 +02:00
218c57410c pvscan: move start of polling into vgchange
Restore polling for activated volumes, lost with my recent commit
75fed05d3e, and move the start of polling
directly into _activate_lvs_in_vg() - as there we know exactly
whether any volume was even activated.

Also make it share the same code for pvscan -aay.
2018-06-14 22:02:01 +02:00
752c39d91d pvscan: code reshape 2018-06-14 22:02:01 +02:00
33703995ae vgchange: trace failing activation
Trace failed activation and directly assign 0 to return failure.
2018-06-14 22:02:01 +02:00
f38a54227d vgchange: move activate assign
Make evaluation of activate_ARG reusable.
2018-06-14 22:02:01 +02:00
70b159d145 vgchange: fix error code in error path
This rather hard-to-hit error path used the wrong return value to signal
a real error.
2018-06-14 22:02:01 +02:00
3eff3aa4f8 device_mapper: drop unneeded function
Subdir without stats.
2018-06-14 22:02:01 +02:00
5b515db71b build: better srcdir builddir support
With the move to a top-level makefile there are some issues
with the recursive subdir makefiles.
Make the build more tolerant for now until fully resolved.
2018-06-14 22:02:01 +02:00
52ab3c1584 build: drop libdm referring from lvm code
Avoid adding /libdm  paths into lvm building.
2018-06-14 22:02:01 +02:00
a457566e91 build: drop some lvm references from libdm making
Some simplification, more may follow...
2018-06-14 22:02:01 +02:00
c6be409609 build: ensure libdm is built before dm-tools
Make libs before entering the dm-tools subdir,
so the tool will not link e.g. a system library if present.
2018-06-14 22:02:01 +02:00
327f62a255 man: update lvmsystemid wording
to refer to "shared VG" instead of "lockd VG".
2018-06-14 12:35:00 -05:00
b5f444d447 man: updates to lvmlockd
The terminology has migrated toward using "shared VG"
rather than "lockd VG".

Also improve the wording in a number of places.
2018-06-14 12:35:00 -05:00
e84e9cd115 device_mapper: remove libdm-stats.c
We don't use it in lvm.
2018-06-14 14:32:17 +01:00
fededfbbbc dmfilemapd: Move to libdm/dm-tools
No longer uses any lvm code.
2018-06-14 14:27:19 +01:00
0524829af6 dmsetup: move to libdm/dm-tools/dmsetup
Links against libdevmapper again.
No longer includes code from lvm.
2018-06-14 13:10:17 +01:00
e53cfc6a88 lvmlockd: update method for changing clustered VG
The previous method for forcibly changing a clustered VG
to a local VG involved using -cn and locking_type 0.
Since those options are deprecated, replace it with
the same command used for other forced lock type changes:
vgchange --locktype none --lockopt force.
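
Illustrative usage of the new method (VG name hypothetical; the old method
relied on 'vgchange -cn' with locking_type 0):

  vgchange --locktype none --lockopt force vg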
2018-06-13 15:30:28 -05:00
9b79f0244a Remove makefile entries for removed script 2018-06-13 15:04:26 -05:00
fa00fce97c Remove systemd script for starting shared VG
Shared VGs will generally be started and activated by
the resource agent.  Without the agent, this script doesn't
have a good way to know which LVs to activate.
2018-06-13 14:37:16 -05:00
a163d5341a tests: remove vgconvert usage 2018-06-13 14:16:28 -05:00
d067263f51 tests: remove metadata-dirs
metadata dirs are removed
2018-06-13 14:14:23 -05:00
5fca75877d Remove vgconvert
it has no use without lvm1
2018-06-13 14:14:03 -05:00
22c5467add filters: remove cache file in persistent filter
It creates problems because it's not always correct,
and it doesn't actually help much.
2018-06-13 14:00:47 -05:00
17f5572bc9 Remove independent metadata areas
in which metadata is stored in files on the local fs
instead of on PVs.
2018-06-13 12:25:19 -05:00
9df6f601e0 Remove code for loading other metadata formats
other formats are not used.
2018-06-13 12:03:42 -05:00
885e57cb27 tests: lvmetad-pvscan-cache expect command to fail 2018-06-12 12:44:23 -05:00
7824bb710d tests: lvconvert-repair remove cluster test 2018-06-12 11:35:45 -05:00
be3af7f93e Remove the unused lock_hash in lvmcache
It kept track of which VGs were locked, but is
no longer used, so remove it.
2018-06-12 11:29:56 -05:00
981a3ba98e Clean up repair and result values in vg_read
Fix the confusing mix of input and output values
in the single variable.
2018-06-12 11:08:26 -05:00
9a8c36b891 Fix use of orphan lock in commands
vgreduce, vgremove and vgcfgrestore were acquiring
the orphan lock in the midst of command processing
instead of at the start of the command.  (The orphan
lock moved to being acquired at the start of the
command back when pvcreate/vgcreate/vgextend were
reworked based on pvcreate_each_device.)

vgsplit also needed a small update to avoid reacquiring
a VG lock that it already held (for the new VG name).
2018-06-12 09:46:11 -05:00
c4153a8dfc Remove checking for locked VGs
A few places were calling a function to check if a
VG lock was held.  The only place it was actually
needed is for pvcreate which wants to do its own
locking (and scanning) around process_each_pv.

The locking/scanning exceptions for pvcreate in
process_each_pv/vg_read can be enabled by just passing
a couple of flags instead of checking if the VG is
already locked.  This also means that these special
cases won't be enabled unknowingly in other places
where they shouldn't be used.
2018-06-12 09:46:04 -05:00
3b6b7f8f9b lvmlockd: skip repair lock upgrade for non shared vgs
Only attempt lvmlockd lock upgrade for shared VGs.
2018-06-12 09:44:05 -05:00
1c79cf9830 build: ensure configure.h comes first
Fix header order so configure.h is 1st. included header.
2018-06-11 22:38:51 +02:00
77d5caae90 snapshot: improve checking of merging snapshot
Add runtime detection for 'lvs -o+seg_monitor' and 'vgchange --monitor'.
This fix should avoid unnecessary timeout on systemd shutdown.
2018-06-11 22:25:42 +02:00
75fed05d3e vgchange: start polling with option
Polling starts either with '--refresh'
or with the '--poll' option specified.
2018-06-11 22:25:42 +02:00
e82b70e739 build: use internal libs for lvm2cmd 2018-06-11 22:25:42 +02:00
9c7ee1e1c4 build: link dmeventd plugins with internal libs 2018-06-11 22:25:42 +02:00
5c2f7f083c build: make generate 2018-06-11 22:25:42 +02:00
2a307ce33c build: update configure 2018-06-11 22:25:42 +02:00
b48e10d9e6 Remove lvmcache CACHE_LOCKED flag
and the functions that set it.  It's no longer used.
2018-06-08 15:11:47 -05:00
ebd147ff24 Remove locking for non-vgs
Locks for VGs are the only thing that locking.[ch]
now handles, so references to other variations
can be removed.
2018-06-08 14:34:50 -05:00
4ce9579099 tests: remove vgsplit-usage cluster test 2018-06-08 14:01:05 -05:00
1c59140f5f Remove unused cluster-related locking flags 2018-06-08 14:01:00 -05:00
a8759dc7a6 Remove unused cache management from locking
This code was for managing lvmcache for clvm
and it no longer does anything.
2018-06-08 12:30:43 -05:00
5e672df6ae Removing locking layer from sync_local_dev_names
the indirection is not needed without clvm
2018-06-08 12:18:57 -05:00
8266b7e951 tests: remove use of vgcreate -c option 2018-06-08 10:51:07 -05:00
ae961a192a Remove python bindings for liblvm2app 2018-06-08 10:33:47 -05:00
669b1295ae Remove header declarations for removed functions 2018-06-08 10:01:05 -05:00
dbc3e62cc0 tests: don't look for liblvm 2018-06-08 09:36:03 -05:00
73b7e6fde7 Remove more code that was only used by liblvm2app 2018-06-08 09:29:11 -05:00
7c4b19c335 Merge branch '2018-06-04-data-structs' 2018-06-08 14:21:07 +01:00
0ac89fb860 various: some missing #include zalloc.h 2018-06-08 14:18:09 +01:00
61e67e51e1 device_mapper: move hash.[hc] to base/data-struct 2018-06-08 13:54:19 +01:00
962a3eb3fa device_mapper: remove c++ guards from the header
This isn't a public header anymore, so not needed.
2018-06-08 13:44:43 +01:00
d5da55ed85 device_mapper: remove dbg_malloc.
I wrote dbg_malloc before we had valgrind.  These days there's just
no need.
2018-06-08 13:40:53 +01:00
0e2a358da9 tests: check pvresize with metadata size
Test that the new size of a PV can also keep the metadata.
2018-06-08 14:37:31 +02:00
59dc9b445d tests: updates test for raid scanning 2018-06-08 14:37:31 +02:00
f20e828ec2 tests: drop unit subdir
Until we resolve top-level make, drop inclusion of the subdir Makefile
written for top-level usage so at least the integration tests are running.
2018-06-08 14:37:31 +02:00
bb7c064b23 tests: initial testing code for lvs while pvmove runs 2018-06-08 14:37:31 +02:00
c93e0932e8 tests: check proper support of fmt2 with cleaner policy 2018-06-08 14:37:31 +02:00
8b111f28b0 cleanup: updates message with dots 2018-06-08 14:37:31 +02:00
bc8c8d2f87 build: drop exported symbols
These libs are no longer possible to create,
so drop maintenance of exported symbols.
2018-06-08 14:37:31 +02:00
5cb4b2a424 cache: cleaner policy also uses fmt2
Format 2 is also used with the cleaner policy.
2018-06-08 14:37:29 +02:00
1f5f8382ae pvresize: update message
There is always at least a PV header update even if the size
of the PV remains the same (so it's not really resized).
Try to make the message slightly less confusing.
2018-06-08 14:36:59 +02:00
fb171edd45 pvresize: add missing return
A log_error path missed a 'return 0'.
Also fix some unneeded backtraces (since log_error already shows
the position).
2018-06-08 14:36:56 +02:00
0c62ae3f89 pvmove: improve lvs
When pvmoving an LV, the target for the LV is a mirror, so the validation
that checked for a matching type was incorrect.

While we need a more generic enhancement of lvs output for pvmoved LVs,
for now at least stop showing internal errors and 'X' symbols in attrs.
2018-06-08 14:35:42 +02:00
c78239d860 libdm: Stop libdm/misc/dmlib.h from including lib/misc/lib.h 2018-06-08 13:01:41 +01:00
286c1ba336 device_mapper: rename libdevmapper.h -> all.h
I'm paranoid a file will include the global one in /usr/include
by accident.
2018-06-08 12:31:45 +01:00
88ae928ca3 base: Move list to base/data-struct 2018-06-08 11:24:18 +01:00
9573ff3a3b test/unit: Rename Makefile.in -> Makefile
There's nothing being expanded.
2018-06-08 09:50:40 +01:00
b67ef90438 Merge branch '2018-06-05-remove-applib' 2018-06-08 09:42:22 +01:00
cc87f55e25 Update WHATS_NEW 2018-06-08 09:42:05 +01:00
0d22b58172 liblvm: remove lvmapi
This has been deprecated for a while.
2018-06-08 09:38:05 +01:00
e6bb780d24 Rework lock-override options and locking_type settings
The last commit related to this was incomplete:
  "Implement lock-override options without locking type"

This is further reworking and reduction of the locking.[ch]
layer which handled all clustering, but is now only used
for file locking.  The "locking types" that this layer
implemented were removed previously, leaving only the
standard file locking.  (Some cluster-related artifacts
remain to be cleared out later.)

Command options to override or modify locking behavior
are reimplemented here without using the locking types.
Also, deprecated locking_type values are recognized,
and implemented as if one of the equivalent override
options was set.

Options that override file locking are:

. --nolocking disables all file locking.

. --readonly grants read lock requests without actually
  taking a file lock, and refuses write lock requests.

. --ignorelockingfailure tries to set up file locks and
  uses them normally if possible.  When not possible, it
  behaves like --readonly, but allows activation.

. --sysinit is the same as ignorelockingfailure.

. global/metadata_read_only acquires actual read file
  locks, and refuses write lock requests.

(Some of these options could probably be deprecated
because they were added as workarounds to various
locking_type behaviors that are now deprecated.)

The locking_type setting now has one valid value: 1 which
refers to standard file locking.  Configs that contain
deprecated values are recognized and still work in
largely the same way:

. 0 disabled all locking, now implemented like --nolocking
  is set.  Allow the nolocking option in all commands.

. 1 is the normal file locking setting and is unchanged.

. 2 was for external locking which was not used, and
  reverts to normal file locking.

. 3 was for cluster/clvm.  This reverts to normal file
  locking, and prints messages about lvmlockd.

. 4 was equivalent to readonly, now implemented like
  --readonly is set.

. 5 disabled all locking, now implemented like
  --nolocking is set.
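
A hedged illustration of the mapping above, written as lvm.conf fragments:

  global { locking_type = 1 }   # the only valid value: standard file locking
  global { locking_type = 3 }   # deprecated; falls back to file locking, prints lvmlockd messages
  global { locking_type = 4 }   # deprecated; treated as if --readonly were set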
2018-06-07 16:47:15 -05:00
c7c7017f0c man lvmlockd: remove unnecessary reference to lvmetad
it's optional to use it with lvmlockd
2018-06-07 13:44:05 -05:00
60db97ae1d test/unit: activation generator unit tests 2018-06-07 16:24:42 +01:00
00befc04d0 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-06-07 16:20:49 +01:00
6e6ef95ba6 Implement lock-override options without locking type
The options: --nolocking, --readonly, --sysinit
override, or make exceptions to, the normal file locking
behavior.  Implement these by just checking for the
options in the file locking path instead of using
special locking types.
2018-06-07 16:17:04 +01:00
e966752b86 tests: system_id remove clustered vg test 2018-06-07 16:17:04 +01:00
229582c97c tests: remove -cn option from some commands 2018-06-07 16:17:04 +01:00
da30b4a786 Remove locking infrastructure from activation paths
Basic LV functions:

  activate_lv(), deactivate_lv(),
  suspend_lv(), resume_lv()

were routed through the locking infrastructure on the way to:

  lv_activate_with_filter(), lv_deactivate(),
  lv_suspend_if_active(), lv_resume_if_active()

This commit removes the locking infrastructure from the
middle and calls the latter functions directly from the former.

There were a couple of ancillary steps that the locking
infrastructure added along the way which are still included:

  - critical section inc/dec during suspend/resume
  - checking for active component LVs during activate

The "activation" file lock (serializing activation) has not
been kept because activation commands have been changed to
take the VG file lock exclusively which makes the activation
lock unused and unnecessary.
2018-06-07 16:17:04 +01:00
616eeba6f2 use exclusive file lock on VG for activation
Make activation commands:
  vgchange -ay, lvchange -ay, pvscan -aay

take an exclusive file lock on the VG to serialize
multiple concurrent activation commands which could
otherwise interfere with each other.
2018-06-07 16:17:04 +01:00
e7aa51c70f Remove VG lock ordering check
Four commands lock two VGs at a time:

- vgsplit and vgmerge already have their own logic to
  acquire the locks in the correct order.

- vgimportclone and vgrename disable this ordering check.
2018-06-07 16:17:04 +01:00
18259d5559 Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-07 16:17:04 +01:00
e4d9099e19 Remove more clvm code 2018-06-07 16:17:04 +01:00
d154dd6638 lvmlockd: fix missing lock_type null check
Missed checking if vg->lock_type is NULL in commit db8d3bdfa:
  lvmlockd: enable mirror split and merge with dlm lock_type
2018-06-07 16:17:04 +01:00
1539e51721 devices: clean up io error messages
Remove the io error message from bcache.c since it is not
very useful without the device path.

Make the io error messages from dev_read_bytes/dev_write_bytes
more user friendly.
2018-06-07 16:17:04 +01:00
bd8c6cf862 scripts/lvm2_activation_generator_systemd_red_hat: rewrite to use lvmconfig
Unit tested the new code, but did not run functional tests (assuming they exist).
2018-06-07 16:15:04 +01:00
f2ff06d675 Implement lock-override options without locking type
The options: --nolocking, --readonly, --sysinit
override, or make exceptions to, the normal file locking
behavior.  Implement these by just checking for the
options in the file locking path instead of using
special locking types.
2018-06-06 16:31:59 -05:00
55521be2cb tests: system_id remove clustered vg test 2018-06-06 14:35:27 -05:00
802382e21f tests: remove -cn option from some commands 2018-06-06 14:04:19 -05:00
b7da704566 Remove locking infrastructure from activation paths
Basic LV functions:

  activate_lv(), deactivate_lv(),
  suspend_lv(), resume_lv()

were routed through the locking infrastructure on the way to:

  lv_activate_with_filter(), lv_deactivate(),
  lv_suspend_if_active(), lv_resume_if_active()

This commit removes the locking infrastructure from the
middle and calls the latter functions directly from the former.

There were a couple of ancillary steps that the locking
infrastructure added along the way which are still included:

  - critical section inc/dec during suspend/resume
  - checking for active component LVs during activate

The "activation" file lock (serializing activation) has not
been kept because activation commands have been changed to
take the VG file lock exclusively which makes the activation
lock unused and unnecessary.
2018-06-06 13:58:34 -05:00
58a9254252 use exclusive file lock on VG for activation
Make activation commands:
  vgchange -ay, lvchange -ay, pvscan -aay

take an exclusive file lock on the VG to serialize
multiple concurrent activation commands which could
otherwise interfere with each other.
2018-06-06 13:58:34 -05:00
d2d8dd7f7f Remove VG lock ordering check
Four commands lock two VGs at a time:

- vgsplit and vgmerge already have their own logic to
  acquire the locks in the correct order.

- vgimportclone and vgrename disable this ordering check.
2018-06-06 13:58:34 -05:00
c157c43f7c Remove unused clvm variations for active LVs
Different flavors of activate_lv() and lv_is_active()
which are meaningful in a clustered VG can be eliminated
and replaced with whatever that flavor already falls back
to in a local VG.

e.g. lv_is_active_exclusive_locally() is distinct from
lv_is_active() in a clustered VG, but in a local VG they
are equivalent.  So, all instances of the variant are
replaced with the basic local equivalent.

For local VGs, the same behavior remains as before.
For shared VGs, lvmlockd was written with the explicit
requirement of local behavior from these functions
(lvmlockd requires locking_type 1), so the behavior
in shared VGs also remains the same.
2018-06-06 13:58:34 -05:00
eb60029245 Remove more clvm code 2018-06-06 13:58:34 -05:00
3c657adc0a lvmlockd: fix missing lock_type null check
Missed checking if vg->lock_type is NULL in commit db8d3bdfa:
  lvmlockd: enable mirror split and merge with dlm lock_type
2018-06-06 13:58:03 -05:00
c67bd8b47b devices: clean up io error messages
Remove the io error message from bcache.c since it is not
very useful without the device path.

Make the io error messages from dev_read_bytes/dev_write_bytes
more user friendly.
2018-06-06 10:08:25 -05:00
74460cd009 device_mapper: fixup a couple of includes
"libdevmapper.h" -> "device_mapper/libdevmapper.h"
2018-06-06 14:45:16 +01:00
3e781ea446 Remove clvmd and associated code
More code reduction and simplification can follow.
2018-06-05 11:09:13 -05:00
11384637fb WHATS_NEW 2018-06-05 16:24:19 +02:00
3810fd8d0d test: add convenience conversion tests linear <-> striped
Add tests for linear <-> striped|raid* conversions.

Add region_size config to reshape tests to avoid test
failures in case of it being defined unexpectedly in lvm.conf.

Related: rhbz1439925
Related: rhbz1447809
2018-06-05 16:23:18 +02:00
bd7cdd0b09 lvconvert: support linear <-> striped convenience conversions
"lvconvert --type {linear|striped|raid*} ..." on a striped/linear
LV provides convenience interim type to convert to the requested
final layout similar to the given raid* <-> raid* conveninece types.

Whilst on it, add missing raid5_n convenince type from raid5* to raid10.

Resolves: rhbz1439925
Resolves: rhbz1447809
Resolves: rhbz1573255
2018-06-05 16:23:18 +02:00
de66704253 segtype: add linear
Add linear segtype addressing FIXME in preparation
for linear <-> striped convenience conversion support
2018-06-05 16:23:18 +02:00
2eda683a20 build: base/Makefile
.gitignore hid it.
2018-06-04 15:37:35 +01:00
232918fb86 build: libbase.a 2018-06-04 13:53:07 +01:00
29abba3785 build: get separate builddir working again 2018-06-04 13:22:14 +01:00
66b10275c5 build: More tweaks to python include dirs. 2018-06-04 12:28:17 +01:00
f6eeb218b2 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-06-04 11:59:49 +01:00
891f8dc19d build: Get python dir building without the include/ dir 2018-06-04 11:59:13 +01:00
1140d70893 build: fixes 2018-06-04 12:28:13 +02:00
eebf070d32 build: remove any leftover file
In case the repository is used after building an older version of lvm2
(e.g. git bisect), make sure 'clean' erases any possible old symlinks.
2018-06-04 12:26:38 +02:00
21a5be2364 build: link lvm2_activation_generator_systemd_red_hat with libdevice-mapper.a 2018-06-04 10:00:44 +01:00
6a1f458bb7 build: compile fixes 2018-06-01 21:12:31 +02:00
4d19321fd3 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-06-01 19:19:11 +01:00
02c4901d89 build: get clvmd building again 2018-06-01 19:18:36 +01:00
7b5b1a9b6f scan: clean exit for alloc failure 2018-06-01 13:15:22 -05:00
0625c7f372 devs: clear coverity warning about null info
a theoretical possibility.
2018-06-01 13:15:22 -05:00
09177b53dd lvmlockd: clarify lock_type use for coverity
Make it clearer when vg->lock_type will be used so
coverity doesn't worry about it.
2018-06-01 13:15:22 -05:00
b6f0f20da2 lvmlockd: primarily use vg_is_shared
to check if a vg uses an lvmlockd lock_type,
instead of the equivalent but longer is_lockd_type.
2018-06-01 13:15:22 -05:00
c4497ee9e8 build: Link with -lrt
Needed for older version of glibc
2018-06-01 17:20:48 +01:00
15a8142f6d build: make sure selinux, udev and blkid libraries are linked.
Fixes breakage from the recent libdm split.  Though these didn't
ever appear to be linked (could they have piggy backed from libdevmapper.so
being linked to them?).
2018-06-01 16:53:20 +01:00
dbba1e9b93 Merge branch 'master' into 2018-05-11-fork-libdm 2018-06-01 13:04:12 +01:00
cb379c86c4 Merge branch '2018-05-30-bcache-radix-tree' 2018-06-01 12:45:33 +01:00
81f07c3cca man lvmlockd: update list of limitations 2018-05-31 16:38:39 -05:00
885eb2024f tests: enable non-working tests with lvmlockd
Those that are failing for reasons other than lvmlockd
restrictions.
2018-05-31 16:18:53 -05:00
00f6a8466e tests: enable more working tests with lvmlockd 2018-05-31 16:13:58 -05:00
06b2e5c176 lvmlockd: improve error message for existing lockspace
When a VG/lockspace already exists with the same name
don't just print the error number.
2018-05-31 15:52:23 -05:00
caa600a409 tests: enable lvcreate-pvtags with lvmlockd 2018-05-31 15:37:25 -05:00
b9c1cef817 lvmlockd: fix reverting new lv in error path
The wrong name was being used to free the LV lock
in lvmlockd in the error exit path.
2018-05-31 15:35:48 -05:00
4a01e4f389 tests: enable metadata-balance with lvmlockd 2018-05-31 15:12:34 -05:00
08771bbbbf tests: enable lvmlockd with tests using lvcreate -H -L LV 2018-05-31 14:49:16 -05:00
8d9d32b315 lvmlockd: enable lvcreate -H -L LV
Allow this command in a shared VG which had previously been
disallowed.
2018-05-31 14:20:11 -05:00
d4d39d0f90 Merge branch 'master' into 2018-05-30-bcache-radix-tree 2018-05-31 16:36:04 +01:00
fdaa7e2e87 vgs: add report field for shared
equivalent to a non-empty -o locktype.
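
Possible usage (the report field name is taken from the subject; output layout not shown):

  vgs -o name,shared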
2018-05-31 10:23:03 -05:00
2beb3009bd tests: change lvcreate syntax to allow lvmlockd
Using the less ambiguous lvcreate syntax for creating a
cache LV allows more tests to run with lvmlockd.
2018-05-30 16:40:03 -05:00
214235367b tests: enable lvcreate cache tests with lvmlockd
Tests that want to use lvcreate to create a new
origin LV and then combine it with an existing
cache pool to create a cache LV.
2018-05-30 15:56:08 -05:00
c516321325 lvmlockd: enable lvcreate of new LV plus existing cache pool
In this command, lvcreate creates a new LV and then combines
it with an existing cache pool, producing a cache LV.  This
command was previously not allowed in a shared VG.
2018-05-30 15:24:24 -05:00
27495a3555 tests: enable pvmove-restart with lvmlockd 2018-05-30 13:56:06 -05:00
05ee83579b tests: enable vg repair tests with lvmlockd 2018-05-30 12:57:46 -05:00
6cd0523337 lvmlockd: enable repairing shared VG while reading it
When the lvmlockd lock is shared, upgrade it to ex
when repair (writing) is needed during vg_read.

Pass the lockd state through additional read-related
functions so the instances of repair scattered through
vg_read can be handled.

(Temporary solution until the ad hoc repairs can be
pulled out of vg_read into a top level, centralized
repair function.)
2018-05-30 12:56:46 -05:00
063d065388 tests: add missing file 2018-05-30 09:25:45 -05:00
abba06fb3b tests: process-each-duplicate-pvs update for lvmlockd 2018-05-30 09:25:45 -05:00
3759a1f62b pvremove: skip lvmlockd locks for forced clearing
pvremove -ff to force clear a PV shouldn't care if
lvmlockd locks fail.
2018-05-30 09:25:45 -05:00
5c5e449dc5 lvmlockd: fix vgimportclone of a shared VG
The new VG from the duplicate PV is imported
as a local VG.
2018-05-30 09:25:45 -05:00
a40d447a02 tests: vgchange-usage update for lvmlockd 2018-05-30 09:25:45 -05:00
95cf127134 tests: vgcreate-usage update for lvmlockd 2018-05-30 09:25:45 -05:00
595196bc29 tests: enable lvmlockd for passing tests 2018-05-30 09:25:45 -05:00
403c87c1aa lvmlockd: enable creation of cache pool with lvcreate
Previously, cache pools needed to be created with lvconvert.
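
An illustrative command now permitted in a shared VG (names and size hypothetical):

  lvcreate --type cache-pool -L1G -n cpool vg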
2018-05-30 09:25:45 -05:00
948f2d9979 lvmlockd: enable lvcreate of thin pool and thin lv in one command
Previously, thin pools and thin LVs needed to be
created with separate commands; now the combined command
is permitted.
2018-05-30 09:25:45 -05:00
db8d3bdfa9 lvmlockd: enable mirror split and merge with dlm lock_type 2018-05-30 09:25:45 -05:00
3a4fe54ca1 config: revert to normal locking when no cluster
and suggest lvmlockd
2018-05-30 09:25:45 -05:00
7f7ec769d9 lvmlockd: do not use an LV lock for some lvchange options
Some lvchange options can be used even if the LV is active.
2018-05-30 09:25:45 -05:00
cd369d8a7f tests: separate lvmlockd tests with or without lvmetad 2018-05-30 09:25:45 -05:00
0c1d3db8db lvmlockd: accept repeated global lock requests
It's not an error if a command requests the global lock
when it has already acquired it.  It shouldn't happen,
but there could be cases we've not found.
2018-05-30 09:25:45 -05:00
6a44dceb48 tests: some missed skip with lvmlockd 2018-05-30 09:25:45 -05:00
5ac9f8d631 tests: fix skipping logic for lvmpolld and lvmlockd 2018-05-30 09:25:45 -05:00
6d14d5d16b scan: removed failed paths for devices
Drop a device path when the scan fails to open it.
2018-05-30 09:05:18 -05:00
06c789eda1 radix-tree: fix some bugs in remove_prefix and iterate
These weren't working if the prefix key was part of a prefix_chain.
2018-05-30 14:21:27 +01:00
7635df8cce bcache: switch to storing blocks in a radix tree.
Rather than a hash table.  This will make invalidate_fd() more
efficient since we can iterate just those blocks that are on
a particular dev.
2018-05-30 14:17:26 +01:00
272ec3fa73 radix-tree: fix some bugs in remove_prefix and iterate
These weren't working if the prefix key was part of a prefix_chain.
2018-05-30 14:14:59 +01:00
1924426ad1 radix-tree: radix_tree_iterate() 2018-05-29 17:58:58 +01:00
c2a8bbed3b radix-tree: radix_tree_remove_prefix() 2018-05-29 13:25:59 +01:00
9b41efae82 radix-tree: call the value dtr when removing an entry. 2018-05-29 11:23:36 +01:00
0181c77e3f Merge branch '2018-05-29-radix-tree-iterate' into 2018-05-23-radix-tree-remove 2018-05-29 11:04:32 +01:00
033df741e2 data-struct/radix-tree: pass the value dtr into create.
Rather than having to pass it into every method that removes items.
2018-05-29 11:03:10 +01:00
28c8e95d19 scan: refresh paths and retry open
If scanning fails to open any devices, refresh the
device paths in dev cache, and retry the opens.
2018-05-25 13:09:07 -05:00
9a730233c9 format_text: Use versionsort to sort archive files
Ensure that vg_100000-* follows vg_99999-* so that the expiry logic
doesn't stop too early.

   https://bugzilla.redhat.com/1481085
2018-05-24 17:51:03 +02:00
0ecf232194 Merge remote-tracking branch 'origin/master' 2018-05-24 17:32:42 +02:00
3702f39ef3 tests: improve usability on older systems 2018-05-24 16:02:31 +02:00
d6f2445996 man: another missed typo for thin plugin 2018-05-24 16:02:31 +02:00
264077907e post-release 2018-05-24 15:23:08 +02:00
adae8ee1c2 pre-release 2018-05-24 15:13:10 +02:00
7e85361c34 release note: typos 2018-05-24 12:32:16 +01:00
fab063cfcb release note: typo 2018-05-24 12:26:34 +01:00
9337ff48bc release note: 2.02.178 2018-05-24 12:22:11 +01:00
a90de76fd8 tests: checking scanning correctness 2018-05-24 11:22:32 +02:00
f865e1bf87 tests: passthrough args with extend_filter_LVMTEST
Don't rebuild config twice.
2018-05-24 11:22:59 +02:00
89f34eaf0c tests: correcting symlink manipulation
Fix symlink and add 'verbose' pvs for a while for checking
scanning correctness.
2018-05-24 11:22:32 +02:00
76a45424a7 tests: aux improve for mdadm support
Correcting some symlink handling.
2018-05-24 11:03:47 +02:00
c46dbfb14e man: make generate 2018-05-23 19:46:47 +02:00
4be1ec3da4 man: fix cut and paste bug
Fixing missing 'META' in DMEVENTD_THIN_POOL_METADATA.
2018-05-23 19:45:53 +02:00
c35d3242a8 gitignore 2018-05-23 16:53:18 +02:00
6cd798f556 radix_tree_t: knock out some debug 2018-05-23 12:54:02 +01:00
b7fd8ac8eb radix_tree: add remove method 2018-05-23 12:48:06 +01:00
87291a2832 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-23 09:14:29 +01:00
61583281e5 filters: clarify some parts of md filter
Rename some functions to be consistent with the return values,
and add some comments about how it works.
2018-05-22 14:07:13 -05:00
a60416a13f WHATS_NEW: typo 2018-05-22 09:46:59 +01:00
3c9ed33f83 scan: move warnings about duplicate devices
We have been warning about duplicate devices (and disabling lvmetad)
immediately when the dup was detected (during label_scan).  Move the
warnings (and the disabling) to happen later, after label_scan is
finished.

This lets us avoid an unwanted warning message about duplicates
in the special case where md components are eliminated during the
duplicate device resolution.
2018-05-21 16:48:02 -05:00
73ae68e1c4 man vgexport: expand description 2018-05-21 16:26:49 -05:00
6029d6d8d8 tests: disable symlink test
It's quite unclear what the test is meant to do - disable it, just like
in the python test.
2018-05-21 11:59:39 +02:00
25a66737e3 tests: use 4K extent size
To work with 4k backend devices.
2018-05-21 11:58:10 +02:00
a9f2c1e1f5 lvmlockd: suppress error messages related to lvmetad
Log lvmetad related messages as debug, not as errors,
when using lvmlockd without lvmetad.
2018-05-18 16:00:54 -05:00
bc275bcddf fullreport: fix with lvmetad and only orphan PVs are visible
The report uses process_each_vg() which populates lvmcache
based on a VG list from lvmetad.  If there are no VGs,
but only orphan PVs, the orphans are not shown.  Add an
explicit call to populate lvmcache with PV info from lvmetad.
2018-05-18 14:31:52 -05:00
0253f5a21d fix id_write_format on non-uuid string
orphan vgs using the vgname "#orphans" as the vgid,
and valgrind complains about calling id_write_format
on that invalid uuid.
2018-05-18 13:41:20 -05:00
b2574c2f3a python: use // for integer division 2018-05-18 16:25:44 +02:00
3bbdde808a tests: pick either python2 or python3 .so
Use matching PYTHON library implementation.
2018-05-18 16:25:44 +02:00
fbf64fe730 tests: make sure python_lvm_unit.py is executable 2018-05-18 16:25:44 +02:00
43fb32e761 python: use python3 paths directly
Do not use /usr/bin/env for path of python3 as this is seen
as 'unwanted' and should be avoided.
2018-05-18 16:25:44 +02:00
5b86b0e3dc build: set clean vars earlier
For better cleaning of test dirs.
2018-05-18 16:25:44 +02:00
f7435cd8c7 liblvm2app: add a couple tests
trivial sanity-check programs using liblvm2app
2018-05-17 15:55:44 -05:00
286c9c78b4 liblvm2app: fix valgrind memory warning 2018-05-17 15:18:11 -05:00
a39eaea27d tests: fix kernal_at_least argument in aux.sh 2018-05-17 14:41:47 +02:00
5052970da3 bcache: Don't call sysconf for every io 2018-05-17 10:05:10 +01:00
7ee0a6e44d Merge branch 'master' of git://sourceware.org/git/lvm2 2018-05-17 09:52:57 +01:00
3417d6229d scripts/code-stats.rb: count files better, handle bad utf8 2018-05-17 09:52:13 +01:00
c6ca81a38d bcache: don't use PAGE_SIZE compile const
PAGE_SIZE is not a compile time constant. Use sysconf instead like
elsewhere in the code.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2018-05-17 10:38:16 +02:00
8c453e2e5e cleanup: fix grammar in output - less then -> less than
This minor patch fixes grammar in a few messages which get
printed to users. It also fixes the same grammar mistake in
several comments.

Signed-off-by: Rick Elrod <relrod@redhat.com>
--
2018-05-17 10:37:45 +02:00
28d35e5c59 scan: fix missing close in lib
lib was using dev_test_excl which wasn't closing the device.
Switch code to new io layer with excl open.
Also use exclusive open in some other places.
2018-05-16 14:48:30 -05:00
64dd656ef7 scripts: add a little script to show git history for the last 2 weeks. 2018-05-16 15:27:52 +01:00
89fdc0b588 Merge branch 'master' into 2018-05-11-fork-libdm 2018-05-16 13:43:02 +01:00
ccc35e2647 device-mapper: Fork libdm internally.
The device-mapper directory now holds a copy of libdm source.  At
the moment this code is identical to libdm.  Over time code will
migrate out to appropriate places (see doc/refactoring.txt).

The libdm directory still exists, and contains the source for the
libdevmapper shared library, which we will continue to ship (though
not necessarily update).

All code using libdm should now use the version in device-mapper.
2018-05-16 13:00:50 +01:00
7c852c75c3 unit-tests: remove a couple of debug printfs 2018-05-16 10:25:30 +01:00
e296f784c9 Merge branch 'master' of git://sourceware.org/git/lvm2 2018-05-16 10:11:58 +01:00
df2acbbb97 bcache: nr_ios_pending wasn't being incremented
... but it was being decremented on completion.  Which meant
it wrapped, and no prefetches were ever issued after the
first completion.
2018-05-16 10:09:17 +01:00
ed799404f8 doc: add some performance info 2018-05-15 15:17:36 -05:00
3bbc17a670 scan: use up to 1024 max bcache blocks
Create bcache with one block per device to be scanned,
up to a maximum of 1024 blocks.
2018-05-15 15:17:31 -05:00
fb0aca86f8 lvmapp: do not unlock VGs that are not locked
After recent changes this seems to give some help; explore more...
2018-05-15 22:02:41 +02:00
99cd7108d3 tests: better check for python libpath
Also find the python3 lvm.so name.

And for now just run a single test, otherwise we get too many cores.
2018-05-15 22:02:41 +02:00
f8745dc23e python: specify libdm path for linking 2018-05-15 22:02:41 +02:00
550380c1a4 tests: aux fixes
Properly check for kernel version.
Also detect sysfs throttling support.
2018-05-15 22:02:41 +02:00
3b3ee66b1f tests: time limit waiting on lvmetad kill 2018-05-15 22:02:41 +02:00
b5da4fdfce tests: drop junk 2018-05-15 22:02:41 +02:00
be154e30e8 tests: move into generated file
Since the python path is evaluated and we can no longer use /usr/bin/env,
switch to a generated file.
2018-05-15 22:02:41 +02:00
ad756bb708 build: configure detect libaio
There is no point starting to build lvm without this header file.

Although there could be 'some point' in supporting a standalone build
of 'just' libdm, where libaio might be avoided.

TODO: think about a configure option for building libdm only.
2018-05-15 22:02:41 +02:00
c1abcee142 WHATS_NEW: updates 2018-05-15 10:49:06 -05:00
889558fedb conf: update conf
Matching patch 2eba7c7755
2018-05-15 16:58:28 +02:00
d25c135806 tests: fix size of COW
Needs to be changed to match 4K extent_size.
2018-05-15 16:49:53 +02:00
0217c53b24 tests: don't try to use DAX-based brd device
Unfortunately on kernels <4.16 lvm2 can't use brd ramdisks
as the backend device, as a number of tests fail with this kernel
message:

device-mapper: ioctl: can't change device type after initial table load.

This is caused by DAX request-based handling: lvm2 tries to replace the
device with a bio-based 'error' backend device, and such a table reload
is rejected.

So for now keep the ramdisk only on the most recent kernels to experiment a bit;
for older machines just stay safe and keep the old, slower loop backend.
2018-05-15 16:07:13 +02:00
2eba7c7755 clean-up: example.conf.in typo 2018-05-14 16:17:01 -05:00
11ceb77867 lvmcache: fix loop freeing infos
valgrind was concerned about the loop through vginfo->infos,
so grab the info from dev.
2018-05-14 13:45:55 -05:00
517d6cc418 scan: add some missing frees
some objects had been moved out of mem pools.
2018-05-14 13:38:16 -05:00
7f97c7ea9a build: Don't generate symlinks in include/ dir
As we start refactoring the code to break dependencies (see doc/refactoring.txt),
I want us to use full paths in the includes (eg, #include "base/data-struct/list.h").
This makes it more obvious when we're breaking abstraction boundaries, eg, including a file in
metadata/ from base/
2018-05-14 10:30:20 +01:00
0e56fa6892 tests: old systems do not even have throttling
When throttling is not available, skip the test or use 'should'
with the particular test piece.
2018-05-12 23:37:30 +02:00
0a5edc1f12 tests: switch to mkstemp
As mkostemp is only on newer systems, switch to the older version,
which effectively does exactly the same thing for the given list of
open flags.
2018-05-12 23:23:54 +02:00
9640320aea tests: start to use 4k mkfs
While newer systems can detect the need for 4K mkfs, older test machines
report problems when running the test suite over 4K devices.
A more generic solution is needed, though.
2018-05-12 23:22:20 +02:00
ca87674ea4 tests: fix check sysfs
Commit 810f856c24 failed to move
the assignment of P after setting maj & min.
2018-05-12 23:01:52 +02:00
edede1d20f tests: do not try to create 1K extents 2018-05-12 22:52:41 +02:00
093428b067 tests: restore functionality
A forgotten revert of the tracing patch. Restore the previous functionality.
2018-05-12 22:51:43 +02:00
7b8b13c62b tests: aux detects supported segments
Replace the previous compile-time detection of supported segtypes
with a runtime check.
2018-05-12 22:50:36 +02:00
35ffc3f8eb build: lcov reporting for unit tests
Also list lcov coverage for processed unit tests.
2018-05-12 18:18:23 +02:00
67c02877a1 build: install unit-test 2018-05-12 18:18:23 +02:00
4c7565b65d tests: add unit-test
Allow unit-test to be run as part of standard 'make check'.
2018-05-12 18:18:23 +02:00
fa8d0b5766 tests: detect running bcache test on tmpfs
When the test happens to run on tmpfs, it cannot use O_DIRECT (unsupported
on tmpfs).

CHECKME: unsure if this detection of tmpfs is 'valid', but it kind of works and
is very simple.
2018-05-12 18:18:23 +02:00
79b2961399 build: rename device-mapper to device_mapper
As the Makefiles already use a target named 'device-mapper',
rename this new device-mapper dir to a non-conflicting name.
We also already seem to use '_' in other dir names.

Also rename device_mapper/Makefile to Makefile.in, the source from which
the Makefile is generated, so it can be used properly for builds in other source dirs.
2018-05-12 18:18:23 +02:00
e2c766d37e build: fix build rules for srcdir
It's very hard to use some 'non-recursive' Makefiles while the
rest of the system builds 'recursively'.

So for now drop the inclusion of the subdir makefile and add support
for 2 new top-level targets:

unit-test  (builds the test/unit dir)
run-unit-test (builds & runs test/unit/unit-test)
2018-05-12 18:18:23 +02:00
ac768a9d2b bcache: do not use libdm header files
Logging for libdm differs from lvm logging - keep using consistent
logging function calls.
2018-05-12 18:18:23 +02:00
83e362cd32 build: make generate 2018-05-12 18:18:23 +02:00
0b465d1543 tests: drop cache checking
Just like 52656c89fd,
now that cache is compiled in unconditionally.

This patch is actually forced by changes in
commit 2bc896f2a3,
where the CACHE value is no longer set.
2018-05-12 18:18:23 +02:00
d38a2d64f0 tests: add support to run unit test 2018-05-12 18:18:23 +02:00
7616a7f46e build: properly track source file for lvmlockctl
Ensure the source file is tracked by various cleanup functions.
2018-05-12 18:05:50 +02:00
cbe81a0b05 tests: inittest may run without root
If the test does not need root, it can use 'SKIP_ROOT_DM_CHECK'.

For such a test, no actions that need root to initialize DM devices and
nodes are taken, and the test can check e.g. functional unit tests.
2018-05-12 18:05:50 +02:00
0221ebfd64 tests: inittest compare string
Avoid logging a warning when the string compared with -eq is empty.
2018-05-12 18:05:50 +02:00
a7a23e7dd2 tests: aux extra protection for rm -rf 2018-05-12 18:05:50 +02:00
38b4354494 tests: again disable this raid test
Still kills testing machines even with 4.17-rc4 kernels
on reshaping.
2018-05-12 18:05:50 +02:00
ec0f5c2bf6 tests: drop delaying
The delaying dev seems to have no use here.
2018-05-12 18:05:50 +02:00
86c8f0f01f tests: using throttling 2018-05-12 17:48:31 +02:00
7362ed68be tests: move device discard 2018-05-12 17:48:31 +02:00
f5da325d70 tests: use throttle_dm_mirror
In this case it's better to throttle the mirror sync
than to delay everything with dm_delay.
2018-05-12 17:48:31 +02:00
172d8fb355 tests: aux support throttling of dm mirror
Usage of dm_delay appears to slow down not just the 'delayed' portion
of the device; because it also slows down ANY flush
operation on such a device, its overall speed impact is huge.

In some cases, however, we can use other methods to slow down disk writes.
For the old dm 'mirror' target we can throttle the I/O of mirror
synchronisation, giving the next commands enough time to test a couple of
race conditions.

Usage:

throttle_dm_mirror [percentage]

Throttles down the sync speed (lowest is '1', which is also the default when
unspecified)

restore_dm_mirror

Restores the throttling value from before the call to 'throttle_dm_mirror'.
Usually it should be '100'.
2018-05-12 17:48:31 +02:00
0cadfdd69d tests: try running tests over ramdisk
Currently, using a loop device over a backend file in a ramdisk (tmpfs)
causes unnecessary memory consumption, since just
reading such a loop device causes RAM provisioning.

This patch adds another possible way to use a ramdisk directly
through the 'brd' device when possible (and allowed).

This however has its limitations as well - brd does not support
TRIM, so the only way to erase it is to remove the brd module??

Also there is a 4K sector size limitation imposed by the ramdisk.

Anyway - for some mirror tests that were using large amounts of
disk space (tens of MB) this brings a noticeable speed boost.
(But it could be worth solving the slowness of loop in the kernel?)

To prevent using 'brd' for testing set LVM_TEST_PREFER_BRD=0
like this:

make check_local LVM_TEST_PREFER_BRD=0
2018-05-12 17:48:31 +02:00
842b3074b7 tests: crypt test cannot run on ramdisk
This test can't use brd (ramdisk) as a backend, since for some
weird reason lsblk does not list these devices.

TODO: the test could probably be rewritten to avoid using lsblk somehow??
2018-05-12 17:48:31 +02:00
6f48741062 tests: happily use 4K backend devices
When the backend device supports only 4K blocks (like a ramdisk)
we cannot use any smaller blocksize for testing.

So recalculate the test for a 4K extent size.

We may possibly introduce one extra test that
can be executed on devices with 512b sectors to
check lvm2 supports those minimum extent sizes...
2018-05-12 17:48:31 +02:00
e2be14e2d5 tests: raise min size for XFS
Seems XFS now requires at least 1605 blocks.
2018-05-12 17:48:31 +02:00
6740c78e83 poll: add stdout fflush after poll query
For now it's a bit ugly to enforce flushing of 'stdio' here, but it works as a quick
hot-fix.

log_print*() is using buffered I/O.

But for polling with the typical 1s interval it may take a while before
the buffer containing progress output gets flushed.
So for now, fflush().

TODO: either add log_print*_with_flush() or maybe directly use just
line buffering with log_print() and keep only log_debug() using buffered
I/O mode.
2018-05-12 11:30:05 +02:00
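A minimal sketch of the flush-after-poll idea described above, assuming a
hypothetical _print_poll_progress() helper standing in for the log_print()
path (this is not lvm2's actual polling code):

  #include <stdio.h>
  #include <unistd.h>

  /* Print one progress line per poll interval and push it out immediately,
   * instead of waiting for the stdio buffer to fill. */
  static void _print_poll_progress(const char *lv_name, unsigned percent)
  {
          printf("%s: converted %u%%\n", lv_name, percent);
          fflush(stdout);     /* the quick hot-fix: flush after every poll */
  }

  int main(void)
  {
          unsigned p;

          for (p = 0; p <= 100; p += 25) {
                  _print_poll_progress("vg/lv0", p);
                  sleep(1);   /* typical 1s polling interval */
          }

          return 0;
  }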
09fcc8eaa8 scan: ignore duplicates that are md component devs
md devices using an older superblock version have
superblocks at the end of the md device.  For commands
that skip reading the end of devices during filtering,
the md component devs will be scanned, and will appear
as duplicate PVs to the original md device.  Remove
these md components from the list of unused duplicate
devices, so they are treated as if they had been
ignored during filtering.  This avoids the restrictions
that are placed on using PVs with duplicates.
2018-05-11 15:52:22 -05:00
73578e36fa dev_cache: remove the lvmcache check when closing fd
This is no longer used since devices are not held
open in dev_cache.
2018-05-11 14:30:10 -05:00
3e3cb22f2a dev_cache: fix close in utility functions
All these functions are now used as utilities,
e.g. for ioctl (not for io), and need to
open/close the device each time they are called.
(Many of the opens can probably be eliminated by
just using the bcache fd for the ioctl.)
2018-05-11 14:25:08 -05:00
5c9dcd99fd scan: remove unused args from label_read 2018-05-11 14:16:49 -05:00
b5d9914628 devs: recognize md devices in subsystem check
If md components appear as duplicate PVs, let the
existing subsystem check recognize the md device.
2018-05-11 14:00:19 -05:00
ccab54677c dev_cache: fix close in dev_get_block_size 2018-05-11 13:53:19 -05:00
bbb8040456 dev_cache: drop open_list
devices are now held open only in bcache,
so drop the dev_cache list of open devices
which is unused.
2018-05-11 12:47:56 -05:00
4362013872 bcache: disable fallback to old io
All io has been converted to bcache.
2018-05-11 11:35:56 -05:00
228ed56455 pvck: allow checking at user specified offsets
with the --labelsector option.  We probably don't
need all this code to support any value for this
option; it's unclear how, when, why it would be
used.
2018-05-11 11:23:51 -05:00
02b99be57e Revert "Revert "build: Calculate dependencies at same time as compiling.""
This reverts commit ed837e6971.
2018-05-11 14:40:05 +01:00
413488edc6 radix-tree: fix a function decl 2018-05-11 11:40:47 +01:00
30a4c7988e radix-tree: remove some unneccessary includes 2018-05-11 09:46:34 +01:00
0a31fb4aa3 doc: add a little document describing new directory structure. 2018-05-11 06:46:25 +01:00
576dd1fc41 radix-tree: First drop of radix tree.
An implementation of an adaptive radix tree.  Has the following nice
properties:

  - At least as fast as the hash table
  - Uses less memory
  - You don't need to give an expected size when you create it
  - It scales nicely (ie. no large reallocations like the hash table).
  - You can iterate the keys in lexicographical order.

Only insert and lookup are implemented so far.  Plus there's a lot
more performance to come.
2018-05-11 06:10:01 +01:00
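A minimal usage sketch of the radix tree keyed by byte ranges; the header
path, the exact signatures and the union radix_value type are assumptions
based on the description above, not copies of the committed header:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include "base/data-struct/radix-tree.h"

  int main(void)
  {
          struct radix_tree *rt = radix_tree_create();
          const char *key = "vg0/lv0";
          uint8_t *kb = (uint8_t *) key, *ke = kb + strlen(key);
          union radix_value v, out;

          /* Keys are arbitrary byte ranges [kb, ke); values are a small union. */
          v.n = 42;
          radix_tree_insert(rt, kb, ke, v);

          if (radix_tree_lookup(rt, kb, ke, &out))
                  printf("found %llu\n", (unsigned long long) out.n);

          radix_tree_destroy(rt, NULL, NULL);
          return 0;
  }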
3b02b35c3e Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-11 05:39:27 +01:00
5f780813f2 bcache/sync io engine: handle short ios 2018-05-11 05:37:47 +01:00
9ad42e5f06 io: write log header with bcache 2018-05-10 16:25:33 -05:00
d974644db7 pvscan: remove unused var warning 2018-05-10 16:18:36 -05:00
57bb46c5e7 filter: use bcache for filter reads
Filters are still applied before any device reading or
the label scan, but any filter checks that want to read
the device are skipped and the device is flagged.

After bcache is populated, but before lvm looks for
devices (i.e. before label scan), the filters are
reapplied to the devices that were flagged above.
The filters will then find the data they need in
bcache.
2018-05-10 16:03:19 -05:00
39ce38eb88 label/lv_manip: squash some warnings 2018-05-10 15:14:39 +01:00
3c0f5bdd08 functional-tests/vdo: fix mem leak in test 2018-05-10 14:31:16 +01:00
ae50374811 bcache: Add sync io engine
Something to fall back to when testing.
2018-05-10 14:29:26 +01:00
67b80e2d9d bcache: knock out err param.
Dave used this for debugging.  Not needed in general.
2018-05-10 13:26:08 +01:00
2b96bb403c Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-10 13:02:27 +01:00
2ae4a04710 vdo status: Unit tests + fix bugs 2018-05-10 13:01:26 +01:00
e649f71022 Merge branch 'master' into 2018-04-30-vdo-support 2018-05-10 12:34:04 +01:00
38f33251b1 doc: add filter info to scanning 2018-05-09 12:54:38 -05:00
9a5bd01b0c io: replace dev_set with bcache equivalents 2018-05-09 11:29:52 -05:00
3600caa71d Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-09 11:07:24 +01:00
1c5c99afce bcache-utils: bcache_set_bytes() 2018-05-09 11:05:29 +01:00
2e1869b923 unit-test/bcache-utils: Tweak zero tests 2018-05-09 10:50:31 +01:00
a2310e2de0 doc: lvm disk reading 2018-05-04 10:54:29 -05:00
c9729022bf tests: bump raid target version in reshape tests
Adjust to target version allowing tests to succeed.
2018-05-04 16:58:11 +02:00
8bf92875f7 tests: don't rely on cache target in component-raid.sh
It led to unnecessary skips of the test.
2018-05-04 16:54:01 +02:00
d2840b0ec1 Merge branch 'master' into 2018-04-30-vdo-support 2018-05-04 13:32:07 +01:00
bc50dc6e70 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-04 09:49:55 +01:00
ed837e6971 Revert "build: Calculate dependencies at same time as compiling."
This reverts commit 0931067dc5.

The dep files should be in the build dir, which is not necessarily the src dir.

Easy to fix, but reverting for now until I have time to revisit.
2018-05-04 09:48:40 +01:00
f4a60fe004 clvmd: saved_vg code and comment formatting 2018-05-03 14:54:48 -05:00
822a8b62be clvmd: don't save cft and buf for saved_vg 2018-05-03 14:54:48 -05:00
c016b573ee clvmd: separate saved_vg from vginfo
The clvmd saved_vg data is independent from the normal lvm
lvmcache vginfo data, so separate saved_vg from vginfo.
Normal lvm doesn't need to use saved_vg at all, and in clvmd,
lvmcache changes on vginfo can be made without worrying
about unwanted effects on saved_vg.
2018-05-03 14:54:48 -05:00
a5e13f2eef clvmd: defer freeing saved vgs
To avoid the chance of freeing a saved vg while another
code path is using it, defer freeing saved vgs until
all the lvmcache content is dropped for the vg.
2018-05-03 14:54:48 -05:00
88fe07ad0a raid: use new internal APIs
Use APIs introduced with commit 4ebfd8e8eb
where appropriate to minimize redundant code.
2018-05-03 21:36:50 +02:00
49db9b5e0b Merge branch '2018-05-03-improve-bcache-utils' 2018-05-03 20:15:13 +01:00
ac18164a52 unit-test: a bunch of tests for bcache-utils 2018-05-03 20:13:13 +01:00
4ebfd8e8eb lvconvert: don't return success on degraded -m raid1 conversion
In case "lvconvert -mN RaidLV" was used on a degraded
raid1 LV, success was returned instead of an error.

Provide a message informing about the need to repair first
before changing the number of mirrors, and exit with an error.

Add new lvconvert-m-raid1-degraded.sh test.

Resolves: rhbz1573960
2018-05-03 18:48:00 +02:00
b393fbec00 configure.ac: bad configure generated due to stray ;; 2018-05-03 15:38:05 +01:00
2bb02e24bf Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-03 14:23:12 +01:00
52656c89fd functional tests: Update have_raid function
No need to check if it's built in.
2018-05-03 14:22:24 +01:00
9cab005797 configure.ac: Remove some more remnants of optional RAID
(It's now always 'internal')
2018-05-03 14:21:21 +01:00
dfc320f5b8 bcache-utils: rewrite
They take care to avoid redundant reads now.
2018-05-03 11:36:29 +01:00
2688aafefb bcache: rename bcache_write_zeroes() -> bcache_zero_bytes()
Now matches the other util functions:

bcache_{prefetch,read,write,zero}_bytes()
2018-05-03 10:21:14 +01:00
8b755f1e04 bcache: rewrite bcache_write_zeros()
It now uses GF_ZERO to avoid reading blocks that are going to be
completely zeroed.
2018-05-03 10:14:56 +01:00
dc30d4b2f2 bcache: switch off_t -> uint64_t
We always want it to be 64bit
2018-05-03 09:37:43 +01:00
efad84ebc2 bcache: Move the utils to a separate file.
This makes it clearer that they don't access the cache internals.
2018-05-03 09:34:41 +01:00
b3c41bce3d bcache: add bcache_block_sectors() query fn 2018-05-03 09:33:55 +01:00
65912ce44d bcache: add a comment 2018-05-03 09:21:10 +01:00
977d0a3613 filters: increase MAX_FILTERS for new filter
The new signature filter was added without increasing this.
2018-05-02 14:10:30 -05:00
90d0ff6636 bcache: reorder includes in .c file too 2018-05-02 19:45:06 +01:00
8fd300f7df device/bcache: reorder includes 2018-05-02 18:59:43 +01:00
972b535220 build: add -D_FILE_OFFSET_BITS=64
I don't like having this in a common header because it means you end
up including too much and causing unnecessary dependencies.  eg,
lib/misc/lib.h includes libdevmapper.h, internationalisation, and
logging stuff.
2018-05-02 18:40:38 +01:00
9fe0be871c unit-test/matcher_t: Fixup Kabi's test
The matcher matches the regexes in reverse order.
2018-05-02 13:53:43 +01:00
506ab29bfd unit-test/matcher_t: add another (failing!) test for Kabi 2018-05-02 13:31:57 +01:00
6abc3f10ae vdo: get status parser compiling 2018-05-02 11:15:35 +01:00
11d9b0cae7 Merge branch 'master' into 2018-04-30-vdo-support 2018-05-02 10:09:20 +01:00
11436b00e0 tests: add gfs-pool test
Put back a test like the old one that was removed
in d709d8445f.

It verifies that lvm will ignore and not use a
gfs-pool device.
2018-05-01 15:24:42 -05:00
24e7745d7a devices: ignore lvm1 and pool devices 2018-05-01 15:18:47 -05:00
db0560c1b0 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-01 20:04:30 +01:00
1553993ea1 Revert "build: Stop creating the symlinks in include/ on the fly."
This reverts commit cdcea0bf55.
2018-05-01 20:03:51 +01:00
39f05855c0 tests: remove use of lvm1 metadatatype 2018-05-01 13:29:57 -05:00
d709d8445f tests: remove gfs pool test 2018-05-01 13:25:40 -05:00
9687ee2a74 tests: update lvmetad-disabled to not use lvm1 2018-05-01 11:33:39 -05:00
8dcc973bbb bcache_write_bytes needs to be followed by flush
The improved bcache_write_bytes is not flushing, so
the caller needs to do that.
2018-05-01 09:33:55 -05:00
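A minimal sketch of the caller-side pattern this implies; the bcache handle
and fd are assumed to be already set up, and the exact signatures are
assumptions based on the names above rather than the committed header:

  #include <stdbool.h>
  #include <stdint.h>
  #include "lib/device/bcache.h"

  /* Write some bytes through bcache, then flush explicitly: the improved
   * bcache_write_bytes() only dirties cache blocks, so nothing reaches the
   * disk until the caller flushes. */
  static bool _write_and_flush(struct bcache *cache, int fd,
                               uint64_t offset, void *buf, size_t len)
  {
          if (!bcache_write_bytes(cache, fd, offset, len, buf))
                  return false;

          /* The caller is now responsible for flushing dirty blocks. */
          return bcache_flush(cache);
  }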
a418f88b76 lvmcache: fix typo in lvmcache_get_saved_vg 2018-05-01 09:06:57 -05:00
3ea862bdfc unit-test/bcache_t: test was using too large a block size 2018-05-01 14:17:12 +01:00
bfc61a9543 bcache: squash some warnings on rhel6 2018-05-01 13:21:53 +01:00
de042fa13d unit-test/bcache_t: Use a stripped down fixture for some tests 2018-05-01 12:54:57 +01:00
61153d90e5 build: update ./configure and configure.h.in
Fallout from Dave's removal of format1 and pool.
2018-05-01 12:12:07 +01:00
f564e78d98 bcache: rewrite bcache_{write,zero}_bytes
These are utility functions so should only use the public interface.

Also write_bytes was flushing, which will kill performance.
2018-05-01 12:07:33 +01:00
c863c9581d Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-05-01 10:48:42 +01:00
7aba7fe68b unit-test/io_engine_t: add a little test for bcache_{read,write}_bytes 2018-05-01 10:47:40 +01:00
f6459757af unit-test/bcache_t: fixup a test.
Problem found with valgrind.
2018-05-01 09:17:55 +01:00
c1cd18f21e Remove lvm1 and pool disk formats
There are likely more bits of code that can be removed,
e.g. lvm1/pool-specific bits of code that were identified
using FMT flags.

The vgconvert command can likely be reduced further.

The lvm1-specific config settings should probably have
some other fields set for proper deprecation.
2018-04-30 16:55:02 -05:00
029a76b4f8 clvmd: don't repair vg from vg_read in clvmd
The mixed up vg repair code in vg_read was trying
to repair a vg when vg_read was called by clvmd.
The clvmd daemon isn't supposed to be repairing
or writing a vg.

(This is a temporary workaround; vg repair will soon
be pulled out of vg_read so it can be called in a
controlled way and consolidated instead of spread
around.)
2018-04-30 15:56:51 -05:00
c365d7de4f tests: fix THIN built-in check 2018-04-30 13:12:17 -05:00
89935ace29 clvmd: keep old saved_vg if it matches new
There is no need to release the old saved_vg
if it matches the new version.
2018-04-30 13:03:15 -05:00
39f24a169c unit-test/io_engine_t: Improve the read test.
Now verifies what it reads.
2018-04-30 17:09:24 +01:00
ef79d639fe unit-test/io_engine_t: use posix_memalign() rather than aligned_alloc()
Not present on older systems.
2018-04-30 16:55:19 +01:00
cca815d240 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-04-30 16:33:57 +01:00
1b08797419 configure: Remove --enable-testing 2018-04-30 16:31:33 +01:00
52ebad31ba vdo: Code drop for status parsing.
Doesn't even compile yet.  Squash this patch.
2018-04-30 16:16:58 +01:00
1ddbbb67e0 build: fix typo in dmeventd/plugins/Makefile.in 2018-04-30 15:31:57 +01:00
bdf7479449 toollib: fix wrong dev reference in process_each_label 2018-04-30 09:08:40 -05:00
9384b2b5c5 build: Remove unused Makefiles from configure.ac
Should have been in earlier patch.
2018-04-30 14:58:45 +01:00
2bc896f2a3 build: remove --with-{snapshots,mirrors,raid,thin,cache} options from ./configure
It now behaves as if they were all set as 'internal'.
2018-04-30 10:11:23 +01:00
545ca59468 Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 2018-04-30 09:56:04 +01:00
0a2b5d5748 [scripts] remove scripts/vg_convert
- it doesn't do anything other than tell you to run vgconvert
- it used to convert from lvm1 format, which is obsolete
2018-04-30 09:46:05 +01:00
65d6118e47 [metadata-liblvm.c] comment out some dead code and add a FIXME 2018-04-30 09:45:39 +01:00
513e9e3264 [lvmetad.h] Use static inline functions to stub out functions.
The macros were causing warnings because the arguments were perceived as
unused.
2018-04-30 09:45:13 +01:00
475626fb6c [build] uncomment 'serial 3' in an m4 file.
Squashes another autoreconf warning
2018-04-30 09:44:27 +01:00
865a9c5873 build: rename configure.in -> configure.ac
Squashes a warning from autotools
2018-04-30 09:42:11 +01:00
b904d6653d tests: add also snapshot monitoring 2018-04-30 10:41:51 +02:00
fade45b1d1 mirror: improve table update
Shift the refresh of the mirror table right into monitor_dev_for_events().
Use  !vg_write_lock_held() to recognize use of lvchange/vgchange.
(This shall change if it no longer works, but that requires
some further API changes.)

With this patch the dm mirror table is only refreshed when necessary.

Also update the WARNING message about mirror usage without monitoring
and display the LV name.
2018-04-30 10:41:51 +02:00
dd7ac793a0 aux: enhance teardown to better handle weird names
When 'dmsetup' reports results with --nameprefixes it currently
incorrectly 'escapes' problematic characters.

Letting such a string pass through the shell 'eval' function is a hard task.
So instead cut away the substring.

Once dmsetup starts to properly escape backslash and apostrophe,
this function may need further tuning.
2018-04-30 10:41:51 +02:00
877c2f2ffb Merge branch 'master' of git+ssh://sourceware.org/git/lvm2 into merge 2018-04-30 09:34:12 +01:00
0931067dc5 build: Calculate dependencies at same time as compiling.
Speeds up the build slightly.
2018-04-30 09:32:14 +01:00
138225a3a8 test: remove pv-duplicate
This wasn't testing duplicate PVs, which are tested by
process-each-duplicate-pvs.sh.
2018-04-27 16:25:41 -05:00
ab63923d19 unit-tests: Move to test/unit 2018-04-27 16:55:07 +01:00
cdcea0bf55 build: Stop creating the symlinks in include/ on the fly.
Git handles symlinks, tar handles symlinks.  So I've just put the
links themselves into git.

This simplifies dependencies a little, and stops some build loops I was
hitting.

External build dir now works too.
2018-04-27 16:06:59 +01:00
5c878167a2 Revert "build: Stop creating the symlinks in include/ on the fly."
This reverts commit f8f6219513.

It wasn't taking builds outside the src dir into account.
2018-04-27 15:30:08 +01:00
f8f6219513 build: Stop creating the symlinks in include/ on the fly.
Git handles symlinks, tar handles symlinks.  So I've just put the
links themselves into git.

This simplifies dependencies a little, and stops some build loops I was
hitting.
2018-04-27 15:12:15 +01:00
54856b2965 bcache: write some sanity checks for the async io engine
Mainly checks that aio is installed properly.
2018-04-27 14:24:05 +01:00
e890c37704 [bcache] Some work on bcache_invalidate()
bcache_invalidate() now returns a bool to indicate success.  It fails
if the block is currently held, or the block is dirty and writeback
fails.

Added a bunch of unit tests for the invalidate functions.

Fixed some bugs to do with invalidating errored blocks.
2018-04-27 10:56:13 +01:00
8a14b8a733 [bcache] Add some unit tests for invalidate block.
Trying to identify dct's lockup.
2018-04-27 09:12:57 +01:00
5b6e62dc1f clvmd: drop old saved_vg when returning new saved_vg
In some pvmove tests, clvmd uses the new (precommitted)
saved_vg, but then requests the old saved_vg, and
expects that the new saved_vg be returned instead of
the old.  So, when returning the new saved_vg, forget
the old one so we don't return it again.
2018-04-26 14:57:45 -05:00
cdb8400de2 scan: refresh filters before scan
The filters save information about devices that should
be ignored, so if we need to repeat a scan  (unusual,
but happens in clvmd), we need to update the filters.
2018-04-26 14:48:13 -05:00
1c97fda425 [bcache] get all unit tests passing again 2018-04-26 13:13:27 +01:00
ea34dad66f [unit-test] Push the new unit test framework.
See doc/unit-test.txt for details.

Some bcache tests are failing.  Probably due to dct changing semantics; will
fix in a follow-up patch.
2018-04-26 11:59:39 +01:00
c7fdacbc50 pvmove: in fork mode destroy bcache in child
When pvmove was run in background mode and forks
instead of using lvmpolld, the child pvmove process
was not clearing the bcache from the parent, so all
the aio ops in the child were failing.
2018-04-25 16:40:36 -05:00
0fe4f65f65 scan: don't use cmd mem pool in scan
Make it consistent with all the other allocations
in scanning.
2018-04-25 16:40:08 -05:00
4670e9f698 skip some clvmd-specific code in common cases
This, or something like it, can probably be done
in many other places.
2018-04-25 16:40:08 -05:00
47bfac21ca clvmd: skip dev rescan after full scan
When clvmd does a full label scan just prior to
calling _vg_read(), pass a new flag into _vg_read
to indicate that the normal rescan of VG devs is
not needed.
2018-04-25 16:39:43 -05:00
1fec86571f clvmd: reuse a vg struct for sequential LV operations
After reading a VG, stash it in lvmcache as "saved_vg".
Before reading the VG again, try to use the saved_vg.
The saved_vg is dropped on VG lock operations.
2018-04-25 16:39:43 -05:00
f8616ac2d8 lvmcache: rename suspended_vg to saved_vg
The copy of the VG which clvmd stashes in lvmcache should
not only be used between suspend and resume, but between
sequential LV operations in clvmd, so that clvmd does not
need to reread the VG for each one.  Prepare for that by
renaming the stashed VG as "saved_vg".
2018-04-25 16:39:43 -05:00
28a9fcd94b Merge remote-tracking branch 'sourceware/master' into upstream 2018-04-25 09:18:42 +01:00
dcb5434a7f tests: more zero usage
Another case where using a zero backend for mirror legs is more
effective than using delayed_dev.
2018-04-23 22:42:18 +02:00
fc3ed8856f tests: update testing to not use delay dev
Instead of using a delayer device, use a 'zero' device and let the mirror
do some real work, which takes some time.

In case the test machine is too fast, the mirror might need to be made bigger
to meet the needed criteria.

Also move all tests needing this 'zero' PV trick to the end of the test,
so $dev2 and $dev4 are covered with 'zero' and can take any amount of
writes without consuming any real space.
2018-04-23 22:42:18 +02:00
c492fbb51c debug: more explanatory error message 2018-04-23 22:42:18 +02:00
66f4f8c27f lvconvert: preserve regionsize from existing mirror
When adding a leg to an existing mirror, preserve its regionsize.
2018-04-23 22:42:18 +02:00
ae27461777 lvchange: update mirror table when changing monitoring
Since for non-monitored mirrors we let the mirror run without
error handling, an updated table (refresh) is needed when
monitoring changes for the mirror.
2018-04-23 22:42:18 +02:00
fcdac700f9 gcc: remove duplicate typedef 2018-04-23 22:42:18 +02:00
f2504257e4 [git] Update .gitignore 2018-04-23 09:49:37 +01:00
1409c4a1c2 clvm: rescan when VG or PV not found
Rescan devices to update lvmcache content when
clvmd vg_read doesn't find a VG or PV.
2018-04-20 16:09:49 -05:00
c42a18d372 liblvm2app: missed the addition of lvmcache_label_scan 2018-04-20 12:00:49 -05:00
aee27dc7ba scan: skip device rescan in vg_read
For reporting commands (pvs,vgs,lvs,pvdisplay,vgdisplay,lvdisplay)
we do not need to repeat the label scan of devices in vg_read if
they all had matching metadata in the initial label scan.  The
data read by label scan can just be reused for the vg_read.
This cuts the amount of device i/o in half, from two reads of
each device to one.  We have to be careful to avoid repairing
the VG if we've skipped rescanning.  (The VG repair code is very
poor, and will be redone soon.)
2018-04-20 11:23:14 -05:00
7b0a8f47be lvmpolld: update to use new scanning correctly 2018-04-20 11:22:48 -05:00
aa833bdd8a bcache: intercept test mode before write
Don't allow writes in test mode.  test mode should be
more sophisticated than just faking writes, and this
should be a last defense for cases where test mode is
not being checked correctly.
2018-04-20 11:22:48 -05:00
9b6a62f944 lvmcache: simplify
Recent changes allow some major simplification of the way
lvmcache works and is used.  lvmcache_label_scan is now
called in a controlled fashion at the start of commands,
and not via various unpredictable side effects.  Remove
various calls to it from other places.  lvmcache_label_scan
should not be called from anywhere during a command, because
it produces an incorrect representation of PVs with no MDAs,
and misclassifies them as orphans.  This has been a long
standing problem.  The invalid flag and rescanning based on
that is no longer used and removed.  The 'force' variation is
no longer needed and removed.
2018-04-20 11:22:48 -05:00
c0973e70a5 dev_cache: clean up scan
Pull out all of the twisted logic and simply call dev_cache_scan
at the start of the command prior to label scan.
2018-04-20 11:22:48 -05:00
89c65d4f71 remove unnecessary REQUIRES_FULL_LABEL_SCAN
we always scan all devices
2018-04-20 11:22:48 -05:00
45e5e702c1 scan: improve io error checking and reporting 2018-04-20 11:22:48 -05:00
6d05859862 bcache: let caller see an error 2018-04-20 11:22:48 -05:00
ae21305ee7 scan: drop bcache between lvm shell commands
A running lvm shell keeps all lvm devices open
unless the bcache is dropped.
2018-04-20 11:22:48 -05:00
a01a8d7172 tests: vgck now exits with error for bad vg 2018-04-20 11:22:48 -05:00
a9b0aa5c17 lvmetad: more fixes related to bcache
Need to open devs prior to bcache io.
2018-04-20 11:22:48 -05:00
e351f8bc66 lvmetad: need to set up bcache in another place
We need to find one common place to set up bcache
for the lvmetad case, instead of adding calls in
various places.
2018-04-20 11:22:48 -05:00
7e33bd1335 lvmetad: fix process_each_label
Was missing the call to populate lvmcache info from lvmetad
at the start of process_each_label.
2018-04-20 11:22:48 -05:00
ddb5de7a98 clvm: fix bcache scan handling
We can't let clvmd keep all scanned devs open,
which prevents them from being removed.  So
drop the bcache data (and close fds) affter
doing a label scan.

Also set up bcache before the clvm-specific
vg_read (which needs to rescan the vg's devs
using bcache) and destroy the bcache after.
2018-04-20 11:22:48 -05:00
196579af1f scan: check for errors in text layer
The scanning code in the format_text layer
has previously ignored errors.  Start checking
for and returning them.
2018-04-20 11:22:47 -05:00
44726ed9cb scan: remove lvmcache info for failed devs
When scanning a device fails, drop an lvmcache
info struct for it.
2018-04-20 11:22:47 -05:00
1717d4cb17 lvmcache: add shorter way to delete dev info
Don't make the caller look up the info first.
2018-04-20 11:22:47 -05:00
570c6239ee bcache: fix error handling
The error handling code wasn't working, but it
appears that just removing it is what we need.
The caller doesn't really need any different behavior
related to bcache blocks on an io error, it just
wants to know if there was an error.
2018-04-20 11:22:47 -05:00
217f3f8741 scan: add function to drop bcache blocks
which can be a little more efficient than destroy.
2018-04-20 11:22:47 -05:00
da2b155a9d scan: invalidate bcache for dev after errors
If there are errors reading or writing dev,
invalidate bcache for it.
2018-04-20 11:22:47 -05:00
4331182964 bcache: add some error messages for debugging 2018-04-20 11:22:47 -05:00
21057676a1 scan: create bcache with minimum number of blocks
In some odd cases (e.g. tests) there are very few devices
which results in creating too few blocks in bcache, so
create bcache with a minimum number of blocks.
2018-04-20 11:22:47 -05:00
e49b114f7e bcache: use wrappers for bcache read write in lvm
Using a wrapper makes it easier to disable bcache if needed.
2018-04-20 11:22:47 -05:00
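A minimal sketch of the wrapper idea; the names here (_lvm_dev_read and the
global switch) are illustrative assumptions, not the committed interface:

  #include <stdbool.h>
  #include <stdint.h>
  #include <unistd.h>
  #include "lib/device/bcache.h"

  /* All lvm reads funnel through one wrapper, so bcache can be disabled
   * (or swapped out) by flipping a single flag. */
  static bool _use_bcache = true;

  static bool _lvm_dev_read(struct bcache *cache, int fd,
                            uint64_t start, size_t len, void *data)
  {
          if (_use_bcache)
                  return bcache_read_bytes(cache, fd, start, len, data);

          /* Fallback: plain pread() when bcache is disabled. */
          return pread(fd, data, len, (off_t) start) == (ssize_t) len;
  }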
8065492046 bcache: do all writes through bcache 2018-04-20 11:22:47 -05:00
8b26a007b1 misc bcache fixes from ejt 2018-04-20 11:22:47 -05:00
0da296003d vgchange: invalidate bcache for stacked LVs when deactivating
An LV with a stacked PV will be open in bcache and needs to be
invalidated to close the fd before attempting to deactivate.
2018-04-20 11:22:47 -05:00
34fd818caf scan: drop bcache and close fd for LV with stacked PV
When a PV is stacked on an LV, the LV will be kept in
bcache, and the open fd on the LV may interfere with
processing the LV.  So, drop/close a bcache fd for
an LV before processing the LV.
2018-04-20 11:22:47 -05:00
c2b10daf69 scan: put dev back on caller's list
Commit 6e442875613915e506440e59a290b56756df2521 missed
adding devs back to caller's list.
2018-04-20 11:22:47 -05:00
e7670d3338 pvck: use bcache 2018-04-20 11:22:47 -05:00
b504bb809e scan: use 128K bcache block size 2018-04-20 11:22:46 -05:00
ae093df3f1 test: vgsplit-usage if LVM1 tests 2018-04-20 11:22:46 -05:00
d75aa55784 disable LVM1 tests 2018-04-20 11:22:46 -05:00
96a61337b0 lvmdiskscan: use the new label_scan
instead of doing its own.
2018-04-20 11:22:46 -05:00
28255e3eee scan: always setup bcache for commands using lvmetad
Do this at the start of the command so that it doesn't
need to be checked and set up in every function that
could need it.
2018-04-20 11:22:46 -05:00
f328532f05 scan: leave the caller's dev list unchanged
When scanning the list of devs from the caller
they are moved to another temporary list, but
were never returned to the original list.
2018-04-20 11:22:46 -05:00
7bce66c5e8 scan: setup bcache for commands using lvmetad
Commands using lvmetad will not begin with a proper
label_scan which initializes bcache, but may later
decide they need to scan a set of devs, in which case
they'll need bcache set up at that point.
2018-04-20 11:22:46 -05:00
6e580465b5 vgremove: fix force remove on devs with damaged metadata
The improved detection of bad metadata when scanning
(where errors were ignored before) means we now have to
override some errors when forcibly erasing damaged metadata.
2018-04-20 11:22:46 -05:00
37471bb477 scan: skip extra scan in vg_read
Drop an extra label scan in the recovery part
of vg_read.  This is a temporary improvement
until the pending replacement for the broken
recovery code buried in vg_read.
2018-04-20 11:22:46 -05:00
e4f478d86d scan: handle request to scan missing dev 2018-04-20 11:22:46 -05:00
89f54a5094 remove debugging print 2018-04-20 11:22:46 -05:00
c29899b910 remove unused variable in _pvremove_check_single 2018-04-20 11:22:46 -05:00
a1e3398ffc scan: handle no devices
Still create bcache.
2018-04-20 11:22:46 -05:00
9d2add1361 scan: add a dev to bcache before each read to handle write path
This is a temporary hacky workaround to the problem of
reads going through bcache and writes not using bcache.
The write path wants to read parts of data that it is
incrementally writing to disk, but the reads (using
bcache) don't work because the writes are not in the
bcache.  For now, add a dev to bcache before each attempt
to read it in case it's being used on the write path.
2018-04-20 11:22:46 -05:00
6c67c7557c scan: use separate fd for bcache
Create a new dev->bcache_fd that the scanning code owns
and is in charge of opening/closing.  This prevents other
parts of lvm code (which do various open/close) from
interfering with the bcache fd.  A number of dev_open
and dev_close are removed from the reading path since
the read path now uses the bcache.

With that in place, open(O_EXCL) for pvcreate/pvremove
can then be fixed.  That wouldn't work previously because
of other open fds.
2018-04-20 11:22:46 -05:00
4343280ebc process_each_label: use lvmcache
In the same way as the other process_each functions.
In the common case all the info that's needed can be
used from lvmcache after a label scan.  But this means
that unchosen devs for duplicate PVs need to be handled
explicitly.
2018-04-20 11:22:46 -05:00
f17c2cf7c6 pvremove: device check doesn't require label_read
It just needs to check if the device was found during
the scan, which means checking if it exists in lvmcache.
2018-04-20 11:22:45 -05:00
29c6c17121 format-text.c log message fixes 2018-04-20 11:22:45 -05:00
d9a77e8bb4 lvmcache: simplify metadata cache
The copy of VG metadata stored in lvmcache was not being used
in general.  It pretended to be a generic VG metadata cache,
but was not being used except for clvmd activation.  There
it was used to avoid reading from disk while devices were
suspended, i.e. in resume.

This removes the code that attempted to make this look
like a generic metadata cache, and replaces it with
something narrowly targeted to what it's actually used for.

This is a way of passing the VG from suspend to resume in
clvmd.  Since in the case of clvmd one caller can't simply
pass the same VG to both suspend and resume, suspend needs
to stash the VG somewhere that resume can grab it from.
(resume doesn't want to read it from disk since devices
are suspended.)  The lvmcache vginfo struct is used as a
convenient place to stash the VG to pass it from suspend
to resume, even though it isn't related to the lvmcache
or vginfo.  These suspended_vg* vginfo fields should
not be used or touched anywhere else, they are only to
be used for passing the VG data from suspend to resume
in clvmd.  The VG data being passed between suspend and
resume is never modified, and will only exist in the
brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted)
copies of the VG metadata.  It stashes both of these in
the vginfo prior to suspending devices.  When vg_commit
is successful, it sets a flag in vginfo as before,
signaling the transition from old to new metadata.

resume grabs the VG stashed by suspend.  If the vg_commit
happened, it grabs the new VG, and if the vg_commit didn't
happen it grabs the old VG.  The VG is then used to resume
LVs.

This isolates clvmd-specific code and usage from the
normal lvm vg_read code, making the code simpler and
the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new
  and stashes a copy of each onto the vginfo:
  lvmcache_save_suspended_vg(vg_old);
  lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd
  instances to call lvmcache_commit_metadata(vg).
  A flag is set in the vginfo indicating the
  transition from the old to new VG:
  vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new
  to use in resuming LVs.  It doesn't want to
  read the VG from disk since devices are
  suspended, so it gets the VG stashed by
  lv_suspend:
  vg = lvmcache_get_suspended_vg(vgid);

If the vg_commit did not happen, suspended_vg_committed
will not be set, and in this case, lvmcache_get_suspended_vg()
will return the old VG instead of the new VG, and it will
resume LVs based on the old metadata.
2018-04-20 11:22:45 -05:00
79c4971210 label_scan: remove extra label scan and read for orphan PVs
When process_each_pv() calls vg_read() on the orphan VG, the
internal implementation was doing an unnecessary
lvmcache_label_scan() and two unnecessary label_read() calls
on each orphan.  Some of those unnecessary label scans/reads
would sometimes be skipped due to caching, but the code was
always doing at least one unnecessary read on each orphan.

The common format_text case was also unnecessarily calling into
the format-specific pv_read() function which actually did nothing.

By analyzing each case in which vg_read() was being called on
the orphan VG, we can say that all of the label scans/reads
in vg_read_orphans are unnecessary:

1. reporting commands: the information saved in lvmcache by
the original label scan can be reported.  There is no advantage
to repeating the label scan on the orphans a second time before
reporting it.

2. pvcreate/vgcreate/vgextend: these all share a common
implementation in pvcreate_each_device().  That function
already rescans labels after acquiring the orphan VG lock,
which ensures that the command is using valid lvmcache
information.
2018-04-20 11:22:45 -05:00
5f138f3604 vgcreate: improve the use of label_scan
The old code was doing unnecessary label scans when
checking to see if the new VG name exists.  A single
label_scan is sufficient if it is done after the
new VG lock is held.
2018-04-20 11:22:45 -05:00
e3e5beec74 lvmetad: use new label_scan for update from pvscan
Take advantage of the common implementation with aio
and reduced disk reads.
2018-04-20 11:22:43 -05:00
9c71fa0214 lvmetad: use new label_scan for update from lvmlockd
When lvmlockd indicates that the lvmetad cache is out of
date because of changes by another node, lvmetad_pvscan_vg()
rescans the devices in the VG to update lvmetad.  Use the
new label_scan in this function to use the common code and
take advantage of the new aio and reduced reads.
2018-04-20 11:21:41 -05:00
098c843c50 independent metadata areas: fix bogus code
Fix mixing bitwise & and logical && which was
always 1 in any case.
2018-04-20 11:21:41 -05:00
d9ef9eb330 label_scan: fix independent metadata areas
This fixes the use of lvmcache_label_rescan_vg() in the previous
commit for the special case of independent metadata areas.

label scan is about discovering VG name to device associations
using information from disks, but devices in VGs with
independent metadata areas have no information on disk, so
the label scan does nothing for these VGs/devices.
With independent metadata areas, only the VG metadata found
in files is used.  This metadata is found and read in
vg_read in the processing phase.

lvmcache_label_rescan_vg() drops lvmcache info for the VG devices
before repeating the label scan on them.  In the case of
independent metadata areas, there is no metadata on devices, so the
label scan of the devices will find nothing, so will not recreate
the necessary vginfo/info data in lvmcache for the VG.  Fix this
by setting a flag in the lvmcache vginfo struct indicating that
the VG uses independent metadata areas, and label rescanning should
be skipped.

In the case of independent metadata areas, it is the metadata
processing in the vg_read phase that sets up the lvmcache
vginfo/info information, and label scan has no role.
2018-04-20 11:21:41 -05:00
748f29b42a scan: do scanning at the start of a command
Move the location of scans to make it clearer and avoid
unnecessary repeated scanning.  There should be one scan
at the start of a command which is then used through the
rest of command processing.

Previously, the initial label scan was called as a side effect
from various utility functions.  This would lead to it being called
unnecessarily.  It is an expensive operation, and should only be
called when necessary.  Also, this is a primary step in the
function of the command, and as such it should be called prominently
at the top level of command processing, not as a hidden side effect
of a utility function.  lvm knows exactly where and when the
label scan needs to be done.  Because of this, move the label scan
calls from the internal functions to the top level of processing.

Other specific instances of lvmcache_label_scan() are still called
unnecessarily or unclearly by specific commands that do not use
the common process_each functions.  These will be improved in
future commits.

During the processing phase, rescanning labels for devices in a VG
needs to be done after the VG lock is acquired in case things have
changed since the initial label scan.  This was being done by way
of rescanning devices that had the INVALID flag set in lvmcache.
This usually approximated the right set of devices, but it was not
exact, and obfuscated the real requirement.  Correct this by using
a new function that rescans the devices in the VG:
lvmcache_label_rescan_vg().

Apart from being inexact, the rescanning was extremely well hidden.
_vg_read() would call ->create_instance(), _text_create_text_instance(),
_create_vg_text_instance() which would call lvmcache_label_scan()
which would call _scan_invalid() which repeats the label scan on
devices flagged INVALID.  lvmcache_label_rescan_vg() is now called
prominently by _vg_read() directly.
2018-04-20 11:21:38 -05:00
4507ba3596 scan: use new label_scan for lvmcache_label_scan
To do label scanning, lvm code calls lvmcache_label_scan().
Change lvmcache_label_scan() to use the new label_scan()
based on bcache.

Also add lvmcache_label_rescan_vg() which calls the new
label_scan_devs() which does label scanning on only the
specified devices.  This is for a subsequent commit and
is not yet used.
2018-04-20 11:19:32 -05:00
a7cb76ae94 scan: use bcache for label scan and vg read
New label_scan function populates bcache for each device
on the system.

The two read paths are updated to get data from bcache.

The bcache is not yet used for writing.  bcache blocks
for a device are invalidated when the device is written.
2018-04-20 11:19:24 -05:00
697fa7aa1d [makefile] add -laio to makefiles 2018-04-20 11:13:17 -05:00
93fc937429 [device/bcache] bcache_read_bytes should put blocks 2018-04-20 11:12:50 -05:00
7be54bd687 [device/bcache] fix min() function 2018-04-20 11:12:50 -05:00
d9e6298edb [device/bcache] fix missing max_io fn in bcache async engine 2018-04-20 11:12:50 -05:00
dc8034f5eb [device/bcache] more work on bcache 2018-04-20 11:12:50 -05:00
1cde30eba0 [device/bcache] More fiddling with tests 2018-04-20 11:12:50 -05:00
6a57ed17a2 [device/bcache] add bcache_prefetch_bytes() and bcache_read_bytes()
Not tested yet.
2018-04-20 11:12:50 -05:00
467adfa082 [device/bcache] More tests and some bug fixes 2018-04-20 11:12:50 -05:00
8ae3b244fc [build] include test/unit/Makefile rather than recursive build
FIXME: unit tests are not currently run as part of make check.
2018-04-20 11:12:50 -05:00
b03e55a513 [device/bcache] rename a unit test 2018-04-20 11:12:50 -05:00
0d0fab3d2d [device/bcache] another unit test 2018-04-20 11:12:50 -05:00
19647d1cd4 [device/bcache] fix bug in _alloc_block 2018-04-20 11:12:50 -05:00
1563b93691 [device/bcache] Add bcache_max_prefetches()
Ignore prefetches if max io is in flight.
2018-04-20 11:12:50 -05:00
c4c4acfd42 [device/bcache] Add a couple of invalidate methods 2018-04-20 11:12:50 -05:00
0f0eb04edb [device/bcache] some more work on bcache 2018-04-20 11:12:50 -05:00
46867a45d2 [device/bcache] stub a unit test 2018-04-20 11:12:50 -05:00
cb2c4542a6 [git] Update .gitignore 2018-04-20 11:11:56 -05:00
38d77898ae [unit tests] remove old unit tests that weren't built or run. 2018-04-20 11:10:46 -05:00
7a475bef32 [build] Quieten the build down
It was hard to see warnings with the long command lines scrolling by so
quickly.

Use 'make V=1' if you need to see all the gritty details.
2018-04-20 11:10:45 -05:00
da7e13ef88 [lib/device/bcache] Tweaks after Kabi's review 2018-04-20 11:10:45 -05:00
acb42ec465 [device/bcache] Initial code drop.
Compiles.  Not written tests yet.
2018-04-20 11:10:45 -05:00
00f1b208a1 [io paths] Unpick agk's aio stuff 2018-04-20 11:03:58 -05:00
d51429254f tests: improve mirror_images_redundant
Use only the passed VG for lvs and avoid 1 extra unneeded use of lvs.
2018-04-20 12:17:01 +02:00
ac18005de9 tests: update mirror test
Since lvconvert is again able to wait on mirror synchronization,
drop 'should'.

Also add a FIXME about 'lvreduce' and too big a region size.
2018-04-20 12:17:01 +02:00
fa5ba7e42d coverity: ensure 0 end string
Use dm_strncpy() to ensure the string ends with '\0'.
In case the uuid does not fit, report an error.
2018-04-20 12:17:01 +02:00
037c234eaa cleanup: avoid compiler warn
When variable is unused...
2018-04-20 12:17:01 +02:00
73cda0437f cleanup: correcting macro wrapping
Use a proper do {} while(0) so a ';' after the macros is correctly
interpreted.
2018-04-20 12:17:01 +02:00
9731d48691 cleanup: enhance debug message 2018-04-20 12:17:01 +02:00
d437bd86ff cleanup: display_lvname update message
Add more display_lvname usage.
Update some error messages.
Indent.
2018-04-20 12:17:01 +02:00
7323557379 cleanup: add _mb_ to regionsize option
Just like with the others, mention the default unit in the function name.
2018-04-20 12:17:01 +02:00
e878c3fc32 cleanup: correct casting 2018-04-20 12:17:01 +02:00
27a1a0e5c0 cleanup: reorder condition
There is no point waiting for sync for a non-locally-active LV.
2018-04-20 12:17:01 +02:00
1287edf626 cleanup: call uname once
Call uname() once and keep result for mirror use-case.
2018-04-20 12:16:58 +02:00
d81e3f9b06 mirror: use vg mempool
Use vg mempool with mirror log metadata update.
2018-04-20 12:16:14 +02:00
05f954ee9b mirror: checking for mirror segtype
Check more correctly for the mirror segtype here instead of
the mirrored one, which can also be 'raid'.
2018-04-20 12:16:14 +02:00
79d214032b mirror: validate region_size for mirrors
Check for region size properties of mirror segments.
2018-04-20 12:16:13 +02:00
1693fef529 mirror: properly reload table for log init
Since a mirror can be stacked, we need to properly reload the whole
table stack, otherwise we may mishandle devices in the dm table.
2018-04-20 12:15:36 +02:00
55d83f9f6e mirror: block_on_error only with monitoring
When the user configures lvm2 NOT to use monitoring, an activated mirror
actually hangs upon error, which leaves it quite unusable.

So instead warn those 'brave' non-monitoring users about the possible
problem and activate the mirror without blocking error handling.

This also makes it a bit simpler for the test suite to handle trouble
cases when a test is running without dmeventd.
2018-04-20 12:13:51 +02:00
66400d003d mirror: fix region_size for clustered VG
When adjusting the region size for a clustered VG it always needs to fit
2 full bitsets into 1MB due to old limits of CPG.

This is a relatively big amount of bits, but we still have the limitation
that the region size must fit into 32 bits (0x8000000).

So for too-big mirrors this operation needs to fail - whenever the
function now returns 0, it means we can't find a matching region_size.

Since a return of 0 is now an 'error', we also need to pass a proper region_size
when creating the pvmove mirror.
2018-04-20 12:13:48 +02:00
a19456b868 mirror: fix calcs for maximal region_size
Since extent_size is no longer a power of 2, this max region size
evaluation was producing a rather random bit size as a combination
of the lowest bit of the number of extents and the extent size itself.

Correct the calculation to use the whole LV size and pick the biggest
possible power-of-2 value smaller than UINT32_MAX.
2018-04-20 12:13:08 +02:00
91965af9b1 mirror: improve mirror log size estimation
Drop the mirrored mirror log limitation that applies only in a very limited
use-case; mirrored mirror logs are deprecated anyway.

So the 'disk' mirror log selects the correct minimal size, and a
bigger size is only enforced for a real mirrored mirror log.

Also for a mirrored mirror log we allow using a 'smaller' region size if needed,
so if the user uses a 1G region size, we still keep a small mirror log
with a much smaller region size in this case when needed.

Also the mirror log extent calculation now properly detects errors
with too-big mirrors, where previously a trimmed uint32_t was applied
unintentionally.
2018-04-20 12:11:42 +02:00
73189170f5 mirror: fix 32bit size calculation
On a 32bit arch, size_t remains 4 bytes wide - so size_t can't
hold the correct result of a multiplication of 32bit numbers.
2018-04-20 12:08:57 +02:00
ff3ffe30e4 activation: add generic rule for visibility change
Whenever we make a visible LV out of a previously invisible one,
reload its table - this is mandatory for proper udev rule
processing, and also ensures the content of the dm table is correct.

TODO: this new generic rule probably makes the extra raid rules unnecessary.
2018-04-20 12:07:36 +02:00
9068de011d lvconvert: drop limitation for converting lv
Fix a regression in argument acceptance where any LV can be passed
to a parameterless lvconvert, which is meant to figure out the needed
operation - i.e. wait for mirror synchronization.

The user has no other 'effective' method to wait for a mirror to get in-sync.
2018-04-20 12:06:51 +02:00
a7d077b89b thin: restore usability of thin for external origin
With the command definition rework, support was lost for a thin LV being
an external origin for another thin LV.
2018-04-20 12:06:03 +02:00
ace97c9f9c pvmove: support properly subLV locking
Since we support snapshots of mirrors, we do need to properly check
for the stacked lock holder - this fixes a problem with pvmove in a cluster
with mirrors under snapshot.

WHATS_NEW for this patch goes with 'Restore pvmove support...'
2018-04-20 12:03:16 +02:00
7a7b8a7778 udev: keep systemd vars on change event in 69-dm-lvm-metad.rules for systemd reload
The current logic that avoids setting SYSTEMD_ALIAS and SYSTEMD_WANTS
on "change" events is flawed in the default "systemd background job"
configuration. For systemd, it's important that device properties don't
change spuriously.

If an "add" event starts lvm2-pvscan@.service for a device, and a
"change" event follows, removing SYSTEMD_ALIAS and SYSTEMD_WANTS from the
udev db, information about unit dependencies between the device and the
pvscan service can be lost in systemd, in particular if the daemon
configuration is reloaded.

Steps to reproduce problem:

- create a device with an LVM PV
- remove device
- add device (generates "add" and "change" uevents for the device)
  (at this point SYSTEMD_ALIAS and SYSTEMD_WANTS are clear in udev db)
- systemctl daemon-reload
  (systemd reloads udev db)
- vgchange -a n
- remove device

=> the lvm2-pvscan@.service for the device is still active although the
device is gone.

- add device again

=> the PV is not detected, because systemd sees the lvm2-pvscan@.service
as active and thus doesn't restart it.

The original purpose of this logic was to avoid volumes being scanned
over and over again. With systemd background jobs, that isn't necessary,
because systemd will not restart the job as long as it's active.

Signed-off-by: Martin Wilck <mwilck@suse.com>
2018-04-17 11:38:12 +02:00
99bfbbf229 udev: explicit pvscan rule in 69-dm-lvm-metad.rules
Make the distinction between the cases with and without systemd
background jobs explicit in 69-dm-lvm-metad.rules rather than
substituting the rule from the Makefile. At this stage,
this improves only readability, at the cost of one GOTO statement.

This patch introduces no functional change to the udev rules.

Signed-off-by: Martin Wilck <mwilck@suse.com>
2018-04-17 11:32:52 +02:00
bc286910ec test: add lvcreate-raid-volume_list
Test that no (Sub)LV remnants persist if the volume group is
not listed in the configuration variable activation/volume_list,
hence is not activatable, thus causing initialization of rmeta
SubLVs to fail.

Related: rhbz1161347
2018-04-06 15:26:38 +02:00
3a48fb47b7 tests: shellcheck misc
A few more minor complaints from ShellCheck.
2018-03-23 17:25:00 +01:00
1507956383 tests: shellcheck split assign
Keep possible errors unmasked by assignment.
2018-03-23 17:25:00 +01:00
397b7891ff tests: shellcheck liter 2018-03-23 17:25:00 +01:00
410c992744 tests: shellcheck use grep -E
Replace egrep with grep -E
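
For example (pattern and file name are placeholders):

  egrep 'foo|bar' file.txt     # deprecated spelling flagged by ShellCheck
  grep -E 'foo|bar' file.txt   # equivalent, preferred form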
2018-03-23 17:25:00 +01:00
14abe1e87b tests: shellcheck prevention check
Always make sure the variable is set to something other than /dev/*
2018-03-23 17:25:00 +01:00
cafcc5813a fsadm: shellcheck prefer explicit escaping
Backslash is literal in "\t". Prefer explicit escaping: "\\t".
2018-03-23 17:25:00 +01:00
fe69731d31 tests: handle setting better
When using 'make check... LVM_TEST_AUX_TRACE=0' make it behave
like the other supported VARS in use, i.e. treat it as disabled.
2018-03-23 17:25:00 +01:00
30975a3328 libdm: enhance mounted fs detection
btrfs is using fake major:minor device numbers.
Try to be smarter and detect the used node via the DM device name.

This shortens delays where e.g. lvm2 is asked to deactivate a
volume with mounted btrfs, as such an operation is not retried
and the user is informed about the device being in use.
2018-03-23 17:24:58 +01:00
8c02cc9e8f tests: update no tool test
Correct testing with format 1 and mq policy.

Add testing of 'smq'

Fix testing with clvmd - where the logged message is part of the clvmd log
and we can only check the command status.
2018-03-19 12:08:04 +01:00
4e0c0417ce cleanup: typo fix 2018-03-19 12:05:57 +01:00
8d7ece126b cache: disallow to combine format 2 with mq
Only policy 'smq' is meant to be used with format version 2.
The code used to let the 'mq' policy pass also with format 2. But 'mq'
is obsoleted by smq and the kernel currently treats it as smq. But this
is incompatible with the older original mq logic - so disallow creation
of this rather useless combination.
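
A hedged illustration of the rejected and accepted combinations
(VG/LV/pool names are placeholders):

  # now rejected: mq policy combined with metadata format 2
  lvconvert --type cache --cachepool vg/cpool \
            --cachepolicy mq --cachemetadataformat 2 vg/lv

  # accepted: smq policy with metadata format 2
  lvconvert --type cache --cachepool vg/cpool \
            --cachepolicy smq --cachemetadataformat 2 vg/lv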
2018-03-19 12:02:08 +01:00
08487a3098 tests: use 4k extents
Use 4K chunks since some older kernels are not capable
of creating striped volumes with a smaller size.

TODO: lvm2 should detect this ahead of time and avoid the kernel
reporting "Invalid chunk".
2018-03-18 00:30:43 +01:00
e5b40e0488 tests: check activation of cache without cache_check 2018-03-17 23:33:58 +01:00
9e7b00a3b9 tests: test striped COW LV 2018-03-17 23:33:58 +01:00
c82ab92d04 cleanup: use zalloc
Replace malloc() + memset()   with zalloc().
2018-03-17 23:33:58 +01:00
5c40e81a7e cleanup: use direct initializer 2018-03-17 23:33:58 +01:00
f4383a70ba coverity: drop unused local static var 2018-03-17 23:33:58 +01:00
aa75e181be coverity: drop unneeded header files 2018-03-17 23:33:58 +01:00
b4c69320fc coverity: move declaration out of the loop
Move declaration of count counter outside the while loop.
2018-03-17 23:33:58 +01:00
f2d0eefa77 coverity: make use of defined variable
Since we declare 'r', let's use the value for something.
2018-03-17 23:33:58 +01:00
26c58027fb coverity: validate descriptor
Since this function is called with 'fd == -1', but Coverity can't see
that this path can't be visited with this argument, add an explicit check for
a valid descriptor.
2018-03-17 23:33:58 +01:00
f331eb1c0d coverity: ensure lock_type is not NULL 2018-03-17 23:33:58 +01:00
fd6661dfcf coverity: add missing error check for str_list_add
Validate success.
2018-03-17 23:33:58 +01:00
d727382275 lvconvert: accept striped LV as snapshot COW LV
Restore acceptance of a striped LV as a valid COW LV.
2018-03-17 23:33:58 +01:00
67fbe980a7 raid: fix version check of target
The comparison missed checking the patch level for a matching minor version.
However since all checked patchlevels were 0 - the fix doesn't change the result.
2018-03-17 23:30:14 +01:00
689af32313 pools: skip checks when tools are missing
If the tools for checking thin_pool or cache metadata are missing,
rather issue just a WARNING, but let the activation operation
continue.

This has the advantage that if the user is missing those tools,
but has already started to use thin pools or caching, the volumes
can still be accessed, with a WARNING.

Also if the user is using too old tools - i.e. for the CacheV2 format
dmpd tool 0.7 is required - provide an informative WARNING and
skip the failure from the older tool version which can't understand
the new format V2.
2018-03-17 23:29:11 +01:00
d68d71013f lvcreate: remove RaidLV on creation failure
In case a newly created RaidLV is blacklisted using the config
setting "activation { volume_list = [ ... ] }" (i.e. its SubLVs stay inactive),
the metadata SubLVs can't get wiped, thus failing the creation.

As a result, the RaidLV together with its SubLVs
is left behind in an inconsistent state.

Fix by removing the RaidLV and providing a hint about the volume_list reasoning.

Resolves: rhbz1161347
2018-03-16 15:57:53 +01:00
9553dc7761 activation: separate prioritized counter
While prioritized_section() based on raised priority works
nicely for a standard lvm command - a separate counter is actually needed
when it's used in daemons like clvmd/dmeventd where the priority
stays raised all the time.
2018-03-15 12:30:45 +01:00
f6f8f0c7fd tests: skip test when not enough space
Skip the test instead of failing it when there is not
enough space.
2018-03-15 11:01:04 +01:00
bed869a8a0 tests: use DM_DEBUG_WITH_LINE_NUMBERS
Use src:line also for debugging of tools like dmsetup.
2018-03-15 11:01:04 +01:00
750fc2e876 tests: fix running tests on systems without udevd
Variable was unbound on systems without running udevd.
2018-03-15 11:01:04 +01:00
285413b502 cleanup: missing dots and indent 2018-03-15 11:01:04 +01:00
d794444715 activation: check for prioritized_section
Detect that we are in a prioritized section instead of a critical one,
since these operations were supposed to NOT be happening during
the whole set of operations.

This patch fixes verification of udev operations.
2018-03-15 11:01:04 +01:00
6365f011b0 locking: introduce prioritized_section
Introduce prioritized_section() as a closer match to the previous logic
of critical_section() that was held over a longer sequence of
ioctl commands - essentially it matches operations on a single
cookie.

While 'critical_section()' now corresponds to locked memory - we hold
this memory only between suspend/resume, thus the notion of a 'cookie' was
lost.

This patch restores some logic unintentionally lost with dropping
memory locking for just activation/deactivation calls.
2018-03-15 10:59:42 +01:00
043f58452a libdm-stats: fix error messages
When the function dm_stats_populate() returns 0 it's an error and needs a
log_error() message - a function can't have 'success' returning 0, or an
error without a reason.
2018-03-15 10:56:31 +01:00
a082ce2613 dmstatus: check nr_regions ahead of find call
Prevent the call of dm_stats_populate() when no stats region has been
detected for a DM device.
Such a skip is evaluated as a 'correct' visit of the stats call and
does not cause a 'dmstats' command failure.
2018-03-15 10:54:19 +01:00
4c925692f5 dmsetup: loop output table as verbose
The resulting loop table line was streamed to 'stderr' - assuming this
was not a feature when the user used '-v' for more verbose output,
properly show it via 'log_verbose()' on 'stdout'.
2018-03-15 10:50:30 +01:00
70ad633638 devcache: add reason and always log_error
With these read errors it's useful to know the reason.
Also avoid logging the error just once, so we know exactly
how many times we did a failing read.

On the other hand reduce repeated log_error() on the code 'backtrace'
path and change the severity of the message to just log_debug() so the
actual read error is printed once per read.
2018-03-15 10:50:28 +01:00
2b3b486a37 libdm: support for DM_DEBUG_WITH_LINE_NUMBERS
For any libdm tool using the default debugging function, allow
showing the source filename and code line number when this
functionality is available.
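
An illustrative use with dmsetup (any libdm tool using the default
debug callback behaves the same way):

  # prefix debug messages with source file name and line number
  DM_DEBUG_WITH_LINE_NUMBERS=1 dmsetup -vvvv info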
2018-03-15 10:49:24 +01:00
eae54b67d8 test: Skip tests which require too much RAM
- Tests for RAID reshape under load require too much RAM
2018-03-13 13:42:45 +01:00
90512910e5 tests: try unfreezeing raids
With problematic kernels raid devices can occasionally be left with
'frozen' status - try to 'unfreeze' them with an idle message on teardown.

Also replace a couple of greps with the 'built-in' dmsetup --select feature.

Note: dmsetup --select currently reports 'No devices found' on stdout
and returns success - looks like a bug to fix.
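
A rough sketch of the teardown step (device name is a placeholder):

  # list active dm-raid devices without piping through grep
  dmsetup status --target raid
  # ask the dm-raid target to leave the 'frozen' sync state
  dmsetup message vg-lv 0 idle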
2018-03-13 12:58:57 +01:00
b1ace8ce19 dmsetup: indent 2018-03-13 12:58:57 +01:00
e9cadbe105 cleanup: matching signess 2018-03-13 12:58:57 +01:00
49a8c786d5 dmsetup: report close as debug
Since close() failures do not cause command errors,
issue the error via the debug log stream only.
2018-03-13 12:58:57 +01:00
06c1f71897 dmsetup: use dm_snprintf 2018-03-13 12:58:57 +01:00
3f351466f7 dmsetup: update _display_info
Handle error code.
2018-03-13 12:58:57 +01:00
7ac7cc0ac8 dmsetup: update messages 2018-03-13 12:58:57 +01:00
9476cf8cdc dmsetup: join large fprintf
Concatenate strings and make binary slightly smaller.
2018-03-13 12:58:57 +01:00
5f5db7cf41 dmsetup: stderr to log_error 2018-03-13 12:58:57 +01:00
f203d4e206 dmsetup: cleanup err usage
The err() macro adds '\n'.
2018-03-13 12:58:57 +01:00
3b7834af17 dmsetup: use stderr for error output
When dmsetup command returns error, the message goes to stderr.
2018-03-13 12:58:57 +01:00
29b2cfba06 mirror: correct locking for mirror log initialization
The code was not using the proper lock holding LVs when trying to
initialize the mirror log to predefined values.
2018-03-13 12:58:27 +01:00
1bd57b4c1d scanning: skip more private devices
Just like lvm2 has internal devices like _tdata which use a UUID with
a suffix, there is a similar private type of device for crypto devices which
use the CRYPT-TEMP uuid prefix.

Also ignore stratis.
2018-03-13 12:57:33 +01:00
e095586d9e cleanup: use path on stack 2018-03-13 12:57:08 +01:00
0edd89fadc raid: skip frozen raid devices
Some kernel versions suffer from a bad state transition where a device
steps into 'frozen' mode. Any application that tries to read such a
raid unfortunately gets blocked.

As some sort of protection try to skip such raid devices from being
scanned, to minimize the chances of blocking an lvm2 command on such a scan.

When such a device is found, a warning gets printed.
2018-03-13 12:57:01 +01:00
a8a579b154 cleanup: all tests needs target_type
Simplify code.
2018-03-13 12:53:59 +01:00
0646fd465e dev_manager: always activate RAID SubLVs readwrite
RaidLVs on read_only_volume_list have their SubLVs
activated readonly thus disabling metadata updates
or image resynchronization/recovery.  Bug also causes
automatic repairs to fail.

Fix by always activating the RAID SubLVs readwrite.

Resolves: rhbz1208269
2018-03-12 22:29:54 +01:00
dd88a0f05c raid: support raid5_n convenience type on conversion to raid10
Fix requesting a conversion from raid5_{ls,rs,la,ra} -> raid10
not offering the interim convenience type raid5_n.

Resolves: rhbz1468600
2018-03-09 21:23:16 +01:00
6cb2c35d16 cleanup: use log_warn
These messages are not causing command failure, thus turn them
into warnings.
2018-03-08 10:40:27 +01:00
ee37838b11 cache: fix lock usage for cache conversion
Just like with lvcreate, this lvconvert case also needs to properly
check which LV actually holds the lock for the cached origin - as it might
be e.g. a thin-pool tdata subLV.
2018-03-08 10:39:47 +01:00
7421252edc snapshot: skip invalid snapshots
When scanning DM devices, automatically skip invalid snapshot devices.
They behave just like an 'error' device.
2018-03-08 10:39:44 +01:00
a6fdb9d9d7 snapshot: keep COW writable for read-only volumes
When a snapshot is created in read-only mode with 'lvcreate -s -pr...',
lvm2 still needs to be able to write to the layered -cow volume
to store metadata and exception blocks.
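
For illustration (size and names are placeholders):

  # the snapshot LV itself is read-only, yet lvm2 still writes
  # exception blocks to its hidden -cow subvolume
  lvcreate -s -pr -L 100m -n snap vg/origin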

TODO: in some cases we might be able to do the full tree with a read-only
volume but this probably needs further validation:
1. checking the snapshot header already exists
2. origin & snapshot are both in read-only mode.
2018-03-08 10:39:03 +01:00
15b6793528 tests: skipping test waiting for fixed kernel
Once working kernel is released, reenable me...
2018-03-06 15:42:49 +01:00
b05caca77e tests: component activation 2018-03-06 15:42:49 +01:00
eb3597acb3 activation: support proper /dev names for component LVs
When an LV is activated AS a component LV - ensure there will
be a /dev/vgname/lvname link present for such an LV.
2018-03-06 15:42:49 +01:00
112846ce0b activation: support activation of component LVs
Occasionally users may need to peek into 'component' devices.
Normally lvm2 does not let users activate components.

This patch adds a special mode where the user can activate a
component LV in 'read-only' mode, i.e.:

lvchange -ay vg/pool_tdata

All devices can be deactivated with:

lvchange -an vg  |  vgchange -an....
2018-03-06 15:42:46 +01:00
6134a71a90 lvconvert: support for conversion with active component devices
If component devices can be activated alone, ensure they do not break
common commands.

TODO: most likely this is not a definitive list of all needed checks
and more will come later.
2018-03-06 15:42:07 +01:00
f92b6f9930 lvremove: ensure no subLV is active
Since component activation is going to be enabled, ensure
no subLV is active when we deactivate an LV.
2018-03-06 15:42:07 +01:00
73e93ef5e5 lvremove: validate removed component LV is not active
This is the 'last' place where an LV is present in metadata.
Any removed device should not be left active in the dm table.
So this check is an extra validation protection to capture any
forgotten deactivation (adding 1 extra ioctl into the lvremove path).
2018-03-06 15:42:07 +01:00
ca9cbd92c4 activation: add base lv component function
Introduce:

lv_is_component() checks if an LV is actually a component device.

lv_component_is_active() checks if any component device is active.

lv_holder_is_active() checks if any holding device of the component is active.
2018-03-06 15:42:05 +01:00
6481471c9d debug: update comment 2018-03-06 15:40:34 +01:00
b6e7a0b490 cleanup: more usage of dm_strncpy
Use the existing wrapper function around strncpy + buf[] = 0;
2018-03-06 15:40:34 +01:00
f04abd1f8a lvremove: drop duplicate check for active LV
Since this code branch has already tested that the LV is active,
avoid repeating the same query.
2018-03-06 15:40:31 +01:00
23de09aeb8 lvcreate: fix activation of cached LV
Since the LV for caching can already be a stacked LV, proper activation
needs to use the lock holding LV.
2018-03-06 15:39:27 +01:00
b2f1254c14 raid: move VG update after archiving happened
Update of LV le_count needs to happen after archive().
2018-03-06 15:38:15 +01:00
ce199db848 raid: fix error path for lv_raid_data_offset
Avoid using allocated status on error path.
2018-03-06 15:36:11 +01:00
9be086fbee thin: pass environment to scripts
When the dmeventd thin plugin forks a configurable script, switch to using
execvp to pass the whole environment present to dmeventd - so all configured
paths present at dmeventd startup are visible to the script.

This was likely not a problem for a common user environment,
however in the test suite case variables like LVM_SYSTEM_DIR were
not actually used from the test itself but rather from
a system-present lvm.conf, and this may have caused strange
behavior of a testing script.
2018-03-06 15:35:04 +01:00
406d6de651 cleanup: indent 2018-02-28 21:15:55 +01:00
16c209c613 cleanup: use lv_is_used_cache_pool
Use lv_is_used_cache_pool() to simplify the code.
The function was introduced later and this code missed using it.
2018-02-28 21:15:55 +01:00
e643de6e61 cleanup: explicitly ignore result code
ATM a too long prefix is silently ignored.
2018-02-28 21:15:55 +01:00
805bf6ec74 cleanup: unused header file 2018-02-28 21:15:55 +01:00
6ba94fdd81 debug: change message severity
Although it's an internal issue - in this case the command continues without
any reported error - thus hide this internal error in debug.
2018-02-28 21:15:55 +01:00
cc4855acbe tests: check inactive extorig resize 2018-02-28 21:15:55 +01:00
052f28746d lvresize: check external origin with new size
Instead of checking against the existing size of the external origin LV,
correctly use the new 'wanted' size of this LV to check whether it fits
the limitation requirements of the older thin-pool target.

Otherwise the code started the resize, updated metadata and
just failed during 'resize' in case the LV was active. For an
inactive LV the operation could have actually passed.
2018-02-28 21:15:55 +01:00
b09ea3b6f7 lvremove: drop unneded check
Checking here for cache_pool is not necessary and in effect
the check is not even right - since there are internal
states that do allow activating such an LV.
2018-02-28 21:08:40 +01:00
749372caf3 command: use bigger buffer
Instead of using a 'silently' shortened passed string - always
make sure we either take a full copy or return an error.
2018-02-28 21:08:40 +01:00
bc1adc32cb lv_manip: enhance for_each_sub_lv
Fix missing 'externalLV' traversal for thins with external origins.

Replace the extra for_each_sub_lv_except_pools() with better
internal logic allowing selective cut-off of the processed subLV tree.

Extend the error code of the function 'fn()': when it returns -1 it will
stop further tree scanning for the given LV.

Also simplify the code a bit to have only one place that
is calling 'fn()' and use a level counter to know the
depth of traversal.

Update rename traversal to skip trees for pools
and external origins.
2018-02-28 21:08:38 +01:00
6b48868cf0 io: keep 64b arithmetic
Widen to 64b arithmetic from start.
2018-02-28 21:05:18 +01:00
261e6c3df6 raid: add free for error path
A recent patch forgot to release the now allocated 'dso' on the error path.
2018-02-28 21:05:18 +01:00
9bfc8881cb coverity: missing free on error path 2018-02-28 21:05:18 +01:00
32bcdd90ae tests: check vgsplit thin-data and ext.origin 2018-02-27 14:37:47 +01:00
8e5305f630 tests: correct usage of pipe
This is somewhat tricky - for the test suite we keep using
'set -e -o pipefail' - the effect here is that we get an error report
from any 'failing' command in the whole pipeline - thus when something
like 'lvs | head -1' is used and 'head' finishes before the
leading 'lvs' is done, 'lvs' receives SIGPIPE and exits with an error,
and a somewhat misleading failure occasionally gets reported depending
on the speed of the commands.

For this case we have to avoid using standard pipes and rather
switch to using streamed results with a temporary output file.
This is all nicely handled with the bash feature '< <()'.

For more info:
https://stackoverflow.com/questions/41516177/bash-zcat-head-causes-pipefail
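
A minimal bash sketch of the difference:

  set -e -o pipefail
  # may fail spuriously: 'head' exits first and 'lvs' receives SIGPIPE
  lvs | head -1
  # reading from a process substitution only propagates the status of 'head'
  head -1 < <(lvs)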
2018-02-19 16:45:10 +01:00
e7f1329cae debug: capture internal error for too long resource name
Should never happen, so just put in an internal error instead of silently
passing some shortened resource name.
2018-02-19 16:45:10 +01:00
c3bb2b29d4 locking: move cache dropping to primary locking code
While the 'file-locking' code always dropped the cached VG before
the lock was taken - other locking types actually missed this.

So while the cache dropping has been implemented for e.g. clvmd,
a command actually running in a cluster kept using the cache even
when the lock had been dropped and taken again.

This rather 'hard-to-hit' error was noticeable in some
tests running in a cluster where the content of a PV has been
changed (metadata-balance.sh).

Fix the code by moving cache dropping directly into the lock_vol() function.

TODO: it's kind of strange we should ever need drop_cached_metadata()
used in several places - this all should happen automatically,
so some further thinking here is likely needed.
2018-02-19 16:45:05 +01:00
e87fa7c9ce sanlock: set proper return value
In the last patch one error path missed assigning the correct return value.
Assign it directly to 'ret' as the log_error was already reported.
2018-02-19 16:44:10 +01:00
1671b83585 doc: Fixing VDO document 2018-02-16 17:10:54 +01:00
f5401fbd34 tests: update 2018-02-15 13:56:35 +01:00
552e60b3a1 pvmove: enhance accepted states of active LVs
Improve pvmove to accept 'locally' active LVs together with
exclusively active LVs.

In the 1st phase it now recognizes whether an exclusive pvmove is needed.
For this case only 'exclusively' or 'locally-only without remote
activation state' LVs are acceptable and all others are skipped.

During the build-up of pvmove, 'activation' steps are taken, so if
there is any problem we can now 'skip' LVs from the pvmove operation
rather than giving up the whole pvmove operation.

Also when pvmove is restarted, recognize the need for an exclusive pvmove,
and use it whenever there is an LV that requires exclusive activation.
2018-02-15 13:55:38 +01:00
a2d2fe3a8c locking: exclusive can be either remote or local
When LOCK is exclusive and LV is already locally active,
it cannot be active remotely.
2018-02-15 13:54:55 +01:00
a1195aaa66 cleanup: add missing WARNING
ATM log_warn() is supposed to be used with WARNING: prefix.
2018-02-15 13:52:02 +01:00
d67f160200 mirror: Add deprecation warning for mirrored log 2018-02-14 13:32:04 +01:00
dd6fbcbb69 test: mirrored mirrorlog is not supposed to work in cluster 2018-02-14 13:10:52 +01:00
c3642957c5 gcc: remove warns about free of const 2018-02-13 19:56:02 +01:00
0eb9daf602 segtype: no libmem pool usage for name allocation
Allocate name with plain malloc & free.
2018-02-13 19:11:28 +01:00
32febed8d5 segtype: replace mempool allocation
So this is a bit more complex and possibly worth further checking.

ATM clvmd drops the cmd->mem mempool AFTER the refresh of cmd.
So anything allocating from cmd->mem during toolcontext init
will likely die at some point in time.

As a quick fix - just use regular malloc/free for the 'dso' allocation.

It's worth noting - cmd->libmem seems to be often misused,
causing hidden memory leaks for clvmd.
2018-02-13 19:11:28 +01:00
e40768ac32 debug: add stack tracking 2018-02-12 22:15:03 +01:00
27399755fd segtype: better get_monitor_dso_path api
Instead of always allocating 4K for the dso path, use only the really needed size.
Also simplify the API call and move common functionality into the function
itself.
2018-02-12 22:15:03 +01:00
e113df129e cleanup: decode dso path just once
Build the dso plugin name during segtype initialisation and just
use the string during the command life-time.

Also slightly update message verbosity and make it very_verbose
when the operation is going to be made and 'verbose' when it's done.
2018-02-12 22:15:03 +01:00
6dff5dc653 activation: cleanup error to warning
Since it's not fatal for the code to fail on monitoring,
issue a correct warning message instead of an error.
2018-02-12 22:15:03 +01:00
d90a647802 activation: separate reporting of error and monitoring status
Avoid using the same return code for reporting 2 different things:
strictly report the error code by return value and add a new
parameter for reporting the monitoring status.

This makes it easier to recognize which error we got from dm_event
and continue only with ENOENT.
2018-02-12 22:14:59 +01:00
12fba201be cleanup: detect dmeventd_executable just once
Avoid repeating debug messages about dmeventd executable
and just remember it once for whole cmd lifetime.
2018-02-12 22:14:25 +01:00
4f278324c7 lvmlockd: improve dm path creation for sanlock LV
Use devmapper function to create matching dm name with mangling.
Drop extra '-1' from buffer passed to snprintf.
2018-02-12 22:14:25 +01:00
7239a45b79 clean: drop unneeded -1 for snprintf
man gives:
snprintf() and vsnprintf() write at most size bytes
(including the terminating null byte ('\0')) to str.
2018-02-12 22:14:25 +01:00
d94036f8ed vgimportclone: add some dm_snprintf checks
Check if the generated vg name still fits the buffer,
so too long strings are rejected.
Drop -1 from the size passed to snprintf - as the \0 is already included.
2018-02-12 22:14:22 +01:00
60b61f2db3 libdm-stats: correct checking of dm_snprintf error
The function dm_snprintf returns -1 on error, while 0 is still
considered a valid result code, so correct the error path testing.
2018-02-12 22:13:57 +01:00
afdbb28f72 toolcontext: light context missed to set-up mem mempool
If cmd->mem was null, then systemd generator was failing on:

(gdb) bt
dm_pool_alloc_aligned (p=0x0, s=96, alignment=8) at mm/pool-fast.c:95
dm_pool_alloc (p=0x0, s=96) at mm/pool-fast.c:90
dm_pool_zalloc (p=0x0, s=96) at mm/pool.c:74
config_file_read_fd (mem=0x0, cft=0x55f4339dbad0, dev=0x55f4339dfac0, reason=DEV_IO_MDA_CONTENT, offset=0, size=82293, offset2=0, size2=0,
    checksum_fn=0x0, checksum=0, checksum_only=0, no_dup_node_check=0, ioflags=0, config_file_read_fd_callback=0x0, config_file_read_fd_context=0x0) at config/config.c:567
config_file_read (mem=0x0, cft=0x55f4339dbad0) at config/config.c:658
config_file_open_and_read (config_file=0x7f49aef14540 <config_file> "/var/tmp/lvm/etc/lvm/lvm.conf", source=CONFIG_FILE, cmd=0x55f4339d6260)
    at config/config.c:282
_load_config_file (cmd=0x55f4339d6260, tag=0x7f49aeca15da "", local=0) at commands/toolcontext.c:824
_init_lvm_conf (cmd=0x55f4339d6260) at commands/toolcontext.c:853
create_config_context () at commands/toolcontext.c:1814
lvm_config_find_bool (libh=0x0, config_path=0x55f431a884ad "global/use_lvmetad", fail=0) at lvm_base.c:144
main ()
2018-02-12 22:13:53 +01:00
34a9e3d3cd python: add devmapper library to linking
On occasional gcc releases it's better to also specify -ldevmapper
in the linking logic for the python object.

It's in fact more correct since the liblvm.c code is using
libdevmapper functions - which were linked in only via the
liblvm2app library.
2018-02-09 11:00:18 +01:00
7cfe5ab9bc partial revert "command: Skip some memory zeroing."
This partially reverts commit da37cbd24f.
The _cmdline structure uses a mempool for allocated elements
that is released on cmd_context close.

Before a better fix is made - restore the previous logic and
reinitialize the cmd structures again for a new cmd_context.

Problem can be hit with e.g. this test run:

make check_local T=foreign LVM_VALGRIND_DMEVENTD=1

Invalid read of size 1
   at 0x4C31C83: strcmp (vg_replace_strmem.c:846)
   by 0x6BA0939: _find_command (lvmcmdline.c:1555)
   by 0x6BA4304: lvm_run_command (lvmcmdline.c:2810)
   by 0x6BD5E02: lvm2_run (lvmcmdlib.c:91)
   by 0x685607E: dmeventd_lvm2_run (dmeventd_lvm.c:118)
   by 0x6652684: _use_policy (dmeventd_thin.c:117)
   by 0x6652E56: process_event (dmeventd_thin.c:298)
   by 0x10CC5A: _do_process_event (dmeventd.c:945)
   by 0x10CF83: _monitor_thread (dmeventd.c:1033)
   by 0x54B35E0: start_thread (in /usr/lib64/libpthread-2.26.9000.so)
   by 0x57C30EE: clone (in /usr/lib64/libc-2.26.9000.so)
 Address 0x6266270 is 4,352 bytes inside a block of size 8,192 free'd
   at 0x4C2ED68: free (vg_replace_malloc.c:530)
   by 0x5289142: dm_free_wrapper (dbg_malloc.c:393)
   by 0x528998A: _free_chunk (pool-fast.c:318)
   by 0x52892A6: dm_pool_destroy (pool-fast.c:78)
   by 0x6A8E52C: destroy_toolcontext (toolcontext.c:2254)
   by 0x6BA5BD6: lvm_fin (lvmcmdline.c:3327)
   by 0x6BD5EA7: lvm2_exit (lvmcmdlib.c:123)
   by 0x6856013: dmeventd_lvm2_exit (dmeventd_lvm.c:103)
   by 0x66535B8: unregister_device (dmeventd_thin.c:432)
   by 0x10CBBC: _do_unregister_device (dmeventd.c:926)
   by 0x10CD74: _monitor_unregister (dmeventd.c:979)
   by 0x10D094: _monitor_thread (dmeventd.c:1066)
   by 0x54B35E0: start_thread (in /usr/lib64/libpthread-2.26.9000.so)
   by 0x57C30EE: clone (in /usr/lib64/libc-2.26.9000.so)
 Block was alloc'd at
   at 0x4C2DBBB: malloc (vg_replace_malloc.c:299)
   by 0x5288F46: dm_malloc_aux (dbg_malloc.c:287)
   by 0x52890AC: dm_malloc_wrapper (dbg_malloc.c:371)
   by 0x52898E6: _new_chunk (pool-fast.c:286)
   by 0x52893BA: dm_pool_alloc_aligned (pool-fast.c:106)
   by 0x5289310: dm_pool_alloc (pool-fast.c:90)
   by 0x6A8A21A: _load_config_file (toolcontext.c:808)
   by 0x6A8A3D9: _init_lvm_conf (toolcontext.c:842)
   by 0x6A8D3BD: create_toolcontext (toolcontext.c:1941)
   by 0x6BA5B24: init_lvm (lvmcmdline.c:3308)
   by 0x6BD5B7C: cmdlib_lvm2_init (lvmcmdlib.c:34)
   by 0x6BD5EB8: lvm2_init (lvm2cmd.c:20)
   by 0x6855EA7: dmeventd_lvm2_init (dmeventd_lvm.c:67)
   by 0x665305F: register_device (dmeventd_thin.c:352)
   by 0x10CB7A: _do_register_device (dmeventd.c:916)
   by 0x10CEE4: _monitor_thread (dmeventd.c:1006)
   by 0x54B35E0: start_thread (in /usr/lib64/libpthread-2.26.9000.so)
   by 0x57C30EE: clone (in /usr/lib64/libc-2.26.9000.so)
2018-02-09 10:59:07 +01:00
83258e3385 toolcontext: do not change stream for pthreaded programs
With pthreaded daemons like 'dmeventd' using liblvm via a plugin,
lvm2 actually should not 'play' with streams at all - as there
could be parallel outputs running.

As a current quick workaround just disable the change for pthreaded
programs (gettid() != getpid()).

TODO: it's possible the change of buffering actually doesn't serve us
any measurable benefit and could be dropped as a whole later...

Meanwhile this patch is fixing this occasional valgrind race report:

Invalid read of size 4
   at 0x571892C: vfprintf (in /usr/lib64/libc-2.26.9000.so)
   by 0x57216B3: fprintf (in /usr/lib64/libc-2.26.9000.so)
   by 0x5042886: dm_event_log (libdevmapper-event.c:925)
   by 0x10B015: _dmeventd_log (dmeventd.c:125)
   by 0x10D289: _unregister_for_event (dmeventd.c:1146)
   by 0x10E52E: _handle_request (dmeventd.c:1583)
   by 0x10E6D7: _do_process_request (dmeventd.c:1631)
   by 0x10E7C6: _process_request (dmeventd.c:1660)
   by 0x1101A4: main (dmeventd.c:2285)
 Address 0x6264d30 is 192 bytes inside a block of size 552 free'd
   at 0x4C2ED68: free (vg_replace_malloc.c:530)
   by 0x573907D: fclose@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.9000.so)
   by 0x6AC5C00: reopen_standard_stream (log.c:189)
   by 0x6A8E62C: destroy_toolcontext (toolcontext.c:2271)
   by 0x6BA5C22: lvm_fin (lvmcmdline.c:3339)
   by 0x6BD5EF3: lvm2_exit (lvmcmdlib.c:123)
   by 0x6856013: dmeventd_lvm2_exit (dmeventd_lvm.c:103)
   by 0x66535B8: unregister_device (dmeventd_thin.c:432)
   by 0x10CBBC: _do_unregister_device (dmeventd.c:926)
   by 0x10CD74: _monitor_unregister (dmeventd.c:979)
   by 0x10D094: _monitor_thread (dmeventd.c:1066)
   by 0x54B35E0: start_thread (in /usr/lib64/libpthread-2.26.9000.so)
   by 0x57C30EE: clone (in /usr/lib64/libc-2.26.9000.so)
 Block was alloc'd at
   at 0x4C2DBBB: malloc (vg_replace_malloc.c:299)
   by 0x573932B: fdopen@@GLIBC_2.2.5 (in /usr/lib64/libc-2.26.9000.so)
   by 0x6AC5DC2: reopen_standard_stream (log.c:200)
   by 0x6A8D11D: create_toolcontext (toolcontext.c:1898)
   by 0x6BA5B6B: init_lvm (lvmcmdline.c:3319)
   by 0x6BD5BC8: cmdlib_lvm2_init (lvmcmdlib.c:34)
   by 0x6BD5F04: lvm2_init (lvm2cmd.c:20)
   by 0x6855EA7: dmeventd_lvm2_init (dmeventd_lvm.c:67)
   by 0x665305F: register_device (dmeventd_thin.c:352)
   by 0x10CB7A: _do_register_device (dmeventd.c:916)
   by 0x10CEE4: _monitor_thread (dmeventd.c:1006)
   by 0x54B35E0: start_thread (in /usr/lib64/libpthread-2.26.9000.so)
   by 0x57C30EE: clone (in /usr/lib64/libc-2.26.9000.so)
....
Process terminating with default action of signal 6 (SIGABRT): dumping core
   at 0x570016B: raise (in /usr/lib64/libc-2.26.9000.so)
   by 0x5701520: abort (in /usr/lib64/libc-2.26.9000.so)
   by 0x57437D8: __libc_message (in /usr/lib64/libc-2.26.9000.so)
   by 0x5743831: __libc_fatal (in /usr/lib64/libc-2.26.9000.so)
   by 0x5744056: _IO_vtable_check (in /usr/lib64/libc-2.26.9000.so)
   by 0x574751C: __overflow (in /usr/lib64/libc-2.26.9000.so)
   by 0x574191A: fputc (in /usr/lib64/libc-2.26.9000.so)
   by 0x50428E3: dm_event_log (libdevmapper-event.c:934)
   by 0x10B015: _dmeventd_log (dmeventd.c:125)
   by 0x10D289: _unregister_for_event (dmeventd.c:1146)
   by 0x10E52E: _handle_request (dmeventd.c:1583)
   by 0x10E6D7: _do_process_request (dmeventd.c:1631)
   by 0x10E7C6: _process_request (dmeventd.c:1660)
   by 0x1101A4: main (dmeventd.c:2285)
2018-02-09 10:56:40 +01:00
1b6d0346a3 format_text: Use versionsort to sort archive files
Ensure that vg_100000-* follows vg_99999-* so that the expiry logic
doesn't stop too early.

   https://bugzilla.redhat.com/1481085
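
A quick illustration of the ordering difference ('sort -V' mimics
versionsort; file names are placeholders):

  $ printf 'vg_99999-01.vg\nvg_100000-01.vg\n' | sort
  vg_100000-01.vg
  vg_99999-01.vg
  $ printf 'vg_99999-01.vg\nvg_100000-01.vg\n' | sort -V
  vg_99999-01.vg
  vg_100000-01.vg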
2018-02-09 01:08:55 +00:00
847 changed files with 49822 additions and 51787 deletions

.gitignore vendored
View File

@ -1,6 +1,7 @@
*.5
*.7
*.8
*.8_gen
*.a
*.d
*.o
@ -30,3 +31,103 @@ make.tmpl
/cscope.out
/tags
/tmp/
tools/man-generator
tools/man-generator.c
test/lib/lvchange
test/lib/lvconvert
test/lib/lvcreate
test/lib/lvdisplay
test/lib/lvextend
test/lib/lvmconfig
test/lib/lvmdiskscan
test/lib/lvmsadc
test/lib/lvmsar
test/lib/lvreduce
test/lib/lvremove
test/lib/lvrename
test/lib/lvresize
test/lib/lvs
test/lib/lvscan
test/lib/pvchange
test/lib/pvck
test/lib/pvcreate
test/lib/pvdisplay
test/lib/pvmove
test/lib/pvremove
test/lib/pvresize
test/lib/pvs
test/lib/pvscan
test/lib/vgcfgbackup
test/lib/vgcfgrestore
test/lib/vgchange
test/lib/vgck
test/lib/vgconvert
test/lib/vgcreate
test/lib/vgdisplay
test/lib/vgexport
test/lib/vgextend
test/lib/vgimport
test/lib/vgimportclone
test/lib/vgmerge
test/lib/vgmknodes
test/lib/vgreduce
test/lib/vgremove
test/lib/vgrename
test/lib/vgs
test/lib/vgscan
test/lib/vgsplit
test/api/lvtest.t
test/api/pe_start.t
test/api/percent.t
test/api/python_lvm_unit.py
test/api/test
test/api/thin_percent.t
test/api/vglist.t
test/api/vgtest.t
test/lib/aux
test/lib/check
test/lib/clvmd
test/lib/dm-version-expected
test/lib/dmeventd
test/lib/dmsetup
test/lib/dmstats
test/lib/fail
test/lib/flavour-ndev-cluster
test/lib/flavour-ndev-cluster-lvmpolld
test/lib/flavour-ndev-lvmetad
test/lib/flavour-ndev-lvmetad-lvmpolld
test/lib/flavour-ndev-lvmpolld
test/lib/flavour-ndev-vanilla
test/lib/flavour-udev-cluster
test/lib/flavour-udev-cluster-lvmpolld
test/lib/flavour-udev-lvmetad
test/lib/flavour-udev-lvmetad-lvmpolld
test/lib/flavour-udev-lvmlockd-dlm
test/lib/flavour-udev-lvmlockd-sanlock
test/lib/flavour-udev-lvmlockd-test
test/lib/flavour-udev-lvmpolld
test/lib/flavour-udev-vanilla
test/lib/fsadm
test/lib/get
test/lib/inittest
test/lib/invalid
test/lib/lvm
test/lib/lvm-wrapper
test/lib/lvmchange
test/lib/lvmdbusd.profile
test/lib/lvmetad
test/lib/lvmpolld
test/lib/not
test/lib/paths
test/lib/paths-common
test/lib/runner
test/lib/should
test/lib/test
test/lib/thin-performance.profile
test/lib/utils
test/lib/version-expected
test/unit/dmraid_t.c
test/unit/unit-test

View File

@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@ -28,14 +28,6 @@ ifeq ("@INTL@", "yes")
SUBDIRS += po
endif
ifeq ("@APPLIB@", "yes")
SUBDIRS += liblvm
endif
ifeq ("@PYTHON_BINDINGS@", "yes")
SUBDIRS += python
endif
ifeq ($(MAKECMDGOALS),clean)
SUBDIRS += test
endif
@ -43,8 +35,7 @@ endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = conf include man test scripts \
libdaemon lib tools daemons libdm \
udev po liblvm python \
unit-tests/datastruct unit-tests/mm unit-tests/regex
udev po
tools.distclean: test.distclean
endif
DISTCLEAN_DIRS += lcov_reports*
@ -55,17 +46,16 @@ include make.tmpl
libdm: include
libdaemon: include
lib: libdm libdaemon
liblvm: lib
daemons: lib libdaemon tools
tools: lib libdaemon device-mapper
po: tools daemons
man: tools
all_man: tools
scripts: liblvm libdm
scripts: libdm
test: tools daemons
lib.device-mapper: include.device-mapper
libdm.device-mapper: include.device-mapper
liblvm.device-mapper: include.device-mapper
daemons.device-mapper: libdm.device-mapper
tools.device-mapper: libdm.device-mapper
scripts.device-mapper: include.device-mapper
@ -79,10 +69,6 @@ po.pofile: tools.pofile daemons.pofile
pofile: po.pofile
endif
ifeq ("@PYTHON_BINDINGS@", "yes")
python: liblvm
endif
ifneq ("$(CFLOW_CMD)", "")
tools.cflow: libdm.cflow lib.cflow
daemons.cflow: tools.cflow
@ -97,7 +83,7 @@ endif
DISTCLEAN_TARGETS += cscope.out
CLEAN_DIRS += autom4te.cache
check check_system check_cluster check_local check_lvmetad check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock unit: all
check check_system check_cluster check_local check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock: test
$(MAKE) -C test $(@)
conf.generate man.generate: tools
@ -146,7 +132,7 @@ install_system_dirs:
$(INSTALL_ROOT_DIR) $(DESTDIR)$(DEFAULT_RUN_DIR)
$(INSTALL_ROOT_DATA) /dev/null $(DESTDIR)$(DEFAULT_CACHE_DIR)/.cache
install_initscripts:
install_initscripts:
$(MAKE) -C scripts install_initscripts
install_systemd_generators:
@ -159,19 +145,14 @@ install_systemd_units:
install_all_man:
$(MAKE) -C man install_all_man
ifeq ("@PYTHON_BINDINGS@", "yes")
install_python_bindings:
$(MAKE) -C liblvm/python install_python_bindings
endif
install_tmpfiles_configuration:
$(MAKE) -C scripts install_tmpfiles_configuration
LCOV_TRACES = libdm.info lib.info liblvm.info tools.info \
LCOV_TRACES = libdm.info lib.info tools.info \
libdaemon/client.info libdaemon/server.info \
test/unit.info \
daemons/clvmd.info \
daemons/dmeventd.info \
daemons/lvmetad.info \
daemons/lvmlockd.info \
daemons/lvmpolld.info
@ -211,30 +192,10 @@ endif
endif
ifeq ("$(TESTING)", "yes")
# testing and report generation
RUBY=ruby1.9 -Ireport-generators/lib -Ireport-generators/test
.PHONY: unit-test ruby-test test-programs
# FIXME: put dependencies on libdm and liblvm
# FIXME: Should be handled by Makefiles in subdirs, not here at top level.
test-programs:
cd unit-tests/regex && $(MAKE)
cd unit-tests/datastruct && $(MAKE)
cd unit-tests/mm && $(MAKE)
unit-test: test-programs
$(RUBY) report-generators/unit_test.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
memcheck: test-programs
$(RUBY) report-generators/memcheck.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
ruby-test:
$(RUBY) report-generators/test/ts.rb
endif
# FIXME: Drop once top-level make is resolved
-include test/unit/Makefile
include $(top_srcdir)/device_mapper/Makefile
include $(top_srcdir)/base/Makefile
ifneq ($(shell which ctags),)
.PHONY: tags

View File

@ -1 +1 @@
2.02.178(2)-git (2017-12-18)
2.02.178(2)-git (2018-05-24)

View File

@ -1 +1 @@
1.02.147-git (2017-12-18)
1.02.147-git (2018-05-24)

View File

@ -1,18 +1,84 @@
Version 2.02.178 -
=====================================
Add devices/use_aio, aio_max, aio_memory to configure AIO limits.
Support asynchronous I/O when scanning devices.
Detect asynchronous I/O capability in configure or accept --disable-aio.
Add AIO_SUPPORTED_CODE_PATH to indicate whether AIO may be used.
Version 3.0.0
=============
Add basic creation support for VDO target.
Never send any discard ioctl with test mode.
Fix thin-pool alloc which needs same PV for data and metadata.
Extend list of non-memlocked areas with newly linked libs.
Enhance vgcfgrestore to check for active LVs in restored VG.
Configure supports --disable-silent-rules for verbose builds.
Fix unmonitoring of merging snapshots.
Cache can uses metadata format 2 with cleaner policy.
Fix check if resized PV can also fit metadata area.
Avoid showing internal error in lvs output or pvmoved LVs.
Remove clvmd
Remove lvmlib (api)
lvconvert: provide possible layouts between linear and striped/raid
Use versionsort to fix archive file expiry beyond 100000 files.
Version 2.02.178-rc1 - 24th May 2018
====================================
Add libaio dependency for build.
Remove lvm1 and pool format handling and add filter to ignore them.
Move some filter checks to after disks are read.
Rework disk scanning and when it is used.
Add new io layer and shift code to using it.
Fix lvconvert's return code on degraded -m raid1 conversion.
--enable-testing switch for ./configure has been removed.
--with-snapshots switch for ./configure has been removed.
--with-mirrors switch for ./configure has been removed.
--with-raid switch for ./configure has been removed.
--with-thin switch for ./configure has been removed.
--with-cache switch for ./configure has been removed.
Include new unit-test framework and unit tests.
Extend validation of region_size for mirror segment.
Reload whole device stack when reinitilizing mirror log.
Mirrors without monitoring are WARNING and not blocking on error.
Detect too big region_size with clustered mirrors.
Fix evaluation of maximal region size for mirror log.
Enhance mirror log size estimation and use smaller size when possible.
Fix incorrect mirror log size calculation on 32bit arch.
Enhance preloading tree creating.
Fix regression on acceptance of any LV on lvconvert.
Restore usability of thin LV to be again external origin for another thin.
Keep systemd vars on change event in 69-dm-lvm-metad.rules for systemd reload.
Write systemd and non-systemd rule in 69-dm-lvm-metad.rules, GOTO active one.
Add test for activation/volume_list (Sub)LV remnants.
Disallow usage of cache format 2 with mq cache policy.
Again accept striped LV as COW LV with lvconvert -s (2.02.169).
Fix raid target version testing for supported features.
Allow activation of pools when thin/cache_check tool is missing.
Remove RaidLV on creation failure when rmeta devices can't be activated.
Add prioritized_section() to restore cookie boundaries (2.02.177).
Enhance error messages when read error happens.
Enhance mirror log initialization for old mirror target.
Skip private crypto and stratis devices.
Skip frozen raid devices from scanning.
Activate RAID SubLVs on read_only_volume_list readwrite.
Offer convenience type raid5_n converting to raid10.
Automatically avoid reading invalid snapshots during device scan.
Ensure COW device is writable even for read-only thick snapshots.
Support activation of component LVs in read-only mode.
Extend internal library to recognize and work with component LV.
Skip duplicate check for active LV when prompting for its removal.
Activate correct lock holding LV when it is cached.
Do not modify archived metadata when removing striped raid.
Fix memleak on error path when obtaining lv_raid_data_offset.
Fix compatibility size test of extended external origin.
Add external_origin visiting in for_each_sub_lv().
Ensure cluster commands drop their device cache before locking VG.
Do not report LV as remotely active when it's locally exclusive in cluster.
Add deprecate messages for usage of mirrors with mirrorlog.
Separate reporting of monitoring status and error status.
Improve validation of created strings in vgimportclone.
Add missing initialisation of mem pool in systemd generator.
Do not reopen output streams for multithreaded users of liblvm.
Configure ensures /usr/bin dir is checked for dmpd tools.
Restore pvmove support for wide-clustered active volumes (2.02.177).
Avoid non-exclusive activation of exclusive segment types.
Fix trimming sibling PVs when doing a pvmove of raid subLVs.
Preserve exclusive activation during thin snaphost merge.
Suppress some repeated reads of the same disk data at the device layer.
Avoid exceeding array bounds in allocation tag processing.
Refactor metadata reading code to use callback functions.
Move memory allocation for the key dev_reads into the device layer.
Add --lockopt to common options and add option to skip selected locks.
Version 2.02.177 - 18th December 2017
=====================================

View File

@ -1,5 +1,16 @@
Version 1.02.147 -
=====================================
Version 1.02.147 -
====================================
Add vdo plugin for monitoring VDO devices.
Version 1.02.147-rc1 - 24th May 2018
====================================
Reuse uname() result for mirror target.
Recognize also mounted btrfs through dm_device_has_mounted_fs().
Add missing log_error() into dm_stats_populate() returning 0.
Avoid calling dm_stats_populat() for DM devices without any stats regions.
Support DM_DEBUG_WITH_LINE_NUMBERS envvar for debug msg with source:line.
Configured command for thin pool threshold handling gets whole environment.
Fix tests for failing dm_snprintf() in stats code.
Parsing mirror status accepts 'userspace' keyword in status.
Introduce dm_malloc_aligned for page alignment of buffers.

View File

@ -155,7 +155,7 @@ AC_DEFUN([AC_TRY_LDFLAGS],
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 3
serial 3
AC_DEFUN([AX_GCC_BUILTIN], [
AS_VAR_PUSHDEF([ac_var], [ax_cv_have_$1])

aclocal.m4 vendored
View File

@ -1,6 +1,6 @@
# generated automatically by aclocal 1.15 -*- Autoconf -*-
# generated automatically by aclocal 1.15.1 -*- Autoconf -*-
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@ -13,7 +13,7 @@
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_python_module.html
# https://www.gnu.org/software/autoconf-archive/ax_python_module.html
# ===========================================================================
#
# SYNOPSIS
@ -37,7 +37,7 @@ m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 8
#serial 9
AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
AC_DEFUN([AX_PYTHON_MODULE],[
@ -69,32 +69,63 @@ AC_DEFUN([AX_PYTHON_MODULE],[
fi
])
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 1 (pkg-config-0.24)
#
# Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 11 (pkg-config-0.29.1)
# PKG_PROG_PKG_CONFIG([MIN-VERSION])
# ----------------------------------
dnl Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
dnl Copyright © 2012-2015 Dan Nicholson <dbn.lists@gmail.com>
dnl
dnl This program is free software; you can redistribute it and/or modify
dnl it under the terms of the GNU General Public License as published by
dnl the Free Software Foundation; either version 2 of the License, or
dnl (at your option) any later version.
dnl
dnl This program is distributed in the hope that it will be useful, but
dnl WITHOUT ANY WARRANTY; without even the implied warranty of
dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
dnl General Public License for more details.
dnl
dnl You should have received a copy of the GNU General Public License
dnl along with this program; if not, write to the Free Software
dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
dnl 02111-1307, USA.
dnl
dnl As a special exception to the GNU General Public License, if you
dnl distribute this file as part of a program that contains a
dnl configuration script generated by Autoconf, you may include it under
dnl the same distribution terms that you use for the rest of that
dnl program.
dnl PKG_PREREQ(MIN-VERSION)
dnl -----------------------
dnl Since: 0.29
dnl
dnl Verify that the version of the pkg-config macros are at least
dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
dnl installed version of pkg-config, this checks the developer's version
dnl of pkg.m4 when generating configure.
dnl
dnl To ensure that this macro is defined, also add:
dnl m4_ifndef([PKG_PREREQ],
dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
dnl
dnl See the "Since" comment for each macro you use to see what version
dnl of the macros you require.
m4_defun([PKG_PREREQ],
[m4_define([PKG_MACROS_VERSION], [0.29.1])
m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
[m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
])dnl PKG_PREREQ
dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
dnl ----------------------------------
dnl Since: 0.16
dnl
dnl Search for the pkg-config tool and set the PKG_CONFIG variable to
dnl first found in the path. Checks that the version of pkg-config found
dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
dnl used since that's the first version where most current features of
dnl pkg-config existed.
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
@ -116,18 +147,19 @@ if test -n "$PKG_CONFIG"; then
PKG_CONFIG=""
fi
fi[]dnl
])# PKG_PROG_PKG_CONFIG
])dnl PKG_PROG_PKG_CONFIG
# PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
#
# Check to see whether a particular set of modules exists. Similar
# to PKG_CHECK_MODULES(), but does not set variables or print errors.
#
# Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
# only at the first occurence in configure.ac, so if the first place
# it's called might be skipped (such as if it is within an "if", you
# have to call PKG_CHECK_EXISTS manually
# --------------------------------------------------------------
dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------------------------------
dnl Since: 0.18
dnl
dnl Check to see whether a particular set of modules exists. Similar to
dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
dnl
dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
dnl only at the first occurence in configure.ac, so if the first place
dnl it's called might be skipped (such as if it is within an "if", you
dnl have to call PKG_CHECK_EXISTS manually
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
@ -137,8 +169,10 @@ m4_ifvaln([$3], [else
$3])dnl
fi])
# _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
# ---------------------------------------------
dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
dnl ---------------------------------------------
dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
dnl pkg_failed based on the result.
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
pkg_cv_[]$1="$$1"
@ -150,10 +184,11 @@ m4_define([_PKG_CONFIG],
else
pkg_failed=untried
fi[]dnl
])# _PKG_CONFIG
])dnl _PKG_CONFIG
# _PKG_SHORT_ERRORS_SUPPORTED
# -----------------------------
dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl ---------------------------
dnl Internal check to see if pkg-config supports short errors.
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
@ -161,19 +196,17 @@ if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
else
_pkg_short_errors_supported=no
fi[]dnl
])# _PKG_SHORT_ERRORS_SUPPORTED
])dnl _PKG_SHORT_ERRORS_SUPPORTED
# PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
# [ACTION-IF-NOT-FOUND])
#
#
# Note that if there is a possibility the first call to
# PKG_CHECK_MODULES might not happen, you should be sure to include an
# explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
#
#
# --------------------------------------------------------------
dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl --------------------------------------------------------------
dnl Since: 0.4.0
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
@ -227,16 +260,40 @@ else
AC_MSG_RESULT([yes])
$3
fi[]dnl
])# PKG_CHECK_MODULES
])dnl PKG_CHECK_MODULES
# PKG_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable pkgconfigdir as the location where a module
# should install pkg-config .pc files. By default the directory is
# $libdir/pkgconfig, but the default can be changed by passing
# DIRECTORY. The user can override through the --with-pkgconfigdir
# parameter.
dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl ---------------------------------------------------------------------
dnl Since: 0.29
dnl
dnl Checks for existence of MODULES and gathers its build flags with
dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
dnl and VARIABLE-PREFIX_LIBS from --libs.
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
dnl configure.ac.
AC_DEFUN([PKG_CHECK_MODULES_STATIC],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
_save_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES($@)
PKG_CONFIG=$_save_PKG_CONFIG[]dnl
])dnl PKG_CHECK_MODULES_STATIC
dnl PKG_INSTALLDIR([DIRECTORY])
dnl -------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable pkgconfigdir as the location where a module
dnl should install pkg-config .pc files. By default the directory is
dnl $libdir/pkgconfig, but the default can be changed by passing
dnl DIRECTORY. The user can override through the --with-pkgconfigdir
dnl parameter.
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
@ -247,16 +304,18 @@ AC_ARG_WITH([pkgconfigdir],
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
]) dnl PKG_INSTALLDIR
])dnl PKG_INSTALLDIR
# PKG_NOARCH_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable noarch_pkgconfigdir as the location where a
# module should install arch-independent pkg-config .pc files. By
# default the directory is $datadir/pkgconfig, but the default can be
# changed by passing DIRECTORY. The user can override through the
# --with-noarch-pkgconfigdir parameter.
dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
dnl --------------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable noarch_pkgconfigdir as the location where a
dnl module should install arch-independent pkg-config .pc files. By
dnl default the directory is $datadir/pkgconfig, but the default can be
dnl changed by passing DIRECTORY. The user can override through the
dnl --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
m4_pushdef([pkg_description],
@ -267,13 +326,15 @@ AC_ARG_WITH([noarch-pkgconfigdir],
AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
]) dnl PKG_NOARCH_INSTALLDIR
])dnl PKG_NOARCH_INSTALLDIR
# PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
# [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
# -------------------------------------------
# Retrieves the value of the pkg-config variable for the given module.
dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------
dnl Since: 0.28
dnl
dnl Retrieves the value of the pkg-config variable for the given module.
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
@ -282,9 +343,77 @@ _PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])# PKG_CHECK_VAR
])dnl PKG_CHECK_VAR
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
dnl PKG_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND],
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------
dnl
dnl Prepare a "--with-" configure option using the lowercase
dnl [VARIABLE-PREFIX] name, merging the behaviour of AC_ARG_WITH and
dnl PKG_CHECK_MODULES in a single macro.
AC_DEFUN([PKG_WITH_MODULES],
[
m4_pushdef([with_arg], m4_tolower([$1]))
m4_pushdef([description],
[m4_default([$5], [build with ]with_arg[ support])])
m4_pushdef([def_arg], [m4_default([$6], [auto])])
m4_pushdef([def_action_if_found], [AS_TR_SH([with_]with_arg)=yes])
m4_pushdef([def_action_if_not_found], [AS_TR_SH([with_]with_arg)=no])
m4_case(def_arg,
[yes],[m4_pushdef([with_without], [--without-]with_arg)],
[m4_pushdef([with_without],[--with-]with_arg)])
AC_ARG_WITH(with_arg,
AS_HELP_STRING(with_without, description[ @<:@default=]def_arg[@:>@]),,
[AS_TR_SH([with_]with_arg)=def_arg])
AS_CASE([$AS_TR_SH([with_]with_arg)],
[yes],[PKG_CHECK_MODULES([$1],[$2],$3,$4)],
[auto],[PKG_CHECK_MODULES([$1],[$2],
[m4_n([def_action_if_found]) $3],
[m4_n([def_action_if_not_found]) $4])])
m4_popdef([with_arg])
m4_popdef([description])
m4_popdef([def_arg])
])dnl PKG_WITH_MODULES
dnl PKG_HAVE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl -----------------------------------------------
dnl
dnl Convenience macro to trigger AM_CONDITIONAL after PKG_WITH_MODULES
dnl check. HAVE_[VARIABLE-PREFIX] is exported as a make variable.
AC_DEFUN([PKG_HAVE_WITH_MODULES],
[
PKG_WITH_MODULES([$1],[$2],,,[$3],[$4])
AM_CONDITIONAL([HAVE_][$1],
[test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"])
])dnl PKG_HAVE_WITH_MODULES
dnl PKG_HAVE_DEFINE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------------------
dnl
dnl Convenience macro to run AM_CONDITIONAL and AC_DEFINE after
dnl PKG_WITH_MODULES check. HAVE_[VARIABLE-PREFIX] is exported as make
dnl and preprocessor variable.
AC_DEFUN([PKG_HAVE_DEFINE_WITH_MODULES],
[
PKG_HAVE_WITH_MODULES([$1],[$2],[$3],[$4])
AS_IF([test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"],
[AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
])dnl PKG_HAVE_DEFINE_WITH_MODULES
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@ -317,8 +446,9 @@ AC_DEFUN([AM_PATH_PYTHON],
[
dnl Find a Python interpreter. Python versions prior to 2.0 are not
dnl supported. (2.0 was released on October 16, 2000).
dnl FIXME: Remove the need to hard-code Python versions here.
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],
[python python2 python3 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
[python python2 python3 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0])
AC_ARG_VAR([PYTHON], [the Python interpreter])
@ -519,7 +649,7 @@ for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[[i]]
sys.exit(sys.hexversion < minverhex)"
AS_IF([AM_RUN_LOG([$1 -c "$prog"])], [$3], [$4])])
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@ -536,5 +666,4 @@ AC_DEFUN([AM_RUN_LOG],
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
m4_include([acinclude.m4])

base/Makefile

@ -0,0 +1,31 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
BASE_SOURCE=\
base/data-struct/radix-tree.c \
base/data-struct/hash.c \
base/data-struct/list.c
BASE_DEPENDS=$(addprefix $(top_builddir)/,$(subst .c,.d,$(BASE_SOURCE)))
BASE_OBJECTS=$(addprefix $(top_builddir)/,$(subst .c,.o,$(BASE_SOURCE)))
CLEAN_TARGETS+=$(BASE_DEPENDS) $(BASE_OBJECTS)
-include $(BASE_DEPENDS)
$(BASE_OBJECTS): INCLUDES+=-I$(top_srcdir)/base/
$(top_builddir)/base/libbase.a: $(BASE_OBJECTS)
@echo " [AR] $@"
$(Q) $(RM) $@
$(Q) $(AR) rsv $@ $(BASE_OBJECTS) > /dev/null
CLEAN_TARGETS+=$(top_builddir)/base/libbase.a

base/data-struct/hash.c

@ -0,0 +1,394 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "hash.h"
struct dm_hash_node {
struct dm_hash_node *next;
void *data;
unsigned data_len;
unsigned keylen;
char key[0];
};
struct dm_hash_table {
unsigned num_nodes;
unsigned num_slots;
struct dm_hash_node **slots;
};
/* Permutation of the Integers 0 through 255 */
static unsigned char _nums[] = {
1, 14, 110, 25, 97, 174, 132, 119, 138, 170, 125, 118, 27, 233, 140, 51,
87, 197, 177, 107, 234, 169, 56, 68, 30, 7, 173, 73, 188, 40, 36, 65,
49, 213, 104, 190, 57, 211, 148, 223, 48, 115, 15, 2, 67, 186, 210, 28,
12, 181, 103, 70, 22, 58, 75, 78, 183, 167, 238, 157, 124, 147, 172,
144,
176, 161, 141, 86, 60, 66, 128, 83, 156, 241, 79, 46, 168, 198, 41, 254,
178, 85, 253, 237, 250, 154, 133, 88, 35, 206, 95, 116, 252, 192, 54,
221,
102, 218, 255, 240, 82, 106, 158, 201, 61, 3, 89, 9, 42, 155, 159, 93,
166, 80, 50, 34, 175, 195, 100, 99, 26, 150, 16, 145, 4, 33, 8, 189,
121, 64, 77, 72, 208, 245, 130, 122, 143, 55, 105, 134, 29, 164, 185,
194,
193, 239, 101, 242, 5, 171, 126, 11, 74, 59, 137, 228, 108, 191, 232,
139,
6, 24, 81, 20, 127, 17, 91, 92, 251, 151, 225, 207, 21, 98, 113, 112,
84, 226, 18, 214, 199, 187, 13, 32, 94, 220, 224, 212, 247, 204, 196,
43,
249, 236, 45, 244, 111, 182, 153, 136, 129, 90, 217, 202, 19, 165, 231,
71,
230, 142, 96, 227, 62, 179, 246, 114, 162, 53, 160, 215, 205, 180, 47,
109,
44, 38, 31, 149, 135, 0, 216, 52, 63, 23, 37, 69, 39, 117, 146, 184,
163, 200, 222, 235, 248, 243, 219, 10, 152, 131, 123, 229, 203, 76, 120,
209
};
static struct dm_hash_node *_create_node(const char *str, unsigned len)
{
struct dm_hash_node *n = malloc(sizeof(*n) + len);
if (n) {
memcpy(n->key, str, len);
n->keylen = len;
}
return n;
}
static unsigned long _hash(const char *str, unsigned len)
{
unsigned long h = 0, g;
unsigned i;
for (i = 0; i < len; i++) {
h <<= 4;
h += _nums[(unsigned char) *str++];
g = h & ((unsigned long) 0xf << 16u);
if (g) {
h ^= g >> 16u;
h ^= g >> 5u;
}
}
return h;
}
struct dm_hash_table *dm_hash_create(unsigned size_hint)
{
size_t len;
unsigned new_size = 16u;
struct dm_hash_table *hc = zalloc(sizeof(*hc));
if (!hc)
return_0;
/* round size hint up to a power of two */
while (new_size < size_hint)
new_size = new_size << 1;
hc->num_slots = new_size;
len = sizeof(*(hc->slots)) * new_size;
if (!(hc->slots = zalloc(len)))
goto_bad;
return hc;
bad:
free(hc->slots);
free(hc);
return 0;
}
static void _free_nodes(struct dm_hash_table *t)
{
struct dm_hash_node *c, *n;
unsigned i;
for (i = 0; i < t->num_slots; i++)
for (c = t->slots[i]; c; c = n) {
n = c->next;
free(c);
}
}
void dm_hash_destroy(struct dm_hash_table *t)
{
_free_nodes(t);
free(t->slots);
free(t);
}
static struct dm_hash_node **_find(struct dm_hash_table *t, const void *key,
uint32_t len)
{
unsigned h = _hash(key, len) & (t->num_slots - 1);
struct dm_hash_node **c;
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len))
break;
}
return c;
}
void *dm_hash_lookup_binary(struct dm_hash_table *t, const void *key,
uint32_t len)
{
struct dm_hash_node **c = _find(t, key, len);
return *c ? (*c)->data : 0;
}
int dm_hash_insert_binary(struct dm_hash_table *t, const void *key,
uint32_t len, void *data)
{
struct dm_hash_node **c = _find(t, key, len);
if (*c)
(*c)->data = data;
else {
struct dm_hash_node *n = _create_node(key, len);
if (!n)
return 0;
n->data = data;
n->next = 0;
*c = n;
t->num_nodes++;
}
return 1;
}
void dm_hash_remove_binary(struct dm_hash_table *t, const void *key,
uint32_t len)
{
struct dm_hash_node **c = _find(t, key, len);
if (*c) {
struct dm_hash_node *old = *c;
*c = (*c)->next;
free(old);
t->num_nodes--;
}
}
void *dm_hash_lookup(struct dm_hash_table *t, const char *key)
{
return dm_hash_lookup_binary(t, key, strlen(key) + 1);
}
int dm_hash_insert(struct dm_hash_table *t, const char *key, void *data)
{
return dm_hash_insert_binary(t, key, strlen(key) + 1, data);
}
void dm_hash_remove(struct dm_hash_table *t, const char *key)
{
dm_hash_remove_binary(t, key, strlen(key) + 1);
}
static struct dm_hash_node **_find_str_with_val(struct dm_hash_table *t,
const void *key, const void *val,
uint32_t len, uint32_t val_len)
{
struct dm_hash_node **c;
unsigned h;
h = _hash(key, len) & (t->num_slots - 1);
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len) && (*c)->data) {
if (((*c)->data_len == val_len) &&
!memcmp(val, (*c)->data, val_len))
return c;
}
}
return NULL;
}
int dm_hash_insert_allow_multiple(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node *n;
struct dm_hash_node *first;
int len = strlen(key) + 1;
unsigned h;
n = _create_node(key, len);
if (!n)
return 0;
n->data = (void *)val;
n->data_len = val_len;
h = _hash(key, len) & (t->num_slots - 1);
first = t->slots[h];
if (first)
n->next = first;
else
n->next = 0;
t->slots[h] = n;
t->num_nodes++;
return 1;
}
/*
* Look through multiple entries with the same key for one that has a
* matching val and return that. If none have a matching val, return NULL.
*/
void *dm_hash_lookup_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node **c;
c = _find_str_with_val(t, key, val, strlen(key) + 1, val_len);
return (c && *c) ? (*c)->data : 0;
}
/*
* Look through multiple entries with the same key for one that has a
* matching val and remove that.
*/
void dm_hash_remove_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node **c;
c = _find_str_with_val(t, key, val, strlen(key) + 1, val_len);
if (c && *c) {
struct dm_hash_node *old = *c;
*c = (*c)->next;
free(old);
t->num_nodes--;
}
}
/*
* Look up the value for a key and count how many
* entries have the same key.
*
* If no entries have key, return NULL and set count to 0.
*
* If one entry has the key, the function returns the val,
* and sets count to 1.
*
* If N entries have the key, the function returns the val
* from the first entry, and sets count to N.
*/
void *dm_hash_lookup_with_count(struct dm_hash_table *t, const char *key, int *count)
{
struct dm_hash_node **c;
struct dm_hash_node **c1 = NULL;
uint32_t len = strlen(key) + 1;
unsigned h;
*count = 0;
h = _hash(key, len) & (t->num_slots - 1);
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len)) {
(*count)++;
if (!c1)
c1 = c;
}
}
if (!c1)
return NULL;
else
return *c1 ? (*c1)->data : 0;
}
unsigned dm_hash_get_num_entries(struct dm_hash_table *t)
{
return t->num_nodes;
}
void dm_hash_iter(struct dm_hash_table *t, dm_hash_iterate_fn f)
{
struct dm_hash_node *c, *n;
unsigned i;
for (i = 0; i < t->num_slots; i++)
for (c = t->slots[i]; c; c = n) {
n = c->next;
f(c->data);
}
}
void dm_hash_wipe(struct dm_hash_table *t)
{
_free_nodes(t);
memset(t->slots, 0, sizeof(struct dm_hash_node *) * t->num_slots);
t->num_nodes = 0u;
}
char *dm_hash_get_key(struct dm_hash_table *t __attribute__((unused)),
struct dm_hash_node *n)
{
return n->key;
}
void *dm_hash_get_data(struct dm_hash_table *t __attribute__((unused)),
struct dm_hash_node *n)
{
return n->data;
}
static struct dm_hash_node *_next_slot(struct dm_hash_table *t, unsigned s)
{
struct dm_hash_node *c = NULL;
unsigned i;
for (i = s; i < t->num_slots && !c; i++)
c = t->slots[i];
return c;
}
struct dm_hash_node *dm_hash_get_first(struct dm_hash_table *t)
{
return _next_slot(t, 0);
}
struct dm_hash_node *dm_hash_get_next(struct dm_hash_table *t, struct dm_hash_node *n)
{
unsigned h = _hash(n->key, n->keylen) & (t->num_slots - 1);
return n->next ? n->next : _next_slot(t, h + 1);
}
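
The multiple-entry helpers implemented above (dm_hash_insert_allow_multiple, dm_hash_lookup_with_val, dm_hash_lookup_with_count) are easiest to follow from a small caller. The following is a minimal sketch, not part of this patch; it assumes the file is built into base/libbase.a as in base/Makefile and that base/data-struct/hash.h is on the include path.

/* multi_key_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include <string.h>
#include "base/data-struct/hash.h"

int main(void)
{
        struct dm_hash_table *t = dm_hash_create(64);
        const char *a = "alpha", *b = "beta";
        int count = 0;

        if (!t)
                return 1;

        /* Two entries may share the key "lv0" when added this way. */
        dm_hash_insert_allow_multiple(t, "lv0", a, strlen(a) + 1);
        dm_hash_insert_allow_multiple(t, "lv0", b, strlen(b) + 1);

        /* Plain lookup returns whichever entry for "lv0" is found first. */
        printf("first found: %s\n", (char *) dm_hash_lookup(t, "lv0"));

        /* Lookup restricted to a matching value pins down one entry. */
        printf("with val: %s\n",
               (char *) dm_hash_lookup_with_val(t, "lv0", b, strlen(b) + 1));

        /* Count how many entries currently share the key. */
        dm_hash_lookup_with_count(t, "lv0", &count);
        printf("entries for lv0: %d\n", count);

        /* Remove only the entry whose value matches. */
        dm_hash_remove_with_val(t, "lv0", a, strlen(a) + 1);

        dm_hash_destroy(t);
        return 0;
}

Because dm_hash_insert_allow_multiple() links the new node at the head of its hash slot, a plain dm_hash_lookup() on a duplicated key returns the most recently added entry; only the *_with_val variants select a specific one.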

base/data-struct/hash.h

@ -0,0 +1,94 @@
#ifndef BASE_DATA_STRUCT_HASH_H
#define BASE_DATA_STRUCT_HASH_H
#include <stdint.h>
//----------------------------------------------------------------
struct dm_hash_table;
struct dm_hash_node;
typedef void (*dm_hash_iterate_fn) (void *data);
struct dm_hash_table *dm_hash_create(unsigned size_hint)
__attribute__((__warn_unused_result__));
void dm_hash_destroy(struct dm_hash_table *t);
void dm_hash_wipe(struct dm_hash_table *t);
void *dm_hash_lookup(struct dm_hash_table *t, const char *key);
int dm_hash_insert(struct dm_hash_table *t, const char *key, void *data);
void dm_hash_remove(struct dm_hash_table *t, const char *key);
void *dm_hash_lookup_binary(struct dm_hash_table *t, const void *key, uint32_t len);
int dm_hash_insert_binary(struct dm_hash_table *t, const void *key, uint32_t len,
void *data);
void dm_hash_remove_binary(struct dm_hash_table *t, const void *key, uint32_t len);
unsigned dm_hash_get_num_entries(struct dm_hash_table *t);
void dm_hash_iter(struct dm_hash_table *t, dm_hash_iterate_fn f);
char *dm_hash_get_key(struct dm_hash_table *t, struct dm_hash_node *n);
void *dm_hash_get_data(struct dm_hash_table *t, struct dm_hash_node *n);
struct dm_hash_node *dm_hash_get_first(struct dm_hash_table *t);
struct dm_hash_node *dm_hash_get_next(struct dm_hash_table *t, struct dm_hash_node *n);
/*
* dm_hash_insert() replaces the value of an existing
* entry with a matching key if one exists. Otherwise
* it adds a new entry.
*
* dm_hash_insert_allow_multiple() inserts a new entry even if
* another entry with the same key already exists.
* val_len is the size of the data being inserted.
*
* If two entries with the same key exist,
* (added using dm_hash_insert_allow_multiple), then:
* . dm_hash_lookup() returns the first one it finds, and
* dm_hash_lookup_with_val() returns the one with a matching
* val_len/val.
* . dm_hash_remove() removes the first one it finds, and
* dm_hash_remove_with_val() removes the one with a matching
* val_len/val.
*
* If a single entry with a given key exists, and it has
* zero val_len, then:
* . dm_hash_lookup() returns it
* . dm_hash_lookup_with_val(val_len=0) returns it
* . dm_hash_remove() removes it
* . dm_hash_remove_with_val(val_len=0) removes it
*
* dm_hash_lookup_with_count() is a single call that will
* both lookup a key's value and check if there is more
* than one entry with the given key.
*
* (It is not meant to retrieve all the entries with the
* given key. In the common case where a single entry exists
* for the key, it is useful to have a single call that will
* both look up the value and indicate if multiple values
* exist for the key.)
*
* dm_hash_lookup_with_count:
* . If no entries exist, the function returns NULL, and
* the count is set to 0.
* . If only one entry exists, the value of that entry is
* returned and count is set to 1.
* . If N entries exists, the value of the first entry is
* returned and count is set to N.
*/
void *dm_hash_lookup_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
void dm_hash_remove_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
int dm_hash_insert_allow_multiple(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
void *dm_hash_lookup_with_count(struct dm_hash_table *t, const char *key, int *count);
#define dm_hash_iterate(v, h) \
for (v = dm_hash_get_first((h)); v; \
v = dm_hash_get_next((h), v))
//----------------------------------------------------------------
#endif
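
For the single-entry semantics documented in the comment above (a second dm_hash_insert() with the same key replaces the stored value rather than adding an entry), a short usage sketch may help. Illustrative only, assuming the header and the implementation above are compiled in.

/* hash_basic_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include "base/data-struct/hash.h"

int main(void)
{
        struct dm_hash_table *t = dm_hash_create(16);
        struct dm_hash_node *n;
        int first = 1, second = 2;

        if (!t)
                return 1;

        dm_hash_insert(t, "vg0", &first);
        dm_hash_insert(t, "vg0", &second);      /* replaces the value for "vg0" */
        dm_hash_insert(t, "vg1", &first);

        printf("vg0 -> %d\n", *(int *) dm_hash_lookup(t, "vg0"));      /* prints 2 */
        printf("entries: %u\n", dm_hash_get_num_entries(t));           /* prints 2 */

        /* Walk every node; the order follows the internal slot layout. */
        dm_hash_iterate(n, t)
                printf("%s -> %d\n", dm_hash_get_key(t, n),
                       *(int *) dm_hash_get_data(t, n));

        dm_hash_remove(t, "vg1");
        dm_hash_destroy(t);
        return 0;
}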

base/data-struct/list.c

@ -0,0 +1,170 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "list.h"
#include <assert.h>
#include <stdlib.h>
/*
* Initialise a list before use.
* The list head's next and previous pointers point back to itself.
*/
void dm_list_init(struct dm_list *head)
{
head->n = head->p = head;
}
/*
* Insert an element before 'head'.
* If 'head' is the list head, this adds an element to the end of the list.
*/
void dm_list_add(struct dm_list *head, struct dm_list *elem)
{
assert(head->n);
elem->n = head;
elem->p = head->p;
head->p->n = elem;
head->p = elem;
}
/*
* Insert an element after 'head'.
* If 'head' is the list head, this adds an element to the front of the list.
*/
void dm_list_add_h(struct dm_list *head, struct dm_list *elem)
{
assert(head->n);
elem->n = head->n;
elem->p = head;
head->n->p = elem;
head->n = elem;
}
/*
* Delete an element from its list.
* Note that this doesn't change the element itself - it may still be safe
* to follow its pointers.
*/
void dm_list_del(struct dm_list *elem)
{
elem->n->p = elem->p;
elem->p->n = elem->n;
}
/*
* Remove an element from existing list and insert before 'head'.
*/
void dm_list_move(struct dm_list *head, struct dm_list *elem)
{
dm_list_del(elem);
dm_list_add(head, elem);
}
/*
* Is the list empty?
*/
int dm_list_empty(const struct dm_list *head)
{
return head->n == head;
}
/*
* Is this the first element of the list?
*/
int dm_list_start(const struct dm_list *head, const struct dm_list *elem)
{
return elem->p == head;
}
/*
* Is this the last element of the list?
*/
int dm_list_end(const struct dm_list *head, const struct dm_list *elem)
{
return elem->n == head;
}
/*
* Return first element of the list or NULL if empty
*/
struct dm_list *dm_list_first(const struct dm_list *head)
{
return (dm_list_empty(head) ? NULL : head->n);
}
/*
* Return last element of the list or NULL if empty
*/
struct dm_list *dm_list_last(const struct dm_list *head)
{
return (dm_list_empty(head) ? NULL : head->p);
}
/*
* Return the previous element of the list, or NULL if we've reached the start.
*/
struct dm_list *dm_list_prev(const struct dm_list *head, const struct dm_list *elem)
{
return (dm_list_start(head, elem) ? NULL : elem->p);
}
/*
* Return the next element of the list, or NULL if we've reached the end.
*/
struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *elem)
{
return (dm_list_end(head, elem) ? NULL : elem->n);
}
/*
* Return the number of elements in a list by walking it.
*/
unsigned int dm_list_size(const struct dm_list *head)
{
unsigned int s = 0;
const struct dm_list *v;
dm_list_iterate(v, head)
s++;
return s;
}
/*
* Join two lists together.
* This moves all the elements of the list 'head1' to the end of the list
* 'head', leaving 'head1' empty.
*/
void dm_list_splice(struct dm_list *head, struct dm_list *head1)
{
assert(head->n);
assert(head1->n);
if (dm_list_empty(head1))
return;
head1->p->n = head;
head1->n->p = head->p;
head->p->n = head1->n;
head->p = head1->p;
dm_list_init(head1);
}
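
These list primitives are intrusive: the caller embeds struct dm_list nodes in its own objects and the functions above only re-link pointers, never allocate. A minimal sketch of the basic operations, not part of this patch, assuming base/data-struct/list.h and this file are compiled in.

/* list_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include "base/data-struct/list.h"

int main(void)
{
        struct dm_list a, b, n1, n2, n3;

        dm_list_init(&a);
        dm_list_init(&b);

        /* dm_list_add() inserts before the head, i.e. appends at the tail. */
        dm_list_add(&a, &n1);
        dm_list_add(&a, &n2);

        /* dm_list_add_h() inserts after the head, i.e. at the front. */
        dm_list_add_h(&b, &n3);

        /* Move everything from 'b' onto the end of 'a', leaving 'b' empty. */
        dm_list_splice(&a, &b);

        printf("a holds %u elements, b empty: %d\n",
               dm_list_size(&a), dm_list_empty(&b));
        return 0;
}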

base/data-struct/list.h

@ -0,0 +1,209 @@
#ifndef BASE_DATA_STRUCT_LIST_H
#define BASE_DATA_STRUCT_LIST_H
//----------------------------------------------------------------
/*
* A list consists of a list head plus elements.
* Each element has 'next' and 'previous' pointers.
* The list head's pointers point to the first and the last element.
*/
struct dm_list {
struct dm_list *n, *p;
};
/*
* String list.
*/
struct dm_str_list {
struct dm_list list;
const char *str;
};
/*
* Initialise a list before use.
* The list head's next and previous pointers point back to itself.
*/
#define DM_LIST_HEAD_INIT(name) { &(name), &(name) }
#define DM_LIST_INIT(name) struct dm_list name = DM_LIST_HEAD_INIT(name)
void dm_list_init(struct dm_list *head);
/*
* Insert an element before 'head'.
* If 'head' is the list head, this adds an element to the end of the list.
*/
void dm_list_add(struct dm_list *head, struct dm_list *elem);
/*
* Insert an element after 'head'.
* If 'head' is the list head, this adds an element to the front of the list.
*/
void dm_list_add_h(struct dm_list *head, struct dm_list *elem);
/*
* Delete an element from its list.
* Note that this doesn't change the element itself - it may still be safe
* to follow its pointers.
*/
void dm_list_del(struct dm_list *elem);
/*
* Remove an element from existing list and insert before 'head'.
*/
void dm_list_move(struct dm_list *head, struct dm_list *elem);
/*
* Join 'head1' to the end of 'head'.
*/
void dm_list_splice(struct dm_list *head, struct dm_list *head1);
/*
* Is the list empty?
*/
int dm_list_empty(const struct dm_list *head);
/*
* Is this the first element of the list?
*/
int dm_list_start(const struct dm_list *head, const struct dm_list *elem);
/*
* Is this the last element of the list?
*/
int dm_list_end(const struct dm_list *head, const struct dm_list *elem);
/*
* Return first element of the list or NULL if empty
*/
struct dm_list *dm_list_first(const struct dm_list *head);
/*
* Return last element of the list or NULL if empty
*/
struct dm_list *dm_list_last(const struct dm_list *head);
/*
* Return the previous element of the list, or NULL if we've reached the start.
*/
struct dm_list *dm_list_prev(const struct dm_list *head, const struct dm_list *elem);
/*
* Return the next element of the list, or NULL if we've reached the end.
*/
struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *elem);
/*
* Given the address v of an instance of 'struct dm_list' called 'head'
* contained in a structure of type t, return the containing structure.
*/
#define dm_list_struct_base(v, t, head) \
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
/*
* Given the address v of an instance of 'struct dm_list list' contained in
* a structure of type t, return the containing structure.
*/
#define dm_list_item(v, t) dm_list_struct_base((v), t, list)
/*
* Given the address v of one known element e in a known structure of type t,
* return another element f.
*/
#define dm_struct_field(v, t, e, f) \
(((t *)((uintptr_t)(v) - (uintptr_t)&((t *) 0)->e))->f)
/*
* Given the address v of a known element e in a known structure of type t,
* return the list head 'list'
*/
#define dm_list_head(v, t, e) dm_struct_field(v, t, e, list)
/*
* Set v to each element of a list in turn.
*/
#define dm_list_iterate(v, head) \
for (v = (head)->n; v != head; v = v->n)
/*
* Set v to each element in a list in turn, starting from the element
* in front of 'start'.
* You can use this to 'unwind' a list_iterate and back out actions on
* already-processed elements.
* If 'start' is 'head' it walks the list backwards.
*/
#define dm_list_uniterate(v, head, start) \
for (v = (start)->p; v != head; v = v->p)
/*
* A safe way to walk a list and delete and free some elements along
* the way.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_safe(v, t, head) \
for (v = (head)->n, t = v->n; v != head; v = t, t = v->n)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
*/
#define dm_list_iterate_items_gen(v, head, field) \
for (v = dm_list_struct_base((head)->n, __typeof__(*v), field); \
&v->field != (head); \
v = dm_list_struct_base(v->field.n, __typeof__(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
*/
#define dm_list_iterate_items(v, head) dm_list_iterate_items_gen(v, (head), list)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_items_gen_safe(v, t, head, field) \
for (v = dm_list_struct_base((head)->n, __typeof__(*v), field), \
t = dm_list_struct_base(v->field.n, __typeof__(*v), field); \
&v->field != (head); \
v = t, t = dm_list_struct_base(v->field.n, __typeof__(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_items_safe(v, t, head) \
dm_list_iterate_items_gen_safe(v, t, (head), list)
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
*/
#define dm_list_iterate_back_items_gen(v, head, field) \
for (v = dm_list_struct_base((head)->p, __typeof__(*v), field); \
&v->field != (head); \
v = dm_list_struct_base(v->field.p, __typeof__(*v), field))
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
*/
#define dm_list_iterate_back_items(v, head) dm_list_iterate_back_items_gen(v, (head), list)
/*
* Return the number of elements in a list by walking it.
*/
unsigned int dm_list_size(const struct dm_list *head);
//----------------------------------------------------------------
#endif
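
The *_items macros above recover the containing structure from the embedded struct dm_list member, which by convention is named 'list'. A hedged sketch of the usual pattern, not part of this patch; struct lv_entry is an invented example type, and list.c above is assumed to be linked in.

/* list_items_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include "base/data-struct/list.h"

struct lv_entry {
        struct dm_list list;    /* embedded node; named 'list' for the plain *_items macros */
        const char *name;
};

int main(void)
{
        DM_LIST_INIT(lvs);
        struct lv_entry e1 = { .name = "lv0" }, e2 = { .name = "lv1" };
        struct lv_entry *lv, *tmp;

        dm_list_add(&lvs, &e1.list);
        dm_list_add(&lvs, &e2.list);

        /* 'lv' is set to each containing struct lv_entry in turn. */
        dm_list_iterate_items(lv, &lvs)
                printf("%s\n", lv->name);

        /* The _safe variant tolerates unlinking the current element. */
        dm_list_iterate_items_safe(lv, tmp, &lvs)
                dm_list_del(&lv->list);

        printf("empty: %d\n", dm_list_empty(&lvs));
        return 0;
}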

File diff suppressed because it is too large

base/data-struct/radix-tree.h

@ -0,0 +1,64 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_DATA_STRUCT_RADIX_TREE_H
#define BASE_DATA_STRUCT_RADIX_TREE_H
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
//----------------------------------------------------------------
struct radix_tree;
union radix_value {
void *ptr;
uint64_t n;
};
typedef void (*radix_value_dtr)(void *context, union radix_value v);
// dtr will be called on any deleted entries. dtr may be NULL.
struct radix_tree *radix_tree_create(radix_value_dtr dtr, void *dtr_context);
void radix_tree_destroy(struct radix_tree *rt);
unsigned radix_tree_size(struct radix_tree *rt);
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value v);
bool radix_tree_remove(struct radix_tree *rt, uint8_t *kb, uint8_t *ke);
// Returns the number of values removed
unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *prefix_b, uint8_t *prefix_e);
bool radix_tree_lookup(struct radix_tree *rt,
uint8_t *kb, uint8_t *ke, union radix_value *result);
// The radix tree stores entries in lexicographical order. Which means
// we can iterate entries, in order. Or iterate entries with a particular
// prefix.
struct radix_tree_iterator {
// Returns false if the iteration should end.
bool (*visit)(struct radix_tree_iterator *it,
uint8_t *kb, uint8_t *ke, union radix_value v);
};
void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
struct radix_tree_iterator *it);
// Checks that some constraints on the shape of the tree are
// being held. For debug only.
bool radix_tree_is_well_formed(struct radix_tree *rt);
void radix_tree_dump(struct radix_tree *rt, FILE *out);
//----------------------------------------------------------------
#endif
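
The interface above addresses entries by arbitrary byte ranges [kb, ke), so a string key is passed as a pointer pair. A sketch of a possible caller, not part of this patch; it assumes the radix-tree.c implementation (whose diff is suppressed above) is linked in, and that passing a key range to radix_tree_iterate() restricts the walk to that prefix, as the comment above describes.

/* radix_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include <string.h>
#include "base/data-struct/radix-tree.h"

/* Visit callback: print each key and value; return true to keep iterating. */
static bool _print(struct radix_tree_iterator *it,
                   uint8_t *kb, uint8_t *ke, union radix_value v)
{
        printf("%.*s = %llu\n", (int) (ke - kb), (const char *) kb,
               (unsigned long long) v.n);
        return true;
}

int main(void)
{
        struct radix_tree *rt = radix_tree_create(NULL, NULL); /* no destructor */
        struct radix_tree_iterator it = { .visit = _print };
        const char *k1 = "dev/sda", *k2 = "dev/sdb", *prefix = "dev/";
        union radix_value v, found;

        if (!rt)
                return 1;

        v.n = 1;
        radix_tree_insert(rt, (uint8_t *) k1, (uint8_t *) k1 + strlen(k1), v);
        v.n = 2;
        radix_tree_insert(rt, (uint8_t *) k2, (uint8_t *) k2 + strlen(k2), v);

        if (radix_tree_lookup(rt, (uint8_t *) k1, (uint8_t *) k1 + strlen(k1), &found))
                printf("lookup: %llu\n", (unsigned long long) found.n);

        /* Visit every entry whose key starts with "dev/", in order. */
        radix_tree_iterate(rt, (uint8_t *) prefix,
                           (uint8_t *) prefix + strlen(prefix), &it);

        printf("size: %u\n", radix_tree_size(rt));
        radix_tree_destroy(rt);
        return 0;
}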


@ -0,0 +1,23 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_CONTAINER_OF_H
#define BASE_MEMORY_CONTAINER_OF_H
//----------------------------------------------------------------
#define container_of(v, t, head) \
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
//----------------------------------------------------------------
#endif
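
container_of() maps a pointer to an embedded member back to its enclosing structure by subtracting the member's offset. A tiny sketch, illustrative only; the include path below is an assumption about where this header lands, and struct device/struct waiter are invented example types.

/* container_of_demo.c - illustrative sketch only, not part of the patch */
#include <stdio.h>
#include "base/memory/container-of.h"   /* assumed path for the header above */

struct waiter {
        int reason;
};

struct device {
        int fd;
        struct waiter w;        /* embedded member */
};

int main(void)
{
        struct device d = { .fd = 3, .w = { .reason = 7 } };
        struct waiter *wp = &d.w;

        /* Recover the enclosing struct device from the member pointer. */
        struct device *dev = container_of(wp, struct device, w);

        printf("fd = %d, reason = %d\n", dev->fd, wp->reason);
        return 0;
}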

base/memory/zalloc.h

@ -0,0 +1,31 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_ZALLOC_H
#define BASE_MEMORY_ZALLOC_H
#include <stdlib.h>
#include <string.h>
//----------------------------------------------------------------
static inline void *zalloc(size_t len)
{
void *ptr = malloc(len);
if (ptr)
memset(ptr, 0, len);
return ptr;
}
//----------------------------------------------------------------
#endif


@ -1,5 +1,5 @@
#
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@ -25,6 +25,7 @@ PROFILES=$(PROFILE_TEMPLATES) \
$(srcdir)/cache-smq.profile \
$(srcdir)/thin-generic.profile \
$(srcdir)/thin-performance.profile \
$(srcdir)/vdo-small.profile \
$(srcdir)/lvmdbusd.profile
include $(top_builddir)/make.tmpl
@ -32,8 +33,8 @@ include $(top_builddir)/make.tmpl
.PHONY: install_conf install_localconf install_profiles
generate:
LD_LIBRARY_PATH=$(top_builddir)/libdm:$(LD_LIBRARY_PATH) $(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withgeneralpreamble --withcomments --ignorelocal --withspaces > example.conf.in
LD_LIBRARY_PATH=$(top_builddir)/libdm:$(LD_LIBRARY_PATH) $(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withlocalpreamble --withcomments --withspaces local > lvmlocal.conf.in
$(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withgeneralpreamble --withcomments --ignorelocal --withspaces > example.conf.in
$(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withlocalpreamble --withcomments --withspaces local > lvmlocal.conf.in
install_conf: $(CONFSRC)
@if [ ! -e $(confdir)/$(CONFDEST) ]; then \


@ -59,22 +59,6 @@ devices {
# This configuration option is advanced.
scan = [ "/dev" ]
# Configuration option devices/use_aio.
# Use linux asynchronous I/O for parallel device access where possible.
# This configuration option has an automatic default value.
# use_aio = 1
# Configuration option devices/aio_max.
# Maximum number of asynchronous I/Os to issue concurrently.
# This configuration option has an automatic default value.
# aio_max = 128
# Configuration option devices/aio_memory.
# Approximate maximum total amount of memory (in MB) used
# for asynchronous I/O buffers.
# This configuration option has an automatic default value.
# aio_memory = 10
# Configuration option devices/obtain_device_list_from_udev.
# Obtain the list of available devices from udev.
# This avoids opening or using any inapplicable non-block devices or
@ -167,21 +151,15 @@ devices {
# global_filter = [ "a|.*/|" ]
# Configuration option devices/cache_dir.
# Directory in which to store the device cache file.
# The results of filtering are cached on disk to avoid rescanning dud
# devices (which can take a very long time). By default this cache is
# stored in a file named .cache. It is safe to delete this file; the
# tools regenerate it. If obtain_device_list_from_udev is enabled, the
# list of devices is obtained from udev and any existing .cache file
# is removed.
# This setting is no longer used.
cache_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_CACHE_SUBDIR@"
# Configuration option devices/cache_file_prefix.
# A prefix used before the .cache file name. See devices/cache_dir.
# This setting is no longer used.
cache_file_prefix = ""
# Configuration option devices/write_cache_state.
# Enable/disable writing the cache file. See devices/cache_dir.
# This setting is no longer used.
write_cache_state = 1
# Configuration option devices/types.
@ -285,11 +263,7 @@ devices {
ignore_lvm_mirrors = 1
# Configuration option devices/disable_after_error_count.
# Number of I/O errors after which a device is skipped.
# During each LVM operation, errors received from each device are
# counted. If the counter of a device exceeds the limit set here,
# no further I/O is sent to that device for the remainder of the
# operation. Setting this to 0 disables the counters altogether.
# This setting is no longer used.
disable_after_error_count = 0
# Configuration option devices/require_restorefile_with_uuid.
@ -514,6 +488,149 @@ allocation {
# Default physical extent size in KiB to use for new VGs.
# This configuration option has an automatic default value.
# physical_extent_size = 4096
# Configuration option allocation/vdo_use_compression.
# Enables or disables compression when creating a VDO volume.
# Compression may be disabled if necessary to maximize performance
# or to speed processing of data that is unlikely to compress.
# This configuration option has an automatic default value.
# vdo_use_compression = 1
# Configuration option allocation/vdo_use_deduplication.
# Enables or disables deduplication when creating a VDO volume.
# Deduplication may be disabled in instances where data is not expected
# to have good deduplication rates but compression is still desired.
# This configuration option has an automatic default value.
# vdo_use_deduplication = 1
# Configuration option allocation/vdo_emulate_512_sectors.
# Specifies that the VDO volume is to emulate a 512 byte block device.
# This configuration option has an automatic default value.
# vdo_emulate_512_sectors = 0
# Configuration option allocation/vdo_block_map_cache_size_mb.
# Specifies the amount of memory in MiB allocated for caching block map
# pages for VDO volume. The value must be a multiple of 4096 and must be
# at least 128MiB and less than 16TiB. The cache must be at least 16MiB
# per logical thread. Note that there is a memory overhead of 15%.
# This configuration option has an automatic default value.
# vdo_block_map_cache_size_mb = 128
# Configuration option allocation/vdo_block_map_period.
# Tunes the quantity of block map updates that can accumulate
# before cache pages are flushed to disk. The value must be
# at least 1 and less than 16380.
# A lower value means shorter recovery time but lower performance.
# This configuration option has an automatic default value.
# vdo_block_map_period = 16380
# Configuration option allocation/vdo_check_point_frequency.
# The default check point frequency for VDO volume.
# This configuration option has an automatic default value.
# vdo_check_point_frequency = 0
# Configuration option allocation/vdo_use_sparse_index.
# Enables sparse indexing for VDO volume.
# This configuration option has an automatic default value.
# vdo_use_sparse_index = 0
# Configuration option allocation/vdo_index_memory_size_mb.
# Specifies the amount of index memory in MiB for VDO volume.
# The value must be at least 256MiB and at most 1TiB.
# This configuration option has an automatic default value.
# vdo_index_memory_size_mb = 256
# Configuration option allocation/vdo_use_read_cache.
# Enables or disables the read cache within the VDO volume.
# The cache should be enabled if write workloads are expected
# to have high levels of deduplication, or for read intensive
# workloads of highly compressible data.
# This configuration option has an automatic default value.
# vdo_use_read_cache = 0
# Configuration option allocation/vdo_read_cache_size_mb.
# Specifies the extra VDO volume read cache size in MiB.
# This space is in addition to a system-defined minimum.
# The value must be less than 16TiB, and 1.12 MiB of memory
# will be used per MiB of read cache specified, per bio thread.
# This configuration option has an automatic default value.
# vdo_read_cache_size_mb = 0
# Configuration option allocation/vdo_slab_size_mb.
# Specifies the size in MiB of the increment by which a VDO is grown.
# Using a smaller size constrains the total maximum physical size
# that can be accommodated. Must be a power of two between 128MiB and 32GiB.
# This configuration option has an automatic default value.
# vdo_slab_size_mb = 2048
# Configuration option allocation/vdo_ack_threads.
# Specifies the number of threads to use for acknowledging
# completion of requested VDO I/O operations.
# The value must be in the range [0..100].
# This configuration option has an automatic default value.
# vdo_ack_threads = 1
# Configuration option allocation/vdo_bio_threads.
# Specifies the number of threads to use for submitting I/O
# operations to the storage device of VDO volume.
# The value must be in the range [1..100].
# Each additional thread after the first will use an additional 18MiB of RAM,
# plus 1.12 MiB of RAM per megabyte of configured read cache size.
# This configuration option has an automatic default value.
# vdo_bio_threads = 1
# Configuration option allocation/vdo_bio_rotation.
# Specifies the number of I/O operations to enqueue for each bio-submission
# thread before directing work to the next. The value must be in range [1..1024].
# This configuration option has an automatic default value.
# vdo_bio_rotation = 64
# Configuration option allocation/vdo_cpu_threads.
# Specifies the number of threads to use for CPU-intensive work such as
# hashing or compression for the VDO volume. The value must be in the range [1..100].
# This configuration option has an automatic default value.
# vdo_cpu_threads = 2
# Configuration option allocation/vdo_hash_zone_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on the hash value computed from the block data.
# The value must be in the range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_hash_zone_threads = 1
# Configuration option allocation/vdo_logical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on logical block addresses.
# A logical thread count of 9 or more will require explicitly specifying
# a sufficiently large block map cache size, as well.
# The value must be in range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_logical_threads = 1
# Configuration option allocation/vdo_physical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on physical block addresses.
# Each additional thread after the first will use an additional 10MiB of RAM.
# The value must be in range [0..16].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_physical_threads = 1
# Configuration option allocation/vdo_write_policy.
# Specifies the write policy:
# auto - VDO will check the storage device and determine whether it supports flushes.
# If it does, VDO will run in async mode, otherwise it will run in sync mode.
# sync - Writes are acknowledged only after data is stably written.
# This policy is not supported if the underlying storage is not also synchronous.
# async - Writes are acknowledged after data has been cached for writing to stable storage.
# Data which has not been flushed is not guaranteed to persist in this mode.
# This configuration option has an automatic default value.
# vdo_write_policy = "auto"
}
# Configuration section log.
@ -718,29 +835,17 @@ global {
activation = 1
# Configuration option global/fallback_to_lvm1.
# Try running LVM1 tools if LVM cannot communicate with DM.
# This option only applies to 2.4 kernels and is provided to help
# switch between device-mapper kernels and LVM1 kernels. The LVM1
# tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
# They will stop working once the lvm2 on-disk metadata format is used.
# This setting is no longer used.
# This configuration option has an automatic default value.
# fallback_to_lvm1 = @DEFAULT_FALLBACK_TO_LVM1@
# fallback_to_lvm1 = 0
# Configuration option global/format.
# The default metadata format that commands should use.
# The -M 1|2 option overrides this setting.
#
# Accepted values:
# lvm1
# lvm2
#
# This setting is no longer used.
# This configuration option has an automatic default value.
# format = "lvm2"
# Configuration option global/format_libraries.
# Shared libraries that process different metadata formats.
# If support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# This setting is no longer used.
# This configuration option does not have a default value defined.
# Configuration option global/segment_libraries.
@ -756,34 +861,7 @@ global {
etc = "@CONFDIR@"
# Configuration option global/locking_type.
# Type of locking to use.
#
# Accepted values:
# 0
# Turns off locking. Warning: this risks metadata corruption if
# commands run concurrently.
# 1
# LVM uses local file-based locking, the standard mode.
# 2
# LVM uses the external shared library locking_library.
# 3
# LVM uses built-in clustered locking with clvmd.
# This is incompatible with lvmetad. If use_lvmetad is enabled,
# LVM prints a warning and disables lvmetad use.
# 4
# LVM uses read-only locking which forbids any operations that
# might change metadata.
# 5
# Offers dummy locking for tools that do not need any locks.
# You should not need to set this directly; the tools will select
# when to use it instead of the configured locking_type.
# Do not use lvmetad or the kernel device-mapper driver with this
# locking type. It is used by the --readonly option that offers
# read-only access to Volume Group metadata that cannot be locked
# safely because it belongs to an inaccessible domain and might be
# in use, for example a virtual machine image or a disk that is
# shared by a clustered machine.
#
# This setting is no longer used.
locking_type = 1
# Configuration option global/wait_for_locks.
@ -791,19 +869,11 @@ global {
wait_for_locks = 1
# Configuration option global/fallback_to_clustered_locking.
# Attempt to use built-in cluster locking if locking_type 2 fails.
# If using external locking (type 2) and initialisation fails, with
# this enabled, an attempt will be made to use the built-in clustered
# locking. Disable this if using a customised locking_library.
# This setting is no longer used.
fallback_to_clustered_locking = 1
# Configuration option global/fallback_to_local_locking.
# Use locking_type 1 (local) if locking_type 2 or 3 fail.
# If an attempt to initialise type 2 or type 3 locking failed, perhaps
# because cluster components such as clvmd are not running, with this
# enabled, an attempt will be made to use local file-based locking
# (type 1). If this succeeds, only commands against local VGs will
# proceed. VGs marked as clustered will be ignored.
# This setting is no longer used.
fallback_to_local_locking = 1
# Configuration option global/locking_dir.
@ -827,7 +897,7 @@ global {
# This configuration option does not have a default value defined.
# Configuration option global/locking_library.
# The external locking library to use for locking_type 2.
# This setting is no longer used.
# This configuration option has an automatic default value.
# locking_library = "liblvm2clusterlock.so"
@ -837,13 +907,6 @@ global {
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Configuration option global/detect_internal_vg_cache_corruption.
# Internal verification of VG structures.
# Check if CRC matches when a parsed VG is used multiple times. This
# is useful to catch unexpected changes to cached VG structures.
# Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# Configuration option global/metadata_read_only.
# No operations that change on-disk metadata are permitted.
# Additionally, read-only commands that encounter metadata in need of
@ -1086,6 +1149,17 @@ global {
# This configuration option has an automatic default value.
# cache_repair_options = [ "" ]
# Configuration option global/vdo_format_executable.
# The full path to the vdoformat command.
# LVM uses this command to initialise the data volume for a VDO-type logical volume.
# This configuration option has an automatic default value.
# vdo_format_executable = "@VDO_FORMAT_CMD@"
# Configuration option global/vdo_format_options.
# List of additional options passed to the standard vdoformat command.
# This configuration option has an automatic default value.
# vdo_format_options = [ "" ]
# Configuration option global/fsadm_executable.
# The full path to the fsadm command.
# LVM uses this command to help with lvresize -r operations.
@ -1459,6 +1533,33 @@ activation {
#
thin_pool_autoextend_percent = 20
# Configuration option activation/vdo_pool_autoextend_threshold.
# Auto-extend a VDO pool when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see vdo_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# vdo_pool_autoextend_threshold = 70
#
vdo_pool_autoextend_threshold = 100
# Configuration option activation/vdo_pool_autoextend_percent.
# Auto-extending a VDO pool adds this percent extra space.
# The amount of additional space added to a VDO pool is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# This configuration option has an automatic default value.
# vdo_pool_autoextend_percent = 20
# Configuration option activation/mlock_filter.
# Do not mlock these memory areas.
# While activating devices, I/O to devices being (re)configured is
@ -1627,20 +1728,7 @@ activation {
# stripesize = 64
# Configuration option metadata/dirs.
# Directories holding live copies of text format metadata.
# These directories must not be on logical volumes!
# It's possible to use LVM with a couple of directories here,
# preferably on different (non-LV) filesystems, and with no other
# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
# to on-disk metadata areas. The feature was originally added to
# simplify testing and is not supported under low memory situations -
# the machine could lock up. Never edit any files in these directories
# by hand unless you are absolutely sure you know what you are doing!
# Use the supplied toolset to make changes (e.g. vgcfgrestore).
#
# Example
# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#
# This setting is no longer used.
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# }
@ -2093,6 +2181,23 @@ dmeventd {
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/vdo_library.
# The library dmeventd uses when monitoring a VDO pool device.
# libdevmapper-event-lvm2vdo.so monitors the filling of a pool
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the pool is filled.
# This configuration option has an automatic default value.
# vdo_library = "libdevmapper-event-lvm2vdo.so"
# Configuration option dmeventd/vdo_command.
# The plugin runs the command at each 5% increment once the VDO pool volume
# fills above 50%.
# A command that starts with the 'lvm ' prefix is an internal lvm command.
# You can write your own handler to customise the behaviour in more detail.
# A user handler is specified with a full path starting with '/'.
# This configuration option has an automatic default value.
# vdo_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.

conf/vdo-small.profile

@ -0,0 +1,25 @@
# Demo configuration for 'VDO' using less memory.
#
allocation {
vdo_use_compression = 1
vdo_use_deduplication = 1
vdo_emulate_512_sectors = 0
vdo_block_map_cache_size_mb = 128
vdo_block_map_period = 16380
vdo_check_point_frequency = 0
vdo_use_sparse_index = 0
vdo_index_memory_size_mb = 256
vdo_use_read_cache = 0
vdo_read_cache_size_mb = 0
vdo_slab_size_mb = 2048
vdo_ack_threads = 1
vdo_bio_threads = 1
vdo_bio_rotation = 64
vdo_cpu_threads = 2
vdo_hash_zone_threads = 1
vdo_logical_threads = 1
vdo_physical_threads = 1
vdo_write_policy = "auto"
}

configure

File diff suppressed because it is too large


@ -39,15 +39,12 @@ case "$host_os" in
LDDEPS="$LDDEPS .export.sym"
LIB_SUFFIX=so
DEVMAPPER=yes
AIO=yes
BUILD_LVMETAD=no
BUILD_LVMPOLLD=no
LOCKDSANLOCK=no
LOCKDDLM=no
ODIRECT=yes
DM_IOCTLS=yes
SELINUX=yes
CLUSTER=internal
FSADM=yes
BLKDEACTIVATE=yes
;;
@ -59,11 +56,9 @@ case "$host_os" in
CLDNOWHOLEARCHIVE=
LIB_SUFFIX=dylib
DEVMAPPER=yes
AIO=no
ODIRECT=no
DM_IOCTLS=no
SELINUX=no
CLUSTER=none
FSADM=no
BLKDEACTIVATE=no
;;
@ -105,7 +100,7 @@ AC_HEADER_SYS_WAIT
AC_HEADER_TIME
AC_CHECK_HEADERS([assert.h ctype.h dirent.h errno.h fcntl.h float.h \
getopt.h inttypes.h langinfo.h libgen.h limits.h locale.h paths.h \
getopt.h inttypes.h langinfo.h libaio.h libgen.h limits.h locale.h paths.h \
signal.h stdarg.h stddef.h stdio.h stdlib.h string.h sys/file.h \
sys/ioctl.h syslog.h sys/mman.h sys/param.h sys/resource.h sys/stat.h \
sys/time.h sys/types.h sys/utsname.h sys/wait.h time.h \
@ -144,6 +139,7 @@ AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_UINT64_T
AX_GCC_BUILTIN([__builtin_clz])
AX_GCC_BUILTIN([__builtin_clzll])
################################################################################
dnl -- Check for functions
@ -174,6 +170,15 @@ AC_ARG_ENABLE(dependency-tracking,
USE_TRACKING=$enableval, USE_TRACKING=yes)
AC_MSG_RESULT($USE_TRACKING)
################################################################################
dnl -- Disable silent rules
AC_MSG_CHECKING(whether to build silently)
AC_ARG_ENABLE(silent-rules,
AC_HELP_STRING([--disable-silent-rules], [disable silent building]),
SILENT_RULES=$enableval, SILENT_RULES=yes)
AC_MSG_RESULT($SILENT_RULES)
################################################################################
dnl -- Enables statically-linked tools
AC_MSG_CHECKING(whether to use static linking)
@ -284,74 +289,6 @@ esac
AC_MSG_RESULT($MANGLING)
AC_DEFINE_UNQUOTED([DEFAULT_DM_NAME_MANGLING], $mangling, [Define default name mangling behaviour])
################################################################################
dnl -- LVM1 tool fallback option
AC_MSG_CHECKING(whether to enable lvm1 fallback)
AC_ARG_ENABLE(lvm1_fallback,
AC_HELP_STRING([--enable-lvm1_fallback],
[use this to fall back and use LVM1 binaries if
device-mapper is missing from the kernel]),
LVM1_FALLBACK=$enableval, LVM1_FALLBACK=no)
AC_MSG_RESULT($LVM1_FALLBACK)
if test "$LVM1_FALLBACK" = yes; then
DEFAULT_FALLBACK_TO_LVM1=1
AC_DEFINE([LVM1_FALLBACK], 1, [Define to 1 if 'lvm' should fall back to using LVM1 binaries if device-mapper is missing from the kernel])
else
DEFAULT_FALLBACK_TO_LVM1=0
fi
AC_DEFINE_UNQUOTED(DEFAULT_FALLBACK_TO_LVM1, [$DEFAULT_FALLBACK_TO_LVM1],
[Fall back to LVM1 by default if device-mapper is missing from the kernel.])
################################################################################
dnl -- format1 inclusion type
AC_MSG_CHECKING(whether to include support for lvm1 metadata)
AC_ARG_WITH(lvm1,
AC_HELP_STRING([--with-lvm1=TYPE],
[LVM1 metadata support: internal/shared/none [internal]]),
LVM1=$withval, LVM1=internal)
AC_MSG_RESULT($LVM1)
case "$LVM1" in
none|shared) ;;
internal) AC_DEFINE([LVM1_INTERNAL], 1,
[Define to 1 to include built-in support for LVM1 metadata.]) ;;
*) AC_MSG_ERROR([--with-lvm1 parameter invalid]) ;;
esac
################################################################################
dnl -- format_pool inclusion type
AC_MSG_CHECKING(whether to include support for GFS pool metadata)
AC_ARG_WITH(pool,
AC_HELP_STRING([--with-pool=TYPE],
[GFS pool read-only support: internal/shared/none [internal]]),
POOL=$withval, POOL=internal)
AC_MSG_RESULT($POOL)
case "$POOL" in
none|shared) ;;
internal) AC_DEFINE([POOL_INTERNAL], 1,
[Define to 1 to include built-in support for GFS pool metadata.]) ;;
*) AC_MSG_ERROR([--with-pool parameter invalid])
esac
################################################################################
dnl -- cluster_locking inclusion type
AC_MSG_CHECKING(whether to include support for cluster locking)
AC_ARG_WITH(cluster,
AC_HELP_STRING([--with-cluster=TYPE],
[cluster LVM locking support: internal/shared/none [internal]]),
CLUSTER=$withval)
AC_MSG_RESULT($CLUSTER)
case "$CLUSTER" in
none|shared) ;;
internal) AC_DEFINE([CLUSTER_LOCKING_INTERNAL], 1,
[Define to 1 to include built-in support for clustered LVM locking.]) ;;
*) AC_MSG_ERROR([--with-cluster parameter invalid]) ;;
esac
################################################################################
dnl -- snapshots inclusion type
AC_MSG_CHECKING(whether to include snapshots)
@ -386,13 +323,6 @@ esac
################################################################################
dnl -- raid inclusion type
AC_MSG_CHECKING(whether to include raid)
AC_ARG_WITH(raid,
AC_HELP_STRING([--with-raid=TYPE],
[raid support: internal/shared/none [internal]]),
RAID=$withval, RAID=internal)
AC_MSG_RESULT($RAID)
AC_ARG_WITH(default-mirror-segtype,
AC_HELP_STRING([--with-default-mirror-segtype=TYPE],
[default mirror segtype: raid1/mirror [raid1]]),
@ -401,14 +331,9 @@ AC_ARG_WITH(default-raid10-segtype,
AC_HELP_STRING([--with-default-raid10-segtype=TYPE],
[default mirror segtype: raid10/mirror [raid10]]),
DEFAULT_RAID10_SEGTYPE=$withval, DEFAULT_RAID10_SEGTYPE="raid10")
case "$RAID" in
none) test "$DEFAULT_MIRROR_SEGTYPE" = "raid1" && DEFAULT_MIRROR_SEGTYPE="mirror"
test "$DEFAULT_RAID10_SEGTYPE" = "raid10" && DEFAULT_RAID10_SEGTYPE="mirror" ;;
shared) ;;
internal) AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.]) ;;
*) AC_MSG_ERROR([--with-raid parameter invalid]) ;;
esac
AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.])
AC_DEFINE_UNQUOTED([DEFAULT_MIRROR_SEGTYPE], ["$DEFAULT_MIRROR_SEGTYPE"],
[Default segtype used for mirror volumes.])
@ -666,6 +591,53 @@ AC_DEFINE_UNQUOTED([CACHE_REPAIR_CMD], ["$CACHE_REPAIR_CMD"],
AC_DEFINE_UNQUOTED([CACHE_RESTORE_CMD], ["$CACHE_RESTORE_CMD"],
[The path to 'cache_restore', if available.])
################################################################################
dnl -- vdo inclusion type
AC_MSG_CHECKING(whether to include vdo)
AC_ARG_WITH(vdo,
AC_HELP_STRING([--with-vdo=TYPE],
[vdo support: internal/none [internal]]),
VDO=$withval, VDO="none")
AC_MSG_RESULT($VDO)
AC_ARG_WITH(vdo-format,
AC_HELP_STRING([--with-vdo-format=PATH],
[vdoformat tool: [autodetect]]),
VDO_FORMAT_CMD=$withval, VDO_FORMAT_CMD="autodetect")
case "$VDO" in
none) ;;
internal)
AC_DEFINE([VDO_INTERNAL], 1, [Define to 1 to include built-in support for vdo.])
if test "$VDO_FORMAT_CMD" = "autodetect"; then
AC_PATH_TOOL(VDO_FORMAT_CMD, vdoformat, [], [$PATH])
if test -z "$VDO_FORMAT_CMD"; then
AC_MSG_WARN([vdoformat not found in path $PATH])
VDO_FORMAT_CMD=/usr/bin/vdoformat
VDO_CONFIGURE_WARN=y
fi
fi
;;
*) AC_MSG_ERROR([--with-vdo parameter invalid]) ;;
esac
AC_DEFINE_UNQUOTED([VDO_FORMAT_CMD], ["$VDO_FORMAT_CMD"],
[The path to 'vdoformat', if available.])
#
# Do we need to use the API??
# Do we want to link lvm2 with a big library for vdoformating ?
#
#AC_ARG_WITH(vdo-include,
# AC_HELP_STRING([--with-vdo-include=PATH],
# [vdo support: Path to utils headers: [/usr/include/vdo/utils]]),
# VDO_INCLUDE=$withval, VDO_INCLUDE="/usr/include/vdo/utils")
#AC_MSG_RESULT($VDO_INCLUDE)
#
#AC_ARG_WITH(vdo-lib,
# AC_HELP_STRING([--with-vdo-lib=PATH],
# [vdo support: Path to utils lib: [/usr/lib]]),
# VDO_LIB=$withval, VDO_LIB="/usr/lib")
#AC_MSG_RESULT($VDO_LIB)
################################################################################
dnl -- Disable readline
@ -737,241 +709,6 @@ AC_ARG_WITH(default-run-dir,
AC_DEFINE_UNQUOTED(DEFAULT_RUN_DIR, ["$DEFAULT_RUN_DIR"],
[Default LVM run directory.])
################################################################################
dnl -- Build cluster LVM daemon
AC_MSG_CHECKING(whether to build cluster LVM daemon)
AC_ARG_WITH(clvmd,
[ --with-clvmd=TYPE build cluster LVM Daemon
The following cluster manager combinations are valid:
* cman (RHEL5 or equivalent)
* cman,corosync,openais (or selection of them)
* singlenode (localhost only)
* all (autodetect)
* none (disable build)
[[none]]],
CLVMD=$withval, CLVMD=none)
test "$CLVMD" = yes && CLVMD=all
AC_MSG_RESULT($CLVMD)
dnl -- If clvmd enabled without cluster locking, automagically include it
test "$CLVMD" != none -a "$CLUSTER" = none && CLUSTER=internal
dnl -- init pkgconfig if required
test "$CLVMD" != none && pkg_config_init
dnl -- Express clvmd init script Required-Start / Required-Stop
CLVMD_CMANAGERS=""
dnl -- On RHEL4/RHEL5, qdiskd is started from a separate init script.
dnl -- Enable if we are build for cman.
CLVMD_NEEDS_QDISKD=no
dnl -- define build types
if [[ `expr x"$CLVMD" : '.*gulm.*'` != 0 ]]; then
AC_MSG_ERROR([Since version 2.02.87 GULM locking is no longer supported.]);
fi
if [[ `expr x"$CLVMD" : '.*cman.*'` != 0 ]]; then
BUILDCMAN=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS cman"
CLVMD_NEEDS_QDISKD=yes
fi
if [[ `expr x"$CLVMD" : '.*corosync.*'` != 0 ]]; then
BUILDCOROSYNC=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS corosync"
fi
if [[ `expr x"$CLVMD" : '.*openais.*'` != 0 ]]; then
BUILDOPENAIS=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS openais"
fi
test "$CLVMD_NEEDS_QDISKD" != no && CLVMD_CMANAGERS="$CLVMD_CMANAGERS qdiskd"
dnl -- define a soft bailout if we are autodetecting
soft_bailout() {
NOTFOUND=1
}
hard_bailout() {
AC_MSG_ERROR([bailing out])
}
dnl -- if clvmd=all then set soft_bailout (we do not want to error)
dnl -- and set all builds to yes. We need to do this here
dnl -- to skip the openais|corosync sanity check above.
if test "$CLVMD" = all; then
bailout=soft_bailout
BUILDCMAN=yes
BUILDCOROSYNC=yes
BUILDOPENAIS=yes
else
bailout=hard_bailout
fi
dnl -- helper macro to check libs without adding them to LIBS
check_lib_no_libs() {
lib_no_libs_arg1=$1
shift
lib_no_libs_arg2=$1
shift
lib_no_libs_args=$@
AC_CHECK_LIB([$lib_no_libs_arg1],
[$lib_no_libs_arg2],,
[$bailout],
[$lib_no_libs_args])
LIBS=$ac_check_lib_save_LIBS
}
dnl -- Look for cman libraries if required.
if test "$BUILDCMAN" = yes; then
PKG_CHECK_MODULES(CMAN, libcman, [HAVE_CMAN=yes],
[NOTFOUND=0
AC_CHECK_HEADERS(libcman.h,,$bailout)
check_lib_no_libs cman cman_init
if test $NOTFOUND = 0; then
AC_MSG_RESULT([no pkg for libcman, using -lcman])
CMAN_LIBS="-lcman"
HAVE_CMAN=yes
fi])
CHECKCONFDB=yes
CHECKDLM=yes
fi
dnl -- Look for corosync that is required also for openais build
dnl -- only enough recent version of corosync ship pkg-config files.
dnl -- We can safely rely on that to detect the correct bits.
if test "$BUILDCOROSYNC" = yes -o "$BUILDOPENAIS" = yes; then
PKG_CHECK_MODULES(COROSYNC, corosync, [HAVE_COROSYNC=yes], $bailout)
CHECKCONFDB=yes
CHECKCMAP=yes
fi
dnl -- Look for corosync libraries if required.
if test "$BUILDCOROSYNC" = yes; then
PKG_CHECK_MODULES(QUORUM, libquorum, [HAVE_QUORUM=yes], $bailout)
CHECKCPG=yes
CHECKDLM=yes
fi
dnl -- Look for openais libraries if required.
if test "$BUILDOPENAIS" = yes; then
PKG_CHECK_MODULES(SALCK, libSaLck, [HAVE_SALCK=yes], $bailout)
CHECKCPG=yes
fi
dnl -- Below are checks for libraries common to more than one build.
dnl -- Check confdb library.
dnl -- mandatory for corosync < 2.0 build.
dnl -- optional for openais/cman build.
if test "$CHECKCONFDB" = yes; then
PKG_CHECK_MODULES(CONFDB, libconfdb,
[HAVE_CONFDB=yes], [HAVE_CONFDB=no])
AC_CHECK_HEADERS([corosync/confdb.h],
[HAVE_CONFDB_H=yes], [HAVE_CONFDB_H=no])
if test "$HAVE_CONFDB" != yes -a "$HAVE_CONFDB_H" = yes; then
check_lib_no_libs confdb confdb_initialize
AC_MSG_RESULT([no pkg for confdb, using -lconfdb])
CONFDB_LIBS="-lconfdb"
HAVE_CONFDB=yes
fi
fi
dnl -- Check cmap library
dnl -- mandatory for corosync >= 2.0 build.
if test "$CHECKCMAP" = yes; then
PKG_CHECK_MODULES(CMAP, libcmap,
[HAVE_CMAP=yes], [HAVE_CMAP=no])
AC_CHECK_HEADERS([corosync/cmap.h],
[HAVE_CMAP_H=yes], [HAVE_CMAP_H=no])
if test "$HAVE_CMAP" != yes -a "$HAVE_CMAP_H" = yes; then
check_lib_no_libs cmap cmap_initialize
AC_MSG_RESULT([no pkg for cmap, using -lcmap])
CMAP_LIBS="-lcmap"
HAVE_CMAP=yes
fi
fi
if test "$BUILDCOROSYNC" = yes -a \
"$HAVE_CMAP" != yes -a "$HAVE_CONFDB" != yes -a "$CLVMD" != all; then
AC_MSG_ERROR([bailing out... cmap (corosync >= 2.0) or confdb (corosync < 2.0) library is required])
fi
dnl -- Check cpg library.
if test "$CHECKCPG" = yes; then
PKG_CHECK_MODULES(CPG, libcpg, [HAVE_CPG=yes], [$bailout])
fi
dnl -- Check dlm library.
if test "$CHECKDLM" = yes; then
PKG_CHECK_MODULES(DLM, libdlm, [HAVE_DLM=yes],
[NOTFOUND=0
AC_CHECK_HEADERS(libdlm.h,,[$bailout])
check_lib_no_libs dlm dlm_lock -lpthread
if test $NOTFOUND = 0; then
AC_MSG_RESULT([no pkg for libdlm, using -ldlm])
DLM_LIBS="-ldlm -lpthread"
HAVE_DLM=yes
fi])
fi
dnl -- If we are autodetecting, we need to re-create
dnl -- the depedencies checks and set a proper CLVMD,
dnl -- together with init script Required-Start/Stop entries.
if test "$CLVMD" = all; then
CLVMD=none
CLVMD_CMANAGERS=""
CLVMD_NEEDS_QDISKD=no
if test "$HAVE_CMAN" = yes -a \
"$HAVE_DLM" = yes; then
AC_MSG_RESULT([Enabling clvmd cman cluster manager])
CLVMD="$CLVMD,cman"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS cman"
CLVMD_NEEDS_QDISKD=yes
fi
if test "$HAVE_COROSYNC" = yes -a \
"$HAVE_QUORUM" = yes -a \
"$HAVE_CPG" = yes -a \
"$HAVE_DLM" = yes; then
if test "$HAVE_CONFDB" = yes -o "$HAVE_CMAP" = yes; then
AC_MSG_RESULT([Enabling clvmd corosync cluster manager])
CLVMD="$CLVMD,corosync"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS corosync"
fi
fi
if test "$HAVE_COROSYNC" = yes -a \
"$HAVE_CPG" = yes -a \
"$HAVE_SALCK" = yes; then
AC_MSG_RESULT([Enabling clvmd openais cluster manager])
CLVMD="$CLVMD,openais"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS openais"
fi
test "$CLVMD_NEEDS_QDISKD" != no && CLVMD_CMANAGERS="$CLVMD_CMANAGERS qdiskd"
test "$CLVMD" = none && AC_MSG_RESULT([Disabling clvmd build. No cluster manager detected.])
fi
dnl -- Fixup CLVMD_CMANAGERS with new corosync
dnl -- clvmd built with corosync >= 2.0 needs dlm (either init or systemd service)
dnl -- to be started.
if [[ `expr x"$CLVMD" : '.*corosync.*'` != 0 ]]; then
test "$HAVE_CMAP" = yes && CLVMD_CMANAGERS="$CLVMD_CMANAGERS dlm"
fi
################################################################################
dnl -- clvmd pidfile
if test "$CLVMD" != none; then
AC_ARG_WITH(clvmd-pidfile,
AC_HELP_STRING([--with-clvmd-pidfile=PATH],
[clvmd pidfile [PID_DIR/clvmd.pid]]),
CLVMD_PIDFILE=$withval,
CLVMD_PIDFILE="$DEFAULT_PID_DIR/clvmd.pid")
AC_DEFINE_UNQUOTED(CLVMD_PIDFILE, ["$CLVMD_PIDFILE"],
[Path to clvmd pidfile.])
fi
################################################################################
dnl -- Build cluster mirror log daemon
AC_MSG_CHECKING(whether to build cluster mirror log daemon)
@ -1000,11 +737,6 @@ dnl -- Look for corosync libraries if required.
if [[ "$BUILD_CMIRRORD" = yes ]]; then
pkg_config_init
AC_DEFINE([CMIRROR_HAS_CHECKPOINT], 1, [Define to 1 to include libSaCkpt.])
PKG_CHECK_MODULES(SACKPT, libSaCkpt, [HAVE_SACKPT=yes],
[AC_MSG_RESULT([no libSaCkpt, compiling without it])
AC_DEFINE([CMIRROR_HAS_CHECKPOINT], 0, [Define to 0 to exclude libSaCkpt.])])
if test "$HAVE_CPG" != yes; then
PKG_CHECK_MODULES(CPG, libcpg)
fi
@ -1069,20 +801,6 @@ if test "$PROFILING" = yes; then
fi
fi
################################################################################
dnl -- Enable testing
AC_MSG_CHECKING(whether to enable unit testing)
AC_ARG_ENABLE(testing,
AC_HELP_STRING([--enable-testing],
[enable testing targets in the makefile]),
TESTING=$enableval, TESTING=no)
AC_MSG_RESULT($TESTING)
if test "$TESTING" = yes; then
pkg_config_init
PKG_CHECK_MODULES(CUNIT, cunit >= 2.0)
fi
################################################################################
dnl -- Set LVM2 testsuite data
TESTSUITE_DATA='${datarootdir}/lvm2-testsuite'
@ -1124,34 +842,6 @@ if test "$DEVMAPPER" = yes; then
AC_DEFINE([DEVMAPPER_SUPPORT], 1, [Define to 1 to enable LVM2 device-mapper interaction.])
fi
################################################################################
dnl -- Disable aio
AC_MSG_CHECKING(whether to use asynchronous I/O)
AC_ARG_ENABLE(aio,
AC_HELP_STRING([--disable-aio],
[disable asynchronous I/O]),
AIO=$enableval)
AC_MSG_RESULT($AIO)
if test "$AIO" = yes; then
AC_CHECK_LIB(aio, io_setup,
[AC_DEFINE([AIO_SUPPORT], 1, [Define to 1 if aio is available.])
AIO_LIBS="-laio"
AIO_SUPPORT=yes],
[AIO_LIBS=
AIO_SUPPORT=no ])
fi
################################################################################
dnl -- Build lvmetad
AC_MSG_CHECKING(whether to build LVMetaD)
AC_ARG_ENABLE(lvmetad,
AC_HELP_STRING([--enable-lvmetad],
[enable the LVM Metadata Daemon]),
LVMETAD=$enableval)
test -n "$LVMETAD" && BUILD_LVMETAD=$LVMETAD
AC_MSG_RESULT($BUILD_LVMETAD)
################################################################################
dnl -- Build lvmpolld
AC_MSG_CHECKING(whether to build lvmpolld)
@ -1207,9 +897,7 @@ AC_MSG_RESULT($BUILD_LVMLOCKD)
if test "$BUILD_LVMLOCKD" = yes; then
AS_IF([test "$LVMPOLLD" = no], [AC_MSG_ERROR([cannot build lvmlockd with --disable-lvmpolld.])])
AS_IF([test "$LVMETAD" = no], [AC_MSG_ERROR([cannot build lvmlockd with --disable-lvmetad.])])
AS_IF([test "$BUILD_LVMPOLLD" = no], [BUILD_LVMPOLLD=yes; AC_MSG_WARN([Enabling lvmpolld - required by lvmlockd.])])
AS_IF([test "$BUILD_LVMETAD" = no], [BUILD_LVMETAD=yes; AC_MSG_WARN([Enabling lvmetad - required by lvmlockd.])])
AC_MSG_CHECKING([defaults for use_lvmlockd])
AC_ARG_ENABLE(use_lvmlockd,
AC_HELP_STRING([--disable-use-lvmlockd],
@ -1234,33 +922,6 @@ fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMLOCKD, [$DEFAULT_USE_LVMLOCKD],
[Use lvmlockd by default.])
################################################################################
dnl -- Check lvmetad
if test "$BUILD_LVMETAD" = yes; then
AC_MSG_CHECKING([defaults for use_lvmetad])
AC_ARG_ENABLE(use_lvmetad,
AC_HELP_STRING([--disable-use-lvmetad],
[disable usage of LVM Metadata Daemon]),
[case ${enableval} in
yes) DEFAULT_USE_LVMETAD=1 ;;
*) DEFAULT_USE_LVMETAD=0 ;;
esac], DEFAULT_USE_LVMETAD=1)
AC_MSG_RESULT($DEFAULT_USE_LVMETAD)
AC_DEFINE([LVMETAD_SUPPORT], 1, [Define to 1 to include code that uses lvmetad.])
AC_ARG_WITH(lvmetad-pidfile,
AC_HELP_STRING([--with-lvmetad-pidfile=PATH],
[lvmetad pidfile [PID_DIR/lvmetad.pid]]),
LVMETAD_PIDFILE=$withval,
LVMETAD_PIDFILE="$DEFAULT_PID_DIR/lvmetad.pid")
AC_DEFINE_UNQUOTED(LVMETAD_PIDFILE, ["$LVMETAD_PIDFILE"],
[Path to lvmetad pidfile.])
else
DEFAULT_USE_LVMETAD=0
fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMETAD, [$DEFAULT_USE_LVMETAD],
[Use lvmetad by default.])
################################################################################
dnl -- Check lvmpolld
if test "$BUILD_LVMPOLLD" = yes; then
@ -1464,20 +1125,6 @@ if test "$ODIRECT" = yes; then
AC_DEFINE([O_DIRECT_SUPPORT], 1, [Define to 1 to enable O_DIRECT support.])
fi
################################################################################
dnl -- Enable liblvm2app.so
AC_MSG_CHECKING(whether to build liblvm2app.so application library)
AC_ARG_ENABLE(applib,
AC_HELP_STRING([--enable-applib], [build application library]),
APPLIB=$enableval, APPLIB=no)
AC_MSG_RESULT($APPLIB)
AC_SUBST([LVM2APP_LIB])
test "$APPLIB" = yes \
&& LVM2APP_LIB=-llvm2app \
|| LVM2APP_LIB=
AS_IF([test "$APPLIB"],
[AC_MSG_WARN([liblvm2app is deprecated. Use D-Bus API])])
################################################################################
dnl -- Enable cmdlib
AC_MSG_CHECKING(whether to compile liblvm2cmd.so)
@ -1501,44 +1148,9 @@ AS_IF([test "$NOTIFYDBUS_SUPPORT" = yes && test "BUILD_LVMDBUSD" = yes],
[AC_MSG_WARN([Building D-Bus support without D-Bus notifications.])])
################################################################################
dnl -- Enable Python liblvm2app bindings
AC_MSG_CHECKING(whether to build Python wrapper for liblvm2app.so)
AC_ARG_ENABLE(python_bindings,
AC_HELP_STRING([--enable-python_bindings], [build default Python applib bindings]),
PYTHON_BINDINGS=$enableval, PYTHON_BINDINGS=no)
AC_MSG_RESULT($PYTHON_BINDINGS)
dnl -- Enable Python dbus library
AC_MSG_CHECKING(whether to build Python2 wrapper for liblvm2app.so)
AC_ARG_ENABLE(python2_bindings,
AC_HELP_STRING([--enable-python2_bindings], [build Python2 applib bindings]),
PYTHON2_BINDINGS=$enableval, PYTHON2_BINDINGS=no)
AC_MSG_RESULT($PYTHON2_BINDINGS)
AC_MSG_CHECKING(whether to build Python3 wrapper for liblvm2app.so)
AC_ARG_ENABLE(python3_bindings,
AC_HELP_STRING([--enable-python3_bindings], [build Python3 applib bindings]),
PYTHON3_BINDINGS=$enableval, PYTHON3_BINDINGS=no)
AC_MSG_RESULT($PYTHON3_BINDINGS)
if test "$PYTHON_BINDINGS" = yes; then
AC_MSG_ERROR([--enable-python-bindings is replaced by --enable-python2-bindings and --enable-python3-bindings])
fi
if test "$PYTHON2_BINDINGS" = yes; then
AM_PATH_PYTHON([2])
AC_PATH_TOOL(PYTHON2, python2)
test -z "$PYTHON2" && AC_MSG_ERROR([python2 is required for --enable-python2_bindings but cannot be found])
AC_PATH_TOOL(PYTHON2_CONFIG, python2-config)
test -z "$PYTHON2_CONFIG" && AC_PATH_TOOL(PYTHON2_CONFIG, python-config)
test -z "$PYTHON2_CONFIG" && AC_MSG_ERROR([python headers are required for --enable-python2_bindings but cannot be found])
PYTHON2_INCDIRS=`"$PYTHON2_CONFIG" --includes`
PYTHON2_LIBDIRS=`"$PYTHON2_CONFIG" --libs`
PYTHON2DIR=$pythondir
PYTHON_BINDINGS=yes
fi
if test "$PYTHON3_BINDINGS" = yes -o "$BUILD_LVMDBUSD" = yes; then
if test "$BUILD_LVMDBUSD" = yes; then
unset PYTHON PYTHON_CONFIG
unset am_cv_pathless_PYTHON ac_cv_path_PYTHON am_cv_python_platform
unset am_cv_python_pythondir am_cv_python_version am_cv_python_pyexecdir
@ -1552,19 +1164,12 @@ if test "$PYTHON3_BINDINGS" = yes -o "$BUILD_LVMDBUSD" = yes; then
PYTHON3_LIBDIRS=`"$PYTHON3_CONFIG" --libs`
PYTHON3DIR=$pythondir
test "$PYTHON3_BINDINGS" = yes && PYTHON_BINDINGS=yes
fi
if test "$BUILD_LVMDBUSD" = yes; then
# To get this macro, install autoconf-archive package then run autoreconf
AC_PYTHON_MODULE([pyudev], [Required], python3)
AC_PYTHON_MODULE([dbus], [Required], python3)
fi
if test "$PYTHON_BINDINGS" = yes -o "$PYTHON2_BINDINGS" = yes -o "$PYTHON3_BINDINGS" = yes; then
AC_MSG_WARN([Python bindings are deprecated. Use D-Bus API])
test "$APPLIB" != yes && AC_MSG_ERROR([Python_bindings require --enable-applib])
fi
################################################################################
dnl -- Enable pkg-config
AC_ARG_ENABLE(pkgconfig,
@ -1636,9 +1241,7 @@ AC_CHECK_LIB(dl, dlopen,
################################################################################
dnl -- Check for shared/static conflicts
if [[ \( "$LVM1" = shared -o "$POOL" = shared -o "$CLUSTER" = shared \
-o "$SNAPSHOTS" = shared -o "$MIRRORS" = shared \
-o "$RAID" = shared -o "$CACHE" = shared \
if [[ \( "$LVM1" = shared -o "$POOL" = shared \
\) -a "$STATIC_LINK" = yes ]]; then
AC_MSG_ERROR([Features cannot be 'shared' when building statically])
fi
@ -1858,18 +1461,6 @@ if test "$BUILD_LVMPOLLD" = yes; then
AC_FUNC_STRERROR_R
fi
if test "$CLVMD" != none; then
AC_CHECK_HEADERS(mntent.h netdb.h netinet/in.h pthread.h search.h sys/mount.h sys/socket.h sys/uio.h sys/un.h utmpx.h,,AC_MSG_ERROR(bailing out))
AC_CHECK_FUNCS(dup2 getmntent memmove select socket,,hard_bailout)
AC_FUNC_GETMNTENT
AC_FUNC_SELECT_ARGTYPES
fi
if test "$CLUSTER" != none; then
AC_CHECK_HEADERS(sys/socket.h sys/un.h,,hard_bailout)
AC_CHECK_FUNCS(socket,,hard_bailout)
fi
if test "$BUILD_DMEVENTD" = yes; then
AC_CHECK_HEADERS(arpa/inet.h,,hard_bailout)
fi
@ -1903,9 +1494,10 @@ SBINDIR="$(eval echo $(eval echo $sbindir))"
LVM_PATH="$SBINDIR/lvm"
AC_DEFINE_UNQUOTED(LVM_PATH, ["$LVM_PATH"], [Path to lvm binary.])
LVMCONFIG_PATH="$SBINDIR/lvmconfig"
AC_DEFINE_UNQUOTED(LVMCONFIG_PATH, ["$LVMCONFIG_PATH"], [Path to lvmconfig binary.])
USRSBINDIR="$(eval echo $(eval echo $usrsbindir))"
CLVMD_PATH="$USRSBINDIR/clvmd"
AC_DEFINE_UNQUOTED(CLVMD_PATH, ["$CLVMD_PATH"], [Path to clvmd binary.])
FSADM_PATH="$SBINDIR/fsadm"
AC_DEFINE_UNQUOTED(FSADM_PATH, ["$FSADM_PATH"], [Path to fsadm binary.])
@ -2025,13 +1617,11 @@ LVM_LIBAPI=`echo "$VER" | $AWK -F '[[()]]' '{print $2}'`
AC_DEFINE_UNQUOTED(LVM_CONFIGURE_LINE, "$CONFIGURE_LINE", [configure command line used])
################################################################################
AC_SUBST(APPLIB)
AC_SUBST(AWK)
AC_SUBST(BLKID_PC)
AC_SUBST(BUILD_CMIRRORD)
AC_SUBST(BUILD_DMEVENTD)
AC_SUBST(BUILD_LVMDBUSD)
AC_SUBST(BUILD_LVMETAD)
AC_SUBST(BUILD_LVMPOLLD)
AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
@ -2044,14 +1634,6 @@ AC_SUBST(CHMOD)
AC_SUBST(CLDFLAGS)
AC_SUBST(CLDNOWHOLEARCHIVE)
AC_SUBST(CLDWHOLEARCHIVE)
AC_SUBST(CLUSTER)
AC_SUBST(CLVMD)
AC_SUBST(CLVMD_CMANAGERS)
AC_SUBST(CLVMD_PATH)
AC_SUBST(CMAN_CFLAGS)
AC_SUBST(CMAN_LIBS)
AC_SUBST(CMAP_CFLAGS)
AC_SUBST(CMAP_LIBS)
AC_SUBST(CMDLIB)
AC_SUBST(CONFDB_CFLAGS)
AC_SUBST(CONFDB_LIBS)
@ -2067,7 +1649,6 @@ AC_SUBST(DEFAULT_CACHE_SUBDIR)
AC_SUBST(DEFAULT_DATA_ALIGNMENT)
AC_SUBST(DEFAULT_DM_RUN_DIR)
AC_SUBST(DEFAULT_LOCK_DIR)
AC_SUBST(DEFAULT_FALLBACK_TO_LVM1)
AC_SUBST(DEFAULT_MIRROR_SEGTYPE)
AC_SUBST(DEFAULT_PID_DIR)
AC_SUBST(DEFAULT_PROFILE_SUBDIR)
@ -2077,15 +1658,12 @@ AC_SUBST(DEFAULT_SPARSE_SEGTYPE)
AC_SUBST(DEFAULT_SYS_DIR)
AC_SUBST(DEFAULT_SYS_LOCK_DIR)
AC_SUBST(DEFAULT_USE_BLKID_WIPING)
AC_SUBST(DEFAULT_USE_LVMETAD)
AC_SUBST(DEFAULT_USE_LVMPOLLD)
AC_SUBST(DEFAULT_USE_LVMLOCKD)
AC_SUBST(DEVMAPPER)
AC_SUBST(AIO)
AC_SUBST(DLM_CFLAGS)
AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(AIO_LIBS)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
@ -2100,8 +1678,6 @@ AC_SUBST(JOBS)
AC_SUBST(LDDEPS)
AC_SUBST(LIBS)
AC_SUBST(LIB_SUFFIX)
AC_SUBST(LVM1)
AC_SUBST(LVM1_FALLBACK)
AC_SUBST(LVM_VERSION)
AC_SUBST(LVM_LIBAPI)
AC_SUBST(LVM_MAJOR)
@ -2118,7 +1694,6 @@ AC_SUBST(OCF)
AC_SUBST(OCFDIR)
AC_SUBST(ODIRECT)
AC_SUBST(PKGCONFIG)
AC_SUBST(POOL)
AC_SUBST(M_LIBS)
AC_SUBST(PTHREAD_LIBS)
AC_SUBST(PYTHON2)
@ -2134,7 +1709,6 @@ AC_SUBST(PYTHON2DIR)
AC_SUBST(PYTHON3DIR)
AC_SUBST(QUORUM_CFLAGS)
AC_SUBST(QUORUM_LIBS)
AC_SUBST(RAID)
AC_SUBST(RT_LIBS)
AC_SUBST(READLINE_LIBS)
AC_SUBST(REPLICATORS)
@ -2150,7 +1724,6 @@ AC_SUBST(SYSTEMD_LIBS)
AC_SUBST(SNAPSHOTS)
AC_SUBST(STATICDIR)
AC_SUBST(STATIC_LINK)
AC_SUBST(TESTING)
AC_SUBST(TESTSUITE_DATA)
AC_SUBST(THIN)
AC_SUBST(THIN_CHECK_CMD)
@ -2168,14 +1741,17 @@ AC_SUBST(UDEV_SYSTEMD_BACKGROUND_JOBS)
AC_SUBST(UDEV_RULE_EXEC_DETECTION)
AC_SUBST(UDEV_HAS_BUILTIN_BLKID)
AC_SUBST(USE_TRACKING)
AC_SUBST(SILENT_RULES)
AC_SUBST(USRSBINDIR)
AC_SUBST(VALGRIND_POOL)
AC_SUBST(VDO)
AC_SUBST(VDO_FORMAT_CMD)
AC_SUBST(VDO_INCLUDE)
AC_SUBST(VDO_LIB)
AC_SUBST(WRITE_INSTALL)
AC_SUBST(DMEVENTD_PIDFILE)
AC_SUBST(LVMETAD_PIDFILE)
AC_SUBST(LVMPOLLD_PIDFILE)
AC_SUBST(LVMLOCKD_PIDFILE)
AC_SUBST(CLVMD_PIDFILE)
AC_SUBST(CMIRRORD_PIDFILE)
AC_SUBST(interface)
AC_SUBST(kerneldir)
@ -2196,8 +1772,8 @@ dnl -- keep utility scripts running properly
AC_CONFIG_FILES([
Makefile
make.tmpl
libdm/make.tmpl
daemons/Makefile
daemons/clvmd/Makefile
daemons/cmirrord/Makefile
daemons/dmeventd/Makefile
daemons/dmeventd/libdevmapper-event.pc
@ -2207,13 +1783,12 @@ daemons/dmeventd/plugins/raid/Makefile
daemons/dmeventd/plugins/mirror/Makefile
daemons/dmeventd/plugins/snapshot/Makefile
daemons/dmeventd/plugins/thin/Makefile
daemons/dmfilemapd/Makefile
daemons/dmeventd/plugins/vdo/Makefile
daemons/lvmdbusd/Makefile
daemons/lvmdbusd/lvmdbusd
daemons/lvmdbusd/lvmdb.py
daemons/lvmdbusd/lvm_shell_proxy.py
daemons/lvmdbusd/path.py
daemons/lvmetad/Makefile
daemons/lvmpolld/Makefile
daemons/lvmlockd/Makefile
conf/Makefile
@ -2221,50 +1796,30 @@ conf/example.conf
conf/lvmlocal.conf
conf/command_profile_template.profile
conf/metadata_profile_template.profile
include/.symlinks
include/Makefile
lib/Makefile
lib/format1/Makefile
lib/format_pool/Makefile
lib/locking/Makefile
lib/mirror/Makefile
include/lvm-version.h
lib/raid/Makefile
lib/snapshot/Makefile
lib/thin/Makefile
lib/cache_segtype/Makefile
libdaemon/Makefile
libdaemon/client/Makefile
libdaemon/server/Makefile
libdm/Makefile
libdm/dm-tools/Makefile
libdm/libdevmapper.pc
liblvm/Makefile
liblvm/liblvm2app.pc
man/Makefile
po/Makefile
python/Makefile
python/setup.py
scripts/blkdeactivate.sh
scripts/blk_availability_init_red_hat
scripts/blk_availability_systemd_red_hat.service
scripts/clvmd_init_red_hat
scripts/cmirrord_init_red_hat
scripts/com.redhat.lvmdbus1.service
scripts/dm_event_systemd_red_hat.service
scripts/dm_event_systemd_red_hat.socket
scripts/lvm2_cluster_activation_red_hat.sh
scripts/lvm2_cluster_activation_systemd_red_hat.service
scripts/lvm2_clvmd_systemd_red_hat.service
scripts/lvm2_cmirrord_systemd_red_hat.service
scripts/lvm2_lvmdbusd_systemd_red_hat.service
scripts/lvm2_lvmetad_init_red_hat
scripts/lvm2_lvmetad_systemd_red_hat.service
scripts/lvm2_lvmetad_systemd_red_hat.socket
scripts/lvm2_lvmpolld_init_red_hat
scripts/lvm2_lvmpolld_systemd_red_hat.service
scripts/lvm2_lvmpolld_systemd_red_hat.socket
scripts/lvm2_lvmlockd_systemd_red_hat.service
scripts/lvm2_lvmlocking_systemd_red_hat.service
scripts/lvm2_monitoring_init_red_hat
scripts/lvm2_monitoring_systemd_red_hat.service
scripts/lvm2_pvscan_systemd_red_hat@.service
@ -2272,13 +1827,8 @@ scripts/lvm2_tmpfiles_red_hat.conf
scripts/lvmdump.sh
scripts/Makefile
test/Makefile
test/api/Makefile
test/unit/Makefile
tools/Makefile
udev/Makefile
unit-tests/datastruct/Makefile
unit-tests/regex/Makefile
unit-tests/mm/Makefile
])
AC_OUTPUT
@ -2294,6 +1844,9 @@ AS_IF([test -n "$CACHE_CONFIGURE_WARN"],
AS_IF([test -n "$CACHE_CHECK_VERSION_WARN"],
[AC_MSG_WARN([You should install latest cache_check vsn 0.7.0 to use lvm2 cache metadata format 2])])
AS_IF([test -n "$VDO_CONFIGURE_WARN"],
[AC_MSG_WARN([unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!])])
AS_IF([test "$ODIRECT" != yes],
[AC_MSG_WARN([O_DIRECT disabled: low-memory pvmove may lock up])])


@ -15,11 +15,7 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
.PHONY: dmeventd clvmd cmirrord lvmetad lvmpolld lvmlockd
ifneq ("@CLVMD@", "none")
SUBDIRS += clvmd
endif
.PHONY: dmeventd cmirrord lvmpolld lvmlockd
ifeq ("@BUILD_CMIRRORD@", "yes")
SUBDIRS += cmirrord
@ -32,10 +28,6 @@ daemons.cflow: dmeventd.cflow
endif
endif
ifeq ("@BUILD_LVMETAD@", "yes")
SUBDIRS += lvmetad
endif
ifeq ("@BUILD_LVMPOLLD@", "yes")
SUBDIRS += lvmpolld
endif
@ -48,12 +40,8 @@ ifeq ("@BUILD_LVMDBUSD@", "yes")
SUBDIRS += lvmdbusd
endif
ifeq ("@BUILD_DMFILEMAPD@", "yes")
SUBDIRS += dmfilemapd
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd dmfilemapd
SUBDIRS = cmirrord dmeventd lvmpolld lvmlockd lvmdbusd
endif
include $(top_builddir)/make.tmpl


@ -1 +0,0 @@
clvmd


@ -1,94 +0,0 @@
#
# Copyright (C) 2004 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
CMAN_LIBS = @CMAN_LIBS@
CMAN_CFLAGS = @CMAN_CFLAGS@
CMAP_LIBS = @CMAP_LIBS@
CMAP_CFLAGS = @CMAP_CFLAGS@
CONFDB_LIBS = @CONFDB_LIBS@
CONFDB_CFLAGS = @CONFDB_CFLAGS@
CPG_LIBS = @CPG_LIBS@
CPG_CFLAGS = @CPG_CFLAGS@
DLM_LIBS = @DLM_LIBS@
DLM_CFLAGS = @DLM_CFLAGS@
QUORUM_LIBS = @QUORUM_LIBS@
QUORUM_CFLAGS = @QUORUM_CFLAGS@
SALCK_LIBS = @SALCK_LIBS@
SALCK_CFLAGS = @SALCK_CFLAGS@
SOURCES = \
clvmd-command.c\
clvmd.c\
lvm-functions.c\
refresh_clvmd.c
ifneq (,$(findstring cman,, "@CLVMD@,"))
SOURCES += clvmd-cman.c
LMLIBS += $(CMAN_LIBS) $(CONFDB_LIBS) $(DLM_LIBS)
CFLAGS += $(CMAN_CFLAGS) $(CONFDB_CFLAGS) $(DLM_CFLAGS)
DEFS += -DUSE_CMAN
endif
ifneq (,$(findstring openais,, "@CLVMD@,"))
SOURCES += clvmd-openais.c
LMLIBS += $(CONFDB_LIBS) $(CPG_LIBS) $(SALCK_LIBS)
CFLAGS += $(CONFDB_CFLAGS) $(CPG_CFLAGS) $(SALCK_CFLAGS)
DEFS += -DUSE_OPENAIS
endif
ifneq (,$(findstring corosync,, "@CLVMD@,"))
SOURCES += clvmd-corosync.c
LMLIBS += $(CMAP_LIBS) $(CONFDB_LIBS) $(CPG_LIBS) $(DLM_LIBS) $(QUORUM_LIBS)
CFLAGS += $(CMAP_CFLAGS) $(CONFDB_CFLAGS) $(CPG_CFLAGS) $(DLM_CFLAGS) $(QUORUM_CFLAGS)
DEFS += -DUSE_COROSYNC
endif
ifneq (,$(findstring singlenode,, "@CLVMD@,"))
SOURCES += clvmd-singlenode.c
DEFS += -DUSE_SINGLENODE
endif
ifeq ($(MAKECMDGOALS),distclean)
SOURCES += clvmd-cman.c
SOURCES += clvmd-openais.c
SOURCES += clvmd-corosync.c
SOURCES += clvmd-singlenode.c
endif
TARGETS = \
clvmd
include $(top_builddir)/make.tmpl
LIBS += $(LVMINTERNAL_LIBS) -ldevmapper $(PTHREAD_LIBS)
CFLAGS += -fno-strict-aliasing $(EXTRA_EXEC_CFLAGS)
INSTALL_TARGETS = \
install_clvmd
clvmd: $(OBJECTS) $(top_builddir)/lib/liblvm-internal.a
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) \
-o clvmd $(OBJECTS) $(LMLIBS) $(LIBS)
.PHONY: install_clvmd
install_clvmd: $(TARGETS)
$(INSTALL_PROGRAM) -D clvmd $(usrsbindir)/clvmd
install: $(INSTALL_TARGETS)
install_cluster: $(INSTALL_TARGETS)


@ -1,85 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* Definitions for CLVMD server and clients */
/*
* The protocol spoken over the cluster and across the local socket.
*/
#ifndef _CLVM_H
#define _CLVM_H
#include "configure.h"
#include <inttypes.h>
struct clvm_header {
uint8_t cmd; /* See below */
uint8_t flags; /* See below */
uint16_t xid; /* Transaction ID */
uint32_t clientid; /* Only used in Daemon->Daemon comms */
int32_t status; /* For replies, whether request succeeded */
uint32_t arglen; /* Length of argument below.
If >1500 then it will be passed
around the cluster in the system LV */
char node[1]; /* Actually a NUL-terminated string, node name.
If this is empty then the command is
forwarded to all cluster nodes unless
FLAG_LOCAL or FLAG_REMOTE is also set. */
char args[1]; /* Arguments for the command follow the
node name, This member is only
valid if the node name is empty */
} __attribute__ ((packed));
/* Flags */
#define CLVMD_FLAG_LOCAL 1 /* Only do this on the local node */
#define CLVMD_FLAG_SYSTEMLV 2 /* Data in system LV under my node name */
#define CLVMD_FLAG_NODEERRS 4 /* Reply has errors in node-specific portion */
#define CLVMD_FLAG_REMOTE 8 /* Do this on all nodes except for the local node */
/* Name of the local socket to communicate between lvm and clvmd */
#define CLVMD_SOCKNAME DEFAULT_RUN_DIR "/clvmd.sock"
/* Internal commands & replies */
#define CLVMD_CMD_REPLY 1
#define CLVMD_CMD_VERSION 2 /* Send version around cluster when we start */
#define CLVMD_CMD_GOAWAY 3 /* Die if received this - we are running
an incompatible version */
#define CLVMD_CMD_TEST 4 /* Just for mucking about */
#define CLVMD_CMD_LOCK 30
#define CLVMD_CMD_UNLOCK 31
/* Lock/Unlock commands */
#define CLVMD_CMD_LOCK_LV 50
#define CLVMD_CMD_LOCK_VG 51
#define CLVMD_CMD_LOCK_QUERY 52
/* Misc functions */
#define CLVMD_CMD_REFRESH 40
#define CLVMD_CMD_GET_CLUSTERNAME 41
#define CLVMD_CMD_SET_DEBUG 42
#define CLVMD_CMD_VG_BACKUP 43
#define CLVMD_CMD_RESTART 44
#define CLVMD_CMD_SYNC_NAMES 45
/* Used internally by some callers, but not part of the protocol.*/
#ifndef NODE_ALL
# define NODE_ALL "*"
# define NODE_LOCAL "."
# define NODE_REMOTE "^"
#endif
#endif
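
The definitions above are the complete wire format: a fixed header followed by a NUL-terminated node name and then the argument bytes, sent over the cluster or over the local socket at CLVMD_SOCKNAME. As a minimal sketch only (this is not the code the lvm client actually uses, and it assumes a configured tree so that "clvm.h" and CLVMD_SOCKNAME resolve), building a CLVMD_CMD_TEST request and writing it to the local socket could look like this:

/* Illustrative sketch only: builds one CLVMD_CMD_TEST message and sends it
 * to the clvmd local socket, without reading the reply or handling errors
 * beyond early exits. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#include "clvm.h"	/* struct clvm_header and CLVMD_* constants from above */

int main(void)
{
	const char *args = "ping";
	/* Layout: fixed header, NUL-terminated node name, then args.  An empty
	 * node name would send the command everywhere, but CLVMD_FLAG_LOCAL
	 * (used here) restricts it to the local node. */
	size_t len = sizeof(struct clvm_header) + strlen(args) + 1;
	struct clvm_header *msg = calloc(1, len);
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	int fd = -1;

	if (!msg)
		return 1;

	msg->cmd = CLVMD_CMD_TEST;
	msg->flags = CLVMD_FLAG_LOCAL;
	msg->xid = 1;
	msg->arglen = (uint32_t) (strlen(args) + 1);
	msg->node[0] = '\0';			/* empty node name */
	memcpy(msg->node + 1, args, strlen(args) + 1);

	strncpy(addr.sun_path, CLVMD_SOCKNAME, sizeof(addr.sun_path) - 1);
	if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) >= 0 &&
	    connect(fd, (struct sockaddr *) &addr, sizeof(addr)) == 0)
		(void) write(fd, msg, len);

	if (fd >= 0)
		close(fd);
	free(msg);
	return 0;
}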


@ -1,505 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* CMAN communication layer for clvmd.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvmd-comms.h"
#include "clvm.h"
#include "clvmd.h"
#include "lvm-functions.h"
#include <libdlm.h>
#include <syslog.h>
#define LOCKSPACE_NAME "clvmd"
struct clvmd_node
{
struct cman_node *node;
int clvmd_up;
};
static int num_nodes;
static struct cman_node *nodes = NULL;
static struct cman_node this_node;
static int count_nodes; /* size of allocated nodes array */
static struct dm_hash_table *node_updown_hash;
static dlm_lshandle_t *lockspace;
static cman_handle_t c_handle;
static void count_clvmds_running(void);
static void get_members(void);
static int nodeid_from_csid(const char *csid);
static int name_from_nodeid(int nodeid, char *name);
static void event_callback(cman_handle_t handle, void *private, int reason, int arg);
static void data_callback(cman_handle_t handle, void *private,
char *buf, int len, uint8_t port, int nodeid);
struct lock_wait {
pthread_cond_t cond;
pthread_mutex_t mutex;
struct dlm_lksb lksb;
};
static int _init_cluster(void)
{
node_updown_hash = dm_hash_create(100);
/* Open the cluster communication socket */
c_handle = cman_init(NULL);
if (!c_handle) {
syslog(LOG_ERR, "Can't open cluster manager socket: %m");
return -1;
}
DEBUGLOG("Connected to CMAN\n");
if (cman_start_recv_data(c_handle, data_callback, CLUSTER_PORT_CLVMD)) {
syslog(LOG_ERR, "Can't bind cluster socket: %m");
return -1;
}
if (cman_start_notification(c_handle, event_callback)) {
syslog(LOG_ERR, "Can't start cluster event listening");
return -1;
}
/* Get the cluster members list */
get_members();
count_clvmds_running();
DEBUGLOG("CMAN initialisation complete\n");
/* Create a lockspace for LV & VG locks to live in */
lockspace = dlm_open_lockspace(LOCKSPACE_NAME);
if (!lockspace) {
lockspace = dlm_create_lockspace(LOCKSPACE_NAME, 0600);
if (!lockspace) {
syslog(LOG_ERR, "Unable to create DLM lockspace for CLVM: %m");
return -1;
}
DEBUGLOG("Created DLM lockspace for CLVMD.\n");
} else
DEBUGLOG("Opened existing DLM lockspace for CLVMD.\n");
dlm_ls_pthread_init(lockspace);
DEBUGLOG("DLM initialisation complete\n");
return 0;
}
static void _cluster_init_completed(void)
{
clvmd_cluster_init_completed();
}
static int _get_main_cluster_fd(void)
{
return cman_get_fd(c_handle);
}
static int _get_num_nodes(void)
{
int i;
int nnodes = 0;
/* return number of ACTIVE nodes */
for (i=0; i<num_nodes; i++) {
if (nodes[i].cn_member && nodes[i].cn_nodeid)
nnodes++;
}
return nnodes;
}
/* send_message with the fd check removed */
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
int nodeid = 0;
if (csid)
memcpy(&nodeid, csid, CMAN_MAX_CSID_LEN);
if (cman_send_data(c_handle, buf, msglen, 0, CLUSTER_PORT_CLVMD, nodeid) <= 0)
{
log_error("%s", errtext);
}
return msglen;
}
static void _get_our_csid(char *csid)
{
if (this_node.cn_nodeid == 0) {
cman_get_node(c_handle, 0, &this_node);
}
memcpy(csid, &this_node.cn_nodeid, CMAN_MAX_CSID_LEN);
}
/* Call a callback routine for each node is that known (down means not running a clvmd) */
static int _cluster_do_node_callback(struct local_client *client,
void (*callback) (struct local_client *,
const char *,
int))
{
int i;
int somedown = 0;
for (i = 0; i < _get_num_nodes(); i++) {
if (nodes[i].cn_member && nodes[i].cn_nodeid) {
int up = (int)(long)dm_hash_lookup_binary(node_updown_hash, (char *)&nodes[i].cn_nodeid, sizeof(int));
callback(client, (char *)&nodes[i].cn_nodeid, up);
if (!up)
somedown = -1;
}
}
return somedown;
}
/* Process OOB messages from the cluster socket */
static void event_callback(cman_handle_t handle, void *private, int reason, int arg)
{
char namebuf[MAX_CLUSTER_MEMBER_NAME_LEN];
switch (reason) {
case CMAN_REASON_PORTCLOSED:
name_from_nodeid(arg, namebuf);
log_notice("clvmd on node %s has died\n", namebuf);
DEBUGLOG("Got port closed message, removing node %s\n", namebuf);
dm_hash_insert_binary(node_updown_hash, (char *)&arg, sizeof(int), (void *)0);
break;
case CMAN_REASON_STATECHANGE:
DEBUGLOG("Got state change message, re-reading members list\n");
get_members();
break;
#if defined(LIBCMAN_VERSION) && LIBCMAN_VERSION >= 2
case CMAN_REASON_PORTOPENED:
/* Ignore this, wait for startup message from clvmd itself */
break;
case CMAN_REASON_TRY_SHUTDOWN:
DEBUGLOG("Got try shutdown, sending OK\n");
cman_replyto_shutdown(c_handle, 1);
break;
#endif
default:
/* ERROR */
DEBUGLOG("Got unknown event callback message: %d\n", reason);
break;
}
}
static struct local_client *cman_client;
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
/* Save this for data_callback */
cman_client = fd;
/* We never return a new client */
*new_client = NULL;
return cman_dispatch(c_handle, 0);
}
static void data_callback(cman_handle_t handle, void *private,
char *buf, int len, uint8_t port, int nodeid)
{
/* Ignore looped back messages */
if (nodeid == this_node.cn_nodeid)
return;
process_message(cman_client, buf, len, (char *)&nodeid);
}
static void _add_up_node(const char *csid)
{
/* It's up ! */
int nodeid = nodeid_from_csid(csid);
dm_hash_insert_binary(node_updown_hash, (char *)&nodeid, sizeof(int), (void *)1);
DEBUGLOG("Added new node %d to updown list\n", nodeid);
}
static void _cluster_closedown(void)
{
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
cman_finish(c_handle);
}
static int is_listening(int nodeid)
{
int status;
do {
status = cman_is_listening(c_handle, nodeid, CLUSTER_PORT_CLVMD);
if (status < 0 && errno == EBUSY) { /* Don't busywait */
sleep(1);
errno = EBUSY; /* In case sleep trashes it */
}
}
while (status < 0 && errno == EBUSY);
return status;
}
/* Populate the list of CLVMDs running.
called only at startup time */
static void count_clvmds_running(void)
{
int i;
for (i = 0; i < num_nodes; i++) {
int nodeid = nodes[i].cn_nodeid;
if (is_listening(nodeid) == 1)
dm_hash_insert_binary(node_updown_hash, (void *)&nodeid, sizeof(int), (void*)1);
else
dm_hash_insert_binary(node_updown_hash, (void *)&nodeid, sizeof(int), (void*)0);
}
}
/* Get a list of active cluster members */
static void get_members(void)
{
int retnodes;
int status;
int i;
int high_nodeid = 0;
num_nodes = cman_get_node_count(c_handle);
if (num_nodes == -1) {
log_error("Unable to get node count");
return;
}
/* Not enough room for new nodes list ? */
if (num_nodes > count_nodes && nodes) {
free(nodes);
nodes = NULL;
}
if (nodes == NULL) {
count_nodes = num_nodes + 10; /* Overallocate a little */
nodes = malloc(count_nodes * sizeof(struct cman_node));
if (!nodes) {
log_error("Unable to allocate nodes array\n");
exit(5);
}
}
status = cman_get_nodes(c_handle, count_nodes, &retnodes, nodes);
if (status < 0) {
log_error("Unable to get node details");
exit(6);
}
/* Get the highest nodeid */
for (i=0; i<retnodes; i++) {
if (nodes[i].cn_nodeid > high_nodeid)
high_nodeid = nodes[i].cn_nodeid;
}
}
/* Convert a node name to a CSID */
static int _csid_from_name(char *csid, const char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (strcmp(name, nodes[i].cn_name) == 0) {
memcpy(csid, &nodes[i].cn_nodeid, CMAN_MAX_CSID_LEN);
return 0;
}
}
return -1;
}
/* Convert a CSID to a node name */
static int _name_from_csid(const char *csid, char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (memcmp(csid, &nodes[i].cn_nodeid, CMAN_MAX_CSID_LEN) == 0) {
strcpy(name, nodes[i].cn_name);
return 0;
}
}
/* Who?? */
strcpy(name, "Unknown");
return -1;
}
/* Convert a node ID to a node name */
static int name_from_nodeid(int nodeid, char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (nodeid == nodes[i].cn_nodeid) {
strcpy(name, nodes[i].cn_name);
return 0;
}
}
/* Who?? */
strcpy(name, "Unknown");
return -1;
}
/* Convert a CSID to a node ID */
static int nodeid_from_csid(const char *csid)
{
int nodeid;
memcpy(&nodeid, csid, CMAN_MAX_CSID_LEN);
return nodeid;
}
static int _is_quorate(void)
{
return cman_is_quorate(c_handle);
}
static void sync_ast_routine(void *arg)
{
struct lock_wait *lwait = arg;
pthread_mutex_lock(&lwait->mutex);
pthread_cond_signal(&lwait->cond);
pthread_mutex_unlock(&lwait->mutex);
}
static int _sync_lock(const char *resource, int mode, int flags, int *lockid)
{
int status;
struct lock_wait lwait;
if (!lockid) {
errno = EINVAL;
return -1;
}
DEBUGLOG("sync_lock: '%s' mode:%d flags=%d\n", resource,mode,flags);
/* Conversions need the lockid in the LKSB */
if (flags & LKF_CONVERT)
lwait.lksb.sb_lkid = *lockid;
pthread_cond_init(&lwait.cond, NULL);
pthread_mutex_init(&lwait.mutex, NULL);
pthread_mutex_lock(&lwait.mutex);
status = dlm_ls_lock(lockspace,
mode,
&lwait.lksb,
flags,
resource,
strlen(resource),
0, sync_ast_routine, &lwait, NULL, NULL);
if (status)
return status;
/* Wait for it to complete */
pthread_cond_wait(&lwait.cond, &lwait.mutex);
pthread_mutex_unlock(&lwait.mutex);
*lockid = lwait.lksb.sb_lkid;
errno = lwait.lksb.sb_status;
DEBUGLOG("sync_lock: returning lkid %x\n", *lockid);
if (lwait.lksb.sb_status)
return -1;
else
return 0;
}
static int _sync_unlock(const char *resource /* UNUSED */, int lockid)
{
int status;
struct lock_wait lwait;
DEBUGLOG("sync_unlock: '%s' lkid:%x\n", resource, lockid);
pthread_cond_init(&lwait.cond, NULL);
pthread_mutex_init(&lwait.mutex, NULL);
pthread_mutex_lock(&lwait.mutex);
status = dlm_ls_unlock(lockspace, lockid, 0, &lwait.lksb, &lwait);
if (status)
return status;
/* Wait for it to complete */
pthread_cond_wait(&lwait.cond, &lwait.mutex);
pthread_mutex_unlock(&lwait.mutex);
errno = lwait.lksb.sb_status;
if (lwait.lksb.sb_status != EUNLOCK)
return -1;
else
return 0;
}
static int _get_cluster_name(char *buf, int buflen)
{
cman_cluster_t cluster_info;
int status;
status = cman_get_cluster(c_handle, &cluster_info);
if (!status) {
strncpy(buf, cluster_info.ci_name, buflen);
}
return status;
}
static struct cluster_ops _cluster_cman_ops = {
.name = "cman",
.cluster_init_completed = _cluster_init_completed,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _sync_lock,
.sync_unlock = _sync_unlock,
};
struct cluster_ops *init_cman_cluster(void)
{
if (!_init_cluster())
return &_cluster_cman_ops;
else
return NULL;
}


@ -1,416 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
CLVMD Cluster LVM daemon command processor.
To add commands to the daemon simply add a processor in do_command and return
and messages back in buf and the length in *retlen. The initial value of
buflen is the maximum size of the buffer. if buf is not large enough then it
may be reallocated by the functions in here to a suitable size bearing in
mind that anything larger than the passed-in size will have to be returned
using the system LV and so performance will suffer.
The status return will be negated and passed back to the originating node.
pre- and post- command routines are called only on the local node. The
purpose is primarily to get and release locks, though the pre- routine should
also do any other local setups required by the command (if any) and can
return a failure code that prevents the command from being distributed around
the cluster
The pre- and post- routines are run in their own thread so can block as long
they like, do_command is run in the main clvmd thread so should not block for
too long. If the pre-command returns an error code (!=0) then the command
will not be propogated around the cluster but the post-command WILL be called
Also note that the pre and post routine are *always* called on the local
node, even if the command to be executed was only requested to run on a
remote node. It may peek inside the client structure to check the status of
the command.
The clients of the daemon must, naturally, understand the return messages and
codes.
Routines in here may only READ the values in the client structure passed in
apart from client->private which they are free to do what they like with.
*/
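
As an illustration of that recipe, a hypothetical processor (not one of the CLVMD_CMD_* commands defined in clvm.h) that simply echoes its argument back would fill *buf and *retlen as follows; the real handlers below use dm_snprintf() and may reallocate *buf when the reply does not fit:

#include <errno.h>
#include <stdio.h>

/* Hypothetical handler, sketched against the contract described above: write
 * the reply into *buf (at most buflen bytes), set *retlen to its length, and
 * return 0 on success or an errno value, which the caller negates and passes
 * back to the originating node. */
static int do_echo_command(const char *args, char *buf, int buflen, int *retlen)
{
	int n = snprintf(buf, (size_t) buflen, "ECHO: %s", args);

	if (n < 0 || n >= buflen)
		return E2BIG;	/* reply did not fit in the caller's buffer */

	*retlen = n + 1;	/* include the trailing NUL, as do_command does */
	return 0;
}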
#include "clvmd-common.h"
#include "clvmd-comms.h"
#include "clvm.h"
#include "clvmd.h"
#include "lvm-globals.h"
#include "lvm-functions.h"
#include "locking.h"
#include <sys/utsname.h>
extern struct cluster_ops *clops;
static int restart_clvmd(void);
/* This is where all the real work happens:
NOTE: client will be NULL when this is executed on a remote node */
int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
char **buf, int buflen, int *retlen)
{
char *args = msg->node + strlen(msg->node) + 1;
int arglen = msglen - sizeof(struct clvm_header) - strlen(msg->node);
int status = 0;
char *lockname;
const char *locktype;
struct utsname nodeinfo;
unsigned char lock_cmd;
unsigned char lock_flags;
/* Do the command */
switch (msg->cmd) {
/* Just a test message */
case CLVMD_CMD_TEST:
if (arglen > buflen) {
char *new_buf;
buflen = arglen + 200;
new_buf = realloc(*buf, buflen);
if (new_buf == NULL) {
status = errno;
free (*buf);
}
*buf = new_buf;
}
if (*buf) {
if (uname(&nodeinfo))
memset(&nodeinfo, 0, sizeof(nodeinfo));
*retlen = 1 + dm_snprintf(*buf, buflen,
"TEST from %s: %s v%s",
nodeinfo.nodename, args,
nodeinfo.release);
}
break;
case CLVMD_CMD_LOCK_VG:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
/* Check to see if the VG is in use by LVM1 */
status = do_check_lvm1(lockname);
do_lock_vg(lock_cmd, lock_flags, lockname);
break;
case CLVMD_CMD_LOCK_LV:
/* This is the biggie */
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = do_lock_lv(lock_cmd, lock_flags, lockname);
/* Replace EIO with something less scary */
if (status == EIO) {
*retlen = 1 + dm_snprintf(*buf, buflen, "%s",
get_last_lvm_error());
return EIO;
}
break;
case CLVMD_CMD_LOCK_QUERY:
lockname = &args[2];
if (buflen < 3)
return EIO;
if ((locktype = do_lock_query(lockname)))
*retlen = 1 + dm_snprintf(*buf, buflen, "%s", locktype);
break;
case CLVMD_CMD_REFRESH:
do_refresh_cache();
break;
case CLVMD_CMD_SYNC_NAMES:
lvm_do_fs_unlock();
break;
case CLVMD_CMD_SET_DEBUG:
clvmd_set_debug((debug_t) args[0]);
break;
case CLVMD_CMD_RESTART:
status = restart_clvmd();
break;
case CLVMD_CMD_GET_CLUSTERNAME:
status = clops->get_cluster_name(*buf, buflen);
if (!status)
*retlen = strlen(*buf)+1;
break;
case CLVMD_CMD_VG_BACKUP:
/*
* Do not run backup on local node, caller should do that.
*/
if (!client)
lvm_do_backup(&args[2]);
break;
default:
/* Won't get here because command is validated in pre_command */
break;
}
/* Check the status of the command and return the error text */
if (status) {
if (*buf)
*retlen = dm_snprintf(*buf, buflen, "%s", strerror(status)) + 1;
else
*retlen = 0;
}
return status;
}
static int lock_vg(struct local_client *client)
{
struct dm_hash_table *lock_hash;
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
unsigned char lock_cmd;
int lock_mode;
char *args = header->node + strlen(header->node) + 1;
int lkid;
int status;
char *lockname;
/*
* Keep a track of VG locks in our own hash table. In current
* practice there should only ever be more than two VGs locked
* if a user tries to merge lots of them at once
*/
if (!client->bits.localsock.private) {
if (!(lock_hash = dm_hash_create(3)))
return ENOMEM;
client->bits.localsock.private = (void *) lock_hash;
} else
lock_hash = (struct dm_hash_table *) client->bits.localsock.private;
lock_cmd = args[0] & (LCK_NONBLOCK | LCK_HOLD | LCK_SCOPE_MASK | LCK_TYPE_MASK);
lock_mode = ((int) lock_cmd & LCK_TYPE_MASK);
/* lock_flags = args[1]; */
lockname = &args[2];
DEBUGLOG("(%p) doing PRE command LOCK_VG '%s' at %x\n", client, lockname, lock_cmd);
if (lock_mode == LCK_UNLOCK) {
if (!(lkid = (int) (long) dm_hash_lookup(lock_hash, lockname)))
return EINVAL;
if ((status = sync_unlock(lockname, lkid)))
status = errno;
else
dm_hash_remove(lock_hash, lockname);
} else {
/* Read locks need to be PR; other modes get passed through */
if (lock_mode == LCK_READ)
lock_mode = LCK_PREAD;
if ((status = sync_lock(lockname, lock_mode, (lock_cmd & LCK_NONBLOCK) ? LCKF_NOQUEUE : 0, &lkid)))
status = errno;
else if (!dm_hash_insert(lock_hash, lockname, (void *) (long) lkid))
return ENOMEM;
}
return status;
}
/* Pre-command is a good place to get locks that are needed only for the duration
of the commands around the cluster (don't forget to free them in post-command),
and to sanity check the command arguments */
int do_pre_command(struct local_client *client)
{
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
unsigned char lock_cmd;
unsigned char lock_flags;
char *args = header->node + strlen(header->node) + 1;
int lockid = 0;
int status = 0;
char *lockname;
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_lock("CLVMD_TEST", LCK_EXCL, 0, &lockid);
client->bits.localsock.private = (void *)(long)lockid;
break;
case CLVMD_CMD_LOCK_VG:
lockname = &args[2];
/* We take out a real lock unless LCK_CACHE was set */
if (!strncmp(lockname, "V_", 2) ||
!strncmp(lockname, "P_#", 3))
status = lock_vg(client);
break;
case CLVMD_CMD_LOCK_LV:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = pre_lock_lv(lock_cmd, lock_flags, lockname);
break;
case CLVMD_CMD_REFRESH:
case CLVMD_CMD_GET_CLUSTERNAME:
case CLVMD_CMD_SET_DEBUG:
case CLVMD_CMD_VG_BACKUP:
case CLVMD_CMD_SYNC_NAMES:
case CLVMD_CMD_LOCK_QUERY:
case CLVMD_CMD_RESTART:
break;
default:
log_error("Unknown command %d received\n", header->cmd);
status = EINVAL;
}
return status;
}
/* Note that the post-command routine is called even if the pre-command or the real command
failed */
int do_post_command(struct local_client *client)
{
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
int status = 0;
unsigned char lock_cmd;
unsigned char lock_flags;
char *args = header->node + strlen(header->node) + 1;
char *lockname;
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_unlock("CLVMD_TEST", (int) (long) client->bits.localsock.private);
client->bits.localsock.private = NULL;
break;
case CLVMD_CMD_LOCK_LV:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = post_lock_lv(lock_cmd, lock_flags, lockname);
break;
default:
/* Nothing to do here */
break;
}
return status;
}
/* Called when the client is about to be deleted */
void cmd_client_cleanup(struct local_client *client)
{
struct dm_hash_node *v;
struct dm_hash_table *lock_hash;
int lkid;
char *lockname;
DEBUGLOG("(%p) Client thread cleanup\n", client);
if (!client->bits.localsock.private)
return;
lock_hash = (struct dm_hash_table *)client->bits.localsock.private;
dm_hash_iterate(v, lock_hash) {
lkid = (int)(long)dm_hash_get_data(lock_hash, v);
lockname = dm_hash_get_key(lock_hash, v);
DEBUGLOG("(%p) Cleanup: Unlocking lock %s %x\n", client, lockname, lkid);
(void) sync_unlock(lockname, lkid);
}
dm_hash_destroy(lock_hash);
client->bits.localsock.private = NULL;
}
static int restart_clvmd(void)
{
const char **argv;
char *lv_name;
int argc = 0, max_locks = 0;
struct dm_hash_node *hn = NULL;
char debug_arg[16];
const char *clvmd = getenv("LVM_CLVMD_BINARY") ? : CLVMD_PATH;
DEBUGLOG("clvmd restart requested\n");
/* Count exclusively-open LVs */
do {
hn = get_next_excl_lock(hn, &lv_name);
if (lv_name) {
max_locks++;
if (!*lv_name)
break; /* FIXME: Is this error ? */
}
} while (hn);
/* clvmd + locks (-E uuid) + debug (-d X) + NULL */
if (!(argv = malloc((max_locks * 2 + 6) * sizeof(*argv))))
goto_out;
/*
* Build the command-line
*/
argv[argc++] = "clvmd";
/* Propagate debug options */
if (clvmd_get_debug()) {
if (dm_snprintf(debug_arg, sizeof(debug_arg), "-d%u", clvmd_get_debug()) < 0)
goto_out;
argv[argc++] = debug_arg;
}
/* Propagate foreground options */
if (clvmd_get_foreground())
argv[argc++] = "-f";
argv[argc++] = "-I";
argv[argc++] = clops->name;
/* Now add the exclusively-open LVs */
hn = NULL;
do {
hn = get_next_excl_lock(hn, &lv_name);
if (lv_name) {
if (!*lv_name)
break; /* FIXME: Is this error ? */
argv[argc++] = "-E";
argv[argc++] = lv_name;
DEBUGLOG("excl lock: %s\n", lv_name);
}
} while (hn);
argv[argc] = NULL;
/* Exec new clvmd */
DEBUGLOG("--- Restarting %s ---\n", clvmd);
for (argc = 1; argv[argc]; argc++) DEBUGLOG("--- %d: %s\n", argc, argv[argc]);
/* NOTE: This will fail when downgrading! */
execvp(clvmd, (char **)argv);
out:
/* We failed */
DEBUGLOG("Restart of clvmd failed.\n");
free(argv);
return EIO;
}


@ -1,119 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* Abstraction layer for clvmd cluster communications
*/
#ifndef _CLVMD_COMMS_H
#define _CLVMD_COMMS_H
struct local_client;
struct cluster_ops {
const char *name;
void (*cluster_init_completed) (void);
int (*cluster_send_message) (const void *buf, int msglen,
const char *csid,
const char *errtext);
int (*name_from_csid) (const char *csid, char *name);
int (*csid_from_name) (char *csid, const char *name);
int (*get_num_nodes) (void);
int (*cluster_fd_callback) (struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client);
int (*get_main_cluster_fd) (void); /* gets accept FD or cman cluster socket */
int (*cluster_do_node_callback) (struct local_client *client,
void (*callback) (struct local_client *,
const char *csid,
int node_up));
int (*is_quorate) (void);
void (*get_our_csid) (char *csid);
void (*add_up_node) (const char *csid);
void (*reread_config) (void);
void (*cluster_closedown) (void);
int (*get_cluster_name)(char *buf, int buflen);
int (*sync_lock) (const char *resource, int mode,
int flags, int *lockid);
int (*sync_unlock) (const char *resource, int lockid);
};
#ifdef USE_CMAN
# include <netinet/in.h>
# include "libcman.h"
# define CMAN_MAX_CSID_LEN 4
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN CMAN_MAX_CSID_LEN
# endif
# undef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN CMAN_MAX_NODENAME_LEN
# define CMAN_MAX_CLUSTER_MESSAGE 1500
# define CLUSTER_PORT_CLVMD 11
struct cluster_ops *init_cman_cluster(void);
#endif
#ifdef USE_OPENAIS
# include <openais/saAis.h>
# include <corosync/totem/totem.h>
# define OPENAIS_CSID_LEN (sizeof(int))
# define OPENAIS_MAX_CLUSTER_MESSAGE MESSAGE_SIZE_MAX
# define OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN SA_MAX_NAME_LENGTH
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN SA_MAX_NAME_LENGTH
# endif
# ifndef CMAN_MAX_CLUSTER_MESSAGE
# define CMAN_MAX_CLUSTER_MESSAGE MESSAGE_SIZE_MAX
# endif
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_openais_cluster(void);
#endif
#ifdef USE_COROSYNC
# include <corosync/corotypes.h>
# define COROSYNC_CSID_LEN (sizeof(int))
# define COROSYNC_MAX_CLUSTER_MESSAGE 65535
# define COROSYNC_MAX_CLUSTER_MEMBER_NAME_LEN CS_MAX_NAME_LENGTH
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN CS_MAX_NAME_LENGTH
# endif
# ifndef CMAN_MAX_CLUSTER_MESSAGE
# define CMAN_MAX_CLUSTER_MESSAGE 65535
# endif
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_corosync_cluster(void);
#endif
#ifdef USE_SINGLENODE
# define SINGLENODE_CSID_LEN (sizeof(int))
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN 64
# endif
# define SINGLENODE_MAX_CLUSTER_MESSAGE 65535
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_singlenode_cluster(void);
#endif
#endif
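
Every backend fills in one of these tables and returns it from its init_*_cluster() function, as clvmd-cman.c above and clvmd-corosync.c below do. Purely as a sketch of that shape (no such backend exists in the tree), a do-nothing single-node style backend would be wired up roughly like this:

/* Sketch only: a hypothetical "null" backend showing how cluster_ops is
 * populated and handed back to clvmd. */
#include <string.h>

#include "clvmd-comms.h"	/* struct cluster_ops from above */

static int _null_get_num_nodes(void) { return 1; }
static int _null_is_quorate(void) { return 1; }
static void _null_get_our_csid(char *csid) { memset(csid, 0, sizeof(int)); }
static void _null_closedown(void) { }

static struct cluster_ops _null_ops = {
	.name = "null",
	.get_num_nodes = _null_get_num_nodes,
	.is_quorate = _null_is_quorate,
	.get_our_csid = _null_get_our_csid,
	.cluster_closedown = _null_closedown,
	/* remaining callbacks left NULL for brevity; clvmd expects them all */
};

struct cluster_ops *init_null_cluster(void)
{
	return &_null_ops;
}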


@ -1,662 +0,0 @@
/*
* Copyright (C) 2009-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* This provides the interface between clvmd and corosync/DLM as the cluster
* and lock manager.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include "lvm-functions.h"
#include "locking.h"
#include <corosync/cpg.h>
#include <corosync/quorum.h>
#ifdef HAVE_COROSYNC_CONFDB_H
# include <corosync/confdb.h>
#elif defined HAVE_COROSYNC_CMAP_H
# include <corosync/cmap.h>
#else
# error "Either HAVE_COROSYNC_CONFDB_H or HAVE_COROSYNC_CMAP_H must be defined."
#endif
#include <libdlm.h>
#include <syslog.h>
/* Name of the DLM lockspace used for clvmd's LV & VG locks */
#define LOCKSPACE_NAME "clvmd"
static void corosync_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len);
static void corosync_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries);
static void _cluster_closedown(void);
/* Hash list of nodes in the cluster */
static struct dm_hash_table *node_hash;
/* Number of active nodes */
static int num_nodes;
static unsigned int our_nodeid;
static struct local_client *cluster_client;
/* Corosync handles */
static cpg_handle_t cpg_handle;
static quorum_handle_t quorum_handle;
/* DLM Handle */
static dlm_lshandle_t *lockspace;
static struct cpg_name cpg_group_name;
/* Corosync callback structs */
cpg_callbacks_t corosync_cpg_callbacks = {
.cpg_deliver_fn = corosync_cpg_deliver_callback,
.cpg_confchg_fn = corosync_cpg_confchg_callback,
};
quorum_callbacks_t quorum_callbacks = {
.quorum_notify_fn = NULL,
};
struct node_info
{
enum {NODE_DOWN, NODE_CLVMD} state;
int nodeid;
};
/* Set errno to something approximating the right value and return 0 or -1 */
static int cs_to_errno(cs_error_t err)
{
switch(err)
{
case CS_OK:
return 0;
case CS_ERR_LIBRARY:
errno = EINVAL;
break;
case CS_ERR_VERSION:
errno = EINVAL;
break;
case CS_ERR_INIT:
errno = EINVAL;
break;
case CS_ERR_TIMEOUT:
errno = ETIME;
break;
case CS_ERR_TRY_AGAIN:
errno = EAGAIN;
break;
case CS_ERR_INVALID_PARAM:
errno = EINVAL;
break;
case CS_ERR_NO_MEMORY:
errno = ENOMEM;
break;
case CS_ERR_BAD_HANDLE:
errno = EINVAL;
break;
case CS_ERR_BUSY:
errno = EBUSY;
break;
case CS_ERR_ACCESS:
errno = EPERM;
break;
case CS_ERR_NOT_EXIST:
errno = ENOENT;
break;
case CS_ERR_NAME_TOO_LONG:
errno = ENAMETOOLONG;
break;
case CS_ERR_EXIST:
errno = EEXIST;
break;
case CS_ERR_NO_SPACE:
errno = ENOSPC;
break;
case CS_ERR_INTERRUPT:
errno = EINTR;
break;
case CS_ERR_NAME_NOT_FOUND:
errno = ENOENT;
break;
case CS_ERR_NO_RESOURCES:
errno = ENOMEM;
break;
case CS_ERR_NOT_SUPPORTED:
errno = EOPNOTSUPP;
break;
case CS_ERR_BAD_OPERATION:
errno = EINVAL;
break;
case CS_ERR_FAILED_OPERATION:
errno = EIO;
break;
case CS_ERR_MESSAGE_ERROR:
errno = EIO;
break;
case CS_ERR_QUEUE_FULL:
errno = EXFULL;
break;
case CS_ERR_QUEUE_NOT_AVAILABLE:
errno = EINVAL;
break;
case CS_ERR_BAD_FLAGS:
errno = EINVAL;
break;
case CS_ERR_TOO_BIG:
errno = E2BIG;
break;
case CS_ERR_NO_SECTIONS:
errno = ENOMEM;
break;
default:
errno = EINVAL;
break;
}
return -1;
}
static char *print_corosync_csid(const char *csid)
{
static char buf[128];
int id;
memcpy(&id, csid, sizeof(int));
sprintf(buf, "%d", id);
return buf;
}
static void corosync_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len)
{
int target_nodeid;
memcpy(&target_nodeid, msg, COROSYNC_CSID_LEN);
DEBUGLOG("%u got message from nodeid %d for %d. len %zd\n",
our_nodeid, nodeid, target_nodeid, msg_len-4);
if (nodeid != our_nodeid)
if (target_nodeid == our_nodeid || target_nodeid == 0)
process_message(cluster_client, (char *)msg+COROSYNC_CSID_LEN,
msg_len-COROSYNC_CSID_LEN, (char*)&nodeid);
}
static void corosync_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries)
{
int i;
struct node_info *ninfo;
DEBUGLOG("confchg callback. %zd joined, %zd left, %zd members\n",
joined_list_entries, left_list_entries, member_list_entries);
for (i=0; i<joined_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&joined_list[i].nodeid,
COROSYNC_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = joined_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
COROSYNC_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
for (i=0; i<left_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&left_list[i].nodeid,
COROSYNC_CSID_LEN);
if (ninfo)
ninfo->state = NODE_DOWN;
}
num_nodes = member_list_entries;
}
static int _init_cluster(void)
{
cs_error_t err;
#ifdef QUORUM_SET /* corosync/quorum.h */
uint32_t quorum_type;
#endif
node_hash = dm_hash_create(100);
err = cpg_initialize(&cpg_handle,
&corosync_cpg_callbacks);
if (err != CS_OK) {
syslog(LOG_ERR, "Cannot initialise Corosync CPG service: %d",
err);
DEBUGLOG("Cannot initialise Corosync CPG service: %d", err);
return cs_to_errno(err);
}
#ifdef QUORUM_SET
err = quorum_initialize(&quorum_handle,
&quorum_callbacks,
&quorum_type);
if (quorum_type != QUORUM_SET) {
syslog(LOG_ERR, "Corosync quorum service is not configured");
DEBUGLOG("Corosync quorum service is not configured");
return EINVAL;
}
#else
err = quorum_initialize(&quorum_handle,
&quorum_callbacks);
#endif
if (err != CS_OK) {
syslog(LOG_ERR, "Cannot initialise Corosync quorum service: %d",
err);
DEBUGLOG("Cannot initialise Corosync quorum service: %d", err);
return cs_to_errno(err);
}
/* Create a lockspace for LV & VG locks to live in */
lockspace = dlm_open_lockspace(LOCKSPACE_NAME);
if (!lockspace) {
lockspace = dlm_create_lockspace(LOCKSPACE_NAME, 0600);
if (!lockspace) {
syslog(LOG_ERR, "Unable to create DLM lockspace for CLVM: %m");
return -1;
}
DEBUGLOG("Created DLM lockspace for CLVMD.\n");
} else
DEBUGLOG("Opened existing DLM lockspace for CLVMD.\n");
dlm_ls_pthread_init(lockspace);
DEBUGLOG("DLM initialisation complete\n");
/* Connect to the clvmd group */
strcpy((char *)cpg_group_name.value, "clvmd");
cpg_group_name.length = strlen((char *)cpg_group_name.value);
err = cpg_join(cpg_handle, &cpg_group_name);
if (err != CS_OK) {
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
syslog(LOG_ERR, "Cannot join clvmd process group");
DEBUGLOG("Cannot join clvmd process group: %d\n", err);
return cs_to_errno(err);
}
err = cpg_local_get(cpg_handle,
&our_nodeid);
if (err != CS_OK) {
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
syslog(LOG_ERR, "Cannot get local node id\n");
return cs_to_errno(err);
}
DEBUGLOG("Our local node id is %d\n", our_nodeid);
DEBUGLOG("Connected to Corosync\n");
return 0;
}
static void _cluster_closedown(void)
{
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
}
static void _get_our_csid(char *csid)
{
memcpy(csid, &our_nodeid, sizeof(int));
}
/* Corosync doesn't really have node names, so we
   just use the node ID in hex instead */
static int _csid_from_name(char *csid, const char *name)
{
int nodeid;
struct node_info *ninfo;
if (sscanf(name, "%x", &nodeid) == 1) {
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (ninfo)
return nodeid;
}
return -1;
}
static int _name_from_csid(const char *csid, char *name)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (!ninfo)
{
sprintf(name, "UNKNOWN %s", print_corosync_csid(csid));
return -1;
}
sprintf(name, "%x", ninfo->nodeid);
return 0;
}
static int _get_num_nodes(void)
{
DEBUGLOG("num_nodes = %d\n", num_nodes);
return num_nodes;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (!ninfo) {
DEBUGLOG("corosync_add_up_node no node_hash entry for csid %s\n",
print_corosync_csid(csid));
return;
}
DEBUGLOG("corosync_add_up_node %d\n", ninfo->nodeid);
ninfo->state = NODE_CLVMD;
return;
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
struct dm_hash_node *hn;
struct node_info *ninfo;
dm_hash_iterate(hn, node_hash)
{
char csid[COROSYNC_CSID_LEN];
ninfo = dm_hash_get_data(node_hash, hn);
memcpy(csid, dm_hash_get_key(node_hash, hn), COROSYNC_CSID_LEN);
DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
ninfo->state);
if (ninfo->state == NODE_CLVMD)
callback(master_client, csid, 1);
}
return 0;
}
/* Real locking */
static int _lock_resource(const char *resource, int mode, int flags, int *lockid)
{
struct dlm_lksb lksb;
int err;
DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
if (flags & LKF_CONVERT)
lksb.sb_lkid = *lockid;
err = dlm_ls_lock_wait(lockspace,
mode,
&lksb,
flags,
resource,
strlen(resource),
0,
NULL, NULL, NULL);
if (err != 0)
{
DEBUGLOG("dlm_ls_lock returned %d\n", errno);
return err;
}
if (lksb.sb_status != 0)
{
DEBUGLOG("dlm_ls_lock returns lksb.sb_status %d\n", lksb.sb_status);
errno = lksb.sb_status;
return -1;
}
DEBUGLOG("lock_resource returning %d, lock_id=%x\n", err, lksb.sb_lkid);
*lockid = lksb.sb_lkid;
return 0;
}
static int _unlock_resource(const char *resource, int lockid)
{
struct dlm_lksb lksb;
int err;
DEBUGLOG("unlock_resource: %s lockid: %x\n", resource, lockid);
lksb.sb_lkid = lockid;
err = dlm_ls_unlock_wait(lockspace,
lockid,
0,
&lksb);
if (err != 0)
{
DEBUGLOG("Unlock returned %d\n", err);
return err;
}
if (lksb.sb_status != EUNLOCK)
{
DEBUGLOG("dlm_ls_unlock_wait returns lksb.sb_status: %d\n", lksb.sb_status);
errno = lksb.sb_status;
return -1;
}
return 0;
}
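/*
 * Illustrative caller (sketch, not original code): the lockid returned by
 * _lock_resource() is owned by the caller and passed back on unlock; the
 * resource name here is hypothetical.
 *
 *   int lkid;
 *
 *   if (!_lock_resource("V_myvg", LCK_EXCL, 0, &lkid)) {
 *           ... resource held exclusively ...
 *           _unlock_resource("V_myvg", lkid);
 *   }
 */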
static int _is_quorate(void)
{
int quorate;
if (quorum_getquorate(quorum_handle, &quorate) == CS_OK)
return quorate;
else
return 0;
}
static int _get_main_cluster_fd(void)
{
int select_fd;
cpg_fd_get(cpg_handle, &select_fd);
return select_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
cluster_client = fd;
*new_client = NULL;
cpg_dispatch(cpg_handle, CS_DISPATCH_ONE);
return 1;
}
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
static pthread_mutex_t _mutex = PTHREAD_MUTEX_INITIALIZER;
struct iovec iov[2];
cs_error_t err;
int target_node;
if (csid)
memcpy(&target_node, csid, COROSYNC_CSID_LEN);
else
target_node = 0;
iov[0].iov_base = &target_node;
iov[0].iov_len = sizeof(int);
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
pthread_mutex_lock(&_mutex);
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
pthread_mutex_unlock(&_mutex);
return cs_to_errno(err);
}
#ifdef HAVE_COROSYNC_CONFDB_H
/*
* We are not necessarily connected to a Red Hat Cluster system,
* but if we are, this returns the cluster name from cluster.conf.
* I've used confdb rather than ccs to reduce the inter-package
* dependencies as well as to allow people to set a cluster name
* for themselves even if they are not running on RH cluster.
*/
static int _get_cluster_name(char *buf, int buflen)
{
confdb_handle_t handle;
int result;
size_t namelen = buflen;
hdb_handle_t cluster_handle;
confdb_callbacks_t callbacks = {
.confdb_key_change_notify_fn = NULL,
.confdb_object_create_change_notify_fn = NULL,
.confdb_object_delete_change_notify_fn = NULL
};
/* This is a default in case everything else fails */
strncpy(buf, "Corosync", buflen);
/* Look for a cluster name in confdb */
result = confdb_initialize (&handle, &callbacks);
if (result != CS_OK)
return 0;
result = confdb_object_find_start(handle, OBJECT_PARENT_HANDLE);
if (result != CS_OK)
goto out;
result = confdb_object_find(handle, OBJECT_PARENT_HANDLE, (void *)"cluster", strlen("cluster"), &cluster_handle);
if (result != CS_OK)
goto out;
result = confdb_key_get(handle, cluster_handle, (void *)"name", strlen("name"), buf, &namelen);
if (result != CS_OK)
goto out;
buf[namelen] = '\0';
out:
confdb_finalize(handle);
return 0;
}
#elif defined HAVE_COROSYNC_CMAP_H
static int _get_cluster_name(char *buf, int buflen)
{
cmap_handle_t cmap_handle = 0;
int result;
char *name = NULL;
/* This is a default in case everything else fails */
strncpy(buf, "Corosync", buflen);
/* Look for a cluster name in cmap */
result = cmap_initialize(&cmap_handle);
if (result != CS_OK)
return 0;
result = cmap_get_string(cmap_handle, "totem.cluster_name", &name);
if (result != CS_OK)
goto out;
memset(buf, 0, buflen);
strncpy(buf, name, buflen - 1);
out:
if (name)
free(name);
cmap_finalize(cmap_handle);
return 0;
}
#endif
static struct cluster_ops _cluster_corosync_ops = {
.name = "corosync",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _lock_resource,
.sync_unlock = _unlock_resource,
};
struct cluster_ops *init_corosync_cluster(void)
{
if (!_init_cluster())
return &_cluster_corosync_ops;
else
return NULL;
}


@@ -1,687 +0,0 @@
/*
* Copyright (C) 2007-2009 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* This provides the interface between clvmd and OpenAIS as the cluster
* and lock manager.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include <fcntl.h>
#include <syslog.h>
#include <openais/saAis.h>
#include <openais/saLck.h>
#include <corosync/corotypes.h>
#include <corosync/cpg.h>
#include "locking.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "lvm-functions.h"
#include "clvmd.h"
/* Timeout value for several openais calls */
#define TIMEOUT 10
static void openais_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len);
static void openais_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries);
static void _cluster_closedown(void);
/* Hash list of nodes in the cluster */
static struct dm_hash_table *node_hash;
/* For associating lock IDs & resource handles */
static struct dm_hash_table *lock_hash;
/* Number of active nodes */
static int num_nodes;
static unsigned int our_nodeid;
static struct local_client *cluster_client;
/* OpenAIS handles */
static cpg_handle_t cpg_handle;
static SaLckHandleT lck_handle;
static struct cpg_name cpg_group_name;
/* Openais callback structs */
cpg_callbacks_t openais_cpg_callbacks = {
.cpg_deliver_fn = openais_cpg_deliver_callback,
.cpg_confchg_fn = openais_cpg_confchg_callback,
};
struct node_info
{
enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
int nodeid;
};
struct lock_info
{
SaLckResourceHandleT res_handle;
SaLckLockIdT lock_id;
SaNameT lock_name;
};
/* Set errno to something approximating the right value and return 0 or -1 */
static int ais_to_errno(SaAisErrorT err)
{
switch(err)
{
case SA_AIS_OK:
return 0;
case SA_AIS_ERR_LIBRARY:
errno = EINVAL;
break;
case SA_AIS_ERR_VERSION:
errno = EINVAL;
break;
case SA_AIS_ERR_INIT:
errno = EINVAL;
break;
case SA_AIS_ERR_TIMEOUT:
errno = ETIME;
break;
case SA_AIS_ERR_TRY_AGAIN:
errno = EAGAIN;
break;
case SA_AIS_ERR_INVALID_PARAM:
errno = EINVAL;
break;
case SA_AIS_ERR_NO_MEMORY:
errno = ENOMEM;
break;
case SA_AIS_ERR_BAD_HANDLE:
errno = EINVAL;
break;
case SA_AIS_ERR_BUSY:
errno = EBUSY;
break;
case SA_AIS_ERR_ACCESS:
errno = EPERM;
break;
case SA_AIS_ERR_NOT_EXIST:
errno = ENOENT;
break;
case SA_AIS_ERR_NAME_TOO_LONG:
errno = ENAMETOOLONG;
break;
case SA_AIS_ERR_EXIST:
errno = EEXIST;
break;
case SA_AIS_ERR_NO_SPACE:
errno = ENOSPC;
break;
case SA_AIS_ERR_INTERRUPT:
errno = EINTR;
break;
case SA_AIS_ERR_NAME_NOT_FOUND:
errno = ENOENT;
break;
case SA_AIS_ERR_NO_RESOURCES:
errno = ENOMEM;
break;
case SA_AIS_ERR_NOT_SUPPORTED:
errno = EOPNOTSUPP;
break;
case SA_AIS_ERR_BAD_OPERATION:
errno = EINVAL;
break;
case SA_AIS_ERR_FAILED_OPERATION:
errno = EIO;
break;
case SA_AIS_ERR_MESSAGE_ERROR:
errno = EIO;
break;
case SA_AIS_ERR_QUEUE_FULL:
errno = EXFULL;
break;
case SA_AIS_ERR_QUEUE_NOT_AVAILABLE:
errno = EINVAL;
break;
case SA_AIS_ERR_BAD_FLAGS:
errno = EINVAL;
break;
case SA_AIS_ERR_TOO_BIG:
errno = E2BIG;
break;
case SA_AIS_ERR_NO_SECTIONS:
errno = ENOMEM;
break;
default:
errno = EINVAL;
break;
}
return -1;
}
static char *print_openais_csid(const char *csid)
{
static char buf[128];
int id;
memcpy(&id, csid, sizeof(int));
sprintf(buf, "%d", id);
return buf;
}
static int add_internal_client(int fd, fd_callback_t callback)
{
struct local_client *client;
DEBUGLOG("Add_internal_client, fd = %d\n", fd);
if (!(client = dm_zalloc(sizeof(*client)))) {
DEBUGLOG("malloc failed\n");
return -1;
}
client->fd = fd;
client->type = CLUSTER_INTERNAL;
client->callback = callback;
add_client(client);
/* Set Close-on-exec */
fcntl(fd, F_SETFD, 1);
return 0;
}
static void openais_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len)
{
int target_nodeid;
memcpy(&target_nodeid, msg, OPENAIS_CSID_LEN);
DEBUGLOG("%u got message from nodeid %d for %d. len %" PRIsize_t "\n",
our_nodeid, nodeid, target_nodeid, msg_len-4);
if (nodeid != our_nodeid)
if (target_nodeid == our_nodeid || target_nodeid == 0)
process_message(cluster_client, (char *)msg+OPENAIS_CSID_LEN,
msg_len-OPENAIS_CSID_LEN, (char*)&nodeid);
}
static void openais_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries)
{
int i;
struct node_info *ninfo;
DEBUGLOG("confchg callback. %" PRIsize_t " joined, "
FMTsize_t " left, %" PRIsize_t " members\n",
joined_list_entries, left_list_entries, member_list_entries);
for (i=0; i<joined_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&joined_list[i].nodeid,
OPENAIS_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = joined_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
OPENAIS_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
for (i=0; i<left_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&left_list[i].nodeid,
OPENAIS_CSID_LEN);
if (ninfo)
ninfo->state = NODE_DOWN;
}
for (i=0; i<member_list_entries; i++) {
if (member_list[i].nodeid == 0) continue;
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&member_list[i].nodeid,
OPENAIS_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = member_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
OPENAIS_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
num_nodes = member_list_entries;
}
static int lck_dispatch(struct local_client *client, char *buf, int len,
const char *csid, struct local_client **new_client)
{
*new_client = NULL;
saLckDispatch(lck_handle, SA_DISPATCH_ONE);
return 1;
}
static int _init_cluster(void)
{
SaAisErrorT err;
SaVersionT ver = { 'B', 1, 1 };
int select_fd;
node_hash = dm_hash_create(100);
lock_hash = dm_hash_create(10);
err = cpg_initialize(&cpg_handle,
&openais_cpg_callbacks);
if (err != SA_AIS_OK) {
syslog(LOG_ERR, "Cannot initialise OpenAIS CPG service: %d",
err);
DEBUGLOG("Cannot initialise OpenAIS CPG service: %d", err);
return ais_to_errno(err);
}
err = saLckInitialize(&lck_handle,
NULL,
&ver);
if (err != SA_AIS_OK) {
cpg_initialize(&cpg_handle, &openais_cpg_callbacks);
syslog(LOG_ERR, "Cannot initialise OpenAIS lock service: %d",
err);
DEBUGLOG("Cannot initialise OpenAIS lock service: %d\n\n", err);
return ais_to_errno(err);
}
/* Connect to the clvmd group */
strcpy((char *)cpg_group_name.value, "clvmd");
cpg_group_name.length = strlen((char *)cpg_group_name.value);
err = cpg_join(cpg_handle, &cpg_group_name);
if (err != SA_AIS_OK) {
cpg_finalize(cpg_handle);
saLckFinalize(lck_handle);
syslog(LOG_ERR, "Cannot join clvmd process group");
DEBUGLOG("Cannot join clvmd process group: %d\n", err);
return ais_to_errno(err);
}
err = cpg_local_get(cpg_handle,
&our_nodeid);
if (err != SA_AIS_OK) {
cpg_finalize(cpg_handle);
saLckFinalize(lck_handle);
syslog(LOG_ERR, "Cannot get local node id\n");
return ais_to_errno(err);
}
DEBUGLOG("Our local node id is %d\n", our_nodeid);
saLckSelectionObjectGet(lck_handle, (SaSelectionObjectT *)&select_fd);
add_internal_client(select_fd, lck_dispatch);
DEBUGLOG("Connected to OpenAIS\n");
return 0;
}
static void _cluster_closedown(void)
{
saLckFinalize(lck_handle);
cpg_finalize(cpg_handle);
}
static void _get_our_csid(char *csid)
{
memcpy(csid, &our_nodeid, sizeof(int));
}
/* OpenAIS doesn't really have node names, so we
   just use the node ID in hex instead */
static int _csid_from_name(char *csid, const char *name)
{
int nodeid;
struct node_info *ninfo;
if (sscanf(name, "%x", &nodeid) == 1) {
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (ninfo)
return nodeid;
}
return -1;
}
static int _name_from_csid(const char *csid, char *name)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (!ninfo)
{
sprintf(name, "UNKNOWN %s", print_openais_csid(csid));
return -1;
}
sprintf(name, "%x", ninfo->nodeid);
return 0;
}
static int _get_num_nodes()
{
DEBUGLOG("num_nodes = %d\n", num_nodes);
return num_nodes;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (!ninfo) {
DEBUGLOG("openais_add_up_node no node_hash entry for csid %s\n",
print_openais_csid(csid));
return;
}
DEBUGLOG("openais_add_up_node %d\n", ninfo->nodeid);
ninfo->state = NODE_CLVMD;
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
struct dm_hash_node *hn;
struct node_info *ninfo;
int somedown = 0;
dm_hash_iterate(hn, node_hash)
{
char csid[OPENAIS_CSID_LEN];
ninfo = dm_hash_get_data(node_hash, hn);
memcpy(csid, dm_hash_get_key(node_hash, hn), OPENAIS_CSID_LEN);
DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
ninfo->state);
if (ninfo->state != NODE_DOWN)
callback(master_client, csid, ninfo->state == NODE_CLVMD);
if (ninfo->state != NODE_CLVMD)
somedown = -1;
}
return somedown;
}
/* Real locking */
static int _lock_resource(char *resource, int mode, int flags, int *lockid)
{
struct lock_info *linfo;
SaLckResourceHandleT res_handle;
SaAisErrorT err;
SaLckLockIdT lock_id;
SaLckLockStatusT lockStatus;
/* This needs to be converted from DLM/LVM2 value for OpenAIS LCK */
if (flags & LCK_NONBLOCK) flags = SA_LCK_LOCK_NO_QUEUE;
linfo = malloc(sizeof(struct lock_info));
if (!linfo)
return -1;
DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
linfo->lock_name.length = strlen(resource)+1;
strcpy((char *)linfo->lock_name.value, resource);
err = saLckResourceOpen(lck_handle, &linfo->lock_name,
SA_LCK_RESOURCE_CREATE, TIMEOUT, &res_handle);
if (err != SA_AIS_OK)
{
DEBUGLOG("ResourceOpen returned %d\n", err);
free(linfo);
return ais_to_errno(err);
}
err = saLckResourceLock(
res_handle,
&lock_id,
mode,
flags,
0,
SA_TIME_END,
&lockStatus);
if (err != SA_AIS_OK && lockStatus != SA_LCK_LOCK_GRANTED)
{
free(linfo);
saLckResourceClose(res_handle);
return ais_to_errno(err);
}
/* Wait for it to complete */
DEBUGLOG("lock_resource returning %d, lock_id=%" PRIx64 "\n",
err, lock_id);
linfo->lock_id = lock_id;
linfo->res_handle = res_handle;
dm_hash_insert(lock_hash, resource, linfo);
return ais_to_errno(err);
}
static int _unlock_resource(char *resource, int lockid)
{
SaAisErrorT err;
struct lock_info *linfo;
DEBUGLOG("unlock_resource %s\n", resource);
linfo = dm_hash_lookup(lock_hash, resource);
if (!linfo)
return 0;
DEBUGLOG("unlock_resource: lockid: %" PRIx64 "\n", linfo->lock_id);
err = saLckResourceUnlock(linfo->lock_id, SA_TIME_END);
if (err != SA_AIS_OK)
{
DEBUGLOG("Unlock returned %d\n", err);
return ais_to_errno(err);
}
/* Release the resource */
dm_hash_remove(lock_hash, resource);
saLckResourceClose(linfo->res_handle);
free(linfo);
return ais_to_errno(err);
}
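/*
 * The SA Forum lock service only provides PR and EX modes, so _sync_lock()
 * below emulates the richer DLM modes with two resources, "<name>-1" and
 * "<name>-2": LCK_EXCL takes both exclusively, LCK_READ/LCK_PREAD holds
 * "-1" shared and drops "-2", and LCK_WRITE holds "-2" exclusively and
 * drops "-1".  Writers therefore exclude each other, EX excludes everyone,
 * and readers can coexist with a single writer.
 */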
static int _sync_lock(const char *resource, int mode, int flags, int *lockid)
{
int status;
char lock1[strlen(resource)+3];
char lock2[strlen(resource)+3];
snprintf(lock1, sizeof(lock1), "%s-1", resource);
snprintf(lock2, sizeof(lock2), "%s-2", resource);
switch (mode)
{
case LCK_EXCL:
status = _lock_resource(lock1, SA_LCK_EX_LOCK_MODE, flags, lockid);
if (status)
goto out;
/* If we can't get this lock too then bail out */
status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, LCK_NONBLOCK,
lockid);
if (status == SA_LCK_LOCK_NOT_QUEUED)
{
_unlock_resource(lock1, *lockid);
status = -1;
errno = EAGAIN;
}
break;
case LCK_PREAD:
case LCK_READ:
status = _lock_resource(lock1, SA_LCK_PR_LOCK_MODE, flags, lockid);
if (status)
goto out;
_unlock_resource(lock2, *lockid);
break;
case LCK_WRITE:
status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, flags, lockid);
if (status)
goto out;
_unlock_resource(lock1, *lockid);
break;
default:
status = -1;
errno = EINVAL;
break;
}
out:
*lockid = mode;
return status;
}
static int _sync_unlock(const char *resource, int lockid)
{
int status = 0;
char lock1[strlen(resource)+3];
char lock2[strlen(resource)+3];
snprintf(lock1, sizeof(lock1), "%s-1", resource);
snprintf(lock2, sizeof(lock2), "%s-2", resource);
_unlock_resource(lock1, lockid);
_unlock_resource(lock2, lockid);
return status;
}
/* We are always quorate ! */
static int _is_quorate()
{
return 1;
}
static int _get_main_cluster_fd(void)
{
int select_fd;
cpg_fd_get(cpg_handle, &select_fd);
return select_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
cluster_client = fd;
*new_client = NULL;
cpg_dispatch(cpg_handle, SA_DISPATCH_ONE);
return 1;
}
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
struct iovec iov[2];
SaAisErrorT err;
int target_node;
if (csid)
memcpy(&target_node, csid, OPENAIS_CSID_LEN);
else
target_node = 0;
iov[0].iov_base = &target_node;
iov[0].iov_len = sizeof(int);
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
return ais_to_errno(err);
}
/* We don't have a cluster name to report here */
static int _get_cluster_name(char *buf, int buflen)
{
strncpy(buf, "OpenAIS", buflen);
return 0;
}
static struct cluster_ops _cluster_openais_ops = {
.name = "openais",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _sync_lock,
.sync_unlock = _sync_unlock,
};
struct cluster_ops *init_openais_cluster(void)
{
if (!_init_cluster())
return &_cluster_openais_ops;
return NULL;
}


@@ -1,382 +0,0 @@
/*
* Copyright (C) 2009-2013 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "locking.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include <sys/un.h>
#include <sys/socket.h>
#include <fcntl.h>
static const char SINGLENODE_CLVMD_SOCKNAME[] = DEFAULT_RUN_DIR "/clvmd_singlenode.sock";
static int listen_fd = -1;
static struct dm_hash_table *_locks;
static int _lockid;
static pthread_mutex_t _lock_mutex = PTHREAD_MUTEX_INITIALIZER;
/* Using one common condition for all locks for simplicity */
static pthread_cond_t _lock_cond = PTHREAD_COND_INITIALIZER;
struct lock {
struct dm_list list;
int lockid;
int mode;
};
static void close_comms(void)
{
if (listen_fd != -1 && close(listen_fd))
stack;
(void)unlink(SINGLENODE_CLVMD_SOCKNAME);
listen_fd = -1;
}
static int init_comms(void)
{
mode_t old_mask;
struct sockaddr_un addr = { .sun_family = AF_UNIX };
if (!dm_strncpy(addr.sun_path, SINGLENODE_CLVMD_SOCKNAME,
sizeof(addr.sun_path))) {
DEBUGLOG("%s: singlenode socket name too long.",
SINGLENODE_CLVMD_SOCKNAME);
return -1;
}
close_comms();
(void) dm_prepare_selinux_context(SINGLENODE_CLVMD_SOCKNAME, S_IFSOCK);
old_mask = umask(0077);
listen_fd = socket(PF_UNIX, SOCK_STREAM, 0);
if (listen_fd < 0) {
DEBUGLOG("Can't create local socket: %s\n", strerror(errno));
goto error;
}
/* Set Close-on-exec */
if (fcntl(listen_fd, F_SETFD, 1)) {
DEBUGLOG("Setting CLOEXEC on client fd failed: %s\n", strerror(errno));
goto error;
}
if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
DEBUGLOG("Can't bind local socket: %s\n", strerror(errno));
goto error;
}
if (listen(listen_fd, 10) < 0) {
DEBUGLOG("Can't listen local socket: %s\n", strerror(errno));
goto error;
}
umask(old_mask);
(void) dm_prepare_selinux_context(NULL, 0);
return 0;
error:
umask(old_mask);
(void) dm_prepare_selinux_context(NULL, 0);
close_comms();
return -1;
}
static int _init_cluster(void)
{
int r;
if (!(_locks = dm_hash_create(128))) {
DEBUGLOG("Failed to allocate single-node hash table.\n");
return 1;
}
r = init_comms();
if (r) {
dm_hash_destroy(_locks);
_locks = NULL;
return r;
}
DEBUGLOG("Single-node cluster initialised.\n");
return 0;
}
static void _cluster_closedown(void)
{
close_comms();
/* If there is any awaited resource, kill it softly */
pthread_mutex_lock(&_lock_mutex);
dm_hash_destroy(_locks);
_locks = NULL;
_lockid = 0;
pthread_cond_broadcast(&_lock_cond); /* wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
}
static void _get_our_csid(char *csid)
{
int nodeid = 1;
memcpy(csid, &nodeid, sizeof(int));
}
static int _csid_from_name(char *csid, const char *name)
{
return 1;
}
static int _name_from_csid(const char *csid, char *name)
{
strcpy(name, "SINGLENODE");
return 0;
}
static int _get_num_nodes(void)
{
return 1;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
return 0;
}
int _lock_file(const char *file, uint32_t flags);
static const char *_get_mode(int mode)
{
switch (mode) {
case LCK_NULL: return "NULL";
case LCK_READ: return "READ";
case LCK_PREAD: return "PREAD";
case LCK_WRITE: return "WRITE";
case LCK_EXCL: return "EXCLUSIVE";
case LCK_UNLOCK: return "UNLOCK";
default: return "????";
}
}
/* Real locking */
static int _lock_resource(const char *resource, int mode, int flags, int *lockid)
{
/* DLM table of allowed transition states */
static const int _dlm_table[6][6] = {
/* Mode NL CR CW PR PW EX */
/* NL */ { 1, 1, 1, 1, 1, 1},
/* CR */ { 1, 1, 1, 1, 1, 0},
/* CW */ { 1, 1, 1, 0, 0, 0},
/* PR */ { 1, 1, 0, 1, 0, 0},
/* PW */ { 1, 1, 0, 0, 0, 0},
/* EX */ { 1, 0, 0, 0, 0, 0}
};
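/* A 1 at _dlm_table[requested][held] means the requested mode is
 * compatible with a mode already held on the resource; a 0 makes the
 * caller wait on _lock_cond (or fail immediately with LCKF_NOQUEUE). */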
struct lock *lck = NULL, *lckt;
struct dm_list *head;
DEBUGLOG("Locking resource %s, flags=0x%02x (%s%s%s), mode=%s (%d)\n",
resource, flags,
(flags & LCKF_NOQUEUE) ? "NOQUEUE" : "",
((flags & (LCKF_NOQUEUE | LCKF_CONVERT)) ==
(LCKF_NOQUEUE | LCKF_CONVERT)) ? "|" : "",
(flags & LCKF_CONVERT) ? "CONVERT" : "",
_get_mode(mode), mode);
mode &= LCK_TYPE_MASK;
pthread_mutex_lock(&_lock_mutex);
retry:
if (!(head = dm_hash_lookup(_locks, resource))) {
if (flags & LCKF_CONVERT) {
/* In real DLM, lock is identified only by lockid, resource is not used */
DEBUGLOG("Unlocked resource %s cannot be converted\n", resource);
goto_bad;
}
/* Add new locked resource */
if (!(head = dm_malloc(sizeof(struct dm_list))) ||
!dm_hash_insert(_locks, resource, head)) {
dm_free(head);
goto_bad;
}
dm_list_init(head);
} else /* Update/convert locked resource */
dm_list_iterate_items(lck, head) {
/* Check if all locks are compatible with the requested lock */
if (flags & LCKF_CONVERT) {
if (lck->lockid != *lockid)
continue;
DEBUGLOG("Converting resource %s lockid=%d mode:%s -> %s...\n",
resource, lck->lockid, _get_mode(lck->mode), _get_mode(mode));
dm_list_iterate_items(lckt, head) {
if ((lckt->lockid != *lockid) &&
!_dlm_table[mode][lckt->mode]) {
if (!(flags & LCKF_NOQUEUE) &&
/* TODO: Real dlm uses here conversion queues */
!pthread_cond_wait(&_lock_cond, &_lock_mutex) &&
_locks) /* End of the game? */
goto retry;
goto bad;
}
}
lck->mode = mode; /* Lock is now converted */
goto out;
} else if (!_dlm_table[mode][lck->mode]) {
DEBUGLOG("Resource %s already locked lockid=%d, mode:%s\n",
resource, lck->lockid, _get_mode(lck->mode));
if (!(flags & LCKF_NOQUEUE) &&
!pthread_cond_wait(&_lock_cond, &_lock_mutex) &&
_locks) { /* End of the game? */
DEBUGLOG("Resource %s retrying lock in mode:%s...\n",
resource, _get_mode(mode));
goto retry;
}
goto bad;
}
}
if (!(flags & LCKF_CONVERT)) {
if (!(lck = dm_malloc(sizeof(struct lock))))
goto_bad;
*lockid = lck->lockid = ++_lockid;
lck->mode = mode;
dm_list_add(head, &lck->list);
}
out:
pthread_cond_broadcast(&_lock_cond); /* to wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Locked resource %s, lockid=%d, mode=%s\n",
resource, lck->lockid, _get_mode(lck->mode));
return 0;
bad:
pthread_cond_broadcast(&_lock_cond); /* to wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Failed to lock resource %s\n", resource);
return 1; /* fail */
}
static int _unlock_resource(const char *resource, int lockid)
{
struct lock *lck;
struct dm_list *head;
int r = 1;
if (lockid < 0) {
DEBUGLOG("Not tracking unlock of lockid -1: %s, lockid=%d\n",
resource, lockid);
return 1;
}
DEBUGLOG("Unlocking resource %s, lockid=%d\n", resource, lockid);
pthread_mutex_lock(&_lock_mutex);
pthread_cond_broadcast(&_lock_cond); /* wakeup waiters */
if (!(head = dm_hash_lookup(_locks, resource))) {
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Resource %s is not locked.\n", resource);
return 1;
}
dm_list_iterate_items(lck, head)
if (lck->lockid == lockid) {
dm_list_del(&lck->list);
dm_free(lck);
r = 0;
goto out;
}
DEBUGLOG("Resource %s has wrong lockid %d.\n", resource, lockid);
out:
if (dm_list_empty(head)) {
//DEBUGLOG("Resource %s is no longer hashed (lockid=%d).\n", resource, lockid);
dm_hash_remove(_locks, resource);
dm_free(head);
}
pthread_mutex_unlock(&_lock_mutex);
return r;
}
static int _is_quorate(void)
{
return 1;
}
static int _get_main_cluster_fd(void)
{
return listen_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
return 1;
}
static int _cluster_send_message(const void *buf, int msglen,
const char *csid,
const char *errtext)
{
return 0;
}
static int _get_cluster_name(char *buf, int buflen)
{
return dm_strncpy(buf, "localcluster", buflen) ? 0 : 1;
}
static struct cluster_ops _cluster_singlenode_ops = {
.name = "singlenode",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _lock_resource,
.sync_unlock = _unlock_resource,
};
struct cluster_ops *init_singlenode_cluster(void)
{
if (!_init_cluster())
return &_cluster_singlenode_ops;
return NULL;
}

File diff suppressed because it is too large.


@@ -1,126 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _CLVMD_H
#define _CLVMD_H
#define CLVMD_MAJOR_VERSION 0
#define CLVMD_MINOR_VERSION 2
#define CLVMD_PATCH_VERSION 1
/* Default time (in seconds) we will wait for all remote commands to execute
before declaring them dead */
#define DEFAULT_CMD_TIMEOUT 60
/* One of these for each reply we get from command execution on a node */
struct node_reply {
char node[MAX_CLUSTER_MEMBER_NAME_LEN];
char *replymsg;
int status;
struct node_reply *next;
};
typedef enum {DEBUG_OFF, DEBUG_STDERR, DEBUG_SYSLOG} debug_t;
/*
* These exist for the use of local sockets only when we are
* collecting responses from all cluster nodes
*/
struct localsock_bits {
struct node_reply *replies;
int num_replies;
int expected_replies;
time_t sent_time; /* So we can check for timeouts */
int in_progress; /* Only execute one cmd at a time per client */
int sent_out; /* Flag to indicate that a command was sent
to remote nodes */
void *private; /* Private area for command processor use */
void *cmd; /* Whole command as passed down local socket */
int cmd_len; /* Length of above */
int pipe; /* Pipe to send PRE completion status down */
int finished; /* Flag to tell subthread to exit */
int all_success; /* Set to 0 if any node (or the pre_command)
failed */
int cleanup_needed; /* helper for cleanup_zombie */
struct local_client *pipe_client;
pthread_t threadid;
enum { PRE_COMMAND, POST_COMMAND } state;
pthread_mutex_t mutex; /* Main thread and worker synchronisation */
pthread_cond_t cond;
};
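/*
 * Rough lifecycle of a local command (the main loop lives in clvmd.c):
 * a worker thread runs the PRE_COMMAND step and reports its status down
 * 'pipe'; the main thread forwards the command to the other nodes and
 * collects 'replies' until 'expected_replies' have arrived; the worker
 * is then woken for the POST_COMMAND step and the combined reply is
 * returned to the client.
 */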
/* Entries for PIPE clients */
struct pipe_bits {
struct local_client *client; /* Actual (localsock) client */
pthread_t threadid; /* Our own copy of the thread id */
};
/* Entries for Network socket clients */
struct netsock_bits {
void *private;
int flags;
};
typedef int (*fd_callback_t) (struct local_client * fd, char *buf, int len,
const char *csid,
struct local_client ** new_client);
/* One of these for each fd we are listening on */
struct local_client {
int fd;
enum { CLUSTER_MAIN_SOCK, CLUSTER_DATA_SOCK, LOCAL_RENDEZVOUS,
LOCAL_SOCK, THREAD_PIPE, CLUSTER_INTERNAL } type;
struct local_client *next;
unsigned short xid;
fd_callback_t callback;
uint8_t removeme;
union {
struct localsock_bits localsock;
struct pipe_bits pipe;
struct netsock_bits net;
} bits;
};
#define DEBUGLOG(fmt, args...) debuglog(fmt, ## args)
#ifndef max
#define max(a,b) ((a)>(b)?(a):(b))
#endif
/* The real command processor is in clvmd-command.c */
extern int do_command(struct local_client *client, struct clvm_header *msg,
int msglen, char **buf, int buflen, int *retlen);
/* Pre and post command routines are called only on the local node */
extern int do_pre_command(struct local_client *client);
extern int do_post_command(struct local_client *client);
extern void cmd_client_cleanup(struct local_client *client);
extern int add_client(struct local_client *new_client);
extern void clvmd_cluster_init_completed(void);
extern void process_message(struct local_client *client, char *buf,
int len, const char *csid);
extern void debuglog(const char *fmt, ... )
__attribute__ ((format(printf, 1, 2)));
void clvmd_set_debug(debug_t new_de);
debug_t clvmd_get_debug(void);
int clvmd_get_foreground(void);
int sync_lock(const char *resource, int mode, int flags, int *lockid);
int sync_unlock(const char *resource, int lockid);
#endif


@@ -1,939 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include "lvm-functions.h"
/* LVM2 headers */
#include "toolcontext.h"
#include "lvmcache.h"
#include "lvm-globals.h"
#include "activate.h"
#include "archiver.h"
#include "memlock.h"
#include <syslog.h>
static struct cmd_context *cmd = NULL;
static struct dm_hash_table *lv_hash = NULL;
static pthread_mutex_t lv_hash_lock;
static pthread_mutex_t lvm_lock;
static char last_error[1024];
struct lv_info {
int lock_id;
int lock_mode;
};
static const char *decode_full_locking_cmd(uint32_t cmdl)
{
static char buf[128];
const char *type;
const char *scope;
const char *command;
switch (cmdl & LCK_TYPE_MASK) {
case LCK_NULL:
type = "NULL";
break;
case LCK_READ:
type = "READ";
break;
case LCK_PREAD:
type = "PREAD";
break;
case LCK_WRITE:
type = "WRITE";
break;
case LCK_EXCL:
type = "EXCL";
break;
case LCK_UNLOCK:
type = "UNLOCK";
break;
default:
type = "unknown";
break;
}
switch (cmdl & LCK_SCOPE_MASK) {
case LCK_VG:
scope = "VG";
command = "LCK_VG";
break;
case LCK_LV:
scope = "LV";
switch (cmdl & LCK_MASK) {
case LCK_LV_EXCLUSIVE & LCK_MASK:
command = "LCK_LV_EXCLUSIVE";
break;
case LCK_LV_SUSPEND & LCK_MASK:
command = "LCK_LV_SUSPEND";
break;
case LCK_LV_RESUME & LCK_MASK:
command = "LCK_LV_RESUME";
break;
case LCK_LV_ACTIVATE & LCK_MASK:
command = "LCK_LV_ACTIVATE";
break;
case LCK_LV_DEACTIVATE & LCK_MASK:
command = "LCK_LV_DEACTIVATE";
break;
default:
command = "unknown";
break;
}
break;
default:
scope = "unknown";
command = "unknown";
break;
}
sprintf(buf, "0x%x %s (%s|%s%s%s%s%s)", cmdl, command, type, scope,
cmdl & LCK_NONBLOCK ? "|NONBLOCK" : "",
cmdl & LCK_HOLD ? "|HOLD" : "",
cmdl & LCK_CLUSTER_VG ? "|CLUSTER_VG" : "",
cmdl & LCK_CACHE ? "|CACHE" : "");
return buf;
}
/*
* Only processes 8 bits: excludes LCK_CACHE.
*/
static const char *decode_locking_cmd(unsigned char cmdl)
{
return decode_full_locking_cmd((uint32_t) cmdl);
}
static const char *decode_flags(unsigned char flags)
{
static char buf[128];
int len;
len = sprintf(buf, "0x%x ( %s%s%s%s%s%s%s%s)", flags,
flags & LCK_PARTIAL_MODE ? "PARTIAL_MODE|" : "",
flags & LCK_MIRROR_NOSYNC_MODE ? "MIRROR_NOSYNC|" : "",
flags & LCK_DMEVENTD_MONITOR_MODE ? "DMEVENTD_MONITOR|" : "",
flags & LCK_ORIGIN_ONLY_MODE ? "ORIGIN_ONLY|" : "",
flags & LCK_TEST_MODE ? "TEST|" : "",
flags & LCK_CONVERT_MODE ? "CONVERT|" : "",
flags & LCK_DMEVENTD_MONITOR_IGNORE ? "DMEVENTD_MONITOR_IGNORE|" : "",
flags & LCK_REVERT_MODE ? "REVERT|" : "");
if (len > 1)
buf[len - 2] = ' ';
else
buf[0] = '\0';
return buf;
}
char *get_last_lvm_error(void)
{
return last_error;
}
/*
* Hash lock info helpers
*/
static struct lv_info *lookup_info(const char *resource)
{
struct lv_info *lvi;
pthread_mutex_lock(&lv_hash_lock);
lvi = dm_hash_lookup(lv_hash, resource);
pthread_mutex_unlock(&lv_hash_lock);
return lvi;
}
static int insert_info(const char *resource, struct lv_info *lvi)
{
int ret;
pthread_mutex_lock(&lv_hash_lock);
ret = dm_hash_insert(lv_hash, resource, lvi);
pthread_mutex_unlock(&lv_hash_lock);
return ret;
}
static void remove_info(const char *resource)
{
int num_open;
pthread_mutex_lock(&lv_hash_lock);
dm_hash_remove(lv_hash, resource);
/* When the last lock is removed, validate that no devices are left open */
if (!dm_hash_get_first(lv_hash)) {
if (critical_section())
log_error(INTERNAL_ERROR "No volumes are locked however clvmd is in activation mode critical section.");
if ((num_open = dev_cache_check_for_open_devices()))
log_error(INTERNAL_ERROR "No volumes are locked however %d devices are still open.", num_open);
}
pthread_mutex_unlock(&lv_hash_lock);
}
/*
* Return the mode a lock is currently held at (or -1 if not held)
*/
static int get_current_lock(char *resource)
{
struct lv_info *lvi;
if ((lvi = lookup_info(resource)))
return lvi->lock_mode;
return -1;
}
void init_lvhash(void)
{
/* Create hash table for keeping LV locks & status */
lv_hash = dm_hash_create(1024);
pthread_mutex_init(&lv_hash_lock, NULL);
pthread_mutex_init(&lvm_lock, NULL);
}
/* Called at shutdown to tidy the lockspace */
void destroy_lvhash(void)
{
struct dm_hash_node *v;
struct lv_info *lvi;
char *resource;
int status;
pthread_mutex_lock(&lv_hash_lock);
dm_hash_iterate(v, lv_hash) {
lvi = dm_hash_get_data(lv_hash, v);
resource = dm_hash_get_key(lv_hash, v);
if ((status = sync_unlock(resource, lvi->lock_id)))
DEBUGLOG("unlock_all. unlock failed(%d): %s\n",
status, strerror(errno));
dm_free(lvi);
}
dm_hash_destroy(lv_hash);
lv_hash = NULL;
pthread_mutex_unlock(&lv_hash_lock);
}
/* Gets a real lock and keeps the info in the hash table */
static int hold_lock(char *resource, int mode, int flags)
{
int status;
int saved_errno;
struct lv_info *lvi;
/* Mask off invalid options */
flags &= LCKF_NOQUEUE | LCKF_CONVERT;
lvi = lookup_info(resource);
if (lvi) {
if (lvi->lock_mode == mode) {
DEBUGLOG("hold_lock, lock mode %d already held\n",
mode);
return 0;
}
if ((lvi->lock_mode == LCK_EXCL) && (mode == LCK_WRITE)) {
DEBUGLOG("hold_lock, lock already held LCK_EXCL, "
"ignoring LCK_WRITE request\n");
return 0;
}
}
/* Only allow explicit conversions */
if (lvi && !(flags & LCKF_CONVERT)) {
errno = EBUSY;
return -1;
}
if (lvi) {
/* Already exists - convert it */
status = sync_lock(resource, mode, flags, &lvi->lock_id);
saved_errno = errno;
if (!status)
lvi->lock_mode = mode;
else
DEBUGLOG("hold_lock. convert to %d failed: %s\n", mode,
strerror(errno));
errno = saved_errno;
} else {
if (!(lvi = dm_malloc(sizeof(struct lv_info)))) {
errno = ENOMEM;
return -1;
}
lvi->lock_mode = mode;
lvi->lock_id = 0;
status = sync_lock(resource, mode, flags & ~LCKF_CONVERT, &lvi->lock_id);
saved_errno = errno;
if (status) {
dm_free(lvi);
DEBUGLOG("hold_lock. lock at %d failed: %s\n", mode,
strerror(errno));
} else
if (!insert_info(resource, lvi)) {
errno = ENOMEM;
return -1;
}
errno = saved_errno;
}
return status;
}
/* Unlock and remove it from the hash table */
static int hold_unlock(char *resource)
{
struct lv_info *lvi;
int status;
int saved_errno;
if (!(lvi = lookup_info(resource))) {
DEBUGLOG("hold_unlock, lock not already held\n");
return 0;
}
status = sync_unlock(resource, lvi->lock_id);
saved_errno = errno;
if (!status) {
remove_info(resource);
dm_free(lvi);
} else {
DEBUGLOG("hold_unlock. unlock failed(%d): %s\n", status,
strerror(errno));
}
errno = saved_errno;
return status;
}
/* Watch the return codes here.
   liblvm API functions return 1 (true) for success, 0 (false) for failure and don't set errno.
   libdlm API functions return 0 for success, -1 for failure and do set errno.
   The functions here return 0 for success or >0 for failure (where the return code is errno).
*/
/* Activate LV exclusive or non-exclusive */
static int do_activate_lv(char *resource, unsigned char command, unsigned char lock_flags, int mode)
{
int oldmode;
int status;
int activate_lv;
int exclusive = 0;
struct lvinfo lvi;
/* Is it already open ? */
oldmode = get_current_lock(resource);
if (oldmode == mode && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_activate_lv, lock already held at %d\n", oldmode);
return 0; /* Nothing to do */
}
/* Does the config file want us to activate this LV ? */
if (!lv_activation_filter(cmd, resource, &activate_lv, NULL))
return EIO;
if (!activate_lv)
return 0; /* Success, we did nothing! */
/* Do we need to activate exclusively? */
if ((activate_lv == 2) || (mode == LCK_EXCL)) {
exclusive = 1;
mode = LCK_EXCL;
}
/*
* Try to get the lock if it's a clustered volume group.
* Use lock conversion only if requested, to prevent implicit conversion
* of exclusive lock to shared one during activation.
*/
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_lock(resource, mode, LCKF_NOQUEUE | ((lock_flags & LCK_CONVERT_MODE) ? LCKF_CONVERT:0));
if (status) {
/* Return an LVM-sensible error for this.
* Forcing EIO makes the upper level return this text
* rather than the strerror text for EAGAIN.
*/
if (errno == EAGAIN) {
sprintf(last_error, "Volume is busy on another node");
errno = EIO;
}
return errno;
}
}
/* If it's suspended then resume it */
if (!lv_info_by_lvid(cmd, resource, 0, &lvi, 0, 0))
goto error;
if (lvi.suspended) {
critical_section_inc(cmd, "resuming");
if (!lv_resume(cmd, resource, 0, NULL)) {
critical_section_dec(cmd, "resumed");
goto error;
}
}
/* Now activate it */
if (!lv_activate(cmd, resource, exclusive, 0, 0, NULL))
goto error;
return 0;
error:
if (!test_mode() && (oldmode == -1 || oldmode != mode))
(void)hold_unlock(resource);
return EIO;
}
/* Resume the LV if it was active */
static int do_resume_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode, origin_only, exclusive, revert;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_resume_lv, lock not already held\n");
return 0; /* We don't need to do anything */
}
origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
exclusive = (oldmode == LCK_EXCL) ? 1 : 0;
revert = (lock_flags & LCK_REVERT_MODE) ? 1 : 0;
if (!lv_resume_if_active(cmd, resource, origin_only, exclusive, revert, NULL))
return EIO;
return 0;
}
/* Suspend the device if active */
static int do_suspend_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode;
unsigned origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
unsigned exclusive;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_suspend_lv, lock not already held\n");
return 0; /* Not active, so it's OK */
}
exclusive = (oldmode == LCK_EXCL) ? 1 : 0;
/* Always call lv_suspend to read committed and precommitted data */
if (!lv_suspend_if_active(cmd, resource, origin_only, exclusive, NULL, NULL))
return EIO;
return 0;
}
static int do_deactivate_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode;
int status;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_deactivate_lock, lock not already held\n");
return 0; /* We don't need to do anything */
}
if (!lv_deactivate(cmd, resource, NULL))
return EIO;
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_unlock(resource);
if (status)
return errno;
}
return 0;
}
const char *do_lock_query(char *resource)
{
int mode;
const char *type;
mode = get_current_lock(resource);
switch (mode) {
case LCK_NULL: type = "NL"; break;
case LCK_READ: type = "CR"; break;
case LCK_PREAD:type = "PR"; break;
case LCK_WRITE:type = "PW"; break;
case LCK_EXCL: type = "EX"; break;
default: type = NULL;
}
DEBUGLOG("do_lock_query: resource '%s', mode %i (%s)\n", resource, mode, type ?: "--");
return type;
}
/* This is the LOCK_LV part that happens on all nodes in the cluster -
it is responsible for the interaction with device-mapper and LVM */
int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
{
int status = 0;
DEBUGLOG("do_lock_lv: resource '%s', cmd = %s, flags = %s, critical_section = %d\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags), critical_section());
if (!cmd->initialized.config || config_files_changed(cmd)) {
/* Reinitialise various settings inc. logging, filters */
if (do_refresh_cache()) {
log_error("Updated config file invalid. Aborting.");
return EINVAL;
}
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
if (lock_flags & LCK_MIRROR_NOSYNC_MODE)
init_mirror_in_sync(1);
if (lock_flags & LCK_DMEVENTD_MONITOR_IGNORE)
init_dmeventd_monitor(DMEVENTD_MONITOR_IGNORE);
else {
if (lock_flags & LCK_DMEVENTD_MONITOR_MODE)
init_dmeventd_monitor(1);
else
init_dmeventd_monitor(0);
}
cmd->partial_activation = (lock_flags & LCK_PARTIAL_MODE) ? 1 : 0;
/* clvmd should never try to read a suspended device */
init_ignore_suspended_devices(1);
switch (command & LCK_MASK) {
case LCK_LV_EXCLUSIVE:
status = do_activate_lv(resource, command, lock_flags, LCK_EXCL);
break;
case LCK_LV_SUSPEND:
status = do_suspend_lv(resource, command, lock_flags);
break;
case LCK_UNLOCK:
case LCK_LV_RESUME: /* if active */
status = do_resume_lv(resource, command, lock_flags);
break;
case LCK_LV_ACTIVATE:
status = do_activate_lv(resource, command, lock_flags, LCK_READ);
break;
case LCK_LV_DEACTIVATE:
status = do_deactivate_lv(resource, command, lock_flags);
break;
default:
DEBUGLOG("Invalid LV command 0x%x\n", command);
status = EINVAL;
break;
}
if (lock_flags & LCK_MIRROR_NOSYNC_MODE)
init_mirror_in_sync(0);
cmd->partial_activation = 0;
/* clean the pool for another command */
dm_pool_empty(cmd->mem);
init_test(0);
pthread_mutex_unlock(&lvm_lock);
DEBUGLOG("Command return is %d, critical_section is %d\n", status, critical_section());
return status;
}
/* Functions to do on the local node only BEFORE the cluster-wide stuff above happens */
int pre_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
{
/* Nearly all the stuff happens cluster-wide. Apart from SUSPEND. Here we get the
lock out on this node (because we are the node modifying the metadata)
before suspending cluster-wide.
LCKF_CONVERT is used always, local node is going to modify metadata
*/
if ((command & (LCK_SCOPE_MASK | LCK_TYPE_MASK)) == LCK_LV_SUSPEND &&
(command & LCK_CLUSTER_VG)) {
DEBUGLOG("pre_lock_lv: resource '%s', cmd = %s, flags = %s\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags));
if (!(lock_flags & LCK_TEST_MODE) &&
hold_lock(resource, LCK_WRITE, LCKF_NOQUEUE | LCKF_CONVERT))
return errno;
}
return 0;
}
/* Functions to do on the local node only AFTER the cluster-wide stuff above happens */
int post_lock_lv(unsigned char command, unsigned char lock_flags,
char *resource)
{
int status;
unsigned origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
/* Opposite of above, done on resume after a metadata update */
if ((command & (LCK_SCOPE_MASK | LCK_TYPE_MASK)) == LCK_LV_RESUME &&
(command & LCK_CLUSTER_VG)) {
int oldmode;
DEBUGLOG("post_lock_lv: resource '%s', cmd = %s, flags = %s\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags));
/* If the lock state is PW then restore it to what it was */
oldmode = get_current_lock(resource);
if (oldmode == LCK_WRITE) {
struct lvinfo lvi;
pthread_mutex_lock(&lvm_lock);
status = lv_info_by_lvid(cmd, resource, origin_only, &lvi, 0, 0);
pthread_mutex_unlock(&lvm_lock);
if (!status)
return EIO;
if (!(lock_flags & LCK_TEST_MODE)) {
if (lvi.exists) {
if (hold_lock(resource, LCK_READ, LCKF_CONVERT))
return errno;
} else if (hold_unlock(resource))
return errno;
}
}
}
return 0;
}
/* Check if a VG is in use by LVM1 so we don't stomp on it */
int do_check_lvm1(const char *vgname)
{
int status;
status = check_lvm1_vg_inactive(cmd, vgname);
return status == 1 ? 0 : EBUSY;
}
int do_refresh_cache(void)
{
DEBUGLOG("Refreshing context\n");
log_notice("Refreshing context");
pthread_mutex_lock(&lvm_lock);
if (!refresh_toolcontext(cmd)) {
pthread_mutex_unlock(&lvm_lock);
return -1;
}
init_full_scan_done(0);
init_ignore_suspended_devices(1);
lvmcache_force_next_label_scan();
lvmcache_label_scan(cmd);
dm_pool_empty(cmd->mem);
pthread_mutex_unlock(&lvm_lock);
return 0;
}
/*
* Handle VG lock - drop metadata or update lvmcache state
*/
void do_lock_vg(unsigned char command, unsigned char lock_flags, char *resource)
{
uint32_t lock_cmd = command;
char *vgname = resource + 2;
lock_cmd &= (LCK_SCOPE_MASK | LCK_TYPE_MASK | LCK_HOLD);
/*
* Check if LCK_CACHE should be set. All P_ locks except # are cache related.
*/
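/*
 * Illustration (hypothetical VG name): for a VG called "myvg" the resource
 * "P_myvg" matches the test below and gets LCK_CACHE set, whereas the special
 * resource "P_#global" is excluded here and instead triggers a full cache
 * refresh further down.
 */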
if (strncmp(resource, "P_#", 3) && !strncmp(resource, "P_", 2))
lock_cmd |= LCK_CACHE;
DEBUGLOG("do_lock_vg: resource '%s', cmd = %s, flags = %s, critical_section = %d\n",
resource, decode_full_locking_cmd(lock_cmd), decode_flags(lock_flags), critical_section());
/* P_#global causes a full cache refresh */
if (!strcmp(resource, "P_" VG_GLOBAL)) {
do_refresh_cache();
return;
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
switch (lock_cmd) {
case LCK_VG_COMMIT:
DEBUGLOG("vg_commit notification for VG %s\n", vgname);
lvmcache_commit_metadata(vgname);
break;
case LCK_VG_REVERT:
DEBUGLOG("vg_revert notification for VG %s\n", vgname);
lvmcache_drop_metadata(vgname, 1);
break;
case LCK_VG_DROP_CACHE:
default:
DEBUGLOG("Invalidating cached metadata for VG %s\n", vgname);
lvmcache_drop_metadata(vgname, 0);
}
init_test(0);
pthread_mutex_unlock(&lvm_lock);
}
/*
* Ideally, clvmd should be started before any LVs are active
* but this may not be the case...
* I suppose this also comes in handy if clvmd crashes, not that it would!
*/
static int get_initial_state(struct dm_hash_table *excl_uuid)
{
int lock_mode;
char lv[65], vg[65], flags[26], vg_flags[26]; /* with space for '\0' */
char uuid[65];
char line[255];
char *lvs_cmd;
const char *lvm_binary = getenv("LVM_BINARY") ? : LVM_PATH;
FILE *lvs;
if (dm_asprintf(&lvs_cmd, "%s lvs --config 'log{command_names=0 prefix=\"\"}' "
"--nolocking --noheadings -o vg_uuid,lv_uuid,lv_attr,vg_attr",
lvm_binary) < 0)
return_0;
/* FIXME: Maybe link and use liblvm2cmd directly instead of fork */
if (!(lvs = popen(lvs_cmd, "r"))) {
dm_free(lvs_cmd);
return 0;
}
while (fgets(line, sizeof(line), lvs)) {
if (sscanf(line, "%64s %64s %25s %25s\n", vg, lv, flags, vg_flags) == 4) {
/* States: s:suspended a:active S:dropped snapshot I:invalid snapshot */
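/* Field order matches the -o list in lvs_cmd above: vg_uuid, lv_uuid,
   lv_attr (flags), vg_attr (vg_flags).  The LV state character checked
   below sits at flags[4] and the vg_attr 'clustered' flag at vg_flags[5]. */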
if (strlen(vg) == 38 && /* is it a valid UUID? */
(flags[4] == 'a' || flags[4] == 's') && /* is it active or suspended? */
vg_flags[5] == 'c') { /* is it clustered ? */
/* Convert hyphen-separated UUIDs into one */
memcpy(&uuid[0], &vg[0], 6);
memcpy(&uuid[6], &vg[7], 4);
memcpy(&uuid[10], &vg[12], 4);
memcpy(&uuid[14], &vg[17], 4);
memcpy(&uuid[18], &vg[22], 4);
memcpy(&uuid[22], &vg[27], 4);
memcpy(&uuid[26], &vg[32], 6);
memcpy(&uuid[32], &lv[0], 6);
memcpy(&uuid[38], &lv[7], 4);
memcpy(&uuid[42], &lv[12], 4);
memcpy(&uuid[46], &lv[17], 4);
memcpy(&uuid[50], &lv[22], 4);
memcpy(&uuid[54], &lv[27], 4);
memcpy(&uuid[58], &lv[32], 6);
uuid[64] = '\0';
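/* Worked example of the copies above: lvs reports each UUID as 38
   characters in 6-4-4-4-4-4-6 groups (hence the strlen(vg) == 38 check);
   dropping the six hyphens leaves 32 characters, and the bare VG and LV
   UUIDs are concatenated into the 64-character resource name used for
   the lock below. */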
/* Look for this lock in the list of EX locks
we were passed on the command-line */
lock_mode = (dm_hash_lookup(excl_uuid, uuid)) ?
LCK_EXCL : LCK_READ;
DEBUGLOG("getting initial lock for %s\n", uuid);
if (hold_lock(uuid, lock_mode, LCKF_NOQUEUE))
DEBUGLOG("Failed to hold lock %s\n", uuid);
}
}
}
if (pclose(lvs))
DEBUGLOG("lvs pclose failed: %s\n", strerror(errno));
dm_free(lvs_cmd);
return 1;
}
static void lvm2_log_fn(int level, const char *file, int line, int dm_errno,
const char *message)
{
/* Send messages to the normal LVM2 logging system too,
so we get debug output when it's asked for.
We need to NULL the function ptr otherwise it will just call
back into here! */
init_log_fn(NULL);
print_log(level, file, line, dm_errno, "%s", message);
init_log_fn(lvm2_log_fn);
/*
* Ignore non-error messages, but store the latest one for returning
* to the user.
*/
if (level != _LOG_ERR && level != _LOG_FATAL)
return;
strncpy(last_error, message, sizeof(last_error));
last_error[sizeof(last_error)-1] = '\0';
}
/* This checks some basic cluster-LVM configuration stuff */
static void check_config(void)
{
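/* For reference, the checks below correspond to lvm.conf settings along
 * the lines of (a sketch, not a complete configuration):
 *
 *   global {
 *       locking_type = 3        # built-in cluster locking
 *   }
 *
 * or locking_type = 2 combined with
 *   locking_library = "liblvm2clusterlock.so"
 */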
int locking_type;
locking_type = find_config_tree_int(cmd, global_locking_type_CFG, NULL);
if (locking_type == 3) /* compiled-in cluster support */
return;
if (locking_type == 2) { /* External library, check name */
const char *libname;
libname = find_config_tree_str(cmd, global_locking_library_CFG, NULL);
if (libname && strstr(libname, "liblvm2clusterlock.so"))
return;
log_error("Incorrect LVM locking library specified in lvm.conf, cluster operations may not work.");
return;
}
log_error("locking_type not set correctly in lvm.conf, cluster operations will not work.");
}
/* Backs up the LVM metadata if it has changed */
void lvm_do_backup(const char *vgname)
{
struct volume_group * vg;
int consistent = 0;
DEBUGLOG("Triggering backup of VG metadata for %s.\n", vgname);
pthread_mutex_lock(&lvm_lock);
vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, WARN_PV_READ, &consistent);
if (vg && consistent)
check_current_backup(vg);
else
log_error("Error backing up metadata, can't find VG for group %s", vgname);
release_vg(vg);
dm_pool_empty(cmd->mem);
pthread_mutex_unlock(&lvm_lock);
}
struct dm_hash_node *get_next_excl_lock(struct dm_hash_node *v, char **name)
{
struct lv_info *lvi;
*name = NULL;
if (!v)
v = dm_hash_get_first(lv_hash);
do {
if (v) {
lvi = dm_hash_get_data(lv_hash, v);
DEBUGLOG("Looking for EX locks. found %x mode %d\n", lvi->lock_id, lvi->lock_mode);
if (lvi->lock_mode == LCK_EXCL) {
*name = dm_hash_get_key(lv_hash, v);
}
v = dm_hash_get_next(lv_hash, v);
}
} while (v && !*name);
if (*name)
DEBUGLOG("returning EXclusive UUID %s\n", *name);
return v;
}
void lvm_do_fs_unlock(void)
{
pthread_mutex_lock(&lvm_lock);
DEBUGLOG("Syncing device names\n");
fs_unlock();
pthread_mutex_unlock(&lvm_lock);
}
/* Called to initialise the LVM context of the daemon */
int init_clvm(struct dm_hash_table *excl_uuid)
{
/* Use LOG_DAEMON for syslog messages instead of LOG_USER */
init_syslog(LOG_DAEMON);
openlog("clvmd", LOG_PID, LOG_DAEMON);
/* Initialise already held locks */
if (!get_initial_state(excl_uuid))
log_error("Cannot load initial lock states.");
if (!udev_init_library_context())
stack;
if (!(cmd = create_toolcontext(1, NULL, 0, 1, 1, 1))) {
log_error("Failed to allocate command context");
udev_fin_library_context();
return 0;
}
if (stored_errno()) {
destroy_toolcontext(cmd);
return 0;
}
cmd->cmd_line = "clvmd";
/* Check lvm.conf is setup for cluster-LVM */
check_config();
init_ignore_suspended_devices(1);
/* Trap log messages so we can pass them back to the user */
init_log_fn(lvm2_log_fn);
memlock_inc_daemon(cmd);
return 1;
}
void destroy_lvm(void)
{
if (cmd) {
memlock_dec_daemon(cmd);
destroy_toolcontext(cmd);
udev_fin_library_context();
cmd = NULL;
}
}


@ -1,41 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* Functions in lvm-functions.c */
#ifndef _LVM_FUNCTIONS_H
#define _LVM_FUNCTIONS_H
extern int pre_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern int do_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern const char *do_lock_query(char *resource);
extern int post_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern int do_check_lvm1(const char *vgname);
extern int do_refresh_cache(void);
extern int init_clvm(struct dm_hash_table *excl_uuid);
extern void destroy_lvm(void);
extern void init_lvhash(void);
extern void destroy_lvhash(void);
extern void lvm_do_backup(const char *vgname);
extern char *get_last_lvm_error(void);
extern void do_lock_vg(unsigned char command, unsigned char lock_flags,
char *resource);
extern struct dm_hash_node *get_next_excl_lock(struct dm_hash_node *v, char **name);
void lvm_do_fs_unlock(void);
#endif


@ -1,382 +0,0 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* FIXME Remove duplicated functions from this file. */
/*
* Send a command to a running clvmd from the command-line
*/
#include "clvmd-common.h"
#include "clvm.h"
#include "refresh_clvmd.h"
#include <stddef.h>
#include <sys/socket.h>
#include <sys/un.h>
typedef struct lvm_response {
char node[255];
char *response;
int status;
int len;
} lvm_response_t;
/*
* This gets stuck at the start of memory we allocate so we
* can sanity-check it at deallocation time
*/
#define LVM_SIGNATURE 0x434C564D
static int _clvmd_sock = -1;
/* Open connection to the clvm daemon */
static int _open_local_sock(void)
{
int local_socket;
struct sockaddr_un sockaddr = { .sun_family = AF_UNIX };
if (!dm_strncpy(sockaddr.sun_path, CLVMD_SOCKNAME, sizeof(sockaddr.sun_path))) {
fprintf(stderr, "%s: clvmd socket name too long.", CLVMD_SOCKNAME);
return -1;
}
/* Open local socket */
if ((local_socket = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
fprintf(stderr, "Local socket creation failed: %s", strerror(errno));
return -1;
}
if (connect(local_socket,(struct sockaddr *) &sockaddr,
sizeof(sockaddr))) {
int saved_errno = errno;
fprintf(stderr, "connect() failed on local socket: %s\n",
strerror(errno));
if (close(local_socket))
return -1;
errno = saved_errno;
return -1;
}
return local_socket;
}
/* Send a request and return the status */
static int _send_request(const char *inbuf, int inlen, char **retbuf, int no_response)
{
char outbuf[PIPE_BUF];
struct clvm_header *outheader = (struct clvm_header *) outbuf;
int len;
unsigned off;
int buflen;
int err;
/* Send it to CLVMD */
rewrite:
if ( (err = write(_clvmd_sock, inbuf, inlen)) != inlen) {
if (err == -1 && errno == EINTR)
goto rewrite;
fprintf(stderr, "Error writing data to clvmd: %s", strerror(errno));
return 0;
}
if (no_response)
return 1;
/* Get the response */
reread:
if ((len = read(_clvmd_sock, outbuf, sizeof(struct clvm_header))) < 0) {
if (errno == EINTR)
goto reread;
fprintf(stderr, "Error reading data from clvmd: %s", strerror(errno));
return 0;
}
if (len == 0) {
fprintf(stderr, "EOF reading CLVMD");
errno = ENOTCONN;
return 0;
}
/* Allocate buffer */
buflen = len + outheader->arglen;
*retbuf = dm_malloc(buflen);
if (!*retbuf) {
errno = ENOMEM;
return 0;
}
/* Copy the header */
memcpy(*retbuf, outbuf, len);
outheader = (struct clvm_header *) *retbuf;
/* Read the returned values */
off = 1; /* we've already read the first byte */
while (off <= outheader->arglen && len > 0) {
len = read(_clvmd_sock, outheader->args + off,
buflen - off - offsetof(struct clvm_header, args));
if (len > 0)
off += len;
}
/* Was it an error ? */
if (outheader->status != 0) {
errno = outheader->status;
/* Only return an error here if there are no node-specific
errors present in the message that might have more detail */
if (!(outheader->flags & CLVMD_FLAG_NODEERRS)) {
fprintf(stderr, "cluster request failed: %s\n", strerror(errno));
return 0;
}
}
return 1;
}
/* Build the structure header and parse-out wildcard node names */
static void _build_header(struct clvm_header *head, int cmd, const char *node,
unsigned int len)
{
head->cmd = cmd;
head->status = 0;
head->flags = 0;
head->xid = 0;
head->clientid = 0;
if (len)
/* 1 byte is used from struct clvm_header.args[1], so -> len - 1 */
head->arglen = len - 1;
else {
head->arglen = 0;
*head->args = '\0';
}
/*
* Translate special node names.
*/
if (!node || !strcmp(node, NODE_ALL))
head->node[0] = '\0';
else if (!strcmp(node, NODE_LOCAL)) {
head->node[0] = '\0';
head->flags = CLVMD_FLAG_LOCAL;
} else
strcpy(head->node, node);
}
/*
* Send a message to one (or all) node(s) in the cluster and wait for replies
*/
static int _cluster_request(char cmd, const char *node, void *data, int len,
lvm_response_t ** response, int *num, int no_response)
{
char outbuf[sizeof(struct clvm_header) + len + strlen(node) + 1];
char *inptr;
char *retbuf = NULL;
int status;
int i;
int num_responses = 0;
struct clvm_header *head = (struct clvm_header *) outbuf;
lvm_response_t *rarray;
*num = 0;
if (_clvmd_sock == -1)
_clvmd_sock = _open_local_sock();
if (_clvmd_sock == -1)
return 0;
_build_header(head, cmd, node, len);
if (len)
memcpy(head->node + strlen(head->node) + 1, data, len);
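/* Resulting request layout in outbuf, as assembled above: the fixed
   clvm_header fields, then the node name with its terminating '\0',
   then len bytes of argument data, which is why _send_request() below
   is passed header size + node name length + len. */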
status = _send_request(outbuf, sizeof(struct clvm_header) +
strlen(head->node) + len, &retbuf, no_response);
if (!status || no_response)
goto out;
/* Count the number of responses we got */
head = (struct clvm_header *) retbuf;
inptr = head->args;
while (inptr[0]) {
num_responses++;
inptr += strlen(inptr) + 1;
inptr += sizeof(int);
inptr += strlen(inptr) + 1;
}
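/* Each reply in head->args is a back-to-back triple: node name + '\0',
   an int status, then the response text + '\0'.  The same walk is
   repeated below to unpack the replies into the lvm_response_t array. */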
/*
* Allocate response array.
* With an extra pair of INTs on the front to sanity
* check the pointer when we are given it back to free
*/
*response = NULL;
if (!(rarray = dm_malloc(sizeof(lvm_response_t) * num_responses +
sizeof(int) * 2))) {
errno = ENOMEM;
status = 0;
goto out;
}
/* Unpack the response into an lvm_response_t array */
inptr = head->args;
i = 0;
while (inptr[0]) {
strcpy(rarray[i].node, inptr);
inptr += strlen(inptr) + 1;
memcpy(&rarray[i].status, inptr, sizeof(int));
inptr += sizeof(int);
rarray[i].response = dm_malloc(strlen(inptr) + 1);
if (rarray[i].response == NULL) {
/* Free up everything else and return error */
int j;
for (j = 0; j < i; j++)
dm_free(rarray[i].response);
dm_free(rarray);
errno = ENOMEM;
status = 0;
goto out;
}
strcpy(rarray[i].response, inptr);
rarray[i].len = strlen(inptr);
inptr += strlen(inptr) + 1;
i++;
}
*num = num_responses;
*response = rarray;
out:
dm_free(retbuf);
return status;
}
/* Free reply array */
static int _cluster_free_request(lvm_response_t * response, int num)
{
int i;
for (i = 0; i < num; i++) {
dm_free(response[i].response);
}
dm_free(response);
return 1;
}
int refresh_clvmd(int all_nodes)
{
int num_responses;
char args[1]; // No args really.
lvm_response_t *response = NULL;
int saved_errno;
int status;
int i;
status = _cluster_request(CLVMD_CMD_REFRESH, all_nodes ? NODE_ALL : NODE_LOCAL, args, 0, &response, &num_responses, 0);
/* If any nodes were down then display them and return an error */
for (i = 0; i < num_responses; i++) {
if (response[i].status == EHOSTDOWN) {
fprintf(stderr, "clvmd not running on node %s",
response[i].node);
status = 0;
errno = response[i].status;
} else if (response[i].status) {
fprintf(stderr, "Error resetting node %s: %s",
response[i].node,
response[i].response[0] ?
response[i].response :
strerror(response[i].status));
status = 0;
errno = response[i].status;
}
}
saved_errno = errno;
_cluster_free_request(response, num_responses);
errno = saved_errno;
return status;
}
int restart_clvmd(int all_nodes)
{
int dummy, status;
status = _cluster_request(CLVMD_CMD_RESTART, all_nodes ? NODE_ALL : NODE_LOCAL, NULL, 0, NULL, &dummy, 1);
/*
* FIXME: we cannot receive response, clvmd re-exec before it.
* but also should not close socket too early (the whole rq is dropped then).
* FIXME: This should be handled this way:
* - client waits for RESTART ack (and socket close)
* - server restarts
* - client checks that server is ready again (VERSION command?)
*/
usleep(500000);
return status;
}
int debug_clvmd(int level, int clusterwide)
{
int num_responses;
char args[1];
const char *nodes;
lvm_response_t *response = NULL;
int saved_errno;
int status;
int i;
args[0] = level;
if (clusterwide)
nodes = NODE_ALL;
else
nodes = NODE_LOCAL;
status = _cluster_request(CLVMD_CMD_SET_DEBUG, nodes, args, 1, &response, &num_responses, 0);
/* If any nodes were down then display them and return an error */
for (i = 0; i < num_responses; i++) {
if (response[i].status == EHOSTDOWN) {
fprintf(stderr, "clvmd not running on node %s",
response[i].node);
status = 0;
errno = response[i].status;
} else if (response[i].status) {
fprintf(stderr, "Error setting debug on node %s: %s",
response[i].node,
response[i].response[0] ?
response[i].response :
strerror(response[i].status));
status = 0;
errno = response[i].status;
}
}
saved_errno = errno;
_cluster_free_request(response, num_responses);
errno = saved_errno;
return status;
}


@ -1,19 +0,0 @@
/*
* Copyright (C) 2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
int refresh_clvmd(int all_nodes);
int restart_clvmd(int all_nodes);
int debug_clvmd(int level, int clusterwide);


@ -17,8 +17,6 @@ top_builddir = @top_builddir@
CPG_LIBS = @CPG_LIBS@
CPG_CFLAGS = @CPG_CFLAGS@
SACKPT_LIBS = @SACKPT_LIBS@
SACKPT_CFLAGS = @SACKPT_CFLAGS@
SOURCES = clogd.c cluster.c compat.c functions.c link_mon.c local.c logging.c
@ -26,14 +24,13 @@ TARGETS = cmirrord
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper
LMLIBS += $(CPG_LIBS) $(SACKPT_LIBS)
CFLAGS += $(CPG_CFLAGS) $(SACKPT_CFLAGS) $(EXTRA_EXEC_CFLAGS)
LMLIBS += $(CPG_LIBS)
CFLAGS += $(CPG_CFLAGS) $(EXTRA_EXEC_CFLAGS)
LDFLAGS += $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
cmirrord: $(OBJECTS) $(top_builddir)/lib/liblvm-internal.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) \
$(LVMLIBS) $(LMLIBS) $(LIBS)
$(LVMLIBS) $(LMLIBS) $(INTERNAL_LIBS) $(LIBS)
install: $(TARGETS)
$(INSTALL_PROGRAM) -D cmirrord $(usrsbindir)/cmirrord


@ -16,7 +16,10 @@
#include "functions.h"
#include "link_mon.h"
#include "local.h"
#include "xlate.h"
#include "lib/mm/xlate.h"
/* FIXME: remove this and the code */
#define CMIRROR_HAS_CHECKPOINT 0
#include <corosync/cpg.h>
#include <errno.h>
@ -166,6 +169,9 @@ int cluster_send(struct clog_request *rq)
{
int r;
int found = 0;
#if CMIRROR_HAS_CHECKPOINT
int count = 0;
#endif
struct iovec iov;
struct clog_cpg *entry;
@ -203,8 +209,6 @@ int cluster_send(struct clog_request *rq)
#if CMIRROR_HAS_CHECKPOINT
do {
int count = 0;
r = cpg_mcast_joined(entry->handle, CPG_TYPE_AGREED, &iov, 1);
if (r != SA_AIS_ERR_TRY_AGAIN)
break;
@ -1630,7 +1634,7 @@ int create_cluster_cpg(char *uuid, uint64_t luid)
size = ((strlen(uuid) + 1) > CPG_MAX_NAME_LENGTH) ?
CPG_MAX_NAME_LENGTH : (strlen(uuid) + 1);
strncpy(new->name.value, uuid, size);
(void) dm_strncpy(new->name.value, uuid, size);
new->name.length = (uint32_t)size;
new->luid = luid;


@ -12,8 +12,8 @@
#ifndef _LVM_CLOG_CLUSTER_H
#define _LVM_CLOG_CLUSTER_H
#include "dm-log-userspace.h"
#include "libdevmapper.h"
#include "device_mapper/misc/dm-log-userspace.h"
#include "device_mapper/all.h"
#define DM_ULOG_RESPONSE 0x1000U /* in last byte of 32-bit value */
#define DM_ULOG_CHECKPOINT_READY 21


@ -8,7 +8,7 @@
#include "logging.h"
#include "cluster.h"
#include "compat.h"
#include "xlate.h"
#include "lib/mm/xlate.h"
#include <errno.h>


@ -11,6 +11,7 @@
*/
#include "logging.h"
#include "functions.h"
#include "base/memory/zalloc.h"
#include <sys/sysmacros.h>
#include <dirent.h>
@ -435,7 +436,7 @@ static int _clog_ctr(char *uuid, uint64_t luid,
block_on_error = 1;
}
lc = dm_zalloc(sizeof(*lc));
lc = zalloc(sizeof(*lc));
if (!lc) {
LOG_ERROR("Unable to allocate cluster log context");
r = -ENOMEM;
@ -451,15 +452,19 @@ static int _clog_ctr(char *uuid, uint64_t luid,
lc->skip_bit_warning = region_count;
lc->disk_fd = -1;
lc->log_dev_failed = 0;
strncpy(lc->uuid, uuid, DM_UUID_LEN);
if (!dm_strncpy(lc->uuid, uuid, DM_UUID_LEN)) {
LOG_ERROR("Cannot use too long UUID %s.", uuid);
r = -EINVAL;
goto fail;
}
lc->luid = luid;
if (get_log(lc->uuid, lc->luid) ||
get_pending_log(lc->uuid, lc->luid)) {
LOG_ERROR("[%s/%" PRIu64 "u] Log already exists, unable to create.",
SHORT_UUID(lc->uuid), lc->luid);
dm_free(lc);
return -EINVAL;
r = -EINVAL;
goto fail;
}
dm_list_init(&lc->mark_list);
@ -528,9 +533,9 @@ fail:
LOG_ERROR("Close device error, %s: %s",
disk_path, strerror(errno));
free(lc->disk_buffer);
dm_free(lc->sync_bits);
dm_free(lc->clean_bits);
dm_free(lc);
free(lc->sync_bits);
free(lc->clean_bits);
free(lc);
}
return r;
}
@ -655,9 +660,9 @@ static int clog_dtr(struct dm_ulog_request *rq)
strerror(errno));
if (lc->disk_buffer)
free(lc->disk_buffer);
dm_free(lc->clean_bits);
dm_free(lc->sync_bits);
dm_free(lc);
free(lc->clean_bits);
free(lc->sync_bits);
free(lc);
return 0;
}


@ -12,7 +12,7 @@
#ifndef _LVM_CLOG_FUNCTIONS_H
#define _LVM_CLOG_FUNCTIONS_H
#include "dm-log-userspace.h"
#include "device_mapper/misc/dm-log-userspace.h"
#include "cluster.h"
#define LOG_RESUMED 1


@ -14,7 +14,6 @@
#define _LVM_CLOG_LOGGING_H
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include "configure.h"
#include <stdio.h>


@ -57,13 +57,13 @@ all: device-mapper
device-mapper: $(TARGETS)
CFLAGS_dmeventd.o += $(EXTRA_EXEC_CFLAGS)
LIBS += -ldevmapper $(PTHREAD_LIBS)
LIBS += $(PTHREAD_LIBS)
dmeventd: $(LIB_SHARED) dmeventd.o
$(CC) $(CFLAGS) -L. $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) dmeventd.o \
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(LIBS)
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(INTERNAL_LIBS) $(LIBS) -lm
dmeventd.static: $(LIB_STATIC) dmeventd.o $(interfacebuilddir)/libdevmapper.a
dmeventd.static: $(LIB_STATIC) dmeventd.o
$(CC) $(CFLAGS) $(LDFLAGS) -static -L. -L$(interfacebuilddir) dmeventd.o \
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(LIBS) $(STATIC_LIBS)
@ -73,7 +73,6 @@ endif
ifneq ("$(CFLOW_CMD)", "")
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
-include $(top_builddir)/libdm/libdevmapper.cflow
-include $(top_builddir)/lib/liblvm-internal.cflow
-include $(top_builddir)/lib/liblvm2cmd.cflow
-include $(top_builddir)/daemons/dmeventd/$(LIB_NAME).cflow


@ -16,12 +16,14 @@
* dmeventd - dm event daemon to monitor active mapped devices
*/
#include "dm-logging.h"
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "device_mapper/misc/dm-logging.h"
#include "libdevmapper-event.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "dmeventd.h"
#include "tool.h"
#include "tools/tool.h"
#include <dlfcn.h>
#include <pthread.h>
@ -264,19 +266,19 @@ static pthread_cond_t _timeout_cond = PTHREAD_COND_INITIALIZER;
/* DSO data allocate/free. */
static void _free_dso_data(struct dso_data *data)
{
dm_free(data->dso_name);
dm_free(data);
free(data->dso_name);
free(data);
}
static struct dso_data *_alloc_dso_data(struct message_data *data)
{
struct dso_data *ret = (typeof(ret)) dm_zalloc(sizeof(*ret));
struct dso_data *ret = (typeof(ret)) zalloc(sizeof(*ret));
if (!ret)
return_NULL;
if (!(ret->dso_name = dm_strdup(data->dso_name))) {
dm_free(ret);
if (!(ret->dso_name = strdup(data->dso_name))) {
free(ret);
return_NULL;
}
@ -397,9 +399,9 @@ static void _free_thread_status(struct thread_status *thread)
_lib_put(thread->dso_data);
if (thread->wait_task)
dm_task_destroy(thread->wait_task);
dm_free(thread->device.uuid);
dm_free(thread->device.name);
dm_free(thread);
free(thread->device.uuid);
free(thread->device.name);
free(thread);
}
/* Note: events_field must not be 0, ensured by caller */
@ -408,7 +410,7 @@ static struct thread_status *_alloc_thread_status(const struct message_data *dat
{
struct thread_status *thread;
if (!(thread = dm_zalloc(sizeof(*thread)))) {
if (!(thread = zalloc(sizeof(*thread)))) {
log_error("Cannot create new thread, out of memory.");
return NULL;
}
@ -422,11 +424,11 @@ static struct thread_status *_alloc_thread_status(const struct message_data *dat
if (!dm_task_set_uuid(thread->wait_task, data->device_uuid))
goto_out;
if (!(thread->device.uuid = dm_strdup(data->device_uuid)))
if (!(thread->device.uuid = strdup(data->device_uuid)))
goto_out;
/* Until real name resolved, use UUID */
if (!(thread->device.name = dm_strdup(data->device_uuid)))
if (!(thread->device.name = strdup(data->device_uuid)))
goto_out;
/* runs ioctl and may register lvm2 plugin */
@ -515,7 +517,7 @@ static int _fetch_string(char **ptr, char **src, const int delimiter)
if ((p = strchr(*src, delimiter))) {
if (*src < p) {
*p = 0; /* Temporary exit with \0 */
if (!(*ptr = dm_strdup(*src))) {
if (!(*ptr = strdup(*src))) {
log_error("Failed to fetch item %s.", *src);
ret = 0; /* Allocation fail */
}
@ -525,7 +527,7 @@ static int _fetch_string(char **ptr, char **src, const int delimiter)
(*src)++; /* Skip delimiter, next field */
} else if ((len = strlen(*src))) {
/* No delimiter, item ends with '\0' */
if (!(*ptr = dm_strdup(*src))) {
if (!(*ptr = strdup(*src))) {
log_error("Failed to fetch last item %s.", *src);
ret = 0; /* Fail */
}
@ -538,11 +540,11 @@ out:
/* Free message memory. */
static void _free_message(struct message_data *message_data)
{
dm_free(message_data->id);
dm_free(message_data->dso_name);
dm_free(message_data->device_uuid);
dm_free(message_data->events_str);
dm_free(message_data->timeout_str);
free(message_data->id);
free(message_data->dso_name);
free(message_data->device_uuid);
free(message_data->events_str);
free(message_data->timeout_str);
}
/* Parse a register message from the client. */
@ -574,7 +576,7 @@ static int _parse_message(struct message_data *message_data)
ret = 1;
}
dm_free(msg->data);
free(msg->data);
msg->data = NULL;
return ret;
@ -608,8 +610,8 @@ static int _fill_device_data(struct thread_status *ts)
if (!dm_task_run(dmt))
goto fail;
dm_free(ts->device.name);
if (!(ts->device.name = dm_strdup(dm_task_get_name(dmt))))
free(ts->device.name);
if (!(ts->device.name = strdup(dm_task_get_name(dmt))))
goto fail;
if (!dm_task_get_info(dmt, &dmi))
@ -696,8 +698,8 @@ static int _get_status(struct message_data *message_data)
len = strlen(message_data->id);
msg->size = size + len + 1;
dm_free(msg->data);
if (!(msg->data = dm_malloc(msg->size)))
free(msg->data);
if (!(msg->data = malloc(msg->size)))
goto out;
memcpy(msg->data, message_data->id, len);
@ -712,7 +714,7 @@ static int _get_status(struct message_data *message_data)
ret = 0;
out:
for (j = 0; j < i; ++j)
dm_free(buffers[j]);
free(buffers[j]);
return ret;
}
@ -721,7 +723,7 @@ static int _get_parameters(struct message_data *message_data) {
struct dm_event_daemon_message *msg = message_data->msg;
int size;
dm_free(msg->data);
free(msg->data);
if ((size = dm_asprintf(&msg->data, "%s pid=%d daemon=%s exec_method=%s",
message_data->id, getpid(),
_foreground ? "no" : "yes",
@ -1225,7 +1227,7 @@ static int _registered_device(struct message_data *message_data,
int r;
struct dm_event_daemon_message *msg = message_data->msg;
dm_free(msg->data);
free(msg->data);
if ((r = dm_asprintf(&(msg->data), "%s %s %s %u",
message_data->id,
@ -1365,7 +1367,7 @@ static int _get_timeout(struct message_data *message_data)
if (!thread)
return -ENODEV;
dm_free(msg->data);
free(msg->data);
msg->size = dm_asprintf(&(msg->data), "%s %" PRIu32,
message_data->id, thread->timeout);
@ -1502,7 +1504,7 @@ static int _client_read(struct dm_event_fifos *fifos,
bytes = 0;
if (!size)
break; /* No data -> error */
buf = msg->data = dm_malloc(msg->size);
buf = msg->data = malloc(msg->size);
if (!buf)
break; /* No mem -> error */
header = 0;
@ -1510,7 +1512,7 @@ static int _client_read(struct dm_event_fifos *fifos,
}
if (bytes != size) {
dm_free(msg->data);
free(msg->data);
msg->data = NULL;
return 0;
}
@ -1530,7 +1532,7 @@ static int _client_write(struct dm_event_fifos *fifos,
fd_set fds;
size_t size = 2 * sizeof(uint32_t) + ((msg->data) ? msg->size : 0);
uint32_t *header = dm_malloc(size);
uint32_t *header = malloc(size);
char *buf = (char *)header;
if (!header) {
@ -1560,7 +1562,7 @@ static int _client_write(struct dm_event_fifos *fifos,
}
if (header != temp)
dm_free(header);
free(header);
return (bytes == size);
}
@ -1622,7 +1624,7 @@ static int _do_process_request(struct dm_event_daemon_message *msg)
msg->size = dm_asprintf(&(msg->data), "%s %s %d", answer,
(msg->cmd == DM_EVENT_CMD_DIE) ? "DYING" : "HELLO",
DM_EVENT_PROTOCOL_VERSION);
dm_free(answer);
free(answer);
}
} else if (msg->cmd != DM_EVENT_CMD_ACTIVE && !_parse_message(&message_data)) {
stack;
@ -1664,7 +1666,7 @@ static void _process_request(struct dm_event_fifos *fifos)
DEBUGLOG("<<< CMD:%s (0x%x) completed (result %d).", decode_cmd(cmd), cmd, msg.cmd);
dm_free(msg.data);
free(msg.data);
if (cmd == DM_EVENT_CMD_DIE) {
if (unlink(DMEVENTD_PIDFILE))
@ -1975,7 +1977,7 @@ static int _reinstate_registrations(struct dm_event_fifos *fifos)
int i, ret;
ret = daemon_talk(fifos, &msg, DM_EVENT_CMD_HELLO, NULL, NULL, 0, 0);
dm_free(msg.data);
free(msg.data);
msg.data = NULL;
if (ret) {
@ -2061,13 +2063,13 @@ static void _restart_dmeventd(void)
++count;
}
if (!(_initial_registrations = dm_malloc(sizeof(char*) * (count + 1)))) {
if (!(_initial_registrations = malloc(sizeof(char*) * (count + 1)))) {
fprintf(stderr, "Memory allocation registration failed.\n");
goto bad;
}
for (i = 0; i < count; ++i) {
if (!(_initial_registrations[i] = dm_strdup(message))) {
if (!(_initial_registrations[i] = strdup(message))) {
fprintf(stderr, "Memory allocation for message failed.\n");
goto bad;
}


@ -12,10 +12,12 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "dm-logging.h"
#include "dmlib.h"
#include "libdevmapper-event.h"
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "device_mapper/misc/dm-logging.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "dmeventd.h"
#include "lib/misc/intl.h"
#include <fcntl.h>
#include <sys/file.h>
@ -25,6 +27,7 @@
#include <arpa/inet.h> /* for htonl, ntohl */
#include <pthread.h>
#include <syslog.h>
#include <unistd.h>
static int _debug_level = 0;
static int _use_syslog = 0;
@ -47,8 +50,8 @@ struct dm_event_handler {
static void _dm_event_handler_clear_dev_info(struct dm_event_handler *dmevh)
{
dm_free(dmevh->dev_name);
dm_free(dmevh->uuid);
free(dmevh->dev_name);
free(dmevh->uuid);
dmevh->dev_name = dmevh->uuid = NULL;
dmevh->major = dmevh->minor = 0;
}
@ -57,7 +60,7 @@ struct dm_event_handler *dm_event_handler_create(void)
{
struct dm_event_handler *dmevh;
if (!(dmevh = dm_zalloc(sizeof(*dmevh)))) {
if (!(dmevh = zalloc(sizeof(*dmevh)))) {
log_error("Failed to allocate event handler.");
return NULL;
}
@ -68,9 +71,9 @@ struct dm_event_handler *dm_event_handler_create(void)
void dm_event_handler_destroy(struct dm_event_handler *dmevh)
{
_dm_event_handler_clear_dev_info(dmevh);
dm_free(dmevh->dso);
dm_free(dmevh->dmeventd_path);
dm_free(dmevh);
free(dmevh->dso);
free(dmevh->dmeventd_path);
free(dmevh);
}
int dm_event_handler_set_dmeventd_path(struct dm_event_handler *dmevh, const char *dmeventd_path)
@ -78,9 +81,9 @@ int dm_event_handler_set_dmeventd_path(struct dm_event_handler *dmevh, const cha
if (!dmeventd_path) /* noop */
return 0;
dm_free(dmevh->dmeventd_path);
free(dmevh->dmeventd_path);
if (!(dmevh->dmeventd_path = dm_strdup(dmeventd_path)))
if (!(dmevh->dmeventd_path = strdup(dmeventd_path)))
return -ENOMEM;
return 0;
@ -91,9 +94,9 @@ int dm_event_handler_set_dso(struct dm_event_handler *dmevh, const char *path)
if (!path) /* noop */
return 0;
dm_free(dmevh->dso);
free(dmevh->dso);
if (!(dmevh->dso = dm_strdup(path)))
if (!(dmevh->dso = strdup(path)))
return -ENOMEM;
return 0;
@ -106,7 +109,7 @@ int dm_event_handler_set_dev_name(struct dm_event_handler *dmevh, const char *de
_dm_event_handler_clear_dev_info(dmevh);
if (!(dmevh->dev_name = dm_strdup(dev_name)))
if (!(dmevh->dev_name = strdup(dev_name)))
return -ENOMEM;
return 0;
@ -119,7 +122,7 @@ int dm_event_handler_set_uuid(struct dm_event_handler *dmevh, const char *uuid)
_dm_event_handler_clear_dev_info(dmevh);
if (!(dmevh->uuid = dm_strdup(uuid)))
if (!(dmevh->uuid = strdup(uuid)))
return -ENOMEM;
return 0;
@ -259,7 +262,7 @@ static int _daemon_read(struct dm_event_fifos *fifos,
if (header && (bytes == 2 * sizeof(uint32_t))) {
msg->cmd = ntohl(header[0]);
msg->size = ntohl(header[1]);
buf = msg->data = dm_malloc(msg->size);
buf = msg->data = malloc(msg->size);
size = msg->size;
bytes = 0;
header = 0;
@ -267,7 +270,7 @@ static int _daemon_read(struct dm_event_fifos *fifos,
}
if (bytes != size) {
dm_free(msg->data);
free(msg->data);
msg->data = NULL;
}
return bytes == size;
@ -370,13 +373,13 @@ int daemon_talk(struct dm_event_fifos *fifos,
*/
if (!_daemon_write(fifos, msg)) {
stack;
dm_free(msg->data);
free(msg->data);
msg->data = NULL;
return -EIO;
}
do {
dm_free(msg->data);
free(msg->data);
msg->data = NULL;
if (!_daemon_read(fifos, msg)) {
@ -619,7 +622,7 @@ static int _do_event(int cmd, char *dmeventd_path, struct dm_event_daemon_messag
ret = daemon_talk(&fifos, msg, DM_EVENT_CMD_HELLO, NULL, NULL, 0, 0);
dm_free(msg->data);
free(msg->data);
msg->data = 0;
if (!ret)
@ -645,6 +648,7 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
uuid = dm_task_get_uuid(dmt);
if (!strstr(dmevh->dso, "libdevmapper-event-lvm2thin.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2vdo.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2snapshot.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2mirror.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2raid.so"))
@ -659,7 +663,7 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
ret = 0;
}
dm_free(msg.data);
free(msg.data);
dm_task_destroy(dmt);
@ -686,7 +690,7 @@ int dm_event_unregister_handler(const struct dm_event_handler *dmevh)
ret = 0;
}
dm_free(msg.data);
free(msg.data);
dm_task_destroy(dmt);
@ -702,7 +706,7 @@ static char *_fetch_string(char **src, const int delimiter)
if ((p = strchr(*src, delimiter)))
*p = 0;
if ((ret = dm_strdup(*src)))
if ((ret = strdup(*src)))
*src += strlen(ret) + 1;
if (p)
@ -722,11 +726,11 @@ static int _parse_message(struct dm_event_daemon_message *msg, char **dso_name,
(*dso_name = _fetch_string(&p, ' ')) &&
(*uuid = _fetch_string(&p, ' '))) {
*evmask = atoi(p);
dm_free(id);
free(id);
return 0;
}
dm_free(id);
free(id);
return -ENOMEM;
}
@ -754,11 +758,10 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
uuid = dm_task_get_uuid(dmt);
/* FIXME Distinguish errors connecting to daemon */
if (_do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0)) {
if ((ret = _do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0))) {
log_debug("%s: device not registered.", dm_task_get_name(dmt));
ret = -ENOENT;
goto fail;
}
@ -769,7 +772,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
dm_task_destroy(dmt);
dmt = NULL;
dm_free(msg.data);
free(msg.data);
msg.data = NULL;
_dm_event_handler_clear_dev_info(dmevh);
@ -778,7 +781,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
goto fail;
}
if (!(dmevh->uuid = dm_strdup(reply_uuid))) {
if (!(dmevh->uuid = strdup(reply_uuid))) {
ret = -ENOMEM;
goto fail;
}
@ -791,13 +794,13 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
dm_event_handler_set_dso(dmevh, reply_dso);
dm_event_handler_set_event_mask(dmevh, reply_mask);
dm_free(reply_dso);
free(reply_dso);
reply_dso = NULL;
dm_free(reply_uuid);
free(reply_uuid);
reply_uuid = NULL;
if (!(dmevh->dev_name = dm_strdup(dm_task_get_name(dmt)))) {
if (!(dmevh->dev_name = strdup(dm_task_get_name(dmt)))) {
ret = -ENOMEM;
goto fail;
}
@ -815,9 +818,9 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
return ret;
fail:
dm_free(msg.data);
dm_free(reply_dso);
dm_free(reply_uuid);
free(msg.data);
free(reply_dso);
free(reply_uuid);
_dm_event_handler_clear_dev_info(dmevh);
if (dmt)
dm_task_destroy(dmt);
@ -982,12 +985,12 @@ int dm_event_get_timeout(const char *device_path, uint32_t *timeout)
if (!p) {
log_error("Malformed reply from dmeventd '%s'.",
msg.data);
dm_free(msg.data);
free(msg.data);
return -EIO;
}
*timeout = atoi(p);
}
dm_free(msg.data);
free(msg.data);
return ret;
}


@ -8,4 +8,3 @@ Description: device-mapper event library
Version: @DM_LIB_PATCHLEVEL@
Cflags: -I${includedir}
Libs: -L${libdir} -ldevmapper-event
Requires.private: devmapper


@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2005, 2011 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@ -16,27 +16,7 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SUBDIRS += lvm2
ifneq ("@MIRRORS@", "none")
SUBDIRS += mirror
endif
ifneq ("@SNAPSHOTS@", "none")
SUBDIRS += snapshot
endif
ifneq ("@RAID@", "none")
SUBDIRS += raid
endif
ifneq ("@THIN@", "none")
SUBDIRS += thin
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = lvm2 mirror snapshot raid thin
endif
SUBDIRS += lvm2 snapshot raid thin mirror vdo
include $(top_builddir)/make.tmpl
@ -44,3 +24,4 @@ snapshot: lvm2
mirror: lvm2
raid: lvm2
thin: lvm2
vdo: lvm2


@ -24,7 +24,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += @LVM2CMD_LIB@ -ldevmapper $(PTHREAD_LIBS)
LIBS += @LVM2CMD_LIB@ $(INTERNAL_LIBS) $(PTHREAD_LIBS)
install_lvm2: install_lib_shared


@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "lib/misc/lib.h"
#include "dmeventd_lvm.h"
#include "libdevmapper-event.h"
#include "lvm2cmd.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "tools/lvm2cmd.h"
#include <pthread.h>


@ -30,7 +30,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
install_lvm2: install_dm_plugin


@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "libdevmapper-event.h"
#include "lib/misc/lib.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "dmeventd_lvm.h"
#include "activate.h" /* For TARGET_NAME* */
#include "lib/activate/activate.h"
/* FIXME Reformat to 80 char lines. */


@ -29,7 +29,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
install_lvm2: install_dm_plugin


@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "defaults.h"
#include "lib/misc/lib.h"
#include "lib/config/defaults.h"
#include "dmeventd_lvm.h"
#include "libdevmapper-event.h"
#include "daemons/dmeventd/libdevmapper-event.h"
/* Hold enough elements for the maximum number of RAID images */
#define RAID_DEVS_ELEMS ((DEFAULT_RAID_MAX_IMAGES + 63) / 64)


@ -26,7 +26,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
install_lvm2: install_dm_plugin


@ -12,9 +12,9 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "lib/misc/lib.h"
#include "dmeventd_lvm.h"
#include "libdevmapper-event.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include <sys/sysmacros.h>
#include <sys/wait.h>


@ -29,7 +29,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 -ldevmapper
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
install_lvm2: install_dm_plugin


@ -12,16 +12,16 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h" /* using here lvm log */
#include "lib/misc/lib.h"
#include "dmeventd_lvm.h"
#include "libdevmapper-event.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include <sys/wait.h>
#include <stdarg.h>
/* TODO - move this mountinfo code into library to be reusable */
#ifdef __linux__
# include "kdev_t.h"
# include "libdm/misc/kdev_t.h"
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
@ -64,23 +64,23 @@ DM_EVENT_LOG_FN("thin")
static int _run_command(struct dso_state *state)
{
char val[3][36];
char *env[] = { val[0], val[1], val[2], NULL };
char val[16];
int i;
/* Mark that a possible lvm2 command is being run from dmeventd;
* lvm2 will not try to talk back to dmeventd while processing it */
(void) dm_snprintf(val[0], sizeof(val[0]), "LVM_RUN_BY_DMEVENTD=1");
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
if (state->data_percent) {
/* Prepare some known data to env vars for easy use */
(void) dm_snprintf(val[1], sizeof(val[1]), "DMEVENTD_THIN_POOL_DATA=%d",
state->data_percent / DM_PERCENT_1);
(void) dm_snprintf(val[2], sizeof(val[2]), "DMEVENTD_THIN_POOL_METADATA=%d",
state->metadata_percent / DM_PERCENT_1);
if (dm_snprintf(val, sizeof(val), "%d",
state->data_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_DATA", val, 1);
if (dm_snprintf(val, sizeof(val), "%d",
state->metadata_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_METADATA", val, 1);
} else {
/* For an error event it's for a user to check status and decide */
env[1] = NULL;
log_debug("Error event processing.");
}
@ -95,7 +95,7 @@ static int _run_command(struct dso_state *state)
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execve(state->argv[0], state->argv, env);
execvp(state->argv[0], state->argv);
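/* execvp(3) searches PATH and passes the parent environment on to the
 * child, so the variables exported with setenv() above reach the
 * spawned command. */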
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);


@ -0,0 +1,3 @@
process_event
register_device
unregister_device


@ -1,6 +1,5 @@
#
# Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@ -16,18 +15,22 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES =\
disk-rep.c \
format1.c \
import-export.c \
import-extents.c \
layout.c \
lvm1-label.c \
vg_number.c
INCLUDES += -I$(top_srcdir)/daemons/dmeventd/plugins/lvm2
CLDFLAGS += -L$(top_builddir)/daemons/dmeventd/plugins/lvm2
LIB_SHARED = liblvm2format1.$(LIB_SUFFIX)
SOURCES = dmeventd_vdo.c
LIB_NAME = libdevmapper-event-lvm2vdo
LIB_SHARED = $(LIB_NAME).$(LIB_SUFFIX)
LIB_VERSION = $(LIB_VERSION_LVM)
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
install: install_lvm2_plugin
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
install_lvm2: install_dm_plugin
install: install_lvm2


@ -0,0 +1,406 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "device_mapper/vdo/target.h"
#include <sys/wait.h>
#include <stdarg.h>
/* First warning when VDO pool is 80% full. */
#define WARNING_THRESH (DM_PERCENT_1 * 80)
/* Run a check every 5%. */
#define CHECK_STEP (DM_PERCENT_1 * 5)
/* Do not bother checking VDO pool is less than 50% full. */
#define CHECK_MINIMUM (DM_PERCENT_1 * 50)
#define MAX_FAILS (256) /* ~42 mins between cmd call retry with 10s delay */
#define VDO_DEBUG 0
struct dso_state {
struct dm_pool *mem;
int percent_check;
int percent;
uint64_t known_data_size;
unsigned fails;
unsigned max_fails;
int restore_sigset;
sigset_t old_sigset;
pid_t pid;
char *argv[3];
const char *cmd_str;
const char *name;
};
DM_EVENT_LOG_FN("vdo")
static int _run_command(struct dso_state *state)
{
char val[16];
int i;
/* Mark that a possible lvm2 command is being run from dmeventd;
* lvm2 will not try to talk back to dmeventd while processing it */
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
if (state->percent) {
/* Prepare some known data to env vars for easy use */
if (dm_snprintf(val, sizeof(val), "%d",
state->percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_VDO_POOL", val, 1);
} else {
/* For an error event it's for a user to check status and decide */
log_debug("Error event processing.");
}
log_verbose("Executing command: %s", state->cmd_str);
/* TODO:
* Support parallel run of 'task' and its waitpid maintenance
* ATM we can't handle signalling of SIGALRM,
* as signalling is not allowed while 'process_event()' is running
*/
if (!(state->pid = fork())) {
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execvp(state->argv[0], state->argv);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
state->fails = 1;
return 0;
}
return 1;
}
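/* A minimal sketch (illustrative only, not part of this plugin) of how an
 * external policy command could pick up the exported value:
 *
 *     const char *p = getenv("DMEVENTD_VDO_POOL");
 *     int used_percent = p ? atoi(p) : 0;
 *
 * The variable is only set for non-error events, mirroring the
 * state->percent check above. */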
static int _use_policy(struct dm_task *dmt, struct dso_state *state)
{
#if VDO_DEBUG
log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
if (state->argv[0])
return _run_command(state);
if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
log_error("Failed command for %s.", dm_task_get_name(dmt));
state->fails = 1;
return 0;
}
state->fails = 0;
return 1;
}
/* Check if executed command has finished
* Only 1 command may run */
static int _wait_for_pid(struct dso_state *state)
{
int status = 0;
if (state->pid == -1)
return 1;
if (!waitpid(state->pid, &status, WNOHANG))
return 0;
/* Wait for finish */
if (WIFEXITED(status)) {
log_verbose("Child %d exited with status %d.",
state->pid, WEXITSTATUS(status));
state->fails = WEXITSTATUS(status) ? 1 : 0;
} else {
if (WIFSIGNALED(status))
log_verbose("Child %d was terminated with status %d.",
state->pid, WTERMSIG(status));
state->fails = 1;
}
state->pid = -1;
return 1;
}
void process_event(struct dm_task *dmt,
enum dm_event_mask event __attribute__((unused)),
void **user)
{
const char *device = dm_task_get_name(dmt);
struct dso_state *state = *user;
void *next = NULL;
uint64_t start, length;
char *target_type = NULL;
char *params;
int needs_policy = 0;
struct dm_task *new_dmt = NULL;
struct dm_vdo_status_parse_result vdop = { .status = NULL };
#if VDO_DEBUG
log_debug("Watch for VDO %s:%.2f%%.", state->name,
dm_percent_to_round_float(state->percent_check, 2));
#endif
if (!_wait_for_pid(state)) {
log_warn("WARNING: Skipping event, child %d is still running (%s).",
state->pid, state->cmd_str);
return;
}
if (event & DM_EVENT_DEVICE_ERROR) {
#if VDO_DEBUG
log_debug("VDO event error.");
#endif
/* Error -> no need to check and do instant resize */
state->percent = 0;
if (_use_policy(dmt, state))
goto out;
stack;
if (!(new_dmt = dm_task_create(DM_DEVICE_STATUS)))
goto_out;
if (!dm_task_set_uuid(new_dmt, dm_task_get_uuid(dmt)))
goto_out;
/* Non-blocking status read */
if (!dm_task_no_flush(new_dmt))
log_warn("WARNING: Can't set no_flush for dm status.");
if (!dm_task_run(new_dmt))
goto_out;
dmt = new_dmt;
}
dm_get_next_target(dmt, next, &start, &length, &target_type, &params);
if (!target_type || (strcmp(target_type, "vdo") != 0)) {
log_error("Invalid target type.");
goto out;
}
if (!dm_vdo_status_parse(state->mem, params, &vdop)) {
log_error("Failed to parse status.");
goto out;
}
state->percent = dm_make_percent(vdop.status->used_blocks,
vdop.status->total_blocks);
#if VDO_DEBUG
log_debug("VDO %s status %.2f%% " FMTu64 "/" FMTu64 ".",
state->name, dm_percent_to_round_float(state->percent, 2),
vdop.status->used_blocks, vdop.status->total_blocks);
#endif
/* VDO pool size had changed. Clear the threshold. */
if (state->known_data_size != vdop.status->total_blocks) {
state->percent_check = CHECK_MINIMUM;
state->known_data_size = vdop.status->total_blocks;
state->fails = 0;
}
/*
* Trigger action when threshold boundary is exceeded.
* Report the 80% threshold warning when usage is above 80%.
* Only 100% is an exception, as it cannot be surpassed, so the policy
* action is called for: >50%, >55% ... >95%, 100%
*/
if ((state->percent > WARNING_THRESH) &&
(state->percent > state->percent_check))
log_warn("WARNING: VDO %s %s is now %.2f%% full.",
state->name, device,
dm_percent_to_round_float(state->percent, 2));
if (state->percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->percent > state->percent_check)
needs_policy = 1;
state->percent_check = (state->percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->percent_check == DM_PERCENT_100)
state->percent_check--; /* Can't get bigger than 100% */
} else
state->percent_check = CHECK_MINIMUM;
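/* Worked example with the constants above: at 63% usage the pool is past
   CHECK_MINIMUM (50%), so percent_check becomes (63 / 5 + 1) * 5% = 65%
   and the next policy run happens only once usage exceeds 65%; warnings
   are logged only above WARNING_THRESH (80%). */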
/* Reduce the number of _use_policy() calls by a power-of-2 factor until the MAX_FAILS frequency is reached.
* This avoids an excessive number of error retries, yet still logs some status messages regularly,
* e.g. a PV could have been pvmoved and the VG/LV locked for a while...
*/
if (state->fails) {
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
return;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;
state->fails = needs_policy = 1; /* Retry failing command */
} else
state->max_fails = 1; /* Reset on success */
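/* Example of the back-off above: after repeated failures the policy is
   retried only after 1, 2, 4, ... postponed events, the interval doubling
   up to MAX_FAILS (256), i.e. roughly 42 minutes with the 10s delay noted
   in the MAX_FAILS comment. */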
/* FIXME: ATM nothing can be done, drop 0, once it becomes useful */
if (0 && needs_policy)
_use_policy(dmt, state);
out:
if (vdop.status)
dm_pool_free(state->mem, vdop.status);
if (new_dmt)
dm_task_destroy(new_dmt);
}
/* Handle SIGCHLD for a thread */
static void _sig_child(int signum __attribute__((unused)))
{
/* empty SIG_IGN */;
}
/* Setup handler for SIGCHLD when executing external command
* to get quick 'waitpid()' reaction
* It will interrupt syscall just like SIGALRM and
* invoke process_event().
*/
static void _init_thread_signals(struct dso_state *state)
{
struct sigaction act = { .sa_handler = _sig_child };
sigset_t my_sigset;
sigemptyset(&my_sigset);
if (sigaction(SIGCHLD, &act, NULL))
log_warn("WARNING: Failed to set SIGCHLD action.");
else if (sigaddset(&my_sigset, SIGCHLD))
log_warn("WARNING: Failed to add SIGCHLD to set.");
else if (pthread_sigmask(SIG_UNBLOCK, &my_sigset, &state->old_sigset))
log_warn("WARNING: Failed to unblock SIGCHLD.");
else
state->restore_sigset = 1;
}
static void _restore_thread_signals(struct dso_state *state)
{
if (state->restore_sigset &&
pthread_sigmask(SIG_SETMASK, &state->old_sigset, NULL))
log_warn("WARNING: Failed to block SIGCHLD.");
}
int register_device(const char *device,
const char *uuid,
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state;
const char *cmd;
char *str;
char cmd_str[PATH_MAX + 128 + 2]; /* cmd ' ' vg/lv \0 */
const char *name = "pool";
if (!dmeventd_lvm2_init_with_pool("vdo_pool_state", state))
goto_bad;
state->cmd_str = "";
/* Search for command for LVM- prefixed devices only */
cmd = (strncmp(uuid, "LVM-", 4) == 0) ? "_dmeventd_vdo_command" : "";
if (!dmeventd_lvm2_command(state->mem, cmd_str, sizeof(cmd_str), cmd, device))
goto_bad;
if (strncmp(cmd_str, "lvm ", 4) == 0) {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str + 4))) {
log_error("Failed to copy lvm VDO command.");
goto bad;
}
} else if (cmd_str[0] == '/') {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str))) {
log_error("Failed to copy VDO command.");
goto bad;
}
/* Find last space before 'vg/lv' */
if (!(str = strrchr(state->cmd_str, ' ')))
goto inval;
if (!(state->argv[0] = dm_pool_strndup(state->mem, state->cmd_str,
str - state->cmd_str))) {
log_error("Failed to copy command.");
goto bad;
}
state->argv[1] = str + 1; /* 1 argument - vg/lv */
_init_thread_signals(state);
} else if (cmd[0] == 0) {
state->name = "volume"; /* What to use with 'others?' */
} else /* Unsupported command format */
goto inval;
state->pid = -1;
state->name = name;
*user = state;
log_info("Monitoring VDO %s %s.", name, device);
return 1;
inval:
log_error("Invalid command for monitoring: %s.", cmd_str);
bad:
log_error("Failed to monitor VDO %s %s.", name, device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}
int unregister_device(const char *device,
const char *uuid __attribute__((unused)),
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state = *user;
const char *name = state->name;
int i;
for (i = 0; !_wait_for_pid(state) && (i < 6); ++i) {
if (i == 0)
/* Give it 2 seconds, then try to terminate & kill it */
log_verbose("Child %d still not finished (%s) waiting.",
state->pid, state->cmd_str);
else if (i == 3) {
log_warn("WARNING: Terminating child %d.", state->pid);
kill(state->pid, SIGINT);
kill(state->pid, SIGTERM);
} else if (i == 5) {
log_warn("WARNING: Killing child %d.", state->pid);
kill(state->pid, SIGKILL);
}
sleep(1);
}
if (state->pid != -1)
log_warn("WARNING: Cannot kill child %d!", state->pid);
_restore_thread_signals(state);
dmeventd_lvm2_exit_with_pool(state);
log_info("No longer monitoring VDO %s %s.", name, device);
return 1;
}


@ -1,66 +0,0 @@
#
# Copyright (C) 2016 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES = dmfilemapd.c
TARGETS = dmfilemapd
.PHONY: install_dmfilemapd install_dmfilemapd_static
INSTALL_DMFILEMAPD_TARGETS = install_dmfilemapd_dynamic
CLEAN_TARGETS = dmfilemapd.static
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
CFLOW_TARGET = dmfilemapd
include $(top_builddir)/make.tmpl
all: device-mapper
device-mapper: $(TARGETS)
CFLAGS_dmfilemapd.o += $(EXTRA_EXEC_CFLAGS)
LIBS += -ldevmapper
dmfilemapd: $(LIB_SHARED) dmfilemapd.o
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) \
-o $@ dmfilemapd.o $(DL_LIBS) $(LIBS)
dmfilemapd.static: $(LIB_STATIC) dmfilemapd.o $(interfacebuilddir)/libdevmapper.a
$(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -static -L$(interfacebuilddir) \
-o $@ dmfilemapd.o $(DL_LIBS) $(LIBS) $(STATIC_LIBS)
ifneq ("$(CFLOW_CMD)", "")
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
-include $(top_builddir)/libdm/libdevmapper.cflow
-include $(top_builddir)/lib/liblvm-internal.cflow
-include $(top_builddir)/lib/liblvm2cmd.cflow
-include $(top_builddir)/daemons/dmfilemapd/$(LIB_NAME).cflow
endif
install_dmfilemapd_dynamic: dmfilemapd
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
install_dmfilemapd_static: dmfilemapd.static
$(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F)
install_dmfilemapd: $(INSTALL_DMFILEMAPD_TARGETS)
install: install_dmfilemapd
install_device-mapper: install_dmfilemapd


@ -44,6 +44,8 @@ LVMDBUS_BUILDDIR_FILES = \
LVMDBUSD = lvmdbusd
CLEAN_DIRS += __pycache__
include $(top_builddir)/make.tmpl
.PHONY: install_lvmdbusd


@ -497,7 +497,7 @@ class Lv(LvCommon):
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes / 80
space = dbo.SizeBytes // 80
remainder = space % 512
optional_size = space + 512 - remainder


@ -1,2 +0,0 @@
lvmetad
lvmetactl


@ -1,62 +0,0 @@
#
# Copyright (C) 2011-2012 Red Hat, Inc.
#
# This file is part of LVM2.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES = lvmetad-core.c
SOURCES2 = lvmetactl.c
TARGETS = lvmetad lvmetactl
.PHONY: install_lvmetad
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
CFLOW_TARGET = lvmetad
include $(top_builddir)/make.tmpl
CFLAGS_lvmetactl.o += $(EXTRA_EXEC_CFLAGS)
CFLAGS_lvmetad-core.o += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) -ldevmapper $(PTHREAD_LIBS)
lvmetad: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) -ldaemonserver $(LIBS)
lvmetactl: lvmetactl.o $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmetactl.o $(LIBS)
CLEAN_TARGETS += lvmetactl.o
# TODO: No idea. No idea how to test either.
#ifneq ("$(CFLOW_CMD)", "")
#CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
#-include $(top_builddir)/libdm/libdevmapper.cflow
#-include $(top_builddir)/lib/liblvm-internal.cflow
#-include $(top_builddir)/lib/liblvm2cmd.cflow
#-include $(top_builddir)/daemons/dmeventd/$(LIB_NAME).cflow
#-include $(top_builddir)/daemons/dmeventd/plugins/mirror/$(LIB_NAME)-lvm2mirror.cflow
#endif
install_lvmetad: lvmetad
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
install_lvm2: install_lvmetad
install: install_lvm2


@ -1,249 +0,0 @@
/*
* Copyright (C) 2014 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*/
#include "tool.h"
#include "lvmetad-client.h"
daemon_handle h;
static void print_reply(daemon_reply reply)
{
const char *a = daemon_reply_str(reply, "response", NULL);
const char *b = daemon_reply_str(reply, "status", NULL);
const char *c = daemon_reply_str(reply, "reason", NULL);
printf("response \"%s\" status \"%s\" reason \"%s\"\n",
a ? a : "", b ? b : "", c ? c : "");
}
int main(int argc, char **argv)
{
daemon_reply reply;
char *cmd;
char *uuid;
char *name;
int val;
int ver;
if (argc < 2) {
printf("lvmetactl dump\n");
printf("lvmetactl pv_list\n");
printf("lvmetactl vg_list\n");
printf("lvmetactl get_global_info\n");
printf("lvmetactl vg_lookup_name <name>\n");
printf("lvmetactl vg_lookup_uuid <uuid>\n");
printf("lvmetactl pv_lookup_uuid <uuid>\n");
printf("lvmetactl set_global_invalid 0|1\n");
printf("lvmetactl set_global_disable 0|1\n");
printf("lvmetactl set_vg_version <uuid> <name> <version>\n");
printf("lvmetactl vg_lock_type <uuid>\n");
return -1;
}
cmd = argv[1];
h = lvmetad_open(NULL);
if (!strcmp(cmd, "dump")) {
reply = daemon_send_simple(h, "dump",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "pv_list")) {
reply = daemon_send_simple(h, "pv_list",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_list")) {
reply = daemon_send_simple(h, "vg_list",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "get_global_info")) {
reply = daemon_send_simple(h, "get_global_info",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "set_global_invalid")) {
if (argc < 3) {
printf("set_global_invalid 0|1\n");
return -1;
}
val = atoi(argv[2]);
reply = daemon_send_simple(h, "set_global_info",
"global_invalid = " FMTd64, (int64_t) val,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
print_reply(reply);
} else if (!strcmp(cmd, "set_global_disable")) {
if (argc < 3) {
printf("set_global_disable 0|1\n");
return -1;
}
val = atoi(argv[2]);
reply = daemon_send_simple(h, "set_global_info",
"global_disable = " FMTd64, (int64_t) val,
"disable_reason = %s", LVMETAD_DISABLE_REASON_DIRECT,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
print_reply(reply);
} else if (!strcmp(cmd, "set_vg_version")) {
if (argc < 5) {
printf("set_vg_version <uuid> <name> <ver>\n");
return -1;
}
uuid = argv[2];
name = argv[3];
ver = atoi(argv[4]);
if ((strlen(uuid) == 1) && (uuid[0] == '-'))
uuid = NULL;
if ((strlen(name) == 1) && (name[0] == '-'))
name = NULL;
if (uuid && name) {
reply = daemon_send_simple(h, "set_vg_info",
"uuid = %s", uuid,
"name = %s", name,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else if (uuid) {
reply = daemon_send_simple(h, "set_vg_info",
"uuid = %s", uuid,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else if (name) {
reply = daemon_send_simple(h, "set_vg_info",
"name = %s", name,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else {
printf("name or uuid required\n");
return -1;
}
print_reply(reply);
} else if (!strcmp(cmd, "vg_lookup_name")) {
if (argc < 3) {
printf("vg_lookup_name <name>\n");
return -1;
}
name = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"name = %s", name,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_lookup_uuid")) {
if (argc < 3) {
printf("vg_lookup_uuid <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_lock_type")) {
struct dm_config_node *metadata;
const char *lock_type;
if (argc < 3) {
printf("vg_lock_type <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
/* printf("%s\n", reply.buffer.mem); */
metadata = dm_config_find_node(reply.cft->root, "metadata");
if (!metadata) {
printf("no metadata\n");
goto out;
}
lock_type = dm_config_find_str(metadata, "metadata/lock_type", NULL);
if (!lock_type) {
printf("no lock_type\n");
goto out;
}
printf("lock_type %s\n", lock_type);
} else if (!strcmp(cmd, "pv_lookup_uuid")) {
if (argc < 3) {
printf("pv_lookup_uuid <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "pv_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else {
printf("unknown command\n");
goto out_close;
}
out:
daemon_reply_destroy(reply);
out_close:
daemon_close(h);
return 0;
}


@ -1,91 +0,0 @@
/*
* Copyright (C) 2011-2012 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _LVM_LVMETAD_CLIENT_H
#define _LVM_LVMETAD_CLIENT_H
#include "daemon-client.h"
#define LVMETAD_SOCKET DEFAULT_RUN_DIR "/lvmetad.socket"
#define LVMETAD_TOKEN_UPDATE_IN_PROGRESS "update in progress"
#define LVMETAD_DISABLE_REASON_DIRECT "DIRECT"
#define LVMETAD_DISABLE_REASON_LVM1 "LVM1"
#define LVMETAD_DISABLE_REASON_DUPLICATES "DUPLICATES"
#define LVMETAD_DISABLE_REASON_VGRESTORE "VGRESTORE"
#define LVMETAD_DISABLE_REASON_REPAIR "REPAIR"
struct volume_group;
/* Different types of replies we may get from lvmetad. */
typedef struct {
daemon_reply r;
const char **uuids; /* NULL terminated array */
} lvmetad_uuidlist;
typedef struct {
daemon_reply r;
struct dm_config_tree *cft;
} lvmetad_vg;
/* Get a list of VG UUIDs that match a given VG name. */
lvmetad_uuidlist lvmetad_lookup_vgname(daemon_handle h, const char *name);
/* Get the metadata of a single VG, identified by UUID. */
lvmetad_vg lvmetad_get_vg(daemon_handle h, const char *uuid);
/*
* Add and remove PVs on demand. Udev-driven systems will use this interface
* instead of scanning.
*/
daemon_reply lvmetad_add_pv(daemon_handle h, const char *pv_uuid, const char *mda_content);
daemon_reply lvmetad_remove_pv(daemon_handle h, const char *pv_uuid);
/* Trigger a full disk scan, throwing away all caches. XXX do we eventually want
* this? Probably not yet, anyway.
* daemon_reply lvmetad_rescan(daemon_handle h);
*/
/*
* Update the version of metadata of a volume group. The VG has to be locked for
* writing for this, and the VG metadata here has to match whatever has been
* written to the disk (under this lock). This initially avoids the requirement
* for lvmetad to write to disk (in later revisions, lvmetad_supersede_vg may
* also do the writing, or we will probably add another function to do that).
*/
daemon_reply lvmetad_supersede_vg(daemon_handle h, struct volume_group *vg);
/* Wrappers to open/close connection */
static inline daemon_handle lvmetad_open(const char *socket)
{
daemon_info lvmetad_info = {
.path = "lvmetad",
.socket = socket ?: LVMETAD_SOCKET,
.protocol = "lvmetad",
.protocol_version = 1,
.autostart = 0
};
return daemon_open(lvmetad_info);
}
static inline void lvmetad_close(daemon_handle h)
{
return daemon_close(h);
}
#endif

File diff suppressed because it is too large


@ -1,16 +0,0 @@
#!/bin/bash
export LD_LIBRARY_PATH="$1"
test -n "$2" && {
rm -f /var/run/lvmetad.{socket,pid}
chmod +rx lvmetad
valgrind ./lvmetad -f &
PID=$!
sleep 1
./testclient
kill $PID
exit 0
}
sudo ./test.sh "$1" .


@ -1,147 +0,0 @@
/*
* Copyright (C) 2011-2014 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "tool.h"
#include "lvmetad-client.h"
#include "label.h"
#include "lvmcache.h"
#include "metadata.h"
const char *uuid1 = "abcd-efgh";
const char *uuid2 = "bbcd-efgh";
const char *vgid = "yada-yada";
const char *uuid3 = "cbcd-efgh";
const char *metadata2 = "{\n"
"id = \"yada-yada\"\n"
"seqno = 15\n"
"status = [\"READ\", \"WRITE\"]\n"
"flags = []\n"
"extent_size = 8192\n"
"physical_volumes {\n"
" pv0 {\n"
" id = \"abcd-efgh\"\n"
" }\n"
" pv1 {\n"
" id = \"bbcd-efgh\"\n"
" }\n"
" pv2 {\n"
" id = \"cbcd-efgh\"\n"
" }\n"
"}\n"
"}\n";
void _handle_reply(daemon_reply reply) {
const char *repl = daemon_reply_str(reply, "response", NULL);
const char *status = daemon_reply_str(reply, "status", NULL);
const char *vgid = daemon_reply_str(reply, "vgid", NULL);
fprintf(stderr, "[C] REPLY: %s\n", repl);
if (!strcmp(repl, "failed"))
fprintf(stderr, "[C] REASON: %s\n", daemon_reply_str(reply, "reason", "unknown"));
if (vgid)
fprintf(stderr, "[C] VGID: %s\n", vgid);
if (status)
fprintf(stderr, "[C] STATUS: %s\n", status);
daemon_reply_destroy(reply);
}
void _pv_add(daemon_handle h, const char *uuid, const char *metadata)
{
daemon_reply reply = daemon_send_simple(h, "pv_add", "uuid = %s", uuid,
"metadata = %b", metadata,
NULL);
_handle_reply(reply);
}
int scan(daemon_handle h, char *fn) {
struct device *dev = dev_cache_get(fn, NULL);
struct label *label;
if (!label_read(dev, &label, 0)) {
fprintf(stderr, "[C] no label found on %s\n", fn);
return;
}
char uuid[64];
if (!id_write_format(dev->pvid, uuid, 64)) {
fprintf(stderr, "[C] Failed to format PV UUID for %s", dev_name(dev));
return;
}
fprintf(stderr, "[C] found PV: %s\n", uuid);
struct lvmcache_info *info = (struct lvmcache_info *) label->info;
struct physical_volume pv = { 0, };
if (!(info->fmt->ops->pv_read(info->fmt, dev_name(dev), &pv, 0))) {
fprintf(stderr, "[C] Failed to read PV %s", dev_name(dev));
return;
}
struct format_instance_ctx fic;
struct format_instance *fid = info->fmt->ops->create_instance(info->fmt, &fic);
struct metadata_area *mda;
struct volume_group *vg = NULL;
dm_list_iterate_items(mda, &info->mdas) {
struct volume_group *this = mda->ops->vg_read(fid, "", mda);
if (this && (!vg || this->seqno > vg->seqno))
vg = this;
}
if (vg) {
char *buf = NULL;
/* TODO. This is not entirely correct, since export_vg_to_buffer
* adds trailing garbage to the buffer. We may need to use
* export_vg_to_config_tree and format the buffer ourselves. It
* does, however, work for now, since the garbage is well
* formatted and has no conflicting keys with the rest of the
* request. */
export_vg_to_buffer(vg, &buf);
daemon_reply reply =
daemon_send_simple(h, "pv_add", "uuid = %s", uuid,
"metadata = %b", strchr(buf, '{'),
NULL);
_handle_reply(reply);
}
}
void _dump_vg(daemon_handle h, const char *uuid)
{
daemon_reply reply = daemon_send_simple(h, "vg_by_uuid", "uuid = %s", uuid, NULL);
fprintf(stderr, "[C] reply buffer: %s\n", reply.buffer);
daemon_reply_destroy(reply);
}
int main(int argc, char **argv) {
daemon_handle h = lvmetad_open();
/* FIXME Missing error path */
if (argc > 1) {
int i;
struct cmd_context *cmd = create_toolcontext(0, NULL, 0, 0, 1, 1);
for (i = 1; i < argc; ++i) {
const char *uuid = NULL;
scan(h, argv[i]);
}
destroy_toolcontext(cmd);
/* FIXME Missing lvmetad_close() */
return 0;
}
_pv_add(h, uuid1, NULL);
_pv_add(h, uuid2, metadata2);
_dump_vg(h, vgid);
_pv_add(h, uuid3, NULL);
daemon_close(h); /* FIXME lvmetad_close? */
return 0;
}


@ -27,6 +27,8 @@ ifeq ("@BUILD_LOCKDDLM@", "yes")
LOCK_LIBS += -ldlm_lt
endif
SOURCES2 = lvmlockctl.c
TARGETS = lvmlockd lvmlockctl
.PHONY: install_lvmlockd
@ -36,14 +38,14 @@ include $(top_builddir)/make.tmpl
CFLAGS += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) -ldevmapper $(PTHREAD_LIBS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) $(PTHREAD_LIBS)
lvmlockd: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LOCK_LIBS) -ldaemonserver $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LOCK_LIBS) -ldaemonserver $(INTERNAL_LIBS) $(LIBS)
lvmlockctl: lvmlockctl.o $(top_builddir)/libdaemon/client/libdaemonclient.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmlockctl.o $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmlockctl.o $(INTERNAL_LIBS) $(LIBS)
install_lvmlockd: lvmlockd
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)


@ -8,9 +8,9 @@
* of the GNU Lesser General Public License v.2.1.
*/
#include "tool.h"
#include "tools/tool.h"
#include "lvmlockd-client.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include <stddef.h>
#include <getopt.h>


@ -11,7 +11,7 @@
#ifndef _LVM_LVMLOCKD_CLIENT_H
#define _LVM_LVMLOCKD_CLIENT_H
#include "daemon-client.h"
#include "libdaemon/client/daemon-client.h"
#define LVMLOCKD_SOCKET DEFAULT_RUN_DIR "/lvmlockd.socket"


@ -12,14 +12,13 @@
#define _ISOC99_SOURCE
#define _REENTRANT
#include "tool.h"
#include "tools/tool.h"
#include "daemon-io.h"
#include "libdaemon/client/daemon-io.h"
#include "daemon-server.h"
#include "lvm-version.h"
#include "lvmetad-client.h"
#include "lvmlockd-client.h"
#include "dm-ioctl.h" /* for DM_UUID_LEN */
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "device_mapper/misc/dm-ioctl.h"
/* #include <assert.h> */
#include <errno.h>
@ -144,10 +143,6 @@ static const int lvmlockd_protocol_version = 1;
static int daemon_quit;
static int adopt_opt;
static daemon_handle lvmetad_handle;
static pthread_mutex_t lvmetad_mutex;
static int lvmetad_connected;
/*
* We use a separate socket for dumping daemon info.
* This will not interfere with normal operations, and allows
@ -1009,52 +1004,6 @@ static void add_work_action(struct action *act)
pthread_mutex_unlock(&worker_mutex);
}
static daemon_reply send_lvmetad(const char *id, ...)
{
daemon_reply reply;
va_list ap;
int retries = 0;
int err;
va_start(ap, id);
/*
* mutex is used because all threads share a single
* lvmetad connection/handle.
*/
pthread_mutex_lock(&lvmetad_mutex);
retry:
if (!lvmetad_connected) {
lvmetad_handle = lvmetad_open(NULL);
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0) {
err = lvmetad_handle.error ?: lvmetad_handle.socket_fd;
pthread_mutex_unlock(&lvmetad_mutex);
log_error("lvmetad_open reconnect error %d", err);
memset(&reply, 0, sizeof(reply));
reply.error = err;
va_end(ap);
return reply;
} else {
log_debug("lvmetad reconnected");
lvmetad_connected = 1;
}
}
reply = daemon_send_simple_v(lvmetad_handle, id, ap);
/* lvmetad may have been restarted */
if ((reply.error == ECONNRESET) && (retries < 2)) {
daemon_close(lvmetad_handle);
lvmetad_connected = 0;
retries++;
goto retry;
}
pthread_mutex_unlock(&lvmetad_mutex);
va_end(ap);
return reply;
}
static int res_lock(struct lockspace *ls, struct resource *r, struct action *act, int *retry)
{
struct lock *lk;
@ -1250,6 +1199,18 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
rv = -EREMOVED;
}
/*
* lvmetad is no longer used, but the infrastructure for
* distributed cache validation remains. The points
* where vg or global cache state would be invalidated
* remain below and log_debug messages point out where
* they would occur.
*
* The comments related to "lvmetad" remain because they
* describe how some other local cache like lvmetad would
* be invalidated here.
*/
/*
* r is vglk: tell lvmetad to set the vg invalid
* flag, and provide the new r_version. If lvmetad finds
@ -1265,44 +1226,22 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
* caches, and tell lvmetad to set global invalid to 0.
*/
/*
* lvmetad not running:
* Even if we have not previously found lvmetad running,
* we attempt to connect and invalidate in case it has
* been started while lvmlockd is running. We don't
* want to allow lvmetad to be used with invalid data if
* it happens to be enabled and started after lvmlockd.
*/
if (inval_meta && (r->type == LD_RT_VG)) {
daemon_reply reply;
char *uuid;
log_debug("S %s R %s res_lock set lvmetad vg version %u",
log_debug("S %s R %s res_lock invalidate vg state version %u",
ls->name, r->name, new_version);
if (!ls->vg_uuid[0] || !strcmp(ls->vg_uuid, "none"))
uuid = (char *)"none";
else
uuid = ls->vg_uuid;
reply = send_lvmetad("set_vg_info",
"token = %s", "skip",
"uuid = %s", uuid,
"name = %s", ls->vg_name,
"version = " FMTd64, (int64_t)new_version,
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_vg_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
}
if (inval_meta && (r->type == LD_RT_GL)) {
daemon_reply reply;
log_debug("S %s R %s res_lock set lvmetad global invalid",
ls->name, r->name);
reply = send_lvmetad("set_global_info",
"token = %s", "skip",
"global_invalid = " FMTd64, INT64_C(1),
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_global_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
log_debug("S %s R %s res_lock invalidate global state", ls->name, r->name);
}
/*
@ -4812,7 +4751,7 @@ static void close_client_thread(void)
}
/*
* Get a list of all VGs with a lockd type (sanlock|dlm) from lvmetad.
* Get a list of all VGs with a lockd type (sanlock|dlm).
* We'll match this list against a list of existing lockspaces that are
* found in the lock manager.
*
@ -4823,6 +4762,9 @@ static void close_client_thread(void)
static int get_lockd_vgs(struct list_head *vg_lockd)
{
/* FIXME: get VGs some other way */
return -1;
#if 0
struct list_head update_vgs;
daemon_reply reply;
struct dm_config_node *cn;
@ -4979,6 +4921,7 @@ out:
}
return rv;
#endif
}
static char _dm_uuid[DM_UUID_LEN];
@ -5253,7 +5196,7 @@ static void adopt_locks(void)
gl_use_sanlock = 1;
list_for_each_entry(ls, &vg_lockd, list) {
log_debug("adopt lvmetad vg %s lock_type %s lock_args %s",
log_debug("adopt vg %s lock_type %s lock_args %s",
ls->vg_name, lm_str(ls->lm_type), ls->vg_args);
list_for_each_entry(r, &ls->resources, list)
@ -5318,7 +5261,7 @@ static void adopt_locks(void)
/*
* LS in ls_found, not in vg_lockd.
* An lvm lockspace found in the lock manager has no
* corresponding VG in lvmetad. This shouldn't usually
* corresponding VG. This shouldn't usually
* happen, but it's possible the VG could have been removed
* while the orphaned lockspace from it was still around.
* Report an error and leave the ls in the lm alone.
@ -5333,7 +5276,7 @@ static void adopt_locks(void)
/*
* LS in vg_lockd, not in ls_found.
* lockd vgs from lvmetad that do not have an existing lockspace.
* lockd vgs that do not have an existing lockspace.
* This wouldn't be unusual; we just skip the vg.
* But, if the vg has active lvs, then it should have had locks
* and a lockspace. Should we attempt to join the lockspace and
@ -5385,8 +5328,6 @@ static void adopt_locks(void)
memcpy(act->vg_args, ls->vg_args, MAX_ARGS);
act->host_id = ls->host_id;
/* set act->version from lvmetad data? */
log_debug("adopt add %s vg lockspace %s", lm_str(act->lm_type), act->vg_name);
rv = add_lockspace_thread(ls->name, act->vg_name, act->vg_uuid,
@ -5845,13 +5786,6 @@ static int main_loop(daemon_state *ds_arg)
setup_worker_thread();
setup_restart();
pthread_mutex_init(&lvmetad_mutex, NULL);
lvmetad_handle = lvmetad_open(NULL);
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0)
log_error("lvmetad_open error %d", lvmetad_handle.error);
else
lvmetad_connected = 1;
/*
* Attempt to rejoin lockspaces and adopt locks from a previous
* instance of lvmlockd that left behind lockspaces/locks.
@ -5973,7 +5907,6 @@ static int main_loop(daemon_state *ds_arg)
close_worker_thread();
close_client_thread();
closelog();
daemon_close(lvmetad_handle);
return 1; /* libdaemon uses 1 for success */
}


@ -11,13 +11,13 @@
#define _XOPEN_SOURCE 500 /* pthread */
#define _ISOC99_SOURCE
#include "tool.h"
#include "tools/tool.h"
#include "daemon-server.h"
#include "xlate.h"
#include "lib/mm/xlate.h"
#include "lvmlockd-internal.h"
#include "lvmlockd-client.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
/*
* Using synchronous _wait dlm apis so do not define _REENTRANT and
@ -699,7 +699,7 @@ int lm_hosts_dlm(struct lockspace *ls, int notify)
return 0;
memset(ls_nodes_path, 0, sizeof(ls_nodes_path));
snprintf(ls_nodes_path, PATH_MAX-1, "%s/%s/nodes",
snprintf(ls_nodes_path, PATH_MAX, "%s/%s/nodes",
DLM_LOCKSPACES_PATH, ls->name);
if (!(ls_dir = opendir(ls_nodes_path)))


@ -11,13 +11,13 @@
#define _XOPEN_SOURCE 500 /* pthread */
#define _ISOC99_SOURCE
#include "tool.h"
#include "tools/tool.h"
#include "daemon-server.h"
#include "xlate.h"
#include "lib/mm/xlate.h"
#include "lvmlockd-internal.h"
#include "lvmlockd-client.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "sanlock.h"
#include "sanlock_rv.h"
@ -294,6 +294,36 @@ out:
return host_id;
}
/* Prepare valid /dev/mapper/vgname-lvname with all the mangling */
static int build_dm_path(char *path, size_t path_len,
const char *vg_name, const char *lv_name)
{
struct dm_pool *mem;
char *dm_name;
int rv = 0;
if (!(mem = dm_pool_create("namepool", 1024))) {
log_error("Failed to create mempool.");
return -ENOMEM;
}
if (!(dm_name = dm_build_dm_name(mem, vg_name, lv_name, NULL))) {
log_error("Failed to build dm name for %s/%s.", vg_name, lv_name);
rv = -EINVAL;
goto fail;
}
if ((dm_snprintf(path, path_len, "%s/%s", dm_dir(), dm_name) < 0)) {
log_error("Failed to create path %s/%s.", dm_dir(), dm_name);
rv = -EINVAL;
}
fail:
dm_pool_destroy(mem);
return rv;
}
/*
* vgcreate
*
@ -336,7 +366,8 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if (strlen(lock_lv_name) + strlen(lock_args_version) + 2 > MAX_ARGS)
return -EARGS;
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
log_debug("S %s init_vg_san path %s", ls_name, disk.path);
@ -513,7 +544,8 @@ int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
strncpy(rd.rs.lockspace_name, ls_name, SANLK_NAME_LEN);
rd.rs.num_disks = 1;
snprintf(rd.rs.disks[0].path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
if ((rv = build_dm_path(rd.rs.disks[0].path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
align_size = sanlock_align(&rd.rs.disks[0]);
if (align_size <= 0) {
@ -612,7 +644,8 @@ int lm_rename_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_
return rv;
}
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
log_debug("S %s rename_vg_san path %s", ls_name, disk.path);
@ -1069,10 +1102,10 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
* and appending "lockctl" to get /path/to/lvmlockctl.
*/
memset(killpath, 0, sizeof(killpath));
snprintf(killpath, SANLK_PATH_LEN - 1, "%slockctl", LVM_PATH);
snprintf(killpath, SANLK_PATH_LEN, "%slockctl", LVM_PATH);
memset(killargs, 0, sizeof(killargs));
snprintf(killargs, SANLK_PATH_LEN - 1, "--kill %s", ls->vg_name);
snprintf(killargs, SANLK_PATH_LEN, "--kill %s", ls->vg_name);
rv = check_args_version(ls->vg_args, VG_LOCK_ARGS_MAJOR);
if (rv < 0) {
@ -1088,8 +1121,8 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
goto fail;
}
snprintf(disk_path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s",
ls->vg_name, lock_lv_name);
if ((ret = build_dm_path(disk_path, SANLK_PATH_LEN, ls->vg_name, lock_lv_name)))
goto fail;
/*
* When a vg is started, the internal sanlock lv should be


@ -30,11 +30,11 @@ include $(top_builddir)/make.tmpl
CFLAGS += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(DAEMON_LIBS) -ldaemonserver -ldevmapper $(PTHREAD_LIBS)
LIBS += $(DAEMON_LIBS) -ldaemonserver $(PTHREAD_LIBS)
lvmpolld: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(INTERNAL_LIBS) $(LIBS)
install_lvmpolld: lvmpolld
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)


@ -36,7 +36,7 @@ static int add_to_cmd_arr(const char ***cmdargv, const char *str, unsigned *ind)
const char **newargv;
if (*ind && !(*ind % MIN_ARGV_SIZE)) {
newargv = dm_realloc(*cmdargv, (*ind / MIN_ARGV_SIZE + 1) * MIN_ARGV_SIZE * sizeof(char *));
newargv = realloc(*cmdargv, (*ind / MIN_ARGV_SIZE + 1) * MIN_ARGV_SIZE * sizeof(char *));
if (!newargv)
return 0;
*cmdargv = newargv;
@ -50,7 +50,7 @@ static int add_to_cmd_arr(const char ***cmdargv, const char *str, unsigned *ind)
const char **cmdargv_ctr(const struct lvmpolld_lv *pdlv, const char *lvm_binary, unsigned abort_polling, unsigned handle_missing_pvs)
{
unsigned i = 0;
const char **cmd_argv = dm_malloc(MIN_ARGV_SIZE * sizeof(char *));
const char **cmd_argv = malloc(MIN_ARGV_SIZE * sizeof(char *));
if (!cmd_argv)
return NULL;
@ -98,7 +98,7 @@ const char **cmdargv_ctr(const struct lvmpolld_lv *pdlv, const char *lvm_binary,
return cmd_argv;
err:
dm_free(cmd_argv);
free(cmd_argv);
return NULL;
}
@ -122,7 +122,7 @@ static int copy_env(const char ***cmd_envp, unsigned *i, const char *exclude)
const char **cmdenvp_ctr(const struct lvmpolld_lv *pdlv)
{
unsigned i = 0;
const char **cmd_envp = dm_malloc(MIN_ARGV_SIZE * sizeof(char *));
const char **cmd_envp = malloc(MIN_ARGV_SIZE * sizeof(char *));
if (!cmd_envp)
return NULL;
@ -141,6 +141,6 @@ const char **cmdenvp_ctr(const struct lvmpolld_lv *pdlv)
return cmd_envp;
err:
dm_free(cmd_envp);
free(cmd_envp);
return NULL;
}


@ -20,10 +20,10 @@
#define _REENTRANT
#include "tool.h"
#include "tools/tool.h"
#include "lvmpolld-cmd-utils.h"
#include "lvmpolld-protocol.h"
#include "daemons/lvmpolld/lvmpolld-protocol.h"
#include <assert.h>
#include <errno.h>


@ -530,7 +530,7 @@ static response progress_info(client_handle h, struct lvmpolld_state *ls, reques
pdst_unlock(pdst);
dm_free(id);
free(id);
if (pdlv) {
if (st.error)
@ -673,7 +673,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
PD_LOG_PREFIX, "poll operation type mismatch on LV identified by",
id,
polling_op(pdlv_get_type(pdlv)), polling_op(type));
dm_free(id);
free(id);
return reply(LVMPD_RESP_EINVAL,
REASON_DIFFERENT_OPERATION_IN_PROGRESS);
}
@ -683,14 +683,14 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
lvname, sysdir, type, abort_polling, 2 * uinterval);
if (!pdlv) {
pdst_unlock(pdst);
dm_free(id);
free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
if (!pdst_locked_insert(pdst, id, pdlv)) {
pdlv_destroy(pdlv);
pdst_unlock(pdst);
ERROR(ls, "%s: %s", PD_LOG_PREFIX, "couldn't store internal LV data structure");
dm_free(id);
free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
if (!spawn_detached_thread(pdlv)) {
@ -698,7 +698,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
pdst_locked_remove(pdst, id);
pdlv_destroy(pdlv);
pdst_unlock(pdst);
dm_free(id);
free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
@ -709,7 +709,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
pdst_unlock(pdst);
dm_free(id);
free(id);
return daemon_reply_simple(LVMPD_RESP_OK, NULL);
}
@ -806,7 +806,7 @@ static int printout_raw_response(const char *prefix, const char *msg)
char *buf;
char *pos;
buf = dm_strdup(msg);
buf = strdup(msg);
pos = buf;
if (!buf)
@ -819,7 +819,7 @@ static int printout_raw_response(const char *prefix, const char *msg)
_log_line(pos, &b);
pos = next ? next + 1 : 0;
}
dm_free(buf);
free(buf);
return 1;
}


@ -14,7 +14,7 @@
#include "lvmpolld-common.h"
#include "config-util.h"
#include "libdaemon/client/config-util.h"
#include <fcntl.h>
#include <signal.h>
@ -27,12 +27,12 @@ static char *_construct_full_lvname(const char *vgname, const char *lvname)
size_t l;
l = strlen(vgname) + strlen(lvname) + 2; /* vg/lv and \0 */
name = (char *) dm_malloc(l * sizeof(char));
name = (char *) malloc(l * sizeof(char));
if (!name)
return NULL;
if (dm_snprintf(name, l, "%s/%s", vgname, lvname) < 0) {
dm_free(name);
free(name);
name = NULL;
}
@ -47,7 +47,7 @@ static char *_construct_lvm_system_dir_env(const char *sysdir)
* just single char to store NULL byte
*/
size_t l = sysdir ? strlen(sysdir) + 16 : 1;
char *env = (char *) dm_malloc(l * sizeof(char));
char *env = (char *) malloc(l * sizeof(char));
if (!env)
return NULL;
@ -55,7 +55,7 @@ static char *_construct_lvm_system_dir_env(const char *sysdir)
*env = '\0';
if (sysdir && dm_snprintf(env, l, "%s%s", LVM_SYSTEM_DIR, sysdir) < 0) {
dm_free(env);
free(env);
env = NULL;
}
@ -74,7 +74,7 @@ char *construct_id(const char *sysdir, const char *uuid)
size_t l;
l = strlen(uuid) + (sysdir ? strlen(sysdir) : 0) + 1;
id = (char *) dm_malloc(l * sizeof(char));
id = (char *) malloc(l * sizeof(char));
if (!id)
return NULL;
@ -82,7 +82,7 @@ char *construct_id(const char *sysdir, const char *uuid)
dm_snprintf(id, l, "%s", uuid);
if (r < 0) {
dm_free(id);
free(id);
id = NULL;
}
@ -95,7 +95,7 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
const char *sinterval, unsigned pdtimeout,
struct lvmpolld_store *pdst)
{
char *lvmpolld_id = dm_strdup(id), /* copy */
char *lvmpolld_id = strdup(id), /* copy */
*full_lvname = _construct_full_lvname(vgname, lvname), /* copy */
*lvm_system_dir_env = _construct_lvm_system_dir_env(sysdir); /* copy */
@ -106,12 +106,12 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
.lvid = _get_lvid(lvmpolld_id, sysdir),
.lvname = full_lvname,
.lvm_system_dir_env = lvm_system_dir_env,
.sinterval = dm_strdup(sinterval), /* copy */
.sinterval = strdup(sinterval), /* copy */
.pdtimeout = pdtimeout < MIN_POLLING_TIMEOUT ? MIN_POLLING_TIMEOUT : pdtimeout,
.cmd_state = { .retcode = -1, .signal = 0 },
.pdst = pdst,
.init_rq_count = 1
}, *pdlv = (struct lvmpolld_lv *) dm_malloc(sizeof(struct lvmpolld_lv));
}, *pdlv = (struct lvmpolld_lv *) malloc(sizeof(struct lvmpolld_lv));
if (!pdlv || !tmp.lvid || !tmp.lvname || !tmp.lvm_system_dir_env || !tmp.sinterval)
goto err;
@ -124,27 +124,27 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
return pdlv;
err:
dm_free((void *)full_lvname);
dm_free((void *)lvmpolld_id);
dm_free((void *)lvm_system_dir_env);
dm_free((void *)tmp.sinterval);
dm_free((void *)pdlv);
free((void *)full_lvname);
free((void *)lvmpolld_id);
free((void *)lvm_system_dir_env);
free((void *)tmp.sinterval);
free((void *)pdlv);
return NULL;
}
void pdlv_destroy(struct lvmpolld_lv *pdlv)
{
dm_free((void *)pdlv->lvmpolld_id);
dm_free((void *)pdlv->lvname);
dm_free((void *)pdlv->sinterval);
dm_free((void *)pdlv->lvm_system_dir_env);
dm_free((void *)pdlv->cmdargv);
dm_free((void *)pdlv->cmdenvp);
free((void *)pdlv->lvmpolld_id);
free((void *)pdlv->lvname);
free((void *)pdlv->sinterval);
free((void *)pdlv->lvm_system_dir_env);
free((void *)pdlv->cmdargv);
free((void *)pdlv->cmdenvp);
pthread_mutex_destroy(&pdlv->lock);
dm_free((void *)pdlv);
free((void *)pdlv);
}
unsigned pdlv_get_polling_finished(struct lvmpolld_lv *pdlv)
@ -194,7 +194,7 @@ void pdlv_set_polling_finished(struct lvmpolld_lv *pdlv, unsigned finished)
struct lvmpolld_store *pdst_init(const char *name)
{
struct lvmpolld_store *pdst = (struct lvmpolld_store *) dm_malloc(sizeof(struct lvmpolld_store));
struct lvmpolld_store *pdst = (struct lvmpolld_store *) malloc(sizeof(struct lvmpolld_store));
if (!pdst)
return NULL;
@ -212,7 +212,7 @@ struct lvmpolld_store *pdst_init(const char *name)
err_mutex:
dm_hash_destroy(pdst->store);
err_hash:
dm_free(pdst);
free(pdst);
return NULL;
}
@ -223,7 +223,7 @@ void pdst_destroy(struct lvmpolld_store *pdst)
dm_hash_destroy(pdst->store);
pthread_mutex_destroy(&pdst->lock);
dm_free(pdst);
free(pdst);
}
void pdst_locked_lock_all_pdlvs(const struct lvmpolld_store *pdst)
@ -321,7 +321,7 @@ void pdst_locked_destroy_all_pdlvs(const struct lvmpolld_store *pdst)
struct lvmpolld_thread_data *lvmpolld_thread_data_constructor(struct lvmpolld_lv *pdlv)
{
struct lvmpolld_thread_data *data = (struct lvmpolld_thread_data *) dm_malloc(sizeof(struct lvmpolld_thread_data));
struct lvmpolld_thread_data *data = (struct lvmpolld_thread_data *) malloc(sizeof(struct lvmpolld_thread_data));
if (!data)
return NULL;
@ -368,7 +368,7 @@ void lvmpolld_thread_data_destroy(void *thread_private)
pdst_unlock(data->pdlv->pdst);
}
/* may get reallocated in getline(). dm_free must not be used */
/* may get reallocated in getline(); release only with plain free() */
free(data->line);
if (data->fout && !fclose(data->fout))
@ -389,5 +389,5 @@ void lvmpolld_thread_data_destroy(void *thread_private)
if (data->errpipe[1] >= 0)
(void) close(data->errpipe[1]);
dm_free(data);
free(data);
}


@ -15,7 +15,7 @@
#ifndef _LVM_LVMPOLLD_PROTOCOL_H
#define _LVM_LVMPOLLD_PROTOCOL_H
#include "polling_ops.h"
#include "daemons/lvmpolld/polling_ops.h"
#define LVMPOLLD_PROTOCOL "lvmpolld"
#define LVMPOLLD_PROTOCOL_VERSION 1

device_mapper/Makefile (new file, 52 lines)

@ -0,0 +1,52 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
DEVICE_MAPPER_SOURCE=\
device_mapper/datastruct/bitset.c \
device_mapper/libdm-common.c \
device_mapper/libdm-config.c \
device_mapper/libdm-deptree.c \
device_mapper/libdm-file.c \
device_mapper/libdm-report.c \
device_mapper/libdm-string.c \
device_mapper/libdm-targets.c \
device_mapper/libdm-timestamp.c \
device_mapper/mm/pool.c \
device_mapper/regex/matcher.c \
device_mapper/regex/parse_rx.c \
device_mapper/regex/ttree.c \
device_mapper/ioctl/libdm-iface.c \
device_mapper/vdo/vdo_target.c \
device_mapper/vdo/status.c
DEVICE_MAPPER_DEPENDS=$(addprefix $(top_builddir)/,$(subst .c,.d,$(DEVICE_MAPPER_SOURCE)))
DEVICE_MAPPER_OBJECTS=$(addprefix $(top_builddir)/,$(subst .c,.o,$(DEVICE_MAPPER_SOURCE)))
CLEAN_TARGETS+=$(DEVICE_MAPPER_DEPENDS) $(DEVICE_MAPPER_OBJECTS)
#$(DEVICE_MAPPER_DEPENDS): INCLUDES+=$(VDO_INCLUDES)
#$(DEVICE_MAPPER_OBJECTS): INCLUDES+=$(VDO_INCLUDES)
ifeq ("$(USE_TRACKING)","yes")
ifeq (,$(findstring $(MAKECMDGOALS),cscope.out cflow clean distclean lcov \
help check check_local check_cluster check_lvmetad check_lvmpolld))
-include $(DEVICE_MAPPER_DEPENDS)
endif
endif
$(DEVICE_MAPPER_OBJECTS): INCLUDES+=-I$(top_srcdir)/device_mapper/
$(top_builddir)/device_mapper/libdevice-mapper.a: $(DEVICE_MAPPER_OBJECTS)
@echo " [AR] $@"
$(Q) $(RM) $@
$(Q) $(AR) rsv $@ $(DEVICE_MAPPER_OBJECTS) > /dev/null
CLEAN_TARGETS+=$(top_builddir)/device_mapper/libdevice-mapper.a

device_mapper/all.h (new file, 2142 lines)
File diff suppressed because it is too large


@ -0,0 +1,259 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include <ctype.h>
/* FIXME: calculate this. */
#define INT_SHIFT 5
dm_bitset_t dm_bitset_create(struct dm_pool *mem, unsigned num_bits)
{
unsigned n = (num_bits / DM_BITS_PER_INT) + 2;
size_t size = sizeof(int) * n;
dm_bitset_t bs;
if (mem)
bs = dm_pool_zalloc(mem, size);
else
bs = zalloc(size);
if (!bs)
return NULL;
*bs = num_bits;
return bs;
}
void dm_bitset_destroy(dm_bitset_t bs)
{
free(bs);
}
int dm_bitset_equal(dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
if (in1[i] != in2[i])
return 0;
return 1;
}
void dm_bit_and(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
out[i] = in1[i] & in2[i];
}
void dm_bit_union(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
out[i] = in1[i] | in2[i];
}
static int _test_word(uint32_t test, int bit)
{
uint32_t tb = test >> bit;
return (tb ? ffs(tb) + bit - 1 : -1);
}
static int _test_word_rev(uint32_t test, int bit)
{
uint32_t tb = test << (DM_BITS_PER_INT - 1 - bit);
return (tb ? bit - clz(tb) : -1);
}
int dm_bit_get_next(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit++; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit < (int) bs[0]) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = last_bit - (last_bit & (DM_BITS_PER_INT - 1)) +
DM_BITS_PER_INT;
}
return -1;
}
int dm_bit_get_prev(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit--; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit >= 0) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word_rev(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = (last_bit & ~(DM_BITS_PER_INT - 1)) - 1;
}
return -1;
}
int dm_bit_get_first(dm_bitset_t bs)
{
return dm_bit_get_next(bs, -1);
}
int dm_bit_get_last(dm_bitset_t bs)
{
return dm_bit_get_prev(bs, bs[0] + 1);
}
/*
* Based on the Linux kernel __bitmap_parselist from lib/bitmap.c
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits)
{
unsigned a, b;
int c, old_c, totaldigits, ndigits, nmaskbits;
int at_start, in_range;
dm_bitset_t mask = NULL;
const char *start = str;
size_t len;
scan:
len = strlen(str);
totaldigits = c = 0;
nmaskbits = 0;
do {
at_start = 1;
in_range = 0;
a = b = 0;
ndigits = totaldigits;
/* Get the next value or range of values */
while (len) {
old_c = c;
c = *str++;
len--;
if (isspace(c))
continue;
/* A '\0' or a ',' signal the end of a value or range */
if (c == '\0' || c == ',')
break;
/*
* Whitespace between digits is not allowed, but leading or
* trailing whitespace is fine. When old_c is whitespace and
* totaldigits == ndigits, the whitespace was leading.
* Trailing whitespace never reaches this check: c would have been
* ',' or '\0' and the inner loop above would already have ended.
*/
if ((totaldigits != ndigits) && isspace(old_c))
goto_bad;
if (c == '-') {
if (at_start || in_range)
goto_bad;
b = 0;
in_range = 1;
at_start = 1;
continue;
}
if (!isdigit(c))
goto_bad;
b = b * 10 + (c - '0');
if (!in_range)
a = b;
at_start = 0;
totaldigits++;
}
if (ndigits == totaldigits)
continue;
/* if no digit is after '-', it's wrong */
if (at_start && in_range)
goto_bad;
if (!(a <= b))
goto_bad;
if (b >= nmaskbits)
nmaskbits = b + 1;
while ((a <= b) && mask) {
dm_bit_set(mask, a);
a++;
}
} while (len && c == ',');
if (!mask) {
if (min_num_bits && (nmaskbits < min_num_bits))
nmaskbits = min_num_bits;
if (!(mask = dm_bitset_create(mem, nmaskbits)))
goto_bad;
str = start;
goto scan;
}
return mask;
bad:
if (mask) {
if (mem)
dm_pool_free(mem, mask);
else
dm_bitset_destroy(mask);
}
return NULL;
}
#if defined(__GNUC__)
/*
* Maintain backward compatibility with older versions that did not
* accept a 'min_num_bits' argument to dm_bitset_parse_list().
*/
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem);
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem)
{
return dm_bitset_parse_list(str, mem, 0);
}
#else /* if defined(__GNUC__) */
#endif
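
As a rough usage sketch (illustrative only, not code from this tree), the list parser and the bit-walking helpers above combine as follows; the declarations from device_mapper/all.h are assumed to be available:

#include <stdio.h>
#include "device_mapper/all.h"

/* Parse "0,3-5" into a heap-allocated bitset (mem == NULL) and walk the set bits. */
static void bitset_example(void)
{
	dm_bitset_t bs = dm_bitset_parse_list("0,3-5", NULL, 0);
	int bit;

	if (!bs)
		return;

	for (bit = dm_bit_get_first(bs); bit >= 0; bit = dm_bit_get_next(bs, bit))
		printf("bit %d is set\n", bit);	/* prints 0, 3, 4 and 5 */

	dm_bitset_destroy(bs);
}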

File diff suppressed because it is too large


@ -0,0 +1,88 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef LIB_DMTARGETS_H
#define LIB_DMTARGETS_H
#include <inttypes.h>
#include <sys/types.h>
struct dm_ioctl;
struct target {
uint64_t start;
uint64_t length;
char *type;
char *params;
struct target *next;
};
struct dm_task {
int type;
char *dev_name;
char *mangled_dev_name;
struct target *head, *tail;
int read_only;
uint32_t event_nr;
int major;
int minor;
int allow_default_major_fallback;
uid_t uid;
gid_t gid;
mode_t mode;
uint32_t read_ahead;
uint32_t read_ahead_flags;
union {
struct dm_ioctl *v4;
} dmi;
char *newname;
char *message;
char *geometry;
uint64_t sector;
int no_flush;
int no_open_count;
int skip_lockfs;
int query_inactive_table;
int suppress_identical_reload;
dm_add_node_t add_node;
uint64_t existing_table_size;
int cookie_set;
int new_uuid;
int secure_data;
int retry_remove;
int deferred_remove;
int enable_checks;
int expected_errno;
int ioctl_errno;
int record_timestamp;
char *uuid;
char *mangled_uuid;
};
struct cmd_data {
const char *name;
const unsigned cmd;
const int version[3];
};
int dm_check_version(void);
uint64_t dm_task_get_existing_table_size(struct dm_task *dmt);
#endif

device_mapper/libdm-common.c (new file, 2692 lines)
File diff suppressed because it is too large


@ -0,0 +1,58 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef LIB_DMCOMMON_H
#define LIB_DMCOMMON_H
#include "device_mapper/all.h"
#define DM_DEFAULT_NAME_MANGLING_MODE_ENV_VAR_NAME "DM_DEFAULT_NAME_MANGLING_MODE"
#define DEV_NAME(dmt) (dmt->mangled_dev_name ? : dmt->dev_name)
#define DEV_UUID(DMT) (dmt->mangled_uuid ? : dmt->uuid)
int mangle_string(const char *str, const char *str_name, size_t len,
char *buf, size_t buf_len, dm_string_mangling_t mode);
int unmangle_string(const char *str, const char *str_name, size_t len,
char *buf, size_t buf_len, dm_string_mangling_t mode);
int check_multiple_mangled_string_allowed(const char *str, const char *str_name,
dm_string_mangling_t mode);
struct target *create_target(uint64_t start,
uint64_t len,
const char *type, const char *params);
int add_dev_node(const char *dev_name, uint32_t minor, uint32_t major,
uid_t uid, gid_t gid, mode_t mode, int check_udev, unsigned rely_on_udev);
int rm_dev_node(const char *dev_name, int check_udev, unsigned rely_on_udev);
int rename_dev_node(const char *old_name, const char *new_name,
int check_udev, unsigned rely_on_udev);
int get_dev_node_read_ahead(const char *dev_name, uint32_t major, uint32_t minor,
uint32_t *read_ahead);
int set_dev_node_read_ahead(const char *dev_name, uint32_t major, uint32_t minor,
uint32_t read_ahead, uint32_t read_ahead_flags);
void update_devs(void);
void selinux_release(void);
void inc_suspended(void);
void dec_suspended(void);
int parse_thin_pool_status(const char *params, struct dm_status_thin_pool *s);
int get_uname_version(unsigned *major, unsigned *minor, unsigned *release);
#endif

device_mapper/libdm-config.c (new file, 1486 lines)
File diff suppressed because it is too large

File diff suppressed because it is too large

device_mapper/libdm-file.c (new file, 261 lines)

@ -0,0 +1,261 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "misc/dmlib.h"
#include <sys/file.h>
#include <fcntl.h>
#include <dirent.h>
#include <unistd.h>
static int _is_dir(const char *path)
{
struct stat st;
if (stat(path, &st) < 0) {
log_sys_error("stat", path);
return 0;
}
if (!S_ISDIR(st.st_mode)) {
log_error("Existing path %s is not "
"a directory.", path);
return 0;
}
return 1;
}
static int _create_dir_recursive(const char *dir)
{
char *orig, *s;
int rc, r = 0;
log_verbose("Creating directory \"%s\"", dir);
/* Create parent directories */
orig = s = strdup(dir);
if (!s) {
log_error("Failed to duplicate directory name.");
return 0;
}
while ((s = strchr(s, '/')) != NULL) {
*s = '\0';
if (*orig) {
rc = mkdir(orig, 0777);
if (rc < 0) {
if (errno == EEXIST) {
if (!_is_dir(orig))
goto_out;
} else {
if (errno != EROFS)
log_sys_error("mkdir", orig);
goto out;
}
}
}
*s++ = '/';
}
/* Create final directory */
rc = mkdir(dir, 0777);
if (rc < 0) {
if (errno == EEXIST) {
if (!_is_dir(dir))
goto_out;
} else {
if (errno != EROFS)
log_sys_error("mkdir", orig);
goto out;
}
}
r = 1;
out:
free(orig);
return r;
}
int dm_create_dir(const char *dir)
{
struct stat info;
if (!*dir)
return 1;
if (stat(dir, &info) == 0 && S_ISDIR(info.st_mode))
return 1;
if (!_create_dir_recursive(dir))
return_0;
return 1;
}
int dm_is_empty_dir(const char *dir)
{
struct dirent *dirent;
DIR *d;
if (!(d = opendir(dir))) {
log_sys_error("opendir", dir);
return 0;
}
while ((dirent = readdir(d)))
if (strcmp(dirent->d_name, ".") && strcmp(dirent->d_name, ".."))
break;
if (closedir(d))
log_sys_error("closedir", dir);
return dirent ? 0 : 1;
}
int dm_fclose(FILE *stream)
{
int prev_fail = ferror(stream);
int fclose_fail = fclose(stream);
/* If there was a previous failure, but fclose succeeded,
clear errno, since ferror does not set it, and its value
may be unrelated to the ferror-reported failure. */
if (prev_fail && !fclose_fail)
errno = 0;
return prev_fail || fclose_fail ? EOF : 0;
}
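
A minimal illustrative use of dm_fclose() (hypothetical file name, not code from this tree): it still returns EOF when an earlier write on the stream failed, even if the final fclose() itself succeeds.

FILE *fp = fopen("/tmp/example.txt", "w");	/* hypothetical path */
if (fp) {
	fprintf(fp, "data\n");
	if (dm_fclose(fp) == EOF)
		perror("write to /tmp/example.txt failed");
}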
int dm_create_lockfile(const char *lockfile)
{
int fd, value;
size_t bufferlen;
ssize_t write_out;
struct flock lock;
char buffer[50];
int retries = 0;
if ((fd = open(lockfile, O_CREAT | O_WRONLY,
(S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH))) < 0) {
log_error("Cannot open lockfile [%s], error was [%s]",
lockfile, strerror(errno));
return 0;
}
lock.l_type = F_WRLCK;
lock.l_start = 0;
lock.l_whence = SEEK_SET;
lock.l_len = 0;
retry_fcntl:
if (fcntl(fd, F_SETLK, &lock) < 0) {
switch (errno) {
case EINTR:
goto retry_fcntl;
case EACCES:
case EAGAIN:
if (retries == 20) {
log_error("Cannot lock lockfile [%s], error was [%s]",
lockfile, strerror(errno));
break;
} else {
++ retries;
usleep(1000);
goto retry_fcntl;
}
default:
log_error("process is already running");
}
goto fail_close;
}
if (ftruncate(fd, 0) < 0) {
log_error("Cannot truncate pidfile [%s], error was [%s]",
lockfile, strerror(errno));
goto fail_close_unlink;
}
snprintf(buffer, sizeof(buffer), "%u\n", getpid());
bufferlen = strlen(buffer);
write_out = write(fd, buffer, bufferlen);
if ((write_out < 0) || (write_out == 0 && errno)) {
log_error("Cannot write pid to pidfile [%s], error was [%s]",
lockfile, strerror(errno));
goto fail_close_unlink;
}
if ((write_out == 0) || ((size_t)write_out < bufferlen)) {
log_error("Cannot write pid to pidfile [%s], shortwrite of"
"[%" PRIsize_t "] bytes, expected [%" PRIsize_t "]\n",
lockfile, write_out, bufferlen);
goto fail_close_unlink;
}
if ((value = fcntl(fd, F_GETFD, 0)) < 0) {
log_error("Cannot get close-on-exec flag from pidfile [%s], "
"error was [%s]", lockfile, strerror(errno));
goto fail_close_unlink;
}
value |= FD_CLOEXEC;
if (fcntl(fd, F_SETFD, value) < 0) {
log_error("Cannot set close-on-exec flag from pidfile [%s], "
"error was [%s]", lockfile, strerror(errno));
goto fail_close_unlink;
}
return 1;
fail_close_unlink:
if (unlink(lockfile))
log_sys_debug("unlink", lockfile);
fail_close:
if (close(fd))
log_sys_debug("close", lockfile);
return 0;
}
int dm_daemon_is_running(const char* lockfile)
{
int fd;
struct flock lock;
if((fd = open(lockfile, O_RDONLY)) < 0)
return 0;
lock.l_type = F_WRLCK;
lock.l_start = 0;
lock.l_whence = SEEK_SET;
lock.l_len = 0;
if (fcntl(fd, F_GETLK, &lock) < 0) {
log_error("Cannot check lock status of lockfile [%s], error was [%s]",
lockfile, strerror(errno));
if (close(fd))
stack;
return 0;
}
if (close(fd))
stack;
return (lock.l_type == F_UNLCK) ? 0 : 1;
}

device_mapper/libdm-report.c (new file, 5105 lines)
File diff suppressed because it is too large


@ -0,0 +1,718 @@
/*
* Copyright (C) 2006-2015 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "misc/dmlib.h"
#include <ctype.h>
#include <stdarg.h>
#include <math.h> /* fabs() */
#include <float.h> /* DBL_EPSILON */
/*
* consume characters while they match the predicate function.
*/
static char *_consume(char *buffer, int (*fn) (int))
{
while (*buffer && fn(*buffer))
buffer++;
return buffer;
}
static int _isword(int c)
{
return !isspace(c);
}
/*
* Split buffer into NULL-separated words in argv.
* Returns number of words.
*/
int dm_split_words(char *buffer, unsigned max,
unsigned ignore_comments __attribute__((unused)),
char **argv)
{
unsigned arg;
for (arg = 0; arg < max; arg++) {
buffer = _consume(buffer, isspace);
if (!*buffer)
break;
argv[arg] = buffer;
buffer = _consume(buffer, _isword);
if (*buffer) {
*buffer = '\0';
buffer++;
}
}
return arg;
}
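
An illustrative call (not from this tree); the buffer is split in place, so it must be writable:

char line[] = " vgs --noheadings -o name ";
char *argv[8];
int argc = dm_split_words(line, 8, 0, argv);	/* argc == 4: "vgs", "--noheadings", "-o", "name" */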
/*
* Remove hyphen quoting from a component of a name.
* NULL-terminates the component and returns start of next component.
*/
static char *_unquote(char *component)
{
char *c = component;
char *o = c;
char *r;
while (*c) {
if (*(c + 1)) {
if (*c == '-') {
if (*(c + 1) == '-')
c++;
else
break;
}
}
*o = *c;
o++;
c++;
}
r = (*c) ? c + 1 : c;
*o = '\0';
return r;
}
int dm_split_lvm_name(struct dm_pool *mem, const char *dmname,
char **vgname, char **lvname, char **layer)
{
if (!vgname || !lvname || !layer) {
log_error(INTERNAL_ERROR "dm_split_lvm_name: Forbidden NULL parameter detected.");
return 0;
}
if (mem && (!dmname || !(*vgname = dm_pool_strdup(mem, dmname)))) {
log_error("Failed to duplicate lvm name.");
return 0;
} else if (!*vgname) {
log_error("Missing lvm name for split.");
return 0;
}
_unquote(*layer = _unquote(*lvname = _unquote(*vgname)));
return 1;
}
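
A sketch of the hyphen unquoting above (illustrative only, assuming a throwaway dm_pool and <stdio.h>):

struct dm_pool *mem = dm_pool_create("example", 256);
char *vg, *lv, *layer;

if (mem && dm_split_lvm_name(mem, "my--vg-lv0-cow", &vg, &lv, &layer))
	printf("vg=%s lv=%s layer=%s\n", vg, lv, layer);	/* vg=my-vg lv=lv0 layer=cow */
if (mem)
	dm_pool_destroy(mem);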
/*
* On error, up to glibc 2.0.6, snprintf returned -1 if buffer was too small;
* From glibc 2.1 it returns number of chars (excl. trailing null) that would
* have been written had there been room.
*
* dm_snprintf reverts to the old behaviour.
*/
int dm_snprintf(char *buf, size_t bufsize, const char *format, ...)
{
int n;
va_list ap;
va_start(ap, format);
n = vsnprintf(buf, bufsize, format, ap);
va_end(ap);
if (n < 0 || ((unsigned) n >= bufsize))
return -1;
return n;
}
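
Two illustrative calls (not from this tree) showing the return convention described above:

char buf[8];
int n1 = dm_snprintf(buf, sizeof(buf), "%s", "lv0");		/* n1 == 3, buf == "lv0" */
int n2 = dm_snprintf(buf, sizeof(buf), "%s", "very-long-name");	/* n2 == -1, the result would not fit */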
const char *dm_basename(const char *path)
{
const char *p = strrchr(path, '/');
return p ? p + 1 : path;
}
int dm_vasprintf(char **result, const char *format, va_list aq)
{
int i, n, size = 16;
va_list ap;
char *buf = malloc(size);
*result = 0;
if (!buf)
return -1;
for (i = 0;; i++) {
va_copy(ap, aq);
n = vsnprintf(buf, size, format, ap);
va_end(ap);
if (0 <= n && n < size)
break;
free(buf);
/* Up to glibc 2.0.6 returns -1 */
size = (n < 0) ? size * 2 : n + 1;
if (!(buf = malloc(size)))
return -1;
}
if (i > 1) {
/* Reallocating more than once? */
if (!(*result = strdup(buf))) {
free(buf);
return -1;
}
free(buf);
} else
*result = buf;
return n + 1;
}
int dm_asprintf(char **result, const char *format, ...)
{
int r;
va_list ap;
va_start(ap, format);
r = dm_vasprintf(result, format, ap);
va_end(ap);
return r;
}
/*
* Count occurrences of 'c1' and 'c2' in 'str' until we reach a null char.
*
* Returns:
* len - incremented for each char we encounter.
* count - number of occurrences of 'c1' and 'c2'.
*/
static void _count_chars(const char *str, size_t *len, int *count,
const int c1, const int c2)
{
const char *ptr;
for (ptr = str; *ptr; ptr++, (*len)++)
if (*ptr == c1 || *ptr == c2)
(*count)++;
}
/*
* Count occurrences of 'c' in 'str' of length 'size'.
*
* Returns:
* Number of occurrences of 'c'
*/
unsigned dm_count_chars(const char *str, size_t len, const int c)
{
size_t i;
unsigned count = 0;
for (i = 0; i < len; i++)
if (str[i] == c)
count++;
return count;
}
/*
* Length of string after escaping double quotes and backslashes.
*/
size_t dm_escaped_len(const char *str)
{
size_t len = 1;
int count = 0;
_count_chars(str, &len, &count, '\"', '\\');
return count + len;
}
/*
* Copies a string, quoting orig_char with quote_char.
* Optionally also quote quote_char.
*/
static void _quote_characters(char **out, const char *src,
const int orig_char, const int quote_char,
int quote_quote_char)
{
while (*src) {
if (*src == orig_char ||
(*src == quote_char && quote_quote_char))
*(*out)++ = quote_char;
*(*out)++ = *src++;
}
}
static void _unquote_one_character(char *src, const char orig_char,
const char quote_char)
{
char *out;
char s, n;
/* Optimise for the common case where no changes are needed. */
while ((s = *src++)) {
if (s == quote_char &&
((n = *src) == orig_char || n == quote_char)) {
out = src++;
*(out - 1) = n;
while ((s = *src++)) {
if (s == quote_char &&
((n = *src) == orig_char || n == quote_char)) {
s = n;
src++;
}
*out = s;
out++;
}
*out = '\0';
return;
}
}
}
/*
* Unquote each character given in orig_char array and unquote quote_char
* as well. Also save the first occurrence of each character from orig_char
* that was found unquoted in arr_substr_first_unquoted array. This way we can
* process several characters in one go.
*/
static void _unquote_characters(char *src, const char *orig_chars,
size_t num_orig_chars,
const char quote_char,
char *arr_substr_first_unquoted[])
{
char *out = src;
char c, s, n;
unsigned i;
while ((s = *src++)) {
for (i = 0; i < num_orig_chars; i++) {
c = orig_chars[i];
if (s == quote_char &&
((n = *src) == c || n == quote_char)) {
s = n;
src++;
break;
}
if (arr_substr_first_unquoted && (s == c) &&
!arr_substr_first_unquoted[i])
arr_substr_first_unquoted[i] = out;
};
*out++ = s;
}
*out = '\0';
}
/*
* Copies a string, quoting hyphens with hyphens.
*/
static void _quote_hyphens(char **out, const char *src)
{
_quote_characters(out, src, '-', '-', 0);
}
/*
* <vg>-<lv>-<layer> or if !layer just <vg>-<lv>.
*/
char *dm_build_dm_name(struct dm_pool *mem, const char *vgname,
const char *lvname, const char *layer)
{
size_t len = 1;
int hyphens = 1;
char *r, *out;
_count_chars(vgname, &len, &hyphens, '-', 0);
_count_chars(lvname, &len, &hyphens, '-', 0);
if (layer && *layer) {
_count_chars(layer, &len, &hyphens, '-', 0);
hyphens++;
}
len += hyphens;
if (!(r = dm_pool_alloc(mem, len))) {
log_error("build_dm_name: Allocation failed for %" PRIsize_t
" for %s %s %s.", len, vgname, lvname, layer);
return NULL;
}
out = r;
_quote_hyphens(&out, vgname);
*out++ = '-';
_quote_hyphens(&out, lvname);
if (layer && *layer) {
/* No hyphen if the layer begins with _ e.g. _mlog */
if (*layer != '_')
*out++ = '-';
_quote_hyphens(&out, layer);
}
*out = '\0';
return r;
}
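/*
 * Illustrative examples (not part of the original source) of the hyphen
 * quoting above.  Hyphens inside the component names are doubled so the
 * single '-' separators remain unambiguous:
 *
 *   dm_build_dm_name(mem, "vg-test", "lv0", NULL)   => "vg--test-lv0"
 *   dm_build_dm_name(mem, "vg", "lv0", "cow")       => "vg-lv0-cow"
 *   dm_build_dm_name(mem, "vg", "lv0", "_mlog")     => "vg-lv0_mlog"
 */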
char *dm_build_dm_uuid(struct dm_pool *mem, const char *uuid_prefix, const char *lvid, const char *layer)
{
char *dmuuid;
size_t len;
if (!layer)
layer = "";
len = strlen(uuid_prefix) + strlen(lvid) + strlen(layer) + 2;
if (!(dmuuid = dm_pool_alloc(mem, len))) {
log_error("build_dm_name: Allocation failed for %" PRIsize_t
" %s %s.", len, lvid, layer);
return NULL;
}
sprintf(dmuuid, "%s%s%s%s", uuid_prefix, lvid, (*layer) ? "-" : "", layer);
return dmuuid;
}
/*
* Copies a string, quoting double quotes with backslashes.
*/
char *dm_escape_double_quotes(char *out, const char *src)
{
char *buf = out;
_quote_characters(&buf, src, '\"', '\\', 1);
*buf = '\0';
return out;
}
/*
* Undo quoting in situ.
*/
void dm_unescape_double_quotes(char *src)
{
_unquote_one_character(src, '\"', '\\');
}
/*
* Unescape colons and "at" signs in situ and save the substrings
* starting at the position of the first unescaped colon and the
* first unescaped "at" sign. This is normally used to unescape
* device names used as PVs.
*/
void dm_unescape_colons_and_at_signs(char *src,
char **substr_first_unquoted_colon,
char **substr_first_unquoted_at_sign)
{
const char *orig_chars = ":@";
char *arr_substr_first_unquoted[] = {NULL, NULL, NULL};
_unquote_characters(src, orig_chars, 2, '\\', arr_substr_first_unquoted);
if (substr_first_unquoted_colon)
*substr_first_unquoted_colon = arr_substr_first_unquoted[0];
if (substr_first_unquoted_at_sign)
*substr_first_unquoted_at_sign = arr_substr_first_unquoted[1];
}
int dm_strncpy(char *dest, const char *src, size_t n)
{
if (memccpy(dest, src, 0, n))
return 1;
if (n > 0)
dest[n - 1] = '\0';
return 0;
}
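/*
 * Usage sketch (illustrative, not part of the original source).  Unlike
 * strncpy(), dm_strncpy() always NUL-terminates the destination and
 * reports whether the whole source fitted:
 *
 *   char name[4];
 *
 *   if (!dm_strncpy(name, "lvol0", sizeof(name)))
 *       log_error("Name truncated to \"%s\".", name);  // name holds "lvo"
 */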
/* Test if the doubles are close enough to be considered equal */
static int _close_enough(double d1, double d2)
{
return fabs(d1 - d2) < DBL_EPSILON;
}
#define BASE_UNKNOWN 0
#define BASE_SHARED 1
#define BASE_1024 8
#define BASE_1000 15
#define BASE_SPECIAL 21
#define NUM_UNIT_PREFIXES 6
#define NUM_SPECIAL 3
#define SIZE_BUF 128
const char *dm_size_to_string(struct dm_pool *mem, uint64_t size,
char unit_type, int use_si_units,
uint64_t unit_factor, int include_suffix,
dm_size_suffix_t suffix_type)
{
unsigned base = BASE_UNKNOWN;
unsigned s;
int precision;
double d;
uint64_t byte = UINT64_C(0);
uint64_t units = UINT64_C(1024);
char *size_buf = NULL;
char new_unit_type = '\0', unit_type_buf[2];
const char *prefix = "";
const char * const size_str[][3] = {
/* BASE_UNKNOWN */
{" ", " ", " "}, /* [0] */
/* BASE_SHARED - Used if use_si_units = 0 */
{" Exabyte", " EB", "E"}, /* [1] */
{" Petabyte", " PB", "P"}, /* [2] */
{" Terabyte", " TB", "T"}, /* [3] */
{" Gigabyte", " GB", "G"}, /* [4] */
{" Megabyte", " MB", "M"}, /* [5] */
{" Kilobyte", " KB", "K"}, /* [6] */
{" Byte ", " B", "B"}, /* [7] */
/* BASE_1024 - Used if use_si_units = 1 */
{" Exbibyte", " EiB", "e"}, /* [8] */
{" Pebibyte", " PiB", "p"}, /* [9] */
{" Tebibyte", " TiB", "t"}, /* [10] */
{" Gibibyte", " GiB", "g"}, /* [11] */
{" Mebibyte", " MiB", "m"}, /* [12] */
{" Kibibyte", " KiB", "k"}, /* [13] */
{" Byte ", " B", "b"}, /* [14] */
/* BASE_1000 - Used if use_si_units = 1 */
{" Exabyte", " EB", "E"}, /* [15] */
{" Petabyte", " PB", "P"}, /* [16] */
{" Terabyte", " TB", "T"}, /* [17] */
{" Gigabyte", " GB", "G"}, /* [18] */
{" Megabyte", " MB", "M"}, /* [19] */
{" Kilobyte", " kB", "K"}, /* [20] */
/* BASE_SPECIAL */
{" Byte ", " B ", "B"}, /* [21] (shared with BASE_1000) */
{" Units ", " Un", "U"}, /* [22] */
{" Sectors ", " Se", "S"}, /* [23] */
};
if (!(size_buf = dm_pool_alloc(mem, SIZE_BUF))) {
log_error("no memory for size display buffer");
return "";
}
if (!use_si_units) {
/* Case-independent match */
for (s = 0; s < NUM_UNIT_PREFIXES; s++)
if (toupper((int) unit_type) ==
*size_str[BASE_SHARED + s][2]) {
base = BASE_SHARED;
break;
}
} else {
/* Case-dependent match for powers of 1000 */
for (s = 0; s < NUM_UNIT_PREFIXES; s++)
if (unit_type == *size_str[BASE_1000 + s][2]) {
base = BASE_1000;
break;
}
/* Case-dependent match for powers of 1024 */
if (base == BASE_UNKNOWN)
for (s = 0; s < NUM_UNIT_PREFIXES; s++)
if (unit_type == *size_str[BASE_1024 + s][2]) {
base = BASE_1024;
break;
}
}
if (base == BASE_UNKNOWN)
/* Check for special units - s, b or u */
for (s = 0; s < NUM_SPECIAL; s++)
if (toupper((int) unit_type) ==
*size_str[BASE_SPECIAL + s][2]) {
base = BASE_SPECIAL;
break;
}
if (size == UINT64_C(0)) {
if (base == BASE_UNKNOWN)
s = 0;
sprintf(size_buf, "0%s", include_suffix ? size_str[base + s][suffix_type] : "");
return size_buf;
}
size *= UINT64_C(512);
if (base != BASE_UNKNOWN) {
if (!unit_factor) {
unit_type_buf[0] = unit_type;
unit_type_buf[1] = '\0';
if (!(unit_factor = dm_units_to_factor(&unit_type_buf[0], &new_unit_type, 1, NULL)) ||
unit_type != new_unit_type) {
/* The two functions should match (and unrecognised units get treated like 'h'). */
log_error(INTERNAL_ERROR "Inconsistent units: %c and %c.", unit_type, new_unit_type);
return "";
}
}
byte = unit_factor;
} else {
/* Human-readable style */
if (unit_type == 'H' || unit_type == 'R') {
units = UINT64_C(1000);
base = BASE_1000;
} else {
units = UINT64_C(1024);
base = BASE_1024;
}
if (!use_si_units)
base = BASE_SHARED;
byte = units * units * units * units * units * units;
for (s = 0; s < NUM_UNIT_PREFIXES && size < byte; s++)
byte /= units;
if ((s < NUM_UNIT_PREFIXES) &&
((unit_type == 'R') || (unit_type == 'r'))) {
/* When rounding would hide a difference, add a '<' prefix,
 * e.g. 2043M is slightly less than 2.00G and prints as <2.00G.
 * This version uses 2-digit fixed precision. */
d = 100. * (double) size / byte;
if (!_close_enough(floorl(d), nearbyintl(d)))
prefix = "<";
}
include_suffix = 1;
}
/* FIXME Make precision configurable */
switch (toupper(*size_str[base + s][DM_SIZE_UNIT])) {
case 'B':
case 'S':
precision = 0;
break;
default:
precision = 2;
}
snprintf(size_buf, SIZE_BUF, "%s%.*f%s", prefix, precision,
(double) size / byte, include_suffix ? size_str[base + s][suffix_type] : "");
return size_buf;
}
uint64_t dm_units_to_factor(const char *units, char *unit_type,
int strict, const char **endptr)
{
char *ptr = NULL;
uint64_t v;
double custom_value = 0;
uint64_t multiplier;
if (endptr)
*endptr = units;
if (isdigit(*units)) {
custom_value = strtod(units, &ptr);
if (ptr == units)
return 0;
v = (uint64_t) strtoull(units, NULL, 10);
if (_close_enough((double) v, custom_value))
custom_value = 0; /* Use integer arithmetic */
units = ptr;
} else
v = 1;
/* Only one units char permitted in strict mode. */
if (strict && units[0] && units[1])
return 0;
if (v == 1)
*unit_type = *units;
else
*unit_type = 'U';
switch (*units) {
case 'h':
case 'H':
case 'r':
case 'R':
multiplier = v = UINT64_C(1);
*unit_type = *units;
break;
case 'b':
case 'B':
multiplier = UINT64_C(1);
break;
#define KILO UINT64_C(1024)
case 's':
case 'S':
multiplier = (KILO/2);
break;
case 'k':
multiplier = KILO;
break;
case 'm':
multiplier = KILO * KILO;
break;
case 'g':
multiplier = KILO * KILO * KILO;
break;
case 't':
multiplier = KILO * KILO * KILO * KILO;
break;
case 'p':
multiplier = KILO * KILO * KILO * KILO * KILO;
break;
case 'e':
multiplier = KILO * KILO * KILO * KILO * KILO * KILO;
break;
#undef KILO
#define KILO UINT64_C(1000)
case 'K':
multiplier = KILO;
break;
case 'M':
multiplier = KILO * KILO;
break;
case 'G':
multiplier = KILO * KILO * KILO;
break;
case 'T':
multiplier = KILO * KILO * KILO * KILO;
break;
case 'P':
multiplier = KILO * KILO * KILO * KILO * KILO;
break;
case 'E':
multiplier = KILO * KILO * KILO * KILO * KILO * KILO;
break;
#undef KILO
default:
return 0;
}
if (endptr)
*endptr = units + 1;
if (_close_enough(custom_value, 0.))
return v * multiplier; /* Use integer arithmetic */
else
return (uint64_t) (custom_value * multiplier);
}
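/*
 * Illustrative examples (not part of the original source).  The returned
 * factor is in bytes; a leading number scales it.  Lower-case suffixes use
 * powers of 1024 and upper-case ones powers of 1000, while b/B, s/S and
 * h/H/r/R mean bytes, 512-byte sectors and human-readable respectively.
 * With "char t;":
 *
 *   dm_units_to_factor("m", &t, 1, NULL)   => 1048576      (t == 'm')
 *   dm_units_to_factor("G", &t, 1, NULL)   => 1000000000   (t == 'G')
 *   dm_units_to_factor("4k", &t, 1, NULL)  => 4096         (t == 'U')
 *   dm_units_to_factor("s", &t, 1, NULL)   => 512          (t == 's')
 */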

/*
* Copyright (C) 2005-2015 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "misc/dmlib.h"
#include "libdm-common.h"
int dm_get_status_snapshot(struct dm_pool *mem, const char *params,
struct dm_status_snapshot **status)
{
struct dm_status_snapshot *s;
int r;
if (!params) {
log_error("Failed to parse invalid snapshot params.");
return 0;
}
if (!(s = dm_pool_zalloc(mem, sizeof(*s)))) {
log_error("Failed to allocate snapshot status structure.");
return 0;
}
r = sscanf(params, FMTu64 "/" FMTu64 " " FMTu64,
&s->used_sectors, &s->total_sectors,
&s->metadata_sectors);
if (r == 3 || r == 2)
s->has_metadata_sectors = (r == 3);
else if (!strcmp(params, "Invalid"))
s->invalid = 1;
else if (!strcmp(params, "Merge failed"))
s->merge_failed = 1;
else if (!strcmp(params, "Overflow"))
s->overflow = 1;
else {
dm_pool_free(mem, s);
log_error("Failed to parse snapshot params: %s.", params);
return 0;
}
*status = s;
return 1;
}
/*
* Skip nr fields each delimited by a single space.
* FIXME Don't assume single space.
*/
static const char *_skip_fields(const char *p, unsigned nr)
{
while (p && nr-- && (p = strchr(p, ' ')))
p++;
return p;
}
/*
* Count number of single-space delimited fields.
* Number of fields is number of spaces plus one.
*/
static unsigned _count_fields(const char *p)
{
unsigned nr = 1;
if (!p || !*p)
return 0;
while ((p = _skip_fields(p, 1)))
nr++;
return nr;
}
/*
* Various RAID status versions include:
* Versions < 1.5.0 (4 fields):
* <raid_type> <#devs> <health_str> <sync_ratio>
* Versions 1.5.0+ (6 fields):
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt>
* Versions 1.9.0+ (7 fields):
* <raid_type> <#devs> <health_str> <sync_ratio> <sync_action> <mismatch_cnt> <data_offset>
*/
int dm_get_status_raid(struct dm_pool *mem, const char *params,
struct dm_status_raid **status)
{
int i;
unsigned num_fields;
const char *p, *pp, *msg_fields = "";
struct dm_status_raid *s = NULL;
unsigned a = 0;
if ((num_fields = _count_fields(params)) < 4)
goto_bad;
/* Second field holds the device count */
msg_fields = "<#devs> ";
if (!(p = _skip_fields(params, 1)) || (sscanf(p, "%d", &i) != 1))
goto_bad;
msg_fields = "";
if (!(s = dm_pool_zalloc(mem, sizeof(struct dm_status_raid))))
goto_bad;
if (!(s->raid_type = dm_pool_zalloc(mem, p - params)))
goto_bad; /* memory is freed when pool is destroyed */
if (!(s->dev_health = dm_pool_zalloc(mem, i + 1))) /* Space for health chars */
goto_bad;
msg_fields = "<raid_type> <#devices> <health_chars> and <sync_ratio> ";
if (sscanf(params, "%s %u %s " FMTu64 "/" FMTu64,
s->raid_type,
&s->dev_count,
s->dev_health,
&s->insync_regions,
&s->total_regions) != 5)
goto_bad;
/*
* All pre-1.5.0 version parameters are read. Now we check
* for additional 1.5.0+ parameters (i.e. num_fields at least 6).
*
* Note that 'sync_action' will be NULL (and mismatch_count
* will be 0) if the kernel returns a pre-1.5.0 status.
*/
if (num_fields < 6)
goto out;
msg_fields = "<sync_action> and <mismatch_cnt> ";
/* Skip pre-1.5.0 params */
if (!(p = _skip_fields(params, 4)) || !(pp = _skip_fields(p, 1)))
goto_bad;
if (!(s->sync_action = dm_pool_zalloc(mem, pp - p)))
goto_bad;
if (sscanf(p, "%s " FMTu64, s->sync_action, &s->mismatch_count) != 2)
goto_bad;
if (num_fields < 7)
goto out;
/*
* All pre-1.9.0 version parameters are read. Now we check
* for additional 1.9.0+ parameters (i.e. num_fields at least 7).
*
* Note that data_offset will be 0 if the
* kernel returns a pre-1.9.0 status.
*/
msg_fields = "<data_offset>";
if (!(p = _skip_fields(params, 6))) /* skip pre-1.9.0 params */
goto bad;
if (sscanf(p, FMTu64, &s->data_offset) != 1)
goto bad;
out:
*status = s;
if (s->insync_regions == s->total_regions) {
/* FIXME: The kernel gives misleading info here;
 * try to recognize the true state. */
while (i-- > 0)
if (s->dev_health[i] == 'a')
a++; /* Count number of 'a' */
if (a && a < s->dev_count) {
/* SOME legs are in 'a' */
/* Note: sync_action is NULL for a pre-1.5.0 kernel status. */
if (s->sync_action &&
(!strcasecmp(s->sync_action, "recover") ||
!strcasecmp(s->sync_action, "idle")))
/* Kernel may possibly start some action
 * in the near future, do not report 100% */
s->insync_regions--;
}
}
return 1;
bad:
log_error("Failed to parse %sraid params: %s", msg_fields, params);
if (s)
dm_pool_free(mem, s);
*status = NULL;
return 0;
}
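/*
 * Illustrative example (not part of the original source) of a 1.9.0+ raid
 * status line and the fields it is parsed into:
 *
 *   params: "raid1 2 AA 100/100 idle 0 0"
 *
 *   raid_type "raid1", dev_count 2, dev_health "AA",
 *   insync_regions/total_regions 100/100, sync_action "idle",
 *   mismatch_count 0, data_offset 0.
 */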
/*
* <metadata block size> <#used metadata blocks>/<#total metadata blocks>
* <cache block size> <#used cache blocks>/<#total cache blocks>
* <#read hits> <#read misses> <#write hits> <#write misses>
* <#demotions> <#promotions> <#dirty> <#features> <features>*
* <#core args> <core args>* <policy name> <#policy args> <policy args>*
*
* metadata block size : Fixed block size for each metadata block in
* sectors
* #used metadata blocks : Number of metadata blocks used
* #total metadata blocks : Total number of metadata blocks
* cache block size : Configurable block size for the cache device
* in sectors
* #used cache blocks : Number of blocks resident in the cache
* #total cache blocks : Total number of cache blocks
* #read hits : Number of times a READ bio has been mapped
* to the cache
* #read misses : Number of times a READ bio has been mapped
* to the origin
* #write hits : Number of times a WRITE bio has been mapped
* to the cache
* #write misses : Number of times a WRITE bio has been
* mapped to the origin
* #demotions : Number of times a block has been removed
* from the cache
* #promotions : Number of times a block has been moved to
* the cache
* #dirty : Number of blocks in the cache that differ
* from the origin
* #feature args : Number of feature args to follow
* feature args : 'writethrough' (optional)
* #core args : Number of core arguments (must be even)
* core args : Key/value pairs for tuning the core
* e.g. migration_threshold
* policy name : Name of the policy
* #policy args : Number of policy arguments to follow (must be even)
* policy args : Key/value pairs
* e.g. sequential_threshold
*/
int dm_get_status_cache(struct dm_pool *mem, const char *params,
struct dm_status_cache **status)
{
int i, feature_argc;
char *str;
const char *p, *pp;
struct dm_status_cache *s;
if (!(s = dm_pool_zalloc(mem, sizeof(struct dm_status_cache))))
return_0;
if (strstr(params, "Error")) {
s->error = 1;
s->fail = 1; /* This is also I/O fail state */
goto out;
}
if (strstr(params, "Fail")) {
s->fail = 1;
goto out;
}
/* Read in args that have definitive placement */
if (sscanf(params,
" " FMTu32
" " FMTu64 "/" FMTu64
" " FMTu32
" " FMTu64 "/" FMTu64
" " FMTu64 " " FMTu64
" " FMTu64 " " FMTu64
" " FMTu64 " " FMTu64
" " FMTu64
" %d",
&s->metadata_block_size,
&s->metadata_used_blocks, &s->metadata_total_blocks,
&s->block_size, /* AKA, chunk_size */
&s->used_blocks, &s->total_blocks,
&s->read_hits, &s->read_misses,
&s->write_hits, &s->write_misses,
&s->demotions, &s->promotions,
&s->dirty_blocks,
&feature_argc) != 14)
goto bad;
/* Now jump to "features" section */
if (!(p = _skip_fields(params, 12)))
goto bad;
/* Read in features */
for (i = 0; i < feature_argc; i++) {
if (!strncmp(p, "writethrough ", 13))
s->feature_flags |= DM_CACHE_FEATURE_WRITETHROUGH;
else if (!strncmp(p, "writeback ", 10))
s->feature_flags |= DM_CACHE_FEATURE_WRITEBACK;
else if (!strncmp(p, "passthrough ", 12))
s->feature_flags |= DM_CACHE_FEATURE_PASSTHROUGH;
else if (!strncmp(p, "metadata2 ", 10))
s->feature_flags |= DM_CACHE_FEATURE_METADATA2;
else
log_error("Unknown feature in status: %s", params);
if (!(p = _skip_fields(p, 1)))
goto bad;
}
/* Read in core_args. */
if (sscanf(p, "%d ", &s->core_argc) != 1)
goto bad;
if ((s->core_argc > 0) &&
(!(s->core_argv = dm_pool_zalloc(mem, sizeof(char *) * s->core_argc)) ||
!(p = _skip_fields(p, 1)) ||
!(str = dm_pool_strdup(mem, p)) ||
!(p = _skip_fields(p, (unsigned) s->core_argc)) ||
(dm_split_words(str, s->core_argc, 0, s->core_argv) != s->core_argc)))
goto bad;
/* Read in policy args */
pp = p;
if (!(p = _skip_fields(p, 1)) ||
!(s->policy_name = dm_pool_zalloc(mem, (p - pp))))
goto bad;
if (sscanf(pp, "%s %d", s->policy_name, &s->policy_argc) != 2)
goto bad;
if (s->policy_argc &&
(!(s->policy_argv = dm_pool_zalloc(mem, sizeof(char *) * s->policy_argc)) ||
!(p = _skip_fields(p, 1)) ||
!(str = dm_pool_strdup(mem, p)) ||
(dm_split_words(str, s->policy_argc, 0, s->policy_argv) != s->policy_argc)))
goto bad;
/* TODO: improve this parser */
if (strstr(p, " ro"))
s->read_only = 1;
if (strstr(p, " needs_check"))
s->needs_check = 1;
out:
*status = s;
return 1;
bad:
log_error("Failed to parse cache params: %s", params);
dm_pool_free(mem, s);
*status = NULL;
return 0;
}
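/*
 * Illustrative example (not part of the original source) of a cache status
 * line as parsed above (the values are made up):
 *
 *   "8 72/1024 128 26/512 20 4 2 0 0 0 0 1 writeback 2 migration_threshold 2048 smq 0 rw -"
 *
 * i.e. 8-sector metadata blocks with 72/1024 used, 128-sector cache blocks
 * with 26/512 used, 20/4 read hits/misses, 2/0 write hits/misses, no
 * demotions, promotions or dirty blocks, one feature ("writeback"), core
 * args "migration_threshold 2048", policy "smq" with no policy args, and
 * a writable ("rw") device.
 */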
/*
* From linux/Documentation/device-mapper/writecache.txt
*
* Status:
* 1. error indicator - 0 if there was no error, otherwise error number
* 2. the number of blocks
* 3. the number of free blocks
* 4. the number of blocks under writeback
*/
int dm_get_status_writecache(struct dm_pool *mem, const char *params,
struct dm_status_writecache **status)
{
struct dm_status_writecache *s;
if (!(s = dm_pool_zalloc(mem, sizeof(struct dm_status_writecache))))
return_0;
if (sscanf(params, "%u %llu %llu %llu",
&s->error,
(unsigned long long *)&s->total_blocks,
(unsigned long long *)&s->free_blocks,
(unsigned long long *)&s->writeback_blocks) != 4) {
log_error("Failed to parse writecache params: %s.", params);
dm_pool_free(mem, s);
return 0;
}
*status = s;
return 1;
}
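/*
 * Illustrative example (not part of the original source): a healthy
 * writecache status of "0 16384 8192 204" parses to error 0,
 * total_blocks 16384, free_blocks 8192 and writeback_blocks 204.
 */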
int parse_thin_pool_status(const char *params, struct dm_status_thin_pool *s)
{
int pos;
memset(s, 0, sizeof(*s));
if (!params) {
log_error("Failed to parse invalid thin params.");
return 0;
}
if (strstr(params, "Error")) {
s->error = 1;
s->fail = 1; /* This is also I/O fail state */
return 1;
}
if (strstr(params, "Fail")) {
s->fail = 1;
return 1;
}
/* FIXME: add support for held metadata root */
if (sscanf(params, FMTu64 " " FMTu64 "/" FMTu64 " " FMTu64 "/" FMTu64 "%n",
&s->transaction_id,
&s->used_metadata_blocks,
&s->total_metadata_blocks,
&s->used_data_blocks,
&s->total_data_blocks, &pos) < 5) {
log_error("Failed to parse thin pool params: %s.", params);
return 0;
}
/* New status flags */
if (strstr(params + pos, "no_discard_passdown"))
s->discards = DM_THIN_DISCARDS_NO_PASSDOWN;
else if (strstr(params + pos, "ignore_discard"))
s->discards = DM_THIN_DISCARDS_IGNORE;
else /* default discard_passdown */
s->discards = DM_THIN_DISCARDS_PASSDOWN;
/* Default is 'writable' (rw) data */
if (strstr(params + pos, "out_of_data_space"))
s->out_of_data_space = 1;
else if (strstr(params + pos, "ro "))
s->read_only = 1;
/* Default is 'queue_if_no_space' */
if (strstr(params + pos, "error_if_no_space"))
s->error_if_no_space = 1;
if (strstr(params + pos, "needs_check"))
s->needs_check = 1;
return 1;
}
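/*
 * Illustrative example (not part of the original source): a healthy pool
 * status of
 *
 *   "0 37/1024 592/2048 - rw discard_passdown queue_if_no_space -"
 *
 * parses to transaction_id 0, 37/1024 metadata blocks used, 592/2048 data
 * blocks used, a writable pool, discards passed down, and queueing when
 * out of data space.
 */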
int dm_get_status_thin_pool(struct dm_pool *mem, const char *params,
struct dm_status_thin_pool **status)
{
struct dm_status_thin_pool *s;
if (!(s = dm_pool_alloc(mem, sizeof(struct dm_status_thin_pool)))) {
log_error("Failed to allocate thin_pool status structure.");
return 0;
}
if (!parse_thin_pool_status(params, s)) {
dm_pool_free(mem, s);
return_0;
}
*status = s;
return 1;
}
int dm_get_status_thin(struct dm_pool *mem, const char *params,
struct dm_status_thin **status)
{
struct dm_status_thin *s;
if (!(s = dm_pool_zalloc(mem, sizeof(struct dm_status_thin)))) {
log_error("Failed to allocate thin status structure.");
return 0;
}
if (strchr(params, '-')) {
/* nothing to parse */
} else if (strstr(params, "Fail")) {
s->fail = 1;
} else if (sscanf(params, FMTu64 " " FMTu64,
&s->mapped_sectors,
&s->highest_mapped_sector) != 2) {
dm_pool_free(mem, s);
log_error("Failed to parse thin params: %s.", params);
return 0;
}
*status = s;
return 1;
}
/*
* dm core parms: 0 409600 mirror
* Mirror core parms: 2 253:4 253:5 400/400
* New-style failure params: 1 AA
* New-style log params: 3 cluster 253:3 A
* or 3 disk 253:3 A
* or 1 core
*/
#define DM_MIRROR_MAX_IMAGES 8 /* limited by kernel DM_KCOPYD_MAX_REGIONS */
int dm_get_status_mirror(struct dm_pool *mem, const char *params,
struct dm_status_mirror **status)
{
struct dm_status_mirror *s;
const char *p, *pos = params;
unsigned num_devs, argc, i;
int used;
if (!(s = dm_pool_zalloc(mem, sizeof(*s)))) {
log_error("Failed to alloc mem pool to parse mirror status.");
return 0;
}
if (sscanf(pos, "%u %n", &num_devs, &used) != 1)
goto_out;
pos += used;
if (num_devs > DM_MIRROR_MAX_IMAGES) {
log_error(INTERNAL_ERROR "More then " DM_TO_STRING(DM_MIRROR_MAX_IMAGES)
" reported in mirror status.");
goto out;
}
if (!(s->devs = dm_pool_alloc(mem, num_devs * sizeof(*(s->devs))))) {
log_error("Allocation of devs failed.");
goto out;
}
for (i = 0; i < num_devs; ++i, pos += used)
if (sscanf(pos, "%u:%u %n",
&(s->devs[i].major), &(s->devs[i].minor), &used) != 2)
goto_out;
if (sscanf(pos, FMTu64 "/" FMTu64 "%n",
&s->insync_regions, &s->total_regions, &used) != 2)
goto_out;
pos += used;
if (sscanf(pos, "%u %n", &argc, &used) != 1)
goto_out;
pos += used;
for (i = 0; i < num_devs ; ++i)
s->devs[i].health = pos[i];
if (!(pos = _skip_fields(pos, argc)))
goto_out;
if (strncmp(pos, "userspace", 9) == 0) {
pos += 9;
/* FIXME: support status of userspace mirror implementation */
}
if (sscanf(pos, "%u %n", &argc, &used) != 1)
goto_out;
pos += used;
if (argc == 1) {
/* core, cluster-core */
if (!(s->log_type = dm_pool_strdup(mem, pos))) {
log_error("Allocation of log type string failed.");
goto out;
}
} else {
if (!(p = _skip_fields(pos, 1)))
goto_out;
/* disk, cluster-disk */
if (!(s->log_type = dm_pool_strndup(mem, pos, p - pos - 1))) {
log_error("Allocation of log type string failed.");
goto out;
}
pos = p;
if ((argc > 2) && !strcmp(s->log_type, "disk")) {
s->log_count = argc - 2;
if (!(s->logs = dm_pool_alloc(mem, s->log_count * sizeof(*(s->logs))))) {
log_error("Allocation of logs failed.");
goto out;
}
for (i = 0; i < s->log_count; ++i, pos += used)
if (sscanf(pos, "%u:%u %n",
&s->logs[i].major, &s->logs[i].minor, &used) != 2)
goto_out;
for (i = 0; i < s->log_count; ++i)
s->logs[i].health = pos[i];
}
}
s->dev_count = num_devs;
*status = s;
return 1;
out:
log_error("Failed to parse mirror status %s.", params);
dm_pool_free(mem, s);
*status = NULL;
return 0;
}
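/*
 * Illustrative example (not part of the original source): reusing the
 * sample line from the comment above,
 *
 *   "2 253:4 253:5 400/400 1 AA 3 disk 253:3 A"
 *
 * parses to two mirror legs (253:4 and 253:5), both healthy ('A'),
 * 400/400 regions in sync, and a healthy disk log on 253:3.
 */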

/*
* Copyright (C) 2006 Rackable Systems All rights reserved.
* Copyright (C) 2015 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* Abstract out the time methods used so they can be adjusted later -
* the results of these routines should stay in-core.
*/
#include "base/memory/zalloc.h"
#include "misc/dmlib.h"
#include <stdlib.h>
#define NSEC_PER_USEC UINT64_C(1000)
#define NSEC_PER_MSEC UINT64_C(1000000)
#define NSEC_PER_SEC UINT64_C(1000000000)
/*
* The realtime section uses clock_gettime with the CLOCK_MONOTONIC
* parameter to prevent issues with time warps.
* This implementation requires librt.
*/
#ifdef HAVE_REALTIME
#include <time.h>
struct dm_timestamp {
struct timespec t;
};
static uint64_t _timestamp_to_uint64(struct dm_timestamp *ts)
{
uint64_t stamp = 0;
stamp += (uint64_t) ts->t.tv_sec * NSEC_PER_SEC;
stamp += (uint64_t) ts->t.tv_nsec;
return stamp;
}
struct dm_timestamp *dm_timestamp_alloc(void)
{
struct dm_timestamp *ts = NULL;
if (!(ts = zalloc(sizeof(*ts))))
stack;
return ts;
}
int dm_timestamp_get(struct dm_timestamp *ts)
{
if (!ts)
return 0;
if (clock_gettime(CLOCK_MONOTONIC, &ts->t)) {
log_sys_error("clock_gettime", "get_timestamp");
ts->t.tv_sec = 0;
ts->t.tv_nsec = 0;
return 0;
}
return 1;
}
#else /* ! HAVE_REALTIME */
/*
* The !realtime section just uses gettimeofday and is therefore subject
* to ntp-type time warps - not sure if we should allow that.
*/
#include <sys/time.h>
struct dm_timestamp {
struct timeval t;
};
static uint64_t _timestamp_to_uint64(struct dm_timestamp *ts)
{
uint64_t stamp = 0;
stamp += ts->t.tv_sec * NSEC_PER_SEC;
stamp += ts->t.tv_usec * NSEC_PER_USEC;
return stamp;
}
struct dm_timestamp *dm_timestamp_alloc(void)
{
struct dm_timestamp *ts;
if (!(ts = malloc(sizeof(*ts))))
stack;
return ts;
}
int dm_timestamp_get(struct dm_timestamp *ts)
{
if (!ts)
return 0;
if (gettimeofday(&ts->t, NULL)) {
log_sys_error("gettimeofday", "get_timestamp");
ts->t.tv_sec = 0;
ts->t.tv_usec = 0;
return 0;
}
return 1;
}
#endif /* HAVE_REALTIME */
/*
* Compare two timestamps.
*
* Return: -1 if ts1 is less than ts2
* 0 if ts1 is equal to ts2
* 1 if ts1 is greater than ts2
*/
int dm_timestamp_compare(struct dm_timestamp *ts1, struct dm_timestamp *ts2)
{
uint64_t t1, t2;
t1 = _timestamp_to_uint64(ts1);
t2 = _timestamp_to_uint64(ts2);
if (t2 < t1)
return 1;
if (t1 < t2)
return -1;
return 0;
}
/*
* Return the absolute difference in nanoseconds between
* the dm_timestamp objects ts1 and ts2.
*
* Callers that need to know whether ts1 is before, equal to, or after ts2
* in addition to the magnitude should use dm_timestamp_compare.
*/
uint64_t dm_timestamp_delta(struct dm_timestamp *ts1, struct dm_timestamp *ts2)
{
uint64_t t1, t2;
t1 = _timestamp_to_uint64(ts1);
t2 = _timestamp_to_uint64(ts2);
if (t1 > t2)
return t1 - t2;
return t2 - t1;
}
void dm_timestamp_copy(struct dm_timestamp *ts_new, struct dm_timestamp *ts_old)
{
*ts_new = *ts_old;
}
void dm_timestamp_destroy(struct dm_timestamp *ts)
{
free(ts);
}
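/*
 * Usage sketch (illustrative, not part of the original source) of timing
 * an operation with this API; do_work() stands in for whatever is being
 * measured.
 *
 *   struct dm_timestamp *start = dm_timestamp_alloc();
 *   struct dm_timestamp *end = dm_timestamp_alloc();
 *   uint64_t ns = 0;
 *
 *   if (start && end && dm_timestamp_get(start)) {
 *           do_work();
 *           if (dm_timestamp_get(end))
 *                   ns = dm_timestamp_delta(end, start);
 *   }
 *
 *   dm_timestamp_destroy(start);
 *   dm_timestamp_destroy(end);
 */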

/*
* Copyright (C) 2001 - 2003 Sistina Software (UK) Limited.
* Copyright (C) 2004 - 2017 Red Hat, Inc. All rights reserved.
*
* This file is released under the LGPL.
*/
#ifndef _LINUX_DM_IOCTL_V4_H
#define _LINUX_DM_IOCTL_V4_H
#ifdef __linux__
# include <linux/types.h>
#endif
#define DM_DIR "mapper" /* Slashes not supported */
#define DM_CONTROL_NODE "control"
#define DM_MAX_TYPE_NAME 16
#define DM_NAME_LEN 128
#define DM_UUID_LEN 129
/*
* A traditional ioctl interface for the device mapper.
*
* Each device can have two tables associated with it, an
* 'active' table which is the one currently used by io passing
* through the device, and an 'inactive' one which is a table
* that is being prepared as a replacement for the 'active' one.
*
* DM_VERSION:
* Just get the version information for the ioctl interface.
*
* DM_REMOVE_ALL:
* Remove all dm devices, destroy all tables. Only really used
* for debug.
*
* DM_LIST_DEVICES:
* Get a list of all the dm device names.
*
* DM_DEV_CREATE:
* Create a new device; neither the 'active' nor the 'inactive' table
* slot will be filled. The device will be in suspended state
* after creation, however any io to the device will get errored
* since it will be out-of-bounds.
*
* DM_DEV_REMOVE:
* Remove a device, destroy any tables.
*
* DM_DEV_RENAME:
* Rename a device or set its uuid if none was previously supplied.
*
* DM_SUSPEND:
* This performs both suspend and resume, depending which flag is
* passed in.
* Suspend: This command will not return until all pending io to
* the device has completed. Further io will be deferred until
* the device is resumed.
* Resume: It is no longer an error to issue this command on an
* unsuspended device. If a table is present in the 'inactive'
* slot, it will be moved to the active slot, then the old table
* from the active slot will be _destroyed_. Finally the device
* is resumed.
*
* DM_DEV_STATUS:
* Retrieves the status for the table in the 'active' slot.
*
* DM_DEV_WAIT:
* Wait for a significant event to occur to the device. This
* could either be caused by an event triggered by one of the
* targets of the table in the 'active' slot, or a table change.
*
* DM_TABLE_LOAD:
* Load a table into the 'inactive' slot for the device. The
* device does _not_ need to be suspended prior to this command.
*
* DM_TABLE_CLEAR:
* Destroy any table in the 'inactive' slot (ie. abort).
*
* DM_TABLE_DEPS:
* Return a set of device dependencies for the 'active' table.
*
* DM_TABLE_STATUS:
* Return the targets status for the 'active' table.
*
* DM_TARGET_MSG:
* Pass a message string to the target at a specific offset of a device.
*
* DM_DEV_SET_GEOMETRY:
* Set the geometry of a device by passing in a string in this format:
*
* "cylinders heads sectors_per_track start_sector"
*
* Beware that CHS geometry is nearly obsolete and only provided
* for compatibility with dm devices that can be booted by a PC
* BIOS. See struct hd_geometry for range limits. Also note that
* the geometry is erased if the device size changes.
*/
/*
* All ioctl arguments consist of a single chunk of memory, with
* this structure at the start. If a uuid is specified any
* lookup (eg. for a DM_INFO) will be done on that, *not* the
* name.
*/
struct dm_ioctl {
/*
* The version number is made up of three parts:
* major - no backward or forward compatibility,
* minor - only backwards compatible,
* patch - both backwards and forwards compatible.
*
* All clients of the ioctl interface should fill in the
* version number of the interface that they were
* compiled with.
*
* All recognised ioctl commands (ie. those that don't
* return -ENOTTY) fill out this field, even if the
* command failed.
*/
uint32_t version[3]; /* in/out */
uint32_t data_size; /* total size of data passed in
* including this struct */
uint32_t data_start; /* offset to start of data
* relative to start of this struct */
uint32_t target_count; /* in/out */
int32_t open_count; /* out */
uint32_t flags; /* in/out */
/*
* event_nr holds either the event number (input and output) or the
* udev cookie value (input only).
* The DM_DEV_WAIT ioctl takes an event number as input.
* The DM_SUSPEND, DM_DEV_REMOVE and DM_DEV_RENAME ioctls
* use the field as a cookie to return in the DM_COOKIE
* variable with the uevents they issue.
* For output, the ioctls return the event number, not the cookie.
*/
uint32_t event_nr; /* in/out */
uint32_t padding;
uint64_t dev; /* in/out */
char name[DM_NAME_LEN]; /* device name */
char uuid[DM_UUID_LEN]; /* unique identifier for
* the block device */
char data[7]; /* padding or data */
};
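/*
 * Minimal sketch (illustrative, not part of the original header) of how a
 * caller typically initialises this structure before issuing an ioctl.
 * Real callers allocate a larger buffer so the kernel has room to return
 * data after the struct; data_size covers the whole buffer.
 *
 *   struct dm_ioctl io = { 0 };
 *
 *   io.version[0] = DM_VERSION_MAJOR;
 *   io.version[1] = DM_VERSION_MINOR;
 *   io.version[2] = DM_VERSION_PATCHLEVEL;
 *   io.data_size = sizeof(io);
 *   // fill in io.name (or io.uuid with DM_UUID_FLAG), then e.g.:
 *   // ioctl(control_fd, DM_DEV_STATUS, &io);
 */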
/*
* Used to specify tables. These structures appear after the
* dm_ioctl.
*/
struct dm_target_spec {
uint64_t sector_start;
uint64_t length;
int32_t status; /* used when reading from kernel only */
/*
* Location of the next dm_target_spec.
* - When specifying targets on a DM_TABLE_LOAD command, this value is
* the number of bytes from the start of the "current" dm_target_spec
* to the start of the "next" dm_target_spec.
* - When retrieving targets on a DM_TABLE_STATUS command, this value
* is the number of bytes from the start of the first dm_target_spec
* (that follows the dm_ioctl struct) to the start of the "next"
* dm_target_spec.
*/
uint32_t next;
char target_type[DM_MAX_TYPE_NAME];
/*
* Parameter string starts immediately after this object.
* Be careful to add padding after string to ensure correct
* alignment of subsequent dm_target_spec.
*/
};
/*
* Used to retrieve the target dependencies.
*/
struct dm_target_deps {
uint32_t count; /* Array size */
uint32_t padding; /* unused */
uint64_t dev[0]; /* out */
};
/*
* Used to get a list of all dm devices.
*/
struct dm_name_list {
uint64_t dev;
uint32_t next; /* offset to the next record from
the _start_ of this */
char name[0];
};
/*
* Used to retrieve the target versions
*/
struct dm_target_versions {
uint32_t next;
uint32_t version[3];
char name[0];
};
/*
* Used to pass message to a target
*/
struct dm_target_msg {
uint64_t sector; /* Device sector */
char message[0];
};
/*
* If you change this make sure you make the corresponding change
* to dm-ioctl.c:lookup_ioctl()
*/
enum {
/* Top level cmds */
DM_VERSION_CMD = 0,
DM_REMOVE_ALL_CMD,
DM_LIST_DEVICES_CMD,
/* device level cmds */
DM_DEV_CREATE_CMD,
DM_DEV_REMOVE_CMD,
DM_DEV_RENAME_CMD,
DM_DEV_SUSPEND_CMD,
DM_DEV_STATUS_CMD,
DM_DEV_WAIT_CMD,
/* Table level cmds */
DM_TABLE_LOAD_CMD,
DM_TABLE_CLEAR_CMD,
DM_TABLE_DEPS_CMD,
DM_TABLE_STATUS_CMD,
/* Added later */
DM_LIST_VERSIONS_CMD,
DM_TARGET_MSG_CMD,
DM_DEV_SET_GEOMETRY_CMD,
DM_DEV_ARM_POLL_CMD,
};
#define DM_IOCTL 0xfd
#define DM_VERSION _IOWR(DM_IOCTL, DM_VERSION_CMD, struct dm_ioctl)
#define DM_REMOVE_ALL _IOWR(DM_IOCTL, DM_REMOVE_ALL_CMD, struct dm_ioctl)
#define DM_LIST_DEVICES _IOWR(DM_IOCTL, DM_LIST_DEVICES_CMD, struct dm_ioctl)
#define DM_DEV_CREATE _IOWR(DM_IOCTL, DM_DEV_CREATE_CMD, struct dm_ioctl)
#define DM_DEV_REMOVE _IOWR(DM_IOCTL, DM_DEV_REMOVE_CMD, struct dm_ioctl)
#define DM_DEV_RENAME _IOWR(DM_IOCTL, DM_DEV_RENAME_CMD, struct dm_ioctl)
#define DM_DEV_SUSPEND _IOWR(DM_IOCTL, DM_DEV_SUSPEND_CMD, struct dm_ioctl)
#define DM_DEV_STATUS _IOWR(DM_IOCTL, DM_DEV_STATUS_CMD, struct dm_ioctl)
#define DM_DEV_WAIT _IOWR(DM_IOCTL, DM_DEV_WAIT_CMD, struct dm_ioctl)
#define DM_DEV_ARM_POLL _IOWR(DM_IOCTL, DM_DEV_ARM_POLL_CMD, struct dm_ioctl)
#define DM_TABLE_LOAD _IOWR(DM_IOCTL, DM_TABLE_LOAD_CMD, struct dm_ioctl)
#define DM_TABLE_CLEAR _IOWR(DM_IOCTL, DM_TABLE_CLEAR_CMD, struct dm_ioctl)
#define DM_TABLE_DEPS _IOWR(DM_IOCTL, DM_TABLE_DEPS_CMD, struct dm_ioctl)
#define DM_TABLE_STATUS _IOWR(DM_IOCTL, DM_TABLE_STATUS_CMD, struct dm_ioctl)
#define DM_LIST_VERSIONS _IOWR(DM_IOCTL, DM_LIST_VERSIONS_CMD, struct dm_ioctl)
#define DM_TARGET_MSG _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
#define DM_VERSION_MINOR 36
#define DM_VERSION_PATCHLEVEL 0
#define DM_VERSION_EXTRA "-ioctl (2017-06-09)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
#define DM_SUSPEND_FLAG (1 << 1) /* In/Out */
#define DM_PERSISTENT_DEV_FLAG (1 << 3) /* In */
/*
* Flag passed into ioctl STATUS command to get table information
* rather than current status.
*/
#define DM_STATUS_TABLE_FLAG (1 << 4) /* In */
/*
* Flags that indicate whether a table is present in either of
* the two table slots that a device has.
*/
#define DM_ACTIVE_PRESENT_FLAG (1 << 5) /* Out */
#define DM_INACTIVE_PRESENT_FLAG (1 << 6) /* Out */
/*
* Indicates that the buffer passed in wasn't big enough for the
* results.
*/
#define DM_BUFFER_FULL_FLAG (1 << 8) /* Out */
/*
* This flag is now ignored.
*/
#define DM_SKIP_BDGET_FLAG (1 << 9) /* In */
/*
* Set this to avoid attempting to freeze any filesystem when suspending.
*/
#define DM_SKIP_LOCKFS_FLAG (1 << 10) /* In */
/*
* Set this to suspend without flushing queued ios.
* Also disables flushing uncommitted changes in the thin target before
* generating statistics for DM_TABLE_STATUS and DM_DEV_WAIT.
*/
#define DM_NOFLUSH_FLAG (1 << 11) /* In */
/*
* If set, any table information returned will relate to the inactive
* table instead of the live one. Always check DM_INACTIVE_PRESENT_FLAG
* is set before using the data returned.
*/
#define DM_QUERY_INACTIVE_TABLE_FLAG (1 << 12) /* In */
/*
* If set, a uevent was generated for which the caller may need to wait.
*/
#define DM_UEVENT_GENERATED_FLAG (1 << 13) /* Out */
/*
* If set, rename changes the uuid not the name. Only permitted
* if no uuid was previously supplied: an existing uuid cannot be changed.
*/
#define DM_UUID_FLAG (1 << 14) /* In */
/*
* If set, all buffers are wiped after use. Use when sending
* or requesting sensitive data such as an encryption key.
*/
#define DM_SECURE_DATA_FLAG (1 << 15) /* In */
/*
* If set, a message generated output data.
*/
#define DM_DATA_OUT_FLAG (1 << 16) /* Out */
/*
* If set with DM_DEV_REMOVE or DM_REMOVE_ALL this indicates that if
* the device cannot be removed immediately because it is still in use
* it should instead be scheduled for removal when it gets closed.
*
* On return from DM_DEV_REMOVE, DM_DEV_STATUS or other ioctls, this
* flag indicates that the device is scheduled to be removed when it
* gets closed.
*/
#define DM_DEFERRED_REMOVE (1 << 17) /* In/Out */
/*
* If set, the device is suspended internally.
*/
#define DM_INTERNAL_SUSPEND_FLAG (1 << 18) /* Out */
#endif /* _LINUX_DM_IOCTL_V4_H */
