mirror of git://sourceware.org/git/lvm2.git synced 2025-11-02 04:23:50 +03:00

Compare commits


13 Commits

Author SHA1 Message Date
David Teigland
d34484556d lvmcache: simplify metadata cache
The copy of VG metadata stored in lvmcache was not being used
as a general-purpose cache.  It pretended to be a generic VG
metadata cache, but was only used for clvmd activation, where
it avoided reading from disk while devices were suspended,
i.e. in resume.

This removes the code that attempted to make this look
like a generic metadata cache, and replaces it with
something narrowly targeted to what it's actually used for.

This is a way of passing the VG from suspend to resume in
clvmd.  Since in the case of clvmd one caller can't simply
pass the same VG to both suspend and resume, suspend needs
to stash the VG somewhere that resume can grab it from.
(resume doesn't want to read it from disk since devices
are suspended.)  The lvmcache vginfo struct is used as a
convenient place to stash the VG to pass it from suspend
to resume, even though it isn't related to the lvmcache
or vginfo.  These suspended_vg* vginfo fields should
not be used or touched anywhere else, they are only to
be used for passing the VG data from suspend to resume
in clvmd.  The VG data being passed between suspend and
resume is never modified, and will only exist in the
brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted)
copies of the VG metadata.  It stashes both of these in
the vginfo prior to suspending devices.  When vg_commit
is successful, it sets a flag in vginfo as before,
signaling the transition from old to new metadata.

resume grabs the VG stashed by suspend.  If the vg_commit
happened, it grabs the new VG, and if the vg_commit didn't
happen it grabs the old VG.  The VG is then used to resume
LVs.

This isolates clvmd-specific code and usage from the
normal lvm vg_read code, making the code simpler and
the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new
  and stashes a copy of each onto the vginfo:
  lvmcache_save_suspended_vg(vg_old);
  lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd
  instances to call lvmcache_commit_metadata(vg).
  A flag is set in the vginfo indicating the
  transition from the old to new VG:
  vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new
  to use in resuming LVs.  It doesn't want to
  read the VG from disk since devices are
  suspended, so it gets the VG stashed by
  lv_suspend:
  vg = lvmcache_get_suspended_vg(vgid);

If the vg_commit did not happen, suspended_vg_committed
will not be set, and in this case, lvmcache_get_suspended_vg()
will return the old VG instead of the new VG, and it will
resume LVs based on the old metadata.
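
Below is a minimal sketch of the stash/retrieve pattern described
above.  Only suspended_vg_committed is named in this message; the
other struct and field names, and the combined save function, are
illustrative rather than the actual lvmcache code.

struct volume_group;    /* opaque VG metadata */

struct vginfo_sketch {
        struct volume_group *suspended_vg_old;  /* current metadata    */
        struct volume_group *suspended_vg_new;  /* precommitted copy   */
        int suspended_vg_committed;             /* set after vg_commit */
};

/* lv_suspend(): stash both copies before suspending devices. */
static void save_suspended_vg(struct vginfo_sketch *vginfo,
                              struct volume_group *vg_old,
                              struct volume_group *vg_new)
{
        vginfo->suspended_vg_old = vg_old;
        vginfo->suspended_vg_new = vg_new;
        vginfo->suspended_vg_committed = 0;
}

/* lv_resume(): pick old or new metadata without reading disk. */
static struct volume_group *get_suspended_vg(struct vginfo_sketch *vginfo)
{
        return vginfo->suspended_vg_committed ? vginfo->suspended_vg_new
                                              : vginfo->suspended_vg_old;
}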
2017-11-10 10:53:57 -06:00
David Teigland
62ec6c3163 label_scan: remove extra label scan and read for orphan PVs
When process_each_pv() calls vg_read() on the orphan VG, the
internal implementation was doing an unnecessary
lvmcache_label_scan() and two unnecessary label_read() calls
on each orphan.  Some of those unnecessary label scans/reads
would sometimes be skipped due to caching, but the code was
always doing at least one unnecessary read on each orphan.

The common format_text case was also unnecessarily calling into
the format-specific pv_read() function which actually did nothing.

By analyzing each case in which vg_read() was being called on
the orphan VG, we can say that all of the label scans/reads
in vg_read_orphans are unnecessary:

1. reporting commands: the information saved in lvmcache by
the original label scan can be reported.  There is no advantage
to repeating the label scan on the orphans a second time before
reporting it.

2. pvcreate/vgcreate/vgextend: these all share a common
implementation in pvcreate_each_device().  That function
already rescans labels after acquiring the orphan VG lock,
which ensures that the command is using valid lvmcache
information.
2017-11-10 10:53:57 -06:00
David Teigland
e30315db97 vgcreate: improve the use of label_scan
The old code was doing unnecessary label scans when
checking to see if the new VG name exists.  A single
label_scan is sufficient if it is done after the
new VG lock is held.
2017-11-10 10:53:57 -06:00
David Teigland
0f29fd8538 lvmetad: use new label_scan for update from pvscan
Take advantage of the common implementation with aio
and reduced disk reads.
2017-11-10 10:53:57 -06:00
David Teigland
7ad6c21194 lvmetad: use new label_scan for update from lvmlockd
When lvmlockd indicates that the lvmetad cache is out of
date because of changes by another node, lvmetad_pvscan_vg()
rescans the devices in the VG to update lvmetad.  Use the
new label_scan in this function to use the common code and
take advantage of the new aio and reduced reads.
2017-11-10 10:53:57 -06:00
David Teigland
0e15dfe2b6 label_scan/vg_read: use label_read_data to avoid disk reads
The new label_scan() function reads a large buffer of data
from the start of the disk, and saves it so that multiple
structs can be read from it.  Previously, only the label_header
was read from this buffer, and the code which needed data
structures that immediately followed the label_header would
read those from disk separately.  This created a large
number of small, unnecessary disk reads.

In each place that the two read paths (label_scan and vg_read)
need to read data from disk, first check if that data is
already available from the label_read_data buffer, and if
so just copy it from the buffer instead of reading from disk.

Code changes
------------

- passing the label_read_data struct down through
  both read paths to make it available.

- before every disk read, first check if the location
  and size of the desired piece of data exists fully
  in the label_read_data buffer, and if so copy it
  from there.  Otherwise, use the existing code to
  read the data from disk.

- adding some log_error messages on existing error paths
  that were already being updated for the reasons above.

- using similar naming for parallel functions on the two
  parallel read paths that are being updated above.

  label_scan path calls:
  read_metadata_location_summary, text_read_metadata_summary

  vg_read path calls:
  read_metadata_location_vg, text_read_metadata_file

  Previously, those functions were named:

  label_scan path calls:
  vgname_from_mda, text_vgsummary_import

  vg_read path calls:
  _find_vg_rlocn, text_vg_import_fd

I/O changes
-----------

In the label_scan path, the following data is either copied
from label_read_data or read from disk for each PV:

- label_header and pv_header
- mda_header (in _raw_read_mda_header)
- vg metadata name (in read_metadata_location_summary)
- vg metadata (in config_file_read_fd)

Total of 4 reads per PV in the label_scan path.

In the vg_read path, the following data is either copied from
label_read_data or read from disk for each PV:

- mda_header (in _raw_read_mda_header)
- vg metadata name (in read_metadata_location_vg)
- vg metadata (in config_file_read_fd)

Total of 3 reads per PV in the vg_read path.

For a common read/reporting command, each PV will be:

- read by the command's initial lvmcache_label_scan()
- read by lvmcache_label_rescan_vg() at the start of vg_read()
- read by vg_read()

Previously, this would cause 11 synchronous disk reads per PV:
4 from lvmcache_label_scan(), 4 from lvmcache_label_rescan_vg()
and 3 from vg_read().

With this commit's optimization, there are now 2 async disk reads
per PV: 1 from lvmcache_label_scan() and 1 from
lvmcache_label_rescan_vg().

When a second mda is used on a PV, it is located at the
end of the PV.  This second mda and copy of metadata will
not be found in the label_read_data buffer, and will always
require separate disk reads.
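
A minimal sketch of the "copy from the buffer if it fits, otherwise
read from disk" check described above; the struct layout and the
read_from_disk() helper are assumptions, not the actual
label_read_data API.

#include <stdint.h>
#include <string.h>

/* Illustrative view of the large buffer read from the start of a device. */
struct label_read_data_sketch {
        char *buf;          /* data beginning at offset 0 of the device     */
        uint64_t buf_len;   /* how many bytes of the device start were read */
};

/* Hypothetical fallback, e.g. a pread() on the device fd. */
int read_from_disk(uint64_t offset, uint64_t len, void *out);

/* Copy the requested range from the saved buffer when it is fully
 * covered, otherwise fall back to a separate disk read. */
static int read_bytes(struct label_read_data_sketch *ld,
                      uint64_t offset, uint64_t len, void *out)
{
        if (ld && ld->buf && offset + len <= ld->buf_len) {
                memcpy(out, ld->buf + offset, len);
                return 1;
        }

        return read_from_disk(offset, len, out);
}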
2017-11-10 10:53:57 -06:00
David Teigland
83f25c98de independent metadata areas: fix bogus code
Fix code that mixed bitwise & with logical &&, so the
expression evaluated to 1 in every case.
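
An illustrative example of the bug pattern (the flag names are made
up); with a logical && the test degenerates to "flags is nonzero",
while a bitwise & tests the intended bit.

#include <stdio.h>

#define FLAG_X 0x1
#define FLAG_Y 0x2

int main(void)
{
        unsigned flags = FLAG_Y;

        /* Buggy: for any nonzero flags this is always 1, no matter
         * which bits are actually set. */
        if (flags && FLAG_X)
                printf("logical &&: taken whenever flags != 0\n");

        /* Intended: test the specific bit. */
        if (flags & FLAG_X)
                printf("bitwise &: taken only when FLAG_X is set\n");

        return 0;
}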
2017-11-10 10:53:57 -06:00
David Teigland
667ce84c56 label_scan: fix independent metadata areas
This fixes the use of lvmcache_label_rescan_vg() in the previous
commit for the special case of independent metadata areas.

label scan is about discovering VG name to device associations
using information from disks, but devices in VGs with
independent metadata areas have no information on disk, so
the label scan does nothing for these VGs/devices.
With independent metadata areas, only the VG metadata found
in files is used.  This metadata is found and read in
vg_read in the processing phase.

lvmcache_label_rescan_vg() drops lvmcache info for the VG devices
before repeating the label scan on them.  In the case of
independent metadata areas, there is no metadata on devices, so the
label scan of the devices will find nothing, so will not recreate
the necessary vginfo/info data in lvmcache for the VG.  Fix this
by setting a flag in the lvmcache vginfo struct indicating that
the VG uses independent metadata areas, and label rescanning should
be skipped.

In the case of independent metadata areas, it is the metadata
processing in the vg_read phase that sets up the lvmcache
vginfo/info information, and label scan has no role.
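
A sketch of the skip described above; the flag and function names
below are placeholders for illustration, not the actual lvmcache
identifiers.

struct lvmcache_vginfo_sketch {
        int independent_metadata_location;  /* VG metadata lives in files, not on PVs */
};

/* Hypothetical helper that would drop and re-read device labels for one VG. */
void drop_and_rescan_vg_devices(struct lvmcache_vginfo_sketch *vginfo);

static void label_rescan_vg_sketch(struct lvmcache_vginfo_sketch *vginfo)
{
        /* Nothing on disk describes this VG, so dropping the cached
         * vginfo/info and rescanning labels would just lose the data
         * that vg_read builds from the metadata files. */
        if (vginfo->independent_metadata_location)
                return;

        drop_and_rescan_vg_devices(vginfo);
}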
2017-11-10 10:53:57 -06:00
David Teigland
0a9abe6128 label_scan: move to start of command
LVM's general design for scanning/reading of metadata from disks is
that a command begins with a discovery phase, called "label scan",
in which it discovers which devices belong to lvm, what VGs exist on
those devices, and which devices are associated with each VG.
After this comes the processing phase, which is based around
processing specific VGs.  In this phase, lvm acquires a lock on
the VG, and rescans the devices associated with that VG, i.e.
it repeats the label scan steps on the devices in the VG in case
something has changed between the initial label scan and taking
the VG lock.  This ensures that the command is processing the
latest, unchanging data on disk.

This commit moves the location of these label scans to make them
clearer and avoid unnecessary repeated calls to them.

Previously, the initial label scan was called as a side effect
from various utility functions.  This would lead to it being called
unnecessarily.  It is an expensive operation, and should only be
called when necessary.  Also, this is a primary step in the
function of the command, and as such it should be called prominently
at the top level of command processing, not as a hidden side effect
of a utility function.  lvm knows exactly where and when the
label scan needs to be done.  Because of this, move the label scan
calls from the internal functions to the top level of processing.

Other specific instances of lvmcache_label_scan() are still called
unnecessarily or unclearly by specific commands that do not use
the common process_each functions.  These will be improved in
future commits.

During the processing phase, rescanning labels for devices in a VG
needs to be done after the VG lock is acquired in case things have
changed since the initial label scan.  This was being done by way
of rescanning devices that had the INVALID flag set in lvmcache.
This usually approximated the right set of devices, but it was not
exact, and obfuscated the real requirement.  Correct this by using
a new function that rescans the devices in the VG:
lvmcache_label_rescan_vg().

Apart from being inexact, the rescanning was extremely well hidden.
_vg_read() would call ->create_instance(), _text_create_text_instance(),
_create_vg_text_instance() which would call lvmcache_label_scan()
which would call _scan_invalid() which repeats the label scan on
devices flagged INVALID.  lvmcache_label_rescan_vg() is now called
prominently by _vg_read() directly.
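
A schematic sketch of the ordering described above.
lvmcache_label_scan() and lvmcache_label_rescan_vg() are named in
this message, but their signatures and the lock/read helpers below
are assumptions for illustration.

struct cmd_context;

/* Named in this commit message; signatures assumed for the sketch. */
void lvmcache_label_scan(struct cmd_context *cmd);
void lvmcache_label_rescan_vg(struct cmd_context *cmd, const char *vgname);

/* Placeholder helpers for the locking and processing phase. */
void lock_vg(struct cmd_context *cmd, const char *vgname);
void unlock_vg(struct cmd_context *cmd, const char *vgname);
void read_and_process_vg(struct cmd_context *cmd, const char *vgname);

static void process_one_vg(struct cmd_context *cmd, const char *vgname)
{
        /* 1. Discovery: one label scan at the top of the command. */
        lvmcache_label_scan(cmd);

        /* 2. Processing: take the VG lock, then rescan only this VG's
         *    devices in case anything changed since the initial scan. */
        lock_vg(cmd, vgname);
        lvmcache_label_rescan_vg(cmd, vgname);

        /* 3. Read and process the VG using up-to-date on-disk data. */
        read_and_process_vg(cmd, vgname);

        unlock_vg(cmd, vgname);
}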
2017-11-10 10:53:57 -06:00
David Teigland
c3fb5c75f4 label_scan: call new label_scan from lvmcache_label_scan
To do label scanning, lvm code calls lvmcache_label_scan().
Change lvmcache_label_scan() to use the new label_scan()
which can use async io, rather than implementing its own
dev iter loop and calling the synchronous label_read() on
each device.

Also add lvmcache_label_rescan_vg() which calls the new
label_scan_devs() which does label scanning on only the
specified devices.  This is for a subsequent commit and
is not yet used.
2017-11-10 10:53:57 -06:00
David Teigland
60e707806f label_scan: add new implementation for async and sync
This adds the implementation without using it in the code.
The code still calls label_read() on each individual device
to do scanning.

Calling the new label_scan() will use async io if async io is
enabled in config settings.  If not enabled, or if async io fails,
label_scan() will fall back to using synchronous io.  If only some
aio ops fail, the code will attempt to perform synchronous io on just
the ios that failed.

Uses linux native aio system calls, not the posix wrappers which
are messier and may not have all the latest linux capabilities.

Internally, the same functionality is used as before:

- iterate through each visible device on the system,
  provided by dev-cache

- call _find_label_header on the dev to find the sector
  containing the label_header

- call _text_read to look at the pv_header and mda locations
  after the pv_header

- for each mda location, read the mda_header and the vg metadata

- add info/vginfo structs to lvmcache which associate the
  device name (info) with the VG name (vginfo) so that vg_read
  can know which devices to read for a given VG name

The new label scanning issues a "large" read beginning at the start
of the device, where large is configurable, but intended to cover
all of the labels/headers/metadata located at the start of
the device.  This large data buffer from each device is saved in a
global list using a new 'label_read_data' struct.  Currently, this
buffer is only used to find the label_header from the first four
sectors of the device.  In subsequent commits, other functions
that read other structs/metadata will first try to find that data in
the saved label_read_data buffer.  In most common cases, the data
they need can simply be copied out of the existing buffer, and
they can avoid issuing another disk read to get it.
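
A sketch of the async-first, synchronous-fallback flow described
above; all of the names below are illustrative, not the actual
label_scan code.

#include <stdbool.h>

struct dev_set;   /* some collection of devices to scan */

/* Hypothetical helpers: scan with aio (recording failures), scan synchronously. */
bool scan_devs_async(struct dev_set *devs, struct dev_set *failed);
bool scan_devs_sync(struct dev_set *devs);
bool aio_enabled_in_config(void);

static bool label_scan_sketch(struct dev_set *devs, struct dev_set *failed)
{
        if (!aio_enabled_in_config())
                return scan_devs_sync(devs);

        if (scan_devs_async(devs, failed))
                return true;

        /* Some or all async ios failed; redo just those synchronously. */
        return scan_devs_sync(failed);
}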
2017-11-10 10:53:51 -06:00
David Teigland
cf22b4c9f5 command: add settings to enable async io
There are config settings to enable aio, and to configure
the concurrency and read size.
2017-11-09 15:38:49 -06:00
David Teigland
18400a7e16 io: add low level async io support
The interface consists of:

- A context struct, one for the entire command.

- An io struct, one per io operation (read).

- dev_async_context_setup() creates an aio context.

- dev_async_context_destroy() destroys an aio context.

- dev_async_alloc_ios() allocates a specified number of io structs,
  along with an associated buffer for the data.

- dev_async_free_ios() frees all the allocated io structs+buffers.

- dev_async_io_get() gets an available io struct from those
  allocated in alloc_ios.  If none are available, it will allocate
  a new io struct if still under the limit.

- dev_async_io_put() puts a used io struct back into the set
  of unused io structs, making it available for get.

- dev_async_read_submit() starts an async read io.

- dev_async_getevents() collects async io completions.
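
A sketch of how these calls fit together for a single read.  Only
the function names come from the list above; the signatures, buffer
sizes, and counts are assumptions for illustration.

struct dev_async_context;
struct dev_async_io;

/* Assumed signatures; only the names are from the interface above. */
struct dev_async_context *dev_async_context_setup(unsigned max_ios);
void dev_async_context_destroy(struct dev_async_context *ac);
int dev_async_alloc_ios(struct dev_async_context *ac, unsigned count, unsigned buf_len);
void dev_async_free_ios(struct dev_async_context *ac);
struct dev_async_io *dev_async_io_get(struct dev_async_context *ac, unsigned buf_len);
void dev_async_io_put(struct dev_async_context *ac, struct dev_async_io *aio);
int dev_async_read_submit(struct dev_async_context *ac, struct dev_async_io *aio,
                          int fd, unsigned long long offset, unsigned len);
int dev_async_getevents(struct dev_async_context *ac, int wait);

static void read_one_device(int fd)
{
        struct dev_async_context *ac = dev_async_context_setup(64);
        struct dev_async_io *aio;

        dev_async_alloc_ios(ac, 64, 128 * 1024);            /* ios + data buffers  */

        aio = dev_async_io_get(ac, 128 * 1024);             /* take an unused io   */
        dev_async_read_submit(ac, aio, fd, 0, 128 * 1024);  /* start the read      */
        dev_async_getevents(ac, 1);                         /* wait for completion */
        /* ... use the data in aio's buffer ... */
        dev_async_io_put(ac, aio);                          /* return io for reuse */

        dev_async_free_ios(ac);
        dev_async_context_destroy(ac);
}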
2017-11-09 15:29:20 -06:00
870 changed files with 53195 additions and 51685 deletions

.gitignore

@@ -1,7 +1,6 @@
*.5
*.7
*.8
*.8_gen
*.a
*.d
*.o
@@ -31,103 +30,3 @@ make.tmpl
/cscope.out
/tags
/tmp/
tools/man-generator
tools/man-generator.c
test/lib/lvchange
test/lib/lvconvert
test/lib/lvcreate
test/lib/lvdisplay
test/lib/lvextend
test/lib/lvmconfig
test/lib/lvmdiskscan
test/lib/lvmsadc
test/lib/lvmsar
test/lib/lvreduce
test/lib/lvremove
test/lib/lvrename
test/lib/lvresize
test/lib/lvs
test/lib/lvscan
test/lib/pvchange
test/lib/pvck
test/lib/pvcreate
test/lib/pvdisplay
test/lib/pvmove
test/lib/pvremove
test/lib/pvresize
test/lib/pvs
test/lib/pvscan
test/lib/vgcfgbackup
test/lib/vgcfgrestore
test/lib/vgchange
test/lib/vgck
test/lib/vgconvert
test/lib/vgcreate
test/lib/vgdisplay
test/lib/vgexport
test/lib/vgextend
test/lib/vgimport
test/lib/vgimportclone
test/lib/vgmerge
test/lib/vgmknodes
test/lib/vgreduce
test/lib/vgremove
test/lib/vgrename
test/lib/vgs
test/lib/vgscan
test/lib/vgsplit
test/api/lvtest.t
test/api/pe_start.t
test/api/percent.t
test/api/python_lvm_unit.py
test/api/test
test/api/thin_percent.t
test/api/vglist.t
test/api/vgtest.t
test/lib/aux
test/lib/check
test/lib/clvmd
test/lib/dm-version-expected
test/lib/dmeventd
test/lib/dmsetup
test/lib/dmstats
test/lib/fail
test/lib/flavour-ndev-cluster
test/lib/flavour-ndev-cluster-lvmpolld
test/lib/flavour-ndev-lvmetad
test/lib/flavour-ndev-lvmetad-lvmpolld
test/lib/flavour-ndev-lvmpolld
test/lib/flavour-ndev-vanilla
test/lib/flavour-udev-cluster
test/lib/flavour-udev-cluster-lvmpolld
test/lib/flavour-udev-lvmetad
test/lib/flavour-udev-lvmetad-lvmpolld
test/lib/flavour-udev-lvmlockd-dlm
test/lib/flavour-udev-lvmlockd-sanlock
test/lib/flavour-udev-lvmlockd-test
test/lib/flavour-udev-lvmpolld
test/lib/flavour-udev-vanilla
test/lib/fsadm
test/lib/get
test/lib/inittest
test/lib/invalid
test/lib/lvm
test/lib/lvm-wrapper
test/lib/lvmchange
test/lib/lvmdbusd.profile
test/lib/lvmetad
test/lib/lvmpolld
test/lib/not
test/lib/paths
test/lib/paths-common
test/lib/runner
test/lib/should
test/lib/test
test/lib/thin-performance.profile
test/lib/utils
test/lib/version-expected
test/unit/dmraid_t.c
test/unit/unit-test


@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -28,6 +28,14 @@ ifeq ("@INTL@", "yes")
SUBDIRS += po
endif
ifeq ("@APPLIB@", "yes")
SUBDIRS += liblvm
endif
ifeq ("@PYTHON_BINDINGS@", "yes")
SUBDIRS += python
endif
ifeq ($(MAKECMDGOALS),clean)
SUBDIRS += test
endif
@@ -35,7 +43,8 @@ endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = conf include man test scripts \
libdaemon lib tools daemons libdm \
udev po
udev po liblvm python \
unit-tests/datastruct unit-tests/mm unit-tests/regex
tools.distclean: test.distclean
endif
DISTCLEAN_DIRS += lcov_reports*
@@ -46,16 +55,17 @@ include make.tmpl
libdm: include
libdaemon: include
lib: libdm libdaemon
liblvm: lib
daemons: lib libdaemon tools
tools: lib libdaemon device-mapper
po: tools daemons
man: tools
all_man: tools
scripts: libdm
test: tools daemons
scripts: liblvm libdm
lib.device-mapper: include.device-mapper
libdm.device-mapper: include.device-mapper
liblvm.device-mapper: include.device-mapper
daemons.device-mapper: libdm.device-mapper
tools.device-mapper: libdm.device-mapper
scripts.device-mapper: include.device-mapper
@@ -69,6 +79,10 @@ po.pofile: tools.pofile daemons.pofile
pofile: po.pofile
endif
ifeq ("@PYTHON_BINDINGS@", "yes")
python: liblvm
endif
ifneq ("$(CFLOW_CMD)", "")
tools.cflow: libdm.cflow lib.cflow
daemons.cflow: tools.cflow
@@ -83,7 +97,7 @@ endif
DISTCLEAN_TARGETS += cscope.out
CLEAN_DIRS += autom4te.cache
check check_system check_cluster check_local check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock: test
check check_system check_cluster check_local check_lvmetad check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock unit: all
$(MAKE) -C test $(@)
conf.generate man.generate: tools
@@ -132,7 +146,7 @@ install_system_dirs:
$(INSTALL_ROOT_DIR) $(DESTDIR)$(DEFAULT_RUN_DIR)
$(INSTALL_ROOT_DATA) /dev/null $(DESTDIR)$(DEFAULT_CACHE_DIR)/.cache
install_initscripts:
install_initscripts:
$(MAKE) -C scripts install_initscripts
install_systemd_generators:
@@ -145,14 +159,19 @@ install_systemd_units:
install_all_man:
$(MAKE) -C man install_all_man
ifeq ("@PYTHON_BINDINGS@", "yes")
install_python_bindings:
$(MAKE) -C liblvm/python install_python_bindings
endif
install_tmpfiles_configuration:
$(MAKE) -C scripts install_tmpfiles_configuration
LCOV_TRACES = libdm.info lib.info tools.info \
LCOV_TRACES = libdm.info lib.info liblvm.info tools.info \
libdaemon/client.info libdaemon/server.info \
test/unit.info \
daemons/clvmd.info \
daemons/dmeventd.info \
daemons/lvmetad.info \
daemons/lvmlockd.info \
daemons/lvmpolld.info
@@ -192,10 +211,30 @@ endif
endif
# FIXME: Drop once top-level make is resolved
-include test/unit/Makefile
include $(top_srcdir)/device_mapper/Makefile
include $(top_srcdir)/base/Makefile
ifeq ("$(TESTING)", "yes")
# testing and report generation
RUBY=ruby1.9 -Ireport-generators/lib -Ireport-generators/test
.PHONY: unit-test ruby-test test-programs
# FIXME: put dependencies on libdm and liblvm
# FIXME: Should be handled by Makefiles in subdirs, not here at top level.
test-programs:
cd unit-tests/regex && $(MAKE)
cd unit-tests/datastruct && $(MAKE)
cd unit-tests/mm && $(MAKE)
unit-test: test-programs
$(RUBY) report-generators/unit_test.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
memcheck: test-programs
$(RUBY) report-generators/memcheck.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
ruby-test:
$(RUBY) report-generators/test/ts.rb
endif
ifneq ($(shell which ctags),)
.PHONY: tags

TESTING

@@ -1,62 +0,0 @@
LVM2 Test Suite
===============
The codebase contains many tests in the test subdirectory.
Before running tests
--------------------
Keep in mind the testsuite MUST run under root user.
It is recommended not to use LVM on the test machine, especially when running
tests with udev (`make check_system`.)
You MUST disable (or mask) any LVM daemons:
- lvmetad
- dmeventd
- lvmpolld
- lvmdbusd
- lvmlockd
- clvmd
- cmirrord
For running cluster tests, we are using singlenode locking. Pass
`--with-clvmd=singlenode` to configure.
NOTE: This is useful only for testing, and should not be used in production
code.
To run D-Bus daemon tests, existing D-Bus session is required.
Running tests
-------------
As root run:
make check
To run only tests matching a string:
make check T=test
To skip tests matching a string:
make check S=test
There are other targets and many environment variables can be used to tweak the
testsuite - for full list and description run `make -C test help`.
Installing testsuite
--------------------
It is possible to install and run a testsuite against installed LVM. Run the
following:
make -C test install
Then lvm2-testsuite binary can be executed to test installed binaries.
See `lvm2-testsuite --help` for options. The same environment variables can be
used as with `make check`.


@@ -1 +1 @@
2.02.178(2)-git (2018-05-24)
2.02.176(2)-git (2017-10-06)


@@ -1 +1 @@
1.02.147-git (2018-05-24)
1.02.145-git (2017-10-06)

WHATS_NEW

@@ -1,142 +1,5 @@
Version 3.0.0
=============
Add basic creation support for VDO target.
Never send any discard ioctl with test mode.
Fix thin-pool alloc which needs same PV for data and metadata.
Extend list of non-memlocked areas with newly linked libs.
Enhance vgcfgrestore to check for active LVs in restored VG.
Configure supports --disable-silent-rules for verbose builds.
Fix unmonitoring of merging snapshots.
Cache can use metadata format 2 with cleaner policy.
Fix check if resized PV can also fit metadata area.
Avoid showing internal error in lvs output or pvmoved LVs.
Remove clvmd
Remove lvmlib (api)
lvconvert: provide possible layouts between linear and striped/raid
Use versionsort to fix archive file expiry beyond 100000 files.
Version 2.02.178-rc1 - 24th May 2018
====================================
Add libaio dependency for build.
Remove lvm1 and pool format handling and add filter to ignore them.
Move some filter checks to after disks are read.
Rework disk scanning and when it is used.
Add new io layer and shift code to using it.
Fix lvconvert's return code on degraded -m raid1 conversion.
--enable-testing switch for ./configure has been removed.
--with-snapshots switch for ./configure has been removed.
--with-mirrors switch for ./configure has been removed.
--with-raid switch for ./configure has been removed.
--with-thin switch for ./configure has been removed.
--with-cache switch for ./configure has been removed.
Include new unit-test framework and unit tests.
Extend validation of region_size for mirror segment.
Reload whole device stack when reinitializing mirror log.
Mirrors without monitoring are WARNING and not blocking on error.
Detect too big region_size with clustered mirrors.
Fix evaluation of maximal region size for mirror log.
Enhance mirror log size estimation and use smaller size when possible.
Fix incorrect mirror log size calculation on 32bit arch.
Enhance preloading tree creating.
Fix regression on acceptance of any LV on lvconvert.
Restore usability of thin LV to be again external origin for another thin.
Keep systemd vars on change event in 69-dm-lvm-metad.rules for systemd reload.
Write systemd and non-systemd rule in 69-dm-lvm-metad.rules, GOTO active one.
Add test for activation/volume_list (Sub)LV remnants.
Disallow usage of cache format 2 with mq cache policy.
Again accept striped LV as COW LV with lvconvert -s (2.02.169).
Fix raid target version testing for supported features.
Allow activation of pools when thin/cache_check tool is missing.
Remove RaidLV on creation failure when rmeta devices can't be activated.
Add prioritized_section() to restore cookie boundaries (2.02.177).
Enhance error messages when read error happens.
Enhance mirror log initialization for old mirror target.
Skip private crypto and stratis devices.
Skip frozen raid devices from scanning.
Activate RAID SubLVs on read_only_volume_list readwrite.
Offer convenience type raid5_n converting to raid10.
Automatically avoid reading invalid snapshots during device scan.
Ensure COW device is writable even for read-only thick snapshots.
Support activation of component LVs in read-only mode.
Extend internal library to recognize and work with component LV.
Skip duplicate check for active LV when prompting for its removal.
Activate correct lock holding LV when it is cached.
Do not modify archived metadata when removing striped raid.
Fix memleak on error path when obtaining lv_raid_data_offset.
Fix compatibility size test of extended external origin.
Add external_origin visiting in for_each_sub_lv().
Ensure cluster commands drop their device cache before locking VG.
Do not report LV as remotely active when it's locally exclusive in cluster.
Add deprecate messages for usage of mirrors with mirrorlog.
Separate reporting of monitoring status and error status.
Improve validation of created strings in vgimportclone.
Add missing initialisation of mem pool in systemd generator.
Do not reopen output streams for multithreaded users of liblvm.
Configure ensures /usr/bin dir is checked for dmpd tools.
Restore pvmove support for wide-clustered active volumes (2.02.177).
Avoid non-exclusive activation of exclusive segment types.
Fix trimming sibling PVs when doing a pvmove of raid subLVs.
Preserve exclusive activation during thin snapshot merge.
Avoid exceeding array bounds in allocation tag processing.
Add --lockopt to common options and add option to skip selected locks.
Version 2.02.177 - 18th December 2017
=====================================
When writing text metadata content, use complete 4096 byte blocks.
Change text format metadata alignment from 512 to 4096 bytes.
When writing metadata, consistently skip mdas marked as failed.
Refactor and adjust text format metadata alignment calculation.
Fix python3 path in lvmdbusd to use value detected by configure.
Reduce checks for active LVs in vgchange before background polling.
Ensure _node_send_message always uses clean status of thin pool.
Fix lvmlockd to use pool lock when accessing _tmeta volume.
Report expected sanlock_convert errors only when retries fail.
Avoid blocking in sanlock_convert on SH to EX lock conversion.
Deactivate missing raid LV legs (_rimage_X-missing_Y_Z) on deactivation.
Skip read-modify-write when entire block is replaced.
Categorise I/O with reason annotations in debug messages.
Allow extending of raid LVs created with --nosync after a failed repair.
Command will lock memory only when suspending volumes.
Merge segments when pvmove is finished.
Remove label_verify that has never been used.
Ensure very large numbers used as arguments are not casted to lower values.
Enhance reading and validation of options stripes and stripes_size.
Fix printing of default stripe size when user is not using stripes.
Activation code for pvmove automatically discovers holding LVs for resume.
Make a pvmove LV locking holder.
Do not change critical section counter on resume path without real resume.
Enhance activation code to automatically suspend pvmove participants.
Prevent conversion of thin volumes to snapshot origin when lvmlockd is used.
Correct the steps to change lock type in lvmlockd man page.
Retry lock acquisition on recognized sanlock errors.
Fix lock manager error codes in lvmlockd.
Remove unnecessary single read from lvmdiskscan.
Check raid reshape flags in vg_validate().
Add support for pvmove of cache and snapshot origins.
Avoid using precommitted metadata for suspending pvmove tree.
Enhance pvmove locking.
Deactivate activated LVs on error path when pvmove activation fails.
Add "io" to log/debug_classes for logging low-level I/O.
Eliminate redundant nested VG metadata in VG struct.
Avoid importing persistent filter in vgscan/pvscan/vgrename.
Fix memleak of string buffer when vgcfgbackup runs in secure mode.
Do not print error when clvmd cannot find running clvmd.
Prevent start of new merge of snapshot if origin is already being merged.
Fix offered type for raid6_n_6 to raid5 conversion (raid5_n).
Deactivate sub LVs when removing unused cache-pool.
Do not take backup with suspended devices.
Avoid RAID4 activation on incompatible kernels under all circumstances.
Reject conversion request to striped/raid0 on 2-legged raid4/5.
Version 2.02.176 - 3rd November 2017
====================================
Keep Install section only in lvm2-{lvmetad,lvmpolld}.socket systemd unit.
Fix segfault in lvm_pv_remove in liblvm. (2.02.173)
Do not allow storing VG metadata with LV without any segment.
Fix printed message when thin snapshot was already merged.
Remove created spare LV when creation of thin-pool failed.
Avoid reading ignored metadata when mda gets used again.
Fix detection of moved PVs in vgsplit. (2.02.175)
Version 2.02.176 -
===================================
Ignore --stripes/--stripesize on RAID takeover
Improve used paths for generated systemd units and init shells.
Disallow creation of snapshot of mirror/raid subLV (was never supported).


@@ -1,31 +1,5 @@
Version 1.02.147 -
====================================
Add vdo plugin for monitoring VDO devices.
Version 1.02.147-rc1 - 24th May 2018
====================================
Reuse uname() result for mirror target.
Recognize also mounted btrfs through dm_device_has_mounted_fs().
Add missing log_error() into dm_stats_populate() returning 0.
Avoid calling dm_stats_populat() for DM devices without any stats regions.
Support DM_DEBUG_WITH_LINE_NUMBERS envvar for debug msg with source:line.
Configured command for thin pool threshold handling gets whole environment.
Fix tests for failing dm_snprintf() in stats code.
Parsing mirror status accepts 'userspace' keyword in status.
Introduce dm_malloc_aligned for page alignment of buffers.
Version 1.02.146 - 18th December 2017
=====================================
Activation tree of thin pool skips duplicated check of pool status.
Remove code supporting replicator target.
Do not ignore failure of _info_by_dev().
Propagate delayed resume for pvmove subvolumes.
Suppress integrity encryption keys in 'table' output unless --showkeys supplied.
Version 1.02.145 - 3rd November 2017
====================================
Keep Install section only in dm-event.socket systemd unit.
Issue a specific error with dmsetup status if device is unknown.
Version 1.02.145 -
===================================
Fix RT_LIBS reference in generated libdevmapper.pc for pkg-config
Version 1.02.144 - 6th October 2017


@@ -155,7 +155,7 @@ AC_DEFUN([AC_TRY_LDFLAGS],
# and this notice are preserved. This file is offered as-is, without any
# warranty.
serial 3
#serial 3
AC_DEFUN([AX_GCC_BUILTIN], [
AS_VAR_PUSHDEF([ac_var], [ax_cv_have_$1])

aclocal.m4

@@ -1,6 +1,6 @@
# generated automatically by aclocal 1.15.1 -*- Autoconf -*-
# generated automatically by aclocal 1.15 -*- Autoconf -*-
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -13,7 +13,7 @@
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_python_module.html
# http://www.gnu.org/software/autoconf-archive/ax_python_module.html
# ===========================================================================
#
# SYNOPSIS
@@ -37,7 +37,7 @@ m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 9
#serial 8
AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
AC_DEFUN([AX_PYTHON_MODULE],[
@@ -69,63 +69,32 @@ AC_DEFUN([AX_PYTHON_MODULE],[
fi
])
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 11 (pkg-config-0.29.1)
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 1 (pkg-config-0.24)
#
# Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
dnl Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
dnl Copyright © 2012-2015 Dan Nicholson <dbn.lists@gmail.com>
dnl
dnl This program is free software; you can redistribute it and/or modify
dnl it under the terms of the GNU General Public License as published by
dnl the Free Software Foundation; either version 2 of the License, or
dnl (at your option) any later version.
dnl
dnl This program is distributed in the hope that it will be useful, but
dnl WITHOUT ANY WARRANTY; without even the implied warranty of
dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
dnl General Public License for more details.
dnl
dnl You should have received a copy of the GNU General Public License
dnl along with this program; if not, write to the Free Software
dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
dnl 02111-1307, USA.
dnl
dnl As a special exception to the GNU General Public License, if you
dnl distribute this file as part of a program that contains a
dnl configuration script generated by Autoconf, you may include it under
dnl the same distribution terms that you use for the rest of that
dnl program.
dnl PKG_PREREQ(MIN-VERSION)
dnl -----------------------
dnl Since: 0.29
dnl
dnl Verify that the version of the pkg-config macros are at least
dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
dnl installed version of pkg-config, this checks the developer's version
dnl of pkg.m4 when generating configure.
dnl
dnl To ensure that this macro is defined, also add:
dnl m4_ifndef([PKG_PREREQ],
dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
dnl
dnl See the "Since" comment for each macro you use to see what version
dnl of the macros you require.
m4_defun([PKG_PREREQ],
[m4_define([PKG_MACROS_VERSION], [0.29.1])
m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
[m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
])dnl PKG_PREREQ
dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
dnl ----------------------------------
dnl Since: 0.16
dnl
dnl Search for the pkg-config tool and set the PKG_CONFIG variable to
dnl first found in the path. Checks that the version of pkg-config found
dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
dnl used since that's the first version where most current features of
dnl pkg-config existed.
# PKG_PROG_PKG_CONFIG([MIN-VERSION])
# ----------------------------------
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
@@ -147,19 +116,18 @@ if test -n "$PKG_CONFIG"; then
PKG_CONFIG=""
fi
fi[]dnl
])dnl PKG_PROG_PKG_CONFIG
])# PKG_PROG_PKG_CONFIG
dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------------------------------
dnl Since: 0.18
dnl
dnl Check to see whether a particular set of modules exists. Similar to
dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
dnl
dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
dnl only at the first occurence in configure.ac, so if the first place
dnl it's called might be skipped (such as if it is within an "if", you
dnl have to call PKG_CHECK_EXISTS manually
# PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
#
# Check to see whether a particular set of modules exists. Similar
# to PKG_CHECK_MODULES(), but does not set variables or print errors.
#
# Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
# only at the first occurence in configure.ac, so if the first place
# it's called might be skipped (such as if it is within an "if", you
# have to call PKG_CHECK_EXISTS manually
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
@@ -169,10 +137,8 @@ m4_ifvaln([$3], [else
$3])dnl
fi])
dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
dnl ---------------------------------------------
dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
dnl pkg_failed based on the result.
# _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
# ---------------------------------------------
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
pkg_cv_[]$1="$$1"
@@ -184,11 +150,10 @@ m4_define([_PKG_CONFIG],
else
pkg_failed=untried
fi[]dnl
])dnl _PKG_CONFIG
])# _PKG_CONFIG
dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl ---------------------------
dnl Internal check to see if pkg-config supports short errors.
# _PKG_SHORT_ERRORS_SUPPORTED
# -----------------------------
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
@@ -196,17 +161,19 @@ if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
else
_pkg_short_errors_supported=no
fi[]dnl
])dnl _PKG_SHORT_ERRORS_SUPPORTED
])# _PKG_SHORT_ERRORS_SUPPORTED
dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl --------------------------------------------------------------
dnl Since: 0.4.0
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
# PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
# [ACTION-IF-NOT-FOUND])
#
#
# Note that if there is a possibility the first call to
# PKG_CHECK_MODULES might not happen, you should be sure to include an
# explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
#
#
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
@@ -260,40 +227,16 @@ else
AC_MSG_RESULT([yes])
$3
fi[]dnl
])dnl PKG_CHECK_MODULES
])# PKG_CHECK_MODULES
dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl ---------------------------------------------------------------------
dnl Since: 0.29
dnl
dnl Checks for existence of MODULES and gathers its build flags with
dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
dnl and VARIABLE-PREFIX_LIBS from --libs.
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
dnl configure.ac.
AC_DEFUN([PKG_CHECK_MODULES_STATIC],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
_save_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES($@)
PKG_CONFIG=$_save_PKG_CONFIG[]dnl
])dnl PKG_CHECK_MODULES_STATIC
dnl PKG_INSTALLDIR([DIRECTORY])
dnl -------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable pkgconfigdir as the location where a module
dnl should install pkg-config .pc files. By default the directory is
dnl $libdir/pkgconfig, but the default can be changed by passing
dnl DIRECTORY. The user can override through the --with-pkgconfigdir
dnl parameter.
# PKG_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable pkgconfigdir as the location where a module
# should install pkg-config .pc files. By default the directory is
# $libdir/pkgconfig, but the default can be changed by passing
# DIRECTORY. The user can override through the --with-pkgconfigdir
# parameter.
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
@@ -304,18 +247,16 @@ AC_ARG_WITH([pkgconfigdir],
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_INSTALLDIR
]) dnl PKG_INSTALLDIR
dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
dnl --------------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable noarch_pkgconfigdir as the location where a
dnl module should install arch-independent pkg-config .pc files. By
dnl default the directory is $datadir/pkgconfig, but the default can be
dnl changed by passing DIRECTORY. The user can override through the
dnl --with-noarch-pkgconfigdir parameter.
# PKG_NOARCH_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable noarch_pkgconfigdir as the location where a
# module should install arch-independent pkg-config .pc files. By
# default the directory is $datadir/pkgconfig, but the default can be
# changed by passing DIRECTORY. The user can override through the
# --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
m4_pushdef([pkg_description],
@@ -326,15 +267,13 @@ AC_ARG_WITH([noarch-pkgconfigdir],
AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_NOARCH_INSTALLDIR
]) dnl PKG_NOARCH_INSTALLDIR
dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------
dnl Since: 0.28
dnl
dnl Retrieves the value of the pkg-config variable for the given module.
# PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
# [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
# -------------------------------------------
# Retrieves the value of the pkg-config variable for the given module.
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
@@ -343,77 +282,9 @@ _PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])dnl PKG_CHECK_VAR
])# PKG_CHECK_VAR
dnl PKG_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND],
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------
dnl
dnl Prepare a "--with-" configure option using the lowercase
dnl [VARIABLE-PREFIX] name, merging the behaviour of AC_ARG_WITH and
dnl PKG_CHECK_MODULES in a single macro.
AC_DEFUN([PKG_WITH_MODULES],
[
m4_pushdef([with_arg], m4_tolower([$1]))
m4_pushdef([description],
[m4_default([$5], [build with ]with_arg[ support])])
m4_pushdef([def_arg], [m4_default([$6], [auto])])
m4_pushdef([def_action_if_found], [AS_TR_SH([with_]with_arg)=yes])
m4_pushdef([def_action_if_not_found], [AS_TR_SH([with_]with_arg)=no])
m4_case(def_arg,
[yes],[m4_pushdef([with_without], [--without-]with_arg)],
[m4_pushdef([with_without],[--with-]with_arg)])
AC_ARG_WITH(with_arg,
AS_HELP_STRING(with_without, description[ @<:@default=]def_arg[@:>@]),,
[AS_TR_SH([with_]with_arg)=def_arg])
AS_CASE([$AS_TR_SH([with_]with_arg)],
[yes],[PKG_CHECK_MODULES([$1],[$2],$3,$4)],
[auto],[PKG_CHECK_MODULES([$1],[$2],
[m4_n([def_action_if_found]) $3],
[m4_n([def_action_if_not_found]) $4])])
m4_popdef([with_arg])
m4_popdef([description])
m4_popdef([def_arg])
])dnl PKG_WITH_MODULES
dnl PKG_HAVE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl -----------------------------------------------
dnl
dnl Convenience macro to trigger AM_CONDITIONAL after PKG_WITH_MODULES
dnl check._[VARIABLE-PREFIX] is exported as make variable.
AC_DEFUN([PKG_HAVE_WITH_MODULES],
[
PKG_WITH_MODULES([$1],[$2],,,[$3],[$4])
AM_CONDITIONAL([HAVE_][$1],
[test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"])
])dnl PKG_HAVE_WITH_MODULES
dnl PKG_HAVE_DEFINE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------------------
dnl
dnl Convenience macro to run AM_CONDITIONAL and AC_DEFINE after
dnl PKG_WITH_MODULES check. HAVE_[VARIABLE-PREFIX] is exported as make
dnl and preprocessor variable.
AC_DEFUN([PKG_HAVE_DEFINE_WITH_MODULES],
[
PKG_HAVE_WITH_MODULES([$1],[$2],[$3],[$4])
AS_IF([test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"],
[AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
])dnl PKG_HAVE_DEFINE_WITH_MODULES
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -446,9 +317,8 @@ AC_DEFUN([AM_PATH_PYTHON],
[
dnl Find a Python interpreter. Python versions prior to 2.0 are not
dnl supported. (2.0 was released on October 16, 2000).
dnl FIXME: Remove the need to hard-code Python versions here.
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],
[python python2 python3 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
[python python2 python3 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0])
AC_ARG_VAR([PYTHON], [the Python interpreter])
@@ -649,7 +519,7 @@ for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[[i]]
sys.exit(sys.hexversion < minverhex)"
AS_IF([AM_RUN_LOG([$1 -c "$prog"])], [$3], [$4])])
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -666,4 +536,5 @@ AC_DEFUN([AM_RUN_LOG],
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
m4_include([acinclude.m4])


@@ -1,31 +0,0 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
BASE_SOURCE=\
base/data-struct/radix-tree.c \
base/data-struct/hash.c \
base/data-struct/list.c
BASE_DEPENDS=$(addprefix $(top_builddir)/,$(subst .c,.d,$(BASE_SOURCE)))
BASE_OBJECTS=$(addprefix $(top_builddir)/,$(subst .c,.o,$(BASE_SOURCE)))
CLEAN_TARGETS+=$(BASE_DEPENDS) $(BASE_OBJECTS)
-include $(BASE_DEPENDS)
$(BASE_OBJECTS): INCLUDES+=-I$(top_srcdir)/base/
$(top_builddir)/base/libbase.a: $(BASE_OBJECTS)
@echo " [AR] $@"
$(Q) $(RM) $@
$(Q) $(AR) rsv $@ $(BASE_OBJECTS) > /dev/null
CLEAN_TARGETS+=$(top_builddir)/base/libbase.a


@@ -1,394 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "hash.h"
struct dm_hash_node {
struct dm_hash_node *next;
void *data;
unsigned data_len;
unsigned keylen;
char key[0];
};
struct dm_hash_table {
unsigned num_nodes;
unsigned num_slots;
struct dm_hash_node **slots;
};
/* Permutation of the Integers 0 through 255 */
static unsigned char _nums[] = {
1, 14, 110, 25, 97, 174, 132, 119, 138, 170, 125, 118, 27, 233, 140, 51,
87, 197, 177, 107, 234, 169, 56, 68, 30, 7, 173, 73, 188, 40, 36, 65,
49, 213, 104, 190, 57, 211, 148, 223, 48, 115, 15, 2, 67, 186, 210, 28,
12, 181, 103, 70, 22, 58, 75, 78, 183, 167, 238, 157, 124, 147, 172,
144,
176, 161, 141, 86, 60, 66, 128, 83, 156, 241, 79, 46, 168, 198, 41, 254,
178, 85, 253, 237, 250, 154, 133, 88, 35, 206, 95, 116, 252, 192, 54,
221,
102, 218, 255, 240, 82, 106, 158, 201, 61, 3, 89, 9, 42, 155, 159, 93,
166, 80, 50, 34, 175, 195, 100, 99, 26, 150, 16, 145, 4, 33, 8, 189,
121, 64, 77, 72, 208, 245, 130, 122, 143, 55, 105, 134, 29, 164, 185,
194,
193, 239, 101, 242, 5, 171, 126, 11, 74, 59, 137, 228, 108, 191, 232,
139,
6, 24, 81, 20, 127, 17, 91, 92, 251, 151, 225, 207, 21, 98, 113, 112,
84, 226, 18, 214, 199, 187, 13, 32, 94, 220, 224, 212, 247, 204, 196,
43,
249, 236, 45, 244, 111, 182, 153, 136, 129, 90, 217, 202, 19, 165, 231,
71,
230, 142, 96, 227, 62, 179, 246, 114, 162, 53, 160, 215, 205, 180, 47,
109,
44, 38, 31, 149, 135, 0, 216, 52, 63, 23, 37, 69, 39, 117, 146, 184,
163, 200, 222, 235, 248, 243, 219, 10, 152, 131, 123, 229, 203, 76, 120,
209
};
static struct dm_hash_node *_create_node(const char *str, unsigned len)
{
struct dm_hash_node *n = malloc(sizeof(*n) + len);
if (n) {
memcpy(n->key, str, len);
n->keylen = len;
}
return n;
}
static unsigned long _hash(const char *str, unsigned len)
{
unsigned long h = 0, g;
unsigned i;
for (i = 0; i < len; i++) {
h <<= 4;
h += _nums[(unsigned char) *str++];
g = h & ((unsigned long) 0xf << 16u);
if (g) {
h ^= g >> 16u;
h ^= g >> 5u;
}
}
return h;
}
struct dm_hash_table *dm_hash_create(unsigned size_hint)
{
size_t len;
unsigned new_size = 16u;
struct dm_hash_table *hc = zalloc(sizeof(*hc));
if (!hc)
return_0;
/* round size hint up to a power of two */
while (new_size < size_hint)
new_size = new_size << 1;
hc->num_slots = new_size;
len = sizeof(*(hc->slots)) * new_size;
if (!(hc->slots = zalloc(len)))
goto_bad;
return hc;
bad:
free(hc->slots);
free(hc);
return 0;
}
static void _free_nodes(struct dm_hash_table *t)
{
struct dm_hash_node *c, *n;
unsigned i;
for (i = 0; i < t->num_slots; i++)
for (c = t->slots[i]; c; c = n) {
n = c->next;
free(c);
}
}
void dm_hash_destroy(struct dm_hash_table *t)
{
_free_nodes(t);
free(t->slots);
free(t);
}
static struct dm_hash_node **_find(struct dm_hash_table *t, const void *key,
uint32_t len)
{
unsigned h = _hash(key, len) & (t->num_slots - 1);
struct dm_hash_node **c;
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len))
break;
}
return c;
}
void *dm_hash_lookup_binary(struct dm_hash_table *t, const void *key,
uint32_t len)
{
struct dm_hash_node **c = _find(t, key, len);
return *c ? (*c)->data : 0;
}
int dm_hash_insert_binary(struct dm_hash_table *t, const void *key,
uint32_t len, void *data)
{
struct dm_hash_node **c = _find(t, key, len);
if (*c)
(*c)->data = data;
else {
struct dm_hash_node *n = _create_node(key, len);
if (!n)
return 0;
n->data = data;
n->next = 0;
*c = n;
t->num_nodes++;
}
return 1;
}
void dm_hash_remove_binary(struct dm_hash_table *t, const void *key,
uint32_t len)
{
struct dm_hash_node **c = _find(t, key, len);
if (*c) {
struct dm_hash_node *old = *c;
*c = (*c)->next;
free(old);
t->num_nodes--;
}
}
void *dm_hash_lookup(struct dm_hash_table *t, const char *key)
{
return dm_hash_lookup_binary(t, key, strlen(key) + 1);
}
int dm_hash_insert(struct dm_hash_table *t, const char *key, void *data)
{
return dm_hash_insert_binary(t, key, strlen(key) + 1, data);
}
void dm_hash_remove(struct dm_hash_table *t, const char *key)
{
dm_hash_remove_binary(t, key, strlen(key) + 1);
}
static struct dm_hash_node **_find_str_with_val(struct dm_hash_table *t,
const void *key, const void *val,
uint32_t len, uint32_t val_len)
{
struct dm_hash_node **c;
unsigned h;
h = _hash(key, len) & (t->num_slots - 1);
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len) && (*c)->data) {
if (((*c)->data_len == val_len) &&
!memcmp(val, (*c)->data, val_len))
return c;
}
}
return NULL;
}
int dm_hash_insert_allow_multiple(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node *n;
struct dm_hash_node *first;
int len = strlen(key) + 1;
unsigned h;
n = _create_node(key, len);
if (!n)
return 0;
n->data = (void *)val;
n->data_len = val_len;
h = _hash(key, len) & (t->num_slots - 1);
first = t->slots[h];
if (first)
n->next = first;
else
n->next = 0;
t->slots[h] = n;
t->num_nodes++;
return 1;
}
/*
* Look through multiple entries with the same key for one that has a
* matching val and return that. If none have maching val, return NULL.
*/
void *dm_hash_lookup_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node **c;
c = _find_str_with_val(t, key, val, strlen(key) + 1, val_len);
return (c && *c) ? (*c)->data : 0;
}
/*
* Look through multiple entries with the same key for one that has a
* matching val and remove that.
*/
void dm_hash_remove_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len)
{
struct dm_hash_node **c;
c = _find_str_with_val(t, key, val, strlen(key) + 1, val_len);
if (c && *c) {
struct dm_hash_node *old = *c;
*c = (*c)->next;
free(old);
t->num_nodes--;
}
}
/*
* Look up the value for a key and count how many
* entries have the same key.
*
* If no entries have key, return NULL and set count to 0.
*
* If one entry has the key, the function returns the val,
* and sets count to 1.
*
* If N entries have the key, the function returns the val
* from the first entry, and sets count to N.
*/
void *dm_hash_lookup_with_count(struct dm_hash_table *t, const char *key, int *count)
{
struct dm_hash_node **c;
struct dm_hash_node **c1 = NULL;
uint32_t len = strlen(key) + 1;
unsigned h;
*count = 0;
h = _hash(key, len) & (t->num_slots - 1);
for (c = &t->slots[h]; *c; c = &((*c)->next)) {
if ((*c)->keylen != len)
continue;
if (!memcmp(key, (*c)->key, len)) {
(*count)++;
if (!c1)
c1 = c;
}
}
if (!c1)
return NULL;
else
return *c1 ? (*c1)->data : 0;
}
unsigned dm_hash_get_num_entries(struct dm_hash_table *t)
{
return t->num_nodes;
}
void dm_hash_iter(struct dm_hash_table *t, dm_hash_iterate_fn f)
{
struct dm_hash_node *c, *n;
unsigned i;
for (i = 0; i < t->num_slots; i++)
for (c = t->slots[i]; c; c = n) {
n = c->next;
f(c->data);
}
}
void dm_hash_wipe(struct dm_hash_table *t)
{
_free_nodes(t);
memset(t->slots, 0, sizeof(struct dm_hash_node *) * t->num_slots);
t->num_nodes = 0u;
}
char *dm_hash_get_key(struct dm_hash_table *t __attribute__((unused)),
struct dm_hash_node *n)
{
return n->key;
}
void *dm_hash_get_data(struct dm_hash_table *t __attribute__((unused)),
struct dm_hash_node *n)
{
return n->data;
}
static struct dm_hash_node *_next_slot(struct dm_hash_table *t, unsigned s)
{
struct dm_hash_node *c = NULL;
unsigned i;
for (i = s; i < t->num_slots && !c; i++)
c = t->slots[i];
return c;
}
struct dm_hash_node *dm_hash_get_first(struct dm_hash_table *t)
{
return _next_slot(t, 0);
}
struct dm_hash_node *dm_hash_get_next(struct dm_hash_table *t, struct dm_hash_node *n)
{
unsigned h = _hash(n->key, n->keylen) & (t->num_slots - 1);
return n->next ? n->next : _next_slot(t, h + 1);
}


@@ -1,94 +0,0 @@
#ifndef BASE_DATA_STRUCT_HASH_H
#define BASE_DATA_STRUCT_HASH_H
#include <stdint.h>
//----------------------------------------------------------------
struct dm_hash_table;
struct dm_hash_node;
typedef void (*dm_hash_iterate_fn) (void *data);
struct dm_hash_table *dm_hash_create(unsigned size_hint)
__attribute__((__warn_unused_result__));
void dm_hash_destroy(struct dm_hash_table *t);
void dm_hash_wipe(struct dm_hash_table *t);
void *dm_hash_lookup(struct dm_hash_table *t, const char *key);
int dm_hash_insert(struct dm_hash_table *t, const char *key, void *data);
void dm_hash_remove(struct dm_hash_table *t, const char *key);
void *dm_hash_lookup_binary(struct dm_hash_table *t, const void *key, uint32_t len);
int dm_hash_insert_binary(struct dm_hash_table *t, const void *key, uint32_t len,
void *data);
void dm_hash_remove_binary(struct dm_hash_table *t, const void *key, uint32_t len);
unsigned dm_hash_get_num_entries(struct dm_hash_table *t);
void dm_hash_iter(struct dm_hash_table *t, dm_hash_iterate_fn f);
char *dm_hash_get_key(struct dm_hash_table *t, struct dm_hash_node *n);
void *dm_hash_get_data(struct dm_hash_table *t, struct dm_hash_node *n);
struct dm_hash_node *dm_hash_get_first(struct dm_hash_table *t);
struct dm_hash_node *dm_hash_get_next(struct dm_hash_table *t, struct dm_hash_node *n);
/*
* dm_hash_insert() replaces the value of an existing
* entry with a matching key if one exists. Otherwise
* it adds a new entry.
*
* dm_hash_insert_allow_multiple() inserts a new entry even if
* another entry with the same key already exists.
* val_len is the size of the data being inserted.
*
* If two entries with the same key exist,
* (added using dm_hash_insert_allow_multiple), then:
* . dm_hash_lookup() returns the first one it finds, and
* dm_hash_lookup_with_val() returns the one with a matching
* val_len/val.
* . dm_hash_remove() removes the first one it finds, and
* dm_hash_remove_with_val() removes the one with a matching
* val_len/val.
*
* If a single entry with a given key exists, and it has
* zero val_len, then:
* . dm_hash_lookup() returns it
* . dm_hash_lookup_with_val(val_len=0) returns it
* . dm_hash_remove() removes it
* . dm_hash_remove_with_val(val_len=0) removes it
*
* dm_hash_lookup_with_count() is a single call that will
* both lookup a key's value and check if there is more
* than one entry with the given key.
*
* (It is not meant to retrieve all the entries with the
* given key. In the common case where a single entry exists
* for the key, it is useful to have a single call that will
* both look up the value and indicate if multiple values
* exist for the key.)
*
* dm_hash_lookup_with_count:
* . If no entries exist, the function returns NULL, and
* the count is set to 0.
* . If only one entry exists, the value of that entry is
* returned and count is set to 1.
* . If N entries exist, the value of the first entry is
* returned and count is set to N.
*/
void *dm_hash_lookup_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
void dm_hash_remove_with_val(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
int dm_hash_insert_allow_multiple(struct dm_hash_table *t, const char *key,
const void *val, uint32_t val_len);
void *dm_hash_lookup_with_count(struct dm_hash_table *t, const char *key, int *count);
#define dm_hash_iterate(v, h) \
for (v = dm_hash_get_first((h)); v; \
v = dm_hash_get_next((h), v))
//----------------------------------------------------------------
#endif
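
The header above is the complete public interface of the hash table removed by this series. As a quick orientation (not part of the patch), the following minimal sketch exercises both the single-entry calls and the allow-multiple calls; the "hash.h" include name and the commented output values are assumptions based only on the declarations and comments above.

/* Illustrative sketch only -- not part of this commit series. */
#include <stdio.h>
#include "hash.h"   /* assumed include name for the header above */

int main(void)
{
        struct dm_hash_table *t = dm_hash_create(64);
        static char v1[] = "alpha", v2[] = "beta";
        int count = 0;

        if (!t)
                return 1;

        /* dm_hash_insert() replaces the value when the key already exists. */
        dm_hash_insert(t, "lv0", v1);
        dm_hash_insert(t, "lv0", v2);
        printf("lv0 -> %s\n", (char *) dm_hash_lookup(t, "lv0"));      /* beta */

        /* Multiple entries under one key, told apart by their val/val_len. */
        dm_hash_insert_allow_multiple(t, "pv", v1, sizeof(v1));
        dm_hash_insert_allow_multiple(t, "pv", v2, sizeof(v2));

        dm_hash_lookup_with_count(t, "pv", &count);
        printf("entries for pv: %d\n", count);                          /* 2 */

        printf("pv/beta -> %s\n",
               (char *) dm_hash_lookup_with_val(t, "pv", v2, sizeof(v2)));

        dm_hash_remove_with_val(t, "pv", v1, sizeof(v1));
        printf("total entries: %u\n", dm_hash_get_num_entries(t));      /* 2 */

        dm_hash_destroy(t);
        return 0;
}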


@@ -1,170 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "list.h"
#include <assert.h>
#include <stdlib.h>
/*
* Initialise a list before use.
* The list head's next and previous pointers point back to itself.
*/
void dm_list_init(struct dm_list *head)
{
head->n = head->p = head;
}
/*
* Insert an element before 'head'.
* If 'head' is the list head, this adds an element to the end of the list.
*/
void dm_list_add(struct dm_list *head, struct dm_list *elem)
{
assert(head->n);
elem->n = head;
elem->p = head->p;
head->p->n = elem;
head->p = elem;
}
/*
* Insert an element after 'head'.
* If 'head' is the list head, this adds an element to the front of the list.
*/
void dm_list_add_h(struct dm_list *head, struct dm_list *elem)
{
assert(head->n);
elem->n = head->n;
elem->p = head;
head->n->p = elem;
head->n = elem;
}
/*
* Delete an element from its list.
* Note that this doesn't change the element itself - it may still be safe
* to follow its pointers.
*/
void dm_list_del(struct dm_list *elem)
{
elem->n->p = elem->p;
elem->p->n = elem->n;
}
/*
* Remove an element from existing list and insert before 'head'.
*/
void dm_list_move(struct dm_list *head, struct dm_list *elem)
{
dm_list_del(elem);
dm_list_add(head, elem);
}
/*
* Is the list empty?
*/
int dm_list_empty(const struct dm_list *head)
{
return head->n == head;
}
/*
* Is this the first element of the list?
*/
int dm_list_start(const struct dm_list *head, const struct dm_list *elem)
{
return elem->p == head;
}
/*
* Is this the last element of the list?
*/
int dm_list_end(const struct dm_list *head, const struct dm_list *elem)
{
return elem->n == head;
}
/*
* Return first element of the list or NULL if empty
*/
struct dm_list *dm_list_first(const struct dm_list *head)
{
return (dm_list_empty(head) ? NULL : head->n);
}
/*
* Return last element of the list or NULL if empty
*/
struct dm_list *dm_list_last(const struct dm_list *head)
{
return (dm_list_empty(head) ? NULL : head->p);
}
/*
* Return the previous element of the list, or NULL if we've reached the start.
*/
struct dm_list *dm_list_prev(const struct dm_list *head, const struct dm_list *elem)
{
return (dm_list_start(head, elem) ? NULL : elem->p);
}
/*
* Return the next element of the list, or NULL if we've reached the end.
*/
struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *elem)
{
return (dm_list_end(head, elem) ? NULL : elem->n);
}
/*
* Return the number of elements in a list by walking it.
*/
unsigned int dm_list_size(const struct dm_list *head)
{
unsigned int s = 0;
const struct dm_list *v;
dm_list_iterate(v, head)
s++;
return s;
}
/*
* Join two lists together.
* This moves all the elements of the list 'head1' to the end of the list
* 'head', leaving 'head1' empty.
*/
void dm_list_splice(struct dm_list *head, struct dm_list *head1)
{
assert(head->n);
assert(head1->n);
if (dm_list_empty(head1))
return;
head1->p->n = head;
head1->n->p = head->p;
head->p->n = head1->n;
head->p = head1->p;
dm_list_init(head1);
}
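
Not part of the patch: a minimal sketch of how these list primitives fit together, assuming the matching header shown next is available as "list.h". A node embeds a struct dm_list; because the link is the first member in this sketch, the enclosing node can be recovered with a plain cast (the header also provides dm_list_item() for the general case).

/* Illustrative sketch only -- not part of this commit series. */
#include <stdio.h>
#include "list.h"   /* assumed include name for the list header */

struct num {
        struct dm_list list;    /* link placed first so the cast below is valid */
        int value;
};

int main(void)
{
        struct dm_list head;
        struct num a = { .value = 1 }, b = { .value = 2 };
        const struct dm_list *e;

        dm_list_init(&head);
        dm_list_add(&head, &a.list);      /* insert before head = append */
        dm_list_add_h(&head, &b.list);    /* insert after head = prepend */

        printf("size: %u\n", dm_list_size(&head));                /* 2 */

        for (e = dm_list_first(&head); e; e = dm_list_next(&head, e))
                printf("%d\n", ((const struct num *) e)->value);  /* 2 then 1 */

        return 0;
}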


@@ -1,209 +0,0 @@
#ifndef BASE_DATA_STRUCT_LIST_H
#define BASE_DATA_STRUCT_LIST_H
//----------------------------------------------------------------
/*
* A list consists of a list head plus elements.
* Each element has 'next' and 'previous' pointers.
* The list head's pointers point to the first and the last element.
*/
struct dm_list {
struct dm_list *n, *p;
};
/*
* String list.
*/
struct dm_str_list {
struct dm_list list;
const char *str;
};
/*
* Initialise a list before use.
* The list head's next and previous pointers point back to itself.
*/
#define DM_LIST_HEAD_INIT(name) { &(name), &(name) }
#define DM_LIST_INIT(name) struct dm_list name = DM_LIST_HEAD_INIT(name)
void dm_list_init(struct dm_list *head);
/*
* Insert an element before 'head'.
* If 'head' is the list head, this adds an element to the end of the list.
*/
void dm_list_add(struct dm_list *head, struct dm_list *elem);
/*
* Insert an element after 'head'.
* If 'head' is the list head, this adds an element to the front of the list.
*/
void dm_list_add_h(struct dm_list *head, struct dm_list *elem);
/*
* Delete an element from its list.
* Note that this doesn't change the element itself - it may still be safe
* to follow its pointers.
*/
void dm_list_del(struct dm_list *elem);
/*
* Remove an element from existing list and insert before 'head'.
*/
void dm_list_move(struct dm_list *head, struct dm_list *elem);
/*
* Join 'head1' to the end of 'head'.
*/
void dm_list_splice(struct dm_list *head, struct dm_list *head1);
/*
* Is the list empty?
*/
int dm_list_empty(const struct dm_list *head);
/*
* Is this the first element of the list?
*/
int dm_list_start(const struct dm_list *head, const struct dm_list *elem);
/*
* Is this the last element of the list?
*/
int dm_list_end(const struct dm_list *head, const struct dm_list *elem);
/*
* Return first element of the list or NULL if empty
*/
struct dm_list *dm_list_first(const struct dm_list *head);
/*
* Return last element of the list or NULL if empty
*/
struct dm_list *dm_list_last(const struct dm_list *head);
/*
* Return the previous element of the list, or NULL if we've reached the start.
*/
struct dm_list *dm_list_prev(const struct dm_list *head, const struct dm_list *elem);
/*
* Return the next element of the list, or NULL if we've reached the end.
*/
struct dm_list *dm_list_next(const struct dm_list *head, const struct dm_list *elem);
/*
* Given the address v of an instance of 'struct dm_list' called 'head'
* contained in a structure of type t, return the containing structure.
*/
#define dm_list_struct_base(v, t, head) \
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
/*
* Given the address v of an instance of 'struct dm_list list' contained in
* a structure of type t, return the containing structure.
*/
#define dm_list_item(v, t) dm_list_struct_base((v), t, list)
/*
* Given the address v of one known element e in a known structure of type t,
* return another element f.
*/
#define dm_struct_field(v, t, e, f) \
(((t *)((uintptr_t)(v) - (uintptr_t)&((t *) 0)->e))->f)
/*
* Given the address v of a known element e in a known structure of type t,
* return the list head 'list'
*/
#define dm_list_head(v, t, e) dm_struct_field(v, t, e, list)
/*
* Set v to each element of a list in turn.
*/
#define dm_list_iterate(v, head) \
for (v = (head)->n; v != head; v = v->n)
/*
* Set v to each element in a list in turn, starting from the element
* in front of 'start'.
* You can use this to 'unwind' a list_iterate and back out actions on
* already-processed elements.
* If 'start' is 'head' it walks the list backwards.
*/
#define dm_list_uniterate(v, head, start) \
for (v = (start)->p; v != head; v = v->p)
/*
* A safe way to walk a list and delete and free some elements along
* the way.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_safe(v, t, head) \
for (v = (head)->n, t = v->n; v != head; v = t, t = v->n)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
*/
#define dm_list_iterate_items_gen(v, head, field) \
for (v = dm_list_struct_base((head)->n, __typeof__(*v), field); \
&v->field != (head); \
v = dm_list_struct_base(v->field.n, __typeof__(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
*/
#define dm_list_iterate_items(v, head) dm_list_iterate_items_gen(v, (head), list)
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_items_gen_safe(v, t, head, field) \
for (v = dm_list_struct_base((head)->n, __typeof__(*v), field), \
t = dm_list_struct_base(v->field.n, __typeof__(*v), field); \
&v->field != (head); \
v = t, t = dm_list_struct_base(v->field.n, __typeof__(*v), field))
/*
* Walk a list, setting 'v' in turn to the containing structure of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
* t must be defined as a temporary variable of the same type as v.
*/
#define dm_list_iterate_items_safe(v, t, head) \
dm_list_iterate_items_gen_safe(v, t, (head), list)
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The 'struct dm_list' variable within the containing structure is 'field'.
*/
#define dm_list_iterate_back_items_gen(v, head, field) \
for (v = dm_list_struct_base((head)->p, __typeof__(*v), field); \
&v->field != (head); \
v = dm_list_struct_base(v->field.p, __typeof__(*v), field))
/*
* Walk a list backwards, setting 'v' in turn to the containing structure
* of each item.
* The containing structure should be the same type as 'v'.
* The list should be 'struct dm_list list' within the containing structure.
*/
#define dm_list_iterate_back_items(v, head) dm_list_iterate_back_items_gen(v, (head), list)
/*
* Return the number of elements in a list by walking it.
*/
unsigned int dm_list_size(const struct dm_list *head);
//----------------------------------------------------------------
#endif
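
Not part of the patch: a short sketch of the container-aware iteration macros above, using the struct dm_str_list type declared in this header and assuming it is available as "list.h". dm_list_iterate_items() applies here because the embedded member is named 'list'.

/* Illustrative sketch only -- not part of this commit series. */
#include <stdio.h>
#include "list.h"   /* assumed include name for the header above */

int main(void)
{
        DM_LIST_INIT(names);                   /* struct dm_list names = { &names, &names } */
        struct dm_str_list a = { .str = "vg0" };
        struct dm_str_list b = { .str = "vg1" };
        struct dm_str_list *sl;

        dm_list_add(&names, &a.list);
        dm_list_add(&names, &b.list);

        /* Walk the list, setting sl to each containing dm_str_list in turn. */
        dm_list_iterate_items(sl, &names)
                printf("%s\n", sl->str);       /* vg0, then vg1 */

        return 0;
}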

File diff suppressed because it is too large.


@@ -1,64 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_DATA_STRUCT_RADIX_TREE_H
#define BASE_DATA_STRUCT_RADIX_TREE_H
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
//----------------------------------------------------------------
struct radix_tree;
union radix_value {
void *ptr;
uint64_t n;
};
typedef void (*radix_value_dtr)(void *context, union radix_value v);
// dtr will be called on any deleted entries. dtr may be NULL.
struct radix_tree *radix_tree_create(radix_value_dtr dtr, void *dtr_context);
void radix_tree_destroy(struct radix_tree *rt);
unsigned radix_tree_size(struct radix_tree *rt);
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value v);
bool radix_tree_remove(struct radix_tree *rt, uint8_t *kb, uint8_t *ke);
// Returns the number of values removed
unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *prefix_b, uint8_t *prefix_e);
bool radix_tree_lookup(struct radix_tree *rt,
uint8_t *kb, uint8_t *ke, union radix_value *result);
// The radix tree stores entries in lexicographical order. Which means
// we can iterate entries, in order. Or iterate entries with a particular
// prefix.
struct radix_tree_iterator {
// Returns false if the iteration should end.
bool (*visit)(struct radix_tree_iterator *it,
uint8_t *kb, uint8_t *ke, union radix_value v);
};
void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
struct radix_tree_iterator *it);
// Checks that some constraints on the shape of the tree are
// being held. For debug only.
bool radix_tree_is_well_formed(struct radix_tree *rt);
void radix_tree_dump(struct radix_tree *rt, FILE *out);
//----------------------------------------------------------------
#endif
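
Not part of the patch: a minimal sketch of the radix tree interface above. The "radix-tree.h" include name is an assumption, as is the key convention that ke points one byte past the last key byte (i.e. keys are passed as [kb, ke)).

/* Illustrative sketch only -- not part of this commit series. */
#include <stdio.h>
#include <string.h>
#include "radix-tree.h"   /* assumed include name for the header above */

int main(void)
{
        struct radix_tree *rt = radix_tree_create(NULL, NULL);  /* no destructor for plain values */
        char key[] = "vg0/lv0";
        uint8_t *kb = (uint8_t *) key;
        uint8_t *ke = kb + strlen(key);          /* assumed: key passed as [kb, ke) */
        union radix_value v, out;

        if (!rt)
                return 1;

        v.n = 42;
        if (!radix_tree_insert(rt, kb, ke, v))
                return 1;

        if (radix_tree_lookup(rt, kb, ke, &out))
                printf("found %llu (size %u)\n",
                       (unsigned long long) out.n, radix_tree_size(rt));

        radix_tree_destroy(rt);
        return 0;
}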


@@ -1,23 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_CONTAINER_OF_H
#define BASE_MEMORY_CONTAINER_OF_H
//----------------------------------------------------------------
#define container_of(v, t, head) \
((t *)((const char *)(v) - (const char *)&((t *) 0)->head))
//----------------------------------------------------------------
#endif
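
Not part of the patch: the macro above recovers the address of an enclosing structure from the address of one of its members, the same trick as dm_list_struct_base() earlier. A tiny sketch, with the "container-of.h" include name assumed:

/* Illustrative sketch only -- not part of this commit series. */
#include <stdio.h>
#include "container-of.h"   /* assumed include name for the header above */

struct link { int unused; };

struct lv {
        int minor;
        struct link n;          /* embedded member */
};

int main(void)
{
        struct lv lv = { .minor = 7 };
        struct link *np = &lv.n;

        /* Recover the enclosing struct lv from a pointer to its 'n' member. */
        struct lv *owner = container_of(np, struct lv, n);

        printf("%d\n", owner->minor);   /* 7 */
        return 0;
}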


@@ -1,31 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_ZALLOC_H
#define BASE_MEMORY_ZALLOC_H
#include <stdlib.h>
#include <string.h>
//----------------------------------------------------------------
static inline void *zalloc(size_t len)
{
void *ptr = malloc(len);
if (ptr)
memset(ptr, 0, len);
return ptr;
}
//----------------------------------------------------------------
#endif


@@ -1,5 +1,5 @@
#
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -25,7 +25,6 @@ PROFILES=$(PROFILE_TEMPLATES) \
$(srcdir)/cache-smq.profile \
$(srcdir)/thin-generic.profile \
$(srcdir)/thin-performance.profile \
$(srcdir)/vdo-small.profile \
$(srcdir)/lvmdbusd.profile
include $(top_builddir)/make.tmpl
@@ -33,8 +32,8 @@ include $(top_builddir)/make.tmpl
.PHONY: install_conf install_localconf install_profiles
generate:
$(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withgeneralpreamble --withcomments --ignorelocal --withspaces > example.conf.in
$(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withlocalpreamble --withcomments --withspaces local > lvmlocal.conf.in
LD_LIBRARY_PATH=$(top_builddir)/libdm:$(LD_LIBRARY_PATH) $(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withgeneralpreamble --withcomments --ignorelocal --withspaces > example.conf.in
LD_LIBRARY_PATH=$(top_builddir)/libdm:$(LD_LIBRARY_PATH) $(top_builddir)/tools/lvm dumpconfig --type default --unconfigured --withlocalpreamble --withcomments --withspaces local > lvmlocal.conf.in
install_conf: $(CONFSRC)
@if [ ! -e $(confdir)/$(CONFDEST) ]; then \


@@ -151,15 +151,21 @@ devices {
# global_filter = [ "a|.*/|" ]
# Configuration option devices/cache_dir.
# This setting is no longer used.
# Directory in which to store the device cache file.
# The results of filtering are cached on disk to avoid rescanning dud
# devices (which can take a very long time). By default this cache is
# stored in a file named .cache. It is safe to delete this file; the
# tools regenerate it. If obtain_device_list_from_udev is enabled, the
# list of devices is obtained from udev and any existing .cache file
# is removed.
cache_dir = "@DEFAULT_SYS_DIR@/@DEFAULT_CACHE_SUBDIR@"
# Configuration option devices/cache_file_prefix.
# This setting is no longer used.
# A prefix used before the .cache file name. See devices/cache_dir.
cache_file_prefix = ""
# Configuration option devices/write_cache_state.
# This setting is no longer used.
# Enable/disable writing the cache file. See devices/cache_dir.
write_cache_state = 1
# Configuration option devices/types.
@@ -263,7 +269,11 @@ devices {
ignore_lvm_mirrors = 1
# Configuration option devices/disable_after_error_count.
# This setting is no longer used.
# Number of I/O errors after which a device is skipped.
# During each LVM operation, errors received from each device are
# counted. If the counter of a device exceeds the limit set here,
# no further I/O is sent to that device for the remainder of the
# operation. Setting this to 0 disables the counters altogether.
disable_after_error_count = 0
# Configuration option devices/require_restorefile_with_uuid.
@@ -488,149 +498,6 @@ allocation {
# Default physical extent size in KiB to use for new VGs.
# This configuration option has an automatic default value.
# physical_extent_size = 4096
# Configuration option allocation/vdo_use_compression.
# Enables or disables compression when creating a VDO volume.
# Compression may be disabled if necessary to maximize performance
# or to speed processing of data that is unlikely to compress.
# This configuration option has an automatic default value.
# vdo_use_compression = 1
# Configuration option allocation/vdo_use_deduplication.
# Enables or disables deduplication when creating a VDO volume.
# Deduplication may be disabled in instances where data is not expected
# to have good deduplication rates but compression is still desired.
# This configuration option has an automatic default value.
# vdo_use_deduplication = 1
# Configuration option allocation/vdo_emulate_512_sectors.
# Specifies that the VDO volume is to emulate a 512 byte block device.
# This configuration option has an automatic default value.
# vdo_emulate_512_sectors = 0
# Configuration option allocation/vdo_block_map_cache_size_mb.
# Specifies the amount of memory in MiB allocated for caching block map
# pages for VDO volume. The value must be a multiple of 4096 and must be
# at least 128MiB and less than 16TiB. The cache must be at least 16MiB
# per logical thread. Note that there is a memory overhead of 15%.
# This configuration option has an automatic default value.
# vdo_block_map_cache_size_mb = 128
# Configuration option allocation/vdo_block_map_period.
# Tunes the quantity of block map updates that can accumulate
# before cache pages are flushed to disk. The value must be
# at least 1 and less than 16380.
# A lower value means shorter recovery time but lower performance.
# This configuration option has an automatic default value.
# vdo_block_map_period = 16380
# Configuration option allocation/vdo_check_point_frequency.
# The default check point frequency for VDO volume.
# This configuration option has an automatic default value.
# vdo_check_point_frequency = 0
# Configuration option allocation/vdo_use_sparse_index.
# Enables sparse indexing for VDO volume.
# This configuration option has an automatic default value.
# vdo_use_sparse_index = 0
# Configuration option allocation/vdo_index_memory_size_mb.
# Specifies the amount of index memory in MiB for VDO volume.
# The value must be at least 256MiB and at most 1TiB.
# This configuration option has an automatic default value.
# vdo_index_memory_size_mb = 256
# Configuration option allocation/vdo_use_read_cache.
# Enables or disables the read cache within the VDO volume.
# The cache should be enabled if write workloads are expected
# to have high levels of deduplication, or for read intensive
# workloads of highly compressible data.
# This configuration option has an automatic default value.
# vdo_use_read_cache = 0
# Configuration option allocation/vdo_read_cache_size_mb.
# Specifies the extra VDO volume read cache size in MiB.
# This space is in addition to a system-defined minimum.
# The value must be less than 16TiB and 1.12 MiB of memory
# will be used per MiB of read cache specified, per bio thread.
# This configuration option has an automatic default value.
# vdo_read_cache_size_mb = 0
# Configuration option allocation/vdo_slab_size_mb.
# Specifies the size in MiB of the increment by which a VDO is grown.
# Using a smaller size constrains the total maximum physical size
# that can be accommodated. Must be a power of two between 128MiB and 32GiB.
# This configuration option has an automatic default value.
# vdo_slab_size_mb = 2048
# Configuration option allocation/vdo_ack_threads.
# Specifies the number of threads to use for acknowledging
# completion of requested VDO I/O operations.
# The value must be in range [0..100].
# This configuration option has an automatic default value.
# vdo_ack_threads = 1
# Configuration option allocation/vdo_bio_threads.
# Specifies the number of threads to use for submitting I/O
# operations to the storage device of VDO volume.
# The value must be in range [1..100]
# Each additional thread after the first will use an additional 18MiB of RAM,
# plus 1.12 MiB of RAM per megabyte of configured read cache size.
# This configuration option has an automatic default value.
# vdo_bio_threads = 1
# Configuration option allocation/vdo_bio_rotation.
# Specifies the number of I/O operations to enqueue for each bio-submission
# thread before directing work to the next. The value must be in range [1..1024].
# This configuration option has an automatic default value.
# vdo_bio_rotation = 64
# Configuration option allocation/vdo_cpu_threads.
# Specifies the number of threads to use for CPU-intensive work such as
# hashing or compression for VDO volume. The value must be in range [1..100]
# This configuration option has an automatic default value.
# vdo_cpu_threads = 2
# Configuration option allocation/vdo_hash_zone_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on the hash value computed from the block data.
# The value must be in range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_hash_zone_threads = 1
# Configuration option allocation/vdo_logical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on the hash value computed from the block data.
# A logical thread count of 9 or more will require explicitly specifying
# a sufficiently large block map cache size, as well.
# The value must be in range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_logical_threads = 1
# Configuration option allocation/vdo_physical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on physical block addresses.
# Each additional thread after the first will use an additional 10MiB of RAM.
# The value must be in range [0..16].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_physical_threads = 1
# Configuration option allocation/vdo_write_policy.
# Specifies the write policy:
# auto - VDO will check the storage device and determine whether it supports flushes.
# If it does, VDO will run in async mode, otherwise it will run in sync mode.
# sync - Writes are acknowledged only after data is stably written.
# This policy is not supported if the underlying storage is not also synchronous.
# async - Writes are acknowledged after data has been cached for writing to stable storage.
# Data which has not been flushed is not guaranteed to persist in this mode.
# This configuration option has an automatic default value.
# vdo_write_policy = "auto"
}
# Configuration section log.
@@ -744,9 +611,9 @@ log {
# Select log messages by class.
# Some debugging messages are assigned to a class and only appear in
# debug output if the class is listed here. Classes currently
# available: memory, devices, io, activation, allocation, lvmetad,
# available: memory, devices, activation, allocation, lvmetad,
# metadata, cache, locking, lvmpolld. Use "all" to see everything.
debug_classes = [ "memory", "devices", "io", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}
# Configuration section backup.
@@ -835,17 +702,29 @@ global {
activation = 1
# Configuration option global/fallback_to_lvm1.
# This setting is no longer used.
# Try running LVM1 tools if LVM cannot communicate with DM.
# This option only applies to 2.4 kernels and is provided to help
# switch between device-mapper kernels and LVM1 kernels. The LVM1
# tools need to be installed with .lvm1 suffixes, e.g. vgscan.lvm1.
# They will stop working once the lvm2 on-disk metadata format is used.
# This configuration option has an automatic default value.
# fallback_to_lvm1 = 0
# fallback_to_lvm1 = @DEFAULT_FALLBACK_TO_LVM1@
# Configuration option global/format.
# This setting is no longer used.
# The default metadata format that commands should use.
# The -M 1|2 option overrides this setting.
#
# Accepted values:
# lvm1
# lvm2
#
# This configuration option has an automatic default value.
# format = "lvm2"
# Configuration option global/format_libraries.
# This setting is no longer used.
# Shared libraries that process different metadata formats.
# If support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# This configuration option does not have a default value defined.
# Configuration option global/segment_libraries.
@@ -861,7 +740,34 @@ global {
etc = "@CONFDIR@"
# Configuration option global/locking_type.
# This setting is no longer used.
# Type of locking to use.
#
# Accepted values:
# 0
# Turns off locking. Warning: this risks metadata corruption if
# commands run concurrently.
# 1
# LVM uses local file-based locking, the standard mode.
# 2
# LVM uses the external shared library locking_library.
# 3
# LVM uses built-in clustered locking with clvmd.
# This is incompatible with lvmetad. If use_lvmetad is enabled,
# LVM prints a warning and disables lvmetad use.
# 4
# LVM uses read-only locking which forbids any operations that
# might change metadata.
# 5
# Offers dummy locking for tools that do not need any locks.
# You should not need to set this directly; the tools will select
# when to use it instead of the configured locking_type.
# Do not use lvmetad or the kernel device-mapper driver with this
# locking type. It is used by the --readonly option that offers
# read-only access to Volume Group metadata that cannot be locked
# safely because it belongs to an inaccessible domain and might be
# in use, for example a virtual machine image or a disk that is
# shared by a clustered machine.
#
locking_type = 1
# Configuration option global/wait_for_locks.
@@ -869,11 +775,19 @@ global {
wait_for_locks = 1
# Configuration option global/fallback_to_clustered_locking.
# This setting is no longer used.
# Attempt to use built-in cluster locking if locking_type 2 fails.
# If using external locking (type 2) and initialisation fails, with
# this enabled, an attempt will be made to use the built-in clustered
# locking. Disable this if using a customised locking_library.
fallback_to_clustered_locking = 1
# Configuration option global/fallback_to_local_locking.
# This setting is no longer used.
# Use locking_type 1 (local) if locking_type 2 or 3 fail.
# If an attempt to initialise type 2 or type 3 locking failed, perhaps
# because cluster components such as clvmd are not running, with this
# enabled, an attempt will be made to use local file-based locking
# (type 1). If this succeeds, only commands against local VGs will
# proceed. VGs marked as clustered will be ignored.
fallback_to_local_locking = 1
# Configuration option global/locking_dir.
@@ -897,7 +811,7 @@ global {
# This configuration option does not have a default value defined.
# Configuration option global/locking_library.
# This setting is no longer used.
# The external locking library to use for locking_type 2.
# This configuration option has an automatic default value.
# locking_library = "liblvm2clusterlock.so"
@@ -907,6 +821,13 @@ global {
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Configuration option global/detect_internal_vg_cache_corruption.
# Internal verification of VG structures.
# Check if CRC matches when a parsed VG is used multiple times. This
# is useful to catch unexpected changes to cached VG structures.
# Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# Configuration option global/metadata_read_only.
# No operations that change on-disk metadata are permitted.
# Additionally, read-only commands that encounter metadata in need of
@@ -1149,17 +1070,6 @@ global {
# This configuration option has an automatic default value.
# cache_repair_options = [ "" ]
# Configuration option global/vdo_format_executable.
# The full path to the vdoformat command.
# LVM uses this command to initialise the data volume for a VDO-type logical volume.
# This configuration option has an automatic default value.
# vdo_format_executable = "@VDO_FORMAT_CMD@"
# Configuration option global/vdo_format_options.
# List of options added to the standard vdoformat command.
# This configuration option has an automatic default value.
# vdo_format_options = [ "" ]
# Configuration option global/fsadm_executable.
# The full path to the fsadm command.
# LVM uses this command to help with lvresize -r operations.
@@ -1533,33 +1443,6 @@ activation {
#
thin_pool_autoextend_percent = 20
# Configuration option activation/vdo_pool_autoextend_threshold.
# Auto-extend a VDO pool when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see vdo_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# vdo_pool_autoextend_threshold = 70
#
vdo_pool_autoextend_threshold = 100
# Configuration option activation/vdo_pool_autoextend_percent.
# Auto-extending a VDO pool adds this percent extra space.
# The amount of additional space added to a VDO pool is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# This configuration option has an automatic default value.
# vdo_pool_autoextend_percent = 20
# Configuration option activation/mlock_filter.
# Do not mlock these memory areas.
# While activating devices, I/O to devices being (re)configured is
@@ -1728,7 +1611,20 @@ activation {
# stripesize = 64
# Configuration option metadata/dirs.
# This setting is no longer used.
# Directories holding live copies of text format metadata.
# These directories must not be on logical volumes!
# It's possible to use LVM with a couple of directories here,
# preferably on different (non-LV) filesystems, and with no other
# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
# to on-disk metadata areas. The feature was originally added to
# simplify testing and is not supported under low memory situations -
# the machine could lock up. Never edit any files in these directories
# by hand unless you are absolutely sure you know what you are doing!
# Use the supplied toolset to make changes (e.g. vgcfgrestore).
#
# Example
# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# }
@@ -2181,23 +2077,6 @@ dmeventd {
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/vdo_library.
# The library dmeventd uses when monitoring a VDO pool device.
# libdevmapper-event-lvm2vdo.so monitors the filling of a pool
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the pool is filled.
# This configuration option has an automatic default value.
# vdo_library = "libdevmapper-event-lvm2vdo.so"
# Configuration option dmeventd/vdo_command.
# The plugin runs the command at each 5% increment once the VDO pool volume
# is more than 50% full.
# A command that starts with the 'lvm ' prefix is an internal lvm command.
# You can write your own handler to customise the behaviour in more detail.
# A user handler is specified with a full path starting with '/'.
# This configuration option has an automatic default value.
# vdo_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.


@@ -1,25 +0,0 @@
# Demo configuration for 'VDO' using less memory.
#
allocation {
vdo_use_compression = 1
vdo_use_deduplication = 1
vdo_emulate_512_sectors = 0
vdo_block_map_cache_size_mb = 128
vdo_block_map_period = 16380
vdo_check_point_frequency = 0
vdo_use_sparse_index = 0
vdo_index_memory_size_mb = 256
vdo_use_read_cache = 0
vdo_read_cache_size_mb = 0
vdo_slab_size_mb = 2048
vdo_ack_threads = 1
vdo_bio_threads = 1
vdo_bio_rotation = 64
vdo_cpu_threads = 2
vdo_hash_zone_threads = 1
vdo_logical_threads = 1
vdo_physical_threads = 1
vdo_write_policy = "auto"
}

configure (vendored), 2658 changed lines: file diff suppressed because it is too large.


@@ -39,12 +39,15 @@ case "$host_os" in
LDDEPS="$LDDEPS .export.sym"
LIB_SUFFIX=so
DEVMAPPER=yes
AIO=yes
BUILD_LVMETAD=no
BUILD_LVMPOLLD=no
LOCKDSANLOCK=no
LOCKDDLM=no
ODIRECT=yes
DM_IOCTLS=yes
SELINUX=yes
CLUSTER=internal
FSADM=yes
BLKDEACTIVATE=yes
;;
@@ -56,9 +59,11 @@ case "$host_os" in
CLDNOWHOLEARCHIVE=
LIB_SUFFIX=dylib
DEVMAPPER=yes
AIO=no
ODIRECT=no
DM_IOCTLS=no
SELINUX=no
CLUSTER=none
FSADM=no
BLKDEACTIVATE=no
;;
@@ -74,7 +79,6 @@ AC_PROG_CC
AC_PROG_CXX
CFLAGS=$save_CFLAGS
CXXFLAGS=$save_CXXFLAGS
PATH_SBIN="$PATH:/usr/sbin:/sbin"
dnl probably no longer needed in 2008, but...
AC_PROG_GCC_TRADITIONAL
@@ -100,7 +104,7 @@ AC_HEADER_SYS_WAIT
AC_HEADER_TIME
AC_CHECK_HEADERS([assert.h ctype.h dirent.h errno.h fcntl.h float.h \
getopt.h inttypes.h langinfo.h libaio.h libgen.h limits.h locale.h paths.h \
getopt.h inttypes.h langinfo.h libgen.h limits.h locale.h paths.h \
signal.h stdarg.h stddef.h stdio.h stdlib.h string.h sys/file.h \
sys/ioctl.h syslog.h sys/mman.h sys/param.h sys/resource.h sys/stat.h \
sys/time.h sys/types.h sys/utsname.h sys/wait.h time.h \
@@ -139,7 +143,6 @@ AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_UINT64_T
AX_GCC_BUILTIN([__builtin_clz])
AX_GCC_BUILTIN([__builtin_clzll])
################################################################################
dnl -- Check for functions
@@ -170,15 +173,6 @@ AC_ARG_ENABLE(dependency-tracking,
USE_TRACKING=$enableval, USE_TRACKING=yes)
AC_MSG_RESULT($USE_TRACKING)
################################################################################
dnl -- Disable silence rules
AC_MSG_CHECKING(whether to build silently)
AC_ARG_ENABLE(silent-rules,
AC_HELP_STRING([--disable-silent-rules], [disable silent building]),
SILENT_RULES=$enableval, SILENT_RULES=yes)
AC_MSG_RESULT($SILENT_RULES)
################################################################################
dnl -- Enables statically-linked tools
AC_MSG_CHECKING(whether to use static linking)
@@ -289,6 +283,74 @@ esac
AC_MSG_RESULT($MANGLING)
AC_DEFINE_UNQUOTED([DEFAULT_DM_NAME_MANGLING], $mangling, [Define default name mangling behaviour])
################################################################################
dnl -- LVM1 tool fallback option
AC_MSG_CHECKING(whether to enable lvm1 fallback)
AC_ARG_ENABLE(lvm1_fallback,
AC_HELP_STRING([--enable-lvm1_fallback],
[use this to fall back and use LVM1 binaries if
device-mapper is missing from the kernel]),
LVM1_FALLBACK=$enableval, LVM1_FALLBACK=no)
AC_MSG_RESULT($LVM1_FALLBACK)
if test "$LVM1_FALLBACK" = yes; then
DEFAULT_FALLBACK_TO_LVM1=1
AC_DEFINE([LVM1_FALLBACK], 1, [Define to 1 if 'lvm' should fall back to using LVM1 binaries if device-mapper is missing from the kernel])
else
DEFAULT_FALLBACK_TO_LVM1=0
fi
AC_DEFINE_UNQUOTED(DEFAULT_FALLBACK_TO_LVM1, [$DEFAULT_FALLBACK_TO_LVM1],
[Fall back to LVM1 by default if device-mapper is missing from the kernel.])
################################################################################
dnl -- format1 inclusion type
AC_MSG_CHECKING(whether to include support for lvm1 metadata)
AC_ARG_WITH(lvm1,
AC_HELP_STRING([--with-lvm1=TYPE],
[LVM1 metadata support: internal/shared/none [internal]]),
LVM1=$withval, LVM1=internal)
AC_MSG_RESULT($LVM1)
case "$LVM1" in
none|shared) ;;
internal) AC_DEFINE([LVM1_INTERNAL], 1,
[Define to 1 to include built-in support for LVM1 metadata.]) ;;
*) AC_MSG_ERROR([--with-lvm1 parameter invalid]) ;;
esac
################################################################################
dnl -- format_pool inclusion type
AC_MSG_CHECKING(whether to include support for GFS pool metadata)
AC_ARG_WITH(pool,
AC_HELP_STRING([--with-pool=TYPE],
[GFS pool read-only support: internal/shared/none [internal]]),
POOL=$withval, POOL=internal)
AC_MSG_RESULT($POOL)
case "$POOL" in
none|shared) ;;
internal) AC_DEFINE([POOL_INTERNAL], 1,
[Define to 1 to include built-in support for GFS pool metadata.]) ;;
*) AC_MSG_ERROR([--with-pool parameter invalid])
esac
################################################################################
dnl -- cluster_locking inclusion type
AC_MSG_CHECKING(whether to include support for cluster locking)
AC_ARG_WITH(cluster,
AC_HELP_STRING([--with-cluster=TYPE],
[cluster LVM locking support: internal/shared/none [internal]]),
CLUSTER=$withval)
AC_MSG_RESULT($CLUSTER)
case "$CLUSTER" in
none|shared) ;;
internal) AC_DEFINE([CLUSTER_LOCKING_INTERNAL], 1,
[Define to 1 to include built-in support for clustered LVM locking.]) ;;
*) AC_MSG_ERROR([--with-cluster parameter invalid]) ;;
esac
################################################################################
dnl -- snapshots inclusion type
AC_MSG_CHECKING(whether to include snapshots)
@@ -323,6 +385,13 @@ esac
################################################################################
dnl -- raid inclusion type
AC_MSG_CHECKING(whether to include raid)
AC_ARG_WITH(raid,
AC_HELP_STRING([--with-raid=TYPE],
[raid support: internal/shared/none [internal]]),
RAID=$withval, RAID=internal)
AC_MSG_RESULT($RAID)
AC_ARG_WITH(default-mirror-segtype,
AC_HELP_STRING([--with-default-mirror-segtype=TYPE],
[default mirror segtype: raid1/mirror [raid1]]),
@@ -331,9 +400,14 @@ AC_ARG_WITH(default-raid10-segtype,
AC_HELP_STRING([--with-default-raid10-segtype=TYPE],
[default mirror segtype: raid10/mirror [raid10]]),
DEFAULT_RAID10_SEGTYPE=$withval, DEFAULT_RAID10_SEGTYPE="raid10")
AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.])
case "$RAID" in
none) test "$DEFAULT_MIRROR_SEGTYPE" = "raid1" && DEFAULT_MIRROR_SEGTYPE="mirror"
test "$DEFAULT_RAID10_SEGTYPE" = "raid10" && DEFAULT_RAID10_SEGTYPE="mirror" ;;
shared) ;;
internal) AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.]) ;;
*) AC_MSG_ERROR([--with-raid parameter invalid]) ;;
esac
AC_DEFINE_UNQUOTED([DEFAULT_MIRROR_SEGTYPE], ["$DEFAULT_MIRROR_SEGTYPE"],
[Default segtype used for mirror volumes.])
@@ -396,7 +470,7 @@ case "$THIN" in
internal|shared)
# Empty means a config way to ignore thin checking
if test "$THIN_CHECK_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_CHECK_CMD, thin_check, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_CHECK_CMD, thin_check)
if test -z "$THIN_CHECK_CMD"; then
AC_MSG_WARN([thin_check not found in path $PATH])
THIN_CHECK_CMD=/usr/sbin/thin_check
@@ -420,7 +494,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin dumping
if test "$THIN_DUMP_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_DUMP_CMD, thin_dump, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_DUMP_CMD, thin_dump)
test -z "$THIN_DUMP_CMD" && {
AC_MSG_WARN(thin_dump not found in path $PATH)
THIN_DUMP_CMD=/usr/sbin/thin_dump
@@ -429,7 +503,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin repairing
if test "$THIN_REPAIR_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_REPAIR_CMD, thin_repair, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_REPAIR_CMD, thin_repair)
test -z "$THIN_REPAIR_CMD" && {
AC_MSG_WARN(thin_repair not found in path $PATH)
THIN_REPAIR_CMD=/usr/sbin/thin_repair
@@ -438,7 +512,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin restoring
if test "$THIN_RESTORE_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_RESTORE_CMD, thin_restore, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_RESTORE_CMD, thin_restore)
test -z "$THIN_RESTORE_CMD" && {
AC_MSG_WARN(thin_restore not found in path $PATH)
THIN_RESTORE_CMD=/usr/sbin/thin_restore
@@ -510,7 +584,7 @@ case "$CACHE" in
internal|shared)
# Empty means a config way to ignore cache checking
if test "$CACHE_CHECK_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_CHECK_CMD, cache_check, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_CHECK_CMD, cache_check)
if test -z "$CACHE_CHECK_CMD"; then
AC_MSG_WARN([cache_check not found in path $PATH])
CACHE_CHECK_CMD=/usr/sbin/cache_check
@@ -545,7 +619,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache dumping
if test "$CACHE_DUMP_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_DUMP_CMD, cache_dump, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_DUMP_CMD, cache_dump)
test -z "$CACHE_DUMP_CMD" && {
AC_MSG_WARN(cache_dump not found in path $PATH)
CACHE_DUMP_CMD=/usr/sbin/cache_dump
@@ -554,7 +628,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache repairing
if test "$CACHE_REPAIR_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_REPAIR_CMD, cache_repair, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_REPAIR_CMD, cache_repair)
test -z "$CACHE_REPAIR_CMD" && {
AC_MSG_WARN(cache_repair not found in path $PATH)
CACHE_REPAIR_CMD=/usr/sbin/cache_repair
@@ -563,7 +637,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache restoring
if test "$CACHE_RESTORE_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_RESTORE_CMD, cache_restore, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_RESTORE_CMD, cache_restore)
test -z "$CACHE_RESTORE_CMD" && {
AC_MSG_WARN(cache_restore not found in path $PATH)
CACHE_RESTORE_CMD=/usr/sbin/cache_restore
@@ -591,53 +665,6 @@ AC_DEFINE_UNQUOTED([CACHE_REPAIR_CMD], ["$CACHE_REPAIR_CMD"],
AC_DEFINE_UNQUOTED([CACHE_RESTORE_CMD], ["$CACHE_RESTORE_CMD"],
[The path to 'cache_restore', if available.])
################################################################################
dnl -- cache inclusion type
AC_MSG_CHECKING(whether to include vdo)
AC_ARG_WITH(vdo,
AC_HELP_STRING([--with-vdo=TYPE],
[vdo support: internal/none [internal]]),
VDO=$withval, VDO="none")
AC_MSG_RESULT($VDO)
AC_ARG_WITH(vdo-format,
AC_HELP_STRING([--with-vdo-format=PATH],
[vdoformat tool: [autodetect]]),
VDO_FORMAT_CMD=$withval, VDO_FORMAT_CMD="autodetect")
case "$VDO" in
none) ;;
internal)
AC_DEFINE([VDO_INTERNAL], 1, [Define to 1 to include built-in support for vdo.])
if test "$VDO_FORMAT_CMD" = "autodetect"; then
AC_PATH_TOOL(VDO_FORMAT_CMD, vdoformat, [], [$PATH])
if test -z "$VDO_FORMAT_CMD"; then
AC_MSG_WARN([vdoformat not found in path $PATH])
VDO_FORMAT_CMD=/usr/bin/vdoformat
VDO_CONFIGURE_WARN=y
fi
fi
;;
*) AC_MSG_ERROR([--with-vdo parameter invalid]) ;;
esac
AC_DEFINE_UNQUOTED([VDO_FORMAT_CMD], ["$VDO_FORMAT_CMD"],
[The path to 'vdoformat', if available.])
#
# Do we need to use the API??
# Do we want to link lvm2 with a big library for vdo formatting?
#
#AC_ARG_WITH(vdo-include,
# AC_HELP_STRING([--with-vdo-include=PATH],
# [vdo support: Path to utils headers: [/usr/include/vdo/utils]]),
# VDO_INCLUDE=$withval, VDO_INCLUDE="/usr/include/vdo/utils")
#AC_MSG_RESULT($VDO_INCLUDE)
#
#AC_ARG_WITH(vdo-lib,
# AC_HELP_STRING([--with-vdo-lib=PATH],
# [vdo support: Path to utils lib: [/usr/lib]]),
# VDO_LIB=$withval, VDO_LIB="/usr/lib")
#AC_MSG_RESULT($VDO_LIB)
################################################################################
dnl -- Disable readline
@@ -709,6 +736,241 @@ AC_ARG_WITH(default-run-dir,
AC_DEFINE_UNQUOTED(DEFAULT_RUN_DIR, ["$DEFAULT_RUN_DIR"],
[Default LVM run directory.])
################################################################################
dnl -- Build cluster LVM daemon
AC_MSG_CHECKING(whether to build cluster LVM daemon)
AC_ARG_WITH(clvmd,
[ --with-clvmd=TYPE build cluster LVM Daemon
The following cluster manager combinations are valid:
* cman (RHEL5 or equivalent)
* cman,corosync,openais (or selection of them)
* singlenode (localhost only)
* all (autodetect)
* none (disable build)
[[none]]],
CLVMD=$withval, CLVMD=none)
test "$CLVMD" = yes && CLVMD=all
AC_MSG_RESULT($CLVMD)
dnl -- If clvmd enabled without cluster locking, automagically include it
test "$CLVMD" != none -a "$CLUSTER" = none && CLUSTER=internal
dnl -- init pkgconfig if required
test "$CLVMD" != none && pkg_config_init
dnl -- Express clvmd init script Required-Start / Required-Stop
CLVMD_CMANAGERS=""
dnl -- On RHEL4/RHEL5, qdiskd is started from a separate init script.
dnl -- Enable it if we are building for cman.
CLVMD_NEEDS_QDISKD=no
dnl -- define build types
if [[ `expr x"$CLVMD" : '.*gulm.*'` != 0 ]]; then
AC_MSG_ERROR([Since version 2.02.87 GULM locking is no longer supported.]);
fi
if [[ `expr x"$CLVMD" : '.*cman.*'` != 0 ]]; then
BUILDCMAN=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS cman"
CLVMD_NEEDS_QDISKD=yes
fi
if [[ `expr x"$CLVMD" : '.*corosync.*'` != 0 ]]; then
BUILDCOROSYNC=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS corosync"
fi
if [[ `expr x"$CLVMD" : '.*openais.*'` != 0 ]]; then
BUILDOPENAIS=yes
CLVMD_CMANAGERS="$CLVMD_CMANAGERS openais"
fi
test "$CLVMD_NEEDS_QDISKD" != no && CLVMD_CMANAGERS="$CLVMD_CMANAGERS qdiskd"
dnl -- define a soft bailout if we are autodetecting
soft_bailout() {
NOTFOUND=1
}
hard_bailout() {
AC_MSG_ERROR([bailing out])
}
dnl -- if clvmd=all then set soft_bailout (we do not want to error)
dnl -- and set all builds to yes. We need to do this here
dnl -- to skip the openais|corosync sanity check above.
if test "$CLVMD" = all; then
bailout=soft_bailout
BUILDCMAN=yes
BUILDCOROSYNC=yes
BUILDOPENAIS=yes
else
bailout=hard_bailout
fi
dnl -- helper macro to check libs without adding them to LIBS
check_lib_no_libs() {
lib_no_libs_arg1=$1
shift
lib_no_libs_arg2=$1
shift
lib_no_libs_args=$@
AC_CHECK_LIB([$lib_no_libs_arg1],
[$lib_no_libs_arg2],,
[$bailout],
[$lib_no_libs_args])
LIBS=$ac_check_lib_save_LIBS
}
dnl -- Look for cman libraries if required.
if test "$BUILDCMAN" = yes; then
PKG_CHECK_MODULES(CMAN, libcman, [HAVE_CMAN=yes],
[NOTFOUND=0
AC_CHECK_HEADERS(libcman.h,,$bailout)
check_lib_no_libs cman cman_init
if test $NOTFOUND = 0; then
AC_MSG_RESULT([no pkg for libcman, using -lcman])
CMAN_LIBS="-lcman"
HAVE_CMAN=yes
fi])
CHECKCONFDB=yes
CHECKDLM=yes
fi
dnl -- Look for corosync that is required also for openais build
dnl -- only sufficiently recent versions of corosync ship pkg-config files.
dnl -- We can safely rely on that to detect the correct bits.
if test "$BUILDCOROSYNC" = yes -o "$BUILDOPENAIS" = yes; then
PKG_CHECK_MODULES(COROSYNC, corosync, [HAVE_COROSYNC=yes], $bailout)
CHECKCONFDB=yes
CHECKCMAP=yes
fi
dnl -- Look for corosync libraries if required.
if test "$BUILDCOROSYNC" = yes; then
PKG_CHECK_MODULES(QUORUM, libquorum, [HAVE_QUORUM=yes], $bailout)
CHECKCPG=yes
CHECKDLM=yes
fi
dnl -- Look for openais libraries if required.
if test "$BUILDOPENAIS" = yes; then
PKG_CHECK_MODULES(SALCK, libSaLck, [HAVE_SALCK=yes], $bailout)
CHECKCPG=yes
fi
dnl -- Below are checks for libraries common to more than one build.
dnl -- Check confdb library.
dnl -- mandatory for corosync < 2.0 build.
dnl -- optional for openais/cman build.
if test "$CHECKCONFDB" = yes; then
PKG_CHECK_MODULES(CONFDB, libconfdb,
[HAVE_CONFDB=yes], [HAVE_CONFDB=no])
AC_CHECK_HEADERS([corosync/confdb.h],
[HAVE_CONFDB_H=yes], [HAVE_CONFDB_H=no])
if test "$HAVE_CONFDB" != yes -a "$HAVE_CONFDB_H" = yes; then
check_lib_no_libs confdb confdb_initialize
AC_MSG_RESULT([no pkg for confdb, using -lconfdb])
CONFDB_LIBS="-lconfdb"
HAVE_CONFDB=yes
fi
fi
dnl -- Check cmap library
dnl -- mandatory for corosync >= 2.0 build.
if test "$CHECKCMAP" = yes; then
PKG_CHECK_MODULES(CMAP, libcmap,
[HAVE_CMAP=yes], [HAVE_CMAP=no])
AC_CHECK_HEADERS([corosync/cmap.h],
[HAVE_CMAP_H=yes], [HAVE_CMAP_H=no])
if test "$HAVE_CMAP" != yes -a "$HAVE_CMAP_H" = yes; then
check_lib_no_libs cmap cmap_initialize
AC_MSG_RESULT([no pkg for cmap, using -lcmap])
CMAP_LIBS="-lcmap"
HAVE_CMAP=yes
fi
fi
if test "$BUILDCOROSYNC" = yes -a \
"$HAVE_CMAP" != yes -a "$HAVE_CONFDB" != yes -a "$CLVMD" != all; then
AC_MSG_ERROR([bailing out... cmap (corosync >= 2.0) or confdb (corosync < 2.0) library is required])
fi
dnl -- Check cpg library.
if test "$CHECKCPG" = yes; then
PKG_CHECK_MODULES(CPG, libcpg, [HAVE_CPG=yes], [$bailout])
fi
dnl -- Check dlm library.
if test "$CHECKDLM" = yes; then
PKG_CHECK_MODULES(DLM, libdlm, [HAVE_DLM=yes],
[NOTFOUND=0
AC_CHECK_HEADERS(libdlm.h,,[$bailout])
check_lib_no_libs dlm dlm_lock -lpthread
if test $NOTFOUND = 0; then
AC_MSG_RESULT([no pkg for libdlm, using -ldlm])
DLM_LIBS="-ldlm -lpthread"
HAVE_DLM=yes
fi])
fi
dnl -- If we are autodetecting, we need to re-create
dnl -- the dependency checks and set a proper CLVMD,
dnl -- together with init script Required-Start/Stop entries.
if test "$CLVMD" = all; then
CLVMD=none
CLVMD_CMANAGERS=""
CLVMD_NEEDS_QDISKD=no
if test "$HAVE_CMAN" = yes -a \
"$HAVE_DLM" = yes; then
AC_MSG_RESULT([Enabling clvmd cman cluster manager])
CLVMD="$CLVMD,cman"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS cman"
CLVMD_NEEDS_QDISKD=yes
fi
if test "$HAVE_COROSYNC" = yes -a \
"$HAVE_QUORUM" = yes -a \
"$HAVE_CPG" = yes -a \
"$HAVE_DLM" = yes; then
if test "$HAVE_CONFDB" = yes -o "$HAVE_CMAP" = yes; then
AC_MSG_RESULT([Enabling clvmd corosync cluster manager])
CLVMD="$CLVMD,corosync"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS corosync"
fi
fi
if test "$HAVE_COROSYNC" = yes -a \
"$HAVE_CPG" = yes -a \
"$HAVE_SALCK" = yes; then
AC_MSG_RESULT([Enabling clvmd openais cluster manager])
CLVMD="$CLVMD,openais"
CLVMD_CMANAGERS="$CLVMD_CMANAGERS openais"
fi
test "$CLVMD_NEEDS_QDISKD" != no && CLVMD_CMANAGERS="$CLVMD_CMANAGERS qdiskd"
test "$CLVMD" = none && AC_MSG_RESULT([Disabling clvmd build. No cluster manager detected.])
fi
dnl -- Fixup CLVMD_CMANAGERS with new corosync
dnl -- clvmd built with corosync >= 2.0 needs dlm (either init or systemd service)
dnl -- to be started.
if [[ `expr x"$CLVMD" : '.*corosync.*'` != 0 ]]; then
test "$HAVE_CMAP" = yes && CLVMD_CMANAGERS="$CLVMD_CMANAGERS dlm"
fi
################################################################################
dnl -- clvmd pidfile
if test "$CLVMD" != none; then
AC_ARG_WITH(clvmd-pidfile,
AC_HELP_STRING([--with-clvmd-pidfile=PATH],
[clvmd pidfile [PID_DIR/clvmd.pid]]),
CLVMD_PIDFILE=$withval,
CLVMD_PIDFILE="$DEFAULT_PID_DIR/clvmd.pid")
AC_DEFINE_UNQUOTED(CLVMD_PIDFILE, ["$CLVMD_PIDFILE"],
[Path to clvmd pidfile.])
fi
################################################################################
dnl -- Build cluster mirror log daemon
AC_MSG_CHECKING(whether to build cluster mirror log daemon)
@@ -737,6 +999,11 @@ dnl -- Look for corosync libraries if required.
if [[ "$BUILD_CMIRRORD" = yes ]]; then
pkg_config_init
AC_DEFINE([CMIRROR_HAS_CHECKPOINT], 1, [Define to 1 to include libSaCkpt.])
PKG_CHECK_MODULES(SACKPT, libSaCkpt, [HAVE_SACKPT=yes],
[AC_MSG_RESULT([no libSaCkpt, compiling without it])
AC_DEFINE([CMIRROR_HAS_CHECKPOINT], 0, [Define to 0 to exclude libSaCkpt.])])
if test "$HAVE_CPG" != yes; then
PKG_CHECK_MODULES(CPG, libcpg)
fi
@@ -801,6 +1068,20 @@ if test "$PROFILING" = yes; then
fi
fi
################################################################################
dnl -- Enable testing
AC_MSG_CHECKING(whether to enable unit testing)
AC_ARG_ENABLE(testing,
AC_HELP_STRING([--enable-testing],
[enable testing targets in the makefile]),
TESTING=$enableval, TESTING=no)
AC_MSG_RESULT($TESTING)
if test "$TESTING" = yes; then
pkg_config_init
PKG_CHECK_MODULES(CUNIT, cunit >= 2.0)
fi
################################################################################
dnl -- Set LVM2 testsuite data
TESTSUITE_DATA='${datarootdir}/lvm2-testsuite'
@@ -842,6 +1123,34 @@ if test "$DEVMAPPER" = yes; then
AC_DEFINE([DEVMAPPER_SUPPORT], 1, [Define to 1 to enable LVM2 device-mapper interaction.])
fi
################################################################################
dnl -- Disable aio
AC_MSG_CHECKING(whether to use aio)
AC_ARG_ENABLE(aio,
AC_HELP_STRING([--disable-aio],
[disable async i/o]),
AIO=$enableval)
AC_MSG_RESULT($AIO)
if test "$AIO" = yes; then
AC_CHECK_LIB(aio, io_setup,
[AC_DEFINE([AIO_SUPPORT], 1, [Define to 1 if aio is available.])
AIO_LIBS="-laio"
AIO_SUPPORT=yes],
[AIO_LIBS=
AIO_SUPPORT=no ])
fi
################################################################################
dnl -- Build lvmetad
AC_MSG_CHECKING(whether to build LVMetaD)
AC_ARG_ENABLE(lvmetad,
AC_HELP_STRING([--enable-lvmetad],
[enable the LVM Metadata Daemon]),
LVMETAD=$enableval)
test -n "$LVMETAD" && BUILD_LVMETAD=$LVMETAD
AC_MSG_RESULT($BUILD_LVMETAD)
################################################################################
dnl -- Build lvmpolld
AC_MSG_CHECKING(whether to build lvmpolld)
@@ -897,7 +1206,9 @@ AC_MSG_RESULT($BUILD_LVMLOCKD)
if test "$BUILD_LVMLOCKD" = yes; then
AS_IF([test "$LVMPOLLD" = no], [AC_MSG_ERROR([cannot build lvmlockd with --disable-lvmpolld.])])
AS_IF([test "$LVMETAD" = no], [AC_MSG_ERROR([cannot build lvmlockd with --disable-lvmetad.])])
AS_IF([test "$BUILD_LVMPOLLD" = no], [BUILD_LVMPOLLD=yes; AC_MSG_WARN([Enabling lvmpolld - required by lvmlockd.])])
AS_IF([test "$BUILD_LVMETAD" = no], [BUILD_LVMETAD=yes; AC_MSG_WARN([Enabling lvmetad - required by lvmlockd.])])
AC_MSG_CHECKING([defaults for use_lvmlockd])
AC_ARG_ENABLE(use_lvmlockd,
AC_HELP_STRING([--disable-use-lvmlockd],
@@ -922,6 +1233,33 @@ fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMLOCKD, [$DEFAULT_USE_LVMLOCKD],
[Use lvmlockd by default.])
################################################################################
dnl -- Check lvmetad
if test "$BUILD_LVMETAD" = yes; then
AC_MSG_CHECKING([defaults for use_lvmetad])
AC_ARG_ENABLE(use_lvmetad,
AC_HELP_STRING([--disable-use-lvmetad],
[disable usage of LVM Metadata Daemon]),
[case ${enableval} in
yes) DEFAULT_USE_LVMETAD=1 ;;
*) DEFAULT_USE_LVMETAD=0 ;;
esac], DEFAULT_USE_LVMETAD=1)
AC_MSG_RESULT($DEFAULT_USE_LVMETAD)
AC_DEFINE([LVMETAD_SUPPORT], 1, [Define to 1 to include code that uses lvmetad.])
AC_ARG_WITH(lvmetad-pidfile,
AC_HELP_STRING([--with-lvmetad-pidfile=PATH],
[lvmetad pidfile [PID_DIR/lvmetad.pid]]),
LVMETAD_PIDFILE=$withval,
LVMETAD_PIDFILE="$DEFAULT_PID_DIR/lvmetad.pid")
AC_DEFINE_UNQUOTED(LVMETAD_PIDFILE, ["$LVMETAD_PIDFILE"],
[Path to lvmetad pidfile.])
else
DEFAULT_USE_LVMETAD=0
fi
AC_DEFINE_UNQUOTED(DEFAULT_USE_LVMETAD, [$DEFAULT_USE_LVMETAD],
[Use lvmetad by default.])
################################################################################
dnl -- Check lvmpolld
if test "$BUILD_LVMPOLLD" = yes; then
@@ -1125,6 +1463,20 @@ if test "$ODIRECT" = yes; then
AC_DEFINE([O_DIRECT_SUPPORT], 1, [Define to 1 to enable O_DIRECT support.])
fi
################################################################################
dnl -- Enable liblvm2app.so
AC_MSG_CHECKING(whether to build liblvm2app.so application library)
AC_ARG_ENABLE(applib,
AC_HELP_STRING([--enable-applib], [build application library]),
APPLIB=$enableval, APPLIB=no)
AC_MSG_RESULT($APPLIB)
AC_SUBST([LVM2APP_LIB])
test "$APPLIB" = yes \
&& LVM2APP_LIB=-llvm2app \
|| LVM2APP_LIB=
AS_IF([test "$APPLIB" = yes],
[AC_MSG_WARN([Python bindings are deprecated. Use D-Bus API])])
################################################################################
dnl -- Enable cmdlib
AC_MSG_CHECKING(whether to compile liblvm2cmd.so)
@@ -1148,9 +1500,44 @@ AS_IF([test "$NOTIFYDBUS_SUPPORT" = yes && test "BUILD_LVMDBUSD" = yes],
[AC_MSG_WARN([Building D-Bus support without D-Bus notifications.])])
################################################################################
dnl -- Enable Python dbus library
dnl -- Enable Python liblvm2app bindings
AC_MSG_CHECKING(whether to build Python wrapper for liblvm2app.so)
AC_ARG_ENABLE(python_bindings,
AC_HELP_STRING([--enable-python_bindings], [build default Python applib bindings]),
PYTHON_BINDINGS=$enableval, PYTHON_BINDINGS=no)
AC_MSG_RESULT($PYTHON_BINDINGS)
if test "$BUILD_LVMDBUSD" = yes; then
AC_MSG_CHECKING(whether to build Python2 wrapper for liblvm2app.so)
AC_ARG_ENABLE(python2_bindings,
AC_HELP_STRING([--enable-python2_bindings], [build Python2 applib bindings]),
PYTHON2_BINDINGS=$enableval, PYTHON2_BINDINGS=no)
AC_MSG_RESULT($PYTHON2_BINDINGS)
AC_MSG_CHECKING(whether to build Python3 wrapper for liblvm2app.so)
AC_ARG_ENABLE(python3_bindings,
AC_HELP_STRING([--enable-python3_bindings], [build Python3 applib bindings]),
PYTHON3_BINDINGS=$enableval, PYTHON3_BINDINGS=no)
AC_MSG_RESULT($PYTHON3_BINDINGS)
if test "$PYTHON_BINDINGS" = yes; then
AC_MSG_ERROR([--enable-python-bindings is replaced by --enable-python2-bindings and --enable-python3-bindings])
fi
if test "$PYTHON2_BINDINGS" = yes; then
AM_PATH_PYTHON([2])
AC_PATH_TOOL(PYTHON2, python2)
test -z "$PYTHON2" && AC_MSG_ERROR([python2 is required for --enable-python2_bindings but cannot be found])
AC_PATH_TOOL(PYTHON2_CONFIG, python2-config)
test -z "$PYTHON2_CONFIG" && AC_PATH_TOOL(PYTHON2_CONFIG, python-config)
test -z "$PYTHON2_CONFIG" && AC_MSG_ERROR([python headers are required for --enable-python2_bindings but cannot be found])
PYTHON2_INCDIRS=`"$PYTHON2_CONFIG" --includes`
PYTHON2_LIBDIRS=`"$PYTHON2_CONFIG" --libs`
PYTHON2DIR=$pythondir
PYTHON_BINDINGS=yes
fi
if test "$PYTHON3_BINDINGS" = yes -o "$BUILD_LVMDBUSD" = yes; then
unset PYTHON PYTHON_CONFIG
unset am_cv_pathless_PYTHON ac_cv_path_PYTHON am_cv_python_platform
unset am_cv_python_pythondir am_cv_python_version am_cv_python_pyexecdir
@@ -1163,13 +1550,20 @@ if test "$BUILD_LVMDBUSD" = yes; then
PYTHON3_INCDIRS=`"$PYTHON3_CONFIG" --includes`
PYTHON3_LIBDIRS=`"$PYTHON3_CONFIG" --libs`
PYTHON3DIR=$pythondir
test "$PYTHON3_BINDINGS" = yes && PYTHON_BINDINGS=yes
PYTHON_BINDINGS=yes
fi
if test "$BUILD_LVMDBUSD" = yes; then
# To get this macro, install autoconf-archive package then run autoreconf
AC_PYTHON_MODULE([pyudev], [Required], python3)
AC_PYTHON_MODULE([dbus], [Required], python3)
fi
if test "$PYTHON_BINDINGS" = yes -o "$PYTHON2_BINDINGS" = yes -o "$PYTHON3_BINDINGS" = yes; then
AC_MSG_WARN([Python bindings are deprecated. Use D-Bus API])
test "$APPLIB" != yes && AC_MSG_ERROR([Python_bindings require --enable-applib])
fi
################################################################################
dnl -- Enable pkg-config
AC_ARG_ENABLE(pkgconfig,
@@ -1241,7 +1635,9 @@ AC_CHECK_LIB(dl, dlopen,
################################################################################
dnl -- Check for shared/static conflicts
if [[ \( "$LVM1" = shared -o "$POOL" = shared \
if [[ \( "$LVM1" = shared -o "$POOL" = shared -o "$CLUSTER" = shared \
-o "$SNAPSHOTS" = shared -o "$MIRRORS" = shared \
-o "$RAID" = shared -o "$CACHE" = shared \
\) -a "$STATIC_LINK" = yes ]]; then
AC_MSG_ERROR([Features cannot be 'shared' when building statically])
fi
@@ -1461,6 +1857,18 @@ if test "$BUILD_LVMPOLLD" = yes; then
AC_FUNC_STRERROR_R
fi
if test "$CLVMD" != none; then
AC_CHECK_HEADERS(mntent.h netdb.h netinet/in.h pthread.h search.h sys/mount.h sys/socket.h sys/uio.h sys/un.h utmpx.h,,AC_MSG_ERROR(bailing out))
AC_CHECK_FUNCS(dup2 getmntent memmove select socket,,hard_bailout)
AC_FUNC_GETMNTENT
AC_FUNC_SELECT_ARGTYPES
fi
if test "$CLUSTER" != none; then
AC_CHECK_HEADERS(sys/socket.h sys/un.h,,hard_bailout)
AC_CHECK_FUNCS(socket,,hard_bailout)
fi
if test "$BUILD_DMEVENTD" = yes; then
AC_CHECK_HEADERS(arpa/inet.h,,hard_bailout)
fi
@@ -1482,7 +1890,7 @@ if test "$BUILD_DMFILEMAPD" = yes; then
fi
################################################################################
AC_PATH_TOOL(MODPROBE_CMD, modprobe, [], [$PATH_SBIN])
AC_PATH_TOOL(MODPROBE_CMD, modprobe)
if test -n "$MODPROBE_CMD"; then
AC_DEFINE_UNQUOTED([MODPROBE_CMD], ["$MODPROBE_CMD"], [The path to 'modprobe', if available.])
@@ -1494,10 +1902,9 @@ SBINDIR="$(eval echo $(eval echo $sbindir))"
LVM_PATH="$SBINDIR/lvm"
AC_DEFINE_UNQUOTED(LVM_PATH, ["$LVM_PATH"], [Path to lvm binary.])
LVMCONFIG_PATH="$SBINDIR/lvmconfig"
AC_DEFINE_UNQUOTED(LVMCONFIG_PATH, ["$LVMCONFIG_PATH"], [Path to lvmconfig binary.])
USRSBINDIR="$(eval echo $(eval echo $usrsbindir))"
CLVMD_PATH="$USRSBINDIR/clvmd"
AC_DEFINE_UNQUOTED(CLVMD_PATH, ["$CLVMD_PATH"], [Path to clvmd binary.])
FSADM_PATH="$SBINDIR/fsadm"
AC_DEFINE_UNQUOTED(FSADM_PATH, ["$FSADM_PATH"], [Path to fsadm binary.])
@@ -1617,11 +2024,13 @@ LVM_LIBAPI=`echo "$VER" | $AWK -F '[[()]]' '{print $2}'`
AC_DEFINE_UNQUOTED(LVM_CONFIGURE_LINE, "$CONFIGURE_LINE", [configure command line used])
################################################################################
AC_SUBST(APPLIB)
AC_SUBST(AWK)
AC_SUBST(BLKID_PC)
AC_SUBST(BUILD_CMIRRORD)
AC_SUBST(BUILD_DMEVENTD)
AC_SUBST(BUILD_LVMDBUSD)
AC_SUBST(BUILD_LVMETAD)
AC_SUBST(BUILD_LVMPOLLD)
AC_SUBST(BUILD_LVMLOCKD)
AC_SUBST(BUILD_LOCKDSANLOCK)
@@ -1634,6 +2043,14 @@ AC_SUBST(CHMOD)
AC_SUBST(CLDFLAGS)
AC_SUBST(CLDNOWHOLEARCHIVE)
AC_SUBST(CLDWHOLEARCHIVE)
AC_SUBST(CLUSTER)
AC_SUBST(CLVMD)
AC_SUBST(CLVMD_CMANAGERS)
AC_SUBST(CLVMD_PATH)
AC_SUBST(CMAN_CFLAGS)
AC_SUBST(CMAN_LIBS)
AC_SUBST(CMAP_CFLAGS)
AC_SUBST(CMAP_LIBS)
AC_SUBST(CMDLIB)
AC_SUBST(CONFDB_CFLAGS)
AC_SUBST(CONFDB_LIBS)
@@ -1649,6 +2066,7 @@ AC_SUBST(DEFAULT_CACHE_SUBDIR)
AC_SUBST(DEFAULT_DATA_ALIGNMENT)
AC_SUBST(DEFAULT_DM_RUN_DIR)
AC_SUBST(DEFAULT_LOCK_DIR)
AC_SUBST(DEFAULT_FALLBACK_TO_LVM1)
AC_SUBST(DEFAULT_MIRROR_SEGTYPE)
AC_SUBST(DEFAULT_PID_DIR)
AC_SUBST(DEFAULT_PROFILE_SUBDIR)
@@ -1658,12 +2076,15 @@ AC_SUBST(DEFAULT_SPARSE_SEGTYPE)
AC_SUBST(DEFAULT_SYS_DIR)
AC_SUBST(DEFAULT_SYS_LOCK_DIR)
AC_SUBST(DEFAULT_USE_BLKID_WIPING)
AC_SUBST(DEFAULT_USE_LVMETAD)
AC_SUBST(DEFAULT_USE_LVMPOLLD)
AC_SUBST(DEFAULT_USE_LVMLOCKD)
AC_SUBST(DEVMAPPER)
AC_SUBST(AIO)
AC_SUBST(DLM_CFLAGS)
AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(AIO_LIBS)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
@@ -1678,6 +2099,8 @@ AC_SUBST(JOBS)
AC_SUBST(LDDEPS)
AC_SUBST(LIBS)
AC_SUBST(LIB_SUFFIX)
AC_SUBST(LVM1)
AC_SUBST(LVM1_FALLBACK)
AC_SUBST(LVM_VERSION)
AC_SUBST(LVM_LIBAPI)
AC_SUBST(LVM_MAJOR)
@@ -1694,6 +2117,7 @@ AC_SUBST(OCF)
AC_SUBST(OCFDIR)
AC_SUBST(ODIRECT)
AC_SUBST(PKGCONFIG)
AC_SUBST(POOL)
AC_SUBST(M_LIBS)
AC_SUBST(PTHREAD_LIBS)
AC_SUBST(PYTHON2)
@@ -1709,6 +2133,7 @@ AC_SUBST(PYTHON2DIR)
AC_SUBST(PYTHON3DIR)
AC_SUBST(QUORUM_CFLAGS)
AC_SUBST(QUORUM_LIBS)
AC_SUBST(RAID)
AC_SUBST(RT_LIBS)
AC_SUBST(READLINE_LIBS)
AC_SUBST(REPLICATORS)
@@ -1724,6 +2149,7 @@ AC_SUBST(SYSTEMD_LIBS)
AC_SUBST(SNAPSHOTS)
AC_SUBST(STATICDIR)
AC_SUBST(STATIC_LINK)
AC_SUBST(TESTING)
AC_SUBST(TESTSUITE_DATA)
AC_SUBST(THIN)
AC_SUBST(THIN_CHECK_CMD)
@@ -1741,17 +2167,14 @@ AC_SUBST(UDEV_SYSTEMD_BACKGROUND_JOBS)
AC_SUBST(UDEV_RULE_EXEC_DETECTION)
AC_SUBST(UDEV_HAS_BUILTIN_BLKID)
AC_SUBST(USE_TRACKING)
AC_SUBST(SILENT_RULES)
AC_SUBST(USRSBINDIR)
AC_SUBST(VALGRIND_POOL)
AC_SUBST(VDO)
AC_SUBST(VDO_FORMAT_CMD)
AC_SUBST(VDO_INCLUDE)
AC_SUBST(VDO_LIB)
AC_SUBST(WRITE_INSTALL)
AC_SUBST(DMEVENTD_PIDFILE)
AC_SUBST(LVMETAD_PIDFILE)
AC_SUBST(LVMPOLLD_PIDFILE)
AC_SUBST(LVMLOCKD_PIDFILE)
AC_SUBST(CLVMD_PIDFILE)
AC_SUBST(CMIRRORD_PIDFILE)
AC_SUBST(interface)
AC_SUBST(kerneldir)
@@ -1772,8 +2195,8 @@ dnl -- keep utility scripts running properly
AC_CONFIG_FILES([
Makefile
make.tmpl
libdm/make.tmpl
daemons/Makefile
daemons/clvmd/Makefile
daemons/cmirrord/Makefile
daemons/dmeventd/Makefile
daemons/dmeventd/libdevmapper-event.pc
@@ -1783,12 +2206,10 @@ daemons/dmeventd/plugins/raid/Makefile
daemons/dmeventd/plugins/mirror/Makefile
daemons/dmeventd/plugins/snapshot/Makefile
daemons/dmeventd/plugins/thin/Makefile
daemons/dmeventd/plugins/vdo/Makefile
daemons/dmfilemapd/Makefile
daemons/lvmdbusd/Makefile
daemons/lvmdbusd/lvmdbusd
daemons/lvmdbusd/lvmdb.py
daemons/lvmdbusd/lvm_shell_proxy.py
daemons/lvmdbusd/path.py
daemons/lvmetad/Makefile
daemons/lvmpolld/Makefile
daemons/lvmlockd/Makefile
conf/Makefile
@@ -1796,30 +2217,50 @@ conf/example.conf
conf/lvmlocal.conf
conf/command_profile_template.profile
conf/metadata_profile_template.profile
include/.symlinks
include/Makefile
lib/Makefile
lib/format1/Makefile
lib/format_pool/Makefile
lib/locking/Makefile
lib/mirror/Makefile
include/lvm-version.h
lib/raid/Makefile
lib/snapshot/Makefile
lib/thin/Makefile
lib/cache_segtype/Makefile
libdaemon/Makefile
libdaemon/client/Makefile
libdaemon/server/Makefile
libdm/Makefile
libdm/dm-tools/Makefile
libdm/libdevmapper.pc
liblvm/Makefile
liblvm/liblvm2app.pc
man/Makefile
po/Makefile
python/Makefile
python/setup.py
scripts/blkdeactivate.sh
scripts/blk_availability_init_red_hat
scripts/blk_availability_systemd_red_hat.service
scripts/clvmd_init_red_hat
scripts/cmirrord_init_red_hat
scripts/com.redhat.lvmdbus1.service
scripts/dm_event_systemd_red_hat.service
scripts/dm_event_systemd_red_hat.socket
scripts/lvm2_cluster_activation_red_hat.sh
scripts/lvm2_cluster_activation_systemd_red_hat.service
scripts/lvm2_clvmd_systemd_red_hat.service
scripts/lvm2_cmirrord_systemd_red_hat.service
scripts/lvm2_lvmdbusd_systemd_red_hat.service
scripts/lvm2_lvmetad_init_red_hat
scripts/lvm2_lvmetad_systemd_red_hat.service
scripts/lvm2_lvmetad_systemd_red_hat.socket
scripts/lvm2_lvmpolld_init_red_hat
scripts/lvm2_lvmpolld_systemd_red_hat.service
scripts/lvm2_lvmpolld_systemd_red_hat.socket
scripts/lvm2_lvmlockd_systemd_red_hat.service
scripts/lvm2_lvmlocking_systemd_red_hat.service
scripts/lvm2_monitoring_init_red_hat
scripts/lvm2_monitoring_systemd_red_hat.service
scripts/lvm2_pvscan_systemd_red_hat@.service
@@ -1827,8 +2268,13 @@ scripts/lvm2_tmpfiles_red_hat.conf
scripts/lvmdump.sh
scripts/Makefile
test/Makefile
test/api/Makefile
test/unit/Makefile
tools/Makefile
udev/Makefile
unit-tests/datastruct/Makefile
unit-tests/regex/Makefile
unit-tests/mm/Makefile
])
AC_OUTPUT
@@ -1844,9 +2290,6 @@ AS_IF([test -n "$CACHE_CONFIGURE_WARN"],
AS_IF([test -n "$CACHE_CHECK_VERSION_WARN"],
[AC_MSG_WARN([You should install latest cache_check vsn 0.7.0 to use lvm2 cache metadata format 2])])
AS_IF([test -n "$VDO_CONFIGURE_WARN"],
[AC_MSG_WARN([unrecognized 'vdoformat' tool is REQUIRED for VDO logical volume creation!])])
AS_IF([test "$ODIRECT" != yes],
[AC_MSG_WARN([O_DIRECT disabled: low-memory pvmove may lock up])])


@@ -15,7 +15,11 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
.PHONY: dmeventd cmirrord lvmpolld lvmlockd
.PHONY: dmeventd clvmd cmirrord lvmetad lvmpolld lvmlockd
ifneq ("@CLVMD@", "none")
SUBDIRS += clvmd
endif
ifeq ("@BUILD_CMIRRORD@", "yes")
SUBDIRS += cmirrord
@@ -28,6 +32,10 @@ daemons.cflow: dmeventd.cflow
endif
endif
ifeq ("@BUILD_LVMETAD@", "yes")
SUBDIRS += lvmetad
endif
ifeq ("@BUILD_LVMPOLLD@", "yes")
SUBDIRS += lvmpolld
endif
@@ -40,8 +48,12 @@ ifeq ("@BUILD_LVMDBUSD@", "yes")
SUBDIRS += lvmdbusd
endif
ifeq ("@BUILD_DMFILEMAPD@", "yes")
SUBDIRS += dmfilemapd
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = cmirrord dmeventd lvmpolld lvmlockd lvmdbusd
SUBDIRS = clvmd cmirrord dmeventd lvmetad lvmpolld lvmlockd lvmdbusd dmfilemapd
endif
include $(top_builddir)/make.tmpl

daemons/clvmd/.gitignore vendored Normal file

@@ -0,0 +1 @@
clvmd

daemons/clvmd/Makefile.in Normal file

@@ -0,0 +1,98 @@
#
# Copyright (C) 2004 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
CMAN_LIBS = @CMAN_LIBS@
CMAN_CFLAGS = @CMAN_CFLAGS@
CMAP_LIBS = @CMAP_LIBS@
CMAP_CFLAGS = @CMAP_CFLAGS@
CONFDB_LIBS = @CONFDB_LIBS@
CONFDB_CFLAGS = @CONFDB_CFLAGS@
CPG_LIBS = @CPG_LIBS@
CPG_CFLAGS = @CPG_CFLAGS@
DLM_LIBS = @DLM_LIBS@
DLM_CFLAGS = @DLM_CFLAGS@
QUORUM_LIBS = @QUORUM_LIBS@
QUORUM_CFLAGS = @QUORUM_CFLAGS@
SALCK_LIBS = @SALCK_LIBS@
SALCK_CFLAGS = @SALCK_CFLAGS@
SOURCES = \
clvmd-command.c\
clvmd.c\
lvm-functions.c\
refresh_clvmd.c
ifneq (,$(findstring cman,, "@CLVMD@,"))
SOURCES += clvmd-cman.c
LMLIBS += $(CMAN_LIBS) $(CONFDB_LIBS) $(DLM_LIBS)
CFLAGS += $(CMAN_CFLAGS) $(CONFDB_CFLAGS) $(DLM_CFLAGS)
DEFS += -DUSE_CMAN
endif
ifneq (,$(findstring openais,, "@CLVMD@,"))
SOURCES += clvmd-openais.c
LMLIBS += $(CONFDB_LIBS) $(CPG_LIBS) $(SALCK_LIBS)
CFLAGS += $(CONFDB_CFLAGS) $(CPG_CFLAGS) $(SALCK_CFLAGS)
DEFS += -DUSE_OPENAIS
endif
ifneq (,$(findstring corosync,, "@CLVMD@,"))
SOURCES += clvmd-corosync.c
LMLIBS += $(CMAP_LIBS) $(CONFDB_LIBS) $(CPG_LIBS) $(DLM_LIBS) $(QUORUM_LIBS)
CFLAGS += $(CMAP_CFLAGS) $(CONFDB_CFLAGS) $(CPG_CFLAGS) $(DLM_CFLAGS) $(QUORUM_CFLAGS)
DEFS += -DUSE_COROSYNC
endif
ifneq (,$(findstring singlenode,, "@CLVMD@,"))
SOURCES += clvmd-singlenode.c
DEFS += -DUSE_SINGLENODE
endif
ifeq ($(MAKECMDGOALS),distclean)
SOURCES += clvmd-cman.c
SOURCES += clvmd-openais.c
SOURCES += clvmd-corosync.c
SOURCES += clvmd-singlenode.c
endif
TARGETS = \
clvmd
include $(top_builddir)/make.tmpl
LIBS += $(LVMINTERNAL_LIBS) -ldevmapper $(PTHREAD_LIBS)
CFLAGS += -fno-strict-aliasing $(EXTRA_EXEC_CFLAGS)
ifeq ("@AIO@", "yes")
LIBS += $(AIO_LIBS)
endif
INSTALL_TARGETS = \
install_clvmd
clvmd: $(OBJECTS) $(top_builddir)/lib/liblvm-internal.a
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) \
-o clvmd $(OBJECTS) $(LMLIBS) $(LIBS)
.PHONY: install_clvmd
install_clvmd: $(TARGETS)
$(INSTALL_PROGRAM) -D clvmd $(usrsbindir)/clvmd
install: $(INSTALL_TARGETS)
install_cluster: $(INSTALL_TARGETS)

daemons/clvmd/clvm.h Normal file

@@ -0,0 +1,85 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* Definitions for CLVMD server and clients */
/*
* The protocol spoken over the cluster and across the local socket.
*/
#ifndef _CLVM_H
#define _CLVM_H
#include "configure.h"
#include <inttypes.h>
struct clvm_header {
uint8_t cmd; /* See below */
uint8_t flags; /* See below */
uint16_t xid; /* Transaction ID */
uint32_t clientid; /* Only used in Daemon->Daemon comms */
int32_t status; /* For replies, whether request succeeded */
uint32_t arglen; /* Length of argument below.
If >1500 then it will be passed
around the cluster in the system LV */
char node[1]; /* Actually a NUL-terminated string, node name.
If this is empty then the command is
forwarded to all cluster nodes unless
FLAG_LOCAL or FLAG_REMOTE is also set. */
char args[1]; /* Arguments for the command follow the
node name, This member is only
valid if the node name is empty */
} __attribute__ ((packed));
/* Flags */
#define CLVMD_FLAG_LOCAL 1 /* Only do this on the local node */
#define CLVMD_FLAG_SYSTEMLV 2 /* Data in system LV under my node name */
#define CLVMD_FLAG_NODEERRS 4 /* Reply has errors in node-specific portion */
#define CLVMD_FLAG_REMOTE 8 /* Do this on all nodes except for the local node */
/* Name of the local socket to communicate between lvm and clvmd */
#define CLVMD_SOCKNAME DEFAULT_RUN_DIR "/clvmd.sock"
/* Internal commands & replies */
#define CLVMD_CMD_REPLY 1
#define CLVMD_CMD_VERSION 2 /* Send version around cluster when we start */
#define CLVMD_CMD_GOAWAY 3 /* Die if received this - we are running
an incompatible version */
#define CLVMD_CMD_TEST 4 /* Just for mucking about */
#define CLVMD_CMD_LOCK 30
#define CLVMD_CMD_UNLOCK 31
/* Lock/Unlock commands */
#define CLVMD_CMD_LOCK_LV 50
#define CLVMD_CMD_LOCK_VG 51
#define CLVMD_CMD_LOCK_QUERY 52
/* Misc functions */
#define CLVMD_CMD_REFRESH 40
#define CLVMD_CMD_GET_CLUSTERNAME 41
#define CLVMD_CMD_SET_DEBUG 42
#define CLVMD_CMD_VG_BACKUP 43
#define CLVMD_CMD_RESTART 44
#define CLVMD_CMD_SYNC_NAMES 45
/* Used internally by some callers, but not part of the protocol.*/
#ifndef NODE_ALL
# define NODE_ALL "*"
# define NODE_LOCAL "."
# define NODE_REMOTE "^"
#endif
#endif
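The clvm_header layout above fully determines how a request is framed: the header, a NUL-terminated node name (empty to address the whole cluster unless FLAG_LOCAL or FLAG_REMOTE says otherwise), then arglen bytes of arguments immediately after the node name. A minimal client-side sketch follows; it is illustrative only and not part of this patch. The SOCK_STREAM local socket, the zeroed xid/clientid fields and the send_local_test() name are assumptions, while the struct, flag and command constants come straight from clvm.h above.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#include "clvm.h"	/* struct clvm_header, CLVMD_SOCKNAME, command codes */

/* Frame a CLVMD_CMD_TEST request and push it down the local socket. */
static int send_local_test(const char *text)
{
	size_t arglen = strlen(text) + 1;
	size_t len = sizeof(struct clvm_header) + arglen;
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	struct clvm_header *msg;
	int fd = -1, rv = -1;

	if (!(msg = calloc(1, len)))
		return -1;

	msg->cmd = CLVMD_CMD_TEST;
	msg->flags = CLVMD_FLAG_LOCAL;		/* do not forward to other nodes */
	msg->arglen = (uint32_t) arglen;
	msg->node[0] = '\0';			/* empty node name */
	memcpy(msg->node + 1, text, arglen);	/* args follow the node name */

	strncpy(addr.sun_path, CLVMD_SOCKNAME, sizeof(addr.sun_path) - 1);

	if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
		goto out;
	if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) ||
	    write(fd, msg, len) != (ssize_t) len)
		goto out;
	rv = 0;					/* the reply is another clvm_header */
out:
	if (fd >= 0)
		(void) close(fd);
	free(msg);
	return rv;
}

The length arithmetic matches what the command processor later in this patch expects: it recovers the argument length as msglen - sizeof(struct clvm_header) - strlen(node).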

daemons/clvmd/clvmd-cman.c Normal file

@@ -0,0 +1,505 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* CMAN communication layer for clvmd.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvmd-comms.h"
#include "clvm.h"
#include "clvmd.h"
#include "lvm-functions.h"
#include <libdlm.h>
#include <syslog.h>
#define LOCKSPACE_NAME "clvmd"
struct clvmd_node
{
struct cman_node *node;
int clvmd_up;
};
static int num_nodes;
static struct cman_node *nodes = NULL;
static struct cman_node this_node;
static int count_nodes; /* size of allocated nodes array */
static struct dm_hash_table *node_updown_hash;
static dlm_lshandle_t *lockspace;
static cman_handle_t c_handle;
static void count_clvmds_running(void);
static void get_members(void);
static int nodeid_from_csid(const char *csid);
static int name_from_nodeid(int nodeid, char *name);
static void event_callback(cman_handle_t handle, void *private, int reason, int arg);
static void data_callback(cman_handle_t handle, void *private,
char *buf, int len, uint8_t port, int nodeid);
struct lock_wait {
pthread_cond_t cond;
pthread_mutex_t mutex;
struct dlm_lksb lksb;
};
static int _init_cluster(void)
{
node_updown_hash = dm_hash_create(100);
/* Open the cluster communication socket */
c_handle = cman_init(NULL);
if (!c_handle) {
syslog(LOG_ERR, "Can't open cluster manager socket: %m");
return -1;
}
DEBUGLOG("Connected to CMAN\n");
if (cman_start_recv_data(c_handle, data_callback, CLUSTER_PORT_CLVMD)) {
syslog(LOG_ERR, "Can't bind cluster socket: %m");
return -1;
}
if (cman_start_notification(c_handle, event_callback)) {
syslog(LOG_ERR, "Can't start cluster event listening");
return -1;
}
/* Get the cluster members list */
get_members();
count_clvmds_running();
DEBUGLOG("CMAN initialisation complete\n");
/* Create a lockspace for LV & VG locks to live in */
lockspace = dlm_open_lockspace(LOCKSPACE_NAME);
if (!lockspace) {
lockspace = dlm_create_lockspace(LOCKSPACE_NAME, 0600);
if (!lockspace) {
syslog(LOG_ERR, "Unable to create DLM lockspace for CLVM: %m");
return -1;
}
DEBUGLOG("Created DLM lockspace for CLVMD.\n");
} else
DEBUGLOG("Opened existing DLM lockspace for CLVMD.\n");
dlm_ls_pthread_init(lockspace);
DEBUGLOG("DLM initialisation complete\n");
return 0;
}
static void _cluster_init_completed(void)
{
clvmd_cluster_init_completed();
}
static int _get_main_cluster_fd(void)
{
return cman_get_fd(c_handle);
}
static int _get_num_nodes(void)
{
int i;
int nnodes = 0;
/* return number of ACTIVE nodes */
for (i=0; i<num_nodes; i++) {
if (nodes[i].cn_member && nodes[i].cn_nodeid)
nnodes++;
}
return nnodes;
}
/* send_message with the fd check removed */
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
int nodeid = 0;
if (csid)
memcpy(&nodeid, csid, CMAN_MAX_CSID_LEN);
if (cman_send_data(c_handle, buf, msglen, 0, CLUSTER_PORT_CLVMD, nodeid) <= 0)
{
log_error("%s", errtext);
}
return msglen;
}
static void _get_our_csid(char *csid)
{
if (this_node.cn_nodeid == 0) {
cman_get_node(c_handle, 0, &this_node);
}
memcpy(csid, &this_node.cn_nodeid, CMAN_MAX_CSID_LEN);
}
/* Call a callback routine for each node that is known (down means not running a clvmd) */
static int _cluster_do_node_callback(struct local_client *client,
void (*callback) (struct local_client *,
const char *,
int))
{
int i;
int somedown = 0;
for (i = 0; i < _get_num_nodes(); i++) {
if (nodes[i].cn_member && nodes[i].cn_nodeid) {
int up = (int)(long)dm_hash_lookup_binary(node_updown_hash, (char *)&nodes[i].cn_nodeid, sizeof(int));
callback(client, (char *)&nodes[i].cn_nodeid, up);
if (!up)
somedown = -1;
}
}
return somedown;
}
/* Process OOB messages from the cluster socket */
static void event_callback(cman_handle_t handle, void *private, int reason, int arg)
{
char namebuf[MAX_CLUSTER_MEMBER_NAME_LEN];
switch (reason) {
case CMAN_REASON_PORTCLOSED:
name_from_nodeid(arg, namebuf);
log_notice("clvmd on node %s has died\n", namebuf);
DEBUGLOG("Got port closed message, removing node %s\n", namebuf);
dm_hash_insert_binary(node_updown_hash, (char *)&arg, sizeof(int), (void *)0);
break;
case CMAN_REASON_STATECHANGE:
DEBUGLOG("Got state change message, re-reading members list\n");
get_members();
break;
#if defined(LIBCMAN_VERSION) && LIBCMAN_VERSION >= 2
case CMAN_REASON_PORTOPENED:
/* Ignore this, wait for startup message from clvmd itself */
break;
case CMAN_REASON_TRY_SHUTDOWN:
DEBUGLOG("Got try shutdown, sending OK\n");
cman_replyto_shutdown(c_handle, 1);
break;
#endif
default:
/* ERROR */
DEBUGLOG("Got unknown event callback message: %d\n", reason);
break;
}
}
static struct local_client *cman_client;
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
/* Save this for data_callback */
cman_client = fd;
/* We never return a new client */
*new_client = NULL;
return cman_dispatch(c_handle, 0);
}
static void data_callback(cman_handle_t handle, void *private,
char *buf, int len, uint8_t port, int nodeid)
{
/* Ignore looped back messages */
if (nodeid == this_node.cn_nodeid)
return;
process_message(cman_client, buf, len, (char *)&nodeid);
}
static void _add_up_node(const char *csid)
{
/* It's up ! */
int nodeid = nodeid_from_csid(csid);
dm_hash_insert_binary(node_updown_hash, (char *)&nodeid, sizeof(int), (void *)1);
DEBUGLOG("Added new node %d to updown list\n", nodeid);
}
static void _cluster_closedown(void)
{
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
cman_finish(c_handle);
}
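/* Ask cman whether the given node has the clvmd port open, retrying while
   the query reports EBUSY. */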
static int is_listening(int nodeid)
{
int status;
do {
status = cman_is_listening(c_handle, nodeid, CLUSTER_PORT_CLVMD);
if (status < 0 && errno == EBUSY) { /* Don't busywait */
sleep(1);
errno = EBUSY; /* In case sleep trashes it */
}
}
while (status < 0 && errno == EBUSY);
return status;
}
/* Populate the list of CLVMDs running.
called only at startup time */
static void count_clvmds_running(void)
{
int i;
for (i = 0; i < num_nodes; i++) {
int nodeid = nodes[i].cn_nodeid;
if (is_listening(nodeid) == 1)
dm_hash_insert_binary(node_updown_hash, (void *)&nodeid, sizeof(int), (void*)1);
else
dm_hash_insert_binary(node_updown_hash, (void *)&nodeid, sizeof(int), (void*)0);
}
}
/* Get a list of active cluster members */
static void get_members(void)
{
int retnodes;
int status;
int i;
int high_nodeid = 0;
num_nodes = cman_get_node_count(c_handle);
if (num_nodes == -1) {
log_error("Unable to get node count");
return;
}
/* Not enough room for new nodes list ? */
if (num_nodes > count_nodes && nodes) {
free(nodes);
nodes = NULL;
}
if (nodes == NULL) {
count_nodes = num_nodes + 10; /* Overallocate a little */
nodes = malloc(count_nodes * sizeof(struct cman_node));
if (!nodes) {
log_error("Unable to allocate nodes array\n");
exit(5);
}
}
status = cman_get_nodes(c_handle, count_nodes, &retnodes, nodes);
if (status < 0) {
log_error("Unable to get node details");
exit(6);
}
/* Get the highest nodeid */
for (i=0; i<retnodes; i++) {
if (nodes[i].cn_nodeid > high_nodeid)
high_nodeid = nodes[i].cn_nodeid;
}
}
/* Convert a node name to a CSID */
static int _csid_from_name(char *csid, const char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (strcmp(name, nodes[i].cn_name) == 0) {
memcpy(csid, &nodes[i].cn_nodeid, CMAN_MAX_CSID_LEN);
return 0;
}
}
return -1;
}
/* Convert a CSID to a node name */
static int _name_from_csid(const char *csid, char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (memcmp(csid, &nodes[i].cn_nodeid, CMAN_MAX_CSID_LEN) == 0) {
strcpy(name, nodes[i].cn_name);
return 0;
}
}
/* Who?? */
strcpy(name, "Unknown");
return -1;
}
/* Convert a node ID to a node name */
static int name_from_nodeid(int nodeid, char *name)
{
int i;
for (i = 0; i < num_nodes; i++) {
if (nodeid == nodes[i].cn_nodeid) {
strcpy(name, nodes[i].cn_name);
return 0;
}
}
/* Who?? */
strcpy(name, "Unknown");
return -1;
}
/* Convert a CSID to a node ID */
static int nodeid_from_csid(const char *csid)
{
int nodeid;
memcpy(&nodeid, csid, CMAN_MAX_CSID_LEN);
return nodeid;
}
static int _is_quorate(void)
{
return cman_is_quorate(c_handle);
}
static void sync_ast_routine(void *arg)
{
struct lock_wait *lwait = arg;
pthread_mutex_lock(&lwait->mutex);
pthread_cond_signal(&lwait->cond);
pthread_mutex_unlock(&lwait->mutex);
}
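/* dlm_ls_lock() completes asynchronously via the AST callback above, so
   block on a condition variable until sync_ast_routine() fires, giving
   callers a synchronous lock interface. */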
static int _sync_lock(const char *resource, int mode, int flags, int *lockid)
{
int status;
struct lock_wait lwait;
if (!lockid) {
errno = EINVAL;
return -1;
}
DEBUGLOG("sync_lock: '%s' mode:%d flags=%d\n", resource,mode,flags);
/* Conversions need the lockid in the LKSB */
if (flags & LKF_CONVERT)
lwait.lksb.sb_lkid = *lockid;
pthread_cond_init(&lwait.cond, NULL);
pthread_mutex_init(&lwait.mutex, NULL);
pthread_mutex_lock(&lwait.mutex);
status = dlm_ls_lock(lockspace,
mode,
&lwait.lksb,
flags,
resource,
strlen(resource),
0, sync_ast_routine, &lwait, NULL, NULL);
if (status)
return status;
/* Wait for it to complete */
pthread_cond_wait(&lwait.cond, &lwait.mutex);
pthread_mutex_unlock(&lwait.mutex);
*lockid = lwait.lksb.sb_lkid;
errno = lwait.lksb.sb_status;
DEBUGLOG("sync_lock: returning lkid %x\n", *lockid);
if (lwait.lksb.sb_status)
return -1;
else
return 0;
}
static int _sync_unlock(const char *resource /* UNUSED */, int lockid)
{
int status;
struct lock_wait lwait;
DEBUGLOG("sync_unlock: '%s' lkid:%x\n", resource, lockid);
pthread_cond_init(&lwait.cond, NULL);
pthread_mutex_init(&lwait.mutex, NULL);
pthread_mutex_lock(&lwait.mutex);
status = dlm_ls_unlock(lockspace, lockid, 0, &lwait.lksb, &lwait);
if (status)
return status;
/* Wait for it to complete */
pthread_cond_wait(&lwait.cond, &lwait.mutex);
pthread_mutex_unlock(&lwait.mutex);
errno = lwait.lksb.sb_status;
if (lwait.lksb.sb_status != EUNLOCK)
return -1;
else
return 0;
}
static int _get_cluster_name(char *buf, int buflen)
{
cman_cluster_t cluster_info;
int status;
status = cman_get_cluster(c_handle, &cluster_info);
if (!status) {
strncpy(buf, cluster_info.ci_name, buflen);
}
return status;
}
static struct cluster_ops _cluster_cman_ops = {
.name = "cman",
.cluster_init_completed = _cluster_init_completed,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _sync_lock,
.sync_unlock = _sync_unlock,
};
struct cluster_ops *init_cman_cluster(void)
{
if (!_init_cluster())
return &_cluster_cman_ops;
else
return NULL;
}


@@ -0,0 +1,416 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
CLVMD Cluster LVM daemon command processor.
To add commands to the daemon simply add a processor in do_command and return
any messages back in buf and the length in *retlen. The initial value of
buflen is the maximum size of the buffer. If buf is not large enough then it
may be reallocated by the functions in here to a suitable size bearing in
mind that anything larger than the passed-in size will have to be returned
using the system LV and so performance will suffer.
The status return will be negated and passed back to the originating node.
pre- and post- command routines are called only on the local node. The
purpose is primarily to get and release locks, though the pre- routine should
also do any other local setups required by the command (if any) and can
return a failure code that prevents the command from being distributed around
the cluster
The pre- and post- routines are run in their own thread so can block as long
they like, do_command is run in the main clvmd thread so should not block for
too long. If the pre-command returns an error code (!=0) then the command
will not be propagated around the cluster but the post-command WILL be called
Also note that the pre and post routine are *always* called on the local
node, even if the command to be executed was only requested to run on a
remote node. It may peek inside the client structure to check the status of
the command.
The clients of the daemon must, naturally, understand the return messages and
codes.
Routines in here may only READ the values in the client structure passed in
apart from client->private which they are free to do what they like with.
*/
#include "clvmd-common.h"
#include "clvmd-comms.h"
#include "clvm.h"
#include "clvmd.h"
#include "lvm-globals.h"
#include "lvm-functions.h"
#include "locking.h"
#include <sys/utsname.h>
extern struct cluster_ops *clops;
static int restart_clvmd(void);
/* This is where all the real work happens:
NOTE: client will be NULL when this is executed on a remote node */
int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
char **buf, int buflen, int *retlen)
{
char *args = msg->node + strlen(msg->node) + 1;
int arglen = msglen - sizeof(struct clvm_header) - strlen(msg->node);
int status = 0;
char *lockname;
const char *locktype;
struct utsname nodeinfo;
unsigned char lock_cmd;
unsigned char lock_flags;
/* Do the command */
switch (msg->cmd) {
/* Just a test message */
case CLVMD_CMD_TEST:
if (arglen > buflen) {
char *new_buf;
buflen = arglen + 200;
new_buf = realloc(*buf, buflen);
if (new_buf == NULL) {
status = errno;
free (*buf);
}
*buf = new_buf;
}
if (*buf) {
if (uname(&nodeinfo))
memset(&nodeinfo, 0, sizeof(nodeinfo));
*retlen = 1 + dm_snprintf(*buf, buflen,
"TEST from %s: %s v%s",
nodeinfo.nodename, args,
nodeinfo.release);
}
break;
case CLVMD_CMD_LOCK_VG:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
/* Check to see if the VG is in use by LVM1 */
status = do_check_lvm1(lockname);
do_lock_vg(lock_cmd, lock_flags, lockname);
break;
case CLVMD_CMD_LOCK_LV:
/* This is the biggie */
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = do_lock_lv(lock_cmd, lock_flags, lockname);
/* Replace EIO with something less scary */
if (status == EIO) {
*retlen = 1 + dm_snprintf(*buf, buflen, "%s",
get_last_lvm_error());
return EIO;
}
break;
case CLVMD_CMD_LOCK_QUERY:
lockname = &args[2];
if (buflen < 3)
return EIO;
if ((locktype = do_lock_query(lockname)))
*retlen = 1 + dm_snprintf(*buf, buflen, "%s", locktype);
break;
case CLVMD_CMD_REFRESH:
do_refresh_cache();
break;
case CLVMD_CMD_SYNC_NAMES:
lvm_do_fs_unlock();
break;
case CLVMD_CMD_SET_DEBUG:
clvmd_set_debug((debug_t) args[0]);
break;
case CLVMD_CMD_RESTART:
status = restart_clvmd();
break;
case CLVMD_CMD_GET_CLUSTERNAME:
status = clops->get_cluster_name(*buf, buflen);
if (!status)
*retlen = strlen(*buf)+1;
break;
case CLVMD_CMD_VG_BACKUP:
/*
* Do not run backup on local node, caller should do that.
*/
if (!client)
lvm_do_backup(&args[2]);
break;
default:
/* Won't get here because command is validated in pre_command */
break;
}
/* Check the status of the command and return the error text */
if (status) {
if (*buf)
*retlen = dm_snprintf(*buf, buflen, "%s", strerror(status)) + 1;
else
*retlen = 0;
}
return status;
}
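/* Pre-command helper for CLVMD_CMD_LOCK_VG: takes or drops the DLM lock
   backing a VG lock request and remembers lock IDs in a per-client hash
   so the matching unlock can find them. */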
static int lock_vg(struct local_client *client)
{
struct dm_hash_table *lock_hash;
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
unsigned char lock_cmd;
int lock_mode;
char *args = header->node + strlen(header->node) + 1;
int lkid;
int status;
char *lockname;
/*
* Keep a track of VG locks in our own hash table. In current
* practice there should only ever be more than two VGs locked
* if a user tries to merge lots of them at once
*/
if (!client->bits.localsock.private) {
if (!(lock_hash = dm_hash_create(3)))
return ENOMEM;
client->bits.localsock.private = (void *) lock_hash;
} else
lock_hash = (struct dm_hash_table *) client->bits.localsock.private;
lock_cmd = args[0] & (LCK_NONBLOCK | LCK_HOLD | LCK_SCOPE_MASK | LCK_TYPE_MASK);
lock_mode = ((int) lock_cmd & LCK_TYPE_MASK);
/* lock_flags = args[1]; */
lockname = &args[2];
DEBUGLOG("(%p) doing PRE command LOCK_VG '%s' at %x\n", client, lockname, lock_cmd);
if (lock_mode == LCK_UNLOCK) {
if (!(lkid = (int) (long) dm_hash_lookup(lock_hash, lockname)))
return EINVAL;
if ((status = sync_unlock(lockname, lkid)))
status = errno;
else
dm_hash_remove(lock_hash, lockname);
} else {
/* Read locks need to be PR; other modes get passed through */
if (lock_mode == LCK_READ)
lock_mode = LCK_PREAD;
if ((status = sync_lock(lockname, lock_mode, (lock_cmd & LCK_NONBLOCK) ? LCKF_NOQUEUE : 0, &lkid)))
status = errno;
else if (!dm_hash_insert(lock_hash, lockname, (void *) (long) lkid))
return ENOMEM;
}
return status;
}
/* Pre-command is a good place to get locks that are needed only for the duration
of the commands around the cluster (don't forget to free them in post-command),
and to sanity check the command arguments */
int do_pre_command(struct local_client *client)
{
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
unsigned char lock_cmd;
unsigned char lock_flags;
char *args = header->node + strlen(header->node) + 1;
int lockid = 0;
int status = 0;
char *lockname;
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_lock("CLVMD_TEST", LCK_EXCL, 0, &lockid);
client->bits.localsock.private = (void *)(long)lockid;
break;
case CLVMD_CMD_LOCK_VG:
lockname = &args[2];
/* We take out a real lock unless LCK_CACHE was set */
if (!strncmp(lockname, "V_", 2) ||
!strncmp(lockname, "P_#", 3))
status = lock_vg(client);
break;
case CLVMD_CMD_LOCK_LV:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = pre_lock_lv(lock_cmd, lock_flags, lockname);
break;
case CLVMD_CMD_REFRESH:
case CLVMD_CMD_GET_CLUSTERNAME:
case CLVMD_CMD_SET_DEBUG:
case CLVMD_CMD_VG_BACKUP:
case CLVMD_CMD_SYNC_NAMES:
case CLVMD_CMD_LOCK_QUERY:
case CLVMD_CMD_RESTART:
break;
default:
log_error("Unknown command %d received\n", header->cmd);
status = EINVAL;
}
return status;
}
/* Note that the post-command routine is called even if the pre-command or the real command
failed */
int do_post_command(struct local_client *client)
{
struct clvm_header *header =
(struct clvm_header *) client->bits.localsock.cmd;
int status = 0;
unsigned char lock_cmd;
unsigned char lock_flags;
char *args = header->node + strlen(header->node) + 1;
char *lockname;
switch (header->cmd) {
case CLVMD_CMD_TEST:
status = sync_unlock("CLVMD_TEST", (int) (long) client->bits.localsock.private);
client->bits.localsock.private = NULL;
break;
case CLVMD_CMD_LOCK_LV:
lock_cmd = args[0];
lock_flags = args[1];
lockname = &args[2];
status = post_lock_lv(lock_cmd, lock_flags, lockname);
break;
default:
/* Nothing to do here */
break;
}
return status;
}
/* Called when the client is about to be deleted */
void cmd_client_cleanup(struct local_client *client)
{
struct dm_hash_node *v;
struct dm_hash_table *lock_hash;
int lkid;
char *lockname;
DEBUGLOG("(%p) Client thread cleanup\n", client);
if (!client->bits.localsock.private)
return;
lock_hash = (struct dm_hash_table *)client->bits.localsock.private;
dm_hash_iterate(v, lock_hash) {
lkid = (int)(long)dm_hash_get_data(lock_hash, v);
lockname = dm_hash_get_key(lock_hash, v);
DEBUGLOG("(%p) Cleanup: Unlocking lock %s %x\n", client, lockname, lkid);
(void) sync_unlock(lockname, lkid);
}
dm_hash_destroy(lock_hash);
client->bits.localsock.private = NULL;
}
static int restart_clvmd(void)
{
const char **argv;
char *lv_name;
int argc = 0, max_locks = 0;
struct dm_hash_node *hn = NULL;
char debug_arg[16];
const char *clvmd = getenv("LVM_CLVMD_BINARY") ? : CLVMD_PATH;
DEBUGLOG("clvmd restart requested\n");
/* Count exclusively-open LVs */
do {
hn = get_next_excl_lock(hn, &lv_name);
if (lv_name) {
max_locks++;
if (!*lv_name)
break; /* FIXME: Is this error ? */
}
} while (hn);
/* clvmd + locks (-E uuid) + debug (-d X) + NULL */
if (!(argv = malloc((max_locks * 2 + 6) * sizeof(*argv))))
goto_out;
/*
* Build the command-line
*/
argv[argc++] = "clvmd";
/* Propagate debug options */
if (clvmd_get_debug()) {
if (dm_snprintf(debug_arg, sizeof(debug_arg), "-d%u", clvmd_get_debug()) < 0)
goto_out;
argv[argc++] = debug_arg;
}
/* Propagate foreground options */
if (clvmd_get_foreground())
argv[argc++] = "-f";
argv[argc++] = "-I";
argv[argc++] = clops->name;
/* Now add the exclusively-open LVs */
hn = NULL;
do {
hn = get_next_excl_lock(hn, &lv_name);
if (lv_name) {
if (!*lv_name)
break; /* FIXME: Is this error ? */
argv[argc++] = "-E";
argv[argc++] = lv_name;
DEBUGLOG("excl lock: %s\n", lv_name);
}
} while (hn);
argv[argc] = NULL;
/* Exec new clvmd */
DEBUGLOG("--- Restarting %s ---\n", clvmd);
for (argc = 1; argv[argc]; argc++) DEBUGLOG("--- %d: %s\n", argc, argv[argc]);
/* NOTE: This will fail when downgrading! */
execvp(clvmd, (char **)argv);
out:
/* We failed */
DEBUGLOG("Restart of clvmd failed.\n");
free(argv);
return EIO;
}
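Following the contract spelled out in the header comment of this command processor, wiring up a new command means adding one case to each of do_pre_command(), do_command() and do_post_command(). The fragments below sketch that shape; they are illustrative only and not part of the patch. CLVMD_CMD_PING, the command number 60 and the "CLVMD_PING" lock resource are invented for the example and simply mirror how CLVMD_CMD_TEST is handled above.

/* Hypothetical command number (not part of the protocol in clvm.h). */
#define CLVMD_CMD_PING 60

/* do_pre_command(): runs locally, takes any lock the command needs. */
case CLVMD_CMD_PING:
	status = sync_lock("CLVMD_PING", LCK_EXCL, 0, &lockid);
	client->bits.localsock.private = (void *) (long) lockid;
	break;

/* do_command(): runs on every node the request reaches; client is NULL
   when executed on a remote node. */
case CLVMD_CMD_PING:
	*retlen = 1 + dm_snprintf(*buf, buflen, "PONG");
	break;

/* do_post_command(): always runs locally, even on failure, so release
   whatever the pre-command acquired. */
case CLVMD_CMD_PING:
	status = sync_unlock("CLVMD_PING",
			     (int) (long) client->bits.localsock.private);
	client->bits.localsock.private = NULL;
	break;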


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2004-2008 Red Hat, Inc. All rights reserved.
* Copyright (C) 2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -12,11 +12,16 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _LIBDM_KDEV_H
#define _LIBDM_KDEV_H
/*
* This file must be included first by every clvmd source file.
*/
#ifndef _LVM_CLVMD_COMMON_H
#define _LVM_CLVMD_COMMON_H
#define MAJOR(dev) ((dev & 0xfff00) >> 8)
#define MINOR(dev) ((dev & 0xff) | ((dev >> 12) & 0xfff00))
#define MKDEV(ma,mi) ((mi & 0xff) | (ma << 8) | ((mi & ~0xff) << 12))
#define _REENTRANT
#include "tool.h"
#include "lvm-logging.h"
#endif

daemons/clvmd/clvmd-comms.h Normal file

@@ -0,0 +1,119 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* Abstraction layer for clvmd cluster communications
*/
#ifndef _CLVMD_COMMS_H
#define _CLVMD_COMMS_H
struct local_client;
struct cluster_ops {
const char *name;
void (*cluster_init_completed) (void);
int (*cluster_send_message) (const void *buf, int msglen,
const char *csid,
const char *errtext);
int (*name_from_csid) (const char *csid, char *name);
int (*csid_from_name) (char *csid, const char *name);
int (*get_num_nodes) (void);
int (*cluster_fd_callback) (struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client);
int (*get_main_cluster_fd) (void); /* gets accept FD or cman cluster socket */
int (*cluster_do_node_callback) (struct local_client *client,
void (*callback) (struct local_client *,
const char *csid,
int node_up));
int (*is_quorate) (void);
void (*get_our_csid) (char *csid);
void (*add_up_node) (const char *csid);
void (*reread_config) (void);
void (*cluster_closedown) (void);
int (*get_cluster_name)(char *buf, int buflen);
int (*sync_lock) (const char *resource, int mode,
int flags, int *lockid);
int (*sync_unlock) (const char *resource, int lockid);
};
#ifdef USE_CMAN
# include <netinet/in.h>
# include "libcman.h"
# define CMAN_MAX_CSID_LEN 4
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN CMAN_MAX_CSID_LEN
# endif
# undef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN CMAN_MAX_NODENAME_LEN
# define CMAN_MAX_CLUSTER_MESSAGE 1500
# define CLUSTER_PORT_CLVMD 11
struct cluster_ops *init_cman_cluster(void);
#endif
#ifdef USE_OPENAIS
# include <openais/saAis.h>
# include <corosync/totem/totem.h>
# define OPENAIS_CSID_LEN (sizeof(int))
# define OPENAIS_MAX_CLUSTER_MESSAGE MESSAGE_SIZE_MAX
# define OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN SA_MAX_NAME_LENGTH
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN SA_MAX_NAME_LENGTH
# endif
# ifndef CMAN_MAX_CLUSTER_MESSAGE
# define CMAN_MAX_CLUSTER_MESSAGE MESSAGE_SIZE_MAX
# endif
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_openais_cluster(void);
#endif
#ifdef USE_COROSYNC
# include <corosync/corotypes.h>
# define COROSYNC_CSID_LEN (sizeof(int))
# define COROSYNC_MAX_CLUSTER_MESSAGE 65535
# define COROSYNC_MAX_CLUSTER_MEMBER_NAME_LEN CS_MAX_NAME_LENGTH
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN CS_MAX_NAME_LENGTH
# endif
# ifndef CMAN_MAX_CLUSTER_MESSAGE
# define CMAN_MAX_CLUSTER_MESSAGE 65535
# endif
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_corosync_cluster(void);
#endif
#ifdef USE_SINGLENODE
# define SINGLENODE_CSID_LEN (sizeof(int))
# ifndef MAX_CLUSTER_MEMBER_NAME_LEN
# define MAX_CLUSTER_MEMBER_NAME_LEN 64
# endif
# define SINGLENODE_MAX_CLUSTER_MESSAGE 65535
# ifndef MAX_CSID_LEN
# define MAX_CSID_LEN sizeof(int)
# endif
struct cluster_ops *init_singlenode_cluster(void);
#endif
#endif
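Every backend exposes the same struct cluster_ops vtable, so callers never touch cman, openais or corosync APIs directly. A small sketch of driving a backend through that table follows; it is illustrative only. The with_cluster_lock() name is invented, ops would come from one of the init_*_cluster() constructors declared above, and LCK_EXCL is assumed to come from LVM's locking header, as used elsewhere in this patch.

#include "clvmd-comms.h"
#include "locking.h"	/* LCK_EXCL, as used by clvmd-command.c */

/* Take and drop a cluster-wide lock through whichever backend was chosen. */
static int with_cluster_lock(struct cluster_ops *ops, const char *resource)
{
	int lockid = 0;

	if (ops->sync_lock(resource, LCK_EXCL, 0, &lockid))
		return -1;		/* the backend sets errno */

	/* ... critical section ... */

	return ops->sync_unlock(resource, lockid);
}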


@@ -0,0 +1,662 @@
/*
* Copyright (C) 2009-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* This provides the interface between clvmd and corosync/DLM as the cluster
* and lock manager.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include "lvm-functions.h"
#include "locking.h"
#include <corosync/cpg.h>
#include <corosync/quorum.h>
#ifdef HAVE_COROSYNC_CONFDB_H
# include <corosync/confdb.h>
#elif defined HAVE_COROSYNC_CMAP_H
# include <corosync/cmap.h>
#else
# error "Either HAVE_COROSYNC_CONFDB_H or HAVE_COROSYNC_CMAP_H must be defined."
#endif
#include <libdlm.h>
#include <syslog.h>
/* Timeout value for several corosync calls */
#define LOCKSPACE_NAME "clvmd"
static void corosync_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len);
static void corosync_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries);
static void _cluster_closedown(void);
/* Hash list of nodes in the cluster */
static struct dm_hash_table *node_hash;
/* Number of active nodes */
static int num_nodes;
static unsigned int our_nodeid;
static struct local_client *cluster_client;
/* Corosync handles */
static cpg_handle_t cpg_handle;
static quorum_handle_t quorum_handle;
/* DLM Handle */
static dlm_lshandle_t *lockspace;
static struct cpg_name cpg_group_name;
/* Corosync callback structs */
cpg_callbacks_t corosync_cpg_callbacks = {
.cpg_deliver_fn = corosync_cpg_deliver_callback,
.cpg_confchg_fn = corosync_cpg_confchg_callback,
};
quorum_callbacks_t quorum_callbacks = {
.quorum_notify_fn = NULL,
};
struct node_info
{
enum {NODE_DOWN, NODE_CLVMD} state;
int nodeid;
};
/* Set errno to something approximating the right value and return 0 or -1 */
static int cs_to_errno(cs_error_t err)
{
switch(err)
{
case CS_OK:
return 0;
case CS_ERR_LIBRARY:
errno = EINVAL;
break;
case CS_ERR_VERSION:
errno = EINVAL;
break;
case CS_ERR_INIT:
errno = EINVAL;
break;
case CS_ERR_TIMEOUT:
errno = ETIME;
break;
case CS_ERR_TRY_AGAIN:
errno = EAGAIN;
break;
case CS_ERR_INVALID_PARAM:
errno = EINVAL;
break;
case CS_ERR_NO_MEMORY:
errno = ENOMEM;
break;
case CS_ERR_BAD_HANDLE:
errno = EINVAL;
break;
case CS_ERR_BUSY:
errno = EBUSY;
break;
case CS_ERR_ACCESS:
errno = EPERM;
break;
case CS_ERR_NOT_EXIST:
errno = ENOENT;
break;
case CS_ERR_NAME_TOO_LONG:
errno = ENAMETOOLONG;
break;
case CS_ERR_EXIST:
errno = EEXIST;
break;
case CS_ERR_NO_SPACE:
errno = ENOSPC;
break;
case CS_ERR_INTERRUPT:
errno = EINTR;
break;
case CS_ERR_NAME_NOT_FOUND:
errno = ENOENT;
break;
case CS_ERR_NO_RESOURCES:
errno = ENOMEM;
break;
case CS_ERR_NOT_SUPPORTED:
errno = EOPNOTSUPP;
break;
case CS_ERR_BAD_OPERATION:
errno = EINVAL;
break;
case CS_ERR_FAILED_OPERATION:
errno = EIO;
break;
case CS_ERR_MESSAGE_ERROR:
errno = EIO;
break;
case CS_ERR_QUEUE_FULL:
errno = EXFULL;
break;
case CS_ERR_QUEUE_NOT_AVAILABLE:
errno = EINVAL;
break;
case CS_ERR_BAD_FLAGS:
errno = EINVAL;
break;
case CS_ERR_TOO_BIG:
errno = E2BIG;
break;
case CS_ERR_NO_SECTIONS:
errno = ENOMEM;
break;
default:
errno = EINVAL;
break;
}
return -1;
}
static char *print_corosync_csid(const char *csid)
{
static char buf[128];
int id;
memcpy(&id, csid, sizeof(int));
sprintf(buf, "%d", id);
return buf;
}
static void corosync_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len)
{
int target_nodeid;
memcpy(&target_nodeid, msg, COROSYNC_CSID_LEN);
DEBUGLOG("%u got message from nodeid %d for %d. len %zd\n",
our_nodeid, nodeid, target_nodeid, msg_len-4);
if (nodeid != our_nodeid)
if (target_nodeid == our_nodeid || target_nodeid == 0)
process_message(cluster_client, (char *)msg+COROSYNC_CSID_LEN,
msg_len-COROSYNC_CSID_LEN, (char*)&nodeid);
}
static void corosync_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries)
{
int i;
struct node_info *ninfo;
DEBUGLOG("confchg callback. %zd joined, %zd left, %zd members\n",
joined_list_entries, left_list_entries, member_list_entries);
for (i=0; i<joined_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&joined_list[i].nodeid,
COROSYNC_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = joined_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
COROSYNC_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
for (i=0; i<left_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&left_list[i].nodeid,
COROSYNC_CSID_LEN);
if (ninfo)
ninfo->state = NODE_DOWN;
}
num_nodes = member_list_entries;
}
static int _init_cluster(void)
{
cs_error_t err;
#ifdef QUORUM_SET /* corosync/quorum.h */
uint32_t quorum_type;
#endif
node_hash = dm_hash_create(100);
err = cpg_initialize(&cpg_handle,
&corosync_cpg_callbacks);
if (err != CS_OK) {
syslog(LOG_ERR, "Cannot initialise Corosync CPG service: %d",
err);
DEBUGLOG("Cannot initialise Corosync CPG service: %d", err);
return cs_to_errno(err);
}
#ifdef QUORUM_SET
err = quorum_initialize(&quorum_handle,
&quorum_callbacks,
&quorum_type);
if (quorum_type != QUORUM_SET) {
syslog(LOG_ERR, "Corosync quorum service is not configured");
DEBUGLOG("Corosync quorum service is not configured");
return EINVAL;
}
#else
err = quorum_initialize(&quorum_handle,
&quorum_callbacks);
#endif
if (err != CS_OK) {
syslog(LOG_ERR, "Cannot initialise Corosync quorum service: %d",
err);
DEBUGLOG("Cannot initialise Corosync quorum service: %d", err);
return cs_to_errno(err);
}
/* Create a lockspace for LV & VG locks to live in */
lockspace = dlm_open_lockspace(LOCKSPACE_NAME);
if (!lockspace) {
lockspace = dlm_create_lockspace(LOCKSPACE_NAME, 0600);
if (!lockspace) {
syslog(LOG_ERR, "Unable to create DLM lockspace for CLVM: %m");
return -1;
}
DEBUGLOG("Created DLM lockspace for CLVMD.\n");
} else
DEBUGLOG("Opened existing DLM lockspace for CLVMD.\n");
dlm_ls_pthread_init(lockspace);
DEBUGLOG("DLM initialisation complete\n");
/* Connect to the clvmd group */
strcpy((char *)cpg_group_name.value, "clvmd");
cpg_group_name.length = strlen((char *)cpg_group_name.value);
err = cpg_join(cpg_handle, &cpg_group_name);
if (err != CS_OK) {
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
syslog(LOG_ERR, "Cannot join clvmd process group");
DEBUGLOG("Cannot join clvmd process group: %d\n", err);
return cs_to_errno(err);
}
err = cpg_local_get(cpg_handle,
&our_nodeid);
if (err != CS_OK) {
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
syslog(LOG_ERR, "Cannot get local node id\n");
return cs_to_errno(err);
}
DEBUGLOG("Our local node id is %d\n", our_nodeid);
DEBUGLOG("Connected to Corosync\n");
return 0;
}
static void _cluster_closedown(void)
{
dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 1);
cpg_finalize(cpg_handle);
quorum_finalize(quorum_handle);
}
static void _get_our_csid(char *csid)
{
memcpy(csid, &our_nodeid, sizeof(int));
}
/* Corosync doesn't really have node names so we
just use the node ID in hex instead */
static int _csid_from_name(char *csid, const char *name)
{
int nodeid;
struct node_info *ninfo;
if (sscanf(name, "%x", &nodeid) == 1) {
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (ninfo)
return nodeid;
}
return -1;
}
static int _name_from_csid(const char *csid, char *name)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (!ninfo)
{
sprintf(name, "UNKNOWN %s", print_corosync_csid(csid));
return -1;
}
sprintf(name, "%x", ninfo->nodeid);
return 0;
}
static int _get_num_nodes(void)
{
DEBUGLOG("num_nodes = %d\n", num_nodes);
return num_nodes;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
if (!ninfo) {
DEBUGLOG("corosync_add_up_node no node_hash entry for csid %s\n",
print_corosync_csid(csid));
return;
}
DEBUGLOG("corosync_add_up_node %d\n", ninfo->nodeid);
ninfo->state = NODE_CLVMD;
return;
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
struct dm_hash_node *hn;
struct node_info *ninfo;
dm_hash_iterate(hn, node_hash)
{
char csid[COROSYNC_CSID_LEN];
ninfo = dm_hash_get_data(node_hash, hn);
memcpy(csid, dm_hash_get_key(node_hash, hn), COROSYNC_CSID_LEN);
DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
ninfo->state);
if (ninfo->state == NODE_CLVMD)
callback(master_client, csid, 1);
}
return 0;
}
/* Real locking */
static int _lock_resource(const char *resource, int mode, int flags, int *lockid)
{
struct dlm_lksb lksb;
int err;
DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
if (flags & LKF_CONVERT)
lksb.sb_lkid = *lockid;
err = dlm_ls_lock_wait(lockspace,
mode,
&lksb,
flags,
resource,
strlen(resource),
0,
NULL, NULL, NULL);
if (err != 0)
{
DEBUGLOG("dlm_ls_lock returned %d\n", errno);
return err;
}
if (lksb.sb_status != 0)
{
DEBUGLOG("dlm_ls_lock returns lksb.sb_status %d\n", lksb.sb_status);
errno = lksb.sb_status;
return -1;
}
DEBUGLOG("lock_resource returning %d, lock_id=%x\n", err, lksb.sb_lkid);
*lockid = lksb.sb_lkid;
return 0;
}
static int _unlock_resource(const char *resource, int lockid)
{
struct dlm_lksb lksb;
int err;
DEBUGLOG("unlock_resource: %s lockid: %x\n", resource, lockid);
lksb.sb_lkid = lockid;
err = dlm_ls_unlock_wait(lockspace,
lockid,
0,
&lksb);
if (err != 0)
{
DEBUGLOG("Unlock returned %d\n", err);
return err;
}
if (lksb.sb_status != EUNLOCK)
{
DEBUGLOG("dlm_ls_unlock_wait returns lksb.sb_status: %d\n", lksb.sb_status);
errno = lksb.sb_status;
return -1;
}
return 0;
}
static int _is_quorate(void)
{
int quorate;
if (quorum_getquorate(quorum_handle, &quorate) == CS_OK)
return quorate;
else
return 0;
}
static int _get_main_cluster_fd(void)
{
int select_fd;
cpg_fd_get(cpg_handle, &select_fd);
return select_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
cluster_client = fd;
*new_client = NULL;
cpg_dispatch(cpg_handle, CS_DISPATCH_ONE);
return 1;
}
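/*
 * Each message is sent as two iovecs: the first carries the target node id
 * (0 means "send to all nodes"), the second carries the clvmd payload itself.
 */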
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
static pthread_mutex_t _mutex = PTHREAD_MUTEX_INITIALIZER;
struct iovec iov[2];
cs_error_t err;
int target_node;
if (csid)
memcpy(&target_node, csid, COROSYNC_CSID_LEN);
else
target_node = 0;
iov[0].iov_base = &target_node;
iov[0].iov_len = sizeof(int);
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
pthread_mutex_lock(&_mutex);
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
pthread_mutex_unlock(&_mutex);
return cs_to_errno(err);
}
#ifdef HAVE_COROSYNC_CONFDB_H
/*
* We are not necessarily connected to a Red Hat Cluster system,
* but if we are, this returns the cluster name from cluster.conf.
* I've used confdb rather than ccs to reduce the inter-package
* dependencies, as well as to allow people to set a cluster name
* for themselves even if they are not running on a RH cluster.
*/
static int _get_cluster_name(char *buf, int buflen)
{
confdb_handle_t handle;
int result;
size_t namelen = buflen;
hdb_handle_t cluster_handle;
confdb_callbacks_t callbacks = {
.confdb_key_change_notify_fn = NULL,
.confdb_object_create_change_notify_fn = NULL,
.confdb_object_delete_change_notify_fn = NULL
};
/* This is a default in case everything else fails */
strncpy(buf, "Corosync", buflen);
/* Look for a cluster name in confdb */
result = confdb_initialize (&handle, &callbacks);
if (result != CS_OK)
return 0;
result = confdb_object_find_start(handle, OBJECT_PARENT_HANDLE);
if (result != CS_OK)
goto out;
result = confdb_object_find(handle, OBJECT_PARENT_HANDLE, (void *)"cluster", strlen("cluster"), &cluster_handle);
if (result != CS_OK)
goto out;
result = confdb_key_get(handle, cluster_handle, (void *)"name", strlen("name"), buf, &namelen);
if (result != CS_OK)
goto out;
buf[namelen] = '\0';
out:
confdb_finalize(handle);
return 0;
}
#elif defined HAVE_COROSYNC_CMAP_H
static int _get_cluster_name(char *buf, int buflen)
{
cmap_handle_t cmap_handle = 0;
int result;
char *name = NULL;
/* This is a default in case everything else fails */
strncpy(buf, "Corosync", buflen);
/* Look for a cluster name in cmap */
result = cmap_initialize(&cmap_handle);
if (result != CS_OK)
return 0;
result = cmap_get_string(cmap_handle, "totem.cluster_name", &name);
if (result != CS_OK)
goto out;
memset(buf, 0, buflen);
strncpy(buf, name, buflen - 1);
out:
if (name)
free(name);
cmap_finalize(cmap_handle);
return 0;
}
#endif
static struct cluster_ops _cluster_corosync_ops = {
.name = "corosync",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _lock_resource,
.sync_unlock = _unlock_resource,
};
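/*
 * Illustrative use (a sketch, not taken from clvmd itself): the daemon
 * selects one ops table at startup and drives the cluster through it, e.g.
 *
 *   struct cluster_ops *ops = init_corosync_cluster();
 *   if (ops)
 *       DEBUGLOG("Using %s interface, quorate=%d\n",
 *                ops->name, ops->is_quorate());
 */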
struct cluster_ops *init_corosync_cluster(void)
{
if (!_init_cluster())
return &_cluster_corosync_ops;
else
return NULL;
}


@@ -0,0 +1,687 @@
/*
* Copyright (C) 2007-2009 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/*
* This provides the interface between clvmd and OpenAIS as the cluster
* and lock manager.
*/
#include "clvmd-common.h"
#include <pthread.h>
#include <fcntl.h>
#include <syslog.h>
#include <openais/saAis.h>
#include <openais/saLck.h>
#include <corosync/corotypes.h>
#include <corosync/cpg.h>
#include "locking.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "lvm-functions.h"
#include "clvmd.h"
/* Timeout value for several openais calls */
#define TIMEOUT 10
static void openais_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len);
static void openais_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries);
static void _cluster_closedown(void);
/* Hash list of nodes in the cluster */
static struct dm_hash_table *node_hash;
/* For associating lock IDs & resource handles */
static struct dm_hash_table *lock_hash;
/* Number of active nodes */
static int num_nodes;
static unsigned int our_nodeid;
static struct local_client *cluster_client;
/* OpenAIS handles */
static cpg_handle_t cpg_handle;
static SaLckHandleT lck_handle;
static struct cpg_name cpg_group_name;
/* Openais callback structs */
cpg_callbacks_t openais_cpg_callbacks = {
.cpg_deliver_fn = openais_cpg_deliver_callback,
.cpg_confchg_fn = openais_cpg_confchg_callback,
};
struct node_info
{
enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
int nodeid;
};
struct lock_info
{
SaLckResourceHandleT res_handle;
SaLckLockIdT lock_id;
SaNameT lock_name;
};
/* Set errno to something approximating the right value and return 0 or -1 */
static int ais_to_errno(SaAisErrorT err)
{
switch(err)
{
case SA_AIS_OK:
return 0;
case SA_AIS_ERR_LIBRARY:
errno = EINVAL;
break;
case SA_AIS_ERR_VERSION:
errno = EINVAL;
break;
case SA_AIS_ERR_INIT:
errno = EINVAL;
break;
case SA_AIS_ERR_TIMEOUT:
errno = ETIME;
break;
case SA_AIS_ERR_TRY_AGAIN:
errno = EAGAIN;
break;
case SA_AIS_ERR_INVALID_PARAM:
errno = EINVAL;
break;
case SA_AIS_ERR_NO_MEMORY:
errno = ENOMEM;
break;
case SA_AIS_ERR_BAD_HANDLE:
errno = EINVAL;
break;
case SA_AIS_ERR_BUSY:
errno = EBUSY;
break;
case SA_AIS_ERR_ACCESS:
errno = EPERM;
break;
case SA_AIS_ERR_NOT_EXIST:
errno = ENOENT;
break;
case SA_AIS_ERR_NAME_TOO_LONG:
errno = ENAMETOOLONG;
break;
case SA_AIS_ERR_EXIST:
errno = EEXIST;
break;
case SA_AIS_ERR_NO_SPACE:
errno = ENOSPC;
break;
case SA_AIS_ERR_INTERRUPT:
errno = EINTR;
break;
case SA_AIS_ERR_NAME_NOT_FOUND:
errno = ENOENT;
break;
case SA_AIS_ERR_NO_RESOURCES:
errno = ENOMEM;
break;
case SA_AIS_ERR_NOT_SUPPORTED:
errno = EOPNOTSUPP;
break;
case SA_AIS_ERR_BAD_OPERATION:
errno = EINVAL;
break;
case SA_AIS_ERR_FAILED_OPERATION:
errno = EIO;
break;
case SA_AIS_ERR_MESSAGE_ERROR:
errno = EIO;
break;
case SA_AIS_ERR_QUEUE_FULL:
errno = EXFULL;
break;
case SA_AIS_ERR_QUEUE_NOT_AVAILABLE:
errno = EINVAL;
break;
case SA_AIS_ERR_BAD_FLAGS:
errno = EINVAL;
break;
case SA_AIS_ERR_TOO_BIG:
errno = E2BIG;
break;
case SA_AIS_ERR_NO_SECTIONS:
errno = ENOMEM;
break;
default:
errno = EINVAL;
break;
}
return -1;
}
static char *print_openais_csid(const char *csid)
{
static char buf[128];
int id;
memcpy(&id, csid, sizeof(int));
sprintf(buf, "%d", id);
return buf;
}
static int add_internal_client(int fd, fd_callback_t callback)
{
struct local_client *client;
DEBUGLOG("Add_internal_client, fd = %d\n", fd);
if (!(client = dm_zalloc(sizeof(*client)))) {
DEBUGLOG("malloc failed\n");
return -1;
}
client->fd = fd;
client->type = CLUSTER_INTERNAL;
client->callback = callback;
add_client(client);
/* Set Close-on-exec */
fcntl(fd, F_SETFD, 1);
return 0;
}
static void openais_cpg_deliver_callback (cpg_handle_t handle,
const struct cpg_name *groupName,
uint32_t nodeid,
uint32_t pid,
void *msg,
size_t msg_len)
{
int target_nodeid;
memcpy(&target_nodeid, msg, OPENAIS_CSID_LEN);
DEBUGLOG("%u got message from nodeid %d for %d. len %" PRIsize_t "\n",
our_nodeid, nodeid, target_nodeid, msg_len-4);
if (nodeid != our_nodeid)
if (target_nodeid == our_nodeid || target_nodeid == 0)
process_message(cluster_client, (char *)msg+OPENAIS_CSID_LEN,
msg_len-OPENAIS_CSID_LEN, (char*)&nodeid);
}
static void openais_cpg_confchg_callback(cpg_handle_t handle,
const struct cpg_name *groupName,
const struct cpg_address *member_list, size_t member_list_entries,
const struct cpg_address *left_list, size_t left_list_entries,
const struct cpg_address *joined_list, size_t joined_list_entries)
{
int i;
struct node_info *ninfo;
DEBUGLOG("confchg callback. %" PRIsize_t " joined, "
FMTsize_t " left, %" PRIsize_t " members\n",
joined_list_entries, left_list_entries, member_list_entries);
for (i=0; i<joined_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&joined_list[i].nodeid,
OPENAIS_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = joined_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
OPENAIS_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
for (i=0; i<left_list_entries; i++) {
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&left_list[i].nodeid,
OPENAIS_CSID_LEN);
if (ninfo)
ninfo->state = NODE_DOWN;
}
for (i=0; i<member_list_entries; i++) {
if (member_list[i].nodeid == 0) continue;
ninfo = dm_hash_lookup_binary(node_hash,
(char *)&member_list[i].nodeid,
OPENAIS_CSID_LEN);
if (!ninfo) {
ninfo = malloc(sizeof(struct node_info));
if (!ninfo) {
break;
}
else {
ninfo->nodeid = member_list[i].nodeid;
dm_hash_insert_binary(node_hash,
(char *)&ninfo->nodeid,
OPENAIS_CSID_LEN, ninfo);
}
}
ninfo->state = NODE_CLVMD;
}
num_nodes = member_list_entries;
}
static int lck_dispatch(struct local_client *client, char *buf, int len,
const char *csid, struct local_client **new_client)
{
*new_client = NULL;
saLckDispatch(lck_handle, SA_DISPATCH_ONE);
return 1;
}
static int _init_cluster(void)
{
SaAisErrorT err;
SaVersionT ver = { 'B', 1, 1 };
int select_fd;
node_hash = dm_hash_create(100);
lock_hash = dm_hash_create(10);
err = cpg_initialize(&cpg_handle,
&openais_cpg_callbacks);
if (err != SA_AIS_OK) {
syslog(LOG_ERR, "Cannot initialise OpenAIS CPG service: %d",
err);
DEBUGLOG("Cannot initialise OpenAIS CPG service: %d", err);
return ais_to_errno(err);
}
err = saLckInitialize(&lck_handle,
NULL,
&ver);
if (err != SA_AIS_OK) {
cpg_finalize(cpg_handle);
syslog(LOG_ERR, "Cannot initialise OpenAIS lock service: %d",
err);
DEBUGLOG("Cannot initialise OpenAIS lock service: %d\n", err);
return ais_to_errno(err);
}
/* Connect to the clvmd group */
strcpy((char *)cpg_group_name.value, "clvmd");
cpg_group_name.length = strlen((char *)cpg_group_name.value);
err = cpg_join(cpg_handle, &cpg_group_name);
if (err != SA_AIS_OK) {
cpg_finalize(cpg_handle);
saLckFinalize(lck_handle);
syslog(LOG_ERR, "Cannot join clvmd process group");
DEBUGLOG("Cannot join clvmd process group: %d\n", err);
return ais_to_errno(err);
}
err = cpg_local_get(cpg_handle,
&our_nodeid);
if (err != SA_AIS_OK) {
cpg_finalize(cpg_handle);
saLckFinalize(lck_handle);
syslog(LOG_ERR, "Cannot get local node id\n");
return ais_to_errno(err);
}
DEBUGLOG("Our local node id is %d\n", our_nodeid);
saLckSelectionObjectGet(lck_handle, (SaSelectionObjectT *)&select_fd);
add_internal_client(select_fd, lck_dispatch);
DEBUGLOG("Connected to OpenAIS\n");
return 0;
}
static void _cluster_closedown(void)
{
saLckFinalize(lck_handle);
cpg_finalize(cpg_handle);
}
static void _get_our_csid(char *csid)
{
memcpy(csid, &our_nodeid, sizeof(int));
}
/* OpenAIS doesn't really have node names, so we
just use the node ID in hex instead */
static int _csid_from_name(char *csid, const char *name)
{
int nodeid;
struct node_info *ninfo;
if (sscanf(name, "%x", &nodeid) == 1) {
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (ninfo)
return nodeid;
}
return -1;
}
static int _name_from_csid(const char *csid, char *name)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (!ninfo)
{
sprintf(name, "UNKNOWN %s", print_openais_csid(csid));
return -1;
}
sprintf(name, "%x", ninfo->nodeid);
return 0;
}
static int _get_num_nodes(void)
{
DEBUGLOG("num_nodes = %d\n", num_nodes);
return num_nodes;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
struct node_info *ninfo;
ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
if (!ninfo) {
DEBUGLOG("openais_add_up_node no node_hash entry for csid %s\n",
print_openais_csid(csid));
return;
}
DEBUGLOG("openais_add_up_node %d\n", ninfo->nodeid);
ninfo->state = NODE_CLVMD;
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
struct dm_hash_node *hn;
struct node_info *ninfo;
int somedown = 0;
dm_hash_iterate(hn, node_hash)
{
char csid[OPENAIS_CSID_LEN];
ninfo = dm_hash_get_data(node_hash, hn);
memcpy(csid, dm_hash_get_key(node_hash, hn), OPENAIS_CSID_LEN);
DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
ninfo->state);
if (ninfo->state != NODE_DOWN)
callback(master_client, csid, ninfo->state == NODE_CLVMD);
if (ninfo->state != NODE_CLVMD)
somedown = -1;
}
return somedown;
}
/* Real locking */
static int _lock_resource(char *resource, int mode, int flags, int *lockid)
{
struct lock_info *linfo;
SaLckResourceHandleT res_handle;
SaAisErrorT err;
SaLckLockIdT lock_id;
SaLckLockStatusT lockStatus;
/* This needs to be converted from DLM/LVM2 value for OpenAIS LCK */
if (flags & LCK_NONBLOCK) flags = SA_LCK_LOCK_NO_QUEUE;
linfo = malloc(sizeof(struct lock_info));
if (!linfo)
return -1;
DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
linfo->lock_name.length = strlen(resource)+1;
strcpy((char *)linfo->lock_name.value, resource);
err = saLckResourceOpen(lck_handle, &linfo->lock_name,
SA_LCK_RESOURCE_CREATE, TIMEOUT, &res_handle);
if (err != SA_AIS_OK)
{
DEBUGLOG("ResourceOpen returned %d\n", err);
free(linfo);
return ais_to_errno(err);
}
err = saLckResourceLock(
res_handle,
&lock_id,
mode,
flags,
0,
SA_TIME_END,
&lockStatus);
if (err != SA_AIS_OK && lockStatus != SA_LCK_LOCK_GRANTED)
{
free(linfo);
saLckResourceClose(res_handle);
return ais_to_errno(err);
}
/* Wait for it to complete */
DEBUGLOG("lock_resource returning %d, lock_id=%" PRIx64 "\n",
err, lock_id);
linfo->lock_id = lock_id;
linfo->res_handle = res_handle;
dm_hash_insert(lock_hash, resource, linfo);
return ais_to_errno(err);
}
static int _unlock_resource(char *resource, int lockid)
{
SaAisErrorT err;
struct lock_info *linfo;
DEBUGLOG("unlock_resource %s\n", resource);
linfo = dm_hash_lookup(lock_hash, resource);
if (!linfo)
return 0;
DEBUGLOG("unlock_resource: lockid: %" PRIx64 "\n", linfo->lock_id);
err = saLckResourceUnlock(linfo->lock_id, SA_TIME_END);
if (err != SA_AIS_OK)
{
DEBUGLOG("Unlock returned %d\n", err);
return ais_to_errno(err);
}
/* Release the resource */
dm_hash_remove(lock_hash, resource);
saLckResourceClose(linfo->res_handle);
free(linfo);
return ais_to_errno(err);
}
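/*
 * The SA LCK service only offers PR and EX lock modes, so LVM's lock modes
 * are emulated below with two underlying locks per resource ("<res>-1" and
 * "<res>-2"):
 *
 *   LCK_EXCL        holds -1 EX and -2 EX  (conflicts with everything)
 *   LCK_READ/PREAD  holds -1 PR            (conflicts only with EXCL)
 *   LCK_WRITE       holds -2 EX            (conflicts with WRITE and EXCL)
 *
 * which approximates the CR/PW/EX behaviour provided by the DLM backend.
 */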
static int _sync_lock(const char *resource, int mode, int flags, int *lockid)
{
int status;
char lock1[strlen(resource)+3];
char lock2[strlen(resource)+3];
snprintf(lock1, sizeof(lock1), "%s-1", resource);
snprintf(lock2, sizeof(lock2), "%s-2", resource);
switch (mode)
{
case LCK_EXCL:
status = _lock_resource(lock1, SA_LCK_EX_LOCK_MODE, flags, lockid);
if (status)
goto out;
/* If we can't get this lock too then bail out */
status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, LCK_NONBLOCK,
lockid);
if (status == SA_LCK_LOCK_NOT_QUEUED)
{
_unlock_resource(lock1, *lockid);
status = -1;
errno = EAGAIN;
}
break;
case LCK_PREAD:
case LCK_READ:
status = _lock_resource(lock1, SA_LCK_PR_LOCK_MODE, flags, lockid);
if (status)
goto out;
_unlock_resource(lock2, *lockid);
break;
case LCK_WRITE:
status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, flags, lockid);
if (status)
goto out;
_unlock_resource(lock1, *lockid);
break;
default:
status = -1;
errno = EINVAL;
break;
}
out:
*lockid = mode;
return status;
}
static int _sync_unlock(const char *resource, int lockid)
{
int status = 0;
char lock1[strlen(resource)+3];
char lock2[strlen(resource)+3];
snprintf(lock1, sizeof(lock1), "%s-1", resource);
snprintf(lock2, sizeof(lock2), "%s-2", resource);
_unlock_resource(lock1, lockid);
_unlock_resource(lock2, lockid);
return status;
}
/* We are always quorate ! */
static int _is_quorate(void)
{
return 1;
}
static int _get_main_cluster_fd(void)
{
int select_fd;
cpg_fd_get(cpg_handle, &select_fd);
return select_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
cluster_client = fd;
*new_client = NULL;
cpg_dispatch(cpg_handle, SA_DISPATCH_ONE);
return 1;
}
static int _cluster_send_message(const void *buf, int msglen, const char *csid,
const char *errtext)
{
struct iovec iov[2];
SaAisErrorT err;
int target_node;
if (csid)
memcpy(&target_node, csid, OPENAIS_CSID_LEN);
else
target_node = 0;
iov[0].iov_base = &target_node;
iov[0].iov_len = sizeof(int);
iov[1].iov_base = (char *)buf;
iov[1].iov_len = msglen;
err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
return ais_to_errno(err);
}
/* We don't have a cluster name to report here */
static int _get_cluster_name(char *buf, int buflen)
{
strncpy(buf, "OpenAIS", buflen);
return 0;
}
static struct cluster_ops _cluster_openais_ops = {
.name = "openais",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _sync_lock,
.sync_unlock = _sync_unlock,
};
struct cluster_ops *init_openais_cluster(void)
{
if (!_init_cluster())
return &_cluster_openais_ops;
return NULL;
}


@@ -0,0 +1,382 @@
/*
* Copyright (C) 2009-2013 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "locking.h"
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include <sys/un.h>
#include <sys/socket.h>
#include <fcntl.h>
static const char SINGLENODE_CLVMD_SOCKNAME[] = DEFAULT_RUN_DIR "/clvmd_singlenode.sock";
static int listen_fd = -1;
static struct dm_hash_table *_locks;
static int _lockid;
static pthread_mutex_t _lock_mutex = PTHREAD_MUTEX_INITIALIZER;
/* Using one common condition for all locks for simplicity */
static pthread_cond_t _lock_cond = PTHREAD_COND_INITIALIZER;
struct lock {
struct dm_list list;
int lockid;
int mode;
};
static void close_comms(void)
{
if (listen_fd != -1 && close(listen_fd))
stack;
(void)unlink(SINGLENODE_CLVMD_SOCKNAME);
listen_fd = -1;
}
static int init_comms(void)
{
mode_t old_mask;
struct sockaddr_un addr = { .sun_family = AF_UNIX };
if (!dm_strncpy(addr.sun_path, SINGLENODE_CLVMD_SOCKNAME,
sizeof(addr.sun_path))) {
DEBUGLOG("%s: singlenode socket name too long.",
SINGLENODE_CLVMD_SOCKNAME);
return -1;
}
close_comms();
(void) dm_prepare_selinux_context(SINGLENODE_CLVMD_SOCKNAME, S_IFSOCK);
old_mask = umask(0077);
listen_fd = socket(PF_UNIX, SOCK_STREAM, 0);
if (listen_fd < 0) {
DEBUGLOG("Can't create local socket: %s\n", strerror(errno));
goto error;
}
/* Set Close-on-exec */
if (fcntl(listen_fd, F_SETFD, 1)) {
DEBUGLOG("Setting CLOEXEC on client fd failed: %s\n", strerror(errno));
goto error;
}
if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
DEBUGLOG("Can't bind local socket: %s\n", strerror(errno));
goto error;
}
if (listen(listen_fd, 10) < 0) {
DEBUGLOG("Can't listen local socket: %s\n", strerror(errno));
goto error;
}
umask(old_mask);
(void) dm_prepare_selinux_context(NULL, 0);
return 0;
error:
umask(old_mask);
(void) dm_prepare_selinux_context(NULL, 0);
close_comms();
return -1;
}
static int _init_cluster(void)
{
int r;
if (!(_locks = dm_hash_create(128))) {
DEBUGLOG("Failed to allocate single-node hash table.\n");
return 1;
}
r = init_comms();
if (r) {
dm_hash_destroy(_locks);
_locks = NULL;
return r;
}
DEBUGLOG("Single-node cluster initialised.\n");
return 0;
}
static void _cluster_closedown(void)
{
close_comms();
/* If there is any awaited resource, kill it softly */
pthread_mutex_lock(&_lock_mutex);
dm_hash_destroy(_locks);
_locks = NULL;
_lockid = 0;
pthread_cond_broadcast(&_lock_cond); /* wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
}
static void _get_our_csid(char *csid)
{
int nodeid = 1;
memcpy(csid, &nodeid, sizeof(int));
}
static int _csid_from_name(char *csid, const char *name)
{
return 1;
}
static int _name_from_csid(const char *csid, char *name)
{
strcpy(name, "SINGLENODE");
return 0;
}
static int _get_num_nodes(void)
{
return 1;
}
/* Node is now known to be running a clvmd */
static void _add_up_node(const char *csid)
{
}
/* Call a callback for each node, so the caller knows whether it's up or down */
static int _cluster_do_node_callback(struct local_client *master_client,
void (*callback)(struct local_client *,
const char *csid, int node_up))
{
return 0;
}
int _lock_file(const char *file, uint32_t flags);
static const char *_get_mode(int mode)
{
switch (mode) {
case LCK_NULL: return "NULL";
case LCK_READ: return "READ";
case LCK_PREAD: return "PREAD";
case LCK_WRITE: return "WRITE";
case LCK_EXCL: return "EXCLUSIVE";
case LCK_UNLOCK: return "UNLOCK";
default: return "????";
}
}
/* Real locking */
static int _lock_resource(const char *resource, int mode, int flags, int *lockid)
{
/* DLM table of allowed lock mode combinations */
static const int _dlm_table[6][6] = {
/* Mode NL CR CW PR PW EX */
/* NL */ { 1, 1, 1, 1, 1, 1},
/* CR */ { 1, 1, 1, 1, 1, 0},
/* CW */ { 1, 1, 1, 0, 0, 0},
/* PR */ { 1, 1, 0, 1, 0, 0},
/* PW */ { 1, 1, 0, 0, 0, 0},
/* EX */ { 1, 0, 0, 0, 0, 0}
};
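/*
 * The table is indexed as _dlm_table[requested_mode][held_mode]: 1 means
 * the new request is compatible with a lock already held, 0 means the
 * request must wait (or fail immediately with LCKF_NOQUEUE).
 */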
struct lock *lck = NULL, *lckt;
struct dm_list *head;
DEBUGLOG("Locking resource %s, flags=0x%02x (%s%s%s), mode=%s (%d)\n",
resource, flags,
(flags & LCKF_NOQUEUE) ? "NOQUEUE" : "",
((flags & (LCKF_NOQUEUE | LCKF_CONVERT)) ==
(LCKF_NOQUEUE | LCKF_CONVERT)) ? "|" : "",
(flags & LCKF_CONVERT) ? "CONVERT" : "",
_get_mode(mode), mode);
mode &= LCK_TYPE_MASK;
pthread_mutex_lock(&_lock_mutex);
retry:
if (!(head = dm_hash_lookup(_locks, resource))) {
if (flags & LCKF_CONVERT) {
/* In real DLM, lock is identified only by lockid, resource is not used */
DEBUGLOG("Unlocked resource %s cannot be converted\n", resource);
goto_bad;
}
/* Add new locked resource */
if (!(head = dm_malloc(sizeof(struct dm_list))) ||
!dm_hash_insert(_locks, resource, head)) {
dm_free(head);
goto_bad;
}
dm_list_init(head);
} else /* Update/convert locked resource */
dm_list_iterate_items(lck, head) {
/* Check if all existing locks are compatible with the requested lock */
if (flags & LCKF_CONVERT) {
if (lck->lockid != *lockid)
continue;
DEBUGLOG("Converting resource %s lockid=%d mode:%s -> %s...\n",
resource, lck->lockid, _get_mode(lck->mode), _get_mode(mode));
dm_list_iterate_items(lckt, head) {
if ((lckt->lockid != *lockid) &&
!_dlm_table[mode][lckt->mode]) {
if (!(flags & LCKF_NOQUEUE) &&
/* TODO: Real DLM uses conversion queues here */
!pthread_cond_wait(&_lock_cond, &_lock_mutex) &&
_locks) /* End of the game? */
goto retry;
goto bad;
}
}
lck->mode = mode; /* Lock is now converted */
goto out;
} else if (!_dlm_table[mode][lck->mode]) {
DEBUGLOG("Resource %s already locked lockid=%d, mode:%s\n",
resource, lck->lockid, _get_mode(lck->mode));
if (!(flags & LCKF_NOQUEUE) &&
!pthread_cond_wait(&_lock_cond, &_lock_mutex) &&
_locks) { /* End of the game? */
DEBUGLOG("Resource %s retrying lock in mode:%s...\n",
resource, _get_mode(mode));
goto retry;
}
goto bad;
}
}
if (!(flags & LCKF_CONVERT)) {
if (!(lck = dm_malloc(sizeof(struct lock))))
goto_bad;
*lockid = lck->lockid = ++_lockid;
lck->mode = mode;
dm_list_add(head, &lck->list);
}
out:
pthread_cond_broadcast(&_lock_cond); /* to wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Locked resource %s, lockid=%d, mode=%s\n",
resource, lck->lockid, _get_mode(lck->mode));
return 0;
bad:
pthread_cond_broadcast(&_lock_cond); /* to wakeup waiters */
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Failed to lock resource %s\n", resource);
return 1; /* fail */
}
static int _unlock_resource(const char *resource, int lockid)
{
struct lock *lck;
struct dm_list *head;
int r = 1;
if (lockid < 0) {
DEBUGLOG("Not tracking unlock of lockid -1: %s, lockid=%d\n",
resource, lockid);
return 1;
}
DEBUGLOG("Unlocking resource %s, lockid=%d\n", resource, lockid);
pthread_mutex_lock(&_lock_mutex);
pthread_cond_broadcast(&_lock_cond); /* wakeup waiters */
if (!(head = dm_hash_lookup(_locks, resource))) {
pthread_mutex_unlock(&_lock_mutex);
DEBUGLOG("Resource %s is not locked.\n", resource);
return 1;
}
dm_list_iterate_items(lck, head)
if (lck->lockid == lockid) {
dm_list_del(&lck->list);
dm_free(lck);
r = 0;
goto out;
}
DEBUGLOG("Resource %s has wrong lockid %d.\n", resource, lockid);
out:
if (dm_list_empty(head)) {
//DEBUGLOG("Resource %s is no longer hashed (lockid=%d).\n", resource, lockid);
dm_hash_remove(_locks, resource);
dm_free(head);
}
pthread_mutex_unlock(&_lock_mutex);
return r;
}
static int _is_quorate(void)
{
return 1;
}
static int _get_main_cluster_fd(void)
{
return listen_fd;
}
static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
const char *csid,
struct local_client **new_client)
{
return 1;
}
static int _cluster_send_message(const void *buf, int msglen,
const char *csid,
const char *errtext)
{
return 0;
}
static int _get_cluster_name(char *buf, int buflen)
{
return dm_strncpy(buf, "localcluster", buflen) ? 0 : 1;
}
static struct cluster_ops _cluster_singlenode_ops = {
.name = "singlenode",
.cluster_init_completed = NULL,
.cluster_send_message = _cluster_send_message,
.name_from_csid = _name_from_csid,
.csid_from_name = _csid_from_name,
.get_num_nodes = _get_num_nodes,
.cluster_fd_callback = _cluster_fd_callback,
.get_main_cluster_fd = _get_main_cluster_fd,
.cluster_do_node_callback = _cluster_do_node_callback,
.is_quorate = _is_quorate,
.get_our_csid = _get_our_csid,
.add_up_node = _add_up_node,
.reread_config = NULL,
.cluster_closedown = _cluster_closedown,
.get_cluster_name = _get_cluster_name,
.sync_lock = _lock_resource,
.sync_unlock = _unlock_resource,
};
struct cluster_ops *init_singlenode_cluster(void)
{
if (!_init_cluster())
return &_cluster_singlenode_ops;
return NULL;
}

daemons/clvmd/clvmd.c (new file, 2407 lines)

File diff suppressed because it is too large.

daemons/clvmd/clvmd.h (new file, 126 lines)

@@ -0,0 +1,126 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _CLVMD_H
#define _CLVMD_H
#define CLVMD_MAJOR_VERSION 0
#define CLVMD_MINOR_VERSION 2
#define CLVMD_PATCH_VERSION 1
/* Default time (in seconds) we will wait for all remote commands to execute
before declaring them dead */
#define DEFAULT_CMD_TIMEOUT 60
/* One of these for each reply we get from command execution on a node */
struct node_reply {
char node[MAX_CLUSTER_MEMBER_NAME_LEN];
char *replymsg;
int status;
struct node_reply *next;
};
typedef enum {DEBUG_OFF, DEBUG_STDERR, DEBUG_SYSLOG} debug_t;
/*
* These exist for the use of local sockets only when we are
* collecting responses from all cluster nodes
*/
struct localsock_bits {
struct node_reply *replies;
int num_replies;
int expected_replies;
time_t sent_time; /* So we can check for timeouts */
int in_progress; /* Only execute one cmd at a time per client */
int sent_out; /* Flag to indicate that a command was sent
to remote nodes */
void *private; /* Private area for command processor use */
void *cmd; /* Whole command as passed down local socket */
int cmd_len; /* Length of above */
int pipe; /* Pipe to send PRE completion status down */
int finished; /* Flag to tell subthread to exit */
int all_success; /* Set to 0 if any node (or the pre_command)
failed */
int cleanup_needed; /* helper for cleanup_zombie */
struct local_client *pipe_client;
pthread_t threadid;
enum { PRE_COMMAND, POST_COMMAND } state;
pthread_mutex_t mutex; /* Main thread and worker synchronisation */
pthread_cond_t cond;
};
/* Entries for PIPE clients */
struct pipe_bits {
struct local_client *client; /* Actual (localsock) client */
pthread_t threadid; /* Our own copy of the thread id */
};
/* Entries for Network socket clients */
struct netsock_bits {
void *private;
int flags;
};
typedef int (*fd_callback_t) (struct local_client * fd, char *buf, int len,
const char *csid,
struct local_client ** new_client);
/* One of these for each fd we are listening on */
struct local_client {
int fd;
enum { CLUSTER_MAIN_SOCK, CLUSTER_DATA_SOCK, LOCAL_RENDEZVOUS,
LOCAL_SOCK, THREAD_PIPE, CLUSTER_INTERNAL } type;
struct local_client *next;
unsigned short xid;
fd_callback_t callback;
uint8_t removeme;
union {
struct localsock_bits localsock;
struct pipe_bits pipe;
struct netsock_bits net;
} bits;
};
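/*
 * Which member of 'bits' is valid depends on 'type': presumably localsock
 * for LOCAL_SOCK clients, pipe for THREAD_PIPE clients and net for the
 * cluster socket types.
 */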
#define DEBUGLOG(fmt, args...) debuglog(fmt, ## args)
#ifndef max
#define max(a,b) ((a)>(b)?(a):(b))
#endif
/* The real command processor is in clvmd-command.c */
extern int do_command(struct local_client *client, struct clvm_header *msg,
int msglen, char **buf, int buflen, int *retlen);
/* Pre and post command routines are called only on the local node */
extern int do_pre_command(struct local_client *client);
extern int do_post_command(struct local_client *client);
extern void cmd_client_cleanup(struct local_client *client);
extern int add_client(struct local_client *new_client);
extern void clvmd_cluster_init_completed(void);
extern void process_message(struct local_client *client, char *buf,
int len, const char *csid);
extern void debuglog(const char *fmt, ... )
__attribute__ ((format(printf, 1, 2)));
void clvmd_set_debug(debug_t new_de);
debug_t clvmd_get_debug(void);
int clvmd_get_foreground(void);
int sync_lock(const char *resource, int mode, int flags, int *lockid);
int sync_unlock(const char *resource, int lockid);
#endif


@@ -0,0 +1,941 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "clvmd-common.h"
#include <pthread.h>
#include "clvm.h"
#include "clvmd-comms.h"
#include "clvmd.h"
#include "lvm-functions.h"
/* LVM2 headers */
#include "toolcontext.h"
#include "lvmcache.h"
#include "lvm-globals.h"
#include "activate.h"
#include "archiver.h"
#include "memlock.h"
#include <syslog.h>
static struct cmd_context *cmd = NULL;
static struct dm_hash_table *lv_hash = NULL;
static pthread_mutex_t lv_hash_lock;
static pthread_mutex_t lvm_lock;
static char last_error[1024];
struct lv_info {
int lock_id;
int lock_mode;
};
static const char *decode_full_locking_cmd(uint32_t cmdl)
{
static char buf[128];
const char *type;
const char *scope;
const char *command;
switch (cmdl & LCK_TYPE_MASK) {
case LCK_NULL:
type = "NULL";
break;
case LCK_READ:
type = "READ";
break;
case LCK_PREAD:
type = "PREAD";
break;
case LCK_WRITE:
type = "WRITE";
break;
case LCK_EXCL:
type = "EXCL";
break;
case LCK_UNLOCK:
type = "UNLOCK";
break;
default:
type = "unknown";
break;
}
switch (cmdl & LCK_SCOPE_MASK) {
case LCK_VG:
scope = "VG";
command = "LCK_VG";
break;
case LCK_LV:
scope = "LV";
switch (cmdl & LCK_MASK) {
case LCK_LV_EXCLUSIVE & LCK_MASK:
command = "LCK_LV_EXCLUSIVE";
break;
case LCK_LV_SUSPEND & LCK_MASK:
command = "LCK_LV_SUSPEND";
break;
case LCK_LV_RESUME & LCK_MASK:
command = "LCK_LV_RESUME";
break;
case LCK_LV_ACTIVATE & LCK_MASK:
command = "LCK_LV_ACTIVATE";
break;
case LCK_LV_DEACTIVATE & LCK_MASK:
command = "LCK_LV_DEACTIVATE";
break;
default:
command = "unknown";
break;
}
break;
default:
scope = "unknown";
command = "unknown";
break;
}
sprintf(buf, "0x%x %s (%s|%s%s%s%s%s)", cmdl, command, type, scope,
cmdl & LCK_NONBLOCK ? "|NONBLOCK" : "",
cmdl & LCK_HOLD ? "|HOLD" : "",
cmdl & LCK_CLUSTER_VG ? "|CLUSTER_VG" : "",
cmdl & LCK_CACHE ? "|CACHE" : "");
return buf;
}
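/*
 * Illustrative example: a clustered LV activation request decodes to
 * something like "0x... LCK_LV_ACTIVATE (READ|LV|...|CLUSTER_VG)", which
 * is the form seen in the DEBUGLOG output of do_lock_lv() below.
 */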
/*
* Only processes 8 bits: excludes LCK_CACHE.
*/
static const char *decode_locking_cmd(unsigned char cmdl)
{
return decode_full_locking_cmd((uint32_t) cmdl);
}
static const char *decode_flags(unsigned char flags)
{
static char buf[128];
int len;
len = sprintf(buf, "0x%x ( %s%s%s%s%s%s%s%s)", flags,
flags & LCK_PARTIAL_MODE ? "PARTIAL_MODE|" : "",
flags & LCK_MIRROR_NOSYNC_MODE ? "MIRROR_NOSYNC|" : "",
flags & LCK_DMEVENTD_MONITOR_MODE ? "DMEVENTD_MONITOR|" : "",
flags & LCK_ORIGIN_ONLY_MODE ? "ORIGIN_ONLY|" : "",
flags & LCK_TEST_MODE ? "TEST|" : "",
flags & LCK_CONVERT_MODE ? "CONVERT|" : "",
flags & LCK_DMEVENTD_MONITOR_IGNORE ? "DMEVENTD_MONITOR_IGNORE|" : "",
flags & LCK_REVERT_MODE ? "REVERT|" : "");
if (len > 1)
buf[len - 2] = ' ';
else
buf[0] = '\0';
return buf;
}
char *get_last_lvm_error(void)
{
return last_error;
}
/*
* Hash lock info helpers
*/
static struct lv_info *lookup_info(const char *resource)
{
struct lv_info *lvi;
pthread_mutex_lock(&lv_hash_lock);
lvi = dm_hash_lookup(lv_hash, resource);
pthread_mutex_unlock(&lv_hash_lock);
return lvi;
}
static int insert_info(const char *resource, struct lv_info *lvi)
{
int ret;
pthread_mutex_lock(&lv_hash_lock);
ret = dm_hash_insert(lv_hash, resource, lvi);
pthread_mutex_unlock(&lv_hash_lock);
return ret;
}
static void remove_info(const char *resource)
{
int num_open;
pthread_mutex_lock(&lv_hash_lock);
dm_hash_remove(lv_hash, resource);
/* When the last lock is removed, check that no devices have been left open */
if (!dm_hash_get_first(lv_hash)) {
if (critical_section())
log_error(INTERNAL_ERROR "No volumes are locked however clvmd is in activation mode critical section.");
if ((num_open = dev_cache_check_for_open_devices()))
log_error(INTERNAL_ERROR "No volumes are locked however %d devices are still open.", num_open);
}
pthread_mutex_unlock(&lv_hash_lock);
}
/*
* Return the mode a lock is currently held at (or -1 if not held)
*/
static int get_current_lock(char *resource)
{
struct lv_info *lvi;
if ((lvi = lookup_info(resource)))
return lvi->lock_mode;
return -1;
}
void init_lvhash(void)
{
/* Create hash table for keeping LV locks & status */
lv_hash = dm_hash_create(1024);
pthread_mutex_init(&lv_hash_lock, NULL);
pthread_mutex_init(&lvm_lock, NULL);
}
/* Called at shutdown to tidy the lockspace */
void destroy_lvhash(void)
{
struct dm_hash_node *v;
struct lv_info *lvi;
char *resource;
int status;
pthread_mutex_lock(&lv_hash_lock);
dm_hash_iterate(v, lv_hash) {
lvi = dm_hash_get_data(lv_hash, v);
resource = dm_hash_get_key(lv_hash, v);
if ((status = sync_unlock(resource, lvi->lock_id)))
DEBUGLOG("unlock_all. unlock failed(%d): %s\n",
status, strerror(errno));
dm_free(lvi);
}
dm_hash_destroy(lv_hash);
lv_hash = NULL;
pthread_mutex_unlock(&lv_hash_lock);
}
/* Gets a real lock and keeps the info in the hash table */
static int hold_lock(char *resource, int mode, int flags)
{
int status;
int saved_errno;
struct lv_info *lvi;
/* Mask off invalid options */
flags &= LCKF_NOQUEUE | LCKF_CONVERT;
lvi = lookup_info(resource);
if (lvi) {
if (lvi->lock_mode == mode) {
DEBUGLOG("hold_lock, lock mode %d already held\n",
mode);
return 0;
}
if ((lvi->lock_mode == LCK_EXCL) && (mode == LCK_WRITE)) {
DEBUGLOG("hold_lock, lock already held LCK_EXCL, "
"ignoring LCK_WRITE request\n");
return 0;
}
}
/* Only allow explicit conversions */
if (lvi && !(flags & LCKF_CONVERT)) {
errno = EBUSY;
return -1;
}
if (lvi) {
/* Already exists - convert it */
status = sync_lock(resource, mode, flags, &lvi->lock_id);
saved_errno = errno;
if (!status)
lvi->lock_mode = mode;
else
DEBUGLOG("hold_lock. convert to %d failed: %s\n", mode,
strerror(errno));
errno = saved_errno;
} else {
if (!(lvi = dm_malloc(sizeof(struct lv_info)))) {
errno = ENOMEM;
return -1;
}
lvi->lock_mode = mode;
lvi->lock_id = 0;
status = sync_lock(resource, mode, flags & ~LCKF_CONVERT, &lvi->lock_id);
saved_errno = errno;
if (status) {
dm_free(lvi);
DEBUGLOG("hold_lock. lock at %d failed: %s\n", mode,
strerror(errno));
} else
if (!insert_info(resource, lvi)) {
errno = ENOMEM;
return -1;
}
errno = saved_errno;
}
return status;
}
/* Unlock and remove it from the hash table */
static int hold_unlock(char *resource)
{
struct lv_info *lvi;
int status;
int saved_errno;
if (!(lvi = lookup_info(resource))) {
DEBUGLOG("hold_unlock, lock not already held\n");
return 0;
}
status = sync_unlock(resource, lvi->lock_id);
saved_errno = errno;
if (!status) {
remove_info(resource);
dm_free(lvi);
} else {
DEBUGLOG("hold_unlock. unlock failed(%d): %s\n", status,
strerror(errno));
}
errno = saved_errno;
return status;
}
/* Watch the return codes here.
liblvm API functions return 1 (true) for success, 0 (false) for failure and don't set errno.
libdlm API functions return 0 for success, -1 for failure and do set errno.
The functions in this file return 0 for success or >0 for failure (where the return code is errno).
*/
/* Activate LV exclusive or non-exclusive */
static int do_activate_lv(char *resource, unsigned char command, unsigned char lock_flags, int mode)
{
int oldmode;
int status;
int activate_lv;
int exclusive = 0;
struct lvinfo lvi;
/* Is it already open ? */
oldmode = get_current_lock(resource);
if (oldmode == mode && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_activate_lv, lock already held at %d\n", oldmode);
return 0; /* Nothing to do */
}
/* Does the config file want us to activate this LV ? */
if (!lv_activation_filter(cmd, resource, &activate_lv, NULL))
return EIO;
if (!activate_lv)
return 0; /* Success, we did nothing! */
/* Do we need to activate exclusively? */
if ((activate_lv == 2) || (mode == LCK_EXCL)) {
exclusive = 1;
mode = LCK_EXCL;
}
/*
* Try to get the lock if it's a clustered volume group.
* Use lock conversion only if requested, to prevent implicit conversion
* of exclusive lock to shared one during activation.
*/
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_lock(resource, mode, LCKF_NOQUEUE | ((lock_flags & LCK_CONVERT_MODE) ? LCKF_CONVERT:0));
if (status) {
/* Return an LVM-sensible error for this.
* Forcing EIO makes the upper level return this text
* rather than the strerror text for EAGAIN.
*/
if (errno == EAGAIN) {
sprintf(last_error, "Volume is busy on another node");
errno = EIO;
}
return errno;
}
}
/* If it's suspended then resume it */
if (!lv_info_by_lvid(cmd, resource, 0, &lvi, 0, 0))
goto error;
if (lvi.suspended) {
critical_section_inc(cmd, "resuming");
if (!lv_resume(cmd, resource, 0, NULL)) {
critical_section_dec(cmd, "resumed");
goto error;
}
}
/* Now activate it */
if (!lv_activate(cmd, resource, exclusive, 0, 0, NULL))
goto error;
return 0;
error:
if (!test_mode() && (oldmode == -1 || oldmode != mode))
(void)hold_unlock(resource);
return EIO;
}
/* Resume the LV if it was active */
static int do_resume_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode, origin_only, exclusive, revert;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_resume_lv, lock not already held\n");
return 0; /* We don't need to do anything */
}
origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
exclusive = (oldmode == LCK_EXCL) ? 1 : 0;
revert = (lock_flags & LCK_REVERT_MODE) ? 1 : 0;
if (!lv_resume_if_active(cmd, resource, origin_only, exclusive, revert, NULL))
return EIO;
return 0;
}
/* Suspend the device if active */
static int do_suspend_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode;
unsigned origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
unsigned exclusive;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_suspend_lv, lock not already held\n");
return 0; /* Not active, so it's OK */
}
exclusive = (oldmode == LCK_EXCL) ? 1 : 0;
/* Always call lv_suspend to read committed and precommitted data */
if (!lv_suspend_if_active(cmd, resource, origin_only, exclusive, NULL, NULL))
return EIO;
return 0;
}
static int do_deactivate_lv(char *resource, unsigned char command, unsigned char lock_flags)
{
int oldmode;
int status;
/* Is it open ? */
oldmode = get_current_lock(resource);
if (oldmode == -1 && (command & LCK_CLUSTER_VG)) {
DEBUGLOG("do_deactivate_lock, lock not already held\n");
return 0; /* We don't need to do anything */
}
if (!lv_deactivate(cmd, resource, NULL))
return EIO;
if (!test_mode() && command & LCK_CLUSTER_VG) {
status = hold_unlock(resource);
if (status)
return errno;
}
return 0;
}
const char *do_lock_query(char *resource)
{
int mode;
const char *type;
mode = get_current_lock(resource);
switch (mode) {
case LCK_NULL: type = "NL"; break;
case LCK_READ: type = "CR"; break;
case LCK_PREAD:type = "PR"; break;
case LCK_WRITE:type = "PW"; break;
case LCK_EXCL: type = "EX"; break;
default: type = NULL;
}
DEBUGLOG("do_lock_query: resource '%s', mode %i (%s)\n", resource, mode, type ?: "--");
return type;
}
/* This is the LOCK_LV part that happens on all nodes in the cluster -
it is responsible for the interaction with device-mapper and LVM */
int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
{
int status = 0;
DEBUGLOG("do_lock_lv: resource '%s', cmd = %s, flags = %s, critical_section = %d\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags), critical_section());
if (!cmd->initialized.config || config_files_changed(cmd)) {
/* Reinitialise various settings inc. logging, filters */
if (do_refresh_cache()) {
log_error("Updated config file invalid. Aborting.");
return EINVAL;
}
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
if (lock_flags & LCK_MIRROR_NOSYNC_MODE)
init_mirror_in_sync(1);
if (lock_flags & LCK_DMEVENTD_MONITOR_IGNORE)
init_dmeventd_monitor(DMEVENTD_MONITOR_IGNORE);
else {
if (lock_flags & LCK_DMEVENTD_MONITOR_MODE)
init_dmeventd_monitor(1);
else
init_dmeventd_monitor(0);
}
cmd->partial_activation = (lock_flags & LCK_PARTIAL_MODE) ? 1 : 0;
/* clvmd should never try to read suspended device */
init_ignore_suspended_devices(1);
switch (command & LCK_MASK) {
case LCK_LV_EXCLUSIVE:
status = do_activate_lv(resource, command, lock_flags, LCK_EXCL);
break;
case LCK_LV_SUSPEND:
status = do_suspend_lv(resource, command, lock_flags);
break;
case LCK_UNLOCK:
case LCK_LV_RESUME: /* if active */
status = do_resume_lv(resource, command, lock_flags);
break;
case LCK_LV_ACTIVATE:
status = do_activate_lv(resource, command, lock_flags, LCK_READ);
break;
case LCK_LV_DEACTIVATE:
status = do_deactivate_lv(resource, command, lock_flags);
break;
default:
DEBUGLOG("Invalid LV command 0x%x\n", command);
status = EINVAL;
break;
}
if (lock_flags & LCK_MIRROR_NOSYNC_MODE)
init_mirror_in_sync(0);
cmd->partial_activation = 0;
/* clean the pool for another command */
dm_pool_empty(cmd->mem);
init_test(0);
pthread_mutex_unlock(&lvm_lock);
DEBUGLOG("Command return is %d, critical_section is %d\n", status, critical_section());
return status;
}
/* Functions to do on the local node only BEFORE the cluster-wide stuff above happens */
int pre_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
{
/* Nearly everything happens cluster-wide, apart from SUSPEND. Here we take the
lock out on this node (because we are the node modifying the metadata)
before suspending cluster-wide.
LCKF_CONVERT is always used, because the local node is about to modify the metadata.
*/
if ((command & (LCK_SCOPE_MASK | LCK_TYPE_MASK)) == LCK_LV_SUSPEND &&
(command & LCK_CLUSTER_VG)) {
DEBUGLOG("pre_lock_lv: resource '%s', cmd = %s, flags = %s\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags));
if (!(lock_flags & LCK_TEST_MODE) &&
hold_lock(resource, LCK_WRITE, LCKF_NOQUEUE | LCKF_CONVERT))
return errno;
}
return 0;
}
/* Functions to do on the local node only AFTER the cluster-wide stuff above happens */
int post_lock_lv(unsigned char command, unsigned char lock_flags,
char *resource)
{
int status;
unsigned origin_only = (lock_flags & LCK_ORIGIN_ONLY_MODE) ? 1 : 0;
/* Opposite of above, done on resume after a metadata update */
if ((command & (LCK_SCOPE_MASK | LCK_TYPE_MASK)) == LCK_LV_RESUME &&
(command & LCK_CLUSTER_VG)) {
int oldmode;
DEBUGLOG("post_lock_lv: resource '%s', cmd = %s, flags = %s\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags));
/* If the lock state is PW then restore it to what it was */
oldmode = get_current_lock(resource);
if (oldmode == LCK_WRITE) {
struct lvinfo lvi;
pthread_mutex_lock(&lvm_lock);
status = lv_info_by_lvid(cmd, resource, origin_only, &lvi, 0, 0);
pthread_mutex_unlock(&lvm_lock);
if (!status)
return EIO;
if (!(lock_flags & LCK_TEST_MODE)) {
if (lvi.exists) {
if (hold_lock(resource, LCK_READ, LCKF_CONVERT))
return errno;
} else if (hold_unlock(resource))
return errno;
}
}
}
return 0;
}
/* Check if a VG is in use by LVM1 so we don't stomp on it */
int do_check_lvm1(const char *vgname)
{
int status;
status = check_lvm1_vg_inactive(cmd, vgname);
return status == 1 ? 0 : EBUSY;
}
int do_refresh_cache(void)
{
DEBUGLOG("Refreshing context\n");
log_notice("Refreshing context");
pthread_mutex_lock(&lvm_lock);
if (!refresh_toolcontext(cmd)) {
pthread_mutex_unlock(&lvm_lock);
return -1;
}
cmd->use_aio = 0;
init_full_scan_done(0);
init_ignore_suspended_devices(1);
lvmcache_force_next_label_scan();
lvmcache_label_scan(cmd);
dm_pool_empty(cmd->mem);
pthread_mutex_unlock(&lvm_lock);
return 0;
}
/*
* Handle VG lock - drop metadata or update lvmcache state
*/
void do_lock_vg(unsigned char command, unsigned char lock_flags, char *resource)
{
uint32_t lock_cmd = command;
char *vgname = resource + 2;
lock_cmd &= (LCK_SCOPE_MASK | LCK_TYPE_MASK | LCK_HOLD);
/*
* Check if LCK_CACHE should be set. All P_ locks except the P_# ones are cache related.
*/
if (strncmp(resource, "P_#", 3) && !strncmp(resource, "P_", 2))
lock_cmd |= LCK_CACHE;
DEBUGLOG("do_lock_vg: resource '%s', cmd = %s, flags = %s, critical_section = %d\n",
resource, decode_full_locking_cmd(lock_cmd), decode_flags(lock_flags), critical_section());
/* P_#global causes a full cache refresh */
if (!strcmp(resource, "P_" VG_GLOBAL)) {
do_refresh_cache();
return;
}
pthread_mutex_lock(&lvm_lock);
init_test((lock_flags & LCK_TEST_MODE) ? 1 : 0);
switch (lock_cmd) {
case LCK_VG_COMMIT:
DEBUGLOG("vg_commit notification for VG %s\n", vgname);
lvmcache_commit_metadata(vgname);
break;
case LCK_VG_REVERT:
DEBUGLOG("vg_revert notification for VG %s\n", vgname);
lvmcache_drop_metadata(vgname, 1);
break;
case LCK_VG_DROP_CACHE:
default:
DEBUGLOG("Invalidating cached metadata for VG %s\n", vgname);
lvmcache_drop_metadata(vgname, 0);
}
init_test(0);
pthread_mutex_unlock(&lvm_lock);
}
/*
* Ideally, clvmd should be started before any LVs are active
* but this may not be the case...
* I suppose this also comes in handy if clvmd crashes, not that it would!
*/
static int get_initial_state(struct dm_hash_table *excl_uuid)
{
int lock_mode;
char lv[65], vg[65], flags[26], vg_flags[26]; /* with space for '\0' */
char uuid[65];
char line[255];
char *lvs_cmd;
const char *lvm_binary = getenv("LVM_BINARY") ? : LVM_PATH;
FILE *lvs;
if (dm_asprintf(&lvs_cmd, "%s lvs --config 'log{command_names=0 prefix=\"\"}' "
"--nolocking --noheadings -o vg_uuid,lv_uuid,lv_attr,vg_attr",
lvm_binary) < 0)
return_0;
/* FIXME: Maybe link and use liblvm2cmd directly instead of fork */
if (!(lvs = popen(lvs_cmd, "r"))) {
dm_free(lvs_cmd);
return 0;
}
while (fgets(line, sizeof(line), lvs)) {
if (sscanf(line, "%64s %64s %25s %25s\n", vg, lv, flags, vg_flags) == 4) {
/* States: s:suspended a:active S:dropped snapshot I:invalid snapshot */
if (strlen(vg) == 38 && /* is it a valid UUID? */
(flags[4] == 'a' || flags[4] == 's') && /* is it active or suspended? */
vg_flags[5] == 'c') { /* is it clustered ? */
/* Strip the hyphens and concatenate the VG and LV UUIDs into one 64-character resource name */
memcpy(&uuid[0], &vg[0], 6);
memcpy(&uuid[6], &vg[7], 4);
memcpy(&uuid[10], &vg[12], 4);
memcpy(&uuid[14], &vg[17], 4);
memcpy(&uuid[18], &vg[22], 4);
memcpy(&uuid[22], &vg[27], 4);
memcpy(&uuid[26], &vg[32], 6);
memcpy(&uuid[32], &lv[0], 6);
memcpy(&uuid[38], &lv[7], 4);
memcpy(&uuid[42], &lv[12], 4);
memcpy(&uuid[46], &lv[17], 4);
memcpy(&uuid[50], &lv[22], 4);
memcpy(&uuid[54], &lv[27], 4);
memcpy(&uuid[58], &lv[32], 6);
uuid[64] = '\0';
/* Look for this lock in the list of EX locks
we were passed on the command-line */
lock_mode = (dm_hash_lookup(excl_uuid, uuid)) ?
LCK_EXCL : LCK_READ;
DEBUGLOG("getting initial lock for %s\n", uuid);
if (hold_lock(uuid, lock_mode, LCKF_NOQUEUE))
DEBUGLOG("Failed to hold lock %s\n", uuid);
}
}
}
if (pclose(lvs))
DEBUGLOG("lvs pclose failed: %s\n", strerror(errno));
dm_free(lvs_cmd);
return 1;
}
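/*
 * Sketch of the lvs output parsed above (field order follows the -o list):
 *
 *   <vg_uuid (38 chars, 6-4-4-4-4-4-6)> <lv_uuid> <lv_attr> <vg_attr>
 *
 * The two UUIDs are concatenated with their hyphens stripped to form the
 * 64-character resource name passed to hold_lock().
 */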
static void lvm2_log_fn(int level, const char *file, int line, int dm_errno,
const char *message)
{
/* Send messages to the normal LVM2 logging system too,
so we get debug output when it's asked for.
We need to NULL the function ptr otherwise it will just call
back into here! */
init_log_fn(NULL);
print_log(level, file, line, dm_errno, "%s", message);
init_log_fn(lvm2_log_fn);
/*
* Ignore non-error messages, but store the latest one for returning
* to the user.
*/
if (level != _LOG_ERR && level != _LOG_FATAL)
return;
strncpy(last_error, message, sizeof(last_error));
last_error[sizeof(last_error)-1] = '\0';
}
/* This checks some basic cluster-LVM configuration stuff */
static void check_config(void)
{
int locking_type;
locking_type = find_config_tree_int(cmd, global_locking_type_CFG, NULL);
if (locking_type == 3) /* compiled-in cluster support */
return;
if (locking_type == 2) { /* External library, check name */
const char *libname;
libname = find_config_tree_str(cmd, global_locking_library_CFG, NULL);
if (libname && strstr(libname, "liblvm2clusterlock.so"))
return;
log_error("Incorrect LVM locking library specified in lvm.conf, cluster operations may not work.");
return;
}
log_error("locking_type not set correctly in lvm.conf, cluster operations will not work.");
}
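/*
 * A minimal lvm.conf fragment that satisfies this check (assuming the
 * built-in cluster locking, i.e. locking_type 3):
 *
 *   global {
 *       locking_type = 3
 *   }
 */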
/* Backs up the LVM metadata if it has changed */
void lvm_do_backup(const char *vgname)
{
struct volume_group * vg;
int consistent = 0;
DEBUGLOG("Triggering backup of VG metadata for %s.\n", vgname);
pthread_mutex_lock(&lvm_lock);
vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, WARN_PV_READ, &consistent);
if (vg && consistent)
check_current_backup(vg);
else
log_error("Error backing up metadata, can't find VG for group %s", vgname);
release_vg(vg);
dm_pool_empty(cmd->mem);
pthread_mutex_unlock(&lvm_lock);
}
struct dm_hash_node *get_next_excl_lock(struct dm_hash_node *v, char **name)
{
struct lv_info *lvi;
*name = NULL;
if (!v)
v = dm_hash_get_first(lv_hash);
do {
if (v) {
lvi = dm_hash_get_data(lv_hash, v);
DEBUGLOG("Looking for EX locks. found %x mode %d\n", lvi->lock_id, lvi->lock_mode);
if (lvi->lock_mode == LCK_EXCL) {
*name = dm_hash_get_key(lv_hash, v);
}
v = dm_hash_get_next(lv_hash, v);
}
} while (v && !*name);
if (*name)
DEBUGLOG("returning EXclusive UUID %s\n", *name);
return v;
}
void lvm_do_fs_unlock(void)
{
pthread_mutex_lock(&lvm_lock);
DEBUGLOG("Syncing device names\n");
fs_unlock();
pthread_mutex_unlock(&lvm_lock);
}
/* Called to initialise the LVM context of the daemon */
int init_clvm(struct dm_hash_table *excl_uuid)
{
/* Use LOG_DAEMON for syslog messages instead of LOG_USER */
init_syslog(LOG_DAEMON);
openlog("clvmd", LOG_PID, LOG_DAEMON);
/* Initialise already held locks */
if (!get_initial_state(excl_uuid))
log_error("Cannot load initial lock states.");
if (!udev_init_library_context())
stack;
if (!(cmd = create_toolcontext(1, NULL, 0, 1, 1, 1))) {
log_error("Failed to allocate command context");
udev_fin_library_context();
return 0;
}
if (stored_errno()) {
destroy_toolcontext(cmd);
return 0;
}
cmd->cmd_line = "clvmd";
/* Check lvm.conf is setup for cluster-LVM */
check_config();
init_ignore_suspended_devices(1);
cmd->use_aio = 0;
/* Trap log messages so we can pass them back to the user */
init_log_fn(lvm2_log_fn);
memlock_inc_daemon(cmd);
return 1;
}
void destroy_lvm(void)
{
if (cmd) {
memlock_dec_daemon(cmd);
destroy_toolcontext(cmd);
udev_fin_library_context();
cmd = NULL;
}
}


@@ -0,0 +1,41 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* Functions in lvm-functions.c */
#ifndef _LVM_FUNCTIONS_H
#define _LVM_FUNCTIONS_H
extern int pre_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern int do_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern const char *do_lock_query(char *resource);
extern int post_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern int do_check_lvm1(const char *vgname);
extern int do_refresh_cache(void);
extern int init_clvm(struct dm_hash_table *excl_uuid);
extern void destroy_lvm(void);
extern void init_lvhash(void);
extern void destroy_lvhash(void);
extern void lvm_do_backup(const char *vgname);
extern char *get_last_lvm_error(void);
extern void do_lock_vg(unsigned char command, unsigned char lock_flags,
char *resource);
extern struct dm_hash_node *get_next_excl_lock(struct dm_hash_node *v, char **name);
void lvm_do_fs_unlock(void);
#endif


@@ -0,0 +1,382 @@
/*
* Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2010 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/* FIXME Remove duplicated functions from this file. */
/*
* Send a command to a running clvmd from the command-line
*/
#include "clvmd-common.h"
#include "clvm.h"
#include "refresh_clvmd.h"
#include <stddef.h>
#include <sys/socket.h>
#include <sys/un.h>
typedef struct lvm_response {
char node[255];
char *response;
int status;
int len;
} lvm_response_t;
/*
* This gets stuck at the start of memory we allocate so we
* can sanity-check it at deallocation time
*/
#define LVM_SIGNATURE 0x434C564D
static int _clvmd_sock = -1;
/* Open connection to the clvm daemon */
static int _open_local_sock(void)
{
int local_socket;
struct sockaddr_un sockaddr = { .sun_family = AF_UNIX };
if (!dm_strncpy(sockaddr.sun_path, CLVMD_SOCKNAME, sizeof(sockaddr.sun_path))) {
fprintf(stderr, "%s: clvmd socket name too long.", CLVMD_SOCKNAME);
return -1;
}
/* Open local socket */
if ((local_socket = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
fprintf(stderr, "Local socket creation failed: %s", strerror(errno));
return -1;
}
if (connect(local_socket,(struct sockaddr *) &sockaddr,
sizeof(sockaddr))) {
int saved_errno = errno;
fprintf(stderr, "connect() failed on local socket: %s\n",
strerror(errno));
if (close(local_socket))
return -1;
errno = saved_errno;
return -1;
}
return local_socket;
}
/* Send a request and return the status */
static int _send_request(const char *inbuf, int inlen, char **retbuf, int no_response)
{
char outbuf[PIPE_BUF];
struct clvm_header *outheader = (struct clvm_header *) outbuf;
int len;
unsigned off;
int buflen;
int err;
/* Send it to CLVMD */
rewrite:
if ( (err = write(_clvmd_sock, inbuf, inlen)) != inlen) {
if (err == -1 && errno == EINTR)
goto rewrite;
fprintf(stderr, "Error writing data to clvmd: %s", strerror(errno));
return 0;
}
if (no_response)
return 1;
/* Get the response */
reread:
if ((len = read(_clvmd_sock, outbuf, sizeof(struct clvm_header))) < 0) {
if (errno == EINTR)
goto reread;
fprintf(stderr, "Error reading data from clvmd: %s", strerror(errno));
return 0;
}
if (len == 0) {
fprintf(stderr, "EOF reading CLVMD");
errno = ENOTCONN;
return 0;
}
/* Allocate buffer */
buflen = len + outheader->arglen;
*retbuf = dm_malloc(buflen);
if (!*retbuf) {
errno = ENOMEM;
return 0;
}
/* Copy the header */
memcpy(*retbuf, outbuf, len);
outheader = (struct clvm_header *) *retbuf;
/* Read the returned values */
off = 1; /* we've already read the first byte */
while (off <= outheader->arglen && len > 0) {
len = read(_clvmd_sock, outheader->args + off,
buflen - off - offsetof(struct clvm_header, args));
if (len > 0)
off += len;
}
/* Was it an error ? */
if (outheader->status != 0) {
errno = outheader->status;
/* Only return an error here if there are no node-specific
errors present in the message that might have more detail */
if (!(outheader->flags & CLVMD_FLAG_NODEERRS)) {
fprintf(stderr, "cluster request failed: %s\n", strerror(errno));
return 0;
}
}
return 1;
}
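
The read loop above is easier to follow with the framing spelled out; the following is inferred from the code itself (struct clvm_header ends in a one-byte args[] member), not from a protocol specification.

/*
 * Inferred reply framing:
 *
 *   first read : sizeof(struct clvm_header) bytes -- the fixed header,
 *                whose trailing args[1] member already holds the first
 *                payload byte (hence "off = 1" in the loop above).
 *   later reads: the remaining arglen bytes into args[1..], looping
 *                until the whole block (arglen + 1 bytes in total, see
 *                _build_header() below) has arrived.
 */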
/* Build the structure header and parse-out wildcard node names */
static void _build_header(struct clvm_header *head, int cmd, const char *node,
unsigned int len)
{
head->cmd = cmd;
head->status = 0;
head->flags = 0;
head->xid = 0;
head->clientid = 0;
if (len)
/* 1 byte of the payload is already provided by struct clvm_header.args[1], so use len - 1 */
head->arglen = len - 1;
else {
head->arglen = 0;
*head->args = '\0';
}
/*
* Translate special node names.
*/
if (!node || !strcmp(node, NODE_ALL))
head->node[0] = '\0';
else if (!strcmp(node, NODE_LOCAL)) {
head->node[0] = '\0';
head->flags = CLVMD_FLAG_LOCAL;
} else
strcpy(head->node, node);
}
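
For the request direction, this is the layout implied by _build_header() and the memcpy() in _cluster_request() below; again it is inferred from the code rather than from a protocol document.

/*
 * Inferred request layout, as assembled in _cluster_request():
 *
 *   struct clvm_header   (cmd, flags, arglen = payload len - 1, node[])
 *   "target-node\0"      (empty string = all nodes, or local with
 *                         CLVMD_FLAG_LOCAL set)
 *   payload bytes        (copied directly after the node name)
 */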
/*
* Send a message to one (or all) node(s) in the cluster and wait for the replies
*/
static int _cluster_request(char cmd, const char *node, void *data, int len,
lvm_response_t ** response, int *num, int no_response)
{
char outbuf[sizeof(struct clvm_header) + len + strlen(node) + 1];
char *inptr;
char *retbuf = NULL;
int status;
int i;
int num_responses = 0;
struct clvm_header *head = (struct clvm_header *) outbuf;
lvm_response_t *rarray;
*num = 0;
if (_clvmd_sock == -1)
_clvmd_sock = _open_local_sock();
if (_clvmd_sock == -1)
return 0;
_build_header(head, cmd, node, len);
if (len)
memcpy(head->node + strlen(head->node) + 1, data, len);
status = _send_request(outbuf, sizeof(struct clvm_header) +
strlen(head->node) + len, &retbuf, no_response);
if (!status || no_response)
goto out;
/* Count the number of responses we got */
head = (struct clvm_header *) retbuf;
inptr = head->args;
while (inptr[0]) {
num_responses++;
inptr += strlen(inptr) + 1;
inptr += sizeof(int);
inptr += strlen(inptr) + 1;
}
/*
* Allocate response array.
* With an extra pair of INTs on the front to sanity
* check the pointer when we are given it back to free
*/
*response = NULL;
if (!(rarray = dm_malloc(sizeof(lvm_response_t) * num_responses +
sizeof(int) * 2))) {
errno = ENOMEM;
status = 0;
goto out;
}
/* Unpack the response into an lvm_response_t array */
inptr = head->args;
i = 0;
while (inptr[0]) {
strcpy(rarray[i].node, inptr);
inptr += strlen(inptr) + 1;
memcpy(&rarray[i].status, inptr, sizeof(int));
inptr += sizeof(int);
rarray[i].response = dm_malloc(strlen(inptr) + 1);
if (rarray[i].response == NULL) {
/* Free up everything else and return error */
int j;
for (j = 0; j < i; j++)
dm_free(rarray[i].response);
dm_free(rarray);
errno = ENOMEM;
status = 0;
goto out;
}
strcpy(rarray[i].response, inptr);
rarray[i].len = strlen(inptr);
inptr += strlen(inptr) + 1;
i++;
}
*num = num_responses;
*response = rarray;
out:
dm_free(retbuf);
return status;
}
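
The two passes above (count, then unpack) walk the same packed buffer; the layout sketched here is inferred from that parsing code.

/*
 * Inferred layout of head->args in a reply, one record per responding
 * node, terminated by an empty node name:
 *
 *   "node-name\0" <int status> "response-text\0"
 *   "node-name\0" <int status> "response-text\0"
 *   "\0"
 *
 * Each record becomes one lvm_response_t { node, status, response, len }.
 */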
/* Free reply array */
static int _cluster_free_request(lvm_response_t * response, int num)
{
int i;
for (i = 0; i < num; i++) {
dm_free(response[i].response);
}
dm_free(response);
return 1;
}
int refresh_clvmd(int all_nodes)
{
int num_responses;
char args[1]; // No args really.
lvm_response_t *response = NULL;
int saved_errno;
int status;
int i;
status = _cluster_request(CLVMD_CMD_REFRESH, all_nodes ? NODE_ALL : NODE_LOCAL, args, 0, &response, &num_responses, 0);
/* If any nodes were down then display them and return an error */
for (i = 0; i < num_responses; i++) {
if (response[i].status == EHOSTDOWN) {
fprintf(stderr, "clvmd not running on node %s",
response[i].node);
status = 0;
errno = response[i].status;
} else if (response[i].status) {
fprintf(stderr, "Error resetting node %s: %s",
response[i].node,
response[i].response[0] ?
response[i].response :
strerror(response[i].status));
status = 0;
errno = response[i].status;
}
}
saved_errno = errno;
_cluster_free_request(response, num_responses);
errno = saved_errno;
return status;
}
int restart_clvmd(int all_nodes)
{
int dummy, status;
status = _cluster_request(CLVMD_CMD_RESTART, all_nodes ? NODE_ALL : NODE_LOCAL, NULL, 0, NULL, &dummy, 1);
/*
* FIXME: we cannot receive response, clvmd re-exec before it.
* but also should not close socket too early (the whole rq is dropped then).
* FIXME: This should be handled this way:
* - client waits for RESTART ack (and socket close)
* - server restarts
* - client checks that server is ready again (VERSION command?)
*/
usleep(500000);
return status;
}
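
A rough sketch of the handshake the FIXME above asks for, under the assumption that reconnecting to the daemon's socket is a good enough readiness probe; a real fix would likely also re-issue a version query on the new connection. The bounds and delays here are arbitrary.

/* Illustrative only: poll for the restarted daemon instead of the fixed
 * usleep() above. */
static int wait_for_clvmd_restart(void)
{
	int fd, i;

	for (i = 0; i < 50; i++) {              /* up to ~5 seconds */
		if ((fd = _open_local_sock()) >= 0) {
			(void) close(fd);       /* daemon accepts connections again */
			return 1;
		}
		usleep(100000);
	}
	return 0;
}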
int debug_clvmd(int level, int clusterwide)
{
int num_responses;
char args[1];
const char *nodes;
lvm_response_t *response = NULL;
int saved_errno;
int status;
int i;
args[0] = level;
if (clusterwide)
nodes = NODE_ALL;
else
nodes = NODE_LOCAL;
status = _cluster_request(CLVMD_CMD_SET_DEBUG, nodes, args, 1, &response, &num_responses, 0);
/* If any nodes were down then display them and return an error */
for (i = 0; i < num_responses; i++) {
if (response[i].status == EHOSTDOWN) {
fprintf(stderr, "clvmd not running on node %s",
response[i].node);
status = 0;
errno = response[i].status;
} else if (response[i].status) {
fprintf(stderr, "Error setting debug on node %s: %s",
response[i].node,
response[i].response[0] ?
response[i].response :
strerror(response[i].status));
status = 0;
errno = response[i].status;
}
}
saved_errno = errno;
_cluster_free_request(response, num_responses);
errno = saved_errno;
return status;
}

View File

@@ -0,0 +1,19 @@
/*
* Copyright (C) 2007 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
int refresh_clvmd(int all_nodes);
int restart_clvmd(int all_nodes);
int debug_clvmd(int level, int clusterwide);

View File

@@ -17,6 +17,8 @@ top_builddir = @top_builddir@
CPG_LIBS = @CPG_LIBS@
CPG_CFLAGS = @CPG_CFLAGS@
SACKPT_LIBS = @SACKPT_LIBS@
SACKPT_CFLAGS = @SACKPT_CFLAGS@
SOURCES = clogd.c cluster.c compat.c functions.c link_mon.c local.c logging.c
@@ -24,13 +26,14 @@ TARGETS = cmirrord
include $(top_builddir)/make.tmpl
LMLIBS += $(CPG_LIBS)
CFLAGS += $(CPG_CFLAGS) $(EXTRA_EXEC_CFLAGS)
LIBS += -ldevmapper
LMLIBS += $(CPG_LIBS) $(SACKPT_LIBS)
CFLAGS += $(CPG_CFLAGS) $(SACKPT_CFLAGS) $(EXTRA_EXEC_CFLAGS)
LDFLAGS += $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
cmirrord: $(OBJECTS) $(top_builddir)/lib/liblvm-internal.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) \
$(LVMLIBS) $(LMLIBS) $(INTERNAL_LIBS) $(LIBS)
$(LVMLIBS) $(LMLIBS) $(LIBS)
install: $(TARGETS)
$(INSTALL_PROGRAM) -D cmirrord $(usrsbindir)/cmirrord

View File

@@ -16,10 +16,7 @@
#include "functions.h"
#include "link_mon.h"
#include "local.h"
#include "lib/mm/xlate.h"
/* FIXME: remove this and the code */
#define CMIRROR_HAS_CHECKPOINT 0
#include "xlate.h"
#include <corosync/cpg.h>
#include <errno.h>
@@ -169,9 +166,6 @@ int cluster_send(struct clog_request *rq)
{
int r;
int found = 0;
#if CMIRROR_HAS_CHECKPOINT
int count = 0;
#endif
struct iovec iov;
struct clog_cpg *entry;
@@ -209,6 +203,8 @@ int cluster_send(struct clog_request *rq)
#if CMIRROR_HAS_CHECKPOINT
do {
int count = 0;
r = cpg_mcast_joined(entry->handle, CPG_TYPE_AGREED, &iov, 1);
if (r != SA_AIS_ERR_TRY_AGAIN)
break;
@@ -1634,7 +1630,7 @@ int create_cluster_cpg(char *uuid, uint64_t luid)
size = ((strlen(uuid) + 1) > CPG_MAX_NAME_LENGTH) ?
CPG_MAX_NAME_LENGTH : (strlen(uuid) + 1);
(void) dm_strncpy(new->name.value, uuid, size);
strncpy(new->name.value, uuid, size);
new->name.length = (uint32_t)size;
new->luid = luid;

View File

@@ -12,8 +12,8 @@
#ifndef _LVM_CLOG_CLUSTER_H
#define _LVM_CLOG_CLUSTER_H
#include "device_mapper/misc/dm-log-userspace.h"
#include "device_mapper/all.h"
#include "dm-log-userspace.h"
#include "libdevmapper.h"
#define DM_ULOG_RESPONSE 0x1000U /* in last byte of 32-bit value */
#define DM_ULOG_CHECKPOINT_READY 21

View File

@@ -8,7 +8,7 @@
#include "logging.h"
#include "cluster.h"
#include "compat.h"
#include "lib/mm/xlate.h"
#include "xlate.h"
#include <errno.h>

View File

@@ -11,7 +11,6 @@
*/
#include "logging.h"
#include "functions.h"
#include "base/memory/zalloc.h"
#include <sys/sysmacros.h>
#include <dirent.h>
@@ -436,7 +435,7 @@ static int _clog_ctr(char *uuid, uint64_t luid,
block_on_error = 1;
}
lc = zalloc(sizeof(*lc));
lc = dm_zalloc(sizeof(*lc));
if (!lc) {
LOG_ERROR("Unable to allocate cluster log context");
r = -ENOMEM;
@@ -452,19 +451,15 @@ static int _clog_ctr(char *uuid, uint64_t luid,
lc->skip_bit_warning = region_count;
lc->disk_fd = -1;
lc->log_dev_failed = 0;
if (!dm_strncpy(lc->uuid, uuid, DM_UUID_LEN)) {
LOG_ERROR("Cannot use too long UUID %s.", uuid);
r = -EINVAL;
goto fail;
}
strncpy(lc->uuid, uuid, DM_UUID_LEN);
lc->luid = luid;
if (get_log(lc->uuid, lc->luid) ||
get_pending_log(lc->uuid, lc->luid)) {
LOG_ERROR("[%s/%" PRIu64 "u] Log already exists, unable to create.",
SHORT_UUID(lc->uuid), lc->luid);
r = -EINVAL;
goto fail;
dm_free(lc);
return -EINVAL;
}
dm_list_init(&lc->mark_list);
@@ -533,9 +528,9 @@ fail:
LOG_ERROR("Close device error, %s: %s",
disk_path, strerror(errno));
free(lc->disk_buffer);
free(lc->sync_bits);
free(lc->clean_bits);
free(lc);
dm_free(lc->sync_bits);
dm_free(lc->clean_bits);
dm_free(lc);
}
return r;
}
@@ -660,9 +655,9 @@ static int clog_dtr(struct dm_ulog_request *rq)
strerror(errno));
if (lc->disk_buffer)
free(lc->disk_buffer);
free(lc->clean_bits);
free(lc->sync_bits);
free(lc);
dm_free(lc->clean_bits);
dm_free(lc->sync_bits);
dm_free(lc);
return 0;
}

View File

@@ -12,7 +12,7 @@
#ifndef _LVM_CLOG_FUNCTIONS_H
#define _LVM_CLOG_FUNCTIONS_H
#include "device_mapper/misc/dm-log-userspace.h"
#include "dm-log-userspace.h"
#include "cluster.h"
#define LOG_RESUMED 1

View File

@@ -14,6 +14,7 @@
#define _LVM_CLOG_LOGGING_H
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include "configure.h"
#include <stdio.h>

View File

@@ -57,13 +57,13 @@ all: device-mapper
device-mapper: $(TARGETS)
CFLAGS_dmeventd.o += $(EXTRA_EXEC_CFLAGS)
LIBS += $(PTHREAD_LIBS)
LIBS += -ldevmapper $(PTHREAD_LIBS)
dmeventd: $(LIB_SHARED) dmeventd.o
$(CC) $(CFLAGS) -L. $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) dmeventd.o \
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(INTERNAL_LIBS) $(LIBS) -lm
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(LIBS)
dmeventd.static: $(LIB_STATIC) dmeventd.o
dmeventd.static: $(LIB_STATIC) dmeventd.o $(interfacebuilddir)/libdevmapper.a
$(CC) $(CFLAGS) $(LDFLAGS) -static -L. -L$(interfacebuilddir) dmeventd.o \
-o $@ $(DL_LIBS) $(DMEVENT_LIBS) $(LIBS) $(STATIC_LIBS)
@@ -73,6 +73,7 @@ endif
ifneq ("$(CFLOW_CMD)", "")
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
-include $(top_builddir)/libdm/libdevmapper.cflow
-include $(top_builddir)/lib/liblvm-internal.cflow
-include $(top_builddir)/lib/liblvm2cmd.cflow
-include $(top_builddir)/daemons/dmeventd/$(LIB_NAME).cflow

View File

@@ -16,14 +16,12 @@
* dmeventd - dm event daemon to monitor active mapped devices
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "device_mapper/misc/dm-logging.h"
#include "dm-logging.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "libdevmapper-event.h"
#include "dmeventd.h"
#include "tools/tool.h"
#include "tool.h"
#include <dlfcn.h>
#include <pthread.h>
@@ -266,19 +264,19 @@ static pthread_cond_t _timeout_cond = PTHREAD_COND_INITIALIZER;
/* DSO data allocate/free. */
static void _free_dso_data(struct dso_data *data)
{
free(data->dso_name);
free(data);
dm_free(data->dso_name);
dm_free(data);
}
static struct dso_data *_alloc_dso_data(struct message_data *data)
{
struct dso_data *ret = (typeof(ret)) zalloc(sizeof(*ret));
struct dso_data *ret = (typeof(ret)) dm_zalloc(sizeof(*ret));
if (!ret)
return_NULL;
if (!(ret->dso_name = strdup(data->dso_name))) {
free(ret);
if (!(ret->dso_name = dm_strdup(data->dso_name))) {
dm_free(ret);
return_NULL;
}
@@ -399,9 +397,9 @@ static void _free_thread_status(struct thread_status *thread)
_lib_put(thread->dso_data);
if (thread->wait_task)
dm_task_destroy(thread->wait_task);
free(thread->device.uuid);
free(thread->device.name);
free(thread);
dm_free(thread->device.uuid);
dm_free(thread->device.name);
dm_free(thread);
}
/* Note: events_field must not be 0, ensured by caller */
@@ -410,7 +408,7 @@ static struct thread_status *_alloc_thread_status(const struct message_data *dat
{
struct thread_status *thread;
if (!(thread = zalloc(sizeof(*thread)))) {
if (!(thread = dm_zalloc(sizeof(*thread)))) {
log_error("Cannot create new thread, out of memory.");
return NULL;
}
@@ -424,11 +422,11 @@ static struct thread_status *_alloc_thread_status(const struct message_data *dat
if (!dm_task_set_uuid(thread->wait_task, data->device_uuid))
goto_out;
if (!(thread->device.uuid = strdup(data->device_uuid)))
if (!(thread->device.uuid = dm_strdup(data->device_uuid)))
goto_out;
/* Until real name resolved, use UUID */
if (!(thread->device.name = strdup(data->device_uuid)))
if (!(thread->device.name = dm_strdup(data->device_uuid)))
goto_out;
/* runs ioctl and may register the lvm2 plugin */
@@ -517,7 +515,7 @@ static int _fetch_string(char **ptr, char **src, const int delimiter)
if ((p = strchr(*src, delimiter))) {
if (*src < p) {
*p = 0; /* Temporary exit with \0 */
if (!(*ptr = strdup(*src))) {
if (!(*ptr = dm_strdup(*src))) {
log_error("Failed to fetch item %s.", *src);
ret = 0; /* Allocation fail */
}
@@ -527,7 +525,7 @@ static int _fetch_string(char **ptr, char **src, const int delimiter)
(*src)++; /* Skip delimiter, next field */
} else if ((len = strlen(*src))) {
/* No delimiter, item ends with '\0' */
if (!(*ptr = strdup(*src))) {
if (!(*ptr = dm_strdup(*src))) {
log_error("Failed to fetch last item %s.", *src);
ret = 0; /* Fail */
}
@@ -540,11 +538,11 @@ out:
/* Free message memory. */
static void _free_message(struct message_data *message_data)
{
free(message_data->id);
free(message_data->dso_name);
free(message_data->device_uuid);
free(message_data->events_str);
free(message_data->timeout_str);
dm_free(message_data->id);
dm_free(message_data->dso_name);
dm_free(message_data->device_uuid);
dm_free(message_data->events_str);
dm_free(message_data->timeout_str);
}
/* Parse a register message from the client. */
@@ -576,7 +574,7 @@ static int _parse_message(struct message_data *message_data)
ret = 1;
}
free(msg->data);
dm_free(msg->data);
msg->data = NULL;
return ret;
@@ -610,8 +608,8 @@ static int _fill_device_data(struct thread_status *ts)
if (!dm_task_run(dmt))
goto fail;
free(ts->device.name);
if (!(ts->device.name = strdup(dm_task_get_name(dmt))))
dm_free(ts->device.name);
if (!(ts->device.name = dm_strdup(dm_task_get_name(dmt))))
goto fail;
if (!dm_task_get_info(dmt, &dmi))
@@ -698,8 +696,8 @@ static int _get_status(struct message_data *message_data)
len = strlen(message_data->id);
msg->size = size + len + 1;
free(msg->data);
if (!(msg->data = malloc(msg->size)))
dm_free(msg->data);
if (!(msg->data = dm_malloc(msg->size)))
goto out;
memcpy(msg->data, message_data->id, len);
@@ -714,7 +712,7 @@ static int _get_status(struct message_data *message_data)
ret = 0;
out:
for (j = 0; j < i; ++j)
free(buffers[j]);
dm_free(buffers[j]);
return ret;
}
@@ -723,7 +721,7 @@ static int _get_parameters(struct message_data *message_data) {
struct dm_event_daemon_message *msg = message_data->msg;
int size;
free(msg->data);
dm_free(msg->data);
if ((size = dm_asprintf(&msg->data, "%s pid=%d daemon=%s exec_method=%s",
message_data->id, getpid(),
_foreground ? "no" : "yes",
@@ -756,7 +754,6 @@ static void *_timeout_thread(void *unused __attribute__((unused)))
struct thread_status *thread;
struct timespec timeout;
time_t curr_time;
int ret;
DEBUGLOG("Timeout thread starting.");
pthread_cleanup_push(_exit_timeout, NULL);
@@ -778,10 +775,7 @@ static void *_timeout_thread(void *unused __attribute__((unused)))
} else {
DEBUGLOG("Sending SIGALRM to Thr %x for timeout.",
(int) thread->thread);
ret = pthread_kill(thread->thread, SIGALRM);
if (ret && (ret != ESRCH))
log_error("Unable to wakeup Thr %x for timeout: %s.",
(int) thread->thread, strerror(ret));
pthread_kill(thread->thread, SIGALRM);
}
_unlock_mutex();
}
@@ -871,7 +865,6 @@ static int _event_wait(struct thread_status *thread)
* This is so that you can break out of waiting on an event,
* either for a timeout event, or to cancel the thread.
*/
sigemptyset(&old);
sigemptyset(&set);
sigaddset(&set, SIGALRM);
if (pthread_sigmask(SIG_UNBLOCK, &set, &old) != 0) {
@@ -1227,7 +1220,7 @@ static int _registered_device(struct message_data *message_data,
int r;
struct dm_event_daemon_message *msg = message_data->msg;
free(msg->data);
dm_free(msg->data);
if ((r = dm_asprintf(&(msg->data), "%s %s %s %u",
message_data->id,
@@ -1367,7 +1360,7 @@ static int _get_timeout(struct message_data *message_data)
if (!thread)
return -ENODEV;
free(msg->data);
dm_free(msg->data);
msg->size = dm_asprintf(&(msg->data), "%s %" PRIu32,
message_data->id, thread->timeout);
@@ -1504,7 +1497,7 @@ static int _client_read(struct dm_event_fifos *fifos,
bytes = 0;
if (!size)
break; /* No data -> error */
buf = msg->data = malloc(msg->size);
buf = msg->data = dm_malloc(msg->size);
if (!buf)
break; /* No mem -> error */
header = 0;
@@ -1512,7 +1505,7 @@ static int _client_read(struct dm_event_fifos *fifos,
}
if (bytes != size) {
free(msg->data);
dm_free(msg->data);
msg->data = NULL;
return 0;
}
@@ -1532,7 +1525,7 @@ static int _client_write(struct dm_event_fifos *fifos,
fd_set fds;
size_t size = 2 * sizeof(uint32_t) + ((msg->data) ? msg->size : 0);
uint32_t *header = malloc(size);
uint32_t *header = dm_malloc(size);
char *buf = (char *)header;
if (!header) {
@@ -1562,7 +1555,7 @@ static int _client_write(struct dm_event_fifos *fifos,
}
if (header != temp)
free(header);
dm_free(header);
return (bytes == size);
}
@@ -1624,7 +1617,7 @@ static int _do_process_request(struct dm_event_daemon_message *msg)
msg->size = dm_asprintf(&(msg->data), "%s %s %d", answer,
(msg->cmd == DM_EVENT_CMD_DIE) ? "DYING" : "HELLO",
DM_EVENT_PROTOCOL_VERSION);
free(answer);
dm_free(answer);
}
} else if (msg->cmd != DM_EVENT_CMD_ACTIVE && !_parse_message(&message_data)) {
stack;
@@ -1666,7 +1659,7 @@ static void _process_request(struct dm_event_fifos *fifos)
DEBUGLOG("<<< CMD:%s (0x%x) completed (result %d).", decode_cmd(cmd), cmd, msg.cmd);
free(msg.data);
dm_free(msg.data);
if (cmd == DM_EVENT_CMD_DIE) {
if (unlink(DMEVENTD_PIDFILE))
@@ -1977,7 +1970,7 @@ static int _reinstate_registrations(struct dm_event_fifos *fifos)
int i, ret;
ret = daemon_talk(fifos, &msg, DM_EVENT_CMD_HELLO, NULL, NULL, 0, 0);
free(msg.data);
dm_free(msg.data);
msg.data = NULL;
if (ret) {
@@ -2063,13 +2056,13 @@ static void _restart_dmeventd(void)
++count;
}
if (!(_initial_registrations = malloc(sizeof(char*) * (count + 1)))) {
if (!(_initial_registrations = dm_malloc(sizeof(char*) * (count + 1)))) {
fprintf(stderr, "Memory allocation registration failed.\n");
goto bad;
}
for (i = 0; i < count; ++i) {
if (!(_initial_registrations[i] = strdup(message))) {
if (!(_initial_registrations[i] = dm_strdup(message))) {
fprintf(stderr, "Memory allocation for message failed.\n");
goto bad;
}

View File

@@ -12,12 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include "device_mapper/misc/dm-logging.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "dm-logging.h"
#include "dmlib.h"
#include "libdevmapper-event.h"
#include "dmeventd.h"
#include "lib/misc/intl.h"
#include <fcntl.h>
#include <sys/file.h>
@@ -27,7 +25,6 @@
#include <arpa/inet.h> /* for htonl, ntohl */
#include <pthread.h>
#include <syslog.h>
#include <unistd.h>
static int _debug_level = 0;
static int _use_syslog = 0;
@@ -50,8 +47,8 @@ struct dm_event_handler {
static void _dm_event_handler_clear_dev_info(struct dm_event_handler *dmevh)
{
free(dmevh->dev_name);
free(dmevh->uuid);
dm_free(dmevh->dev_name);
dm_free(dmevh->uuid);
dmevh->dev_name = dmevh->uuid = NULL;
dmevh->major = dmevh->minor = 0;
}
@@ -60,7 +57,7 @@ struct dm_event_handler *dm_event_handler_create(void)
{
struct dm_event_handler *dmevh;
if (!(dmevh = zalloc(sizeof(*dmevh)))) {
if (!(dmevh = dm_zalloc(sizeof(*dmevh)))) {
log_error("Failed to allocate event handler.");
return NULL;
}
@@ -71,9 +68,9 @@ struct dm_event_handler *dm_event_handler_create(void)
void dm_event_handler_destroy(struct dm_event_handler *dmevh)
{
_dm_event_handler_clear_dev_info(dmevh);
free(dmevh->dso);
free(dmevh->dmeventd_path);
free(dmevh);
dm_free(dmevh->dso);
dm_free(dmevh->dmeventd_path);
dm_free(dmevh);
}
int dm_event_handler_set_dmeventd_path(struct dm_event_handler *dmevh, const char *dmeventd_path)
@@ -81,9 +78,9 @@ int dm_event_handler_set_dmeventd_path(struct dm_event_handler *dmevh, const cha
if (!dmeventd_path) /* noop */
return 0;
free(dmevh->dmeventd_path);
dm_free(dmevh->dmeventd_path);
if (!(dmevh->dmeventd_path = strdup(dmeventd_path)))
if (!(dmevh->dmeventd_path = dm_strdup(dmeventd_path)))
return -ENOMEM;
return 0;
@@ -94,9 +91,9 @@ int dm_event_handler_set_dso(struct dm_event_handler *dmevh, const char *path)
if (!path) /* noop */
return 0;
free(dmevh->dso);
dm_free(dmevh->dso);
if (!(dmevh->dso = strdup(path)))
if (!(dmevh->dso = dm_strdup(path)))
return -ENOMEM;
return 0;
@@ -109,7 +106,7 @@ int dm_event_handler_set_dev_name(struct dm_event_handler *dmevh, const char *de
_dm_event_handler_clear_dev_info(dmevh);
if (!(dmevh->dev_name = strdup(dev_name)))
if (!(dmevh->dev_name = dm_strdup(dev_name)))
return -ENOMEM;
return 0;
@@ -122,7 +119,7 @@ int dm_event_handler_set_uuid(struct dm_event_handler *dmevh, const char *uuid)
_dm_event_handler_clear_dev_info(dmevh);
if (!(dmevh->uuid = strdup(uuid)))
if (!(dmevh->uuid = dm_strdup(uuid)))
return -ENOMEM;
return 0;
@@ -262,7 +259,7 @@ static int _daemon_read(struct dm_event_fifos *fifos,
if (header && (bytes == 2 * sizeof(uint32_t))) {
msg->cmd = ntohl(header[0]);
msg->size = ntohl(header[1]);
buf = msg->data = malloc(msg->size);
buf = msg->data = dm_malloc(msg->size);
size = msg->size;
bytes = 0;
header = 0;
@@ -270,7 +267,7 @@ static int _daemon_read(struct dm_event_fifos *fifos,
}
if (bytes != size) {
free(msg->data);
dm_free(msg->data);
msg->data = NULL;
}
return bytes == size;
@@ -373,13 +370,13 @@ int daemon_talk(struct dm_event_fifos *fifos,
*/
if (!_daemon_write(fifos, msg)) {
stack;
free(msg->data);
dm_free(msg->data);
msg->data = NULL;
return -EIO;
}
do {
free(msg->data);
dm_free(msg->data);
msg->data = NULL;
if (!_daemon_read(fifos, msg)) {
@@ -622,7 +619,7 @@ static int _do_event(int cmd, char *dmeventd_path, struct dm_event_daemon_messag
ret = daemon_talk(&fifos, msg, DM_EVENT_CMD_HELLO, NULL, NULL, 0, 0);
free(msg->data);
dm_free(msg->data);
msg->data = 0;
if (!ret)
@@ -648,7 +645,6 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
uuid = dm_task_get_uuid(dmt);
if (!strstr(dmevh->dso, "libdevmapper-event-lvm2thin.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2vdo.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2snapshot.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2mirror.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2raid.so"))
@@ -663,7 +659,7 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
ret = 0;
}
free(msg.data);
dm_free(msg.data);
dm_task_destroy(dmt);
@@ -690,7 +686,7 @@ int dm_event_unregister_handler(const struct dm_event_handler *dmevh)
ret = 0;
}
free(msg.data);
dm_free(msg.data);
dm_task_destroy(dmt);
@@ -706,7 +702,7 @@ static char *_fetch_string(char **src, const int delimiter)
if ((p = strchr(*src, delimiter)))
*p = 0;
if ((ret = strdup(*src)))
if ((ret = dm_strdup(*src)))
*src += strlen(ret) + 1;
if (p)
@@ -726,11 +722,11 @@ static int _parse_message(struct dm_event_daemon_message *msg, char **dso_name,
(*dso_name = _fetch_string(&p, ' ')) &&
(*uuid = _fetch_string(&p, ' '))) {
*evmask = atoi(p);
free(id);
dm_free(id);
return 0;
}
free(id);
dm_free(id);
return -ENOMEM;
}
@@ -758,10 +754,11 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
uuid = dm_task_get_uuid(dmt);
/* FIXME Distinguish errors connecting to daemon */
if ((ret = _do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0))) {
if (_do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0)) {
log_debug("%s: device not registered.", dm_task_get_name(dmt));
ret = -ENOENT;
goto fail;
}
@@ -772,7 +769,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
dm_task_destroy(dmt);
dmt = NULL;
free(msg.data);
dm_free(msg.data);
msg.data = NULL;
_dm_event_handler_clear_dev_info(dmevh);
@@ -781,7 +778,7 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
goto fail;
}
if (!(dmevh->uuid = strdup(reply_uuid))) {
if (!(dmevh->uuid = dm_strdup(reply_uuid))) {
ret = -ENOMEM;
goto fail;
}
@@ -794,13 +791,13 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
dm_event_handler_set_dso(dmevh, reply_dso);
dm_event_handler_set_event_mask(dmevh, reply_mask);
free(reply_dso);
dm_free(reply_dso);
reply_dso = NULL;
free(reply_uuid);
dm_free(reply_uuid);
reply_uuid = NULL;
if (!(dmevh->dev_name = strdup(dm_task_get_name(dmt)))) {
if (!(dmevh->dev_name = dm_strdup(dm_task_get_name(dmt)))) {
ret = -ENOMEM;
goto fail;
}
@@ -818,9 +815,9 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
return ret;
fail:
free(msg.data);
free(reply_dso);
free(reply_uuid);
dm_free(msg.data);
dm_free(reply_dso);
dm_free(reply_uuid);
_dm_event_handler_clear_dev_info(dmevh);
if (dmt)
dm_task_destroy(dmt);
@@ -985,12 +982,12 @@ int dm_event_get_timeout(const char *device_path, uint32_t *timeout)
if (!p) {
log_error("Malformed reply from dmeventd '%s'.",
msg.data);
free(msg.data);
dm_free(msg.data);
return -EIO;
}
*timeout = atoi(p);
}
free(msg.data);
dm_free(msg.data);
return ret;
}

View File

@@ -8,3 +8,4 @@ Description: device-mapper event library
Version: @DM_LIB_PATCHLEVEL@
Cflags: -I${includedir}
Libs: -L${libdir} -ldevmapper-event
Requires.private: devmapper

View File

@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2005, 2011 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -16,7 +16,27 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SUBDIRS += lvm2 snapshot raid thin mirror vdo
SUBDIRS += lvm2
ifneq ("@MIRRORS@", "none")
SUBDIRS += mirror
endif
ifneq ("@SNAPSHOTS@", "none")
SUBDIRS += snapshot
endif
ifneq ("@RAID@", "none")
SUBDIRS += raid
endif
ifneq ("@THIN@", "none")
SUBDIRS += thin
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = lvm2 mirror snapshot raid thin
endif
include $(top_builddir)/make.tmpl
@@ -24,4 +44,3 @@ snapshot: lvm2
mirror: lvm2
raid: lvm2
thin: lvm2
vdo: lvm2

View File

@@ -24,7 +24,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += @LVM2CMD_LIB@ $(INTERNAL_LIBS) $(PTHREAD_LIBS)
LIBS += @LVM2CMD_LIB@ -ldevmapper $(PTHREAD_LIBS)
install_lvm2: install_lib_shared

View File

@@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "lib.h"
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "tools/lvm2cmd.h"
#include "libdevmapper-event.h"
#include "lvm2cmd.h"
#include <pthread.h>

View File

@@ -30,7 +30,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin

View File

@@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "lib.h"
#include "libdevmapper-event.h"
#include "dmeventd_lvm.h"
#include "lib/activate/activate.h"
#include "activate.h" /* For TARGET_NAME* */
/* FIXME Reformat to 80 char lines. */

View File

@@ -29,7 +29,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin

View File

@@ -12,10 +12,10 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "lib/config/defaults.h"
#include "lib.h"
#include "defaults.h"
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "libdevmapper-event.h"
/* Hold enough elements for the maximum number of RAID images */
#define RAID_DEVS_ELEMS ((DEFAULT_RAID_MAX_IMAGES + 63) / 64)

View File

@@ -26,7 +26,7 @@ LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin

View File

@@ -12,9 +12,9 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "lib.h"
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "libdevmapper-event.h"
#include <sys/sysmacros.h>
#include <sys/wait.h>

View File

@@ -29,7 +29,7 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
include $(top_builddir)/make.tmpl
LIBS += -ldevmapper-event-lvm2 $(INTERNAL_LIBS)
LIBS += -ldevmapper-event-lvm2 -ldevmapper
install_lvm2: install_dm_plugin

View File

@@ -12,16 +12,16 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "lib.h" /* using here lvm log */
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "libdevmapper-event.h"
#include <sys/wait.h>
#include <stdarg.h>
/* TODO - move this mountinfo code into library to be reusable */
#ifdef __linux__
# include "libdm/misc/kdev_t.h"
# include "kdev_t.h"
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
@@ -62,25 +62,27 @@ struct dso_state {
DM_EVENT_LOG_FN("thin")
#define UUID_PREFIX "LVM-"
static int _run_command(struct dso_state *state)
{
char val[16];
char val[3][36];
char *env[] = { val[0], val[1], val[2], NULL };
int i;
/* Mark any lvm2 command we may run as coming from dmeventd,
* so lvm2 will not try to talk back to dmeventd while processing it */
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
(void) dm_snprintf(val[0], sizeof(val[0]), "LVM_RUN_BY_DMEVENTD=1");
if (state->data_percent) {
/* Prepare some known data to env vars for easy use */
if (dm_snprintf(val, sizeof(val), "%d",
state->data_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_DATA", val, 1);
if (dm_snprintf(val, sizeof(val), "%d",
state->metadata_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_METADATA", val, 1);
(void) dm_snprintf(val[1], sizeof(val[1]), "DMEVENTD_THIN_POOL_DATA=%d",
state->data_percent / DM_PERCENT_1);
(void) dm_snprintf(val[2], sizeof(val[2]), "DMEVENTD_THIN_POOL_METADATA=%d",
state->metadata_percent / DM_PERCENT_1);
} else {
/* For an error event it's for a user to check status and decide */
env[1] = NULL;
log_debug("Error event processing.");
}
@@ -95,7 +97,7 @@ static int _run_command(struct dso_state *state)
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execvp(state->argv[0], state->argv);
execve(state->argv[0], state->argv, env);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);

View File

@@ -1,3 +0,0 @@
process_event
register_device
unregister_device

View File

@@ -1,406 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib/misc/lib.h"
#include "dmeventd_lvm.h"
#include "daemons/dmeventd/libdevmapper-event.h"
#include "device_mapper/vdo/target.h"
#include <sys/wait.h>
#include <stdarg.h>
/* First warning when VDO pool is 80% full. */
#define WARNING_THRESH (DM_PERCENT_1 * 80)
/* Run a check every 5%. */
#define CHECK_STEP (DM_PERCENT_1 * 5)
/* Do not bother checking VDO pool is less than 50% full. */
#define CHECK_MINIMUM (DM_PERCENT_1 * 50)
#define MAX_FAILS (256) /* ~42 mins between cmd call retry with 10s delay */
#define VDO_DEBUG 0
struct dso_state {
struct dm_pool *mem;
int percent_check;
int percent;
uint64_t known_data_size;
unsigned fails;
unsigned max_fails;
int restore_sigset;
sigset_t old_sigset;
pid_t pid;
char *argv[3];
const char *cmd_str;
const char *name;
};
DM_EVENT_LOG_FN("vdo")
static int _run_command(struct dso_state *state)
{
char val[16];
int i;
/* Mark any lvm2 command we may run as coming from dmeventd,
* so lvm2 will not try to talk back to dmeventd while processing it */
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
if (state->percent) {
/* Prepare some known data to env vars for easy use */
if (dm_snprintf(val, sizeof(val), "%d",
state->percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_VDO_POOL", val, 1);
} else {
/* For an error event it's for a user to check status and decide */
log_debug("Error event processing.");
}
log_verbose("Executing command: %s", state->cmd_str);
/* TODO:
* Support parallel run of 'task' and its waitpid maintenance.
* ATM we can't handle signalling of SIGALRM,
* as signalling is not allowed while 'process_event()' is running
*/
if (!(state->pid = fork())) {
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execvp(state->argv[0], state->argv);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
state->fails = 1;
return 0;
}
return 1;
}
static int _use_policy(struct dm_task *dmt, struct dso_state *state)
{
#if VDO_DEBUG
log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
if (state->argv[0])
return _run_command(state);
if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
log_error("Failed command for %s.", dm_task_get_name(dmt));
state->fails = 1;
return 0;
}
state->fails = 0;
return 1;
}
/* Check if executed command has finished
* Only 1 command may run */
static int _wait_for_pid(struct dso_state *state)
{
int status = 0;
if (state->pid == -1)
return 1;
if (!waitpid(state->pid, &status, WNOHANG))
return 0;
/* Wait for finish */
if (WIFEXITED(status)) {
log_verbose("Child %d exited with status %d.",
state->pid, WEXITSTATUS(status));
state->fails = WEXITSTATUS(status) ? 1 : 0;
} else {
if (WIFSIGNALED(status))
log_verbose("Child %d was terminated with status %d.",
state->pid, WTERMSIG(status));
state->fails = 1;
}
state->pid = -1;
return 1;
}
void process_event(struct dm_task *dmt,
enum dm_event_mask event __attribute__((unused)),
void **user)
{
const char *device = dm_task_get_name(dmt);
struct dso_state *state = *user;
void *next = NULL;
uint64_t start, length;
char *target_type = NULL;
char *params;
int needs_policy = 0;
struct dm_task *new_dmt = NULL;
struct dm_vdo_status_parse_result vdop = { .status = NULL };
#if VDO_DEBUG
log_debug("Watch for VDO %s:%.2f%%.", state->name,
dm_percent_to_round_float(state->percent_check, 2));
#endif
if (!_wait_for_pid(state)) {
log_warn("WARNING: Skipping event, child %d is still running (%s).",
state->pid, state->cmd_str);
return;
}
if (event & DM_EVENT_DEVICE_ERROR) {
#if VDO_DEBUG
log_debug("VDO event error.");
#endif
/* Error -> no need to check and do instant resize */
state->percent = 0;
if (_use_policy(dmt, state))
goto out;
stack;
if (!(new_dmt = dm_task_create(DM_DEVICE_STATUS)))
goto_out;
if (!dm_task_set_uuid(new_dmt, dm_task_get_uuid(dmt)))
goto_out;
/* Non-blocking status read */
if (!dm_task_no_flush(new_dmt))
log_warn("WARNING: Can't set no_flush for dm status.");
if (!dm_task_run(new_dmt))
goto_out;
dmt = new_dmt;
}
dm_get_next_target(dmt, next, &start, &length, &target_type, &params);
if (!target_type || (strcmp(target_type, "vdo") != 0)) {
log_error("Invalid target type.");
goto out;
}
if (!dm_vdo_status_parse(state->mem, params, &vdop)) {
log_error("Failed to parse status.");
goto out;
}
state->percent = dm_make_percent(vdop.status->used_blocks,
vdop.status->total_blocks);
#if VDO_DEBUG
log_debug("VDO %s status %.2f%% " FMTu64 "/" FMTu64 ".",
state->name, dm_percent_to_round_float(state->percent, 2),
vdop.status->used_blocks, vdop.status->total_blocks);
#endif
/* VDO pool size had changed. Clear the threshold. */
if (state->known_data_size != vdop.status->total_blocks) {
state->percent_check = CHECK_MINIMUM;
state->known_data_size = vdop.status->total_blocks;
state->fails = 0;
}
/*
* Trigger action when threshold boundary is exceeded.
* Report 80% threshold warning when it's used above 80%.
* Only 100% is an exception, as it cannot be surpassed, so the policy
* action is called for: >50%, >55% ... >95%, 100%
*/
if ((state->percent > WARNING_THRESH) &&
(state->percent > state->percent_check))
log_warn("WARNING: VDO %s %s is now %.2f%% full.",
state->name, device,
dm_percent_to_round_float(state->percent, 2));
if (state->percent > CHECK_MINIMUM) {
/* Run action when usage raised more than CHECK_STEP since the last time */
if (state->percent > state->percent_check)
needs_policy = 1;
state->percent_check = (state->percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->percent_check == DM_PERCENT_100)
state->percent_check--; /* Can't get bigger than 100% */
} else
state->percent_check = CHECK_MINIMUM;
/* Reduce the number of _use_policy() calls with a power-of-2 backoff until the interval reaches MAX_FAILS.
* This avoids an excessive number of error retries while still logging status messages regularly,
* e.g. a PV could have been pvmoved and the VG/LV locked for a while...
*/
if (state->fails) {
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
return;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;
state->fails = needs_policy = 1; /* Retry failing command */
} else
state->max_fails = 1; /* Reset on success */
/* FIXME: ATM nothing can be done, drop 0, once it becomes useful */
if (0 && needs_policy)
_use_policy(dmt, state);
out:
if (vdop.status)
dm_pool_free(state->mem, vdop.status);
if (new_dmt)
dm_task_destroy(new_dmt);
}
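
Two worked examples for the arithmetic above, ignoring the DM_PERCENT_1 scaling (it cancels out of the integer division); the concrete percentages are only illustrations.

/*
 * Threshold stepping (CHECK_STEP = 5%, CHECK_MINIMUM = 50%):
 *
 *   usage 52%, last check 50%  -> needs_policy is set (the "0 &&" FIXME
 *                                 above currently suppresses the call),
 *                                 next check = (52/5 + 1) * 5 = 55%
 *   usage 61%                  -> 61% > 55%, next check = (61/5 + 1) * 5 = 65%
 *   usage 99%                  -> next check would be 100%, which is
 *                                 decremented so a later 100% reading
 *                                 still triggers.
 *
 * Failure backoff: max_fails doubles after each failed policy call
 * (1, 2, 4, ... capped at MAX_FAILS = 256); with events roughly every
 * 10 seconds that stretches the retry interval towards the ~42 minutes
 * mentioned next to MAX_FAILS above.
 */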
/* Handle SIGCHLD for a thread */
static void _sig_child(int signum __attribute__((unused)))
{
/* empty SIG_IGN */;
}
/* Setup handler for SIGCHLD when executing external command
* to get quick 'waitpid()' reaction
* It will interrupt syscall just like SIGALRM and
* invoke process_event().
*/
static void _init_thread_signals(struct dso_state *state)
{
struct sigaction act = { .sa_handler = _sig_child };
sigset_t my_sigset;
sigemptyset(&my_sigset);
if (sigaction(SIGCHLD, &act, NULL))
log_warn("WARNING: Failed to set SIGCHLD action.");
else if (sigaddset(&my_sigset, SIGCHLD))
log_warn("WARNING: Failed to add SIGCHLD to set.");
else if (pthread_sigmask(SIG_UNBLOCK, &my_sigset, &state->old_sigset))
log_warn("WARNING: Failed to unblock SIGCHLD.");
else
state->restore_sigset = 1;
}
static void _restore_thread_signals(struct dso_state *state)
{
if (state->restore_sigset &&
pthread_sigmask(SIG_SETMASK, &state->old_sigset, NULL))
log_warn("WARNING: Failed to block SIGCHLD.");
}
int register_device(const char *device,
const char *uuid,
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state;
const char *cmd;
char *str;
char cmd_str[PATH_MAX + 128 + 2]; /* cmd ' ' vg/lv \0 */
const char *name = "pool";
if (!dmeventd_lvm2_init_with_pool("vdo_pool_state", state))
goto_bad;
state->cmd_str = "";
/* Search for command for LVM- prefixed devices only */
cmd = (strncmp(uuid, "LVM-", 4) == 0) ? "_dmeventd_vdo_command" : "";
if (!dmeventd_lvm2_command(state->mem, cmd_str, sizeof(cmd_str), cmd, device))
goto_bad;
if (strncmp(cmd_str, "lvm ", 4) == 0) {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str + 4))) {
log_error("Failed to copy lvm VDO command.");
goto bad;
}
} else if (cmd_str[0] == '/') {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str))) {
log_error("Failed to copy VDO command.");
goto bad;
}
/* Find last space before 'vg/lv' */
if (!(str = strrchr(state->cmd_str, ' ')))
goto inval;
if (!(state->argv[0] = dm_pool_strndup(state->mem, state->cmd_str,
str - state->cmd_str))) {
log_error("Failed to copy command.");
goto bad;
}
state->argv[1] = str + 1; /* 1 argument - vg/lv */
_init_thread_signals(state);
} else if (cmd[0] == 0) {
state->name = "volume"; /* What to use with 'others?' */
} else /* Unsupported command format */
goto inval;
state->pid = -1;
state->name = name;
*user = state;
log_info("Monitoring VDO %s %s.", name, device);
return 1;
inval:
log_error("Invalid command for monitoring: %s.", cmd_str);
bad:
log_error("Failed to monitor VDO %s %s.", name, device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}
int unregister_device(const char *device,
const char *uuid __attribute__((unused)),
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state = *user;
const char *name = state->name;
int i;
for (i = 0; !_wait_for_pid(state) && (i < 6); ++i) {
if (i == 0)
/* Give it 2 seconds, then try to terminate & kill it */
log_verbose("Child %d still not finished (%s) waiting.",
state->pid, state->cmd_str);
else if (i == 3) {
log_warn("WARNING: Terminating child %d.", state->pid);
kill(state->pid, SIGINT);
kill(state->pid, SIGTERM);
} else if (i == 5) {
log_warn("WARNING: Killing child %d.", state->pid);
kill(state->pid, SIGKILL);
}
sleep(1);
}
if (state->pid != -1)
log_warn("WARNING: Cannot kill child %d!", state->pid);
_restore_thread_signals(state);
dmeventd_lvm2_exit_with_pool(state);
log_info("No longer monitoring VDO %s %s.", name, device);
return 1;
}

View File

@@ -1,2 +1 @@
dmsetup
dmfilemapd

View File

@@ -0,0 +1,66 @@
#
# Copyright (C) 2016 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES = dmfilemapd.c
TARGETS = dmfilemapd
.PHONY: install_dmfilemapd install_dmfilemapd_static
INSTALL_DMFILEMAPD_TARGETS = install_dmfilemapd_dynamic
CLEAN_TARGETS = dmfilemapd.static
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
CFLOW_TARGET = dmfilemapd
include $(top_builddir)/make.tmpl
all: device-mapper
device-mapper: $(TARGETS)
CFLAGS_dmfilemapd.o += $(EXTRA_EXEC_CFLAGS)
LIBS += -ldevmapper
dmfilemapd: $(LIB_SHARED) dmfilemapd.o
$(CC) $(CFLAGS) $(LDFLAGS) $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS) \
-o $@ dmfilemapd.o $(DL_LIBS) $(LIBS)
dmfilemapd.static: $(LIB_STATIC) dmfilemapd.o $(interfacebuilddir)/libdevmapper.a
$(CC) $(CFLAGS) $(LDFLAGS) $(ELDFLAGS) -static -L$(interfacebuilddir) \
-o $@ dmfilemapd.o $(DL_LIBS) $(LIBS) $(STATIC_LIBS)
ifneq ("$(CFLOW_CMD)", "")
CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
-include $(top_builddir)/libdm/libdevmapper.cflow
-include $(top_builddir)/lib/liblvm-internal.cflow
-include $(top_builddir)/lib/liblvm2cmd.cflow
-include $(top_builddir)/daemons/dmfilemapd/$(LIB_NAME).cflow
endif
install_dmfilemapd_dynamic: dmfilemapd
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
install_dmfilemapd_static: dmfilemapd.static
$(INSTALL_PROGRAM) -D $< $(staticdir)/$(<F)
install_dmfilemapd: $(INSTALL_DMFILEMAPD_TARGETS)
install: install_dmfilemapd
install_device-mapper: install_dmfilemapd

View File

@@ -14,8 +14,11 @@
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "util.h"
#include "libdm/misc/dm-logging.h"
#include "tool.h"
#include "dm-logging.h"
#include "defaults.h"
#include <sys/types.h>
#include <sys/stat.h>
@@ -26,15 +29,13 @@
#include <ctype.h>
#ifdef __linux__
# include "libdm/misc/kdev_t.h"
# include "kdev_t.h"
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
# define MKDEV(x,y) makedev((x),(y))
#endif
#define DEFAULT_PROC_DIR "/proc"
/* limit to two updates/sec */
#define FILEMAPD_WAIT_USECS 500000
@@ -310,7 +311,7 @@ static int _parse_args(int argc, char **argv, struct filemap_monitor *fm)
return 0;
}
fm->path = strdup(argv[0]);
fm->path = dm_strdup(argv[0]);
if (!fm->path) {
_early_log("Could not allocate memory for path argument.");
return 0;
@@ -537,8 +538,8 @@ static void _filemap_monitor_destroy(struct filemap_monitor *fm)
_filemap_monitor_end_notify(fm);
_filemap_monitor_close_fd(fm);
}
free((void *) fm->program_id);
free(fm->path);
dm_free((void *) fm->program_id);
dm_free(fm->path);
}
static int _filemap_monitor_check_same_file(int fd1, int fd2)
@@ -698,7 +699,7 @@ static int _update_regions(struct dm_stats *dms, struct filemap_monitor *fm)
fm->group_id, regions[0]);
fm->group_id = regions[0];
}
free(regions);
dm_free(regions);
fm->nr_regions = nr_regions;
return 1;
}
@@ -740,7 +741,7 @@ static int _dmfilemapd(struct filemap_monitor *fm)
*/
program_id = dm_stats_get_region_program_id(dms, fm->group_id);
if (program_id)
fm->program_id = strdup(program_id);
fm->program_id = dm_strdup(program_id);
else
fm->program_id = NULL;
dm_stats_set_program_id(dms, 1, program_id);
@@ -801,7 +802,7 @@ bad:
return 1;
}
static const char * const _mode_names[] = {
static const char * _mode_names[] = {
"inode",
"path"
};
@@ -816,7 +817,7 @@ int main(int argc, char **argv)
memset(&fm, 0, sizeof(fm));
if (!_parse_args(argc, argv, &fm)) {
free(fm.path);
dm_free(fm.path);
return 1;
}
@@ -826,10 +827,8 @@ int main(int argc, char **argv)
"mode=%s, path=%s", fm.fd, fm.group_id,
_mode_names[fm.mode], fm.path);
if (!_foreground && !_daemonise(&fm)) {
free(fm.path);
if (!_foreground && !_daemonise(&fm))
return 1;
}
return _dmfilemapd(&fm);
}

View File

@@ -1,4 +1 @@
path.py
lvmdbusd
lvmdb.py
lvm_shell_proxy.py

View File

@@ -26,7 +26,9 @@ LVMDBUS_SRCDIR_FILES = \
__init__.py \
job.py \
loader.py \
lvmdb.py \
main.py \
lvm_shell_proxy.py \
lv.py \
manager.py \
objectmanager.py \
@@ -38,21 +40,14 @@ LVMDBUS_SRCDIR_FILES = \
vg.py
LVMDBUS_BUILDDIR_FILES = \
lvmdb.py \
lvm_shell_proxy.py \
path.py
LVMDBUSD = lvmdbusd
CLEAN_DIRS += __pycache__
LVMDBUSD = $(srcdir)/lvmdbusd
include $(top_builddir)/make.tmpl
.PHONY: install_lvmdbusd
all:
test -x $(LVMDBUSD) || chmod 755 $(LVMDBUSD)
install_lvmdbusd:
$(INSTALL_DIR) $(sbindir)
$(INSTALL_SCRIPT) $(LVMDBUSD) $(sbindir)
@@ -68,5 +63,4 @@ install_lvm2: install_lvmdbusd
install: install_lvm2
DISTCLEAN_TARGETS+= \
$(LVMDBUS_BUILDDIR_FILES) \
$(LVMDBUSD)
$(LVMDBUS_BUILDDIR_FILES)

View File

@@ -232,6 +232,7 @@ class LvState(State):
@utils.dbus_property(LV_COMMON_INTERFACE, 'Attr', 's')
@utils.dbus_property(LV_COMMON_INTERFACE, 'DataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'SnapPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'DataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'MetaDataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'CopyPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'SyncPercent', 'u')
@@ -497,7 +498,7 @@ class Lv(LvCommon):
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes // 80
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder

View File

@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#

View File

@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#

View File

@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#

2
daemons/lvmetad/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
lvmetad
lvmetactl

View File

@@ -0,0 +1,62 @@
#
# Copyright (C) 2011-2012 Red Hat, Inc.
#
# This file is part of LVM2.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES = lvmetad-core.c
SOURCES2 = lvmetactl.c
TARGETS = lvmetad lvmetactl
.PHONY: install_lvmetad
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
CFLOW_TARGET = lvmetad
include $(top_builddir)/make.tmpl
CFLAGS_lvmetactl.o += $(EXTRA_EXEC_CFLAGS)
CFLAGS_lvmetad-core.o += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) -ldevmapper $(PTHREAD_LIBS)
lvmetad: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) -ldaemonserver $(LIBS)
lvmetactl: lvmetactl.o $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmetactl.o $(LIBS)
CLEAN_TARGETS += lvmetactl.o
# TODO: No idea. No idea how to test either.
#ifneq ("$(CFLOW_CMD)", "")
#CFLOW_SOURCES = $(addprefix $(srcdir)/, $(SOURCES))
#-include $(top_builddir)/libdm/libdevmapper.cflow
#-include $(top_builddir)/lib/liblvm-internal.cflow
#-include $(top_builddir)/lib/liblvm2cmd.cflow
#-include $(top_builddir)/daemons/dmeventd/$(LIB_NAME).cflow
#-include $(top_builddir)/daemons/dmeventd/plugins/mirror/$(LIB_NAME)-lvm2mirror.cflow
#endif
install_lvmetad: lvmetad
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)
install_lvm2: install_lvmetad
install: install_lvm2

249
daemons/lvmetad/lvmetactl.c Normal file
View File

@@ -0,0 +1,249 @@
/*
* Copyright (C) 2014 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*/
#include "tool.h"
#include "lvmetad-client.h"
daemon_handle h;
static void print_reply(daemon_reply reply)
{
const char *a = daemon_reply_str(reply, "response", NULL);
const char *b = daemon_reply_str(reply, "status", NULL);
const char *c = daemon_reply_str(reply, "reason", NULL);
printf("response \"%s\" status \"%s\" reason \"%s\"\n",
a ? a : "", b ? b : "", c ? c : "");
}
int main(int argc, char **argv)
{
daemon_reply reply;
char *cmd;
char *uuid;
char *name;
int val;
int ver;
if (argc < 2) {
printf("lvmetactl dump\n");
printf("lvmetactl pv_list\n");
printf("lvmetactl vg_list\n");
printf("lvmetactl get_global_info\n");
printf("lvmetactl vg_lookup_name <name>\n");
printf("lvmetactl vg_lookup_uuid <uuid>\n");
printf("lvmetactl pv_lookup_uuid <uuid>\n");
printf("lvmetactl set_global_invalid 0|1\n");
printf("lvmetactl set_global_disable 0|1\n");
printf("lvmetactl set_vg_version <uuid> <name> <version>\n");
printf("lvmetactl vg_lock_type <uuid>\n");
return -1;
}
cmd = argv[1];
h = lvmetad_open(NULL);
if (!strcmp(cmd, "dump")) {
reply = daemon_send_simple(h, "dump",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "pv_list")) {
reply = daemon_send_simple(h, "pv_list",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_list")) {
reply = daemon_send_simple(h, "vg_list",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "get_global_info")) {
reply = daemon_send_simple(h, "get_global_info",
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "set_global_invalid")) {
if (argc < 3) {
printf("set_global_invalid 0|1\n");
return -1;
}
val = atoi(argv[2]);
reply = daemon_send_simple(h, "set_global_info",
"global_invalid = " FMTd64, (int64_t) val,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
print_reply(reply);
} else if (!strcmp(cmd, "set_global_disable")) {
if (argc < 3) {
printf("set_global_disable 0|1\n");
return -1;
}
val = atoi(argv[2]);
reply = daemon_send_simple(h, "set_global_info",
"global_disable = " FMTd64, (int64_t) val,
"disable_reason = %s", LVMETAD_DISABLE_REASON_DIRECT,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
print_reply(reply);
} else if (!strcmp(cmd, "set_vg_version")) {
if (argc < 5) {
printf("set_vg_version <uuid> <name> <ver>\n");
return -1;
}
uuid = argv[2];
name = argv[3];
ver = atoi(argv[4]);
if ((strlen(uuid) == 1) && (uuid[0] == '-'))
uuid = NULL;
if ((strlen(name) == 1) && (name[0] == '-'))
name = NULL;
if (uuid && name) {
reply = daemon_send_simple(h, "set_vg_info",
"uuid = %s", uuid,
"name = %s", name,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else if (uuid) {
reply = daemon_send_simple(h, "set_vg_info",
"uuid = %s", uuid,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else if (name) {
reply = daemon_send_simple(h, "set_vg_info",
"name = %s", name,
"version = " FMTd64, (int64_t) ver,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
} else {
printf("name or uuid required\n");
return -1;
}
print_reply(reply);
} else if (!strcmp(cmd, "vg_lookup_name")) {
if (argc < 3) {
printf("vg_lookup_name <name>\n");
return -1;
}
name = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"name = %s", name,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_lookup_uuid")) {
if (argc < 3) {
printf("vg_lookup_uuid <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else if (!strcmp(cmd, "vg_lock_type")) {
struct dm_config_node *metadata;
const char *lock_type;
if (argc < 3) {
printf("vg_lock_type <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "vg_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
/* printf("%s\n", reply.buffer.mem); */
metadata = dm_config_find_node(reply.cft->root, "metadata");
if (!metadata) {
printf("no metadata\n");
goto out;
}
lock_type = dm_config_find_str(metadata, "metadata/lock_type", NULL);
if (!lock_type) {
printf("no lock_type\n");
goto out;
}
printf("lock_type %s\n", lock_type);
} else if (!strcmp(cmd, "pv_lookup_uuid")) {
if (argc < 3) {
printf("pv_lookup_uuid <uuid>\n");
return -1;
}
uuid = argv[2];
reply = daemon_send_simple(h, "pv_lookup",
"uuid = %s", uuid,
"token = %s", "skip",
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", "lvmetactl",
NULL);
printf("%s\n", reply.buffer.mem);
} else {
printf("unknown command\n");
goto out_close;
}
out:
daemon_reply_destroy(reply);
out_close:
daemon_close(h);
return 0;
}


@@ -0,0 +1,91 @@
/*
* Copyright (C) 2011-2012 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef _LVM_LVMETAD_CLIENT_H
#define _LVM_LVMETAD_CLIENT_H
#include "daemon-client.h"
#define LVMETAD_SOCKET DEFAULT_RUN_DIR "/lvmetad.socket"
#define LVMETAD_TOKEN_UPDATE_IN_PROGRESS "update in progress"
#define LVMETAD_DISABLE_REASON_DIRECT "DIRECT"
#define LVMETAD_DISABLE_REASON_LVM1 "LVM1"
#define LVMETAD_DISABLE_REASON_DUPLICATES "DUPLICATES"
#define LVMETAD_DISABLE_REASON_VGRESTORE "VGRESTORE"
#define LVMETAD_DISABLE_REASON_REPAIR "REPAIR"
struct volume_group;
/* Different types of replies we may get from lvmetad. */
typedef struct {
daemon_reply r;
const char **uuids; /* NULL terminated array */
} lvmetad_uuidlist;
typedef struct {
daemon_reply r;
struct dm_config_tree *cft;
} lvmetad_vg;
/* Get a list of VG UUIDs that match a given VG name. */
lvmetad_uuidlist lvmetad_lookup_vgname(daemon_handle h, const char *name);
/* Get the metadata of a single VG, identified by UUID. */
lvmetad_vg lvmetad_get_vg(daemon_handle h, const char *uuid);
/*
* Add and remove PVs on demand. Udev-driven systems will use this interface
* instead of scanning.
*/
daemon_reply lvmetad_add_pv(daemon_handle h, const char *pv_uuid, const char *mda_content);
daemon_reply lvmetad_remove_pv(daemon_handle h, const char *pv_uuid);
/* Trigger a full disk scan, throwing away all caches. XXX do we eventually want
* this? Probably not yet, anyway.
* daemon_reply lvmetad_rescan(daemon_handle h);
*/
/*
* Update the version of metadata of a volume group. The VG has to be locked for
* writing for this, and the VG metadata here has to match whatever has been
* written to the disk (under this lock). This initially avoids the requirement
* for lvmetad to write to disk (in later revisions, lvmetad_supersede_vg may
* also do the writing, or another function may be added to do that).
*/
daemon_reply lvmetad_supersede_vg(daemon_handle h, struct volume_group *vg);
/* Wrappers to open/close connection */
static inline daemon_handle lvmetad_open(const char *socket)
{
daemon_info lvmetad_info = {
.path = "lvmetad",
.socket = socket ?: LVMETAD_SOCKET,
.protocol = "lvmetad",
.protocol_version = 1,
.autostart = 0
};
return daemon_open(lvmetad_info);
}
static inline void lvmetad_close(daemon_handle h)
{
daemon_close(h);
}
#endif

File diff suppressed because it is too large.

16 daemons/lvmetad/test.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
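# Usage: ./test.sh <lib-dir>
# Without a second argument the script re-runs itself under sudo; the
# sudo'd instance starts ./lvmetad under valgrind and runs ./testclient
# against it.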
export LD_LIBRARY_PATH="$1"
test -n "$2" && {
rm -f /var/run/lvmetad.{socket,pid}
chmod +rx lvmetad
valgrind ./lvmetad -f &
PID=$!
sleep 1
./testclient
kill $PID
exit 0
}
sudo ./test.sh "$1" .


@@ -0,0 +1,147 @@
/*
* Copyright (C) 2011-2014 Red Hat, Inc.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License v.2.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "tool.h"
#include "lvmetad-client.h"
#include "label.h"
#include "lvmcache.h"
#include "metadata.h"
const char *uuid1 = "abcd-efgh";
const char *uuid2 = "bbcd-efgh";
const char *vgid = "yada-yada";
const char *uuid3 = "cbcd-efgh";
const char *metadata2 = "{\n"
"id = \"yada-yada\"\n"
"seqno = 15\n"
"status = [\"READ\", \"WRITE\"]\n"
"flags = []\n"
"extent_size = 8192\n"
"physical_volumes {\n"
" pv0 {\n"
" id = \"abcd-efgh\"\n"
" }\n"
" pv1 {\n"
" id = \"bbcd-efgh\"\n"
" }\n"
" pv2 {\n"
" id = \"cbcd-efgh\"\n"
" }\n"
"}\n"
"}\n";
void _handle_reply(daemon_reply reply) {
const char *repl = daemon_reply_str(reply, "response", "");
const char *status = daemon_reply_str(reply, "status", NULL);
const char *vgid = daemon_reply_str(reply, "vgid", NULL);
fprintf(stderr, "[C] REPLY: %s\n", repl);
if (!strcmp(repl, "failed"))
fprintf(stderr, "[C] REASON: %s\n", daemon_reply_str(reply, "reason", "unknown"));
if (vgid)
fprintf(stderr, "[C] VGID: %s\n", vgid);
if (status)
fprintf(stderr, "[C] STATUS: %s\n", status);
daemon_reply_destroy(reply);
}
void _pv_add(daemon_handle h, const char *uuid, const char *metadata)
{
daemon_reply reply = daemon_send_simple(h, "pv_add", "uuid = %s", uuid,
"metadata = %b", metadata,
NULL);
_handle_reply(reply);
}
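/*
* Read the PV label and VG metadata directly from device 'fn' and
* send them to lvmetad with a pv_add request.
*/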
int scan(daemon_handle h, char *fn) {
struct device *dev = dev_cache_get(fn, NULL);
struct label *label;
if (!dev) {
fprintf(stderr, "[C] device not found: %s\n", fn);
return 0;
}
if (!label_read(dev, &label, 0)) {
fprintf(stderr, "[C] no label found on %s\n", fn);
return 0;
}
char uuid[64];
if (!id_write_format(dev->pvid, uuid, 64)) {
fprintf(stderr, "[C] Failed to format PV UUID for %s\n", dev_name(dev));
return 0;
}
fprintf(stderr, "[C] found PV: %s\n", uuid);
struct lvmcache_info *info = (struct lvmcache_info *) label->info;
struct physical_volume pv = { 0, };
if (!(info->fmt->ops->pv_read(info->fmt, dev_name(dev), &pv, 0))) {
fprintf(stderr, "[C] Failed to read PV %s", dev_name(dev));
return;
}
struct format_instance_ctx fic;
struct format_instance *fid = info->fmt->ops->create_instance(info->fmt, &fic);
struct metadata_area *mda;
struct volume_group *vg = NULL;
dm_list_iterate_items(mda, &info->mdas) {
struct volume_group *this = mda->ops->vg_read(fid, "", mda);
/* keep the copy of the metadata with the highest seqno */
if (this && (!vg || this->seqno > vg->seqno))
vg = this;
}
if (vg) {
char *buf = NULL;
/* TODO. This is not entirely correct, since export_vg_to_buffer
* adds trailing garbage to the buffer. We may need to use
* export_vg_to_config_tree and format the buffer ourselves. It
* does, however, work for now, since the garbage is well
* formatted and has no conflicting keys with the rest of the
* request. */
export_vg_to_buffer(vg, &buf);
daemon_reply reply =
daemon_send_simple(h, "pv_add", "uuid = %s", uuid,
"metadata = %b", strchr(buf, '{'),
NULL);
_handle_reply(reply);
}
return 0;
}
void _dump_vg(daemon_handle h, const char *uuid)
{
daemon_reply reply = daemon_send_simple(h, "vg_by_uuid", "uuid = %s", uuid, NULL);
fprintf(stderr, "[C] reply buffer: %s\n", reply.buffer);
daemon_reply_destroy(reply);
}
int main(int argc, char **argv) {
daemon_handle h = lvmetad_open(NULL);
/* FIXME Missing error path */
if (argc > 1) {
int i;
struct cmd_context *cmd = create_toolcontext(0, NULL, 0, 0, 1, 1);
for (i = 1; i < argc; ++i) {
scan(h, argv[i]);
}
destroy_toolcontext(cmd);
/* FIXME Missing lvmetad_close() */
return 0;
}
_pv_add(h, uuid1, NULL);
_pv_add(h, uuid2, metadata2);
_dump_vg(h, vgid);
_pv_add(h, uuid3, NULL);
daemon_close(h); /* FIXME lvmetad_close? */
return 0;
}


@@ -27,8 +27,6 @@ ifeq ("@BUILD_LOCKDDLM@", "yes")
LOCK_LIBS += -ldlm_lt
endif
SOURCES2 = lvmlockctl.c
TARGETS = lvmlockd lvmlockctl
.PHONY: install_lvmlockd
@@ -38,14 +36,14 @@ include $(top_builddir)/make.tmpl
CFLAGS += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) $(PTHREAD_LIBS)
LIBS += $(RT_LIBS) $(DAEMON_LIBS) -ldevmapper $(PTHREAD_LIBS)
lvmlockd: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LOCK_LIBS) -ldaemonserver $(INTERNAL_LIBS) $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LOCK_LIBS) -ldaemonserver $(LIBS)
lvmlockctl: lvmlockctl.o $(top_builddir)/libdaemon/client/libdaemonclient.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmlockctl.o $(INTERNAL_LIBS) $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ lvmlockctl.o $(LIBS)
install_lvmlockd: lvmlockd
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)


@@ -8,9 +8,9 @@
* of the GNU Lesser General Public License v.2.1.
*/
#include "tools/tool.h"
#include "tool.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "lvmlockd-client.h"
#include <stddef.h>
#include <getopt.h>


@@ -11,7 +11,7 @@
#ifndef _LVM_LVMLOCKD_CLIENT_H
#define _LVM_LVMLOCKD_CLIENT_H
#include "libdaemon/client/daemon-client.h"
#include "daemon-client.h"
#define LVMLOCKD_SOCKET DEFAULT_RUN_DIR "/lvmlockd.socket"
@@ -49,6 +49,5 @@ static inline void lvmlockd_close(daemon_handle h)
#define ELOCKIO 218 /* sanlock io errors during lock op, may be transient. */
#define EREMOVED 219
#define EDEVOPEN 220 /* sanlock failed to open lvmlock LV */
#define ELMERR 221
#endif /* _LVM_LVMLOCKD_CLIENT_H */


@@ -12,13 +12,14 @@
#define _ISOC99_SOURCE
#define _REENTRANT
#include "tools/tool.h"
#include "tool.h"
#include "libdaemon/client/daemon-io.h"
#include "daemon-io.h"
#include "daemon-server.h"
#include "lvm-version.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "device_mapper/misc/dm-ioctl.h"
#include "lvmetad-client.h"
#include "lvmlockd-client.h"
#include "dm-ioctl.h" /* for DM_UUID_LEN */
/* #include <assert.h> */
#include <errno.h>
@@ -143,6 +144,10 @@ static const int lvmlockd_protocol_version = 1;
static int daemon_quit;
static int adopt_opt;
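/* A single lvmetad connection is shared by all threads; lvmetad_mutex serializes access to it. */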
static daemon_handle lvmetad_handle;
static pthread_mutex_t lvmetad_mutex;
static int lvmetad_connected;
/*
* We use a separate socket for dumping daemon info.
* This will not interfere with normal operations, and allows
@@ -1004,6 +1009,52 @@ static void add_work_action(struct action *act)
pthread_mutex_unlock(&worker_mutex);
}
static daemon_reply send_lvmetad(const char *id, ...)
{
daemon_reply reply;
va_list ap;
int retries = 0;
int err;
va_start(ap, id);
/*
* mutex is used because all threads share a single
* lvmetad connection/handle.
*/
pthread_mutex_lock(&lvmetad_mutex);
retry:
if (!lvmetad_connected) {
lvmetad_handle = lvmetad_open(NULL);
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0) {
err = lvmetad_handle.error ?: lvmetad_handle.socket_fd;
pthread_mutex_unlock(&lvmetad_mutex);
log_error("lvmetad_open reconnect error %d", err);
memset(&reply, 0, sizeof(reply));
reply.error = err;
va_end(ap);
return reply;
} else {
log_debug("lvmetad reconnected");
lvmetad_connected = 1;
}
}
reply = daemon_send_simple_v(lvmetad_handle, id, ap);
/* lvmetad may have been restarted */
if ((reply.error == ECONNRESET) && (retries < 2)) {
daemon_close(lvmetad_handle);
lvmetad_connected = 0;
retries++;
goto retry;
}
pthread_mutex_unlock(&lvmetad_mutex);
va_end(ap);
return reply;
}
static int res_lock(struct lockspace *ls, struct resource *r, struct action *act, int *retry)
{
struct lock *lk;
@@ -1199,18 +1250,6 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
rv = -EREMOVED;
}
/*
* lvmetad is no longer used, but the infrastructure for
* distributed cache validation remains. The points
* where vg or global cache state would be invalidated
* remain below and log_debug messages point out where
* they would occur.
*
* The comments related to "lvmetad" remain because they
* describe how some other local cache like lvmetad would
* be invalidated here.
*/
/*
* r is vglk: tell lvmetad to set the vg invalid
* flag, and provide the new r_version. If lvmetad finds
@@ -1226,22 +1265,44 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
* caches, and tell lvmetad to set global invalid to 0.
*/
/*
* lvmetad not running:
* Even if we have not previously found lvmetad running,
* we attempt to connect and invalidate in case it has
* been started while lvmlockd is running. We don't
* want to allow lvmetad to be used with invalid data if
* it happens to be enabled and started after lvmlockd.
*/
if (inval_meta && (r->type == LD_RT_VG)) {
log_debug("S %s R %s res_lock invalidate vg state version %u",
daemon_reply reply;
char *uuid;
log_debug("S %s R %s res_lock set lvmetad vg version %u",
ls->name, r->name, new_version);
if (!ls->vg_uuid[0] || !strcmp(ls->vg_uuid, "none"))
uuid = (char *)"none";
else
uuid = ls->vg_uuid;
reply = send_lvmetad("set_vg_info",
"token = %s", "skip",
"uuid = %s", uuid,
"name = %s", ls->vg_name,
"version = " FMTd64, (int64_t)new_version,
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_vg_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
}
if (inval_meta && (r->type == LD_RT_GL)) {
log_debug("S %s R %s res_lock invalidate global state", ls->name, r->name);
daemon_reply reply;
log_debug("S %s R %s res_lock set lvmetad global invalid",
ls->name, r->name);
reply = send_lvmetad("set_global_info",
"token = %s", "skip",
"global_invalid = " FMTd64, INT64_C(1),
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_global_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
}
/*
@@ -1328,11 +1389,12 @@ static int res_convert(struct lockspace *ls, struct resource *r,
}
rv = lm_convert(ls, r, act->mode, act, r_version);
log_debug("S %s R %s res_convert rv %d", ls->name, r->name, rv);
if (rv < 0)
if (rv < 0) {
log_error("S %s R %s res_convert lm error %d", ls->name, r->name, rv);
return rv;
}
log_debug("S %s R %s res_convert lm done", ls->name, r->name);
if (lk->mode == LD_LK_EX && act->mode == LD_LK_SH) {
r->sh_count = 1;
@@ -4751,7 +4813,7 @@ static void close_client_thread(void)
}
/*
* Get a list of all VGs with a lockd type (sanlock|dlm).
* Get a list of all VGs with a lockd type (sanlock|dlm) from lvmetad.
* We'll match this list against a list of existing lockspaces that are
* found in the lock manager.
*
@@ -4762,9 +4824,6 @@ static void close_client_thread(void)
static int get_lockd_vgs(struct list_head *vg_lockd)
{
/* FIXME: get VGs some other way */
return -1;
#if 0
struct list_head update_vgs;
daemon_reply reply;
struct dm_config_node *cn;
@@ -4921,7 +4980,6 @@ out:
}
return rv;
#endif
}
static char _dm_uuid[DM_UUID_LEN];
@@ -5196,7 +5254,7 @@ static void adopt_locks(void)
gl_use_sanlock = 1;
list_for_each_entry(ls, &vg_lockd, list) {
log_debug("adopt vg %s lock_type %s lock_args %s",
log_debug("adopt lvmetad vg %s lock_type %s lock_args %s",
ls->vg_name, lm_str(ls->lm_type), ls->vg_args);
list_for_each_entry(r, &ls->resources, list)
@@ -5261,7 +5319,7 @@ static void adopt_locks(void)
/*
* LS in ls_found, not in vg_lockd.
* An lvm lockspace found in the lock manager has no
* corresponding VG. This shouldn't usually
* corresponding VG in lvmetad. This shouldn't usually
* happen, but it's possible the VG could have been removed
* while the orphaned lockspace from it was still around.
* Report an error and leave the ls in the lm alone.
@@ -5276,7 +5334,7 @@ static void adopt_locks(void)
/*
* LS in vg_lockd, not in ls_found.
* lockd vgs that do not have an existing lockspace.
* lockd vgs from lvmetad that do not have an existing lockspace.
* This wouldn't be unusual; we just skip the vg.
* But, if the vg has active lvs, then it should have had locks
* and a lockspace. Should we attempt to join the lockspace and
@@ -5328,6 +5386,8 @@ static void adopt_locks(void)
memcpy(act->vg_args, ls->vg_args, MAX_ARGS);
act->host_id = ls->host_id;
/* set act->version from lvmetad data? */
log_debug("adopt add %s vg lockspace %s", lm_str(act->lm_type), act->vg_name);
rv = add_lockspace_thread(ls->name, act->vg_name, act->vg_uuid,
@@ -5786,6 +5846,13 @@ static int main_loop(daemon_state *ds_arg)
setup_worker_thread();
setup_restart();
pthread_mutex_init(&lvmetad_mutex, NULL);
lvmetad_handle = lvmetad_open(NULL);
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0)
log_error("lvmetad_open error %d", lvmetad_handle.error);
else
lvmetad_connected = 1;
/*
* Attempt to rejoin lockspaces and adopt locks from a previous
* instance of lvmlockd that left behind lockspaces/locks.
@@ -5907,6 +5974,7 @@ static int main_loop(daemon_state *ds_arg)
close_worker_thread();
close_client_thread();
closelog();
daemon_close(lvmetad_handle);
return 1; /* libdaemon uses 1 for success */
}


@@ -11,13 +11,13 @@
#define _XOPEN_SOURCE 500 /* pthread */
#define _ISOC99_SOURCE
#include "tools/tool.h"
#include "tool.h"
#include "daemon-server.h"
#include "lib/mm/xlate.h"
#include "xlate.h"
#include "lvmlockd-internal.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "lvmlockd-client.h"
/*
* Using synchronous _wait dlm apis so do not define _REENTRANT and
@@ -508,7 +508,7 @@ lockrv:
}
if (rv < 0) {
log_error("S %s R %s lock_dlm acquire error %d errno %d", ls->name, r->name, rv, errno);
return -ELMERR;
return rv;
}
if (rdd->vb) {
@@ -581,7 +581,6 @@ int lm_convert_dlm(struct lockspace *ls, struct resource *r,
}
if (rv < 0) {
log_error("S %s R %s convert_dlm error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
}
@@ -655,7 +654,6 @@ int lm_unlock_dlm(struct lockspace *ls, struct resource *r,
0, NULL, NULL, NULL);
if (rv < 0) {
log_error("S %s R %s unlock_dlm error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
@@ -699,7 +697,7 @@ int lm_hosts_dlm(struct lockspace *ls, int notify)
return 0;
memset(ls_nodes_path, 0, sizeof(ls_nodes_path));
snprintf(ls_nodes_path, PATH_MAX, "%s/%s/nodes",
snprintf(ls_nodes_path, PATH_MAX-1, "%s/%s/nodes",
DLM_LOCKSPACES_PATH, ls->name);
if (!(ls_dir = opendir(ls_nodes_path)))


@@ -11,13 +11,13 @@
#define _XOPEN_SOURCE 500 /* pthread */
#define _ISOC99_SOURCE
#include "tools/tool.h"
#include "tool.h"
#include "daemon-server.h"
#include "lib/mm/xlate.h"
#include "xlate.h"
#include "lvmlockd-internal.h"
#include "daemons/lvmlockd/lvmlockd-client.h"
#include "lvmlockd-client.h"
#include "sanlock.h"
#include "sanlock_rv.h"
@@ -294,36 +294,6 @@ out:
return host_id;
}
/* Prepare valid /dev/mapper/vgname-lvname with all the mangling */
static int build_dm_path(char *path, size_t path_len,
const char *vg_name, const char *lv_name)
{
struct dm_pool *mem;
char *dm_name;
int rv = 0;
if (!(mem = dm_pool_create("namepool", 1024))) {
log_error("Failed to create mempool.");
return -ENOMEM;
}
if (!(dm_name = dm_build_dm_name(mem, vg_name, lv_name, NULL))) {
log_error("Failed to build dm name for %s/%s.", vg_name, lv_name);
rv = -EINVAL;
goto fail;
}
if ((dm_snprintf(path, path_len, "%s/%s", dm_dir(), dm_name) < 0)) {
log_error("Failed to create path %s/%s.", dm_dir(), dm_name);
rv = -EINVAL;
}
fail:
dm_pool_destroy(mem);
return rv;
}
/*
* vgcreate
*
@@ -366,8 +336,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if (strlen(lock_lv_name) + strlen(lock_args_version) + 2 > MAX_ARGS)
return -EARGS;
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
log_debug("S %s init_vg_san path %s", ls_name, disk.path);
@@ -544,8 +513,7 @@ int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
strncpy(rd.rs.lockspace_name, ls_name, SANLK_NAME_LEN);
rd.rs.num_disks = 1;
if ((rv = build_dm_path(rd.rs.disks[0].path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(rd.rs.disks[0].path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
align_size = sanlock_align(&rd.rs.disks[0]);
if (align_size <= 0) {
@@ -644,8 +612,7 @@ int lm_rename_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_
return rv;
}
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
log_debug("S %s rename_vg_san path %s", ls_name, disk.path);
@@ -1102,10 +1069,10 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
* and appending "lockctl" to get /path/to/lvmlockctl.
*/
memset(killpath, 0, sizeof(killpath));
snprintf(killpath, SANLK_PATH_LEN, "%slockctl", LVM_PATH);
snprintf(killpath, SANLK_PATH_LEN - 1, "%slockctl", LVM_PATH);
memset(killargs, 0, sizeof(killargs));
snprintf(killargs, SANLK_PATH_LEN, "--kill %s", ls->vg_name);
snprintf(killargs, SANLK_PATH_LEN - 1, "--kill %s", ls->vg_name);
rv = check_args_version(ls->vg_args, VG_LOCK_ARGS_MAJOR);
if (rv < 0) {
@@ -1121,8 +1088,8 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
goto fail;
}
if ((ret = build_dm_path(disk_path, SANLK_PATH_LEN, ls->vg_name, lock_lv_name)))
goto fail;
snprintf(disk_path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s",
ls->vg_name, lock_lv_name);
/*
* When a vg is started, the internal sanlock lv should be
@@ -1493,12 +1460,6 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
rv = sanlock_acquire(lms->sock, -1, flags, 1, &rs, &opt);
/*
* errors: translate the sanlock error number to an lvmlockd error.
* We don't want to return an sanlock-specific error number from
* this function to code that doesn't recognize sanlock error numbers.
*/
if (rv == -EAGAIN) {
/*
* It appears that sanlock_acquire returns EAGAIN when we request
@@ -1567,26 +1528,6 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
return -EAGAIN;
}
if (rv == SANLK_AIO_TIMEOUT) {
/*
* sanlock got an i/o timeout when trying to acquire the
* lease on disk.
*/
log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
*retry = 0;
return -EAGAIN;
}
if (rv == SANLK_DBLOCK_LVER || rv == SANLK_DBLOCK_MBAL) {
/*
* There was contention with another host for the lease,
* and we lost.
*/
log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
*retry = 0;
return -EAGAIN;
}
if (rv == SANLK_ACQUIRE_OWNED_RETRY) {
/*
* The lock is held by a failed host, and will eventually
@@ -1637,25 +1578,15 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
if (rv == -ENOSPC)
rv = -ELOCKIO;
/*
* generic error number for sanlock errors that we are not
* catching above.
*/
return -ELMERR;
return rv;
}
/*
* sanlock acquire success (rv 0)
*/
if (rds->vb) {
rv = sanlock_get_lvb(0, rs, (char *)&vb, sizeof(vb));
if (rv < 0) {
log_error("S %s R %s lock_san get_lvb error %d", ls->name, r->name, rv);
memset(rds->vb, 0, sizeof(struct val_blk));
memset(vb_out, 0, sizeof(struct val_blk));
/* the lock is still acquired, the vb values considered invalid */
rv = 0;
goto out;
}
@@ -1708,7 +1639,6 @@ int lm_convert_sanlock(struct lockspace *ls, struct resource *r,
if (rv < 0) {
log_error("S %s R %s convert_san set_lvb error %d",
ls->name, r->name, rv);
return -ELMERR;
}
}
@@ -1721,35 +1651,14 @@ int lm_convert_sanlock(struct lockspace *ls, struct resource *r,
if (daemon_test)
return 0;
/*
* Don't block waiting for a failed lease to expire since it causes
* sanlock_convert to block for a long time, which would prevent this
* thread from processing other lock requests.
*
* FIXME: SANLK_CONVERT_OWNER_NOWAIT is the same as SANLK_ACQUIRE_OWNER_NOWAIT.
* Change to use the CONVERT define when the latest sanlock version has it.
*/
flags |= SANLK_ACQUIRE_OWNER_NOWAIT;
rv = sanlock_convert(lms->sock, -1, flags, rs);
if (!rv)
return 0;
switch (rv) {
case -EAGAIN:
case SANLK_ACQUIRE_IDLIVE:
case SANLK_ACQUIRE_OWNED:
case SANLK_ACQUIRE_OWNED_RETRY:
case SANLK_ACQUIRE_OTHER:
case SANLK_AIO_TIMEOUT:
case SANLK_DBLOCK_LVER:
case SANLK_DBLOCK_MBAL:
/* expected errors from known/normal cases like lock contention or io timeouts */
log_debug("S %s R %s convert_san error %d", ls->name, r->name, rv);
if (rv == -EAGAIN) {
/* FIXME: When could this happen? Should something different be done? */
log_error("S %s R %s convert_san EAGAIN", ls->name, r->name);
return -EAGAIN;
default:
}
if (rv < 0) {
log_error("S %s R %s convert_san convert error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
@@ -1786,7 +1695,6 @@ static int release_rename(struct lockspace *ls, struct resource *r)
rv = sanlock_release(lms->sock, -1, SANLK_REL_RENAME, 2, res_args);
if (rv < 0) {
log_error("S %s R %s unlock_san release rename error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
free(res_args);
@@ -1843,7 +1751,6 @@ int lm_unlock_sanlock(struct lockspace *ls, struct resource *r,
if (rv < 0) {
log_error("S %s R %s unlock_san set_lvb error %d",
ls->name, r->name, rv);
return -ELMERR;
}
}
@@ -1862,8 +1769,6 @@ int lm_unlock_sanlock(struct lockspace *ls, struct resource *r,
if (rv == -EIO)
rv = -ELOCKIO;
else if (rv < 0)
rv = -ELMERR;
return rv;
}


@@ -30,11 +30,11 @@ include $(top_builddir)/make.tmpl
CFLAGS += $(EXTRA_EXEC_CFLAGS)
INCLUDES += -I$(top_srcdir)/libdaemon/server
LDFLAGS += -L$(top_builddir)/libdaemon/server $(EXTRA_EXEC_LDFLAGS) $(ELDFLAGS)
LIBS += $(DAEMON_LIBS) -ldaemonserver $(PTHREAD_LIBS)
LIBS += $(DAEMON_LIBS) -ldaemonserver -ldevmapper $(PTHREAD_LIBS)
lvmpolld: $(OBJECTS) $(top_builddir)/libdaemon/client/libdaemonclient.a \
$(top_builddir)/libdaemon/server/libdaemonserver.a
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(INTERNAL_LIBS) $(LIBS)
$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJECTS) $(LIBS)
install_lvmpolld: lvmpolld
$(INSTALL_PROGRAM) -D $< $(sbindir)/$(<F)


@@ -36,7 +36,7 @@ static int add_to_cmd_arr(const char ***cmdargv, const char *str, unsigned *ind)
const char **newargv;
if (*ind && !(*ind % MIN_ARGV_SIZE)) {
newargv = realloc(*cmdargv, (*ind / MIN_ARGV_SIZE + 1) * MIN_ARGV_SIZE * sizeof(char *));
newargv = dm_realloc(*cmdargv, (*ind / MIN_ARGV_SIZE + 1) * MIN_ARGV_SIZE * sizeof(char *));
if (!newargv)
return 0;
*cmdargv = newargv;
@@ -50,7 +50,7 @@ static int add_to_cmd_arr(const char ***cmdargv, const char *str, unsigned *ind)
const char **cmdargv_ctr(const struct lvmpolld_lv *pdlv, const char *lvm_binary, unsigned abort_polling, unsigned handle_missing_pvs)
{
unsigned i = 0;
const char **cmd_argv = malloc(MIN_ARGV_SIZE * sizeof(char *));
const char **cmd_argv = dm_malloc(MIN_ARGV_SIZE * sizeof(char *));
if (!cmd_argv)
return NULL;
@@ -98,7 +98,7 @@ const char **cmdargv_ctr(const struct lvmpolld_lv *pdlv, const char *lvm_binary,
return cmd_argv;
err:
free(cmd_argv);
dm_free(cmd_argv);
return NULL;
}
@@ -122,7 +122,7 @@ static int copy_env(const char ***cmd_envp, unsigned *i, const char *exclude)
const char **cmdenvp_ctr(const struct lvmpolld_lv *pdlv)
{
unsigned i = 0;
const char **cmd_envp = malloc(MIN_ARGV_SIZE * sizeof(char *));
const char **cmd_envp = dm_malloc(MIN_ARGV_SIZE * sizeof(char *));
if (!cmd_envp)
return NULL;
@@ -141,6 +141,6 @@ const char **cmdenvp_ctr(const struct lvmpolld_lv *pdlv)
return cmd_envp;
err:
free(cmd_envp);
dm_free(cmd_envp);
return NULL;
}


@@ -20,10 +20,10 @@
#define _REENTRANT
#include "tools/tool.h"
#include "tool.h"
#include "lvmpolld-cmd-utils.h"
#include "daemons/lvmpolld/lvmpolld-protocol.h"
#include "lvmpolld-protocol.h"
#include <assert.h>
#include <errno.h>


@@ -530,7 +530,7 @@ static response progress_info(client_handle h, struct lvmpolld_state *ls, reques
pdst_unlock(pdst);
free(id);
dm_free(id);
if (pdlv) {
if (st.error)
@@ -673,7 +673,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
PD_LOG_PREFIX, "poll operation type mismatch on LV identified by",
id,
polling_op(pdlv_get_type(pdlv)), polling_op(type));
free(id);
dm_free(id);
return reply(LVMPD_RESP_EINVAL,
REASON_DIFFERENT_OPERATION_IN_PROGRESS);
}
@@ -683,14 +683,14 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
lvname, sysdir, type, abort_polling, 2 * uinterval);
if (!pdlv) {
pdst_unlock(pdst);
free(id);
dm_free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
if (!pdst_locked_insert(pdst, id, pdlv)) {
pdlv_destroy(pdlv);
pdst_unlock(pdst);
ERROR(ls, "%s: %s", PD_LOG_PREFIX, "couldn't store internal LV data structure");
free(id);
dm_free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
if (!spawn_detached_thread(pdlv)) {
@@ -698,7 +698,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
pdst_locked_remove(pdst, id);
pdlv_destroy(pdlv);
pdst_unlock(pdst);
free(id);
dm_free(id);
return reply(LVMPD_RESP_FAILED, REASON_ENOMEM);
}
@@ -709,7 +709,7 @@ static response poll_init(client_handle h, struct lvmpolld_state *ls, request re
pdst_unlock(pdst);
free(id);
dm_free(id);
return daemon_reply_simple(LVMPD_RESP_OK, NULL);
}
@@ -806,7 +806,7 @@ static int printout_raw_response(const char *prefix, const char *msg)
char *buf;
char *pos;
buf = strdup(msg);
buf = dm_strdup(msg);
pos = buf;
if (!buf)
@@ -819,7 +819,7 @@ static int printout_raw_response(const char *prefix, const char *msg)
_log_line(pos, &b);
pos = next ? next + 1 : 0;
}
free(buf);
dm_free(buf);
return 1;
}


@@ -14,7 +14,7 @@
#include "lvmpolld-common.h"
#include "libdaemon/client/config-util.h"
#include "config-util.h"
#include <fcntl.h>
#include <signal.h>
@@ -27,12 +27,12 @@ static char *_construct_full_lvname(const char *vgname, const char *lvname)
size_t l;
l = strlen(vgname) + strlen(lvname) + 2; /* vg/lv and \0 */
name = (char *) malloc(l * sizeof(char));
name = (char *) dm_malloc(l * sizeof(char));
if (!name)
return NULL;
if (dm_snprintf(name, l, "%s/%s", vgname, lvname) < 0) {
free(name);
dm_free(name);
name = NULL;
}
@@ -47,7 +47,7 @@ static char *_construct_lvm_system_dir_env(const char *sysdir)
* just single char to store NULL byte
*/
size_t l = sysdir ? strlen(sysdir) + 16 : 1;
char *env = (char *) malloc(l * sizeof(char));
char *env = (char *) dm_malloc(l * sizeof(char));
if (!env)
return NULL;
@@ -55,7 +55,7 @@ static char *_construct_lvm_system_dir_env(const char *sysdir)
*env = '\0';
if (sysdir && dm_snprintf(env, l, "%s%s", LVM_SYSTEM_DIR, sysdir) < 0) {
free(env);
dm_free(env);
env = NULL;
}
@@ -74,7 +74,7 @@ char *construct_id(const char *sysdir, const char *uuid)
size_t l;
l = strlen(uuid) + (sysdir ? strlen(sysdir) : 0) + 1;
id = (char *) malloc(l * sizeof(char));
id = (char *) dm_malloc(l * sizeof(char));
if (!id)
return NULL;
@@ -82,7 +82,7 @@ char *construct_id(const char *sysdir, const char *uuid)
dm_snprintf(id, l, "%s", uuid);
if (r < 0) {
free(id);
dm_free(id);
id = NULL;
}
@@ -95,7 +95,7 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
const char *sinterval, unsigned pdtimeout,
struct lvmpolld_store *pdst)
{
char *lvmpolld_id = strdup(id), /* copy */
char *lvmpolld_id = dm_strdup(id), /* copy */
*full_lvname = _construct_full_lvname(vgname, lvname), /* copy */
*lvm_system_dir_env = _construct_lvm_system_dir_env(sysdir); /* copy */
@@ -106,12 +106,12 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
.lvid = _get_lvid(lvmpolld_id, sysdir),
.lvname = full_lvname,
.lvm_system_dir_env = lvm_system_dir_env,
.sinterval = strdup(sinterval), /* copy */
.sinterval = dm_strdup(sinterval), /* copy */
.pdtimeout = pdtimeout < MIN_POLLING_TIMEOUT ? MIN_POLLING_TIMEOUT : pdtimeout,
.cmd_state = { .retcode = -1, .signal = 0 },
.pdst = pdst,
.init_rq_count = 1
}, *pdlv = (struct lvmpolld_lv *) malloc(sizeof(struct lvmpolld_lv));
}, *pdlv = (struct lvmpolld_lv *) dm_malloc(sizeof(struct lvmpolld_lv));
if (!pdlv || !tmp.lvid || !tmp.lvname || !tmp.lvm_system_dir_env || !tmp.sinterval)
goto err;
@@ -124,27 +124,27 @@ struct lvmpolld_lv *pdlv_create(struct lvmpolld_state *ls, const char *id,
return pdlv;
err:
free((void *)full_lvname);
free((void *)lvmpolld_id);
free((void *)lvm_system_dir_env);
free((void *)tmp.sinterval);
free((void *)pdlv);
dm_free((void *)full_lvname);
dm_free((void *)lvmpolld_id);
dm_free((void *)lvm_system_dir_env);
dm_free((void *)tmp.sinterval);
dm_free((void *)pdlv);
return NULL;
}
void pdlv_destroy(struct lvmpolld_lv *pdlv)
{
free((void *)pdlv->lvmpolld_id);
free((void *)pdlv->lvname);
free((void *)pdlv->sinterval);
free((void *)pdlv->lvm_system_dir_env);
free((void *)pdlv->cmdargv);
free((void *)pdlv->cmdenvp);
dm_free((void *)pdlv->lvmpolld_id);
dm_free((void *)pdlv->lvname);
dm_free((void *)pdlv->sinterval);
dm_free((void *)pdlv->lvm_system_dir_env);
dm_free((void *)pdlv->cmdargv);
dm_free((void *)pdlv->cmdenvp);
pthread_mutex_destroy(&pdlv->lock);
free((void *)pdlv);
dm_free((void *)pdlv);
}
unsigned pdlv_get_polling_finished(struct lvmpolld_lv *pdlv)
@@ -194,7 +194,7 @@ void pdlv_set_polling_finished(struct lvmpolld_lv *pdlv, unsigned finished)
struct lvmpolld_store *pdst_init(const char *name)
{
struct lvmpolld_store *pdst = (struct lvmpolld_store *) malloc(sizeof(struct lvmpolld_store));
struct lvmpolld_store *pdst = (struct lvmpolld_store *) dm_malloc(sizeof(struct lvmpolld_store));
if (!pdst)
return NULL;
@@ -212,7 +212,7 @@ struct lvmpolld_store *pdst_init(const char *name)
err_mutex:
dm_hash_destroy(pdst->store);
err_hash:
free(pdst);
dm_free(pdst);
return NULL;
}
@@ -223,7 +223,7 @@ void pdst_destroy(struct lvmpolld_store *pdst)
dm_hash_destroy(pdst->store);
pthread_mutex_destroy(&pdst->lock);
free(pdst);
dm_free(pdst);
}
void pdst_locked_lock_all_pdlvs(const struct lvmpolld_store *pdst)
@@ -321,7 +321,7 @@ void pdst_locked_destroy_all_pdlvs(const struct lvmpolld_store *pdst)
struct lvmpolld_thread_data *lvmpolld_thread_data_constructor(struct lvmpolld_lv *pdlv)
{
struct lvmpolld_thread_data *data = (struct lvmpolld_thread_data *) malloc(sizeof(struct lvmpolld_thread_data));
struct lvmpolld_thread_data *data = (struct lvmpolld_thread_data *) dm_malloc(sizeof(struct lvmpolld_thread_data));
if (!data)
return NULL;
@@ -368,7 +368,7 @@ void lvmpolld_thread_data_destroy(void *thread_private)
pdst_unlock(data->pdlv->pdst);
}
/* may get reallocated in getline(). free must not be used */
/* may get reallocated in getline(). dm_free must not be used */
free(data->line);
if (data->fout && !fclose(data->fout))
@@ -389,5 +389,5 @@ void lvmpolld_thread_data_destroy(void *thread_private)
if (data->errpipe[1] >= 0)
(void) close(data->errpipe[1]);
free(data);
dm_free(data);
}


@@ -15,7 +15,7 @@
#ifndef _LVM_LVMPOLLD_PROTOCOL_H
#define _LVM_LVMPOLLD_PROTOCOL_H
#include "daemons/lvmpolld/polling_ops.h"
#include "polling_ops.h"
#define LVMPOLLD_PROTOCOL "lvmpolld"
#define LVMPOLLD_PROTOCOL_VERSION 1


@@ -1,52 +0,0 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
DEVICE_MAPPER_SOURCE=\
device_mapper/datastruct/bitset.c \
device_mapper/libdm-common.c \
device_mapper/libdm-config.c \
device_mapper/libdm-deptree.c \
device_mapper/libdm-file.c \
device_mapper/libdm-report.c \
device_mapper/libdm-string.c \
device_mapper/libdm-targets.c \
device_mapper/libdm-timestamp.c \
device_mapper/mm/pool.c \
device_mapper/regex/matcher.c \
device_mapper/regex/parse_rx.c \
device_mapper/regex/ttree.c \
device_mapper/ioctl/libdm-iface.c \
device_mapper/vdo/vdo_target.c \
device_mapper/vdo/status.c
DEVICE_MAPPER_DEPENDS=$(addprefix $(top_builddir)/,$(subst .c,.d,$(DEVICE_MAPPER_SOURCE)))
DEVICE_MAPPER_OBJECTS=$(addprefix $(top_builddir)/,$(subst .c,.o,$(DEVICE_MAPPER_SOURCE)))
CLEAN_TARGETS+=$(DEVICE_MAPPER_DEPENDS) $(DEVICE_MAPPER_OBJECTS)
#$(DEVICE_MAPPER_DEPENDS): INCLUDES+=$(VDO_INCLUDES)
#$(DEVICE_MAPPER_OBJECTS): INCLUDES+=$(VDO_INCLUDES)
ifeq ("$(USE_TRACKING)","yes")
ifeq (,$(findstring $(MAKECMDGOALS),cscope.out cflow clean distclean lcov \
help check check_local check_cluster check_lvmetad check_lvmpolld))
-include $(DEVICE_MAPPER_DEPENDS)
endif
endif
$(DEVICE_MAPPER_OBJECTS): INCLUDES+=-I$(top_srcdir)/device_mapper/
$(top_builddir)/device_mapper/libdevice-mapper.a: $(DEVICE_MAPPER_OBJECTS)
@echo " [AR] $@"
$(Q) $(RM) $@
$(Q) $(AR) rsv $@ $(DEVICE_MAPPER_OBJECTS) > /dev/null
CLEAN_TARGETS+=$(top_builddir)/device_mapper/libdevice-mapper.a

File diff suppressed because it is too large.


@@ -1,259 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "device_mapper/misc/dmlib.h"
#include "base/memory/zalloc.h"
#include <ctype.h>
/* FIXME: calculate this. */
#define INT_SHIFT 5
dm_bitset_t dm_bitset_create(struct dm_pool *mem, unsigned num_bits)
{
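/* bs[0] stores the bit count; the remaining words hold the bitmap (+2 covers the count word and rounding up). */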
unsigned n = (num_bits / DM_BITS_PER_INT) + 2;
size_t size = sizeof(int) * n;
dm_bitset_t bs;
if (mem)
bs = dm_pool_zalloc(mem, size);
else
bs = zalloc(size);
if (!bs)
return NULL;
*bs = num_bits;
return bs;
}
void dm_bitset_destroy(dm_bitset_t bs)
{
free(bs);
}
int dm_bitset_equal(dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
if (in1[i] != in2[i])
return 0;
return 1;
}
void dm_bit_and(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
out[i] = in1[i] & in2[i];
}
void dm_bit_union(dm_bitset_t out, dm_bitset_t in1, dm_bitset_t in2)
{
int i;
for (i = (in1[0] / DM_BITS_PER_INT) + 1; i; i--)
out[i] = in1[i] | in2[i];
}
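/* Return the position of the lowest set bit in 'test' at position 'bit' or above, or -1 if none. */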
static int _test_word(uint32_t test, int bit)
{
uint32_t tb = test >> bit;
return (tb ? ffs(tb) + bit - 1 : -1);
}
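/* Return the position of the highest set bit in 'test' at position 'bit' or below, or -1 if none. */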
static int _test_word_rev(uint32_t test, int bit)
{
uint32_t tb = test << (DM_BITS_PER_INT - 1 - bit);
return (tb ? bit - clz(tb) : -1);
}
int dm_bit_get_next(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit++; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit < (int) bs[0]) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = last_bit - (last_bit & (DM_BITS_PER_INT - 1)) +
DM_BITS_PER_INT;
}
return -1;
}
int dm_bit_get_prev(dm_bitset_t bs, int last_bit)
{
int bit, word;
uint32_t test;
last_bit--; /* otherwise we'll return the same bit again */
/*
* bs[0] holds number of bits
*/
while (last_bit >= 0) {
word = last_bit >> INT_SHIFT;
test = bs[word + 1];
bit = last_bit & (DM_BITS_PER_INT - 1);
if ((bit = _test_word_rev(test, bit)) >= 0)
return (word * DM_BITS_PER_INT) + bit;
last_bit = (last_bit & ~(DM_BITS_PER_INT - 1)) - 1;
}
return -1;
}
int dm_bit_get_first(dm_bitset_t bs)
{
return dm_bit_get_next(bs, -1);
}
int dm_bit_get_last(dm_bitset_t bs)
{
return dm_bit_get_prev(bs, bs[0] + 1);
}
/*
* Parse a comma-separated list of bit indices and ranges, e.g. "0,3-5,128".
* Based on the Linux kernel __bitmap_parselist from lib/bitmap.c.
*/
dm_bitset_t dm_bitset_parse_list(const char *str, struct dm_pool *mem,
size_t min_num_bits)
{
unsigned a, b;
int c, old_c, totaldigits, ndigits, nmaskbits;
int at_start, in_range;
dm_bitset_t mask = NULL;
const char *start = str;
size_t len;
scan:
len = strlen(str);
totaldigits = c = 0;
nmaskbits = 0;
do {
at_start = 1;
in_range = 0;
a = b = 0;
ndigits = totaldigits;
/* Get the next value or range of values */
while (len) {
old_c = c;
c = *str++;
len--;
if (isspace(c))
continue;
/* A '\0' or a ',' signal the end of a value or range */
if (c == '\0' || c == ',')
break;
/*
* whitespaces between digits are not allowed,
* but it's ok if whitespaces are on head or tail.
* when old_c is whitespace,
* if totaldigits == ndigits, whitespace is on head.
* if whitespace is on tail, it should not run here.
* as c was ',' or '\0',
* the last code line has broken the current loop.
*/
if ((totaldigits != ndigits) && isspace(old_c))
goto_bad;
if (c == '-') {
if (at_start || in_range)
goto_bad;
b = 0;
in_range = 1;
at_start = 1;
continue;
}
if (!isdigit(c))
goto_bad;
b = b * 10 + (c - '0');
if (!in_range)
a = b;
at_start = 0;
totaldigits++;
}
if (ndigits == totaldigits)
continue;
/* if no digit is after '-', it's wrong */
if (at_start && in_range)
goto_bad;
if (!(a <= b))
goto_bad;
if (b >= nmaskbits)
nmaskbits = b + 1;
while ((a <= b) && mask) {
dm_bit_set(mask, a);
a++;
}
} while (len && c == ',');
if (!mask) {
if (min_num_bits && (nmaskbits < min_num_bits))
nmaskbits = min_num_bits;
if (!(mask = dm_bitset_create(mem, nmaskbits)))
goto_bad;
str = start;
goto scan;
}
return mask;
bad:
if (mask) {
if (mem)
dm_pool_free(mem, mask);
else
dm_bitset_destroy(mask);
}
return NULL;
}
#if defined(__GNUC__)
/*
* Maintain backward compatibility with older versions that did not
* accept a 'min_num_bits' argument to dm_bitset_parse_list().
*/
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem);
dm_bitset_t dm_bitset_parse_list_v1_02_129(const char *str, struct dm_pool *mem)
{
return dm_bitset_parse_list(str, mem, 0);
}
#else /* if defined(__GNUC__) */
#endif

File diff suppressed because it is too large.


@@ -1,88 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef LIB_DMTARGETS_H
#define LIB_DMTARGETS_H
#include <inttypes.h>
#include <sys/types.h>
struct dm_ioctl;
struct target {
uint64_t start;
uint64_t length;
char *type;
char *params;
struct target *next;
};
struct dm_task {
int type;
char *dev_name;
char *mangled_dev_name;
struct target *head, *tail;
int read_only;
uint32_t event_nr;
int major;
int minor;
int allow_default_major_fallback;
uid_t uid;
gid_t gid;
mode_t mode;
uint32_t read_ahead;
uint32_t read_ahead_flags;
union {
struct dm_ioctl *v4;
} dmi;
char *newname;
char *message;
char *geometry;
uint64_t sector;
int no_flush;
int no_open_count;
int skip_lockfs;
int query_inactive_table;
int suppress_identical_reload;
dm_add_node_t add_node;
uint64_t existing_table_size;
int cookie_set;
int new_uuid;
int secure_data;
int retry_remove;
int deferred_remove;
int enable_checks;
int expected_errno;
int ioctl_errno;
int record_timestamp;
char *uuid;
char *mangled_uuid;
};
struct cmd_data {
const char *name;
const unsigned cmd;
const int version[3];
};
int dm_check_version(void);
uint64_t dm_task_get_existing_table_size(struct dm_task *dmt);
#endif

File diff suppressed because it is too large.


@@ -1,58 +0,0 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2012 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef LIB_DMCOMMON_H
#define LIB_DMCOMMON_H
#include "device_mapper/all.h"
#define DM_DEFAULT_NAME_MANGLING_MODE_ENV_VAR_NAME "DM_DEFAULT_NAME_MANGLING_MODE"
#define DEV_NAME(dmt) (dmt->mangled_dev_name ? : dmt->dev_name)
#define DEV_UUID(dmt) (dmt->mangled_uuid ? : dmt->uuid)
int mangle_string(const char *str, const char *str_name, size_t len,
char *buf, size_t buf_len, dm_string_mangling_t mode);
int unmangle_string(const char *str, const char *str_name, size_t len,
char *buf, size_t buf_len, dm_string_mangling_t mode);
int check_multiple_mangled_string_allowed(const char *str, const char *str_name,
dm_string_mangling_t mode);
struct target *create_target(uint64_t start,
uint64_t len,
const char *type, const char *params);
int add_dev_node(const char *dev_name, uint32_t minor, uint32_t major,
uid_t uid, gid_t gid, mode_t mode, int check_udev, unsigned rely_on_udev);
int rm_dev_node(const char *dev_name, int check_udev, unsigned rely_on_udev);
int rename_dev_node(const char *old_name, const char *new_name,
int check_udev, unsigned rely_on_udev);
int get_dev_node_read_ahead(const char *dev_name, uint32_t major, uint32_t minor,
uint32_t *read_ahead);
int set_dev_node_read_ahead(const char *dev_name, uint32_t major, uint32_t minor,
uint32_t read_ahead, uint32_t read_ahead_flags);
void update_devs(void);
void selinux_release(void);
void inc_suspended(void);
void dec_suspended(void);
int parse_thin_pool_status(const char *params, struct dm_status_thin_pool *s);
int get_uname_version(unsigned *major, unsigned *minor, unsigned *release);
#endif

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.