mirror of git://sourceware.org/git/lvm2.git synced 2025-09-27 05:44:18 +03:00

Compare commits

14 Commits

Author SHA1 Message Date
David Teigland
99a8bfcd72 doc: lvm disk reading 2018-01-26 06:50:52 -06:00
David Teigland
dbe60f74d4 lvmcache: simplify metadata cache
The copy of VG metadata stored in lvmcache was not being used
in general.  It pretended to be a generic VG metadata cache,
but was not being used except for clvmd activation.  There
it was used to avoid reading from disk while devices were
suspended, i.e. in resume.

This removes the code that attempted to make this look
like a generic metadata cache, and replaces it with
something narrowly targeted at what it's actually used for.

This is a way of passing the VG from suspend to resume in
clvmd.  Since in the case of clvmd one caller can't simply
pass the same VG to both suspend and resume, suspend needs
to stash the VG somewhere that resume can grab it from.
(resume doesn't want to read it from disk since devices
are suspended.)  The lvmcache vginfo struct is used as a
convenient place to stash the VG to pass it from suspend
to resume, even though it isn't related to the lvmcache
or vginfo.  These suspended_vg* vginfo fields should
not be used or touched anywhere else; they are only to
be used for passing the VG data from suspend to resume
in clvmd.  The VG data being passed between suspend and
resume is never modified, and will only exist in the
brief period between suspend and resume in clvmd.

suspend has both old (current) and new (precommitted)
copies of the VG metadata.  It stashes both of these in
the vginfo prior to suspending devices.  When vg_commit
is successful, it sets a flag in vginfo as before,
signaling the transition from old to new metadata.

resume grabs the VG stashed by suspend.  If the vg_commit
happened, it grabs the new VG, and if the vg_commit didn't
happen it grabs the old VG.  The VG is then used to resume
LVs.

This isolates clvmd-specific code and usage from the
normal lvm vg_read code, making the code simpler and
the behavior easier to verify.

Sequence of operations:

- lv_suspend() has both vg_old and vg_new
  and stashes a copy of each onto the vginfo:
  lvmcache_save_suspended_vg(vg_old);
  lvmcache_save_suspended_vg(vg_new);

- vg_commit() happens, which causes all clvmd
  instances to call lvmcache_commit_metadata(vg).
  A flag is set in the vginfo indicating the
  transition from the old to new VG:
  vginfo->suspended_vg_committed = 1;

- lv_resume() needs either vg_old or vg_new
  to use in resuming LVs.  It doesn't want to
  read the VG from disk since devices are
  suspended, so it gets the VG stashed by
  lv_suspend:
  vg = lvmcache_get_suspended_vg(vgid);

If the vg_commit did not happen, suspended_vg_committed
will not be set, and in this case, lvmcache_get_suspended_vg()
will return the old VG instead of the new VG, and it will
resume LVs based on the old metadata.
2017-11-10 16:11:49 -06:00
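
A minimal sketch of the suspend-to-resume handoff described above;
the struct layout and function names here are illustrative
assumptions, not the actual lvm2 declarations:

  struct volume_group;                       /* opaque VG metadata */

  struct lvmcache_vginfo {
          /* Only for passing the VG from suspend to resume in clvmd. */
          struct volume_group *suspended_vg_old;  /* current metadata */
          struct volume_group *suspended_vg_new;  /* precommitted metadata */
          int suspended_vg_committed;             /* set after vg_commit */
  };

  /* lv_suspend stashes both copies before suspending devices. */
  void save_suspended_vg(struct lvmcache_vginfo *vginfo,
                         struct volume_group *vg_old,
                         struct volume_group *vg_new)
  {
          vginfo->suspended_vg_old = vg_old;
          vginfo->suspended_vg_new = vg_new;
          vginfo->suspended_vg_committed = 0;
  }

  /* lvmcache_commit_metadata marks the old-to-new transition. */
  void commit_suspended_vg(struct lvmcache_vginfo *vginfo)
  {
          vginfo->suspended_vg_committed = 1;
  }

  /* lv_resume picks the copy matching what is on disk, without
   * reading the suspended devices. */
  struct volume_group *get_suspended_vg(struct lvmcache_vginfo *vginfo)
  {
          return vginfo->suspended_vg_committed ? vginfo->suspended_vg_new
                                                : vginfo->suspended_vg_old;
  }
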
David Teigland
c958bbf2dc label_scan: remove extra label scan and read for orphan PVs
When process_each_pv() calls vg_read() on the orphan VG, the
internal implementation was doing an unnecessary
lvmcache_label_scan() and two unnecessary label_read() calls
on each orphan.  Some of those unnecessary label scans/reads
would sometimes be skipped due to caching, but the code was
always doing at least one unnecessary read on each orphan.

The common format_text case was also unnecessarily calling into
the format-specific pv_read() function, which actually did nothing.

By analyzing each case in which vg_read() was being called on
the orphan VG, we can say that all of the label scans/reads
in vg_read_orphans are unnecessary:

1. reporting commands: the information saved in lvmcache by
the original label scan can be reported.  There is no advantage
to repeating the label scan on the orphans a second time before
reporting it.

2. pvcreate/vgcreate/vgextend: these all share a common
implementation in pvcreate_each_device().  That function
already rescans labels after acquiring the orphan VG lock,
which ensures that the command is using valid lvmcache
information.
2017-11-10 16:11:48 -06:00
David Teigland
27b9282901 vgcreate: improve the use of label_scan
The old code was doing unnecessary label scans when
checking to see if the new VG name exists.  A single
label_scan is sufficient if it is done after the
new VG lock is held.
2017-11-10 16:11:48 -06:00
David Teigland
cdb6797b13 lvmetad: use new label_scan for update from pvscan
Take advantage of the common implementation with aio
and reduced disk reads.
2017-11-10 16:11:48 -06:00
David Teigland
1fc542a898 lvmetad: use new label_scan for update from lvmlockd
When lvmlockd indicates that the lvmetad cache is out of
date because of changes by another node, lvmetad_pvscan_vg()
rescans the devices in the VG to update lvmetad.  Use the
new label_scan in this function to use the common code and
take advantage of the new aio and reduced reads.
2017-11-10 16:11:48 -06:00
David Teigland
6a97427f20 label_scan/vg_read: use label_read_data to avoid disk reads
The new label_scan() function reads a large buffer of data
from the start of the disk, and saves it so that multiple
structs can be read from it.  Previously, only the label_header
was read from this buffer, and the code which needed data
structures that immediately followed the label_header would
read those from disk separately.  This created a large
number of small, unnecessary disk reads.

In each place that the two read paths (label_scan and vg_read)
need to read data from disk, first check if that data is
already available from the label_read_data buffer, and if
so just copy it from the buffer instead of reading from disk.

Code changes
------------

- passing the label_read_data struct down through
  both read paths to make it available.

- before every disk read, first check if the location
  and size of the desired piece of data exists fully
  in the label_read_data buffer, and if so copy it
  from there.  Otherwise, use the existing code to
  read the data from disk.

- adding some log_error messages on existing error paths
  that were already being updated for the reasons above.

- using similar naming for parallel functions on the two
  parallel read paths that are being updated above.

  label_scan path calls:
  read_metadata_location_summary, text_read_metadata_summary

  vg_read path calls:
  read_metadata_location_vg, text_read_metadata_file

  Previously, those functions were named:

  label_scan path calls:
  vgname_from_mda, text_vgsummary_import

  vg_read path calls:
  _find_vg_rlocn, text_vg_import_fd

I/O changes
-----------

In the label_scan path, the following data is either copied
from label_read_data or read from disk for each PV:

- label_header and pv_header
- mda_header (in _raw_read_mda_header)
- vg metadata name (in read_metadata_location_summary)
- vg metadata (in config_file_read_fd)

Total of 4 reads per PV in the label_scan path.

In the vg_read path, the following data is either copied from
label_read_data or read from disk for each PV:

- mda_header (in _raw_read_mda_header)
- vg metadata name (in read_metadata_location_vg)
- vg metadata (in config_file_read_fd)

Total of 3 reads per PV in the vg_read path.

For a common read/reporting command, each PV will be:

- read by the command's initial lvmcache_label_scan()
- read by lvmcache_label_rescan_vg() at the start of vg_read()
- read by vg_read()

Previously, this would cause 11 synchronous disk reads per PV:
4 from lvmcache_label_scan(), 4 from lvmcache_label_rescan_vg()
and 3 from vg_read().

With this commit's optimization, there are now 2 async disk reads
per PV: 1 from lvmcache_label_scan() and 1 from
lvmcache_label_rescan_vg().

When a second mda is used on a PV, it is located at the
end of the PV.  This second mda and copy of metadata will
not be found in the label_read_data buffer, and will always
require separate disk reads.
2017-11-10 16:11:48 -06:00
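
A minimal sketch of the buffer-first read pattern described above;
the struct layout and helper name are assumptions, not the actual
lvm2 code:

  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>

  /* Illustrative layout; the real label_read_data struct differs. */
  struct label_read_data {
          char *buf;          /* one large read from the device start */
          uint64_t buf_len;   /* bytes valid in buf */
          int fd;             /* device fd for the fallback path */
  };

  /* Copy from the cached buffer when the requested range lies entirely
   * inside it; otherwise fall back to a separate disk read (e.g. for a
   * second mda located at the end of the PV).  Returns 0 on success. */
  static int dev_read_cached(struct label_read_data *ld, uint64_t offset,
                             size_t len, void *out)
  {
          if (offset + len <= ld->buf_len) {
                  memcpy(out, ld->buf + offset, len);  /* no disk I/O */
                  return 0;
          }
          return pread(ld->fd, out, len, (off_t)offset) == (ssize_t)len ? 0 : -1;
  }
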
David Teigland
1abc31c741 independent metadata areas: fix bogus code
Fix mixing of bitwise & and logical && which made the
expression always evaluate to 1.
2017-11-10 16:11:48 -06:00
David Teigland
fd55b0b637 label_scan: fix independent metadata areas
This fixes the use of lvmcache_label_rescan_vg() in the previous
commit for the special case of independent metadata areas.

label scan is about discovering VG name to device associations
using information from disks, but devices in VGs with
independent metadata areas have no information on disk, so
the label scan does nothing for these VGs/devices.
With independent metadata areas, only the VG metadata found
in files is used.  This metadata is found and read in
vg_read in the processing phase.

lvmcache_label_rescan_vg() drops lvmcache info for the VG devices
before repeating the label scan on them.  In the case of
independent metadata areas, there is no metadata on devices, so the
label scan of the devices will find nothing, so will not recreate
the necessary vginfo/info data in lvmcache for the VG.  Fix this
by setting a flag in the lvmcache vginfo struct indicating that
the VG uses independent metadata areas, and label rescanning should
be skipped.

In the case of independent metadata areas, it is the metadata
processing in the vg_read phase that sets up the lvmcache
vginfo/info information, and label scan has no role.
2017-11-10 16:11:48 -06:00
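
A minimal sketch of the rescan guard described above; the flag name
is an assumption taken from the commit text:

  struct vginfo_flags {
          unsigned independent_metadata_location : 1;
  };

  /* Label rescanning is skipped for VGs whose metadata lives in
   * files: their devices carry nothing on disk, so dropping and
   * rescanning would lose the vginfo/info structs that only the
   * vg_read processing phase can rebuild. */
  static int label_rescan_vg(struct vginfo_flags *vginfo)
  {
          if (vginfo->independent_metadata_location)
                  return 1;  /* nothing on disk to scan */
          /* ... otherwise drop cached info and rescan the VG devices ... */
          return 1;
  }
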
David Teigland
19843cf1d6 label_scan: move to start of command
LVM's general design for scanning/reading of metadata from disks is
that a command begins with a discovery phase, called "label scan",
in which it discovers which devices belong to lvm, what VGs exist on
those devices, and which devices are associated with each VG.
After this comes the processing phase, which is based around
processing specific VGs.  In this phase, lvm acquires a lock on
the VG, and rescans the devices associated with that VG, i.e.
it repeats the label scan steps on the devices in the VG in case
something has changed between the initial label scan and taking
the VG lock.  This ensures that the command is processing the
latest, unchanging data on disk.

This commit moves the location of these label scans to make them
clearer and avoid unnecessary repeated calls to them.

Previously, the initial label scan was called as a side effect
from various utility functions.  This would lead to it being called
unnecessarily.  It is an expensive operation, and should only be
called when necessary.  Also, this is a primary step in the
function of the command, and as such it should be called prominently
at the top level of command processing, not as a hidden side effect
of a utility function.  lvm knows exactly where and when the
label scan needs to be done.  Because of this, move the label scan
calls from the internal functions to the top level of processing.

Other specific instances of lvmcache_label_scan() are still called
unnecessarily or unclearly by specific commands that do not use
the common process_each functions.  These will be improved in
future commits.

During the processing phase, rescanning labels for devices in a VG
needs to be done after the VG lock is acquired in case things have
changed since the initial label scan.  This was being done by way
of rescanning devices that had the INVALID flag set in lvmcache.
This usually approximated the right set of devices, but it was not
exact, and obfuscated the real requirement.  Correct this by using
a new function that rescans the devices in the VG:
lvmcache_label_rescan_vg().

Apart from being inexact, the rescanning was extremely well hidden.
_vg_read() would call ->create_instance(), _text_create_text_instance(),
_create_vg_text_instance() which would call lvmcache_label_scan()
which would call _scan_invalid() which repeats the label scan on
devices flagged INVALID.  lvmcache_label_rescan_vg() is now called
prominently by _vg_read() directly.
2017-11-10 16:11:48 -06:00
David Teigland
6b840ed950 label_scan: call new label_scan from lvmcache_label_scan
To do label scanning, lvm code calls lvmcache_label_scan().
Change lvmcache_label_scan() to use the new label_scan()
which can use async io, rather than implementing its own
dev iter loop and calling the synchronous label_read() on
each device.

Also add lvmcache_label_rescan_vg() which calls the new
label_scan_devs() which does label scanning on only the
specified devices.  This is for a subsequent commit and
is not yet used.
2017-11-10 16:11:48 -06:00
David Teigland
493280f60f label_scan: add new implementation for async and sync
This adds the implementation without using it in the code.
The code still calls label_read() on each individual device
to do scanning.

Calling the new label_scan() will use async io if async io is
enabled in config settings.  If not enabled, or if async io fails,
label_scan() will fall back to using synchronous io.  If only some
aio ops fail, the code will attempt to perform synchronous io on just
the ios that failed.

Uses linux native aio system calls, not the posix wrappers which
are messier and may not have all the latest linux capabilities.

Internally, the same functionality is used as before:

- iterate through each visible device on the system,
  provided from dev-cache

- call _find_label_header on the dev to find the sector
  containing the label_header

- call _text_read to look at the pv_header and mda locations
  after the pv_header

- for each mda location, read the mda_header and the vg metadata

- add info/vginfo structs to lvmcache which associate the
  device name (info) with the VG name (vginfo) so that vg_read
  can know which devices to read for a given VG name

The new label scanning issues a "large" read beginning at the start
of the device, where the read size is configurable but intended to
cover all the labels/headers/metadata located at the start of
the device.  This large data buffer from each device is saved in a
global list using a new 'label_read_data' struct.  Currently, this
buffer is only used to find the label_header from the first four
sectors of the device.  In subsequent commits, other functions
that read other structs/metadata will first try to find that data in
the saved label_read_data buffer.  In most common cases, the data
they need can simply be copied out of the existing buffer, and
they can avoid issuing another disk read to get it.
2017-11-10 16:11:48 -06:00
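
A minimal sketch of finding the label_header in the buffered first
sectors instead of reading each sector from disk; "LABELONE" is the
lvm2 label magic, and the struct here is abbreviated:

  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>

  #define SECTOR_SIZE 512
  #define LABEL_SCAN_SECTORS 4

  struct label_header {
          char id[8];        /* "LABELONE" on lvm2 PVs */
          uint64_t sector;   /* sector where this header was written */
          /* crc, offset to pv_header, and type follow in the real format */
  };

  /* Scan the buffered data instead of issuing one read per sector. */
  static struct label_header *find_label_header(char *buf, size_t buf_len)
  {
          unsigned s;

          for (s = 0; s < LABEL_SCAN_SECTORS; s++) {
                  struct label_header *lh;

                  if ((s + 1) * SECTOR_SIZE > buf_len)
                          break;
                  lh = (struct label_header *)(buf + s * SECTOR_SIZE);
                  if (!memcmp(lh->id, "LABELONE", 8))
                          return lh;
          }
          return NULL;
  }
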
David Teigland
235278ae95 command: add settings to enable async io
There are config settings to enable aio, and to configure
the concurrency and read size.
2017-11-10 16:11:44 -06:00
David Teigland
89399d3b72 io: add low level async io support
The interface consists of:

- A context struct, one for the entire command.

- An io struct, one per io operation (read).

- dev_async_context_setup() creates an aio context.

- dev_async_context_destroy() destroys an aio context.

- dev_async_alloc_ios() allocates a specified number of io structs,
  along with an associated buffer for the data.

- dev_async_free_ios() frees all the allocated io structs+buffers.

- dev_async_io_get() gets an available io struct from those
  allocated in alloc_ios.  If none are available, it allocates
  a new io struct if still under the limit.

- dev_async_io_put() puts a used io struct back into the set
  of unused io structs, making it available for get.

- dev_async_read_submit() starts an async read io.

- dev_async_getevents() collects async io completions.
2017-11-10 10:55:32 -06:00
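
A minimal, self-contained sketch of this read flow using the libaio
wrappers over the native io_setup/io_submit/io_getevents syscalls
(build with -laio; the device path, sizes, and limits are arbitrary,
and error handling is trimmed):

  #define _GNU_SOURCE
  #include <libaio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define READ_SIZE (128 * 1024)

  int main(int argc, char **argv)
  {
          io_context_t ctx = 0;
          struct iocb cb, *cbs[1] = { &cb };
          struct io_event ev;
          void *buf;
          int fd;

          if (argc < 2 || (fd = open(argv[1], O_RDONLY | O_DIRECT)) < 0)
                  return 1;
          if (posix_memalign(&buf, 4096, READ_SIZE))  /* O_DIRECT alignment */
                  return 1;
          if (io_setup(64, &ctx))                     /* context setup */
                  return 1;

          io_prep_pread(&cb, fd, buf, READ_SIZE, 0);  /* read device start */
          if (io_submit(ctx, 1, cbs) != 1)            /* cf. dev_async_read_submit */
                  return 1;
          if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) /* cf. dev_async_getevents */
                  return 1;

          printf("read %ld bytes\n", (long)ev.res);
          io_destroy(ctx);                            /* context destroy */
          return 0;
  }
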
714 changed files with 19538 additions and 28150 deletions

.gitignore

@@ -1,7 +1,6 @@
*.5
*.7
*.8
*.8_gen
*.a
*.d
*.o
@@ -28,108 +27,6 @@ make.tmpl
/config.log
/config.status
/configure.scan
/cscope.*
/html/
/reports/
/cscope.out
/tags
/tmp/
tools/man-generator
tools/man-generator.c
test/lib/lvchange
test/lib/lvconvert
test/lib/lvcreate
test/lib/lvdisplay
test/lib/lvextend
test/lib/lvmconfig
test/lib/lvmdiskscan
test/lib/lvmsadc
test/lib/lvmsar
test/lib/lvreduce
test/lib/lvremove
test/lib/lvrename
test/lib/lvresize
test/lib/lvs
test/lib/lvscan
test/lib/pvchange
test/lib/pvck
test/lib/pvcreate
test/lib/pvdisplay
test/lib/pvmove
test/lib/pvremove
test/lib/pvresize
test/lib/pvs
test/lib/pvscan
test/lib/vgcfgbackup
test/lib/vgcfgrestore
test/lib/vgchange
test/lib/vgck
test/lib/vgconvert
test/lib/vgcreate
test/lib/vgdisplay
test/lib/vgexport
test/lib/vgextend
test/lib/vgimport
test/lib/vgimportclone
test/lib/vgmerge
test/lib/vgmknodes
test/lib/vgreduce
test/lib/vgremove
test/lib/vgrename
test/lib/vgs
test/lib/vgscan
test/lib/vgsplit
test/api/lvtest.t
test/api/pe_start.t
test/api/percent.t
test/api/python_lvm_unit.py
test/api/test
test/api/thin_percent.t
test/api/vglist.t
test/api/vgtest.t
test/lib/aux
test/lib/check
test/lib/clvmd
test/lib/dm-version-expected
test/lib/dmeventd
test/lib/dmsetup
test/lib/dmstats
test/lib/fail
test/lib/flavour-ndev-cluster
test/lib/flavour-ndev-cluster-lvmpolld
test/lib/flavour-ndev-lvmetad
test/lib/flavour-ndev-lvmetad-lvmpolld
test/lib/flavour-ndev-lvmpolld
test/lib/flavour-ndev-vanilla
test/lib/flavour-udev-cluster
test/lib/flavour-udev-cluster-lvmpolld
test/lib/flavour-udev-lvmetad
test/lib/flavour-udev-lvmetad-lvmpolld
test/lib/flavour-udev-lvmlockd-dlm
test/lib/flavour-udev-lvmlockd-sanlock
test/lib/flavour-udev-lvmlockd-test
test/lib/flavour-udev-lvmpolld
test/lib/flavour-udev-vanilla
test/lib/fsadm
test/lib/get
test/lib/inittest
test/lib/invalid
test/lib/lvm
test/lib/lvm-wrapper
test/lib/lvmchange
test/lib/lvmdbusd.profile
test/lib/lvmetad
test/lib/lvmpolld
test/lib/not
test/lib/paths
test/lib/paths-common
test/lib/runner
test/lib/should
test/lib/test
test/lib/thin-performance.profile
test/lib/utils
test/lib/version-expected
test/unit/dmraid_t.c
test/unit/unit-test

@@ -1,25 +0,0 @@
BSD 2-Clause License
Copyright (c) 2014, Red Hat, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2015 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -18,7 +18,7 @@ top_builddir = @top_builddir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
SUBDIRS = conf daemons include lib libdaemon libdm man scripts device_mapper tools
SUBDIRS = conf daemons include lib libdaemon libdm man scripts tools
ifeq ("@UDEV_RULES@", "yes")
SUBDIRS += udev
@@ -43,7 +43,8 @@ endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = conf include man test scripts \
libdaemon lib tools daemons libdm \
udev po liblvm python device_mapper
udev po liblvm python \
unit-tests/datastruct unit-tests/mm unit-tests/regex
tools.distclean: test.distclean
endif
DISTCLEAN_DIRS += lcov_reports*
@@ -51,21 +52,16 @@ DISTCLEAN_TARGETS += config.cache config.log config.status make.tmpl
include make.tmpl
include $(top_srcdir)/base/Makefile
libdm: include $(top_builddir)/base/libbase.a
libdaemon: include $(top_builddir)/base/libbase.a
lib: libdm libdaemon $(top_builddir)/base/libbase.a
liblvm: lib $(top_builddir)/base/libbase.a
daemons: lib libdaemon tools $(top_builddir)/base/libbase.a
tools: lib libdaemon device-mapper $(top_builddir)/base/libbase.a
libdm: include
libdaemon: include
lib: libdm libdaemon
liblvm: lib
daemons: lib libdaemon tools
tools: lib libdaemon device-mapper
po: tools daemons
man: tools
all_man: tools
scripts: liblvm libdm
test: tools daemons $(top_builddir)/base/libbase.a
unit-test: lib $(top_builddir)/base/libbase.a
run-unit-test: unit-test
lib.device-mapper: include.device-mapper
libdm.device-mapper: include.device-mapper
@@ -101,7 +97,7 @@ endif
DISTCLEAN_TARGETS += cscope.out
CLEAN_DIRS += autom4te.cache
check check_system check_cluster check_local check_lvmetad check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock unit-test run-unit-test: test
check check_system check_cluster check_local check_lvmetad check_lvmpolld check_lvmlockd_test check_lvmlockd_dlm check_lvmlockd_sanlock unit: all
$(MAKE) -C test $(@)
conf.generate man.generate: tools
@@ -126,11 +122,11 @@ rpm: dist
$(LN_S) -f $(abs_top_srcdir)/spec/build.inc $(rpmbuilddir)/SOURCES
$(LN_S) -f $(abs_top_srcdir)/spec/macros.inc $(rpmbuilddir)/SOURCES
$(LN_S) -f $(abs_top_srcdir)/spec/packages.inc $(rpmbuilddir)/SOURCES
DM_VER=$$(cut -d' ' -f1 $(top_srcdir)/VERSION_DM | cut -d- -f1);\
GIT_VER=$$(cd $(top_srcdir); git describe | cut -s -d- --output-delimiter=. -f2,3);\
DM_VER=$$(cut -d- -f1 $(top_srcdir)/VERSION_DM);\
GIT_VER=$$(cd $(top_srcdir); git describe | cut -d- --output-delimiter=. -f2,3 || echo 0);\
sed -e "s,\(device_mapper_version\) [0-9.]*$$,\1 $$DM_VER," \
-e "s,^\(Version:[^0-9%]*\)[0-9.]*$$,\1 $(LVM_VER)," \
-e "s,^\(Release:[^0-9%]*\)[0-9.]\+,\1 $${GIT_VER:-"0"}," \
-e "s,^\(Release:[^0-9%]*\)[0-9.]\+,\1 $$GIT_VER," \
$(top_srcdir)/spec/source.inc >$(rpmbuilddir)/SOURCES/source.inc
rpmbuild -v --define "_topdir $(rpmbuilddir)" -ba $(top_srcdir)/spec/lvm2.spec
@@ -150,7 +146,7 @@ install_system_dirs:
$(INSTALL_ROOT_DIR) $(DESTDIR)$(DEFAULT_RUN_DIR)
$(INSTALL_ROOT_DATA) /dev/null $(DESTDIR)$(DEFAULT_CACHE_DIR)/.cache
install_initscripts:
install_initscripts:
$(MAKE) -C scripts install_initscripts
install_systemd_generators:
@@ -173,7 +169,6 @@ install_tmpfiles_configuration:
LCOV_TRACES = libdm.info lib.info liblvm.info tools.info \
libdaemon/client.info libdaemon/server.info \
test/unit.info \
daemons/clvmd.info \
daemons/dmeventd.info \
daemons/lvmetad.info \
@@ -205,17 +200,42 @@ $(LCOV_TRACES):
ifneq ("$(GENHTML)", "")
lcov: $(LCOV_TRACES)
$(RM) -rf $(LCOV_REPORTS_DIR)
$(RM) -r $(LCOV_REPORTS_DIR)
$(MKDIR_P) $(LCOV_REPORTS_DIR)
$(LCOV) --capture --directory $(top_builddir) --ignore-errors source \
--output-file $(LCOV_REPORTS_DIR)/out.info
-test ! -s $(LCOV_REPORTS_DIR)/out.info || \
$(GENHTML) -o $(LCOV_REPORTS_DIR) --ignore-errors source \
$(LCOV_REPORTS_DIR)/out.info
for i in $(LCOV_TRACES); do \
test -s $$i -a $$(wc -w <$$i) -ge 100 && lc="$$lc $$i"; \
done; \
test -z "$$lc" || $(GENHTML) -p @abs_top_builddir@ \
-o $(LCOV_REPORTS_DIR) $$lc
endif
endif
ifeq ("$(TESTING)", "yes")
# testing and report generation
RUBY=ruby1.9 -Ireport-generators/lib -Ireport-generators/test
.PHONY: unit-test ruby-test test-programs
# FIXME: put dependencies on libdm and liblvm
# FIXME: Should be handled by Makefiles in subdirs, not here at top level.
test-programs:
cd unit-tests/regex && $(MAKE)
cd unit-tests/datastruct && $(MAKE)
cd unit-tests/mm && $(MAKE)
unit-test: test-programs
$(RUBY) report-generators/unit_test.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
memcheck: test-programs
$(RUBY) report-generators/memcheck.rb $(shell find . -name TESTS)
$(RUBY) report-generators/title_page.rb
ruby-test:
$(RUBY) report-generators/test/ts.rb
endif
ifneq ($(shell which ctags),)
.PHONY: tags
tags:

README

@@ -7,6 +7,7 @@ There is no warranty - see COPYING and COPYING.LIB.
Tarballs are available from:
ftp://sourceware.org/pub/lvm2/
ftp://sources.redhat.com/pub/lvm2/
https://github.com/lvmteam/lvm2/releases
The source code is stored in git:
@@ -41,9 +42,6 @@ Report upstream bugs at:
or open issues at:
https://github.com/lvmteam/lvm2/issues
The source code repository used until 7th June 2012 is accessible using CVS:
The source code repository used until 7th June 2012 is accessible here:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/LVM2/?cvsroot=lvm2.
cvs -d :pserver:cvs@sourceware.org:/cvs/lvm2 login cvs
cvs -d :pserver:cvs@sourceware.org:/cvs/lvm2 checkout LVM2
The password is cvs.

TESTING

@@ -1,62 +0,0 @@
LVM2 Test Suite
===============
The codebase contains many tests in the test subdirectory.
Before running tests
--------------------
Keep in mind the testsuite MUST run under root user.
It is recommended not to use LVM on the test machine, especially when running
tests with udev (`make check_system`).
You MUST disable (or mask) any LVM daemons:
- lvmetad
- dmeventd
- lvmpolld
- lvmdbusd
- lvmlockd
- clvmd
- cmirrord
For running cluster tests, we are using singlenode locking. Pass
`--with-clvmd=singlenode` to configure.
NOTE: This is useful only for testing and should not be used in production
code.
To run D-Bus daemon tests, existing D-Bus session is required.
Running tests
-------------
As root run:
make check
To run only tests matching a string:
make check T=test
To skip tests matching a string:
make check S=test
There are other targets and many environment variables can be used to tweak the
testsuite - for full list and description run `make -C test help`.
Installing testsuite
--------------------
It is possible to install and run a testsuite against installed LVM. Run the
following:
make -C test install
Then lvm2-testsuite binary can be executed to test installed binaries.
See `lvm2-testsuite --help` for options. The same environment variables can be
used as with `make check`.

@@ -1 +1 @@
2.02.189(2)-git (2021-05-07)
2.02.177(2)-git (2017-11-03)

@@ -1 +1 @@
1.02.174-git (2021-05-07)
1.02.146-git (2017-11-03)

WHATS_NEW

@@ -1,267 +1,8 @@
Version 2.02.189 -
================================
Version 2.02.188 - 07th May 2021
================================
Fix problem with unbound variable usage within fsadm.
Avoid removing LVs on error path of lvconvert during creation volumes.
Fix crashing lvdisplay when thin volume was waiting for merge.
Support option --errorwhenfull when converting volume to thin-pool.
Improve thin-performance profile support conversion to thin-pool.
Support resize of cached volumes.
Allocation prints better error when metadata cannot fit on a single PV.
Pvmove can better resolve full thin-pool tree move.
Limit pool metadata spare to 16GiB.
Improves conversion and allocation of pool metadata.
Support thin pool metadata 15.88GiB, adds 64MiB, thin_pool_crop_metadata=0.
Enhance lvdisplay to report raid available/partial.
Enhance error handling for fsadm and handle fsck results correctly.
Stop logging rename errors from persistent filter.
Dmeventd lvm plugin ignores higher reserved_stack lvm.conf values.
Support using BLKZEROOUT for clearing devices.
Support interruption when wiping LVs.
Add configure --enable-editline support as an alternative to readline.
Zero pool metadata on allocation (disable with allocation/zero_metadata=0).
Failure in zeroing or wiping will fail command (bypass with -Zn, -Wn).
Fix support for lvconvert --repair used by foreign apps (i.e. Docker).
Support interruption for bcache waiting.
Fix bcache when device has too many failing writes.
Fix bcache waiting for IO completion with failing disks.
Configure uses its own python path name order to prefer python3.
Enhance reporting and error handling when creating thin volumes.
Use revert_lv() on reload error path after vg_revert().
Improve estimation of needed extents when creating thin-pool.
Use extra 1% when resizing thin-pool metadata LV with --use-policy.
Enhance --use-policy percentage rounding.
Switch code base to use flexible array syntax.
Preserve uint32_t for seqno handling.
Switch from mmap to plain read when loading regular files.
Fix running out of free buffers for async writing for larger writes.
Fix conversion to raid from striped lagging type.
Fix conversion to 'mirrored' mirror log with larger regionsize.
Fix support for lvconvert --repair used by foreign apps (i.e. Docker).
Version 2.02.187 - 24th March 2020
==================================
Avoid running cache input arg validation when creating vdo pool.
Prevent raid reshaping of stacked volumes.
Ensure minimum required region size on striped RaidLV creation.
Fix resize of thin-pool with data and metadata of different segtype.
Fix splitting mirror leg in cluster.
Fix activation order when removing merged snapshot.
Add support for DM_DEVICE_GET_TARGET_VERSION into device_mapper.
Add lvextend-raid.sh to check on RaidLV extensions synchronization.
Fix lvmetad shutdown and avoid lengthy timeouts when rebooting system.
Prevent creating VGs with PVs with different logical block sizes.
Pvmove runs in exclusively activating mode for exclusively active LVs.
Activate thin-pool layered volume as 'read-only' device.
Ignore crypto devices with UUID signature CRYPT-SUBDEV.
Enhance validation for thin and cache pool conversion and swapping.
Fixed activation on boot - lvm2 no longer activates incomplete VGs.
Version 2.02.186 - 27th August 2019
===================================
Improve internal removal of cached devices.
Synchronize with udev when dropping snapshot.
Add missing device synchronization point before removing pvmove node.
Correctly set read_ahead for LVs when pvmove is finished.
Fix metadata writes from corrupting with large physical block size.
Report no_discard_passdown for cache LVs with lvs -o+kernel_discards.
Prevent shared active mirror LVs with lvmlockd.
Version 2.02.185 - 13th May 2019
================================
Fix change of monitoring in clustered volumes.
Improve -lXXX%VG modifier which improves cache segment estimation.
Add synchronization with udev before removing cached devices.
Fix missing growth of _pmspare volume when extending _tmeta volume.
Automatically grow thin metadata, when thin data gets too big.
Add support for vgsplit with cached devices.
Fix signal delivery checking race in libdaemon (lvmetad).
Add missing Before=shutdown.target to LVM2 services to fix shutdown ordering.
Version 2.02.184 - 22nd March 2019
==================================
Fix (de)activation of RaidLVs with visible SubLVs
Change scan_lvs default to 0 so LVs are not scanned for PVs.
Add scan_lvs config setting to control if lvm scans LVs for PVs.
Fix missing proper initialization of pv_list struct when adding pv.
Version 2.02.183 - 07th December 2018
=====================================
Avoid disabling lvmetad when repair does nothing.
Fix component detection for md version 0.90.
Use sync io if async io_setup fails, or use_aio=0 is set in config.
Avoid opening devices to get block size by using existing open fd.
Version 2.02.182 - 30th October 2018
Version 2.02.177 -
====================================
Fix possible write race between last metadata block and the first extent.
Fix filtering of md 1.0 devices so they are not seen as duplicate PVs.
Fix lvconvert striped/raid0/raid0_meta -> raid6 regression.
Add After=rbdmap.service to {lvm2-activation-net,blk-availability}.service.
Fix pvs with lvmetad to avoid too many open files from filter reads.
Fix pvscan --cache to avoid too many open files from filter reads.
Reduce max concurrent aios to avoid EMFILE with many devices.
Fix lvconvert conversion attempts to linear.
Fix lvconvert raid0/raid0_meta -> striped regression.
Fix lvconvert --splitmirror for mirror type (2.02.178).
Do not pair cache policy and cache metadata format.
Fix mirrors honoring read_only_volume_list.
Version 2.02.181 - 01 August 2018
=================================
Reject conversions on raid1 LVs with split tracked SubLVs.
Reject conversions on raid1 split tracked SubLVs.
Fix dmstats list failing when no regions exist.
Reject conversions of LVs under snapshot.
Limit suggested options on incorrect option for lvconvert subcommand.
Version 2.02.180 - 19th July 2018
=================================
Never send any discard ioctl with test mode.
Fix thin-pool alloc which needs same PV for data and metadata.
Extend list of non-memlocked areas with newly linked libs.
Enhance vgcfgrestore to check for active LVs in restored VG.
lvconvert: provide possible layouts between linear and striped/raid
Fix unmonitoring of merging snapshots.
Add missing -l description in fsadm man page.
Cache can use metadata format 2 with cleaner policy.
Avoid showing internal error in lvs output or pvmoved LVs.
Fix check if resized PV can also fit metadata area.
Reopen devices RDWR only before writing to avoid udev issues.
Fix confusing pvresize output when no resize took place.
Fix lvmetad hanging on shutdown.
Fix mem leak in clvmd and more coverity issues.
Version 2.02.179 - 18th June 2018
=================================
Allow forced vgchange to lock type none on clustered VG.
Add the report field "shared".
Enable automatic metadata consistency repair on a shared VG.
Fix pvremove force on a PV with a shared VG.
Fixed vgimportclone of a PV with a shared VG.
Enable previously disallowed thin/cache commands in shared VGs.
Enable metadata-related changes on LVs active with shared lock.
Do not continue trying to use a device that cannot be opened.
Fix problems opening a device that fails and returns.
Use versionsort to fix archive file expiry beyond 100000 files.
Version 2.02.178 - 13th June 2018
=================================
Version 2.02.178-rc1 - 24th May 2018
====================================
Add libaio dependency for build.
Remove lvm1 and pool format handling and add filter to ignore them.
Move some filter checks to after disks are read.
Rework disk scanning and when it is used.
Add new io layer and shift code to using it.
Fix lvconvert's return code on degraded -m raid1 conversion.
--enable-testing switch for ./configure has been removed.
--with-snapshots switch for ./configure has been removed.
--with-mirrors switch for ./configure has been removed.
--with-raid switch for ./configure has been removed.
--with-thin switch for ./configure has been removed.
--with-cache switch for ./configure has been removed.
Include new unit-test framework and unit tests.
Extend validation of region_size for mirror segment.
Reload whole device stack when reinitializing mirror log.
Mirrors without monitoring are WARNING and not blocking on error.
Detect too big region_size with clustered mirrors.
Fix evaluation of maximal region size for mirror log.
Enhance mirror log size estimation and use smaller size when possible.
Fix incorrect mirror log size calculation on 32bit arch.
Enhance preload tree creation.
Fix regression on acceptance of any LV on lvconvert.
Restore usability of thin LV to be again external origin for another thin.
Keep systemd vars on change event in 69-dm-lvm-metad.rules for systemd reload.
Write systemd and non-systemd rule in 69-dm-lvm-metad.rules, GOTO active one.
Add test for activation/volume_list (Sub)LV remnants.
Disallow usage of cache format 2 with mq cache policy.
Again accept striped LV as COW LV with lvconvert -s (2.02.169).
Fix raid target version testing for supported features.
Allow activation of pools when thin/cache_check tool is missing.
Remove RaidLV on creation failure when rmeta devices can't be activated.
Add prioritized_section() to restore cookie boundaries (2.02.177).
Enhance error messages when read error happens.
Enhance mirror log initialization for old mirror target.
Skip private crypto and stratis devices.
Skip frozen raid devices from scanning.
Activate RAID SubLVs on read_only_volume_list readwrite.
Offer convenience type raid5_n converting to raid10.
Automatically avoid reading invalid snapshots during device scan.
Ensure COW device is writable even for read-only thick snapshots.
Support activation of component LVs in read-only mode.
Extend internal library to recognize and work with component LV.
Skip duplicate check for active LV when prompting for its removal.
Activate correct lock holding LV when it is cached.
Do not modify archived metadata when removing striped raid.
Fix memleak on error path when obtaining lv_raid_data_offset.
Fix compatibility size test of extended external origin.
Add external_origin visiting in for_each_sub_lv().
Ensure cluster commands drop their device cache before locking VG.
Do not report LV as remotely active when it's locally exclusive in cluster.
Add deprecate messages for usage of mirrors with mirrorlog.
Separate reporting of monitoring status and error status.
Improve validation of created strings in vgimportclone.
Add missing initialisation of mem pool in systemd generator.
Do not reopen output streams for multithreaded users of liblvm.
Configure ensures /usr/bin dir is checked for dmpd tools.
Restore pvmove support for wide-clustered active volumes (2.02.177).
Avoid non-exclusive activation of exclusive segment types.
Fix trimming sibling PVs when doing a pvmove of raid subLVs.
Preserve exclusive activation during thin snapshot merge.
Avoid exceeding array bounds in allocation tag processing.
Add --lockopt to common options and add option to skip selected locks.
Version 2.02.177 - 18th December 2017
=====================================
When writing text metadata content, use complete 4096 byte blocks.
Change text format metadata alignment from 512 to 4096 bytes.
When writing metadata, consistently skip mdas marked as failed.
Refactor and adjust text format metadata alignment calculation.
Fix python3 path in lvmdbusd to use value detected by configure.
Reduce checks for active LVs in vgchange before background polling.
Ensure _node_send_message always uses clean status of thin pool.
Fix lvmlockd to use pool lock when accessing _tmeta volume.
Report expected sanlock_convert errors only when retries fail.
Avoid blocking in sanlock_convert on SH to EX lock conversion.
Deactivate missing raid LV legs (_rimage_X-missing_Y_Z) on deactivation.
Skip read-modify-write when entire block is replaced.
Categorise I/O with reason annotations in debug messages.
Allow extending of raid LVs created with --nosync after a failed repair.
Command will lock memory only when suspending volumes.
Merge segments when pvmove is finished.
Remove label_verify that has never been used.
Ensure very large numbers used as arguments are not cast to lower values.
Enhance reading and validation of options stripes and stripes_size.
Fix printing of default stripe size when user is not using stripes.
Activation code for pvmove automatically discovers holding LVs for resume.
Make a pvmove LV locking holder.
Do not change critical section counter on resume path without real resume.
Enhance activation code to automatically suspend pvmove participants.
Prevent conversion of thin volumes to snapshot origin when lvmlockd is used.
Correct the steps to change lock type in lvmlockd man page.
Retry lock acquisition on recognized sanlock errors.
Fix lock manager error codes in lvmlockd.
Remove unnecessary single read from lvmdiskscan.
Check raid reshape flags in vg_validate().
Add support for pvmove of cache and snapshot origins.
Avoid using precommitted metadata for suspending pvmove tree.
Enhance pvmove locking.
Deactivate activated LVs on error path when pvmove activation fails.
Add "io" to log/debug_classes for logging low-level I/O.
Eliminate redundant nested VG metadata in VG struct.
Avoid importing persistent filter in vgscan/pvscan/vgrename.
Fix memleak of string buffer when vgcfgbackup runs in secure mode.
Do not print error when clvmd cannot find running clvmd.
Prevent start of new merge of snapshot if origin is already being merged.
Fix offered type for raid6_n_6 to raid5 conversion (raid5_n).
Deactivate sub LVs when removing unused cache-pool.
Do not take backup with suspended devices.
Avoid RAID4 activation on incompatible kernels under all circumstances.
Reject conversion request to striped/raid0 on 2-legged raid4/5.
Version 2.02.176 - 3rd November 2017
====================================

@@ -1,73 +1,5 @@
Version 1.02.174 -
================================
Version 1.02.172 - 07th May 2021
================================
Add dm_tree_node_add_thin_pool_target_v1 with crop_metadata support.
Add support for VDO in blkdeactivate script.
Try to remove all created devices on dm preload tree error path.
Fix dm_list iterators with gcc 10 optimization (-ftree-pta).
Dmeventd handles timer without looping on short intervals.
Version 1.02.170 - 24th March 2020
==================================
Add support for DM_DEVICE_GET_TARGET_VERSION.
Version 1.02.164 - 27th August 2019
===================================
Add debug of dmsetup udevcomplete with hexa print DM_COOKIE_COMPLETED.
Fix versioning of dm_stats_create_region and dm_stats_create_region.
Parsing of cache status understands no_discard_passdown.
Version 1.02.158 - 13th May 2019
================================
Version 1.02.156 - 22nd March 2019
==================================
Ensure migration_threshold for cache is at least 8 chunks.
Enhance ioctl flattening and add parameters only when needed.
Add DM_DEVICE_ARM_POLL for API completeness matching kernel.
Version 1.02.154 - 07th December 2018
=====================================
Do not add parameters for RESUME with DM_DEVICE_CREATE dm task.
Fix dmstats report printing no output.
Version 1.02.152 - 30th October 2018
Version 1.02.146 -
====================================
Add hot fix to avoiding locking collision when monitoring thin-pools.
Version 1.02.150 - 01 August 2018
=================================
Add vdo plugin for monitoring VDO devices.
Version 1.02.149 - 19th July 2018
=================================
Version 1.02.148 - 18th June 2018
=================================
Version 1.02.147 - 13th June 2018
=================================
Version 1.02.147-rc1 - 24th May 2018
====================================
Reuse uname() result for mirror target.
Recognize also mounted btrfs through dm_device_has_mounted_fs().
Add missing log_error() into dm_stats_populate() returning 0.
Avoid calling dm_stats_populate() for DM devices without any stats regions.
Support DM_DEBUG_WITH_LINE_NUMBERS envvar for debug msg with source:line.
Configured command for thin pool threshold handling gets whole environment.
Fix tests for failing dm_snprintf() in stats code.
Parsing mirror status accepts 'userspace' keyword in status.
Introduce dm_malloc_aligned for page alignment of buffers.
Version 1.02.146 - 18th December 2017
=====================================
Activation tree of thin pool skips duplicated check of pool status.
Remove code supporting replicator target.
Do not ignore failure of _info_by_dev().
Propagate delayed resume for pvmove subvolumes.
Suppress integrity encryption keys in 'table' output unless --showkeys supplied.
Version 1.02.145 - 3rd November 2017

@@ -155,7 +155,7 @@ AC_DEFUN([AC_TRY_LDFLAGS],
# and this notice are preserved. This file is offered as-is, without any
# warranty.
serial 3
#serial 3
AC_DEFUN([AX_GCC_BUILTIN], [
AS_VAR_PUSHDEF([ac_var], [ax_cv_have_$1])

aclocal.m4

@@ -1,6 +1,6 @@
# generated automatically by aclocal 1.16.2 -*- Autoconf -*-
# generated automatically by aclocal 1.15 -*- Autoconf -*-
# Copyright (C) 1996-2020 Free Software Foundation, Inc.
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -13,7 +13,7 @@
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_python_module.html
# http://www.gnu.org/software/autoconf-archive/ax_python_module.html
# ===========================================================================
#
# SYNOPSIS
@@ -37,7 +37,7 @@ m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 9
#serial 8
AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
AC_DEFUN([AX_PYTHON_MODULE],[
@@ -69,63 +69,32 @@ AC_DEFUN([AX_PYTHON_MODULE],[
fi
])
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 11 (pkg-config-0.29.1)
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 1 (pkg-config-0.24)
#
# Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
dnl Copyright © 2004 Scott James Remnant <scott@netsplit.com>.
dnl Copyright © 2012-2015 Dan Nicholson <dbn.lists@gmail.com>
dnl
dnl This program is free software; you can redistribute it and/or modify
dnl it under the terms of the GNU General Public License as published by
dnl the Free Software Foundation; either version 2 of the License, or
dnl (at your option) any later version.
dnl
dnl This program is distributed in the hope that it will be useful, but
dnl WITHOUT ANY WARRANTY; without even the implied warranty of
dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
dnl General Public License for more details.
dnl
dnl You should have received a copy of the GNU General Public License
dnl along with this program; if not, write to the Free Software
dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
dnl 02111-1307, USA.
dnl
dnl As a special exception to the GNU General Public License, if you
dnl distribute this file as part of a program that contains a
dnl configuration script generated by Autoconf, you may include it under
dnl the same distribution terms that you use for the rest of that
dnl program.
dnl PKG_PREREQ(MIN-VERSION)
dnl -----------------------
dnl Since: 0.29
dnl
dnl Verify that the version of the pkg-config macros are at least
dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
dnl installed version of pkg-config, this checks the developer's version
dnl of pkg.m4 when generating configure.
dnl
dnl To ensure that this macro is defined, also add:
dnl m4_ifndef([PKG_PREREQ],
dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
dnl
dnl See the "Since" comment for each macro you use to see what version
dnl of the macros you require.
m4_defun([PKG_PREREQ],
[m4_define([PKG_MACROS_VERSION], [0.29.1])
m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
[m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
])dnl PKG_PREREQ
dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
dnl ----------------------------------
dnl Since: 0.16
dnl
dnl Search for the pkg-config tool and set the PKG_CONFIG variable to
dnl first found in the path. Checks that the version of pkg-config found
dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
dnl used since that's the first version where most current features of
dnl pkg-config existed.
# PKG_PROG_PKG_CONFIG([MIN-VERSION])
# ----------------------------------
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
@@ -147,19 +116,18 @@ if test -n "$PKG_CONFIG"; then
PKG_CONFIG=""
fi
fi[]dnl
])dnl PKG_PROG_PKG_CONFIG
])# PKG_PROG_PKG_CONFIG
dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------------------------------
dnl Since: 0.18
dnl
dnl Check to see whether a particular set of modules exists. Similar to
dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
dnl
dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
dnl only at the first occurence in configure.ac, so if the first place
dnl it's called might be skipped (such as if it is within an "if", you
dnl have to call PKG_CHECK_EXISTS manually
# PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
#
# Check to see whether a particular set of modules exists. Similar
# to PKG_CHECK_MODULES(), but does not set variables or print errors.
#
# Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
# only at the first occurence in configure.ac, so if the first place
# it's called might be skipped (such as if it is within an "if", you
# have to call PKG_CHECK_EXISTS manually
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
@@ -169,10 +137,8 @@ m4_ifvaln([$3], [else
$3])dnl
fi])
dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
dnl ---------------------------------------------
dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
dnl pkg_failed based on the result.
# _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
# ---------------------------------------------
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
pkg_cv_[]$1="$$1"
@@ -184,11 +150,10 @@ m4_define([_PKG_CONFIG],
else
pkg_failed=untried
fi[]dnl
])dnl _PKG_CONFIG
])# _PKG_CONFIG
dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl ---------------------------
dnl Internal check to see if pkg-config supports short errors.
# _PKG_SHORT_ERRORS_SUPPORTED
# -----------------------------
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
@@ -196,17 +161,19 @@ if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
else
_pkg_short_errors_supported=no
fi[]dnl
])dnl _PKG_SHORT_ERRORS_SUPPORTED
])# _PKG_SHORT_ERRORS_SUPPORTED
dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl --------------------------------------------------------------
dnl Since: 0.4.0
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
# PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
# [ACTION-IF-NOT-FOUND])
#
#
# Note that if there is a possibility the first call to
# PKG_CHECK_MODULES might not happen, you should be sure to include an
# explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
#
#
# --------------------------------------------------------------
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
@@ -260,40 +227,16 @@ else
AC_MSG_RESULT([yes])
$3
fi[]dnl
])dnl PKG_CHECK_MODULES
])# PKG_CHECK_MODULES
dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl ---------------------------------------------------------------------
dnl Since: 0.29
dnl
dnl Checks for existence of MODULES and gathers its build flags with
dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
dnl and VARIABLE-PREFIX_LIBS from --libs.
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
dnl configure.ac.
AC_DEFUN([PKG_CHECK_MODULES_STATIC],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
_save_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES($@)
PKG_CONFIG=$_save_PKG_CONFIG[]dnl
])dnl PKG_CHECK_MODULES_STATIC
dnl PKG_INSTALLDIR([DIRECTORY])
dnl -------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable pkgconfigdir as the location where a module
dnl should install pkg-config .pc files. By default the directory is
dnl $libdir/pkgconfig, but the default can be changed by passing
dnl DIRECTORY. The user can override through the --with-pkgconfigdir
dnl parameter.
# PKG_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable pkgconfigdir as the location where a module
# should install pkg-config .pc files. By default the directory is
# $libdir/pkgconfig, but the default can be changed by passing
# DIRECTORY. The user can override through the --with-pkgconfigdir
# parameter.
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
@@ -304,18 +247,16 @@ AC_ARG_WITH([pkgconfigdir],
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_INSTALLDIR
]) dnl PKG_INSTALLDIR
dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
dnl --------------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable noarch_pkgconfigdir as the location where a
dnl module should install arch-independent pkg-config .pc files. By
dnl default the directory is $datadir/pkgconfig, but the default can be
dnl changed by passing DIRECTORY. The user can override through the
dnl --with-noarch-pkgconfigdir parameter.
# PKG_NOARCH_INSTALLDIR(DIRECTORY)
# -------------------------
# Substitutes the variable noarch_pkgconfigdir as the location where a
# module should install arch-independent pkg-config .pc files. By
# default the directory is $datadir/pkgconfig, but the default can be
# changed by passing DIRECTORY. The user can override through the
# --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
m4_pushdef([pkg_description],
@@ -326,15 +267,13 @@ AC_ARG_WITH([noarch-pkgconfigdir],
AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_NOARCH_INSTALLDIR
]) dnl PKG_NOARCH_INSTALLDIR
dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------
dnl Since: 0.28
dnl
dnl Retrieves the value of the pkg-config variable for the given module.
# PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
# [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
# -------------------------------------------
# Retrieves the value of the pkg-config variable for the given module.
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
@@ -343,77 +282,9 @@ _PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])dnl PKG_CHECK_VAR
])# PKG_CHECK_VAR
dnl PKG_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND],
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------
dnl
dnl Prepare a "--with-" configure option using the lowercase
dnl [VARIABLE-PREFIX] name, merging the behaviour of AC_ARG_WITH and
dnl PKG_CHECK_MODULES in a single macro.
AC_DEFUN([PKG_WITH_MODULES],
[
m4_pushdef([with_arg], m4_tolower([$1]))
m4_pushdef([description],
[m4_default([$5], [build with ]with_arg[ support])])
m4_pushdef([def_arg], [m4_default([$6], [auto])])
m4_pushdef([def_action_if_found], [AS_TR_SH([with_]with_arg)=yes])
m4_pushdef([def_action_if_not_found], [AS_TR_SH([with_]with_arg)=no])
m4_case(def_arg,
[yes],[m4_pushdef([with_without], [--without-]with_arg)],
[m4_pushdef([with_without],[--with-]with_arg)])
AC_ARG_WITH(with_arg,
AS_HELP_STRING(with_without, description[ @<:@default=]def_arg[@:>@]),,
[AS_TR_SH([with_]with_arg)=def_arg])
AS_CASE([$AS_TR_SH([with_]with_arg)],
[yes],[PKG_CHECK_MODULES([$1],[$2],$3,$4)],
[auto],[PKG_CHECK_MODULES([$1],[$2],
[m4_n([def_action_if_found]) $3],
[m4_n([def_action_if_not_found]) $4])])
m4_popdef([with_arg])
m4_popdef([description])
m4_popdef([def_arg])
])dnl PKG_WITH_MODULES
dnl PKG_HAVE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl -----------------------------------------------
dnl
dnl Convenience macro to trigger AM_CONDITIONAL after PKG_WITH_MODULES
dnl check. HAVE_[VARIABLE-PREFIX] is exported as a make variable.
AC_DEFUN([PKG_HAVE_WITH_MODULES],
[
PKG_WITH_MODULES([$1],[$2],,,[$3],[$4])
AM_CONDITIONAL([HAVE_][$1],
[test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"])
])dnl PKG_HAVE_WITH_MODULES
dnl PKG_HAVE_DEFINE_WITH_MODULES(VARIABLE-PREFIX, MODULES,
dnl [DESCRIPTION], [DEFAULT])
dnl ------------------------------------------------------
dnl
dnl Convenience macro to run AM_CONDITIONAL and AC_DEFINE after
dnl PKG_WITH_MODULES check. HAVE_[VARIABLE-PREFIX] is exported as make
dnl and preprocessor variable.
AC_DEFUN([PKG_HAVE_DEFINE_WITH_MODULES],
[
PKG_HAVE_WITH_MODULES([$1],[$2],[$3],[$4])
AS_IF([test "$AS_TR_SH([with_]m4_tolower([$1]))" = "yes"],
[AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
])dnl PKG_HAVE_DEFINE_WITH_MODULES
# Copyright (C) 1999-2020 Free Software Foundation, Inc.
# Copyright (C) 1999-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -447,11 +318,8 @@ AC_DEFUN([AM_PATH_PYTHON],
dnl Find a Python interpreter. Python versions prior to 2.0 are not
dnl supported. (2.0 was released on October 16, 2000).
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],
[python python2 python3 dnl
python3.9 python3.8 python3.7 python3.6 python3.5 python3.4 python3.3 dnl
python3.2 python3.1 python3.0 dnl
python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 dnl
python2.0])
[python python2 python3 python3.3 python3.2 python3.1 python3.0 python2.7 dnl
python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0])
AC_ARG_VAR([PYTHON], [the Python interpreter])
@@ -651,7 +519,7 @@ for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[[i]]
sys.exit(sys.hexversion < minverhex)"
AS_IF([AM_RUN_LOG([$1 -c "$prog"])], [$3], [$4])])
# Copyright (C) 2001-2020 Free Software Foundation, Inc.
# Copyright (C) 2001-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
@@ -668,4 +536,5 @@ AC_DEFUN([AM_RUN_LOG],
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
m4_include([acinclude.m4])


@@ -1,44 +0,0 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
#
# This file is part of the device-mapper userspace tools.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU Lesser General Public License v.2.1.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# Uncomment this to build the simple radix tree. You'll need to make clean too.
# Comment to build the advanced radix tree.
#base/data-struct/radix-tree.o: CFLAGS += -DSIMPLE_RADIX_TREE
# NOTE: this Makefile only works as 'include' for toplevel Makefile
# which defines all top_* variables
BASE_SOURCE=\
base/data-struct/radix-tree.c
BASE_TARGET = base/libbase.a
BASE_DEPENDS = $(BASE_SOURCE:%.c=%.d)
BASE_OBJECTS = $(BASE_SOURCE:%.c=%.o)
CLEAN_TARGETS += $(BASE_DEPENDS) $(BASE_OBJECTS) \
$(BASE_SOURCE:%.c=%.gcda) \
$(BASE_SOURCE:%.c=%.gcno) \
$(BASE_TARGET)
$(BASE_TARGET): $(BASE_OBJECTS)
@echo " [AR] $@"
$(Q) $(RM) $@
$(Q) $(AR) rsv $@ $(BASE_OBJECTS) > /dev/null
ifneq (,$(findstring $(MAKECMDGOALS),clean distclean))
BASE_SOURCE += \
base/data-struct/hash.c \
base/data-struct/list.c
endif
ifeq ("$(DEPENDS)","yes")
-include $(BASE_DEPENDS)
endif

File diff suppressed because it is too large.

@@ -1,256 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#include "radix-tree.h"
#include "base/memory/container_of.h"
#include "base/memory/zalloc.h"
#include <assert.h>
#include <stdlib.h>
#include <stdio.h>
//----------------------------------------------------------------
// This implementation is based around nested binary trees. Very
// simple (and hopefully correct).
struct node {
struct node *left;
struct node *right;
uint8_t key;
struct node *center;
bool has_value;
union radix_value value;
};
struct radix_tree {
radix_value_dtr dtr;
void *dtr_context;
struct node *root;
};
struct radix_tree *
radix_tree_create(radix_value_dtr dtr, void *dtr_context)
{
struct radix_tree *rt = zalloc(sizeof(*rt));
if (rt) {
rt->dtr = dtr;
rt->dtr_context = dtr_context;
}
return rt;
}
// Returns the number of entries in the tree
static unsigned _destroy_tree(struct node *n, radix_value_dtr dtr, void *context)
{
unsigned r;
if (!n)
return 0;
r = _destroy_tree(n->left, dtr, context);
r += _destroy_tree(n->right, dtr, context);
r += _destroy_tree(n->center, dtr, context);
if (n->has_value) {
if (dtr)
dtr(context, n->value);
r++;
}
free(n);
return r;
}
void radix_tree_destroy(struct radix_tree *rt)
{
_destroy_tree(rt->root, rt->dtr, rt->dtr_context);
free(rt);
}
static unsigned _count(struct node *n)
{
unsigned r;
if (!n)
return 0;
r = _count(n->left);
r += _count(n->right);
r += _count(n->center);
if (n->has_value)
r++;
return r;
}
unsigned radix_tree_size(struct radix_tree *rt)
{
return _count(rt->root);
}
static struct node **_lookup(struct node **pn, uint8_t *kb, uint8_t *ke)
{
struct node *n = *pn;
if (!n || (kb == ke))
return pn;
if (*kb < n->key)
return _lookup(&n->left, kb, ke);
else if (*kb > n->key)
return _lookup(&n->right, kb, ke);
else
return _lookup(&n->center, kb + 1, ke);
}
static bool _insert(struct node **pn, uint8_t *kb, uint8_t *ke, union radix_value v)
{
struct node *n = *pn;
if (!n) {
n = zalloc(sizeof(*n));
if (!n)
return false;
n->key = *kb;
*pn = n;
}
if (kb == ke) {
n->has_value = true;
n->value = v;
return true;
}
if (*kb < n->key)
return _insert(&n->left, kb, ke, v);
else if (*kb > n->key)
return _insert(&n->right, kb, ke, v);
else
return _insert(&n->center, kb + 1, ke, v);
}
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value v)
{
return _insert(&rt->root, kb, ke, v);
}
bool radix_tree_remove(struct radix_tree *rt, uint8_t *kb, uint8_t *ke)
{
struct node **pn = _lookup(&rt->root, kb, ke);
struct node *n = *pn;
if (!n || !n->has_value)
return false;
else {
if (rt->dtr)
rt->dtr(rt->dtr_context, n->value);
if (n->left || n->center || n->right) {
n->has_value = false;
return true;
} else {
// FIXME: delete parent if this was the last entry
free(n);
*pn = NULL;
}
return true;
}
}
unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *kb, uint8_t *ke)
{
struct node **pn;
unsigned count = 0;	/* nothing is removed if the prefix is absent */
pn = _lookup(&rt->root, kb, ke);
if (*pn) {
count = _destroy_tree(*pn, rt->dtr, rt->dtr_context);
*pn = NULL;
}
return count;
}
bool
radix_tree_lookup(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value *result)
{
struct node **pn = _lookup(&rt->root, kb, ke);
struct node *n = *pn;
if (n && n->has_value) {
*result = n->value;
return true;
} else
return false;
}
static void _iterate(struct node *n, struct radix_tree_iterator *it)
{
if (!n)
return;
_iterate(n->left, it);
if (n->has_value)
// FIXME: fill out the key
it->visit(it, NULL, NULL, n->value);
_iterate(n->center, it);
_iterate(n->right, it);
}
void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
struct radix_tree_iterator *it)
{
if (kb == ke)
_iterate(rt->root, it);
else {
struct node **pn = _lookup(&rt->root, kb, ke);
struct node *n = *pn;
if (n) {
if (n->has_value)
it->visit(it, NULL, NULL, n->value);
_iterate(n->center, it);
}
}
}
bool radix_tree_is_well_formed(struct radix_tree *rt)
{
return true;
}
void radix_tree_dump(struct radix_tree *rt, FILE *out)
{
}
//----------------------------------------------------------------
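For orientation, a minimal usage sketch of the API implemented above. It is illustrative only (the _demo name and the key bytes are hypothetical); keys are passed as byte ranges [kb, ke), so any binary string works as a key:

static void _demo(void)
{
	struct radix_tree *rt = radix_tree_create(NULL, NULL);
	uint8_t key[] = { 'v', 'g', '1' };
	union radix_value v = { .n = 42 }, out;

	if (!rt)
		return;
	if (radix_tree_insert(rt, key, key + sizeof(key), v) &&
	    radix_tree_lookup(rt, key, key + sizeof(key), &out))
		printf("found %llu\n", (unsigned long long) out.n);
	radix_tree_destroy(rt);	/* calls the dtr (here NULL) on each value */
}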


@@ -1,21 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
//----------------------------------------------------------------
#ifdef SIMPLE_RADIX_TREE
#include "base/data-struct/radix-tree-simple.c"
#else
#include "base/data-struct/radix-tree-adaptive.c"
#endif
//----------------------------------------------------------------


@@ -1,64 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_DATA_STRUCT_RADIX_TREE_H
#define BASE_DATA_STRUCT_RADIX_TREE_H
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
//----------------------------------------------------------------
struct radix_tree;
union radix_value {
void *ptr;
uint64_t n;
};
typedef void (*radix_value_dtr)(void *context, union radix_value v);
// dtr will be called on any deleted entries. dtr may be NULL.
struct radix_tree *radix_tree_create(radix_value_dtr dtr, void *dtr_context);
void radix_tree_destroy(struct radix_tree *rt);
unsigned radix_tree_size(struct radix_tree *rt);
bool radix_tree_insert(struct radix_tree *rt, uint8_t *kb, uint8_t *ke, union radix_value v);
bool radix_tree_remove(struct radix_tree *rt, uint8_t *kb, uint8_t *ke);
// Returns the number of values removed
unsigned radix_tree_remove_prefix(struct radix_tree *rt, uint8_t *prefix_b, uint8_t *prefix_e);
bool radix_tree_lookup(struct radix_tree *rt,
uint8_t *kb, uint8_t *ke, union radix_value *result);
// The radix tree stores entries in lexicographical order, which means
// we can iterate entries in order, or iterate only the entries with a
// particular prefix.
struct radix_tree_iterator {
// Returns false if the iteration should end.
bool (*visit)(struct radix_tree_iterator *it,
uint8_t *kb, uint8_t *ke, union radix_value v);
};
void radix_tree_iterate(struct radix_tree *rt, uint8_t *kb, uint8_t *ke,
struct radix_tree_iterator *it);
// Checks that some constraints on the shape of the tree are
// being held. For debug only.
bool radix_tree_is_well_formed(struct radix_tree *rt);
void radix_tree_dump(struct radix_tree *rt, FILE *out);
//----------------------------------------------------------------
#endif
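As a sketch of the iterator interface declared above (the counting helper below is hypothetical, not part of the header): the caller embeds struct radix_tree_iterator in its own state and recovers that state inside the visit callback. The cast works because the iterator is the first member; container_of (see container_of.h below) is the general form.

struct count_state {
	struct radix_tree_iterator it;	/* embedded; recovered via a cast */
	unsigned count;
};

static bool _count_visit(struct radix_tree_iterator *it,
			 uint8_t *kb, uint8_t *ke, union radix_value v)
{
	struct count_state *cs = (struct count_state *) it;

	(void) kb; (void) ke; (void) v;	/* key range and value unused here */
	cs->count++;
	return true;	/* returning false would stop the iteration */
}

static unsigned count_with_prefix(struct radix_tree *rt,
				  uint8_t *pb, uint8_t *pe)
{
	struct count_state cs = { .it.visit = _count_visit, .count = 0 };

	radix_tree_iterate(rt, pb, pe, &cs.it);
	return cs.count;
}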


@@ -1,25 +0,0 @@
// Copyright (C) 2018 - 2020 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_CONTAINER_OF_H
#define BASE_MEMORY_CONTAINER_OF_H
#include <stddef.h> // offsetof
//----------------------------------------------------------------
#define container_of(v, t, head) \
((t *)((char *)(v) - offsetof(t, head)))
//----------------------------------------------------------------
#endif
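A small, hypothetical example of what this macro is for: given a pointer to a member embedded in a larger struct, recover the enclosing struct (the link type and names are illustrative):

struct link { struct link *next; };

struct item {
	int id;
	struct link list;	/* embedded member */
};

static struct item *item_from_link(struct link *l)
{
	/* subtracts offsetof(struct item, list) from l */
	return container_of(l, struct item, list);
}

This is the same pattern callers of the radix tree iterator rely on when they embed struct radix_tree_iterator in their own state.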


@@ -1,31 +0,0 @@
// Copyright (C) 2018 Red Hat, Inc. All rights reserved.
//
// This file is part of LVM2.
//
// This copyrighted material is made available to anyone wishing to use,
// modify, copy, or redistribute it subject to the terms and conditions
// of the GNU Lesser General Public License v.2.1.
//
// You should have received a copy of the GNU Lesser General Public License
// along with this program; if not, write to the Free Software Foundation,
// Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
#ifndef BASE_MEMORY_ZALLOC_H
#define BASE_MEMORY_ZALLOC_H
#include <stdlib.h>
#include <string.h>
//----------------------------------------------------------------
static inline void *zalloc(size_t len)
{
void *ptr = malloc(len);
if (ptr)
memset(ptr, 0, len);
return ptr;
}
//----------------------------------------------------------------
#endif
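For example (a hypothetical node type), zalloc means freshly allocated structs start with NULL pointers and zeroed counters without explicit field assignments, which is exactly what the radix tree code above depends on:

struct example_node {
	struct example_node *left, *right;
	unsigned count;
};

static struct example_node *example_node_new(void)
{
	return zalloc(sizeof(struct example_node));	/* all fields zero/NULL */
}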


@@ -185,20 +185,6 @@ devices {
# present on the system. sysfs must be part of the kernel and mounted.)
sysfs_scan = 1
# Configuration option devices/scan_lvs.
# Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.
# When 1, LVM will detect PVs layered on LVs, and caution must be
# taken to avoid a host accessing a layered VG that may not belong
# to it, e.g. from a guest image. This generally requires excluding
# the LVs with device filters. Also, when this setting is enabled,
# every LVM command will scan every active LV on the system (unless
# filtered), which can cause performance problems on systems with
# many active LVs. When this setting is 0, LVM will not detect or
# use PVs that exist on LVs, and will not allow a PV to be created on
# an LV. The LVs are ignored using a built in device filter that
# identifies and excludes LVs.
scan_lvs = 0
# Configuration option devices/multipath_component_detection.
# Ignore devices that are components of DM multipath devices.
multipath_component_detection = 1
@@ -326,12 +312,6 @@ devices {
# Enabling this setting allows the VG to be used as usual even with
# uncertain devices.
allow_changes_with_duplicate_pvs = 0
# Configuration option devices/allow_mixed_block_sizes.
# Allow PVs in the same VG with different logical block sizes.
# When allowed, the user is responsible to ensure that an LV is
# using PVs with matching block sizes when necessary.
allow_mixed_block_sizes = 1
}
# Configuration section allocation.
@@ -467,16 +447,9 @@ allocation {
# This configuration option does not have a default value defined.
# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
# Thin pool metadata and data will always use different PVs.
# Thin pool metdata and data will always use different PVs.
thin_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/thin_pool_crop_metadata.
# Older versions of lvm2 cropped the pool's metadata size to 15.81 GiB.
# This is slightly less than the actual maximum of 15.88 GiB.
# For compatibility with older versions and use of the cropped size, set to 1.
# This configuration option has an automatic default value.
# thin_pool_crop_metadata = 0
# Configuration option allocation/thin_pool_zero.
# Thin pool data chunks are zeroed before they are first used.
# Zeroing with a larger thin pool chunk size reduces performance.
@@ -512,10 +485,6 @@ allocation {
# This configuration option has an automatic default value.
# thin_pool_chunk_size_policy = "generic"
# Configuration option allocation/zero_metadata.
# Zero whole metadata area before use with thin or cache pool.
zero_metadata = 1
# Configuration option allocation/thin_pool_chunk_size.
# The minimal chunk size in KiB for thin pool volumes.
# Larger chunk sizes may improve performance for plain thin volumes,
@@ -642,9 +611,9 @@ log {
# Select log messages by class.
# Some debugging messages are assigned to a class and only appear in
# debug output if the class is listed here. Classes currently
# available: memory, devices, io, activation, allocation, lvmetad,
# available: memory, devices, activation, allocation, lvmetad,
# metadata, cache, locking, lvmpolld. Use "all" to see everything.
debug_classes = [ "memory", "devices", "io", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}
# Configuration section backup.
@@ -733,17 +702,29 @@ global {
activation = 1
# Configuration option global/fallback_to_lvm1.
# This setting is no longer used.
# Try running LVM1 tools if LVM cannot communicate with DM.
# This option only applies to 2.4 kernels and is provided to help
# switch between device-mapper kernels and LVM1 kernels. The LVM1
# tools need to be installed with .lvm1 suffixes, e.g. vgscan.lvm1.
# They will stop working once the lvm2 on-disk metadata format is used.
# This configuration option has an automatic default value.
# fallback_to_lvm1 = 0
# fallback_to_lvm1 = @DEFAULT_FALLBACK_TO_LVM1@
# Configuration option global/format.
# This setting is no longer used.
# The default metadata format that commands should use.
# The -M 1|2 option overrides this setting.
#
# Accepted values:
# lvm1
# lvm2
#
# This configuration option has an automatic default value.
# format = "lvm2"
# Configuration option global/format_libraries.
# This setting is no longer used.
# Shared libraries that process different metadata formats.
# If support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# This configuration option does not have a default value defined.
# Configuration option global/segment_libraries.
@@ -840,6 +821,13 @@ global {
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Configuration option global/detect_internal_vg_cache_corruption.
# Internal verification of VG structures.
# Check if CRC matches when a parsed VG is used multiple times. This
# is useful to catch unexpected changes to cached VG structures.
# Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# Configuration option global/metadata_read_only.
# No operations that change on-disk metadata are permitted.
# Additionally, read-only commands that encounter metadata in need of
@@ -922,11 +910,6 @@ global {
# This configuration option has an automatic default value.
# lvdisplay_shows_full_device_path = 0
# Configuration option global/use_aio.
# Use async I/O when reading and writing devices.
# This configuration option has an automatic default value.
# use_aio = 1
# Configuration option global/use_lvmetad.
# Use lvmetad to cache metadata and reduce disk scanning.
# When enabled (and running), lvmetad provides LVM commands with VG
@@ -1144,16 +1127,6 @@ global {
# When enabled, an LVM command that changes PVs, changes VG metadata,
# or changes the activation state of an LV will send a notification.
notify_dbus = 1
# Configuration option global/io_memory_size.
# The amount of memory in KiB that LVM allocates to perform disk io.
# LVM performance may benefit from more io memory when there are many
# disks or VG metadata is large. Increasing this size may be necessary
# when a single copy of VG metadata is larger than the current setting.
# This value should usually not be decreased from the default; setting
# it too low can result in lvm failing to read VGs.
# This configuration option has an automatic default value.
# io_memory_size = 8192
}
# Configuration section activation.

configure (vendored, 576 lines changed): diff suppressed because it is too large.

@@ -39,6 +39,7 @@ case "$host_os" in
LDDEPS="$LDDEPS .export.sym"
LIB_SUFFIX=so
DEVMAPPER=yes
AIO=yes
BUILD_LVMETAD=no
BUILD_LVMPOLLD=no
LOCKDSANLOCK=no
@@ -58,6 +59,7 @@ case "$host_os" in
CLDNOWHOLEARCHIVE=
LIB_SUFFIX=dylib
DEVMAPPER=yes
AIO=no
ODIRECT=no
DM_IOCTLS=no
SELINUX=no
@@ -77,7 +79,6 @@ AC_PROG_CC
AC_PROG_CXX
CFLAGS=$save_CFLAGS
CXXFLAGS=$save_CXXFLAGS
PATH_SBIN="$PATH:/usr/sbin:/sbin"
dnl probably no longer needed in 2008, but...
AC_PROG_GCC_TRADITIONAL
@@ -103,7 +104,7 @@ AC_HEADER_SYS_WAIT
AC_HEADER_TIME
AC_CHECK_HEADERS([assert.h ctype.h dirent.h errno.h fcntl.h float.h \
getopt.h inttypes.h langinfo.h libaio.h libgen.h limits.h locale.h paths.h \
getopt.h inttypes.h langinfo.h libgen.h limits.h locale.h paths.h \
signal.h stdarg.h stddef.h stdio.h stdlib.h string.h sys/file.h \
sys/ioctl.h syslog.h sys/mman.h sys/param.h sys/resource.h sys/stat.h \
sys/time.h sys/types.h sys/utsname.h sys/wait.h time.h \
@@ -146,7 +147,7 @@ AX_GCC_BUILTIN([__builtin_clz])
################################################################################
dnl -- Check for functions
AC_CHECK_FUNCS([ftruncate gethostname getpagesize gettimeofday localtime_r \
memchr memset mkdir mkfifo munmap nl_langinfo pselect realpath rmdir setenv \
memchr memset mkdir mkfifo munmap nl_langinfo realpath rmdir setenv \
setlocale strcasecmp strchr strcspn strdup strerror strncasecmp strndup \
strrchr strspn strstr strtol strtoul uname], , [AC_MSG_ERROR(bailing out)])
AC_FUNC_ALLOCA
@@ -282,6 +283,58 @@ esac
AC_MSG_RESULT($MANGLING)
AC_DEFINE_UNQUOTED([DEFAULT_DM_NAME_MANGLING], $mangling, [Define default name mangling behaviour])
################################################################################
dnl -- LVM1 tool fallback option
AC_MSG_CHECKING(whether to enable lvm1 fallback)
AC_ARG_ENABLE(lvm1_fallback,
AC_HELP_STRING([--enable-lvm1_fallback],
[use this to fall back and use LVM1 binaries if
device-mapper is missing from the kernel]),
LVM1_FALLBACK=$enableval, LVM1_FALLBACK=no)
AC_MSG_RESULT($LVM1_FALLBACK)
if test "$LVM1_FALLBACK" = yes; then
DEFAULT_FALLBACK_TO_LVM1=1
AC_DEFINE([LVM1_FALLBACK], 1, [Define to 1 if 'lvm' should fall back to using LVM1 binaries if device-mapper is missing from the kernel])
else
DEFAULT_FALLBACK_TO_LVM1=0
fi
AC_DEFINE_UNQUOTED(DEFAULT_FALLBACK_TO_LVM1, [$DEFAULT_FALLBACK_TO_LVM1],
[Fall back to LVM1 by default if device-mapper is missing from the kernel.])
################################################################################
dnl -- format1 inclusion type
AC_MSG_CHECKING(whether to include support for lvm1 metadata)
AC_ARG_WITH(lvm1,
AC_HELP_STRING([--with-lvm1=TYPE],
[LVM1 metadata support: internal/shared/none [internal]]),
LVM1=$withval, LVM1=internal)
AC_MSG_RESULT($LVM1)
case "$LVM1" in
none|shared) ;;
internal) AC_DEFINE([LVM1_INTERNAL], 1,
[Define to 1 to include built-in support for LVM1 metadata.]) ;;
*) AC_MSG_ERROR([--with-lvm1 parameter invalid]) ;;
esac
################################################################################
dnl -- format_pool inclusion type
AC_MSG_CHECKING(whether to include support for GFS pool metadata)
AC_ARG_WITH(pool,
AC_HELP_STRING([--with-pool=TYPE],
[GFS pool read-only support: internal/shared/none [internal]]),
POOL=$withval, POOL=internal)
AC_MSG_RESULT($POOL)
case "$POOL" in
none|shared) ;;
internal) AC_DEFINE([POOL_INTERNAL], 1,
[Define to 1 to include built-in support for GFS pool metadata.]) ;;
*) AC_MSG_ERROR([--with-pool parameter invalid])
esac
################################################################################
dnl -- cluster_locking inclusion type
AC_MSG_CHECKING(whether to include support for cluster locking)
@@ -332,6 +385,13 @@ esac
################################################################################
dnl -- raid inclusion type
AC_MSG_CHECKING(whether to include raid)
AC_ARG_WITH(raid,
AC_HELP_STRING([--with-raid=TYPE],
[raid support: internal/shared/none [internal]]),
RAID=$withval, RAID=internal)
AC_MSG_RESULT($RAID)
AC_ARG_WITH(default-mirror-segtype,
AC_HELP_STRING([--with-default-mirror-segtype=TYPE],
[default mirror segtype: raid1/mirror [raid1]]),
@@ -340,9 +400,14 @@ AC_ARG_WITH(default-raid10-segtype,
AC_HELP_STRING([--with-default-raid10-segtype=TYPE],
[default mirror segtype: raid10/mirror [raid10]]),
DEFAULT_RAID10_SEGTYPE=$withval, DEFAULT_RAID10_SEGTYPE="raid10")
AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.])
case "$RAID" in
none) test "$DEFAULT_MIRROR_SEGTYPE" = "raid1" && DEFAULT_MIRROR_SEGTYPE="mirror"
test "$DEFAULT_RAID10_SEGTYPE" = "raid10" && DEFAULT_RAID10_SEGTYPE="mirror" ;;
shared) ;;
internal) AC_DEFINE([RAID_INTERNAL], 1,
[Define to 1 to include built-in support for raid.]) ;;
*) AC_MSG_ERROR([--with-raid parameter invalid]) ;;
esac
AC_DEFINE_UNQUOTED([DEFAULT_MIRROR_SEGTYPE], ["$DEFAULT_MIRROR_SEGTYPE"],
[Default segtype used for mirror volumes.])
@@ -405,7 +470,7 @@ case "$THIN" in
internal|shared)
# Empty means a config way to ignore thin checking
if test "$THIN_CHECK_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_CHECK_CMD, thin_check, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_CHECK_CMD, thin_check)
if test -z "$THIN_CHECK_CMD"; then
AC_MSG_WARN([thin_check not found in path $PATH])
THIN_CHECK_CMD=/usr/sbin/thin_check
@@ -429,7 +494,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin dumping
if test "$THIN_DUMP_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_DUMP_CMD, thin_dump, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_DUMP_CMD, thin_dump)
test -z "$THIN_DUMP_CMD" && {
AC_MSG_WARN(thin_dump not found in path $PATH)
THIN_DUMP_CMD=/usr/sbin/thin_dump
@@ -438,7 +503,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin repairing
if test "$THIN_REPAIR_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_REPAIR_CMD, thin_repair, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_REPAIR_CMD, thin_repair)
test -z "$THIN_REPAIR_CMD" && {
AC_MSG_WARN(thin_repair not found in path $PATH)
THIN_REPAIR_CMD=/usr/sbin/thin_repair
@@ -447,7 +512,7 @@ case "$THIN" in
fi
# Empty means a config way to ignore thin restoring
if test "$THIN_RESTORE_CMD" = "autodetect"; then
AC_PATH_TOOL(THIN_RESTORE_CMD, thin_restore, [], [$PATH_SBIN])
AC_PATH_TOOL(THIN_RESTORE_CMD, thin_restore)
test -z "$THIN_RESTORE_CMD" && {
AC_MSG_WARN(thin_restore not found in path $PATH)
THIN_RESTORE_CMD=/usr/sbin/thin_restore
@@ -519,7 +584,7 @@ case "$CACHE" in
internal|shared)
# Empty means a config way to ignore cache checking
if test "$CACHE_CHECK_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_CHECK_CMD, cache_check, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_CHECK_CMD, cache_check)
if test -z "$CACHE_CHECK_CMD"; then
AC_MSG_WARN([cache_check not found in path $PATH])
CACHE_CHECK_CMD=/usr/sbin/cache_check
@@ -554,7 +619,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache dumping
if test "$CACHE_DUMP_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_DUMP_CMD, cache_dump, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_DUMP_CMD, cache_dump)
test -z "$CACHE_DUMP_CMD" && {
AC_MSG_WARN(cache_dump not found in path $PATH)
CACHE_DUMP_CMD=/usr/sbin/cache_dump
@@ -563,7 +628,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache repairing
if test "$CACHE_REPAIR_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_REPAIR_CMD, cache_repair, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_REPAIR_CMD, cache_repair)
test -z "$CACHE_REPAIR_CMD" && {
AC_MSG_WARN(cache_repair not found in path $PATH)
CACHE_REPAIR_CMD=/usr/sbin/cache_repair
@@ -572,7 +637,7 @@ case "$CACHE" in
fi
# Empty means a config way to ignore cache restoring
if test "$CACHE_RESTORE_CMD" = "autodetect"; then
AC_PATH_TOOL(CACHE_RESTORE_CMD, cache_restore, [], [$PATH_SBIN])
AC_PATH_TOOL(CACHE_RESTORE_CMD, cache_restore)
test -z "$CACHE_RESTORE_CMD" && {
AC_MSG_WARN(cache_restore not found in path $PATH)
CACHE_RESTORE_CMD=/usr/sbin/cache_restore
@@ -607,12 +672,6 @@ AC_ARG_ENABLE([readline],
AC_HELP_STRING([--disable-readline], [disable readline support]),
READLINE=$enableval, READLINE=maybe)
################################################################################
dnl -- Disable editline
AC_ARG_ENABLE([editline],
AC_HELP_STRING([--enable-editline], [enable editline support]),
EDITLINE=$enableval, EDITLINE=no)
################################################################################
dnl -- Disable realtime clock support
AC_MSG_CHECKING(whether to enable realtime support)
@@ -656,7 +715,7 @@ dnl -- Set up pidfile and run directory
AH_TEMPLATE(DEFAULT_PID_DIR)
AC_ARG_WITH(default-pid-dir,
AC_HELP_STRING([--with-default-pid-dir=PID_DIR],
[default directory to keep PID files in [autodetect]]),
[Default directory to keep PID files in. [autodetect]]),
DEFAULT_PID_DIR="$withval", DEFAULT_PID_DIR=$RUN_DIR)
AC_DEFINE_UNQUOTED(DEFAULT_PID_DIR, ["$DEFAULT_PID_DIR"],
[Default directory to keep PID files in.])
@@ -664,7 +723,7 @@ AC_DEFINE_UNQUOTED(DEFAULT_PID_DIR, ["$DEFAULT_PID_DIR"],
AH_TEMPLATE(DEFAULT_DM_RUN_DIR, [Name of default DM run directory.])
AC_ARG_WITH(default-dm-run-dir,
AC_HELP_STRING([--with-default-dm-run-dir=DM_RUN_DIR],
[default DM run directory [autodetect]]),
[ Default DM run directory. [autodetect]]),
DEFAULT_DM_RUN_DIR="$withval", DEFAULT_DM_RUN_DIR=$RUN_DIR)
AC_DEFINE_UNQUOTED(DEFAULT_DM_RUN_DIR, ["$DEFAULT_DM_RUN_DIR"],
[Default DM run directory.])
@@ -672,7 +731,7 @@ AC_DEFINE_UNQUOTED(DEFAULT_DM_RUN_DIR, ["$DEFAULT_DM_RUN_DIR"],
AH_TEMPLATE(DEFAULT_RUN_DIR, [Name of default LVM run directory.])
AC_ARG_WITH(default-run-dir,
AC_HELP_STRING([--with-default-run-dir=RUN_DIR],
[default LVM run directory [autodetect_run_dir/lvm]]),
[Default LVM run directory. [autodetect_run_dir/lvm]]),
DEFAULT_RUN_DIR="$withval", DEFAULT_RUN_DIR="$RUN_DIR/lvm")
AC_DEFINE_UNQUOTED(DEFAULT_RUN_DIR, ["$DEFAULT_RUN_DIR"],
[Default LVM run directory.])
@@ -1009,6 +1068,20 @@ if test "$PROFILING" = yes; then
fi
fi
################################################################################
dnl -- Enable testing
AC_MSG_CHECKING(whether to enable unit testing)
AC_ARG_ENABLE(testing,
AC_HELP_STRING([--enable-testing],
[enable testing targets in the makefile]),
TESTING=$enableval, TESTING=no)
AC_MSG_RESULT($TESTING)
if test "$TESTING" = yes; then
pkg_config_init
PKG_CHECK_MODULES(CUNIT, cunit >= 2.0)
fi
################################################################################
dnl -- Set LVM2 testsuite data
TESTSUITE_DATA='${datarootdir}/lvm2-testsuite'
@@ -1050,6 +1123,24 @@ if test "$DEVMAPPER" = yes; then
AC_DEFINE([DEVMAPPER_SUPPORT], 1, [Define to 1 to enable LVM2 device-mapper interaction.])
fi
################################################################################
dnl -- Disable aio
AC_MSG_CHECKING(whether to use aio)
AC_ARG_ENABLE(aio,
AC_HELP_STRING([--disable-aio],
[disable async i/o]),
AIO=$enableval)
AC_MSG_RESULT($AIO)
if test "$AIO" = yes; then
AC_CHECK_LIB(aio, io_setup,
[AC_DEFINE([AIO_SUPPORT], 1, [Define to 1 if aio is available.])
AIO_LIBS="-laio"
AIO_SUPPORT=yes],
[AIO_LIBS=
AIO_SUPPORT=no ])
fi
################################################################################
dnl -- Build lvmetad
AC_MSG_CHECKING(whether to build LVMetaD)
@@ -1384,7 +1475,7 @@ test "$APPLIB" = yes \
&& LVM2APP_LIB=-llvm2app \
|| LVM2APP_LIB=
AS_IF([test "$APPLIB"],
[AC_MSG_WARN([liblvm2app is deprecated. Use D-Bus API])])
[AC_MSG_WARN([Python bindings are deprecated. Use D-Bus API])])
################################################################################
dnl -- Enable cmdlib
@@ -1433,9 +1524,6 @@ if test "$PYTHON_BINDINGS" = yes; then
AC_MSG_ERROR([--enable-python-bindings is replaced by --enable-python2-bindings and --enable-python3-bindings])
fi
m4_define_default([_AM_PYTHON_INTERPRETER_LIST],[ python3 python2 python dnl
python3.9 python3.8 python3.7 python3.6 python3.5 python3.4 python3.3 python3.2 python3.1 python3.0 dnl
python2.7 python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0 ])
if test "$PYTHON2_BINDINGS" = yes; then
AM_PATH_PYTHON([2])
AC_PATH_TOOL(PYTHON2, python2)
@@ -1448,7 +1536,7 @@ if test "$PYTHON2_BINDINGS" = yes; then
PYTHON2DIR=$pythondir
PYTHON_BINDINGS=yes
fi
if test "$PYTHON3_BINDINGS" = yes -o "$BUILD_LVMDBUSD" = yes; then
unset PYTHON PYTHON_CONFIG
unset am_cv_pathless_PYTHON ac_cv_path_PYTHON am_cv_python_platform
@@ -1462,7 +1550,7 @@ if test "$PYTHON3_BINDINGS" = yes -o "$BUILD_LVMDBUSD" = yes; then
PYTHON3_INCDIRS=`"$PYTHON3_CONFIG" --includes`
PYTHON3_LIBDIRS=`"$PYTHON3_CONFIG" --libs`
PYTHON3DIR=$pythondir
test "$PYTHON3_BINDINGS" = yes && PYTHON_BINDINGS=yes
PYTHON_BINDINGS=yes
fi
if test "$BUILD_LVMDBUSD" = yes; then
@@ -1548,6 +1636,8 @@ AC_CHECK_LIB(dl, dlopen,
################################################################################
dnl -- Check for shared/static conflicts
if [[ \( "$LVM1" = shared -o "$POOL" = shared -o "$CLUSTER" = shared \
-o "$SNAPSHOTS" = shared -o "$MIRRORS" = shared \
-o "$RAID" = shared -o "$CACHE" = shared \
\) -a "$STATIC_LINK" = yes ]]; then
AC_MSG_ERROR([Features cannot be 'shared' when building statically])
fi
@@ -1588,33 +1678,6 @@ if test "$SELINUX" = yes; then
HAVE_SELINUX=no ])
fi
################################################################################
dnl -- Check BLKZEROOUT support
AC_CACHE_CHECK([for BLKZEROOUT in sys/ioctl.h.],
[ac_cv_have_blkzeroout],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
[#include <sys/ioctl.h>
#include <linux/fs.h>
int bar(void) { return ioctl(0, BLKZEROOUT, 0); }]
)], [ac_cv_have_blkzeroout=yes], [ac_cv_have_blkzeroout=no])])
AC_ARG_ENABLE(blkzeroout,
AC_HELP_STRING([--disable-blkzeroout],
[do not use BLKZEROOUT for device zeroing]),
BLKZEROOUT=$enableval, BLKZEROOUT=yes)
AC_MSG_CHECKING(whether to use BLKZEROOUT for device zeroing)
if test "$BLKZEROOUT" = yes; then
AC_IF_YES(ac_cv_have_blkzeroout,
AC_DEFINE(HAVE_BLKZEROOUT, 1,
[Define if ioctl BLKZEROOUT can be used for device zeroing.]),
BLKZEROOUT=no)
fi
AC_MSG_RESULT($BLKZEROOUT)
################################################################################
dnl -- Check for realtime clock support
RT_LIBS=
@@ -1648,16 +1711,6 @@ AC_IF_YES(ac_cv_stat_st_ctim,
dnl -- Check for getopt
AC_CHECK_HEADERS(getopt.h, AC_DEFINE([HAVE_GETOPTLONG], 1, [Define to 1 if getopt_long is available.]))
################################################################################
dnl -- Check for editline
if test "$EDITLINE" == yes; then
PKG_CHECK_MODULES([EDITLINE], [libedit], [
AC_DEFINE([EDITLINE_SUPPORT], 1,
[Define to 1 to include the LVM editline shell.])], AC_MSG_ERROR(
[libedit could not be found which is required for the --enable-readline option.])
)
fi
################################################################################
dnl -- Check for readline (Shamelessly copied from parted 1.4.17)
if test "$READLINE" != no; then
@@ -1790,12 +1843,6 @@ fi
AC_MSG_CHECKING(whether to enable readline)
AC_MSG_RESULT($READLINE)
if test "$EDITLINE" = yes; then
AC_CHECK_HEADERS(editline/readline.h editline/history.h,,hard_bailout)
fi
AC_MSG_CHECKING(whether to enable editline)
AC_MSG_RESULT($EDITLINE)
if test "$BUILD_CMIRRORD" = yes; then
AC_CHECK_FUNCS(atexit,,hard_bailout)
fi
@@ -1843,7 +1890,7 @@ if test "$BUILD_DMFILEMAPD" = yes; then
fi
################################################################################
AC_PATH_TOOL(MODPROBE_CMD, modprobe, [], [$PATH_SBIN])
AC_PATH_TOOL(MODPROBE_CMD, modprobe)
if test -n "$MODPROBE_CMD"; then
AC_DEFINE_UNQUOTED([MODPROBE_CMD], ["$MODPROBE_CMD"], [The path to 'modprobe', if available.])
@@ -2019,6 +2066,7 @@ AC_SUBST(DEFAULT_CACHE_SUBDIR)
AC_SUBST(DEFAULT_DATA_ALIGNMENT)
AC_SUBST(DEFAULT_DM_RUN_DIR)
AC_SUBST(DEFAULT_LOCK_DIR)
AC_SUBST(DEFAULT_FALLBACK_TO_LVM1)
AC_SUBST(DEFAULT_MIRROR_SEGTYPE)
AC_SUBST(DEFAULT_PID_DIR)
AC_SUBST(DEFAULT_PROFILE_SUBDIR)
@@ -2032,9 +2080,11 @@ AC_SUBST(DEFAULT_USE_LVMETAD)
AC_SUBST(DEFAULT_USE_LVMPOLLD)
AC_SUBST(DEFAULT_USE_LVMLOCKD)
AC_SUBST(DEVMAPPER)
AC_SUBST(AIO)
AC_SUBST(DLM_CFLAGS)
AC_SUBST(DLM_LIBS)
AC_SUBST(DL_LIBS)
AC_SUBST(AIO_LIBS)
AC_SUBST(DMEVENTD_PATH)
AC_SUBST(DM_LIB_PATCHLEVEL)
AC_SUBST(ELDFLAGS)
@@ -2049,6 +2099,8 @@ AC_SUBST(JOBS)
AC_SUBST(LDDEPS)
AC_SUBST(LIBS)
AC_SUBST(LIB_SUFFIX)
AC_SUBST(LVM1)
AC_SUBST(LVM1_FALLBACK)
AC_SUBST(LVM_VERSION)
AC_SUBST(LVM_LIBAPI)
AC_SUBST(LVM_MAJOR)
@@ -2065,6 +2117,7 @@ AC_SUBST(OCF)
AC_SUBST(OCFDIR)
AC_SUBST(ODIRECT)
AC_SUBST(PKGCONFIG)
AC_SUBST(POOL)
AC_SUBST(M_LIBS)
AC_SUBST(PTHREAD_LIBS)
AC_SUBST(PYTHON2)
@@ -2080,9 +2133,9 @@ AC_SUBST(PYTHON2DIR)
AC_SUBST(PYTHON3DIR)
AC_SUBST(QUORUM_CFLAGS)
AC_SUBST(QUORUM_LIBS)
AC_SUBST(RAID)
AC_SUBST(RT_LIBS)
AC_SUBST(READLINE_LIBS)
AC_SUBST(EDITLINE_LIBS)
AC_SUBST(REPLICATORS)
AC_SUBST(SACKPT_CFLAGS)
AC_SUBST(SACKPT_LIBS)
@@ -2096,6 +2149,7 @@ AC_SUBST(SYSTEMD_LIBS)
AC_SUBST(SNAPSHOTS)
AC_SUBST(STATICDIR)
AC_SUBST(STATIC_LINK)
AC_SUBST(TESTING)
AC_SUBST(TESTSUITE_DATA)
AC_SUBST(THIN)
AC_SUBST(THIN_CHECK_CMD)
@@ -2152,17 +2206,12 @@ daemons/dmeventd/plugins/raid/Makefile
daemons/dmeventd/plugins/mirror/Makefile
daemons/dmeventd/plugins/snapshot/Makefile
daemons/dmeventd/plugins/thin/Makefile
daemons/dmeventd/plugins/vdo/Makefile
daemons/dmfilemapd/Makefile
daemons/lvmdbusd/Makefile
daemons/lvmdbusd/lvmdbusd
daemons/lvmdbusd/lvmdb.py
daemons/lvmdbusd/lvm_shell_proxy.py
daemons/lvmdbusd/path.py
daemons/lvmetad/Makefile
daemons/lvmpolld/Makefile
daemons/lvmlockd/Makefile
device_mapper/Makefile
conf/Makefile
conf/example.conf
conf/lvmlocal.conf
@@ -2171,8 +2220,15 @@ conf/metadata_profile_template.profile
include/.symlinks
include/Makefile
lib/Makefile
lib/format1/Makefile
lib/format_pool/Makefile
lib/locking/Makefile
lib/mirror/Makefile
include/lvm-version.h
lib/raid/Makefile
lib/snapshot/Makefile
lib/thin/Makefile
lib/cache_segtype/Makefile
libdaemon/Makefile
libdaemon/client/Makefile
libdaemon/server/Makefile
@@ -2213,10 +2269,12 @@ scripts/lvmdump.sh
scripts/Makefile
test/Makefile
test/api/Makefile
test/api/python_lvm_unit.py
test/unit/Makefile
tools/Makefile
udev/Makefile
unit-tests/datastruct/Makefile
unit-tests/regex/Makefile
unit-tests/mm/Makefile
])
AC_OUTPUT


@@ -46,7 +46,6 @@ const char *find_config_tree_str(struct cmd_context *cmd, int id, struct profile
return "STRING";
}
/*
struct logical_volume *origin_from_cow(const struct logical_volume *lv)
{
if (lv)
@@ -54,7 +53,6 @@ struct logical_volume *origin_from_cow(const struct logical_volume *lv)
__coverity_panic__();
}
*/
/* simple_memccpy() from glibc */
void *memccpy(void *dest, const void *src, int c, size_t n)


@@ -74,9 +74,13 @@ TARGETS = \
include $(top_builddir)/make.tmpl
LIBS += $(LVMINTERNAL_LIBS) -ldevmapper $(PTHREAD_LIBS) -laio
LIBS += $(LVMINTERNAL_LIBS) -ldevmapper $(PTHREAD_LIBS)
CFLAGS += -fno-strict-aliasing $(EXTRA_EXEC_CFLAGS)
ifeq ("@AIO@", "yes")
LIBS += $(AIO_LIBS)
endif
INSTALL_TARGETS = \
install_clvmd


@@ -108,6 +108,7 @@ int do_command(struct local_client *client, struct clvm_header *msg, int msglen,
lock_flags = args[1];
lockname = &args[2];
/* Check to see if the VG is in use by LVM1 */
status = do_check_lvm1(lockname);
do_lock_vg(lock_cmd, lock_flags, lockname);
break;


@@ -887,8 +887,7 @@ static void main_loop(int cmd_timeout)
sigemptyset(&ss);
sigaddset(&ss, SIGINT);
sigaddset(&ss, SIGTERM);
if (pthread_sigmask(SIG_UNBLOCK, &ss, NULL))
log_warn("WARNING: Failed to unblock SIGCHLD.");
pthread_sigmask(SIG_UNBLOCK, &ss, NULL);
/* Main loop */
while (!quit) {
fd_set in;
@@ -1732,12 +1731,11 @@ static __attribute__ ((noreturn)) void *pre_and_post_thread(void *arg)
SIGUSR2 (kills subthreads) */
sigemptyset(&ss);
sigaddset(&ss, SIGUSR1);
if (pthread_sigmask(SIG_BLOCK, &ss, NULL))
log_warn("WARNING: Failed to block SIGUSR1.");
pthread_sigmask(SIG_BLOCK, &ss, NULL);
sigdelset(&ss, SIGUSR1);
sigaddset(&ss, SIGUSR2);
if (pthread_sigmask(SIG_UNBLOCK, &ss, NULL))
log_warn("WARNING: Failed to unblock SIGUSR2.");
pthread_sigmask(SIG_UNBLOCK, &ss, NULL);
/* Loop around doing PRE and POST functions until the client goes away */
while (!client->bits.localsock.finished) {
@@ -2001,9 +1999,6 @@ static int send_message(void *buf, int msglen, const char *csid, int fd,
return clops->cluster_send_message(buf, msglen, csid, errtext);
}
if (fd < 0)
return 0;
/* Make sure it all goes */
for (ptr = 0; ptr < msglen;) {
if ((len = write(fd, (char*)buf + ptr, msglen - ptr)) <= 0) {


@@ -639,6 +639,16 @@ int post_lock_lv(unsigned char command, unsigned char lock_flags,
return 0;
}
/* Check if a VG is in use by LVM1 so we don't stomp on it */
int do_check_lvm1(const char *vgname)
{
int status;
status = check_lvm1_vg_inactive(cmd, vgname);
return status == 1 ? 0 : EBUSY;
}
int do_refresh_cache(void)
{
DEBUGLOG("Refreshing context\n");
@@ -651,9 +661,12 @@ int do_refresh_cache(void)
return -1;
}
cmd->use_aio = 0;
cmd->use_scan_cache = 0;
init_full_scan_done(0);
init_ignore_suspended_devices(1);
lvmcache_force_next_label_scan();
lvmcache_label_scan(cmd);
label_scan_destroy(cmd); /* destroys bcache (to close devs), keeps lvmcache */
dm_pool_empty(cmd->mem);
pthread_mutex_unlock(&lvm_lock);
@@ -796,7 +809,8 @@ static void lvm2_log_fn(int level, const char *file, int line, int dm_errno,
if (level != _LOG_ERR && level != _LOG_FATAL)
return;
(void) dm_strncpy(last_error, message, sizeof(last_error));
strncpy(last_error, message, sizeof(last_error));
last_error[sizeof(last_error)-1] = '\0';
}
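The pair of variants above differs only in how the bounded copy of the log message is terminated. As a sketch (assuming nothing beyond <string.h>; this helper is illustrative and is not libdevmapper's actual dm_strncpy definition), the hand-written strncpy-plus-terminator is equivalent to a bounded copy that always NUL-terminates and can report truncation:

static int copy_bounded(char *dst, const char *src, size_t n)
{
	if (!n)
		return 0;
	strncpy(dst, src, n);		/* may leave dst unterminated */
	if (dst[n - 1]) {		/* src did not fit in n bytes */
		dst[n - 1] = '\0';
		return 0;		/* truncated */
	}
	return 1;
}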
/* This checks some basic cluster-LVM configuration stuff */
@@ -832,7 +846,7 @@ void lvm_do_backup(const char *vgname)
pthread_mutex_lock(&lvm_lock);
vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, 0, 0, WARN_PV_READ, &consistent);
vg = vg_read_internal(cmd, vgname, NULL /*vgid*/, WARN_PV_READ, &consistent);
if (vg && consistent)
check_current_backup(vg);
@@ -908,6 +922,8 @@ int init_clvm(struct dm_hash_table *excl_uuid)
/* Check lvm.conf is setup for cluster-LVM */
check_config();
init_ignore_suspended_devices(1);
cmd->use_aio = 0;
cmd->use_scan_cache = 0;
/* Trap log messages so we can pass them back to the user */
init_log_fn(lvm2_log_fn);


@@ -25,6 +25,7 @@ extern int do_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
extern const char *do_lock_query(char *resource);
extern int post_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
char *resource);
extern int do_check_lvm1(const char *vgname);
extern int do_refresh_cache(void);
extern int init_clvm(struct dm_hash_table *excl_uuid);
extern void destroy_lvm(void);


@@ -166,9 +166,6 @@ int cluster_send(struct clog_request *rq)
{
int r;
int found = 0;
#if CMIRROR_HAS_CHECKPOINT
int count = 0;
#endif
struct iovec iov;
struct clog_cpg *entry;
@@ -206,6 +203,8 @@ int cluster_send(struct clog_request *rq)
#if CMIRROR_HAS_CHECKPOINT
do {
int count = 0;
r = cpg_mcast_joined(entry->handle, CPG_TYPE_AGREED, &iov, 1);
if (r != SA_AIS_ERR_TRY_AGAIN)
break;
@@ -1631,7 +1630,7 @@ int create_cluster_cpg(char *uuid, uint64_t luid)
size = ((strlen(uuid) + 1) > CPG_MAX_NAME_LENGTH) ?
CPG_MAX_NAME_LENGTH : (strlen(uuid) + 1);
(void) dm_strncpy(new->name.value, uuid, size);
strncpy(new->name.value, uuid, size);
new->name.length = (uint32_t)size;
new->luid = luid;


@@ -451,19 +451,15 @@ static int _clog_ctr(char *uuid, uint64_t luid,
lc->skip_bit_warning = region_count;
lc->disk_fd = -1;
lc->log_dev_failed = 0;
if (!dm_strncpy(lc->uuid, uuid, DM_UUID_LEN)) {
LOG_ERROR("Cannot use too long UUID %s.", uuid);
r = -EINVAL;
goto fail;
}
strncpy(lc->uuid, uuid, DM_UUID_LEN);
lc->luid = luid;
if (get_log(lc->uuid, lc->luid) ||
get_pending_log(lc->uuid, lc->luid)) {
LOG_ERROR("[%s/%" PRIu64 "u] Log already exists, unable to create.",
SHORT_UUID(lc->uuid), lc->luid);
r = -EINVAL;
goto fail;
dm_free(lc);
return -EINVAL;
}
dm_list_init(&lc->mark_list);


@@ -14,6 +14,7 @@
#define _LVM_CLOG_LOGGING_H
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include "configure.h"
#include <stdio.h>


@@ -752,9 +752,8 @@ static void _exit_timeout(void *unused __attribute__((unused)))
static void *_timeout_thread(void *unused __attribute__((unused)))
{
struct thread_status *thread;
struct timespec timeout, real_time;
struct timespec timeout;
time_t curr_time;
int ret;
DEBUGLOG("Timeout thread starting.");
pthread_cleanup_push(_exit_timeout, NULL);
@@ -763,16 +762,7 @@ static void *_timeout_thread(void *unused __attribute__((unused)))
while (!dm_list_empty(&_timeout_registry)) {
timeout.tv_sec = 0;
timeout.tv_nsec = 0;
#ifndef HAVE_REALTIME
curr_time = time(NULL);
#else
if (clock_gettime(CLOCK_REALTIME, &real_time)) {
log_error("Failed to read clock_gettime().");
break;
}
/* 10ms back to the future */
curr_time = real_time.tv_sec + ((real_time.tv_nsec > (1000000000 - 10000000)) ? 1 : 0);
#endif
dm_list_iterate_items_gen(thread, &_timeout_registry, timeout_list) {
if (thread->next_time <= curr_time) {
@@ -785,10 +775,7 @@ static void *_timeout_thread(void *unused __attribute__((unused)))
} else {
DEBUGLOG("Sending SIGALRM to Thr %x for timeout.",
(int) thread->thread);
ret = pthread_kill(thread->thread, SIGALRM);
if (ret && (ret != ESRCH))
log_error("Unable to wakeup Thr %x for timeout: %s.",
(int) thread->thread, strerror(ret));
pthread_kill(thread->thread, SIGALRM);
}
_unlock_mutex();
}
@@ -2030,8 +2017,8 @@ static int _reinstate_registrations(struct dm_event_fifos *fifos)
static void _restart_dmeventd(void)
{
struct dm_event_fifos fifos = {
.client = -1,
.server = -1,
.client = -1,
/* FIXME Make these either configurable or depend directly on dmeventd_path */
.client_path = DM_EVENT_FIFO_CLIENT,
.server_path = DM_EVENT_FIFO_SERVER


@@ -605,8 +605,8 @@ static int _do_event(int cmd, char *dmeventd_path, struct dm_event_daemon_messag
{
int ret;
struct dm_event_fifos fifos = {
.client = -1,
.server = -1,
.client = -1,
/* FIXME Make these either configurable or depend directly on dmeventd_path */
.client_path = DM_EVENT_FIFO_CLIENT,
.server_path = DM_EVENT_FIFO_SERVER
@@ -645,7 +645,6 @@ int dm_event_register_handler(const struct dm_event_handler *dmevh)
uuid = dm_task_get_uuid(dmt);
if (!strstr(dmevh->dso, "libdevmapper-event-lvm2thin.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2vdo.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2snapshot.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2mirror.so") &&
!strstr(dmevh->dso, "libdevmapper-event-lvm2raid.so"))
@@ -755,10 +754,11 @@ int dm_event_get_registered_device(struct dm_event_handler *dmevh, int next)
uuid = dm_task_get_uuid(dmt);
/* FIXME Distinguish errors connecting to daemon */
if ((ret = _do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0))) {
if (_do_event(next ? DM_EVENT_CMD_GET_NEXT_REGISTERED_DEVICE :
DM_EVENT_CMD_GET_REGISTERED_DEVICE, dmevh->dmeventd_path,
&msg, dmevh->dso, uuid, dmevh->mask, 0)) {
log_debug("%s: device not registered.", dm_task_get_name(dmt));
ret = -ENOENT;
goto fail;
}


@@ -1,6 +1,6 @@
#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2004-2005, 2011 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -16,7 +16,27 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SUBDIRS += lvm2 snapshot raid thin mirror vdo
SUBDIRS += lvm2
ifneq ("@MIRRORS@", "none")
SUBDIRS += mirror
endif
ifneq ("@SNAPSHOTS@", "none")
SUBDIRS += snapshot
endif
ifneq ("@RAID@", "none")
SUBDIRS += raid
endif
ifneq ("@THIN@", "none")
SUBDIRS += thin
endif
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS = lvm2 mirror snapshot raid thin
endif
include $(top_builddir)/make.tmpl
@@ -24,4 +44,3 @@ snapshot: lvm2
mirror: lvm2
raid: lvm2
thin: lvm2
vdo: lvm2


@@ -31,13 +31,6 @@ static pthread_mutex_t _register_mutex = PTHREAD_MUTEX_INITIALIZER;
static int _register_count = 0;
static struct dm_pool *_mem_pool = NULL;
static void *_lvm_handle = NULL;
static DM_LIST_INIT(_env_registry);
struct env_data {
struct dm_list list;
const char *cmd;
const char *data;
};
DM_EVENT_LOG_FN("#lvm")
@@ -71,7 +64,7 @@ int dmeventd_lvm2_init(void)
if (!_lvm_handle) {
lvm2_log_fn(_lvm2_print_log);
if (!(_lvm_handle = lvm2_init_threaded()))
if (!(_lvm_handle = lvm2_init()))
goto out;
/*
@@ -107,7 +100,6 @@ void dmeventd_lvm2_exit(void)
lvm2_run(_lvm_handle, "_memlock_dec");
dm_pool_destroy(_mem_pool);
_mem_pool = NULL;
dm_list_init(&_env_registry);
lvm2_exit(_lvm_handle);
_lvm_handle = NULL;
log_debug("lvm plugin exited.");
@@ -132,8 +124,6 @@ int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
static char _internal_prefix[] = "_dmeventd_";
char *vg = NULL, *lv = NULL, *layer;
int r;
struct env_data *env_data;
const char *env = NULL;
if (!dm_split_lvm_name(mem, device, &vg, &lv, &layer)) {
log_error("Unable to determine VG name from %s.",
@@ -147,36 +137,18 @@ int dmeventd_lvm2_command(struct dm_pool *mem, char *buffer, size_t size,
*layer = '\0';
if (!strncmp(cmd, _internal_prefix, sizeof(_internal_prefix) - 1)) {
/* check if ENVVAR wasn't already resolved */
dm_list_iterate_items(env_data, &_env_registry)
if (!strcmp(cmd, env_data->cmd)) {
env = env_data->data;
break;
}
dmeventd_lvm2_lock();
/* output of internal command passed via env var */
if (!dmeventd_lvm2_run(cmd))
cmd = NULL;
else if ((cmd = getenv(cmd)))
cmd = dm_pool_strdup(mem, cmd); /* copy with lock */
dmeventd_lvm2_unlock();
if (!env) {
/* run lvm2 command to find out setting value */
dmeventd_lvm2_lock();
if (!dmeventd_lvm2_run(cmd) ||
!(env = getenv(cmd))) {
dmeventd_lvm2_unlock();
log_error("Unable to find configured command.");
return 0;
}
/* output of internal command passed via env var */
env = dm_pool_strdup(_mem_pool, env); /* copy with lock */
dmeventd_lvm2_unlock();
if (!env ||
!(env_data = dm_pool_zalloc(_mem_pool, sizeof(*env_data))) ||
!(env_data->cmd = dm_pool_strdup(_mem_pool, cmd))) {
log_error("Unable to allocate env memory.");
return 0;
}
env_data->data = env;
/* add to ENVVAR registry */
dm_list_add(&_env_registry, &env_data->list);
if (!cmd) {
log_error("Unable to find configured command.");
return 0;
}
cmd = env;
}
r = dm_snprintf(buffer, size, "%s %s/%s", cmd, vg, lv);


@@ -76,17 +76,14 @@ static int _process_raid_event(struct dso_state *state, char *params, const char
}
if (dead) {
/*
* Use the first event to run a repair, ignoring any additional ones.
*
* We presume lvconvert does the pre-repair
* checks, to avoid bloat in this plugin.
*/
if (!state->warned && status->insync_regions < status->total_regions) {
state->warned = 1;
log_warn("WARNING: waiting for resynchronization to finish "
"before initiating repair on RAID device %s.", device);
/* Fall through to allow lvconvert to run. */
if (status->insync_regions < status->total_regions) {
if (!state->warned) {
state->warned = 1;
log_warn("WARNING: waiting for resynchronization to finish "
"before initiating repair on RAID device %s.", device);
}
goto out; /* Not yet done syncing with accessible devices */
}
if (state->failed)


@@ -62,25 +62,27 @@ struct dso_state {
DM_EVENT_LOG_FN("thin")
#define UUID_PREFIX "LVM-"
static int _run_command(struct dso_state *state)
{
char val[16];
char val[3][36];
char *env[] = { val[0], val[1], val[2], NULL };
int i;
/* Mark for a possible lvm2 command that we are running from dmeventd;
* lvm2 will not try to talk back to dmeventd while processing it */
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
(void) dm_snprintf(val[0], sizeof(val[0]), "LVM_RUN_BY_DMEVENTD=1");
if (state->data_percent) {
/* Prepare some known data to env vars for easy use */
if (dm_snprintf(val, sizeof(val), "%d",
state->data_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_DATA", val, 1);
if (dm_snprintf(val, sizeof(val), "%d",
state->metadata_percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_THIN_POOL_METADATA", val, 1);
(void) dm_snprintf(val[1], sizeof(val[1]), "DMEVENTD_THIN_POOL_DATA=%d",
state->data_percent / DM_PERCENT_1);
(void) dm_snprintf(val[2], sizeof(val[2]), "DMEVENTD_THIN_POOL_METADATA=%d",
state->metadata_percent / DM_PERCENT_1);
} else {
/* For an error event it is up to the user to check the status and decide */
env[1] = NULL;
log_debug("Error event processing.");
}
@@ -95,7 +97,7 @@ static int _run_command(struct dso_state *state)
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execvp(state->argv[0], state->argv);
execve(state->argv[0], state->argv, env);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
@@ -286,7 +288,7 @@ void process_event(struct dm_task *dmt,
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
goto out;
return;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;


@@ -1,3 +0,0 @@
process_event
register_device
unregister_device


@@ -1,419 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "dmeventd_lvm.h"
#include "libdevmapper-event.h"
#include <sys/wait.h>
#include <stdarg.h>
/* First warning when VDO pool is 80% full. */
#define WARNING_THRESH (DM_PERCENT_1 * 80)
/* Run a check every 5%. */
#define CHECK_STEP (DM_PERCENT_1 * 5)
/* Do not bother checking if the VDO pool is less than 50% full. */
#define CHECK_MINIMUM (DM_PERCENT_1 * 50)
#define MAX_FAILS (256) /* ~42 mins between cmd call retry with 10s delay */
#define VDO_DEBUG 0
struct dso_state {
struct dm_pool *mem;
int percent_check;
int percent;
uint64_t known_data_size;
unsigned fails;
unsigned max_fails;
int restore_sigset;
sigset_t old_sigset;
pid_t pid;
char *argv[3];
const char *cmd_str;
const char *name;
};
struct vdo_status {
uint64_t used_blocks;
uint64_t total_blocks;
};
static int _vdo_status_parse(const char *params, struct vdo_status *status)
{
if (sscanf(params, "%*s %*s %*s %*s %*s %" PRIu64 " %" PRIu64,
&status->used_blocks,
&status->total_blocks) < 2) {
log_error("Failed to parse vdo params: %s.", params);
return 0;
}
return 1;
}
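To make the sscanf format concrete, here is an illustrative call; the params string below is a made-up sample in the shape the parser expects (five ignored fields, then used and total block counts), not real dm status output:

static void _parse_demo(void)
{
	struct vdo_status s;
	const char *params = "4096 online recovering - - 158380 4194304";

	if (_vdo_status_parse(params, &s))
		log_debug("used %" PRIu64 " of %" PRIu64 " blocks.",
			  s.used_blocks, s.total_blocks);
}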
DM_EVENT_LOG_FN("vdo")
static int _run_command(struct dso_state *state)
{
char val[16];
int i;
/* Mark for a possible lvm2 command that we are running from dmeventd;
* lvm2 will not try to talk back to dmeventd while processing it */
(void) setenv("LVM_RUN_BY_DMEVENTD", "1", 1);
if (state->percent) {
/* Prepare some known data to env vars for easy use */
if (dm_snprintf(val, sizeof(val), "%d",
state->percent / DM_PERCENT_1) != -1)
(void) setenv("DMEVENTD_VDO_POOL", val, 1);
} else {
/* For an error event it is up to the user to check the status and decide */
log_debug("Error event processing.");
}
log_verbose("Executing command: %s", state->cmd_str);
/* TODO:
* Support a parallel run of 'task' and its waitpid() maintenance.
* ATM we can't handle signaling of SIGALRM,
* as signaling is not allowed while 'process_event()' is running.
*/
if (!(state->pid = fork())) {
/* child */
(void) close(0);
for (i = 3; i < 255; ++i) (void) close(i);
execvp(state->argv[0], state->argv);
_exit(errno);
} else if (state->pid == -1) {
log_error("Can't fork command %s.", state->cmd_str);
state->fails = 1;
return 0;
}
return 1;
}
static int _use_policy(struct dm_task *dmt, struct dso_state *state)
{
#if VDO_DEBUG
log_debug("dmeventd executes: %s.", state->cmd_str);
#endif
if (state->argv[0])
return _run_command(state);
if (!dmeventd_lvm2_run_with_lock(state->cmd_str)) {
log_error("Failed command for %s.", dm_task_get_name(dmt));
state->fails = 1;
return 0;
}
state->fails = 0;
return 1;
}
/* Check if the executed command has finished.
* Only one command may run at a time. */
static int _wait_for_pid(struct dso_state *state)
{
int status = 0;
if (state->pid == -1)
return 1;
if (!waitpid(state->pid, &status, WNOHANG))
return 0;
/* Wait for finish */
if (WIFEXITED(status)) {
log_verbose("Child %d exited with status %d.",
state->pid, WEXITSTATUS(status));
state->fails = WEXITSTATUS(status) ? 1 : 0;
} else {
if (WIFSIGNALED(status))
log_verbose("Child %d was terminated with status %d.",
state->pid, WTERMSIG(status));
state->fails = 1;
}
state->pid = -1;
return 1;
}
void process_event(struct dm_task *dmt,
enum dm_event_mask event __attribute__((unused)),
void **user)
{
const char *device = dm_task_get_name(dmt);
struct dso_state *state = *user;
void *next = NULL;
uint64_t start, length;
char *target_type = NULL;
char *params;
int needs_policy = 0;
struct dm_task *new_dmt = NULL;
struct vdo_status status;
#if VDO_DEBUG
log_debug("Watch for VDO %s:%.2f%%.", state->name,
dm_percent_to_round_float(state->percent_check, 2));
#endif
if (!_wait_for_pid(state)) {
log_warn("WARNING: Skipping event, child %d is still running (%s).",
state->pid, state->cmd_str);
return;
}
if (event & DM_EVENT_DEVICE_ERROR) {
#if VDO_DEBUG
log_debug("VDO event error.");
#endif
/* Error -> no need to check the threshold, do an instant resize */
state->percent = 0;
if (_use_policy(dmt, state))
goto out;
stack;
if (!(new_dmt = dm_task_create(DM_DEVICE_STATUS)))
goto_out;
if (!dm_task_set_uuid(new_dmt, dm_task_get_uuid(dmt)))
goto_out;
/* Non-blocking status read */
if (!dm_task_no_flush(new_dmt))
log_warn("WARNING: Can't set no_flush for dm status.");
if (!dm_task_run(new_dmt))
goto_out;
dmt = new_dmt;
}
dm_get_next_target(dmt, next, &start, &length, &target_type, &params);
if (!target_type || (strcmp(target_type, "vdo") != 0)) {
log_error("Invalid target type.");
goto out;
}
if (!_vdo_status_parse(params, &status)) {
log_error("Failed to parse status.");
goto out;
}
state->percent = dm_make_percent(status.used_blocks,
status.total_blocks);
#if VDO_DEBUG
log_debug("VDO %s status %.2f%% " FMTu64 "/" FMTu64 ".",
state->name, dm_percent_to_round_float(state->percent, 2),
status.used_blocks, status.total_blocks);
#endif
/* VDO pool size has changed. Reset the threshold. */
if (state->known_data_size != status.total_blocks) {
state->percent_check = CHECK_MINIMUM;
state->known_data_size = status.total_blocks;
state->fails = 0;
}
/*
 * Trigger the action when a threshold boundary is exceeded.
 * Report the 80% threshold warning when the pool is used above 80%.
 * Only 100% is an exception, as it cannot be surpassed, so the policy
 * action is called for: >50%, >55% ... >95%, 100%
 */
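/* Example of the stepping below (illustrative): with usage at 62%,
 * percent_check becomes (62 / 5 + 1) * 5 = 65%, so the next action
 * triggers only once usage exceeds 65%. */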
if ((state->percent > WARNING_THRESH) &&
(state->percent > state->percent_check))
log_warn("WARNING: VDO %s %s is now %.2f%% full.",
state->name, device,
dm_percent_to_round_float(state->percent, 2));
if (state->percent > CHECK_MINIMUM) {
/* Run the action when usage has risen by more than CHECK_STEP since the last check */
if (state->percent > state->percent_check)
needs_policy = 1;
state->percent_check = (state->percent / CHECK_STEP + 1) * CHECK_STEP;
if (state->percent_check == DM_PERCENT_100)
state->percent_check--; /* Can't get bigger than 100% */
} else
state->percent_check = CHECK_MINIMUM;
/* Reduce the number of _use_policy() calls by a power-of-2 factor until the MAX_FAILS frequency is reached.
 * This avoids an excessive number of error retries, yet still logs some status messages regularly.
 * E.g. a PV could have been pvmoved and the VG/LV locked for a while...
 */
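/* Example (illustrative): with max_fails at 4, four failing events are
 * merely logged as postponed; the fifth sets needs_policy again (the
 * actual retry is currently disabled by the 'if (0 && ...)' below)
 * and doubles max_fails to 8. */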
if (state->fails) {
if (state->fails++ <= state->max_fails) {
log_debug("Postponing frequently failing policy (%u <= %u).",
state->fails - 1, state->max_fails);
goto out;
}
if (state->max_fails < MAX_FAILS)
state->max_fails <<= 1;
state->fails = needs_policy = 1; /* Retry failing command */
} else
state->max_fails = 1; /* Reset on success */
/* FIXME: ATM nothing can be done; drop the 0 once it becomes useful */
if (0 && needs_policy)
_use_policy(dmt, state);
out:
if (new_dmt)
dm_task_destroy(new_dmt);
}
/* Handle SIGCHLD for a thread */
static void _sig_child(int signum __attribute__((unused)))
{
/* empty SIG_IGN */;
}
/* Set up a handler for SIGCHLD when executing an external command
 * to get a quick 'waitpid()' reaction.
 * It will interrupt a syscall just like SIGALRM and
 * invoke process_event().
 */
static void _init_thread_signals(struct dso_state *state)
{
struct sigaction act = { .sa_handler = _sig_child };
sigset_t my_sigset;
sigemptyset(&my_sigset);
if (sigaction(SIGCHLD, &act, NULL))
log_warn("WARNING: Failed to set SIGCHLD action.");
else if (sigaddset(&my_sigset, SIGCHLD))
log_warn("WARNING: Failed to add SIGCHLD to set.");
else if (pthread_sigmask(SIG_UNBLOCK, &my_sigset, &state->old_sigset))
log_warn("WARNING: Failed to unblock SIGCHLD.");
else
state->restore_sigset = 1;
}
static void _restore_thread_signals(struct dso_state *state)
{
if (state->restore_sigset &&
pthread_sigmask(SIG_SETMASK, &state->old_sigset, NULL))
log_warn("WARNING: Failed to block SIGCHLD.");
}
int register_device(const char *device,
const char *uuid,
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state;
const char *cmd;
char *str;
char cmd_str[PATH_MAX + 128 + 2]; /* cmd ' ' vg/lv \0 */
const char *name = "pool";
if (!dmeventd_lvm2_init_with_pool("vdo_pool_state", state))
goto_bad;
state->cmd_str = "";
/* Search for command for LVM- prefixed devices only */
cmd = (strncmp(uuid, "LVM-", 4) == 0) ? "_dmeventd_vdo_command" : "";
if (!dmeventd_lvm2_command(state->mem, cmd_str, sizeof(cmd_str), cmd, device))
goto_bad;
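/* Two command forms are handled below (sketch of the logic that follows):
 *   "lvm <cmd> <vg/lv>"  - stripped of the "lvm " prefix and later run
 *                          in-process via dmeventd_lvm2_run_with_lock(),
 *   "/abs/path <vg/lv>"  - split into argv[] and forked via _run_command(). */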
if (strncmp(cmd_str, "lvm ", 4) == 0) {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str + 4))) {
log_error("Failed to copy lvm VDO command.");
goto bad;
}
} else if (cmd_str[0] == '/') {
if (!(state->cmd_str = dm_pool_strdup(state->mem, cmd_str))) {
log_error("Failed to copy VDO command.");
goto bad;
}
/* Find last space before 'vg/lv' */
if (!(str = strrchr(state->cmd_str, ' ')))
goto inval;
if (!(state->argv[0] = dm_pool_strndup(state->mem, state->cmd_str,
str - state->cmd_str))) {
log_error("Failed to copy command.");
goto bad;
}
state->argv[1] = str + 1; /* 1 argument - vg/lv */
_init_thread_signals(state);
} else if (cmd[0] == 0) {
state->name = "volume"; /* What to use with 'others?' */
} else/* Unuspported command format */
goto inval;
state->pid = -1;
state->name = name;
*user = state;
log_info("Monitoring VDO %s %s.", name, device);
return 1;
inval:
log_error("Invalid command for monitoring: %s.", cmd_str);
bad:
log_error("Failed to monitor VDO %s %s.", name, device);
if (state)
dmeventd_lvm2_exit_with_pool(state);
return 0;
}
int unregister_device(const char *device,
const char *uuid __attribute__((unused)),
int major __attribute__((unused)),
int minor __attribute__((unused)),
void **user)
{
struct dso_state *state = *user;
const char *name = state->name;
int i;
for (i = 0; !_wait_for_pid(state) && (i < 6); ++i) {
if (i == 0)
/* Give it 2 seconds, then try to terminate & kill it */
log_verbose("Child %d still not finished (%s) waiting.",
state->pid, state->cmd_str);
else if (i == 3) {
log_warn("WARNING: Terminating child %d.", state->pid);
kill(state->pid, SIGINT);
kill(state->pid, SIGTERM);
} else if (i == 5) {
log_warn("WARNING: Killing child %d.", state->pid);
kill(state->pid, SIGKILL);
}
sleep(1);
}
if (state->pid != -1)
log_warn("WARNING: Cannot kill child %d!", state->pid);
_restore_thread_signals(state);
dmeventd_lvm2_exit_with_pool(state);
log_info("No longer monitoring VDO %s %s.", name, device);
return 1;
}


@@ -33,7 +33,7 @@
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
# define MKDEV(x,y) makedev((dev_t)(x),(dev_t)(y))
# define MKDEV(x,y) makedev((x),(y))
#endif
/* limit to two updates/sec */
@@ -629,10 +629,10 @@ check_unlinked:
static int _daemonise(struct filemap_monitor *fm)
{
pid_t pid = 0;
pid_t pid = 0, sid;
int fd;
if (!setsid()) {
if (!(sid = setsid())) {
_early_log("setsid failed.");
return 0;
}
@@ -802,7 +802,7 @@ bad:
return 1;
}
static const char * const _mode_names[] = {
static const char * _mode_names[] = {
"inode",
"path"
};
@@ -827,10 +827,8 @@ int main(int argc, char **argv)
"mode=%s, path=%s", fm.fd, fm.group_id,
_mode_names[fm.mode], fm.path);
if (!_foreground && !_daemonise(&fm)) {
dm_free(fm.path);
if (!_foreground && !_daemonise(&fm))
return 1;
}
return _dmfilemapd(&fm);
}


@@ -1,4 +1 @@
path.py
lvmdbusd
lvmdb.py
lvm_shell_proxy.py


@@ -26,7 +26,9 @@ LVMDBUS_SRCDIR_FILES = \
__init__.py \
job.py \
loader.py \
lvmdb.py \
main.py \
lvm_shell_proxy.py \
lv.py \
manager.py \
objectmanager.py \
@@ -38,21 +40,14 @@ LVMDBUS_SRCDIR_FILES = \
vg.py
LVMDBUS_BUILDDIR_FILES = \
lvmdb.py \
lvm_shell_proxy.py \
path.py
LVMDBUSD = lvmdbusd
CLEAN_DIRS += __pycache__
LVMDBUSD = $(srcdir)/lvmdbusd
include $(top_builddir)/make.tmpl
.PHONY: install_lvmdbusd
all:
test -x $(LVMDBUSD) || chmod 755 $(LVMDBUSD)
install_lvmdbusd:
$(INSTALL_DIR) $(sbindir)
$(INSTALL_SCRIPT) $(LVMDBUSD) $(sbindir)
@@ -68,5 +63,4 @@ install_lvm2: install_lvmdbusd
install: install_lvm2
DISTCLEAN_TARGETS+= \
$(LVMDBUS_BUILDDIR_FILES) \
$(LVMDBUSD)
$(LVMDBUS_BUILDDIR_FILES)


@@ -232,6 +232,7 @@ class LvState(State):
@utils.dbus_property(LV_COMMON_INTERFACE, 'Attr', 's')
@utils.dbus_property(LV_COMMON_INTERFACE, 'DataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'SnapPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'DataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'MetaDataPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'CopyPercent', 'u')
@utils.dbus_property(LV_COMMON_INTERFACE, 'SyncPercent', 'u')
@@ -497,7 +498,7 @@ class Lv(LvCommon):
# it is a thin lv
if not dbo.IsThinVolume:
if optional_size == 0:
space = dbo.SizeBytes // 80
space = dbo.SizeBytes / 80
remainder = space % 512
optional_size = space + 512 - remainder


@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#


@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#


@@ -1,4 +1,4 @@
#!@PYTHON3@
#!/usr/bin/env python3
# Copyright (C) 2015-2016 Red Hat, Inc. All rights reserved.
#


@@ -22,6 +22,7 @@
#define LVMETAD_TOKEN_UPDATE_IN_PROGRESS "update in progress"
#define LVMETAD_DISABLE_REASON_DIRECT "DIRECT"
#define LVMETAD_DISABLE_REASON_LVM1 "LVM1"
#define LVMETAD_DISABLE_REASON_DUPLICATES "DUPLICATES"
#define LVMETAD_DISABLE_REASON_VGRESTORE "VGRESTORE"
#define LVMETAD_DISABLE_REASON_REPAIR "REPAIR"


@@ -200,12 +200,12 @@ struct vg_info {
#define GLFL_INVALID 0x00000001
#define GLFL_DISABLE 0x00000002
#define GLFL_DISABLE_REASON_DIRECT 0x00000004
/* 0x00000008 */
#define GLFL_DISABLE_REASON_LVM1 0x00000008
#define GLFL_DISABLE_REASON_DUPLICATES 0x00000010
#define GLFL_DISABLE_REASON_VGRESTORE 0x00000020
#define GLFL_DISABLE_REASON_REPAIR 0x00000040
#define GLFL_DISABLE_REASON_ALL (GLFL_DISABLE_REASON_DIRECT | GLFL_DISABLE_REASON_REPAIR | GLFL_DISABLE_REASON_DUPLICATES | GLFL_DISABLE_REASON_VGRESTORE)
#define GLFL_DISABLE_REASON_ALL (GLFL_DISABLE_REASON_DIRECT | GLFL_DISABLE_REASON_REPAIR | GLFL_DISABLE_REASON_LVM1 | GLFL_DISABLE_REASON_DUPLICATES | GLFL_DISABLE_REASON_VGRESTORE)
#define VGFL_INVALID 0x00000001
@@ -730,7 +730,7 @@ static response vg_lookup(lvmetad_state *s, request r)
if (!(res.cft->root = n = dm_config_create_node(res.cft, "response")))
goto nomem_un;
if (!(n->v = dm_config_create_value(res.cft)))
if (!(n->v = dm_config_create_value(cft)))
goto nomem_un;
n->parent = res.cft->root;
@@ -2369,6 +2369,8 @@ static response set_global_info(lvmetad_state *s, request r)
reason_flags |= GLFL_DISABLE_REASON_DIRECT;
if (strstr(reason, LVMETAD_DISABLE_REASON_REPAIR))
reason_flags |= GLFL_DISABLE_REASON_REPAIR;
if (strstr(reason, LVMETAD_DISABLE_REASON_LVM1))
reason_flags |= GLFL_DISABLE_REASON_LVM1;
if (strstr(reason, LVMETAD_DISABLE_REASON_DUPLICATES))
reason_flags |= GLFL_DISABLE_REASON_DUPLICATES;
if (strstr(reason, LVMETAD_DISABLE_REASON_VGRESTORE))
@@ -2419,17 +2421,21 @@ static response set_global_info(lvmetad_state *s, request r)
static response get_global_info(lvmetad_state *s, request r)
{
/* This buffer should be large enough to hold all the possible reasons. */
char reason[REASON_BUF_SIZE] = { 0 };
char reason[REASON_BUF_SIZE];
char flag_str[64];
int pid;
/* This buffer should be large enough to hold all the possible reasons. */
memset(reason, 0, sizeof(reason));
pid = (int)daemon_request_int(r, "pid", 0);
if (s->flags & GLFL_DISABLE) {
snprintf(reason, REASON_BUF_SIZE, "%s%s%s%s",
snprintf(reason, REASON_BUF_SIZE - 1, "%s%s%s%s%s",
(s->flags & GLFL_DISABLE_REASON_DIRECT) ? LVMETAD_DISABLE_REASON_DIRECT "," : "",
(s->flags & GLFL_DISABLE_REASON_REPAIR) ? LVMETAD_DISABLE_REASON_REPAIR "," : "",
(s->flags & GLFL_DISABLE_REASON_LVM1) ? LVMETAD_DISABLE_REASON_LVM1 "," : "",
(s->flags & GLFL_DISABLE_REASON_DUPLICATES) ? LVMETAD_DISABLE_REASON_DUPLICATES "," : "",
(s->flags & GLFL_DISABLE_REASON_VGRESTORE) ? LVMETAD_DISABLE_REASON_VGRESTORE "," : "");
}
@@ -2525,8 +2531,10 @@ inval:
info = dm_hash_lookup(s->vgid_to_info, uuid);
if (!info) {
if (!(info = dm_zalloc(sizeof(struct vg_info))))
info = malloc(sizeof(struct vg_info));
if (!info)
goto bad;
memset(info, 0, sizeof(struct vg_info));
if (!dm_hash_insert(s->vgid_to_info, uuid, (void*)info))
goto bad;
}
@@ -2669,7 +2677,6 @@ static response handler(daemon_state s, client_handle h, request r)
int pid;
int cache_lock = 0;
int info_lock = 0;
uint64_t timegap = 0;
rq = daemon_request_str(r, "request", "NONE");
token = daemon_request_str(r, "token", "NONE");
@@ -2698,8 +2705,9 @@ static response handler(daemon_state s, client_handle h, request r)
if (!prev_in_progress && this_in_progress) {
/* New update is starting (filter token is replaced by update token) */
(void) dm_strncpy(prev_token, state->token, sizeof(prev_token));
(void) dm_strncpy(state->token, token, sizeof(state->token));
memcpy(prev_token, state->token, 128);
strncpy(state->token, token, 128);
state->token[127] = 0;
state->update_begin = _monotonic_seconds();
state->update_timeout = update_timeout;
state->update_pid = pid;
@@ -2712,32 +2720,23 @@ static response handler(daemon_state s, client_handle h, request r)
state->update_cmd);
} else if (prev_in_progress && this_in_progress) {
timegap = _monotonic_seconds() - state->update_begin;
if (timegap < state->update_timeout) {
pthread_mutex_unlock(&state->token_lock);
return daemon_reply_simple("token_updating",
"expected = %s", state->token,
"update_pid = " FMTd64, (int64_t)state->update_pid,
"reason = %s", "another command has populated the cache",
NULL);
}
/* Current update is cancelled and replaced by a new update */
WARN(state, "token_update replacing pid %d begin %llu len %d cmd %s",
DEBUGLOG(state, "token_update replacing pid %d begin %llu len %d cmd %s",
state->update_pid,
(unsigned long long)state->update_begin,
(int)(timegap),
(int)(_monotonic_seconds() - state->update_begin),
state->update_cmd);
(void) dm_strncpy(prev_token, state->token, sizeof(prev_token));
(void) dm_strncpy(state->token, token, sizeof(state->token));
memcpy(prev_token, state->token, 128);
strncpy(state->token, token, 128);
state->token[127] = 0;
state->update_begin = _monotonic_seconds();
state->update_timeout = update_timeout;
state->update_pid = pid;
strncpy(state->update_cmd, cmd, CMD_NAME_SIZE - 1);
WARN(state, "token_update begin %llu timeout %d pid %d cmd %s",
DEBUGLOG(state, "token_update begin %llu timeout %d pid %d cmd %s",
(unsigned long long)state->update_begin,
state->update_timeout,
state->update_pid,
@@ -2748,7 +2747,7 @@ static response handler(daemon_state s, client_handle h, request r)
if (state->update_pid != pid) {
/* If a pid doing update was cancelled, ignore its token update at the end. */
WARN(state, "token_update ignored from cancelled update pid %d", pid);
DEBUGLOG(state, "token_update ignored from cancelled update pid %d", pid);
pthread_mutex_unlock(&state->token_lock);
return daemon_reply_simple("token_mismatch",
@@ -2759,12 +2758,13 @@ static response handler(daemon_state s, client_handle h, request r)
NULL);
}
WARN(state, "token_update end len %d pid %d new token %s",
DEBUGLOG(state, "token_update end len %d pid %d new token %s",
(int)(_monotonic_seconds() - state->update_begin),
state->update_pid, token);
(void) dm_strncpy(prev_token, state->token, sizeof(prev_token));
(void) dm_strncpy(state->token, token, sizeof(state->token));
memcpy(prev_token, state->token, 128);
strncpy(state->token, token, 128);
state->token[127] = 0;
state->update_begin = 0;
state->update_timeout = 0;
state->update_pid = 0;
@@ -2964,7 +2964,7 @@ static void usage(const char *prog, FILE *file)
int main(int argc, char *argv[])
{
signed char opt;
struct timespec timeout;
struct timeval timeout;
daemon_idle di = { .ptimeout = &timeout };
lvmetad_state ls = { .log_config = "" };
daemon_state s = {


@@ -27,8 +27,6 @@ ifeq ("@BUILD_LOCKDDLM@", "yes")
LOCK_LIBS += -ldlm_lt
endif
SOURCES2 = lvmlockctl.c
TARGETS = lvmlockd lvmlockctl
.PHONY: install_lvmlockd


@@ -22,9 +22,9 @@ static inline daemon_handle lvmlockd_open(const char *sock)
daemon_info lvmlockd_info = {
.path = "lvmlockd",
.socket = sock ?: LVMLOCKD_SOCKET,
.autostart = 0,
.protocol = "lvmlockd",
.protocol_version = 1,
.autostart = 0
};
return daemon_open(lvmlockd_info);
@@ -32,7 +32,7 @@ static inline daemon_handle lvmlockd_open(const char *sock)
static inline void lvmlockd_close(daemon_handle h)
{
daemon_close(h);
return daemon_close(h);
}
/*
@@ -49,6 +49,5 @@ static inline void lvmlockd_close(daemon_handle h)
#define ELOCKIO 218 /* sanlock io errors during lock op, may be transient. */
#define EREMOVED 219
#define EDEVOPEN 220 /* sanlock failed to open lvmlock LV */
#define ELMERR 221
#endif /* _LVM_LVMLOCKD_CLIENT_H */


@@ -1009,8 +1009,6 @@ static void add_work_action(struct action *act)
pthread_mutex_unlock(&worker_mutex);
}
#define ERR_LVMETAD_NOT_RUNNING -200
static daemon_reply send_lvmetad(const char *id, ...)
{
daemon_reply reply;
@@ -1031,9 +1029,9 @@ retry:
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0) {
err = lvmetad_handle.error ?: lvmetad_handle.socket_fd;
pthread_mutex_unlock(&lvmetad_mutex);
log_debug("lvmetad_open reconnect error %d", err);
log_error("lvmetad_open reconnect error %d", err);
memset(&reply, 0, sizeof(reply));
reply.error = ERR_LVMETAD_NOT_RUNNING;
reply.error = err;
va_end(ap);
return reply;
} else {
@@ -1267,15 +1265,6 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
* caches, and tell lvmetad to set global invalid to 0.
*/
/*
* lvmetad not running:
* Even if we have not previously found lvmetad running,
* we attempt to connect and invalidate in case it has
* been started while lvmlockd is running. We don't
* want to allow lvmetad to be used with invalid data if
* it happens to be enabled and started after lvmlockd.
*/
if (inval_meta && (r->type == LD_RT_VG)) {
daemon_reply reply;
char *uuid;
@@ -1295,10 +1284,8 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
"version = " FMTd64, (int64_t)new_version,
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
if (reply.error != ERR_LVMETAD_NOT_RUNNING)
log_error("set_vg_info in lvmetad failed %d", reply.error);
}
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_vg_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
}
@@ -1313,10 +1300,8 @@ static int res_lock(struct lockspace *ls, struct resource *r, struct action *act
"global_invalid = " FMTd64, INT64_C(1),
NULL);
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
if (reply.error != ERR_LVMETAD_NOT_RUNNING)
log_error("set_global_info in lvmetad failed %d", reply.error);
}
if (reply.error || strcmp(daemon_reply_str(reply, "response", ""), "OK"))
log_error("set_global_info in lvmetad failed %d", reply.error);
daemon_reply_destroy(reply);
}
@@ -1404,11 +1389,12 @@ static int res_convert(struct lockspace *ls, struct resource *r,
}
rv = lm_convert(ls, r, act->mode, act, r_version);
log_debug("S %s R %s res_convert rv %d", ls->name, r->name, rv);
if (rv < 0)
if (rv < 0) {
log_error("S %s R %s res_convert lm error %d", ls->name, r->name, rv);
return rv;
}
log_debug("S %s R %s res_convert lm done", ls->name, r->name);
if (lk->mode == LD_LK_EX && act->mode == LD_LK_SH) {
r->sh_count = 1;
@@ -2815,9 +2801,6 @@ static int add_lockspace_thread(const char *ls_name,
if (ls2->thread_stop) {
log_debug("add_lockspace_thread %s exists and stopping", ls->name);
rv = -EAGAIN;
} else if (!ls2->create_fail && !ls2->create_done) {
log_debug("add_lockspace_thread %s exists and starting", ls->name);
rv = -ESTARTING;
} else {
log_debug("add_lockspace_thread %s exists", ls->name);
rv = -EEXIST;
@@ -3059,7 +3042,7 @@ static int count_lockspace_starting(uint32_t client_id)
pthread_mutex_lock(&lockspaces_mutex);
list_for_each_entry(ls, &lockspaces, list) {
if (client_id && (ls->start_client_id != client_id))
if (ls->start_client_id != client_id)
continue;
if (!ls->create_done && !ls->create_fail) {
@@ -3460,7 +3443,7 @@ static void *worker_thread_main(void *arg_in)
add_client_result(act);
} else if (act->op == LD_OP_START_WAIT) {
act->result = count_lockspace_starting(0);
act->result = count_lockspace_starting(act->client_id);
if (!act->result)
add_client_result(act);
else
@@ -3494,7 +3477,7 @@ static void *worker_thread_main(void *arg_in)
list_for_each_entry_safe(act, safe, &delayed_list, list) {
if (act->op == LD_OP_START_WAIT) {
log_debug("work delayed start_wait for client %u", act->client_id);
act->result = count_lockspace_starting(0);
act->result = count_lockspace_starting(act->client_id);
if (!act->result) {
list_del(&act->list);
add_client_result(act);
@@ -5866,7 +5849,7 @@ static int main_loop(daemon_state *ds_arg)
pthread_mutex_init(&lvmetad_mutex, NULL);
lvmetad_handle = lvmetad_open(NULL);
if (lvmetad_handle.error || lvmetad_handle.socket_fd < 0)
log_debug("lvmetad_open error %d", lvmetad_handle.error);
log_error("lvmetad_open error %d", lvmetad_handle.error);
else
lvmetad_connected = 1;
@@ -5874,13 +5857,8 @@ static int main_loop(daemon_state *ds_arg)
* Attempt to rejoin lockspaces and adopt locks from a previous
* instance of lvmlockd that left behind lockspaces/locks.
*/
if (adopt_opt) {
/* FIXME: implement this without lvmetad */
if (!lvmetad_connected)
log_error("Cannot adopt locks without lvmetad running.");
else
adopt_locks();
}
if (adopt_opt)
adopt_locks();
while (1) {
rv = poll(pollfd, pollfd_maxi + 1, -1);
@@ -6035,14 +6013,14 @@ static void usage(char *prog, FILE *file)
int main(int argc, char *argv[])
{
daemon_state ds = {
.name = "lvmlockd",
.daemon_main = main_loop,
.daemon_init = NULL,
.daemon_fini = NULL,
.pidfile = getenv("LVM_LVMLOCKD_PIDFILE"),
.socket_path = getenv("LVM_LVMLOCKD_SOCKET"),
.protocol = lvmlockd_protocol,
.protocol_version = lvmlockd_protocol_version,
.daemon_init = NULL,
.daemon_fini = NULL,
.daemon_main = main_loop,
.name = "lvmlockd",
};
static struct option long_options[] = {


@@ -508,7 +508,7 @@ lockrv:
}
if (rv < 0) {
log_error("S %s R %s lock_dlm acquire error %d errno %d", ls->name, r->name, rv, errno);
return -ELMERR;
return rv;
}
if (rdd->vb) {
@@ -581,7 +581,6 @@ int lm_convert_dlm(struct lockspace *ls, struct resource *r,
}
if (rv < 0) {
log_error("S %s R %s convert_dlm error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
}
@@ -655,7 +654,6 @@ int lm_unlock_dlm(struct lockspace *ls, struct resource *r,
0, NULL, NULL, NULL);
if (rv < 0) {
log_error("S %s R %s unlock_dlm error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
@@ -699,7 +697,7 @@ int lm_hosts_dlm(struct lockspace *ls, int notify)
return 0;
memset(ls_nodes_path, 0, sizeof(ls_nodes_path));
snprintf(ls_nodes_path, PATH_MAX, "%s/%s/nodes",
snprintf(ls_nodes_path, PATH_MAX-1, "%s/%s/nodes",
DLM_LOCKSPACES_PATH, ls->name);
if (!(ls_dir = opendir(ls_nodes_path)))


@@ -151,7 +151,7 @@ struct resource {
struct list_head locks;
struct list_head actions;
char lv_args[MAX_ARGS+1];
char lm_data[]; /* lock manager specific data */
char lm_data[0]; /* lock manager specific data */
};
#define LD_LF_PERSISTENT 0x00000001


@@ -294,36 +294,6 @@ out:
return host_id;
}
/* Prepare valid /dev/mapper/vgname-lvname with all the mangling */
static int build_dm_path(char *path, size_t path_len,
const char *vg_name, const char *lv_name)
{
struct dm_pool *mem;
char *dm_name;
int rv = 0;
if (!(mem = dm_pool_create("namepool", 1024))) {
log_error("Failed to create mempool.");
return -ENOMEM;
}
if (!(dm_name = dm_build_dm_name(mem, vg_name, lv_name, NULL))) {
log_error("Failed to build dm name for %s/%s.", vg_name, lv_name);
rv = -EINVAL;
goto fail;
}
if ((dm_snprintf(path, path_len, "%s/%s", dm_dir(), dm_name) < 0)) {
log_error("Failed to create path %s/%s.", dm_dir(), dm_name);
rv = -EINVAL;
}
fail:
dm_pool_destroy(mem);
return rv;
}
/*
* vgcreate
*
@@ -366,8 +336,7 @@ int lm_init_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_ar
if (strlen(lock_lv_name) + strlen(lock_args_version) + 2 > MAX_ARGS)
return -EARGS;
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
log_debug("S %s init_vg_san path %s", ls_name, disk.path);
@@ -544,8 +513,7 @@ int lm_init_lv_sanlock(char *ls_name, char *vg_name, char *lv_name,
strncpy(rd.rs.lockspace_name, ls_name, SANLK_NAME_LEN);
rd.rs.num_disks = 1;
if ((rv = build_dm_path(rd.rs.disks[0].path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(rd.rs.disks[0].path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
align_size = sanlock_align(&rd.rs.disks[0]);
if (align_size <= 0) {
@@ -644,8 +612,7 @@ int lm_rename_vg_sanlock(char *ls_name, char *vg_name, uint32_t flags, char *vg_
return rv;
}
if ((rv = build_dm_path(disk.path, SANLK_PATH_LEN, vg_name, lock_lv_name)))
return rv;
snprintf(disk.path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s", vg_name, lock_lv_name);
log_debug("S %s rename_vg_san path %s", ls_name, disk.path);
@@ -1102,10 +1069,10 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
* and appending "lockctl" to get /path/to/lvmlockctl.
*/
memset(killpath, 0, sizeof(killpath));
snprintf(killpath, SANLK_PATH_LEN, "%slockctl", LVM_PATH);
snprintf(killpath, SANLK_PATH_LEN - 1, "%slockctl", LVM_PATH);
memset(killargs, 0, sizeof(killargs));
snprintf(killargs, SANLK_PATH_LEN, "--kill %s", ls->vg_name);
snprintf(killargs, SANLK_PATH_LEN - 1, "--kill %s", ls->vg_name);
rv = check_args_version(ls->vg_args, VG_LOCK_ARGS_MAJOR);
if (rv < 0) {
@@ -1121,8 +1088,8 @@ int lm_prepare_lockspace_sanlock(struct lockspace *ls)
goto fail;
}
if ((ret = build_dm_path(disk_path, SANLK_PATH_LEN, ls->vg_name, lock_lv_name)))
goto fail;
snprintf(disk_path, SANLK_PATH_LEN-1, "/dev/mapper/%s-%s",
ls->vg_name, lock_lv_name);
/*
* When a vg is started, the internal sanlock lv should be
@@ -1493,12 +1460,6 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
rv = sanlock_acquire(lms->sock, -1, flags, 1, &rs, &opt);
/*
* errors: translate the sanlock error number to an lvmlockd error.
* We don't want to return an sanlock-specific error number from
* this function to code that doesn't recognize sanlock error numbers.
*/
if (rv == -EAGAIN) {
/*
* It appears that sanlock_acquire returns EAGAIN when we request
@@ -1567,26 +1528,6 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
return -EAGAIN;
}
if (rv == SANLK_AIO_TIMEOUT) {
/*
* sanlock got an i/o timeout when trying to acquire the
* lease on disk.
*/
log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
*retry = 0;
return -EAGAIN;
}
if (rv == SANLK_DBLOCK_LVER || rv == SANLK_DBLOCK_MBAL) {
/*
* There was contention with another host for the lease,
* and we lost.
*/
log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
*retry = 0;
return -EAGAIN;
}
if (rv == SANLK_ACQUIRE_OWNED_RETRY) {
/*
* The lock is held by a failed host, and will eventually
@@ -1637,25 +1578,15 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
if (rv == -ENOSPC)
rv = -ELOCKIO;
/*
* generic error number for sanlock errors that we are not
* catching above.
*/
return -ELMERR;
return rv;
}
/*
* sanlock acquire success (rv 0)
*/
if (rds->vb) {
rv = sanlock_get_lvb(0, rs, (char *)&vb, sizeof(vb));
if (rv < 0) {
log_error("S %s R %s lock_san get_lvb error %d", ls->name, r->name, rv);
memset(rds->vb, 0, sizeof(struct val_blk));
memset(vb_out, 0, sizeof(struct val_blk));
/* the lock is still acquired, the vb values considered invalid */
rv = 0;
goto out;
}
@@ -1708,7 +1639,6 @@ int lm_convert_sanlock(struct lockspace *ls, struct resource *r,
if (rv < 0) {
log_error("S %s R %s convert_san set_lvb error %d",
ls->name, r->name, rv);
return -ELMERR;
}
}
@@ -1721,35 +1651,14 @@ int lm_convert_sanlock(struct lockspace *ls, struct resource *r,
if (daemon_test)
return 0;
/*
* Don't block waiting for a failed lease to expire since it causes
* sanlock_convert to block for a long time, which would prevent this
* thread from processing other lock requests.
*
* FIXME: SANLK_CONVERT_OWNER_NOWAIT is the same as SANLK_ACQUIRE_OWNER_NOWAIT.
* Change to use the CONVERT define when the latest sanlock version has it.
*/
flags |= SANLK_ACQUIRE_OWNER_NOWAIT;
rv = sanlock_convert(lms->sock, -1, flags, rs);
if (!rv)
return 0;
switch (rv) {
case -EAGAIN:
case SANLK_ACQUIRE_IDLIVE:
case SANLK_ACQUIRE_OWNED:
case SANLK_ACQUIRE_OWNED_RETRY:
case SANLK_ACQUIRE_OTHER:
case SANLK_AIO_TIMEOUT:
case SANLK_DBLOCK_LVER:
case SANLK_DBLOCK_MBAL:
/* expected errors from known/normal cases like lock contention or io timeouts */
log_debug("S %s R %s convert_san error %d", ls->name, r->name, rv);
if (rv == -EAGAIN) {
/* FIXME: When could this happen? Should something different be done? */
log_error("S %s R %s convert_san EAGAIN", ls->name, r->name);
return -EAGAIN;
default:
}
if (rv < 0) {
log_error("S %s R %s convert_san convert error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
return rv;
@@ -1786,7 +1695,6 @@ static int release_rename(struct lockspace *ls, struct resource *r)
rv = sanlock_release(lms->sock, -1, SANLK_REL_RENAME, 2, res_args);
if (rv < 0) {
log_error("S %s R %s unlock_san release rename error %d", ls->name, r->name, rv);
rv = -ELMERR;
}
free(res_args);
@@ -1843,7 +1751,6 @@ int lm_unlock_sanlock(struct lockspace *ls, struct resource *r,
if (rv < 0) {
log_error("S %s R %s unlock_san set_lvb error %d",
ls->name, r->name, rv);
return -ELMERR;
}
}
@@ -1862,8 +1769,6 @@ int lm_unlock_sanlock(struct lockspace *ls, struct resource *r,
if (rv == -EIO)
rv = -ELOCKIO;
else if (rv < 0)
rv = -ELMERR;
return rv;
}


@@ -915,7 +915,7 @@ int main(int argc, char *argv[])
int option_index = 0;
int client = 0, server = 0;
unsigned action = ACTION_MAX;
struct timespec timeout;
struct timeval timeout;
daemon_idle di = { .ptimeout = &timeout };
struct lvmpolld_state ls = { .log_config = "" };
daemon_state s = {


@@ -1,248 +0,0 @@
#include "target.h"
// For DM_ARRAY_SIZE!
#include "libdevmapper.h"
#include <ctype.h>
#include <stdlib.h>
#include <string.h>
//----------------------------------------------------------------
static char *_tok_cpy(const char *b, const char *e)
{
char *new = malloc((e - b) + 1);
char *ptr = new;
if (new) {
while (b != e)
*ptr++ = *b++;
*ptr = '\0';
}
return new;
}
static bool _tok_eq(const char *b, const char *e, const char *str)
{
while (b != e) {
if (!*str || *b != *str)
return false;
b++;
str++;
}
return !*str;
}
static bool _parse_operating_mode(const char *b, const char *e, void *context)
{
static struct {
const char *str;
enum vdo_operating_mode mode;
} _table[] = {
{"recovering", VDO_MODE_RECOVERING},
{"read-only", VDO_MODE_READ_ONLY},
{"normal", VDO_MODE_NORMAL}
};
enum vdo_operating_mode *r = context;
unsigned i;
for (i = 0; i < DM_ARRAY_SIZE(_table); i++) {
if (_tok_eq(b, e, _table[i].str)) {
*r = _table[i].mode;
return true;
}
}
return false;
}
static bool _parse_compression_state(const char *b, const char *e, void *context)
{
static struct {
const char *str;
enum vdo_compression_state state;
} _table[] = {
{"online", VDO_COMPRESSION_ONLINE},
{"offline", VDO_COMPRESSION_OFFLINE}
};
enum vdo_compression_state *r = context;
unsigned i;
for (i = 0; i < DM_ARRAY_SIZE(_table); i++) {
if (_tok_eq(b, e, _table[i].str)) {
*r = _table[i].state;
return true;
}
}
return false;
}
static bool _parse_recovering(const char *b, const char *e, void *context)
{
bool *r = context;
if (_tok_eq(b, e, "recovering"))
*r = true;
else if (_tok_eq(b, e, "-"))
*r = false;
else
return false;
return true;
}
static bool _parse_index_state(const char *b, const char *e, void *context)
{
static struct {
const char *str;
enum vdo_index_state state;
} _table[] = {
{"error", VDO_INDEX_ERROR},
{"closed", VDO_INDEX_CLOSED},
{"opening", VDO_INDEX_OPENING},
{"closing", VDO_INDEX_CLOSING},
{"offline", VDO_INDEX_OFFLINE},
{"online", VDO_INDEX_ONLINE},
{"unknown", VDO_INDEX_UNKNOWN}
};
enum vdo_index_state *r = context;
unsigned i;
for (i = 0; i < DM_ARRAY_SIZE(_table); i++) {
if (_tok_eq(b, e, _table[i].str)) {
*r = _table[i].state;
return true;
}
}
return false;
}
static bool _parse_uint64(const char *b, const char *e, void *context)
{
uint64_t *r = context, n;
n = 0;
while (b != e) {
if (!isdigit(*b))
return false;
n = (n * 10) + (*b - '0');
b++;
}
*r = n;
return true;
}
static const char *_eat_space(const char *b, const char *e)
{
while (b != e && isspace(*b))
b++;
return b;
}
static const char *_next_tok(const char *b, const char *e)
{
const char *te = b;
while (te != e && !isspace(*te))
te++;
return te == b ? NULL : te;
}
static void _set_error(struct vdo_status_parse_result *result, const char *fmt, ...)
__attribute__ ((format(printf, 2, 3)));
static void _set_error(struct vdo_status_parse_result *result, const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
vsnprintf(result->error, sizeof(result->error), fmt, ap);
va_end(ap);
}
static bool _parse_field(const char **b, const char *e,
bool (*p_fn)(const char *, const char *, void *),
void *field, const char *field_name,
struct vdo_status_parse_result *result)
{
const char *te;
te = _next_tok(*b, e);
if (!te) {
_set_error(result, "couldn't get token for '%s'", field_name);
return false;
}
if (!p_fn(*b, te, field)) {
_set_error(result, "couldn't parse '%s'", field_name);
return false;
}
*b = _eat_space(te, e);
return true;
}
bool vdo_status_parse(const char *input, struct vdo_status_parse_result *result)
{
const char *b = input;
const char *e = input + strlen(input);
const char *te;
struct vdo_status *s = malloc(sizeof(*s));
if (!s) {
_set_error(result, "out of memory");
return false;
}
b = _eat_space(b, e);
te = _next_tok(b, e);
if (!te) {
_set_error(result, "couldn't get token for device");
free(s);
return false;
}
s->device = _tok_cpy(b, te);
if (!s->device) {
_set_error(result, "out of memory");
free(s);
return false;
}
b = _eat_space(te, e);
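/* X-macro style field dispatch: each XX() line below expands to a
 * _parse_field() call for the next whitespace-delimited token and
 * jumps to 'bad' on failure. */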
#define XX(p, f, fn) if (!_parse_field(&b, e, p, f, fn, result)) goto bad;
XX(_parse_operating_mode, &s->operating_mode, "operating mode");
XX(_parse_recovering, &s->recovering, "recovering");
XX(_parse_index_state, &s->index_state, "index state");
XX(_parse_compression_state, &s->compression_state, "compression state");
XX(_parse_uint64, &s->used_blocks, "used blocks");
XX(_parse_uint64, &s->total_blocks, "total blocks");
#undef XX
if (b != e) {
_set_error(result, "too many tokens");
goto bad;
}
result->status = s;
return true;
bad:
free(s->device);
free(s);
return false;
}
//----------------------------------------------------------------


@@ -1,68 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of the device-mapper userspace tools.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef DEVICE_MAPPER_VDO_TARGET_H
#define DEVICE_MAPPER_VDO_TARGET_H
#include <stdbool.h>
#include <stdint.h>
//----------------------------------------------------------------
enum vdo_operating_mode {
VDO_MODE_RECOVERING,
VDO_MODE_READ_ONLY,
VDO_MODE_NORMAL
};
enum vdo_compression_state {
VDO_COMPRESSION_ONLINE,
VDO_COMPRESSION_OFFLINE
};
enum vdo_index_state {
VDO_INDEX_ERROR,
VDO_INDEX_CLOSED,
VDO_INDEX_OPENING,
VDO_INDEX_CLOSING,
VDO_INDEX_OFFLINE,
VDO_INDEX_ONLINE,
VDO_INDEX_UNKNOWN
};
struct vdo_status {
char *device;
enum vdo_operating_mode operating_mode;
bool recovering;
enum vdo_index_state index_state;
enum vdo_compression_state compression_state;
uint64_t used_blocks;
uint64_t total_blocks;
};
void vdo_status_destroy(struct vdo_status *s);
#define VDO_MAX_ERROR 256
struct vdo_status_parse_result {
char error[VDO_MAX_ERROR];
struct vdo_status *status;
};
// Parses the status line from the kernel target.
bool vdo_status_parse(const char *input, struct vdo_status_parse_result *result);
//----------------------------------------------------------------
#endif


@@ -1,3 +1,4 @@
LVM disk reading
Reading disks happens in two phases. The first is a discovery phase,
@@ -59,6 +60,16 @@ that belong to LVM, and a list of VG names that exist on
those devices. Each device (info struct) is associated
with the VG (vginfo struct) it is used in.
If the number <N> of KB read in step (a) was large enough, then
all the structs/metadata needed in steps b-e will be found
in the data buffer returned by step a.  If a particular struct
or piece of metadata needed in steps b-e is located outside the range
of the initial read, then those steps need to issue their own
read at the necessary location to get that bit of data.
(The optional second mda_header and VG metadata in step g
is located at the end of the device, and will always require
an additional read.)
Phase 1 in code:
@@ -66,31 +77,55 @@ The most relevant functions are listed for each step in the outline.
lvmcache_label_scan()
label_scan()
_label_scan_async()
. dev_cache_scan()
choose which devices on the system to look at
for each dev: dev = dev_iter_get(iter) ...
. for each dev in dev_cache: bcache prefetch/read
a. _label_read_async_start()
. _process_block() to process data from bcache
_find_lvm_header() checks if this is an lvm dev by looking at label_header
_text_read() via ops->read() looks at mda/pv/vg data to populate lvmcache
b. _label_read_data_process()
_find_label_header()
. _read_mda_header_and_metadata()
c. _label_read_data_process()
ops->read()
_text_read()
d. _read_mda_header_and_metadata()
raw_read_mda_header()
. _read_mda_header_and_metadata()
e. _read_mda_header_and_metadata()
read_metadata_location()
text_read_metadata_summary()
config_file_read_fd()
_read_vgsummary() via ops->read_vgsummary()
ops->read_vgsummary()
_read_vgsummary()
. _text_read(): lvmcache_add()
f. _text_read(): lvmcache_add()
[adds this device to list of lvm devices]
_read_mda_header_and_metadata(): lvmcache_update_vgname_and_id()
_read_mda_header_and_metadata(): lvmcache_update_vgname_and_id()
[adds the VG name to list of VGs]
Phase 1 in log messages:
For each device:
Scanning data from all devs async
a. Reading sectors from device <dev>
b. Parsing label and data from device <dev>
d. Copying mda header sector from <dev> ...
or if the mda_header needs to be read from disk:
Reading mda header sector from <dev> ...
e. Copying metadata summary for <dev> ...
or if the metadata needs to be read from disk:
Reading metadata summary for <dev> ...
f. lvmcache <dev> ...
Phase 2: Work
-------------
@@ -115,11 +150,10 @@ For each VG name:
a. Lock the VG.
b. Repeat the phase 1 scan steps for each device in this VG.
b. Repeat the phase 1 scan steps for each device (PV) in this VG.
The phase 1 information in lvmcache may have changed because no VG lock
was held during phase 1. So, repeat the phase 1 steps, but only for the
devices in this VG. N.B. for commands that are just reporting data,
we skip this step if the data from phase 1 was complete and consistent.
devices in this VG.
c. Get the list of on-disk metadata locations for this VG.
Phase 1 created this list in lvmcache to be used here. At this
@@ -141,6 +175,11 @@ d. For each metadata location on each device in the VG
in the previous step. The VG metadata is fully analyzed and used
to create an in-memory 'struct volume_group'.
Copying or reading the mda_header and VG metadata in steps d.1 and d.2
follow the same model as in phase 1: if the data read in scan step 2.b
covered these areas, then data is simply copied out of the buffer from
step 2.b, otherwise new reads are done.
e. Compare the copies of VG metadata that were found in each location.
If some copies are older, choose the newest one to use, and update
any older copies.
@@ -157,14 +196,14 @@ The most relevant functions are listed for each step in the outline.
For each VG name:
process_each_vg()
. vg_read()
a. vg_read()
lock_vol()
. vg_read()
lvmcache_label_rescan_vg() (if needed)
[insert phase 1 steps for scanning devs, but only devs in this vg]
b. vg_read()
lvmcache_label_rescan_vg()
[insert phase 1 steps a-f]
. vg_read()
c. vg_read()
create_instance()
_text_create_text_instance()
_create_vg_text_instance()
@@ -173,166 +212,35 @@ For each VG name:
by phase 1, into fid->metadata_areas_in_use. This is
the key connection between phase 1 and phase 2.]
. dm_list_iterate_items(mda, &fid->metadata_areas_in_use)
d. dm_list_iterate_items(mda, &fid->metadata_areas_in_use)
. _vg_read_raw() via ops->vg_read()
raw_read_mda_header()
d1. ops->vg_read()
_vg_read_raw()
raw_read_mda_header()
. _vg_read_raw()
text_read_metadata()
config_file_read_fd()
_read_vg() via ops->read_vg()
. return the 'vg' struct from vg_read() and use it to do
command-specific work
d2. _vg_read_raw()
text_read_metadata()
config_file_read_fd()
ops->read_vg()
_read_vg()
Phase 2 in log messages:
Filter i/o
----------
For each VG name:
Processing VG <name>
Reading VG <name>
Some filters must be applied before reading a device, and other filters
must be applied after reading a device. In all cases, the filters must be
applied before lvm processes the device, i.e. before it looks for an lvm
label.
b. Reading VG rereading labels for <name>
Scanning data from devs async
[insert log messages from phase 1 steps a-f]
Scanned data from <N> devs async
1. Some filters need to be applied prior to reading any devices
because the purpose of the filter is to avoid submitting any
io on the excluded devices. The regex filter is the primary
example. Other filters benefit from being applied prior to
reading devices because they can tell which devices to
exclude without doing io to the device. An example of this
is the mpath filter.
For each mda on each <dev> in the VG:
2. Some filters need to be applied after reading a device because
they are based on data/signatures seen on the device.
The partitioned filter is an example of this; lvm needs to
read a device to see if it has a partition table before it can
know whether to exclude the device from further processing.
d. Reading VG <name> from <dev>
We apply filters from 1 before reading devices, and we apply filters from
2 after populating bcache, but before processing the device (i.e. before
checking for an lvm label, which is the first step in processing.)
The current implementation of this makes filters return -EAGAIN if they
want to read the device, but bcache data is not yet available. This will
happen when filtering runs prior to populating bcache. In this case the
device is flagged. After bcache is populated, the filters are reapplied
to the flagged devices. The filters which need to look at device content
are now able to get it from bcache. Devices that do not pass filters at
this point are excluded just like devices which were excluded earlier.
(Some filters from 2 can be skipped by consulting udev for the information
instead of reading the device. This is not entirely reliable, so it is
disabled by default with the config setting external_device_info_source.
It may be worthwhile to change the filters to use the udev info as a hint,
or only use udev info for filtering in reporting commands where
inaccuracies are not a big problem.)
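To make the two-pass mechanism concrete, here is a minimal, self-contained
sketch (names like partition_filter and FILTER_EAGAIN are illustrative,
not lvm's actual filter API):

#include <stdio.h>

#define FILTER_EAGAIN (-1)  /* filter needs device data not yet in bcache */

struct dev { const char *name; int cached, flagged, excluded; };

/* A content-based filter: without cached data it cannot decide.
 * (Filters that decide without io, like regex/mpath, would simply
 * never return FILTER_EAGAIN.) */
static int partition_filter(struct dev *d)
{
	if (!d->cached)
		return FILTER_EAGAIN;
	return d->name[2] == 'b' ? 0 : 1;  /* pretend "sdb" is partitioned: 0 = exclude */
}

int main(void)
{
	struct dev devs[] = { { "sda", 0, 0, 0 }, { "sdb", 0, 0, 0 } };
	int i, rv, n = 2;

	/* Pass 1: before any io; flag devices the filter cannot judge yet. */
	for (i = 0; i < n; i++) {
		rv = partition_filter(&devs[i]);
		if (rv == FILTER_EAGAIN)
			devs[i].flagged = 1;
		else if (!rv)
			devs[i].excluded = 1;
	}

	/* ... populate bcache for devices that were not excluded ... */
	for (i = 0; i < n; i++)
		if (!devs[i].excluded)
			devs[i].cached = 1;

	/* Pass 2: reapply filters to flagged devices, now with data available. */
	for (i = 0; i < n; i++)
		if (devs[i].flagged && !partition_filter(&devs[i]))
			devs[i].excluded = 1;

	for (i = 0; i < n; i++)
		printf("%s: %s\n", devs[i].name,
		       devs[i].excluded ? "excluded" : "passed");
	return 0;
}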
I/O Performance
---------------
. 400 loop devices used as PVs
. 40 VGs each with 10 PVs
. each VG has one active LV
. each of the 10 PVs in vg0 has an artificial 100 ms read delay
. read/write/io_submit are system call counts using strace
. old is lvm 2.2.175
. new is lvm 2.2.178 (shortly before)
Command: pvs
------------
old: 0m17.422s
new: 0m0.331s
old: read 7773 write 497
new: read 2807 write 495 io_submit 448
Command: vgs
------------
old: 0m20.383s
new: 0m0.325s
old: read 10684 write 129
new: read 2807 write 129 io_submit 448
Command: vgck vg0
-----------------
old: 0m16.212s
new: 0m1.290s
old: read 6372 write 4
new: read 2807 write 4 io_submit 458
Command: lvcreate -n test -l1 -an vg0
-------------------------------------
old: 0m29.271s
new: 0m1.351s
old: read 6503 write 39
new: read 2808 write 9 io_submit 488
Command: lvremove vg0/test
--------------------------
old: 0m29.262s
new: 0m1.348s
old: read 6502 write 36
new: read 2807 write 6 io_submit 488
io_submit sources
-----------------
vgs:
reads:
- 400 for each PV
- 40 for each LV
- 8 for other devs on the system
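(total 400 + 40 + 8 = 448, matching the io_submit count above)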
vgck vg0:
reads:
- 400 for each PV
- 40 for each LV
- 10 for each PV in vg0 (rescan)
- 8 for other devs on the system
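(total 400 + 40 + 10 + 8 = 458, matching the io_submit count above)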
lvcreate -n test -l1 -an vg0
reads:
- 400 for each PV
- 40 for each LV
- 10 for each PV in vg0 (rescan)
- 8 for other devs on the system
writes:
- 10 for metadata on each PV in vg0
- 10 for precommit on each PV in vg0
- 10 for commit on each PV in vg0
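(reads 400 + 40 + 10 + 8 = 458, plus 30 writes = 488 io_submits)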
With lvmetad
------------
Command: pvs
------------
old: 0m5.405s
new: 0m1.404s
Command: vgs
------------
old: 0m0.222s
new: 0m0.223s
Command: lvcreate -n test -l1 -an vg0
-------------------------------------
old: 0m10.128s
new: 0m1.137s
d.1. Copying|Reading mda header sector from <dev> ...
d.2. Copying|Reading metadata from <dev> ...


@@ -1,158 +0,0 @@
Over time, I'd like to refactor the LVM code into these high level modules.
+-------------------------------------------+
| |
| User Interface |
| |
| |
+-------------------+-----------------------+
|
+--------------------v-----------------------+
| |
| LVM Core |
| |
| |
+----+----------------+-----------------+----+
| | |
+-----v-----+ +-----v------+ +------v----+
| | | | | |
| Device | | Metadata | | System |
| Mapper | | | | |
| | | | | |
| | | | | |
| | | | | |
+-----------+ +------------+ +-----------+
+---------------------------------------------------------+
+------------------------------------+
| |
| Base |
| |
| |
| |
| |
+------------------------------------+
Going from the bottom up we have:
Base
----
This holds all our general purpose code such as data structures, regex engine,
memory allocators. In fact pretty much everything in libdevmapper apart from
the dm code and config.
This can be used by any code in the system, which is why I've drawn a line
between it and the code above rather than using arrows.
If anyone can come up with a better name please do. I'm trying to stay away
from 'utils'.
Device mapper
-------------
As well as the low level dm-ioctl driving code we need to have all our dm 'best
practice' stuff in here. For instance this is the code that decides to use the
mirror target to move some data around; that knows to suspend a thin volume
before taking a snapshot of it. This module is going to have a lot more code
in it than the current libdevmapper.
It should not know anything about the LVM abstractions or metadata (no PVs, LVs
or VGs). It just knows about the dm world.
Code in here is only allowed to use base.
Metadata model
--------------
Here we have all the format handling, labelling, config parsing etc. We try
and put *everything* to do with LVM in here that doesn't actually require dm.
System
------
Code that interfaces with the system (udev etc).
LVM Core
--------
[terrible name]
This ties together the last 3 units. It should just be glue. We need to be
strict about pushing code down from here to keep this as small as possible.
User interface
--------------
Self explanatory.
Headers
-------
Headers will be included using sub directories to make it clearer where they
are in the tree.
eg,
#include "base/mm/pool.h"
#include "base/data-struct/list.h"
#include "dm/thin-provisioning.h"
#include "core/pvmove.h"
Getting there
=============
+-------------------------------------------+
| |
| |
| Tools |
| |
| |
| |
+---------+------------------------------+--+
| |
| +---------------v---------------------------+
| | |
| | |
| | Lib |
| | |
| | |
| | |
| | |
| +----------------+--------------------------+
| |
| |
+-----v-------------------------------v-----+
| |
| |
| libdevmapper |
| |
| |
| |
| |
+-------------------------------------------+
This is where I see us now.
'base' should be easy to factor out, it's just the non-dm part of libdevmapper
(ie. the bulk of it). But we have the problem that libdevmapper is a public
interface to get round.
'lib' is where the bulk of our code currently is. Dependency-wise the code is
a bit like a ball of string. So splitting it up is going to take time. We can
probably pull code pretty quickly into the 'metadata model' dir. But factoring
out the dm best-practice stuff is going to require splitting files at least,
and probably functions too. Certainly not something that can be done in one
go. System should just be a question of cherry picking functions.
I'm not too familiar with the tools dir. Hopefully it just corresponds with
the User Interface module and doesn't contain any business logic.


@@ -1,53 +0,0 @@
Version 2.02.178
================
There are going to be some large changes to the lvm2 codebase
over the next year or so. Starting with this release. These
changes should be internal rather than having a big effect on
the command line. Inevitably these changes will increase the
chance of bugs, so please be on the alert.
Remove support for obsolete metadata formats
--------------------------------------------
Support for the GFS pool format, and format used by the
original 1990's version of LVM1 have been removed.
Use asynchronous IO
-------------------
Almost all IO uses libaio now.
Rewrite label scanning
----------------------
Dave Teigland has reworked the label scanning and metadata reading
logic to minimise the amount of IOs issued. Combined with the aio changes
this can greatly improve scanning speed for some systems.
./configure options
-------------------
We're going to try and remove as many options from ./configure as we
can. Each option multiplies the number of possible configurations
that we should test (this testing is currently not occurring).
The first batch to be removed are:
--enable-testing
--with-snapshots
--with-mirrors
--with-raid
--with-thin
--with-cache
Stable targets that are in the upstream kernel will just be supported.
In future optional target flags will be given in two situations:
1) The target is experimental, or not upstream at all (eg, vdo).
2) The target is deprecated and support will be removed at some future date.
This decision could well be contentious, so could distro maintainers feel
free to comment.


@@ -1,257 +0,0 @@
Building unit tests
===================
make unit-unit/unit-test
Running unit tests
==================
The tests leave no artifacts at the moment, so you can just run
unit-test/unit-test from wherever you want.
./unit-test <list|run> [pattern]
Listing tests
-------------
Every test has a symbolic path associated with it. Just like file paths they
are split into components separated by '/'s. The 'list' command will show you
a tree of these tests, along with some description text.
ejt@devel-vm1:~/lvm2/unit-test/$ ./unit-test list
base
data-struct
bitset
and ................................................. and all bits
equal ............................................... equality
get_next ............................................ get next set bit
list
splice .............................................. joining lists together
string
asprint ............................................. tests asprint
strncpy ............................................. tests string copying
device
bcache
block-size-multiple-page ............................ block size must be a multiple of page size
block-size-positive ................................. block size must be positive
blocks-get-evicted .................................. block get evicted with many reads
cache-blocks-positive ............................... nr cache blocks must be positive
create-destroy ...................................... simple create/destroy
flush-waits ......................................... flush waits for all dirty
get-reads ........................................... bcache_get() triggers read
prefetch-never-waits ................................ too many prefetches does not trigger a wait
prefetch-reads ...................................... prefetch issues a read
read-multiple-files ................................. read from multiple files
reads-cached ........................................ repeated reads are cached
writeback-occurs .................................... dirty data gets written back
zero-flag-dirties ................................... zeroed data counts as dirty
formatting
percent
0 ................................................... Pretty printing of percentages near 0%
100 ................................................. Pretty printing of percentages near 100%
regex
fingerprints .......................................... not sure
matching .............................................. test the matcher with a variety of regexes
dm
target
mirror
status .............................................. parsing mirror status
metadata
config
cascade ............................................... cascade
clone ................................................. duplicating a config tree
parse ................................................. parsing various
An optional 'pattern' argument may be specified to select subsets of tests.
This pattern is a posix regex and does a substring match, so you will need to
use anchors if you particularly want the match at the beginning or end of the
string.
ejt@devel-vm1:~/lvm2/unit-test/$ ./unit-test list data-struct
base
data-struct
bitset
and ................................................. and all bits
equal ............................................... equality
get_next ............................................ get next set bit
list
splice .............................................. joining lists together
string
asprint ............................................. tests asprint
strncpy ............................................. tests string copying
ejt@devel-vm1:~/lvm2/unit-test/$ ./unit-test list s$
base
device
bcache
flush-waits ......................................... flush waits for all dirty
get-reads ........................................... bcache_get() triggers read
prefetch-never-waits ................................ too many prefetches does not trigger a wait
prefetch-reads ...................................... prefetch issues a read
read-multiple-files ................................. read from multiple files
writeback-occurs .................................... dirty data gets written back
zero-flag-dirties ................................... zeroed data counts as dirty
regex
fingerprints .......................................... not sure
dm
target
mirror
status .............................................. parsing mirror status
Running tests
=============
'make run-unit-test' from the top level will run all unit tests. But I tend to
run it by hand so I can select just the tests I'm working on.
Use the 'run' command to run the tests. Currently all logging goes to stderr,
so the test runner prints a line at the start of the test and a line
indicating success or failure at the end.
ejt@devel-vm1:~/lvm2/unit-test/$ ./unit-test run bcache/block-size
[RUN ] /base/device/bcache/block-size-multiple-page
bcache block size must be a multiple of page size
bcache block size must be a multiple of page size
bcache block size must be a multiple of page size
bcache block size must be a multiple of page size
[ OK] /base/device/bcache/block-size-multiple-page
[RUN ] /base/device/bcache/block-size-positive
bcache must have a non zero block size
[ OK] /base/device/bcache/block-size-positive
2/2 tests passed
ejt@devel-vm1:~/lvm2/unit-test/$ ./unit-test run data-struct
[RUN ] /base/data-struct/bitset/and
[ OK] /base/data-struct/bitset/and
[RUN ] /base/data-struct/bitset/equal
[ OK] /base/data-struct/bitset/equal
[RUN ] /base/data-struct/bitset/get_next
[ OK] /base/data-struct/bitset/get_next
[RUN ] /base/data-struct/list/splice
[ OK] /base/data-struct/list/splice
[RUN ] /base/data-struct/string/asprint
[ OK] /base/data-struct/string/asprint
[RUN ] /base/data-struct/string/strncpy
[ OK] /base/data-struct/string/strncpy
6/6 tests passed
Writing tests
=============
[See unit-test/framework.h and unit-test/units.h for the details]
Tests are grouped together into 'suites', all tests in a suite share a
'fixture'. A fixture is a void * to any object you want; use it to set up any
common environment that you need for the tests to run (eg, creating a dm_pool).
Test suites have nothing to do with the test paths, you can have tests from
different suites with similar paths, the runner sorts things for you.
Put your tests in a file in unit-test/, with '_t' at the end of the name
(convention only, nothing relies on this).
    #include "units.h"
Then write any fixtures you need:
eg,
    static void *_mem_init(void) {
            struct dm_pool *mem = dm_pool_create("bitset test", 1024);
            if (!mem) {
                    fprintf(stderr, "out of memory\n");
                    exit(1);
            }

            return mem;
    }

    static void _mem_exit(void *mem)
    {
            dm_pool_destroy(mem);
    }
Then write your tests, which should take the void * that was returned by your
fixture. Use the T_ASSERT* macros to indicate failure.
eg,
    static void test_equal(void *fixture)
    {
            struct dm_pool *mem = fixture;
            dm_bitset_t bs1 = dm_bitset_create(mem, NR_BITS);
            dm_bitset_t bs2 = dm_bitset_create(mem, NR_BITS);

            int i, j;
            for (i = 0, j = 1; i < NR_BITS; i += j, j++) {
                    dm_bit_set(bs1, i);
                    dm_bit_set(bs2, i);
            }

            T_ASSERT(dm_bitset_equal(bs1, bs2));
            T_ASSERT(dm_bitset_equal(bs2, bs1));

            for (i = 0; i < NR_BITS; i++) {
                    bit_flip(bs1, i);
                    T_ASSERT(!dm_bitset_equal(bs1, bs2));
                    T_ASSERT(!dm_bitset_equal(bs2, bs1));
                    T_ASSERT(dm_bitset_equal(bs1, bs1)); /* comparing with self */
                    bit_flip(bs1, i);
            }
    }
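
Note that test_equal calls a small bit_flip helper that is not shown above. A
minimal sketch, assuming the standard libdevmapper bitset macros (dm_bit,
dm_bit_set, dm_bit_clear), could look like this:

    /* Hypothetical helper assumed by test_equal: toggle a single bit. */
    static void bit_flip(dm_bitset_t bs, unsigned bit)
    {
            if (dm_bit(bs, bit))
                    dm_bit_clear(bs, bit);
            else
                    dm_bit_set(bs, bit);
    }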
At the end of your test file you should write a function that builds one or
more test suites and adds them to the list of all suites that is passed in. I
tend to write a little macro (T) to save typing the same test path repeatedly.
eg,
    #define T(path, desc, fn) register_test(ts, "/base/data-struct/bitset/" path, desc, fn)

    void bitset_tests(struct dm_list *all_tests)
    {
            struct test_suite *ts = test_suite_create(_mem_init, _mem_exit);
            if (!ts) {
                    fprintf(stderr, "out of memory\n");
                    exit(1);
            }

            T("get_next", "get next set bit", test_get_next);
            T("equal", "equality", test_equal);
            T("and", "and all bits", test_and);

            dm_list_add(all_tests, &ts->list);
    }
Then you need to declare your registration function and call it in units.h.
    // Declare the function that adds tests suites here ...
    ...
    void bitset_tests(struct dm_list *suites);
    ...

    // ... and call it in here.
    static inline void register_all_tests(struct dm_list *suites)
    {
            ...
            bitset_tests(suites);
            ...
    }
Finally add your test file to the Makefile.in and rerun configure.
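
For example, a sketch of the kind of change needed (the variable and file
names here are assumptions; check the actual unit-test/Makefile.in for the
list your tree uses):

    # unit-test/Makefile.in (illustrative fragment)
    SOURCES=\
            bcache_t.c \
            bitset_t.c \
            framework.c \
            run.c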


@@ -1,104 +0,0 @@
# VDO - Compression and deduplication.
Currently device stacking looks like this:
Physical x [multipath] x [partition] x [mdadm] x [LUKS] x [LVS] x [LUKS] x [FS|Database|...]
Adding VDO:
Physical x [multipath] x [partition] x [mdadm] x [LUKS] x [LVS] x [LUKS] x VDO x [LVS] x [FS|Database|...]
## Where VDO fits (and where it does not):
### Backing devices for VDO volumes:
1. Physical x [multipath] x [partition] x [mdadm],
2. LUKS over (1) - full disk encryption.
3. LVs (raids|mirror|stripe|linear) x [cache] over (1).
4. LUKS over (3) - especially when using raids.
Usual limitations apply:

- Never layer LUKS over another LUKS - it makes no sense.
- LUKS is better over the raids than under them.

Devices which are not well suited as backing devices:

- thin volumes - at the moment it is not possible to take a snapshot of an
  active VDO volume on top of a thin volume.
### Using VDO as a PV:

1. under tdata
   - The best fit - it will deduplicate additional redundancies among all
     snapshots and will reduce the footprint.
   - Risks: resize! dmeventd will not be able to handle resizing of tpool ATM.
2. under corig
   - This is useful to keep the most frequently used data in cache
     uncompressed or without deduplication, if that happens to be a bottleneck.
   - Cache may fit better under the VDO device, depending on compressibility
     and the amount of duplicates, as
     - compression will reduce the amount of data, thus effectively increasing
       the size of the cache,
     - and deduplication may emphasize hotspots.
   - Performance testing of your particular workload is strongly recommended.
3. under (multiple) linear LVs - e.g. used for VMs (see the sketch below).
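
A minimal sketch of case 3 (device and VG names are illustrative, and this
assumes the standalone 'vdo' manager utility rather than any native LVM
integration):

    # LV as the backing device for a VDO volume
    lvcreate -L 1T -n vdo_backing vg0
    vdo create --name=vdo0 --device=/dev/vg0/vdo_backing --vdoLogicalSize=3T

    # use the VDO volume as a PV and carve per-VM linear LVs out of it
    pvcreate /dev/mapper/vdo0
    vgcreate vg_vms /dev/mapper/vdo0
    lvcreate -L 100G -n vm1 vg_vms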
### And where VDO does not fit:

- *never* use VDO under LUKS volumes
  - these are effectively random data and neither compress nor deduplicate well,
- *never* use VDO under cmeta and tmeta LVs
  - these are effectively random data and neither compress nor deduplicate well,
- under raids
  - raid{4,5,6} scrambles data, so it does not deduplicate well,
  - raid{1,4,5,6,10} also grows the amount of data, so more work (duplicated
    in the case of raid{1,10}) has to be done in order to find fewer duplicates.

### And where it could be useful:

- under a snapshot CoW device - when there are multiple of those it could
  deduplicate across them.
## Development

### Things to decide

- under integrity devices
  - VDO should work well for data blocks,
  - but hashes are mostly unique and not compressible - were it possible, it
    would make sense to have separate imeta and idata volumes for integrity
    devices.
### Future Integration of VDO into LVM:

One issue is using both LUKS and RAID under VDO. We have two options:

- use mdadm x LUKS x VDO+LV
- use LV RAID x LUKS x VDO+LV

In both cases dmeventd will not be able to resize the volume at the moment.

Another issue is the duality of VDO - it can be used as a top-level LV (with a
filesystem on top), but it can also be used as a "pool" for multiple devices.
This will be solved in a similar way to how thin pools allow multiple volumes.

VDO also has two sizes - its physical size and its virtual size - and when
overprovisioning, just like tpool, we face the same problems - VDO can get
full without exposing it to the FS. dmeventd monitoring will be needed.

Another possible RFE is to split data and metadata - keep data on HDD and
metadata on SSD.
## Issues / Testing

- fstrim/discard pass-down - does it work with VDO?
- VDO can run in synchronous vs. asynchronous mode:
  - synchronous for devices where a write is safe once it has been confirmed.
    Some devices lie about this.
  - asynchronous for devices requiring a flush.
- Multiple devices under VDO - need to find and expose common properties, or
  not allow grouping them together. (This is the same for all volumes with
  multiple physical devices below.)
- pvmove changing characteristics of the underlying device.
- autoactivation during boot?
- Q: can we use VDO for the RootFS? Dracut!


@@ -14,7 +14,6 @@
@top_srcdir@/lib/config/defaults.h
@top_srcdir@/lib/datastruct/btree.h
@top_srcdir@/lib/datastruct/str_list.h
@top_srcdir@/lib/device/bcache.h
@top_srcdir@/lib/device/dev-cache.h
@top_srcdir@/lib/device/dev-ext-udev-constants.h
@top_srcdir@/lib/device/dev-type.h


@@ -1,4 +1,7 @@
/* include/configure.h.in. Generated from configure.ac by autoheader. */
/* include/configure.h.in. Generated from configure.in by autoheader. */
/* Define to 1 if aio is available. */
#undef AIO_SUPPORT
/* Define to 1 to use libblkid detection of signatures when wiping. */
#undef BLKID_WIPING_SUPPORT
@@ -72,6 +75,10 @@
/* Default system configuration directory. */
#undef DEFAULT_ETC_DIR
/* Fall back to LVM1 by default if device-mapper is missing from the kernel.
*/
#undef DEFAULT_FALLBACK_TO_LVM1
/* Name of default locking directory. */
#undef DEFAULT_LOCK_DIR
@@ -144,9 +151,6 @@
/* Library version */
#undef DM_LIB_VERSION
/* Define to 1 to include the LVM editline shell. */
#undef EDITLINE_SUPPORT
/* Path to fsadm binary. */
#undef FSADM_PATH
@@ -172,9 +176,6 @@
/* Define to 1 if you have the `atexit' function. */
#undef HAVE_ATEXIT
/* Define if ioctl BLKZEROOUT can be used for device zeroing. */
#undef HAVE_BLKZEROOUT
/* Define to 1 if canonicalize_file_name is available. */
#undef HAVE_CANONICALIZE_FILE_NAME
@@ -209,12 +210,6 @@
/* Define to 1 if you have the `dup2' function. */
#undef HAVE_DUP2
/* Define to 1 if you have the <editline/history.h> header file. */
#undef HAVE_EDITLINE_HISTORY_H
/* Define to 1 if you have the <editline/readline.h> header file. */
#undef HAVE_EDITLINE_READLINE_H
/* Define to 1 if you have the <errno.h> header file. */
#undef HAVE_ERRNO_H
@@ -257,9 +252,6 @@
/* Define to 1 if you have the <langinfo.h> header file. */
#undef HAVE_LANGINFO_H
/* Define to 1 if you have the <libaio.h> header file. */
#undef HAVE_LIBAIO_H
/* Define to 1 if you have the <libcman.h> header file. */
#undef HAVE_LIBCMAN_H
@@ -352,15 +344,15 @@
/* Define to 1 if you have the <paths.h> header file. */
#undef HAVE_PATHS_H
/* Define to 1 if you have the `pselect' function. */
#undef HAVE_PSELECT
/* Define to 1 if you have the <pthread.h> header file. */
#undef HAVE_PTHREAD_H
/* Define to 1 if the system has the type `ptrdiff_t'. */
#undef HAVE_PTRDIFF_T
/* Define to 1 if the compiler has the `__builtin_clz` builtin. */
#undef HAVE___BUILTIN_CLZ
/* Define to 1 if you have the <readline/history.h> header file. */
#undef HAVE_READLINE_HISTORY_H
@@ -489,16 +481,9 @@
/* Define to 1 if you have the `strtoull' function. */
#undef HAVE_STRTOULL
/* Define to 1 if `st_blocks' is a member of `struct stat'. */
#undef HAVE_STRUCT_STAT_ST_BLOCKS
/* Define to 1 if `st_rdev' is a member of `struct stat'. */
#undef HAVE_STRUCT_STAT_ST_RDEV
/* Define to 1 if your `struct stat' has `st_blocks'. Deprecated, use
`HAVE_STRUCT_STAT_ST_BLOCKS' instead. */
#undef HAVE_ST_BLOCKS
/* Define to 1 if you have the <syslog.h> header file. */
#undef HAVE_SYSLOG_H
@@ -570,9 +555,6 @@
/* Define to 1 if you have the <sys/utsname.h> header file. */
#undef HAVE_SYS_UTSNAME_H
/* Define to 1 if you have the <sys/vfs.h> header file. */
#undef HAVE_SYS_VFS_H
/* Define to 1 if you have the <sys/wait.h> header file. */
#undef HAVE_SYS_WAIT_H
@@ -612,9 +594,6 @@
/* Define to 1 if the system has the type `_Bool'. */
#undef HAVE__BOOL
/* Define to 1 if the system has the `__builtin_clz' built-in function */
#undef HAVE___BUILTIN_CLZ
/* Internalization package */
#undef INTL_PACKAGE
@@ -631,6 +610,13 @@
slash. */
#undef LSTAT_FOLLOWS_SLASHED_SYMLINK
/* Define to 1 if 'lvm' should fall back to using LVM1 binaries if
device-mapper is missing from the kernel */
#undef LVM1_FALLBACK
/* Define to 1 to include built-in support for LVM1 metadata. */
#undef LVM1_INTERNAL
/* Path to lvmetad pidfile. */
#undef LVMETAD_PIDFILE
@@ -693,6 +679,9 @@
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define to 1 to include built-in support for GFS pool metadata. */
#undef POOL_INTERNAL
/* Define to 1 to include built-in support for raid. */
#undef RAID_INTERNAL


@@ -16,6 +16,34 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
ifeq ("@LVM1@", "shared")
SUBDIRS = format1
endif
ifeq ("@POOL@", "shared")
SUBDIRS += format_pool
endif
ifeq ("@SNAPSHOTS@", "shared")
SUBDIRS += snapshot
endif
ifeq ("@MIRRORS@", "shared")
SUBDIRS += mirror
endif
ifeq ("@RAID@", "shared")
SUBDIRS += raid
endif
ifeq ("@THIN@", "shared")
SUBDIRS += thin
endif
ifeq ("@CACHE@", "shared")
SUBDIRS += cache_segtype
endif
ifeq ("@CLUSTER@", "shared")
SUBDIRS += locking
endif
@@ -23,13 +51,10 @@ endif
SOURCES =\
activate/activate.c \
cache/lvmcache.c \
cache_segtype/cache.c \
commands/toolcontext.c \
config/config.c \
datastruct/btree.c \
datastruct/str_list.c \
device/bcache.c \
device/bcache-utils.c \
device/dev-cache.c \
device/dev-ext.c \
device/dev-io.c \
@@ -38,7 +63,6 @@ SOURCES =\
device/dev-type.c \
device/dev-luks.c \
device/dev-dasd.c \
device/dev-lvm1-pool.c \
display/display.c \
error/errseg.c \
unknown/unknown.c \
@@ -53,7 +77,6 @@ SOURCES =\
filters/filter-type.c \
filters/filter-usable.c \
filters/filter-internal.c \
filters/filter-signature.c \
format_text/archive.c \
format_text/archiver.c \
format_text/export.c \
@@ -84,7 +107,6 @@ SOURCES =\
metadata/snapshot_manip.c \
metadata/thin_manip.c \
metadata/vg.c \
mirror/mirrored.c \
misc/crc.c \
misc/lvm-exec.c \
misc/lvm-file.c \
@@ -98,19 +120,55 @@ SOURCES =\
mm/memlock.c \
notify/lvmnotify.c \
properties/prop_common.c \
raid/raid.c \
report/properties.c \
report/report.c \
snapshot/snapshot.c \
striped/striped.c \
thin/thin.c \
uuid/uuid.c \
zero/zero.c
ifeq ("@LVM1@", "internal")
SOURCES +=\
format1/disk-rep.c \
format1/format1.c \
format1/import-export.c \
format1/import-extents.c \
format1/layout.c \
format1/lvm1-label.c \
format1/vg_number.c
endif
ifeq ("@POOL@", "internal")
SOURCES +=\
format_pool/disk_rep.c \
format_pool/format_pool.c \
format_pool/import_export.c \
format_pool/pool_label.c
endif
ifeq ("@CLUSTER@", "internal")
SOURCES += locking/cluster_locking.c
endif
ifeq ("@SNAPSHOTS@", "internal")
SOURCES += snapshot/snapshot.c
endif
ifeq ("@MIRRORS@", "internal")
SOURCES += mirror/mirrored.c
endif
ifeq ("@RAID@", "internal")
SOURCES += raid/raid.c
endif
ifeq ("@THIN@", "internal")
SOURCES += thin/thin.c
endif
ifeq ("@CACHE@", "internal")
SOURCES += cache_segtype/cache.c
endif
ifeq ("@DEVMAPPER@", "yes")
SOURCES +=\
activate/dev_manager.c \
@@ -143,7 +201,14 @@ LIB_STATIC = $(LIB_NAME).a
ifeq ($(MAKECMDGOALS),distclean)
SUBDIRS =\
format1 \
format_pool \
snapshot \
mirror \
notify \
raid \
thin \
cache_segtype \
locking
endif
@@ -152,18 +217,6 @@ CFLOW_LIST_TARGET = $(LIB_NAME).cflow
PROGS_CFLAGS = $(BLKID_CFLAGS) $(UDEV_CFLAGS)
ifneq (,$(findstring $(MAKECMDGOALS),clean distclean))
SOURCES += \
integrity/integrity.c \
label/hints.c \
metadata/integrity_manip.c \
metadata/pv_list.c \
metadata/vdo_manip.c \
metadata/writecache_manip.c \
vdo/vdo.c \
writecache/writecache.c
endif
include $(top_builddir)/make.tmpl
$(SUBDIRS): $(LIB_STATIC)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
/*
* Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
* Copyright (C) 2004-2018 Red Hat, Inc. All rights reserved.
* Copyright (C) 2004-2016 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
@@ -83,7 +83,6 @@ struct lv_activate_opts {
* flags are persistent in udev db for any spurious event
* that follows. */
unsigned resuming; /* Set when resuming after a suspend. */
const struct logical_volume *component_lv;
};
void set_activation(int activation, int silent);
@@ -91,6 +90,7 @@ int activation(void);
int driver_version(char *version, size_t size);
int library_version(char *version, size_t size);
int lvm1_present(struct cmd_context *cmd);
int module_present(struct cmd_context *cmd, const char *target_name);
int target_present_version(struct cmd_context *cmd, const char *target_name,
@@ -134,8 +134,6 @@ int lv_info(struct cmd_context *cmd, const struct logical_volume *lv, int use_la
struct lvinfo *info, int with_open_count, int with_read_ahead);
int lv_info_by_lvid(struct cmd_context *cmd, const char *lvid_s, int use_layer,
struct lvinfo *info, int with_open_count, int with_read_ahead);
int lv_info_with_name_check(struct cmd_context *cmd, const struct logical_volume *lv,
int use_layer, struct lvinfo *info);
/*
* Returns 1 if lv_info_and_seg_status structure has been populated,
@@ -178,11 +176,13 @@ int lv_raid_sync_action(const struct logical_volume *lv, char **sync_action);
int lv_raid_message(const struct logical_volume *lv, const char *msg);
int lv_cache_status(const struct logical_volume *cache_lv,
struct lv_status_cache **status);
int lv_thin_pool_percent(const struct logical_volume *lv, int metadata,
dm_percent_t *percent);
int lv_thin_percent(const struct logical_volume *lv, int mapped,
dm_percent_t *percent);
int lv_thin_pool_transaction_id(const struct logical_volume *lv,
uint64_t *transaction_id);
int lv_thin_device_id(const struct logical_volume *lv, uint32_t *device_id);
int lv_thin_status(const struct logical_volume *lv, int flush,
struct lv_status_thin **status);
int lv_thin_pool_status(const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **status);
/*
* Return number of LVs in the VG that are active.
@@ -198,11 +198,6 @@ int lv_is_active_exclusive(const struct logical_volume *lv);
int lv_is_active_exclusive_locally(const struct logical_volume *lv);
int lv_is_active_exclusive_remotely(const struct logical_volume *lv);
/* Check is any component LV is active */
const struct logical_volume *lv_component_is_active(const struct logical_volume *lv);
const struct logical_volume *lv_holder_is_active(const struct logical_volume *lv);
int deactivate_lv_with_sub_lv(const struct logical_volume *lv);
int lv_has_target_type(struct dm_pool *mem, const struct logical_volume *lv,
const char *layer, const char *target_type);
@@ -211,9 +206,9 @@ int monitor_dev_for_events(struct cmd_context *cmd, const struct logical_volume
#ifdef DMEVENTD
# include "libdevmapper-event.h"
char *get_monitor_dso_path(struct cmd_context *cmd, int id);
char *get_monitor_dso_path(struct cmd_context *cmd, const char *libpath);
int target_registered_with_dmeventd(struct cmd_context *cmd, const char *dso,
const struct logical_volume *lv, int *pending, int *monitored);
const struct logical_volume *lv, int *pending);
int target_register_events(struct cmd_context *cmd, const char *dso, const struct logical_volume *lv,
int evmask __attribute__((unused)), int set, int timeout);
#endif
@@ -234,7 +229,6 @@ struct dev_usable_check_params {
unsigned int check_suspended:1;
unsigned int check_error_target:1;
unsigned int check_reserved:1;
unsigned int check_lv:1;
};
/*

File diff suppressed because it is too large.


@@ -27,7 +27,7 @@ struct dm_info;
struct device;
struct lv_seg_status;
int read_only_lv(const struct logical_volume *lv, const struct lv_activate_opts *laopts, const char *layer);
int read_only_lv(const struct logical_volume *lv, const struct lv_activate_opts *laopts);
/*
* Constructor and destructor.
@@ -47,7 +47,7 @@ void dev_manager_exit(void);
*/
int dev_manager_info(struct cmd_context *cmd, const struct logical_volume *lv,
const char *layer,
int with_open_count, int with_read_ahead, int with_name_check,
int with_open_count, int with_read_ahead,
struct dm_info *dminfo, uint32_t *read_ahead,
struct lv_seg_status *seg_status);
@@ -66,15 +66,19 @@ int dev_manager_raid_message(struct dev_manager *dm,
int dev_manager_cache_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct lv_status_cache **status);
int dev_manager_thin_status(struct dev_manager *dm,
const struct logical_volume *lv, int flush,
struct lv_status_thin **status);
int dev_manager_thin_pool_status(struct dev_manager *dm,
const struct logical_volume *lv,
struct dm_status_thin_pool **status,
int flush);
int dev_manager_thin_pool_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int metadata, dm_percent_t *percent);
int dev_manager_thin_percent(struct dev_manager *dm,
const struct logical_volume *lv,
int mapped, dm_percent_t *percent);
int dev_manager_thin_device_id(struct dev_manager *dm,
const struct logical_volume *lv,
uint32_t *device_id);
int dev_manager_thin_pool_status(struct dev_manager *dm,
const struct logical_volume *lv, int flush,
struct lv_status_thin_pool **status);
int dev_manager_suspend(struct dev_manager *dm, const struct logical_volume *lv,
struct lv_activate_opts *laopts, int lockfs, int flush_required);
int dev_manager_activate(struct dev_manager *dm, const struct logical_volume *lv,


@@ -313,7 +313,7 @@ struct fs_op_parms {
char *lv_name;
char *dev;
char *old_lv_name;
char names[];
char names[0];
};
static void _store_str(char **pos, char **ptr, const char *str)
@@ -441,7 +441,7 @@ static int _fs_op(fs_op_t type, const char *dev_dir, const char *vg_name,
const char *lv_name, const char *dev, const char *old_lv_name,
int check_udev)
{
if (prioritized_section()) {
if (critical_section()) {
if (!_stack_fs_op(type, dev_dir, vg_name, lv_name, dev,
old_lv_name, check_udev))
return_0;
@@ -487,8 +487,7 @@ int fs_rename_lv(const struct logical_volume *lv, const char *dev,
void fs_unlock(void)
{
/* Do not allow syncing device name with suspended devices */
if (!dm_get_suspended_counter()) {
if (!critical_section()) {
log_debug_activation("Syncing device names");
/* Wait for all processed udev devices */
if (!dm_udev_wait(_fs_cookie))

lib/cache/lvmcache.c

File diff suppressed because it is too large.

lib/cache/lvmcache.h

@@ -59,17 +59,22 @@ struct lvmcache_vgsummary {
const char *lock_type;
uint32_t mda_checksum;
size_t mda_size;
int zero_offset;
int seqno;
};
int lvmcache_init(struct cmd_context *cmd);
int lvmcache_init(void);
void lvmcache_allow_reads_with_lvmetad(void);
void lvmcache_destroy(struct cmd_context *cmd, int retain_orphans, int reset);
/*
* lvmcache_label_scan() will scan labels the first time it's
* called, but not on subsequent calls, unless
* lvmcache_force_next_label_scan() is called first
* to force the next lvmcache_label_scan() to scan again.
*/
void lvmcache_force_next_label_scan(void);
int lvmcache_label_scan(struct cmd_context *cmd);
int lvmcache_label_rescan_vg(struct cmd_context *cmd, const char *vgname, const char *vgid, int open_rw);
int lvmcache_label_rescan_vg(struct cmd_context *cmd, const char *vgname, const char *vgid);
/* Add/delete a device */
struct lvmcache_info *lvmcache_add(struct labeller *labeller, const char *pvid,
@@ -78,7 +83,6 @@ struct lvmcache_info *lvmcache_add(struct labeller *labeller, const char *pvid,
uint32_t vgstatus);
int lvmcache_add_orphan_vginfo(const char *vgname, struct format_type *fmt);
void lvmcache_del(struct lvmcache_info *info);
void lvmcache_del_dev(struct device *dev);
/* Update things */
int lvmcache_update_vgname_and_id(struct lvmcache_info *info,
@@ -132,6 +136,7 @@ struct dm_list *lvmcache_get_pvids(struct cmd_context *cmd, const char *vgname,
void lvmcache_drop_metadata(const char *vgname, int drop_precommitted);
void lvmcache_commit_metadata(const char *vgname);
int lvmcache_pvid_is_locked(const char *pvid);
int lvmcache_fid_add_mdas(struct lvmcache_info *info, struct format_instance *fid,
const char *id, int id_len);
int lvmcache_fid_add_mdas_pv(struct lvmcache_info *info, struct format_instance *fid);
@@ -155,8 +160,6 @@ uint32_t lvmcache_ext_flags(struct lvmcache_info *info);
const struct format_type *lvmcache_fmt(struct lvmcache_info *info);
struct label *lvmcache_get_label(struct lvmcache_info *info);
struct label *lvmcache_get_dev_label(struct device *dev);
int lvmcache_has_dev_info(struct device *dev);
void lvmcache_update_pv(struct lvmcache_info *info, struct physical_volume *pv,
const struct format_type *fmt);
@@ -180,6 +183,7 @@ int lvmcache_foreach_pv(struct lvmcache_vginfo *vginfo,
uint64_t lvmcache_device_size(struct lvmcache_info *info);
void lvmcache_set_device_size(struct lvmcache_info *info, uint64_t size);
struct device *lvmcache_device(struct lvmcache_info *info);
void lvmcache_make_valid(struct lvmcache_info *info);
int lvmcache_is_orphan(struct lvmcache_info *info);
int lvmcache_uncertain_ownership(struct lvmcache_info *info);
unsigned lvmcache_mda_count(struct lvmcache_info *info);
@@ -188,8 +192,6 @@ uint64_t lvmcache_smallest_mda_size(struct lvmcache_info *info);
int lvmcache_found_duplicate_pvs(void);
void lvmcache_pvscan_duplicate_check(struct cmd_context *cmd);
int lvmcache_get_unused_duplicate_devs(struct cmd_context *cmd, struct dm_list *head);
int vg_has_duplicate_pvs(struct volume_group *vg);
@@ -209,27 +211,12 @@ void lvmcache_remove_unchosen_duplicate(struct device *dev);
int lvmcache_pvid_in_unchosen_duplicates(const char *pvid);
void lvmcache_save_suspended_vg(struct volume_group *vg, int precommitted);
struct volume_group *lvmcache_get_suspended_vg(const char *vgid);
void lvmcache_drop_suspended_vg(struct volume_group *vg);
int lvmcache_get_vg_devs(struct cmd_context *cmd,
struct lvmcache_vginfo *vginfo,
struct dm_list *devs);
struct lvmcache_vginfo *vginfo,
struct dm_list *devs);
void lvmcache_set_independent_location(const char *vgname);
int lvmcache_scan_mismatch(struct cmd_context *cmd, const char *vgname, const char *vgid);
/*
* These are clvmd-specific functions and are not related to lvmcache.
* FIXME: rename these with a clvm_ prefix in place of lvmcache_
*/
void lvmcache_save_vg(struct volume_group *vg, int precommitted);
struct volume_group *lvmcache_get_saved_vg(const char *vgid, int precommitted);
struct volume_group *lvmcache_get_saved_vg_latest(const char *vgid);
void lvmcache_drop_saved_vgid(const char *vgid);
uint64_t lvmcache_max_metadata_size(void);
void lvmcache_save_metadata_size(uint64_t val);
void lvmcache_get_mdas(struct cmd_context *cmd,
const char *vgname, const char *vgid,
struct dm_list *mda_list);
#endif

lib/cache/lvmetad.c

@@ -31,13 +31,14 @@ static daemon_handle _lvmetad = { .error = 0 };
static int _lvmetad_use = 0;
static int _lvmetad_connected = 0;
static int _lvmetad_daemon_pid = 0;
static int _was_connected = 0;
static char *_lvmetad_token = NULL;
static const char *_lvmetad_socket = NULL;
static struct cmd_context *_lvmetad_cmd = NULL;
static int64_t _lvmetad_update_timeout;
static int _found_lvm1_metadata = 0;
static struct volume_group *_lvmetad_pvscan_vg(struct cmd_context *cmd, struct volume_group *vg, const char *vgid, struct format_type *fmt);
static uint64_t _monotonic_seconds(void)
@@ -115,10 +116,8 @@ static int _log_debug_inequality(const char *name, struct dm_config_node *a, str
void lvmetad_disconnect(void)
{
if (_lvmetad_connected) {
if (_lvmetad_connected)
daemon_close(_lvmetad);
_was_connected = 1;
}
_lvmetad_connected = 0;
_lvmetad_use = 0;
@@ -248,7 +247,7 @@ int lvmetad_token_matches(struct cmd_context *cmd)
const char *daemon_token;
unsigned int delay_usec = 0;
unsigned int wait_sec = 0;
uint64_t now = 0, wait_start = 0, last_warn = 0;
uint64_t now = 0, wait_start = 0;
int ret = 1;
wait_sec = (unsigned int)_lvmetad_update_timeout;
@@ -294,15 +293,12 @@ retry:
wait_start = now;
if (now - wait_start > wait_sec) {
log_warn("pvscan[%d] WARNING: Not using lvmetad after %u sec lvmetad_update_wait_time.", getpid(), wait_sec);
log_warn("WARNING: Not using lvmetad after %u sec lvmetad_update_wait_time.", wait_sec);
goto fail;
}
if (now - last_warn >= 10) {
last_warn = now;
log_warn("pvscan[%d] WARNING: lvmetad is being updated, retrying (setup) for %u more seconds.",
getpid(), wait_sec - (unsigned int)(now - wait_start));
}
log_warn("WARNING: lvmetad is being updated, retrying (setup) for %u more seconds.",
wait_sec - (unsigned int)(now - wait_start));
/* Delay a random period between 1 and 2 seconds. */
delay_usec = 1000000 + lvm_even_rand(&_lvmetad_cmd->rand_seed, 1000000);
@@ -316,7 +312,6 @@ retry:
* The caller should do a disk scan to populate lvmetad.
*/
if (!strcmp(daemon_token, "none")) {
log_debug_lvmetad("lvmetad initialization needed.");
ret = 0;
goto out;
}
@@ -328,16 +323,10 @@ retry:
* our global filter.
*/
if (strcmp(daemon_token, _lvmetad_token)) {
log_debug_lvmetad("lvmetad initialization needed for different filter.");
ret = 0;
goto out;
}
if (wait_start)
log_debug_lvmetad("lvmetad initialized during wait.");
else
log_debug_lvmetad("lvmetad initialized previously.");
out:
daemon_reply_destroy(reply);
return ret;
@@ -562,15 +551,9 @@ static int _token_update(int *replaced_update)
daemon_reply reply;
const char *token_expected;
const char *prev_token;
const char *reply_str;
int update_pid;
int ending_our_update;
unsigned int wait_sec = 0;
uint64_t now = 0, wait_start = 0;
wait_sec = (unsigned int)_lvmetad_update_timeout;
unsigned int delay_usec = 0;
retry:
log_debug_lvmetad("Sending lvmetad token_update %s", _lvmetad_token);
reply = _lvmetad_send(NULL, "token_update", NULL);
@@ -584,45 +567,22 @@ retry:
}
update_pid = (int)daemon_reply_int(reply, "update_pid", 0);
reply_str = daemon_reply_str(reply, "response", "");
if (!strcmp(reply_str, "token_updating")) {
daemon_reply_destroy(reply);
if (!(now = _monotonic_seconds())) {
log_print_unless_silent("_monotonic_seconds error");
return 0;
}
if (!wait_start)
wait_start = now;
if (now - wait_start <= wait_sec) {
log_warn("lvmetad is being updated, retry for %u more seconds.",
wait_sec - (unsigned int)(now - wait_start));
delay_usec = 1000000 + lvm_even_rand(&_lvmetad_cmd->rand_seed, 1000000);
usleep(delay_usec);
goto retry;
}
log_print_unless_silent("Not using lvmetad after %u sec lvmetad_update_wait_time, no more try.", wait_sec);
return 0;
}
/*
* A mismatch can only happen when this command attempts to set the
* token to filter:<hash> at the end of its update, but the update has
* been preempted in lvmetad by a new one (from update_pid).
*/
if (!strcmp(reply_str, "token_mismatch")) {
if (!strcmp(daemon_reply_str(reply, "response", ""), "token_mismatch")) {
token_expected = daemon_reply_str(reply, "expected", "");
ending_our_update = strcmp(_lvmetad_token, LVMETAD_TOKEN_UPDATE_IN_PROGRESS);
log_print_unless_silent("Received token update mismatch expected \"%s\" our token \"%s\" update_pid %d our pid %d",
log_debug_lvmetad("Received token update mismatch expected \"%s\" our token \"%s\" update_pid %d our pid %d",
token_expected, _lvmetad_token, update_pid, getpid());
if (ending_our_update && (update_pid != getpid())) {
log_print_unless_silent("WARNING: lvmetad was updated by another command (pid %d).", update_pid);
log_warn("WARNING: lvmetad was updated by another command (pid %d).", update_pid);
} else {
/*
* Shouldn't happen.
@@ -639,7 +599,7 @@ retry:
return 0;
}
if (strcmp(reply_str, "OK")) {
if (strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
log_error("Failed response from lvmetad for token update.");
daemon_reply_destroy(reply);
return 0;
@@ -666,7 +626,6 @@ static int _lvmetad_handle_reply(daemon_reply reply, const char *id, const char
{
const char *token_expected;
const char *action;
const char *reply_str;
int action_modifies = 0;
int daemon_in_update;
int we_are_in_update;
@@ -711,8 +670,8 @@ static int _lvmetad_handle_reply(daemon_reply reply, const char *id, const char
/*
* Errors related to token mismatch.
*/
reply_str = daemon_reply_str(reply, "response", "");
if (!strcmp(reply_str, "token_mismatch")) {
if (!strcmp(daemon_reply_str(reply, "response", ""), "token_mismatch")) {
token_expected = daemon_reply_str(reply, "expected", "");
update_pid = (int)daemon_reply_int(reply, "update_pid", 0);
@@ -810,14 +769,14 @@ static int _lvmetad_handle_reply(daemon_reply reply, const char *id, const char
*/
/* All OK? */
if (!strcmp(reply_str, "OK")) {
if (!strcmp(daemon_reply_str(reply, "response", ""), "OK")) {
if (found)
*found = 1;
return 1;
}
/* Unknown device permitted? */
if (found && !strcmp(reply_str, "unknown")) {
if (found && !strcmp(daemon_reply_str(reply, "response", ""), "unknown")) {
log_very_verbose("Request to %s %s%sin lvmetad did not find any matching object.",
action, object, *object ? " " : "");
*found = 0;
@@ -825,7 +784,7 @@ static int _lvmetad_handle_reply(daemon_reply reply, const char *id, const char
}
/* Multiple VGs with the same name were found. */
if (found && !strcmp(reply_str, "multiple")) {
if (found && !strcmp(daemon_reply_str(reply, "response", ""), "multiple")) {
log_very_verbose("Request to %s %s%sin lvmetad found multiple matching objects.",
action, object, *object ? " " : "");
if (found)
@@ -1560,7 +1519,7 @@ int lvmetad_vg_list_to_lvmcache(struct cmd_context *cmd)
return 1;
}
struct extract_dl_baton {
struct _extract_dl_baton {
int i;
struct dm_config_tree *cft;
struct dm_config_node *pre_sib;
@@ -1568,7 +1527,7 @@ struct extract_dl_baton {
static int _extract_mda(struct metadata_area *mda, void *baton)
{
struct extract_dl_baton *b = baton;
struct _extract_dl_baton *b = baton;
struct dm_config_node *cn;
char id[32];
@@ -1589,7 +1548,7 @@ static int _extract_mda(struct metadata_area *mda, void *baton)
static int _extract_disk_location(const char *name, struct disk_locn *dl, void *baton)
{
struct extract_dl_baton *b = baton;
struct _extract_dl_baton *b = baton;
struct dm_config_node *cn;
char id[32];
@@ -1624,7 +1583,7 @@ static int _extract_ba(struct disk_locn *ba, void *baton)
static int _extract_mdas(struct lvmcache_info *info, struct dm_config_tree *cft,
struct dm_config_node *pre_sib)
{
struct extract_dl_baton baton = { .cft = cft };
struct _extract_dl_baton baton = { .cft = cft };
if (!lvmcache_foreach_mda(info, &_extract_mda, &baton))
return 0;
@@ -1651,7 +1610,7 @@ int lvmetad_pv_found(struct cmd_context *cmd, const struct id *pvid, struct devi
struct dm_config_tree *pvmeta, *vgmeta;
const char *status = NULL, *vgname = NULL;
int64_t changed = 0;
int result, seqno_after;
int result;
if (!lvmetad_used() || test_mode())
return 1;
@@ -1716,12 +1675,10 @@ int lvmetad_pv_found(struct cmd_context *cmd, const struct id *pvid, struct devi
result = _lvmetad_handle_reply(reply, "pv_found", uuid, NULL);
if (vg && result) {
seqno_after = daemon_reply_int(reply, "seqno_after", -1);
if ((seqno_after != (int) vg->seqno) ||
(seqno_after != daemon_reply_int(reply, "seqno_before", -1)))
log_warn("WARNING: Inconsistent metadata found for VG %s", vg->name);
}
if (vg && result &&
(daemon_reply_int(reply, "seqno_after", -1) != vg->seqno ||
daemon_reply_int(reply, "seqno_after", -1) != daemon_reply_int(reply, "seqno_before", -1)))
log_warn("WARNING: Inconsistent metadata found for VG %s", vg->name);
if (result && found_vgnames) {
status = daemon_reply_str(reply, "status", NULL);
@@ -1729,13 +1686,6 @@ int lvmetad_pv_found(struct cmd_context *cmd, const struct id *pvid, struct devi
changed = daemon_reply_int(reply, "changed", 0);
}
if (vg && vg->system_id && vg->system_id[0] &&
cmd->system_id && cmd->system_id[0] &&
strcmp(vg->system_id, cmd->system_id)) {
log_debug_lvmetad("Ignore foreign VG %s on %s", vg->name , dev_name(dev));
goto out;
}
/*
* If lvmetad now sees all PVs in the VG, it returned the
* "complete" status string. Add this VG name to the list
@@ -1766,7 +1716,7 @@ int lvmetad_pv_found(struct cmd_context *cmd, const struct id *pvid, struct devi
log_error("str_list_add failed");
}
}
out:
daemon_reply_destroy(reply);
return result;
@@ -1817,10 +1767,14 @@ struct _lvmetad_pvscan_baton {
static int _lvmetad_pvscan_single(struct metadata_area *mda, void *baton)
{
struct _lvmetad_pvscan_baton *b = baton;
struct device *mda_dev = mda_get_device(mda);
struct label_read_data *ld;
struct volume_group *vg;
ld = get_label_read_data(b->cmd, mda_dev);
if (mda_is_ignored(mda) ||
!(vg = mda->ops->vg_read(b->fid, "", mda, NULL, NULL)))
!(vg = mda->ops->vg_read(b->fid, "", mda, ld, NULL, NULL)))
return 1;
/* FIXME Also ensure contents match etc. */
@@ -1840,12 +1794,16 @@ static int _lvmetad_pvscan_single(struct metadata_area *mda, void *baton)
static int _lvmetad_pvscan_vg_single(struct metadata_area *mda, void *baton)
{
struct _lvmetad_pvscan_baton *b = baton;
struct device *mda_dev = mda_get_device(mda);
struct label_read_data *ld;
struct volume_group *vg = NULL;
if (mda_is_ignored(mda))
return 1;
if (!(vg = mda->ops->vg_read(b->fid, "", mda, NULL, NULL)))
ld = get_label_read_data(b->cmd, mda_dev);
if (!(vg = mda->ops->vg_read(b->fid, "", mda, ld, NULL, NULL)))
return 1;
if (!b->vg)
@@ -1925,7 +1883,7 @@ static struct volume_group *_lvmetad_pvscan_vg(struct cmd_context *cmd, struct v
*/
log_debug_lvmetad("Rescan VG %s scanning data from devs in previous metadata.", vg->name);
label_scan_devs(cmd, cmd->full_filter, &pvs_scan);
label_scan_devs(cmd, &pvs_scan);
/*
* Check if any pvs_scan entries are no longer PVs.
@@ -2192,7 +2150,7 @@ static struct volume_group *_lvmetad_pvscan_vg(struct cmd_context *cmd, struct v
if (found_new_pvs)
log_debug_lvmetad("Rescan VG %s scanning all devs to find new PVs.", vg->name);
label_scan(cmd);
label_scan_force(cmd);
if (!(vginfo = lvmcache_vginfo_from_vgname(vg->name, NULL))) {
log_error("VG %s vg info not found after rescanning devices.", vg->name);
@@ -2324,6 +2282,18 @@ int lvmetad_pvscan_single(struct cmd_context *cmd, struct device *dev,
if (!baton.fid)
goto_bad;
if (fmt->features & FMT_OBSOLETE) {
fmt->ops->destroy_instance(baton.fid);
log_warn("WARNING: Disabling lvmetad cache which does not support obsolete (lvm1) metadata.");
lvmetad_set_disabled(cmd, LVMETAD_DISABLE_REASON_LVM1);
_found_lvm1_metadata = 1;
/*
* return 1 (success) so that we'll continue to populate lvmetad
* instead of leaving the update incomplete.
*/
return 1;
}
lvmcache_foreach_mda(info, _lvmetad_pvscan_single, &baton);
if (!baton.vg)
@@ -2367,10 +2337,10 @@ bad:
* generally revert disk scanning and not use lvmetad.
*/
int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list *found_vgnames)
int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait)
{
struct device_list *devl, *devl2;
struct dm_list scan_devs;
struct dev_iter *iter;
struct device *dev;
daemon_reply reply;
char *future_token;
const char *reason;
@@ -2386,8 +2356,6 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
}
retry:
dm_list_init(&scan_devs);
/*
* If another update is in progress, delay to allow it to finish,
* rather than interrupting it with our own update.
@@ -2397,11 +2365,26 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
replacing_other_update = 1;
}
label_scan(cmd);
if (lvmcache_found_duplicate_pvs()) {
log_warn("WARNING: Scan found duplicate PVs.");
return 0;
}
log_verbose("Scanning all devices to update lvmetad.");
if (!(iter = dev_iter_create(cmd->lvmetad_filter, 1))) {
log_error("dev_iter creation failed");
return 0;
}
future_token = _lvmetad_token;
_lvmetad_token = (char *) LVMETAD_TOKEN_UPDATE_IN_PROGRESS;
if (!_token_update(&replaced_update)) {
log_error("Failed to start lvmetad update.");
log_error("Failed to update lvmetad which had an update in progress.");
dev_iter_destroy(iter);
_lvmetad_token = future_token;
return 0;
}
@@ -2417,18 +2400,16 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
if (do_wait && !retries) {
retries = 1;
log_warn("WARNING: lvmetad update in progress, retrying update.");
dev_iter_destroy(iter);
_lvmetad_token = future_token;
goto retry;
}
log_warn("WARNING: lvmetad update in progress, skipping update.");
dev_iter_destroy(iter);
_lvmetad_token = future_token;
return 0;
}
log_verbose("Scanning all devices to initialize lvmetad.");
label_scan_pvscan_all(cmd, &scan_devs);
log_debug_lvmetad("Telling lvmetad to clear its cache");
reply = _lvmetad_send(cmd, "pv_clear_all", NULL);
if (!_lvmetad_handle_reply(reply, "pv_clear_all", "", NULL))
@@ -2438,24 +2419,15 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
was_silent = silent_mode();
init_silent(1);
log_debug_lvmetad("Sending %d devices to lvmetad.", dm_list_size(&scan_devs));
dm_list_iterate_items_safe(devl, devl2, &scan_devs) {
while ((dev = dev_iter_get(iter))) {
if (sigint_caught()) {
ret = 0;
stack;
break;
}
dm_list_del(&devl->list);
ret = lvmetad_pvscan_single(cmd, devl->dev, found_vgnames, NULL);
label_scan_invalidate(devl->dev);
dm_free(devl);
if (!ret) {
if (!lvmetad_pvscan_single(cmd, dev, NULL, NULL)) {
ret = 0;
stack;
break;
}
@@ -2463,6 +2435,8 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
init_silent(was_silent);
dev_iter_destroy(iter);
_lvmetad_token = future_token;
/*
@@ -2480,17 +2454,12 @@ int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list
return 0;
}
/* This will disable lvmetad if label scan found duplicates. */
lvmcache_pvscan_duplicate_check(cmd);
if (lvmcache_found_duplicate_pvs()) {
log_warn("WARNING: Scan found duplicate PVs.");
return 0;
}
/*
* If lvmetad is disabled, and no duplicate PVs were seen, then re-enable lvmetad.
* If lvmetad is disabled, and no lvm1 metadata was seen and no
* duplicate PVs were seen, then re-enable lvmetad.
*/
if (lvmetad_is_disabled(cmd, &reason) && !lvmcache_found_duplicate_pvs()) {
if (lvmetad_is_disabled(cmd, &reason) &&
!lvmcache_found_duplicate_pvs() && !_found_lvm1_metadata) {
log_debug_lvmetad("Enabling lvmetad which was previously disabled.");
lvmetad_clear_disabled(cmd);
}
@@ -2585,18 +2554,11 @@ static int _lvmetad_get_pv_cache_list(struct cmd_context *cmd, struct dm_list *p
*/
static void _update_pv_in_udev(struct cmd_context *cmd, dev_t devt)
{
/*
* FIXME: this is diabled as part of removing dev_opens
* to integrate bcache. If this is really needed, we
* can do a separate open/close here.
*/
log_debug_devs("SKIP device %d:%d open to update udev",
(int)MAJOR(devt), (int)MINOR(devt));
#if 0
struct device *dev;
log_debug_devs("device %d:%d open to update udev",
(int)MAJOR(devt), (int)MINOR(devt));
if (!(dev = dev_cache_get_by_devt(devt, cmd->lvmetad_filter))) {
log_error("_update_pv_in_udev no dev found");
return;
@@ -2609,7 +2571,6 @@ static void _update_pv_in_udev(struct cmd_context *cmd, dev_t devt)
if (!dev_close(dev))
stack;
#endif
}
/*
@@ -2792,7 +2753,7 @@ void lvmetad_validate_global_cache(struct cmd_context *cmd, int force)
* we rescanned for the token, and the time we acquired the global
* lock.)
*/
if (!lvmetad_pvscan_all_devs(cmd, 1, NULL)) {
if (!lvmetad_pvscan_all_devs(cmd, 1)) {
log_warn("WARNING: Not using lvmetad because cache update failed.");
lvmetad_make_unused(cmd);
return;
@@ -3021,49 +2982,20 @@ int lvmetad_vg_is_foreign(struct cmd_context *cmd, const char *vgname, const cha
*/
void lvmetad_set_disabled(struct cmd_context *cmd, const char *reason)
{
daemon_handle tmph = { .error = 0 };
daemon_reply reply;
int tmp_con = 0;
/*
* If we were using lvmetad at the start of the command, but are not
* now, then _was_connected should still be set. In this case we
* want to make a temp connection just to disable it.
*/
if (!_lvmetad_use) {
if (_was_connected) {
/* Create a special temp connection just to send disable */
tmph = lvmetad_open(_lvmetad_socket);
if (tmph.socket_fd < 0 || tmph.error) {
log_warn("Failed to connect to lvmetad to disable.");
return;
}
tmp_con = 1;
} else {
/* We were never using lvmetad, don't start now. */
return;
}
}
if (!_lvmetad_use)
return;
log_debug_lvmetad("Sending lvmetad disabled %s", reason);
if (tmp_con)
reply = daemon_send_simple(tmph, "set_global_info",
reply = daemon_send_simple(_lvmetad, "set_global_info",
"token = %s", "skip",
"global_disable = " FMTd64, (int64_t)1,
"disable_reason = %s", reason,
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", get_cmd_name(),
NULL);
else
reply = daemon_send_simple(_lvmetad, "set_global_info",
"token = %s", "skip",
"global_disable = " FMTd64, (int64_t)1,
"disable_reason = %s", reason,
"pid = " FMTd64, (int64_t)getpid(),
"cmd = %s", get_cmd_name(),
NULL);
if (reply.error)
log_error("Failed to send message to lvmetad %d", reply.error);
@@ -3071,9 +3003,6 @@ void lvmetad_set_disabled(struct cmd_context *cmd, const char *reason)
log_error("Failed response from lvmetad.");
daemon_reply_destroy(reply);
if (tmp_con)
daemon_close(tmph);
}
void lvmetad_clear_disabled(struct cmd_context *cmd)
@@ -3138,6 +3067,9 @@ int lvmetad_is_disabled(struct cmd_context *cmd, const char **reason)
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_REPAIR)) {
*reason = "a repair command was run";
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_LVM1)) {
*reason = "LVM1 metadata was found";
} else if (strstr(reply_reason, LVMETAD_DISABLE_REASON_DUPLICATES)) {
*reason = "duplicate PVs were found";

lib/cache/lvmetad.h

@@ -17,8 +17,6 @@
#include "config-util.h"
#include <stdint.h>
struct volume_group;
struct cmd_context;
struct dm_config_tree;
@@ -151,7 +149,7 @@ int lvmetad_pvscan_single(struct cmd_context *cmd, struct device *dev,
struct dm_list *found_vgnames,
struct dm_list *changed_vgnames);
int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list *found_vgnames);
int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait);
int lvmetad_vg_clear_outdated_pvs(struct volume_group *vg);
void lvmetad_validate_global_cache(struct cmd_context *cmd, int force);
@@ -165,48 +163,38 @@ void lvmetad_clear_disabled(struct cmd_context *cmd);
# else /* LVMETAD_SUPPORT */
static inline int lvmetad_connect(struct cmd_context *cmd) {return 0;}
static inline void lvmetad_disconnect(void) {}
static inline void lvmetad_make_unused(struct cmd_context *cmd) {}
static inline int lvmetad_used(void) {return 0;}
static inline void lvmetad_set_socket(const char *thing) {}
static inline int lvmetad_socket_present(void) {return 0;}
static inline int lvmetad_pidfile_present(void) {return 0;}
static inline void lvmetad_set_token(const struct dm_config_value *filter) {}
static inline void lvmetad_release_token(void) {}
static inline int lvmetad_vg_update_pending(struct volume_group *vg) {return 1;}
static inline int lvmetad_vg_update_finish(struct volume_group *vg) {return 1;}
static inline int lvmetad_vg_remove_pending(struct volume_group *vg) {return 1;}
static inline int lvmetad_vg_remove_finish(struct volume_group *vg) {return 1;}
static inline int lvmetad_pv_found(struct cmd_context *cmd, const struct id *pvid, struct device *dev,
const struct format_type *fmt, uint64_t label_sector,
struct volume_group *vg,
struct dm_list *found_vgnames,
struct dm_list *changed_vgnames) {return 1;}
static inline int lvmetad_pv_gone(dev_t devno, const char *pv_name) {return 1;}
static inline int lvmetad_pv_gone_by_dev(struct device *dev) {return 1;}
static inline int lvmetad_pv_list_to_lvmcache(struct cmd_context *cmd) {return 1;}
static inline int lvmetad_pv_lookup(struct cmd_context *cmd, struct id pvid, int *found) {return 0;}
static inline int lvmetad_pv_lookup_by_dev(struct cmd_context *cmd, struct device *dev, int *found) {return 0;}
static inline int lvmetad_vg_list_to_lvmcache(struct cmd_context *cmd) {return 1;}
static inline int lvmetad_get_vgnameids(struct cmd_context *cmd, struct dm_list *vgnameids) {return 0;}
static inline struct volume_group *lvmetad_vg_lookup(struct cmd_context *cmd,
const char *vgname, const char *vgid) {return NULL;}
static inline int lvmetad_pvscan_single(struct cmd_context *cmd, struct device *dev,
struct dm_list *found_vgnames,
struct dm_list *changed_vgnames) {return 0;}
static inline int lvmetad_pvscan_all_devs(struct cmd_context *cmd, int do_wait, struct dm_list *found_vgnames) {return 0;}
static inline int lvmetad_vg_clear_outdated_pvs(struct volume_group *vg) {return 0;}
static inline void lvmetad_validate_global_cache(struct cmd_context *cmd, int force) {}
static inline int lvmetad_token_matches(struct cmd_context *cmd) {return 1;}
static inline int lvmetad_vg_is_foreign(struct cmd_context *cmd, const char *vgname, const char *vgid) {return 0;}
static inline int lvmetad_is_disabled(struct cmd_context *cmd, const char **reason) {return 0;}
static inline void lvmetad_set_disabled(struct cmd_context *cmd, const char *reason) {}
static inline void lvmetad_clear_disabled(struct cmd_context *cmd) {}
# define lvmetad_disconnect() do { } while (0)
# define lvmetad_connect(cmd) (0)
# define lvmetad_make_unused(cmd) do { } while (0)
# define lvmetad_used() (0)
# define lvmetad_set_socket(a) do { } while (0)
# define lvmetad_socket_present() (0)
# define lvmetad_pidfile_present() (0)
# define lvmetad_set_token(a) do { } while (0)
# define lvmetad_release_token() do { } while (0)
# define lvmetad_vg_update(vg) (1)
# define lvmetad_vg_update_pending(vg) (1)
# define lvmetad_vg_update_finish(vg) (1)
# define lvmetad_vg_remove_pending(vg) (1)
# define lvmetad_vg_remove_finish(vg) (1)
# define lvmetad_pv_found(cmd, pvid, dev, fmt, label_sector, vg, found_vgnames, changed_vgnames) (1)
# define lvmetad_pv_gone(devno, pv_name) (1)
# define lvmetad_pv_gone_by_dev(dev) (1)
# define lvmetad_pv_list_to_lvmcache(cmd) (1)
# define lvmetad_pv_lookup(cmd, pvid, found) (0)
# define lvmetad_pv_lookup_by_dev(cmd, dev, found) (0)
# define lvmetad_vg_list_to_lvmcache(cmd) (1)
# define lvmetad_get_vgnameids(cmd, vgnameids) do { } while (0)
# define lvmetad_vg_lookup(cmd, vgname, vgid) (NULL)
# define lvmetad_pvscan_single(cmd, dev, found_vgnames, changed_vgnames) (0)
# define lvmetad_pvscan_all_devs(cmd, do_wait) (0)
# define lvmetad_vg_clear_outdated_pvs(vg) do { } while (0)
# define lvmetad_validate_global_cache(cmd, force) do { } while (0)
# define lvmetad_vg_is_foreign(cmd, vgname, vgid) (0)
# define lvmetad_token_matches(cmd) (1)
# define lvmetad_is_disabled(cmd, reason) (0)
# define lvmetad_set_disabled(cmd, reason) do { } while (0)
# define lvmetad_clear_disabled(cmd) do { } while (0)
# endif /* LVMETAD_SUPPORT */


@@ -1,4 +1,4 @@
# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
# Copyright (C) 2013-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
@@ -14,10 +14,11 @@ srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
SOURCES=\
vdo/status.c
SOURCES = cache.c
LIB_SHARED = liblvm2cache.$(LIB_SUFFIX)
LIB_VERSION = $(LIB_VERSION_LVM)
include $(top_builddir)/make.tmpl
LIB_NAME = libdevicemapper
LIB_STATIC = $(LIB_NAME).a
install: install_lvm2_plugin


@@ -36,9 +36,16 @@
#include "sharedlib.h"
#endif
#ifdef LVM1_INTERNAL
#include "format1.h"
#endif
#ifdef POOL_INTERNAL
#include "format_pool.h"
#endif
#include <locale.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/utsname.h>
#include <syslog.h>
#include <time.h>
@@ -276,8 +283,6 @@ static int _parse_debug_classes(struct cmd_context *cmd)
debug_classes |= LOG_CLASS_LVMPOLLD;
else if (!strcasecmp(cv->v.str, "dbus"))
debug_classes |= LOG_CLASS_DBUS;
else if (!strcasecmp(cv->v.str, "io"))
debug_classes |= LOG_CLASS_IO;
else
log_verbose("Unrecognised value for log/debug_classes: %s", cv->v.str);
}
@@ -333,8 +338,6 @@ static void _init_logging(struct cmd_context *cmd)
find_config_tree_bool(cmd, global_test_CFG, NULL);
init_test(cmd->default_settings.test);
init_use_aio(find_config_tree_bool(cmd, global_use_aio_CFG, NULL));
/* Settings for logging to file */
if (find_config_tree_bool(cmd, log_overwrite_CFG, NULL))
append = 0;
@@ -539,6 +542,8 @@ static int _process_config(struct cmd_context *cmd)
const struct dm_config_value *cv;
int64_t pv_min_kb;
int udev_disabled = 0;
int scan_size_kb;
int use_scan_cache;
char sysfs_dir[PATH_MAX];
if (!_check_config(cmd))
@@ -561,7 +566,7 @@ static int _process_config(struct cmd_context *cmd)
#ifdef DEVMAPPER_SUPPORT
dm_set_dev_dir(cmd->dev_dir);
if (!dm_set_uuid_prefix(UUID_PREFIX))
if (!dm_set_uuid_prefix("LVM-"))
return_0;
#endif
@@ -622,6 +627,34 @@ static int _process_config(struct cmd_context *cmd)
cmd->default_settings.udev_sync = udev_disabled ? 0 :
find_config_tree_bool(cmd, activation_udev_sync_CFG, NULL);
use_scan_cache = find_config_tree_bool(cmd, devices_scan_cache_CFG, NULL);
#ifdef AIO_SUPPORT
cmd->use_aio = find_config_tree_bool(cmd, devices_scan_async_CFG, NULL);
if (cmd->use_aio) {
cmd->use_scan_cache = 1;
if (!use_scan_cache)
log_warn("WARNING: Ignoring scan_cache setting because of scan_async.");
} else
cmd->use_scan_cache = use_scan_cache;
#else
cmd->use_scan_cache = use_scan_cache;
cmd->use_aio = 0;
if (find_config_tree_bool(cmd, devices_scan_async_CFG, NULL))
log_warn("WARNING: Ignoring scan_async setting, no async I/O support.");
#endif
scan_size_kb = find_config_tree_int(cmd, devices_scan_size_CFG, NULL);
if (!scan_size_kb || (scan_size_kb < 0) || (scan_size_kb % 4)) {
log_warn("WARNING: Ignoring invalid scan_size %d KB, using default %u KB.",
scan_size_kb, DEFAULT_SCAN_SIZE_KB);
log_warn("scan_size has units of KB and must be a multiple of 4 KB.");
scan_size_kb = DEFAULT_SCAN_SIZE_KB;
}
cmd->default_settings.scan_size_kb = scan_size_kb;
/*
* Set udev_fallback lazily on first use since it requires
* checking DM driver version which is an extra ioctl!
@@ -685,8 +718,6 @@ static int _process_config(struct cmd_context *cmd)
if (!_init_system_id(cmd))
return_0;
init_io_memory_size(find_config_tree_int(cmd, global_io_memory_size_CFG, NULL));
return 1;
}
@@ -1057,7 +1088,7 @@ static int _init_dev_cache(struct cmd_context *cmd)
return 1;
}
#define MAX_FILTERS 10
#define MAX_FILTERS 9
static struct dev_filter *_init_lvmetad_filter_chain(struct cmd_context *cmd)
{
@@ -1117,7 +1148,7 @@ static struct dev_filter *_init_lvmetad_filter_chain(struct cmd_context *cmd)
nr_filt++;
/* usable device filter. Required. */
if (!(filters[nr_filt] = usable_filter_create(cmd, cmd->dev_types,
if (!(filters[nr_filt] = usable_filter_create(cmd->dev_types,
lvmetad_used() ? FILTER_MODE_PRE_LVMETAD
: FILTER_MODE_NO_LVMETAD))) {
log_error("Failed to create usabled device filter");
@@ -1138,17 +1169,10 @@ static struct dev_filter *_init_lvmetad_filter_chain(struct cmd_context *cmd)
}
nr_filt++;
/* signature filter. Required. */
if (!(filters[nr_filt] = signature_filter_create(cmd->dev_types))) {
log_error("Failed to create signature device filter");
goto bad;
}
nr_filt++;
/* md component filter. Optional, non-critical. */
if (find_config_tree_bool(cmd, devices_md_component_detection_CFG, NULL)) {
init_md_filtering(1);
if ((filters[nr_filt] = md_filter_create(cmd, cmd->dev_types)))
if ((filters[nr_filt] = md_filter_create(cmd->dev_types)))
nr_filt++;
}
@@ -1237,7 +1261,7 @@ int init_filters(struct cmd_context *cmd, unsigned load_persistent_cache)
}
nr_filt++;
}
if (!(filter_components[nr_filt] = usable_filter_create(cmd, cmd->dev_types, FILTER_MODE_POST_LVMETAD))) {
if (!(filter_components[nr_filt] = usable_filter_create(cmd->dev_types, FILTER_MODE_POST_LVMETAD))) {
log_verbose("Failed to create usable device filter.");
goto bad;
}
@@ -1344,6 +1368,20 @@ static int _init_formats(struct cmd_context *cmd)
const struct dm_config_node *cn;
#endif
#ifdef LVM1_INTERNAL
if (!(fmt = init_lvm1_format(cmd)))
return 0;
fmt->library = NULL;
dm_list_add(&cmd->formats, &fmt->list);
#endif
#ifdef POOL_INTERNAL
if (!(fmt = init_pool_format(cmd)))
return 0;
fmt->library = NULL;
dm_list_add(&cmd->formats, &fmt->list);
#endif
#ifdef HAVE_LIBDL
/* Load any formats in shared libs if not static */
if (!is_static() &&
@@ -1466,7 +1504,6 @@ static int _init_segtypes(struct cmd_context *cmd)
struct segment_type *segtype;
struct segtype_library seglib = { .cmd = cmd, .lib = NULL };
struct segment_type *(*init_segtype_array[])(struct cmd_context *cmd) = {
init_linear_segtype,
init_striped_segtype,
init_zero_segtype,
init_error_segtype,
@@ -1638,6 +1675,7 @@ static void _init_rand(struct cmd_context *cmd)
static void _init_globals(struct cmd_context *cmd)
{
init_full_scan_done(0);
init_mirror_in_sync(0);
}
@@ -1761,8 +1799,6 @@ void destroy_config_context(struct cmd_context *cmd)
dm_pool_destroy(cmd->mem);
if (cmd->libmem)
dm_pool_destroy(cmd->libmem);
if (cmd->pending_delete_mem)
dm_pool_destroy(cmd->pending_delete_mem);
dm_free(cmd);
}
@@ -1770,8 +1806,6 @@ void destroy_config_context(struct cmd_context *cmd)
/*
* A "config context" is a very light weight toolcontext that
* is only used for reading config settings from lvm.conf.
*
* FIXME: this needs to go back to parametrized create_toolcontext()
*/
struct cmd_context *create_config_context(void)
{
@@ -1788,12 +1822,6 @@ struct cmd_context *create_config_context(void)
if (!(cmd->libmem = dm_pool_create("library", 4 * 1024)))
goto_out;
if (!(cmd->mem = dm_pool_create("command", 4 * 1024)))
goto out;
if (!(cmd->pending_delete_mem = dm_pool_create("pending_delete", 1024)))
goto_out;
dm_list_init(&cmd->config_files);
dm_list_init(&cmd->tags);
@@ -1824,7 +1852,7 @@ out:
}
/* Entry point */
struct cmd_context *create_toolcontext(unsigned is_clvmd,
struct cmd_context *create_toolcontext(unsigned is_long_lived,
const char *system_dir,
unsigned set_buffering,
unsigned threaded,
@@ -1851,8 +1879,7 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
log_error("Failed to allocate command context");
return NULL;
}
cmd->is_long_lived = is_clvmd;
cmd->is_clvmd = is_clvmd;
cmd->is_long_lived = is_long_lived;
cmd->threaded = threaded ? 1 : 0;
cmd->handles_missing_pvs = 0;
cmd->handles_unknown_segments = 0;
@@ -1871,13 +1898,7 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
#ifndef VALGRIND_POOL
/* Set in/out stream buffering before glibc */
if (set_buffering
#ifdef SYS_gettid
/* For threaded programs no changes of streams */
/* On linux gettid() is implemented only via syscall */
&& (syscall(SYS_gettid) == getpid())
#endif
) {
if (set_buffering) {
/* Allocate 2 buffers */
if (!(cmd->linebuffer = dm_malloc(2 * _linebuffer_size))) {
log_error("Failed to allocate line buffer.");
@@ -1908,7 +1929,7 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
}
}
/* Buffers are used for lines without '\n' */
} else if (!set_buffering)
} else
/* Without buffering, must not use stdin/stdout */
init_silent(1);
#endif
@@ -1942,9 +1963,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
goto out;
}
if (!(cmd->pending_delete_mem = dm_pool_create("pending_delete", 1024)))
goto_out;
if (!_init_lvm_conf(cmd))
goto_out;
@@ -1984,13 +2002,11 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
if (!_init_formats(cmd))
goto_out;
if (!lvmcache_init(cmd))
goto_out;
/* FIXME: move into lvmcache_init */
if (!init_lvmcache_orphans(cmd))
goto_out;
dm_list_init(&cmd->unused_duplicate_devs);
if (!_init_segtypes(cmd))
goto_out;
@@ -2010,8 +2026,6 @@ struct cmd_context *create_toolcontext(unsigned is_clvmd,
cmd->current_settings = cmd->default_settings;
cmd->initialized.config = 1;
dm_list_init(&cmd->pending_delete);
out:
if (!cmd->initialized.config) {
destroy_toolcontext(cmd);
@@ -2122,7 +2136,6 @@ int refresh_toolcontext(struct cmd_context *cmd)
activation_release();
lvmcache_destroy(cmd, 0, 0);
label_scan_destroy(cmd);
label_exit();
_destroy_segtypes(&cmd->segtypes);
_destroy_formats(cmd, &cmd->formats);
@@ -2209,9 +2222,6 @@ int refresh_toolcontext(struct cmd_context *cmd)
if (!_init_formats(cmd))
return_0;
if (!lvmcache_init(cmd))
return_0;
if (!init_lvmcache_orphans(cmd))
return_0;
@@ -2223,12 +2233,6 @@ int refresh_toolcontext(struct cmd_context *cmd)
cmd->initialized.config = 1;
if (!dm_list_empty(&cmd->pending_delete)) {
log_debug(INTERNAL_ERROR "Unprocessed pending delete for %d devices.",
dm_list_size(&cmd->pending_delete));
dm_list_init(&cmd->pending_delete);
}
if (cmd->initialized.connections && !init_connections(cmd))
return_0;
@@ -2248,10 +2252,11 @@ void destroy_toolcontext(struct cmd_context *cmd)
!cmd->filter->dump(cmd->filter, 1))
stack;
label_scan_destroy(cmd);
archive_exit(cmd);
backup_exit(cmd);
lvmcache_destroy(cmd, 0, 0);
label_scan_destroy(cmd);
label_exit();
_destroy_segtypes(&cmd->segtypes);
_destroy_formats(cmd, &cmd->formats);
@@ -2272,8 +2277,6 @@ void destroy_toolcontext(struct cmd_context *cmd)
if (cmd->libmem)
dm_pool_destroy(cmd->libmem);
if (cmd->pending_delete_mem)
dm_pool_destroy(cmd->pending_delete_mem);
#ifndef VALGRIND_POOL
if (cmd->linebuffer) {
/* Reset stream buffering to defaults */


@@ -39,10 +39,9 @@ struct config_info {
int udev_rules;
int udev_sync;
int udev_fallback;
int cache_vgmetadata;
int scan_size_kb;
const char *msg_prefix;
const char *fmt_name;
const char *dmeventd_executable;
uint64_t unit_factor;
int cmd_name; /* Show command name? */
mode_t umask;
@@ -95,7 +94,6 @@ struct cmd_context {
char **argv;
struct arg_values *opt_arg_values;
struct dm_list arg_value_groups;
int opt_count; /* total number of options (beginning with - or --) */
/*
* Position args remaining after command name
@@ -155,8 +153,6 @@ struct cmd_context {
unsigned include_shared_vgs:1; /* report/display cmds can reveal lockd VGs */
unsigned include_active_foreign_vgs:1; /* cmd should process foreign VGs with active LVs */
unsigned vg_read_print_access_error:1; /* print access errors from vg_read */
unsigned allow_mixed_block_sizes:1;
unsigned force_access_clustered:1;
unsigned lockd_gl_disable:1;
unsigned lockd_vg_disable:1;
unsigned lockd_lv_disable:1;
@@ -164,16 +160,13 @@ struct cmd_context {
unsigned lockd_vg_rescan:1;
unsigned lockd_vg_default_sh:1;
unsigned lockd_vg_enforce_sh:1;
unsigned lockd_lv_sh:1;
unsigned vg_notify:1;
unsigned lv_notify:1;
unsigned pv_notify:1;
unsigned activate_component:1; /* command activates component LV */
unsigned process_component_lvs:1; /* command processes also component LVs */
unsigned mirror_warn_printed:1; /* command already printed warning about non-monitored mirrors */
unsigned use_scan_cache:1;
unsigned use_aio:1;
unsigned pvscan_cache_single:1;
unsigned can_use_one_scan:1;
unsigned is_clvmd:1;
unsigned use_full_md_check:1;
/*
* Filtering.
@@ -232,15 +225,14 @@ struct cmd_context {
const char *report_list_item_separator;
const char *time_format;
unsigned rand_seed;
struct dm_list pending_delete; /* list of LVs for removal */
struct dm_pool *pending_delete_mem; /* memory pool for pending deletes */
struct dm_list unused_duplicate_devs; /* save preferences between lvmcache instances */
};
/*
* system_dir may be NULL to use the default value.
* The environment variable LVM_SYSTEM_DIR always takes precedence.
*/
struct cmd_context *create_toolcontext(unsigned is_clvmd,
struct cmd_context *create_toolcontext(unsigned is_long_lived,
const char *system_dir,
unsigned set_buffering,
unsigned threaded,


@@ -23,7 +23,6 @@
#include "toolcontext.h"
#include "lvm-file.h"
#include "memlock.h"
#include "label.h"
#include <sys/stat.h>
#include <sys/mman.h>
@@ -495,17 +494,17 @@ int override_config_tree_from_profile(struct cmd_context *cmd,
* and the function avoids parsing the mda into the config tree, which
* remains unmodified and should not be used.
*/
int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_reason_t reason,
int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, char *buf_async,
off_t offset, size_t size, off_t offset2, size_t size2,
checksum_fn_t checksum_fn, uint32_t checksum,
int checksum_only, int no_dup_node_check)
{
char *fb, *fe;
int r = 0;
int sz, use_plain_read = 1;
int use_mmap = 1;
off_t mmap_offset = 0;
char *buf = NULL;
struct config_source *cs = dm_config_get_custom(cft);
size_t rsize;
if (!_is_file_based_config_source(cs->type)) {
log_error(INTERNAL_ERROR "config_file_read_fd: expected file, special file "
@@ -514,45 +513,44 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
return 0;
}
/* Only use plain read with regular files */
/* Only use mmap with regular files */
if (!(dev->flags & DEV_REGULAR) || size2)
use_plain_read = 0;
use_mmap = 0;
if (!(buf = dm_malloc(size + size2))) {
log_error("Failed to allocate circular buffer.");
return 0;
}
if (use_plain_read) {
/* Note: also used for lvm.conf to read all settings */
for (rsize = 0; rsize < size; rsize += sz) {
do {
sz = read(dev_fd(dev), buf + rsize, size - rsize);
} while ((sz < 0) && ((errno == EINTR) || (errno == EAGAIN)));
if (sz < 0) {
log_sys_error("read", dev_name(dev));
goto out;
}
if (buf_async) {
if (!(buf = dm_malloc(size + size2))) {
log_error("Failed to allocate circular buffer.");
return 0;
}
} else {
if (!dev_read_bytes(dev, offset, size, buf))
memcpy(buf, buf_async + offset, size);
if (size2)
memcpy(buf + size, buf_async + offset2, size2);
fb = buf;
} else if (use_mmap) {
mmap_offset = offset % lvm_getpagesize();
/* memory map the file */
fb = mmap((caddr_t) 0, size + mmap_offset, PROT_READ,
MAP_PRIVATE, dev_fd(dev), offset - mmap_offset);
if (fb == (caddr_t) (-1)) {
log_sys_error("mmap", dev_name(dev));
goto out;
if (size2) {
if (!dev_read_bytes(dev, offset2, size2, buf + size))
goto out;
}
fb = fb + mmap_offset;
} else {
if (!(buf = dm_malloc(size + size2))) {
log_error("Failed to allocate circular buffer.");
return 0;
}
if (!dev_read_circular(dev, (uint64_t) offset, size,
(uint64_t) offset2, size2, buf)) {
goto out;
}
fb = buf;
}
fb = buf;
/*
* The checksum passed in is the checksum from the mda_header
* preceding this metadata. They should always match.
* FIXME: handle case where mda_header checksum is bad,
* but the checksum calculated here is correct.
*/
if (checksum_fn && checksum !=
(checksum_fn(checksum_fn(INITIAL_CRC, (const uint8_t *)fb, size),
(const uint8_t *)(fb + size), size2))) {
@@ -574,7 +572,15 @@ int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_r
r = 1;
out:
dm_free(buf);
if (!use_mmap)
dm_free(buf);
else {
/* unmap the file */
if (munmap(fb - mmap_offset, size + mmap_offset)) {
log_sys_error("munmap", dev_name(dev));
r = 0;
}
}
return r;
}
@@ -607,7 +613,7 @@ int config_file_read(struct dm_config_tree *cft)
}
}
r = config_file_read_fd(cft, cf->dev, DEV_IO_MDA_CONTENT, 0, (size_t) info.st_size, 0, 0,
r = config_file_read_fd(cft, cf->dev, NULL, 0, (size_t) info.st_size, 0, 0,
(checksum_fn_t) NULL, 0, 0, 0);
if (!cf->keep_open) {


@@ -17,12 +17,12 @@
#define _LVM_CONFIG_H
#include "libdevmapper.h"
#include "device.h"
/* 16 bits: 3 bits for major, 4 bits for minor, 9 bits for patchlevel */
/* FIXME Max LVM version supported: 7.15.511. Extend bits when needed. */
#define vsn(major, minor, patchlevel) (major << 13 | minor << 9 | patchlevel)
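/*
 * Worked example (illustrative, not from the source): vsn(2, 2, 176)
 * packs to (2 << 13) | (2 << 9) | 176 == 0x4000 | 0x400 | 0xB0 == 0x44B0.
 * The encoding is monotonic, so packed versions compare directly:
 * vsn(2, 2, 176) > vsn(2, 2, 99).
 */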
struct device;
struct cmd_context;
typedef enum {
@@ -239,7 +239,7 @@ config_source_t config_get_source_type(struct dm_config_tree *cft);
typedef uint32_t (*checksum_fn_t) (uint32_t initial, const uint8_t *buf, uint32_t size);
struct dm_config_tree *config_open(config_source_t source, const char *filename, int keep_open);
int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, dev_io_reason_t reason,
int config_file_read_fd(struct dm_config_tree *cft, struct device *dev, char *buf_async,
off_t offset, size_t size, off_t offset2, size_t size2,
checksum_fn_t checksum_fn, uint32_t checksum,
int skip_parse, int no_dup_node_check);


@@ -345,19 +345,6 @@ cfg(devices_sysfs_scan_CFG, "sysfs_scan", devices_CFG_SECTION, 0, CFG_TYPE_BOOL,
"This is a quick way of filtering out block devices that are not\n"
"present on the system. sysfs must be part of the kernel and mounted.)\n")
cfg(devices_scan_lvs_CFG, "scan_lvs", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_SCAN_LVS, vsn(2, 2, 182), NULL, 0, NULL,
"Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.\n"
"When 1, LVM will detect PVs layered on LVs, and caution must be\n"
"taken to avoid a host accessing a layered VG that may not belong\n"
"to it, e.g. from a guest image. This generally requires excluding\n"
"the LVs with device filters. Also, when this setting is enabled,\n"
"every LVM command will scan every active LV on the system (unless\n"
"filtered), which can cause performance problems on systems with\n"
"many active LVs. When this setting is 0, LVM will not detect or\n"
"use PVs that exist on LVs, and will not allow a PV to be created on\n"
"an LV. The LVs are ignored using a built in device filter that\n"
"identifies and excludes LVs.\n")
cfg(devices_multipath_component_detection_CFG, "multipath_component_detection", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_MULTIPATH_COMPONENT_DETECTION, vsn(2, 2, 89), NULL, 0, NULL,
"Ignore devices that are components of DM multipath devices.\n")
@@ -470,10 +457,29 @@ cfg(devices_allow_changes_with_duplicate_pvs_CFG, "allow_changes_with_duplicate_
"Enabling this setting allows the VG to be used as usual even with\n"
"uncertain devices.\n")
cfg(devices_allow_mixed_block_sizes_CFG, "allow_mixed_block_sizes", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, 1, vsn(2, 2, 187), NULL, 0, NULL,
"Allow PVs in the same VG with different logical block sizes.\n"
"When allowed, the user is responsible to ensure that an LV is\n"
"using PVs with matching block sizes when necessary.\n")
cfg(devices_scan_async_CFG, "scan_async", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SCAN_ASYNC, vsn(2, 2, 176), NULL, 0, NULL,
"Use async I/O to read headers and metadata from disks in parallel.\n")
cfg(devices_scan_cache_CFG, "scan_cache", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_SCAN_CACHE, vsn(2, 2, 176), NULL, 0, NULL,
"Cache the disk blocks that a command scans from each PV.\n"
"When scanning disks for LVM headers and metadata, scan_size KiB\n"
"are initially read from each disk. These scanned blocks can be\n"
"cached by the command to avoid rereading them from disk.\n"
"The scan_cache is required when scan_async is enabled.\n")
cfg(devices_scan_size_CFG, "scan_size", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_SCAN_SIZE_KB, vsn(2, 2, 176), NULL, 0, NULL,
"Number of KiB to read from each disk when scanning disks.\n"
"The initial scan size is intended to cover all the headers\n"
"and metadata that LVM places at the start of each disk so\n"
"that a single read operation can retrieve them all.\n"
"Any headers or metadata that lie beyond this size require\n"
"an additional disk read. Must be a multiple of 4KiB.\n")
cfg(devices_async_events_CFG, "async_events", devices_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_ASYNC_EVENTS, vsn(2, 2, 176), NULL, 0, NULL,
"Max number of concurrent async reads when scanning disks.\n"
"Up to this many disks can be read concurrently when scanning\n"
"disks with async I/O. This setting may be limited by the system\n"
"aio configuration. This should not exceed the open files limit.\n")
cfg_array(allocation_cling_tag_list_CFG, "cling_tag_list", allocation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 2, 77), NULL, 0, NULL,
"Advise LVM which PVs to use when searching for new space.\n"
@@ -594,12 +600,7 @@ cfg(allocation_cache_pool_max_chunks_CFG, "cache_pool_max_chunks", allocation_CF
"Using cache pool with more chunks may degrade cache performance.\n")
cfg(allocation_thin_pool_metadata_require_separate_pvs_CFG, "thin_pool_metadata_require_separate_pvs", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS, vsn(2, 2, 89), NULL, 0, NULL,
"Thin pool metadata and data will always use different PVs.\n")
cfg(allocation_thin_pool_crop_metadata_CFG, "thin_pool_crop_metadata", allocation_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_CROP_METADATA, vsn(2, 2, 188), NULL, 0, NULL,
"Older version of lvm2 cropped pool's metadata size to 15.81 GiB.\n"
"This is slightly less then the actual maximum 15.88 GiB.\n"
"For compatibility with older version and use of cropped size set to 1.\n")
"Thin pool metdata and data will always use different PVs.\n")
cfg(allocation_thin_pool_zero_CFG, "thin_pool_zero", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_THIN_POOL_ZERO, vsn(2, 2, 99), NULL, 0, NULL,
"Thin pool data chunks are zeroed before they are first used.\n"
@@ -630,9 +631,6 @@ cfg(allocation_thin_pool_chunk_size_policy_CFG, "thin_pool_chunk_size_policy", a
" 512KiB.\n"
"#\n")
cfg(allocation_zero_metadata_CFG, "zero_metadata", allocation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_ZERO_METADATA, vsn(2, 2, 188), NULL, 0, NULL,
"Zero whole metadata area before use with thin or cache pool.\n")
cfg_runtime(allocation_thin_pool_chunk_size_CFG, "thin_pool_chunk_size", allocation_CFG_SECTION, CFG_PROFILABLE | CFG_PROFILABLE_METADATA | CFG_DEFAULT_UNDEFINED, CFG_TYPE_INT, vsn(2, 2, 99), 0, NULL,
"The minimal chunk size in KiB for thin pool volumes.\n"
"Larger chunk sizes may improve performance for plain thin volumes,\n"
@@ -731,11 +729,11 @@ cfg(log_activation_CFG, "activation", log_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(
cfg(log_activate_file_CFG, "activate_file", log_CFG_SECTION, CFG_DEFAULT_UNDEFINED | CFG_UNSUPPORTED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL, NULL)
cfg_array(log_debug_classes_CFG, "debug_classes", log_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, "#Smemory#Sdevices#Sio#Sactivation#Sallocation#Slvmetad#Smetadata#Scache#Slocking#Slvmpolld#Sdbus", vsn(2, 2, 99), NULL, 0, NULL,
cfg_array(log_debug_classes_CFG, "debug_classes", log_CFG_SECTION, CFG_ALLOW_EMPTY, CFG_TYPE_STRING, "#Smemory#Sdevices#Sactivation#Sallocation#Slvmetad#Smetadata#Scache#Slocking#Slvmpolld#Sdbus", vsn(2, 2, 99), NULL, 0, NULL,
"Select log messages by class.\n"
"Some debugging messages are assigned to a class and only appear in\n"
"debug output if the class is listed here. Classes currently\n"
"available: memory, devices, io, activation, allocation, lvmetad,\n"
"available: memory, devices, activation, allocation, lvmetad,\n"
"metadata, cache, locking, lvmpolld. Use \"all\" to see everything.\n")
cfg(backup_backup_CFG, "backup", backup_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_BACKUP_ENABLED, vsn(1, 0, 0), NULL, 0, NULL,
@@ -793,14 +791,26 @@ cfg(global_activation_CFG, "activation", global_CFG_SECTION, 0, CFG_TYPE_BOOL, D
"is not present in the kernel, disabling this should suppress\n"
"the error messages.\n")
cfg(global_fallback_to_lvm1_CFG, "fallback_to_lvm1", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, 0, vsn(1, 0, 18), NULL, 0, NULL,
"This setting is no longer used.\n")
cfg(global_fallback_to_lvm1_CFG, "fallback_to_lvm1", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_FALLBACK_TO_LVM1, vsn(1, 0, 18), "@DEFAULT_FALLBACK_TO_LVM1@", 0, NULL,
"Try running LVM1 tools if LVM cannot communicate with DM.\n"
"This option only applies to 2.4 kernels and is provided to help\n"
"switch between device-mapper kernels and LVM1 kernels. The LVM1\n"
"tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.\n"
"They will stop working once the lvm2 on-disk metadata format is used.\n")
cfg(global_format_CFG, "format", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_STRING, DEFAULT_FORMAT, vsn(1, 0, 0), NULL, 0, NULL,
"This setting is no longer used.\n")
"The default metadata format that commands should use.\n"
"The -M 1|2 option overrides this setting.\n"
"#\n"
"Accepted values:\n"
" lvm1\n"
" lvm2\n"
"#\n")
cfg_array(global_format_libraries_CFG, "format_libraries", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 0), NULL, 0, NULL,
"This setting is no longer used.")
"Shared libraries that process different metadata formats.\n"
"If support for LVM1 metadata was compiled as a shared library use\n"
"format_libraries = \"liblvm2format1.so\"\n")
cfg_array(global_segment_libraries_CFG, "segment_libraries", global_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(1, 0, 18), NULL, 0, NULL, NULL)
@@ -961,9 +971,6 @@ cfg(global_lvdisplay_shows_full_device_path_CFG, "lvdisplay_shows_full_device_pa
"Previously this was always shown as /dev/vgname/lvname even when that\n"
"was never a valid path in the /dev filesystem.\n")
cfg(global_use_aio_CFG, "use_aio", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_BOOL, DEFAULT_USE_AIO, vsn(2, 2, 183), NULL, 0, NULL,
"Use async I/O when reading and writing devices.\n")
cfg(global_use_lvmetad_CFG, "use_lvmetad", global_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_USE_LVMETAD, vsn(2, 2, 93), "@DEFAULT_USE_LVMETAD@", 0, NULL,
"Use lvmetad to cache metadata and reduce disk scanning.\n"
"When enabled (and running), lvmetad provides LVM commands with VG\n"
@@ -1152,14 +1159,6 @@ cfg(global_notify_dbus_CFG, "notify_dbus", global_CFG_SECTION, 0, CFG_TYPE_BOOL,
"When enabled, an LVM command that changes PVs, changes VG metadata,\n"
"or changes the activation state of an LV will send a notification.\n")
cfg(global_io_memory_size_CFG, "io_memory_size", global_CFG_SECTION, CFG_DEFAULT_COMMENTED, CFG_TYPE_INT, DEFAULT_IO_MEMORY_SIZE_KB, vsn(2, 2, 184), NULL, 0, NULL,
"The amount of memory in KiB that LVM allocates to perform disk io.\n"
"LVM performance may benefit from more io memory when there are many\n"
"disks or VG metadata is large. Increasing this size may be necessary\n"
"when a single copy of VG metadata is larger than the current setting.\n"
"This value should usually not be decreased from the default; setting\n"
"it too low can result in lvm failing to read VGs.\n")
cfg(activation_udev_sync_CFG, "udev_sync", activation_CFG_SECTION, 0, CFG_TYPE_BOOL, DEFAULT_UDEV_SYNC, vsn(2, 2, 51), NULL, 0, NULL,
"Use udev notifications to synchronize udev and LVM.\n"
"The --nodevsync option overrides this setting.\n"


@@ -59,7 +59,6 @@
#define DEFAULT_METADATA_READ_ONLY 0
#define DEFAULT_LVDISPLAY_SHOWS_FULL_DEVICE_PATH 0
#define DEFAULT_UNKNOWN_DEVICE_NAME "[unknown]"
#define DEFAULT_USE_AIO 1
#define DEFAULT_SANLOCK_LV_EXTEND_MB 256
@@ -105,20 +104,15 @@
#define DEFAULT_THIN_REPAIR_OPTION1 ""
#define DEFAULT_THIN_REPAIR_OPTIONS_CONFIG "#S" DEFAULT_THIN_REPAIR_OPTION1
#define DEFAULT_THIN_POOL_METADATA_REQUIRE_SEPARATE_PVS 0
#define DEFAULT_THIN_POOL_CROP_METADATA 0
#define DEFAULT_THIN_POOL_MAX_METADATA_SIZE_V1_KB (UINT64_C(255) * ((1 << 14) - 64) * 4) /* KB */ /* 0x3f8040 blocks */
#define DEFAULT_THIN_POOL_MAX_METADATA_SIZE (DM_THIN_MAX_METADATA_SIZE / 2) /* KB */
#define DEFAULT_THIN_POOL_MIN_METADATA_SIZE 2048 /* KB */
#define DEFAULT_THIN_POOL_OPTIMAL_METADATA_SIZE (128 * 1024) /* KB */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_POLICY "generic"
#define DEFAULT_THIN_POOL_CHUNK_SIZE 64 /* KB */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_PERFORMANCE 512 /* KB */
/* Chunk size big enough that it no longer needs to jump by power-of-2 */
#define DEFAULT_THIN_POOL_CHUNK_SIZE_ALIGNED 1024 /* KB */
#define DEFAULT_THIN_POOL_DISCARDS "passdown"
#define DEFAULT_THIN_POOL_ZERO 1
#define DEFAULT_POOL_METADATA_SPARE 1 /* thin + cache */
#define DEFAULT_ZERO_METADATA 1 /* thin + cache */
#ifdef CACHE_CHECK_NEEDS_CHECK
# define DEFAULT_CACHE_CHECK_OPTION1 "-q"
@@ -272,8 +266,9 @@
#define DEFAULT_THIN_POOL_AUTOEXTEND_THRESHOLD 100
#define DEFAULT_THIN_POOL_AUTOEXTEND_PERCENT 20
#define DEFAULT_SCAN_LVS 0
#define DEFAULT_IO_MEMORY_SIZE_KB 8192
#define DEFAULT_ASYNC_EVENTS 100
#define DEFAULT_SCAN_ASYNC 1
#define DEFAULT_SCAN_CACHE 1
#define DEFAULT_SCAN_SIZE_KB 128
#endif /* _LVM_DEFAULTS_H */


@@ -1,287 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "bcache.h"
// FIXME: need to define this in a common place (that doesn't pull in deps)
#ifndef SECTOR_SHIFT
#define SECTOR_SHIFT 9
#endif
//----------------------------------------------------------------
static void byte_range_to_block_range(struct bcache *cache, uint64_t start, size_t len,
block_address *bb, block_address *be)
{
block_address block_size = bcache_block_sectors(cache) << SECTOR_SHIFT;
*bb = start / block_size;
*be = (start + len + block_size - 1) / block_size;
}
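/*
 * Worked example (illustrative): with 128-sector blocks, block_size is
 * 128 << SECTOR_SHIFT == 65536 bytes, so start = 100000, len = 50000
 * yields bb = 100000 / 65536 = 1 and be = (150000 + 65535) / 65536 = 3:
 * the half-open block range [1, 3) covering bytes 65536..196607.
 */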
static uint64_t _min(uint64_t lhs, uint64_t rhs)
{
if (rhs < lhs)
return rhs;
return lhs;
}
//----------------------------------------------------------------
void bcache_prefetch_bytes(struct bcache *cache, int fd, uint64_t start, size_t len)
{
block_address bb, be;
byte_range_to_block_range(cache, start, len, &bb, &be);
while (bb < be) {
bcache_prefetch(cache, fd, bb);
bb++;
}
}
//----------------------------------------------------------------
bool bcache_read_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data)
{
struct block *b;
block_address bb, be;
uint64_t block_size = bcache_block_sectors(cache) << SECTOR_SHIFT;
uint64_t block_offset = start % block_size;
bcache_prefetch_bytes(cache, fd, start, len);
byte_range_to_block_range(cache, start, len, &bb, &be);
for (; bb != be; bb++) {
if (!bcache_get(cache, fd, bb, 0, &b))
return false;
size_t blen = _min(block_size - block_offset, len);
memcpy(data, ((unsigned char *) b->data) + block_offset, blen);
bcache_put(b);
block_offset = 0;
len -= blen;
data = ((unsigned char *) data) + blen;
}
return true;
}
bool bcache_invalidate_bytes(struct bcache *cache, int fd, uint64_t start, size_t len)
{
block_address bb, be;
bool result = true;
byte_range_to_block_range(cache, start, len, &bb, &be);
for (; bb != be; bb++) {
if (!bcache_invalidate(cache, fd, bb))
result = false;
}
return result;
}
//----------------------------------------------------------------
// Writing bytes and zeroing bytes are very similar, so we factor out
// this common code.
struct updater;
typedef bool (*partial_update_fn)(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len);
typedef bool (*whole_update_fn)(struct updater *u, int fd, block_address bb, block_address be);
struct updater {
struct bcache *cache;
partial_update_fn partial_fn;
whole_update_fn whole_fn;
void *data;
};
static bool _update_bytes(struct updater *u, int fd, uint64_t start, size_t len)
{
struct bcache *cache = u->cache;
block_address bb, be;
uint64_t block_size = bcache_block_sectors(cache) << SECTOR_SHIFT;
uint64_t block_offset = start % block_size;
uint64_t nr_whole;
byte_range_to_block_range(cache, start, len, &bb, &be);
// If the last block is partial, we will require a read, so let's
// prefetch it.
if ((start + len) % block_size)
bcache_prefetch(cache, fd, (start + len) / block_size);
// First block may be partial
if (block_offset) {
size_t blen = _min(block_size - block_offset, len);
if (!u->partial_fn(u, fd, bb, block_offset, blen))
return false;
len -= blen;
if (!len)
return true;
bb++;
}
// Now we write out a set of whole blocks
nr_whole = len / block_size;
if (!u->whole_fn(u, fd, bb, bb + nr_whole))
return false;
bb += nr_whole;
len -= nr_whole * block_size;
if (!len)
return true;
// Finally we write a partial end block
return u->partial_fn(u, fd, bb, 0, len);
}
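/*
 * Worked example of the split above (illustrative): with 65536-byte
 * blocks, start = 1000 and len = 200000 give a partial head of 64536
 * bytes in block 0, nr_whole = 135464 / 65536 = 2 whole blocks (blocks
 * 1 and 2), and a partial tail of 4392 bytes in block 3.
 */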
//----------------------------------------------------------------
static bool _write_partial(struct updater *u, int fd, block_address bb,
uint64_t offset, size_t len)
{
struct block *b;
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memcpy(((unsigned char *) b->data) + offset, u->data, len);
u->data = ((unsigned char *) u->data) + len;
bcache_put(b);
return true;
}
static bool _write_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
uint64_t block_size = bcache_block_sectors(u->cache) << SECTOR_SHIFT;
for (; bb != be; bb++) {
// We don't need to read the block since we are overwriting
// it completely.
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
memcpy(b->data, u->data, block_size);
u->data = ((unsigned char *) u->data) + block_size;
bcache_put(b);
}
return true;
}
bool bcache_write_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data)
{
struct updater u;
u.cache = cache;
u.partial_fn = _write_partial;
u.whole_fn = _write_whole;
u.data = data;
return _update_bytes(&u, fd, start, len);
}
//----------------------------------------------------------------
static bool _zero_partial(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len)
{
struct block *b;
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memset(((unsigned char *) b->data) + offset, 0, len);
bcache_put(b);
return true;
}
static bool _zero_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
for (; bb != be; bb++) {
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
bcache_put(b);
}
return true;
}
bool bcache_zero_bytes(struct bcache *cache, int fd, uint64_t start, size_t len)
{
struct updater u;
u.cache = cache;
u.partial_fn = _zero_partial;
u.whole_fn = _zero_whole;
u.data = NULL;
return _update_bytes(&u, fd, start, len);
}
//----------------------------------------------------------------
static bool _set_partial(struct updater *u, int fd, block_address bb, uint64_t offset, size_t len)
{
struct block *b;
uint8_t val = *((uint8_t *) u->data);
if (!bcache_get(u->cache, fd, bb, GF_DIRTY, &b))
return false;
memset(((unsigned char *) b->data) + offset, val, len);
bcache_put(b);
return true;
}
static bool _set_whole(struct updater *u, int fd, block_address bb, block_address be)
{
struct block *b;
uint8_t val = *((uint8_t *) u->data);
uint64_t len = bcache_block_sectors(u->cache) << SECTOR_SHIFT;
for (; bb != be; bb++) {
if (!bcache_get(u->cache, fd, bb, GF_ZERO, &b))
return false;
memset((unsigned char *) b->data, val, len);
bcache_put(b);
}
return true;
}
bool bcache_set_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, uint8_t val)
{
struct updater u;
u.cache = cache;
u.partial_fn = _set_partial;
u.whole_fn = _set_whole;
u.data = &val;
return _update_bytes(&u, fd, start, len);
}
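/*
 * A minimal usage sketch of the helpers above (illustrative only, not part
 * of the source): the device path, cache sizes and payload are assumptions.
 * It builds a small cache over the synchronous engine, patches 512 bytes at
 * byte offset 4096 via a partial-block read-modify-write, and flushes.
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include "bcache.h"
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static bool _example_patch_bytes(const char *path)
{
	struct bcache *cache;
	char buf[512] = { 0 };
	bool ok = false;
	int fd;

	/* 128-sector (64KiB) blocks, 64 cache blocks. */
	if (!(cache = bcache_create(128, 64, create_sync_io_engine())))
		return false;

	if ((fd = open(path, O_RDWR | O_DIRECT)) < 0) {
		bcache_destroy(cache);
		return false;
	}

	memcpy(buf, "EXAMPLE", 7);	/* hypothetical payload */

	/* The write lands inside block 0, so bcache reads the block,
	   patches bytes 4096..4607, and writes it back on flush. */
	if (bcache_write_bytes(cache, fd, 4096, sizeof(buf), buf) &&
	    bcache_flush(cache))
		ok = true;

	(void) bcache_invalidate_fd(cache, fd);	/* drop cached blocks */
	(void) close(fd);
	bcache_destroy(cache);

	return ok;
}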

File diff suppressed because it is too large.


@@ -1,173 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef BCACHE_H
#define BCACHE_H
#include "libdevmapper.h"
#include <linux/fs.h>
#include <stdint.h>
#include <stdbool.h>
/*----------------------------------------------------------------*/
// FIXME: move somewhere more sensible
#define container_of(v, t, head) \
((t *)((const char *)(v) - offsetof(t, head)))
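/*
 * Illustrative use (an assumption, not from the source): given the embedded
 * dm_list member in struct block below, a list node can be mapped back to
 * its containing block with
 *
 *     struct block *b = container_of(node, struct block, list);
 *
 * where node is a struct dm_list * taken from one of the cache's lists.
 */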
/*----------------------------------------------------------------*/
enum dir {
DIR_READ,
DIR_WRITE
};
typedef uint64_t block_address;
typedef uint64_t sector_t;
typedef void io_complete_fn(void *context, int io_error);
struct io_engine {
void (*destroy)(struct io_engine *e);
bool (*issue)(struct io_engine *e, enum dir d, int fd,
sector_t sb, sector_t se, void *data, void *context);
bool (*wait)(struct io_engine *e, io_complete_fn fn);
unsigned (*max_io)(struct io_engine *e);
};
struct io_engine *create_async_io_engine(void);
struct io_engine *create_sync_io_engine(void);
/*----------------------------------------------------------------*/
struct bcache;
struct block {
/* clients may only access these three fields */
int fd;
uint64_t index;
void *data;
struct bcache *cache;
struct dm_list list;
unsigned flags;
unsigned ref_count;
int error;
enum dir io_dir;
};
/*
* Ownership of the engine passes to the cache; the engine will be destroyed even if this fails.
*/
struct bcache *bcache_create(sector_t block_size, unsigned nr_cache_blocks,
struct io_engine *engine);
void bcache_destroy(struct bcache *cache);
enum bcache_get_flags {
/*
* The block will be zeroed before get_block returns it. This
* potentially avoids a read if the block is not already in the cache.
* GF_DIRTY is implicit.
*/
GF_ZERO = (1 << 0),
/*
* Indicates the caller is intending to change the data in the block, a
* writeback will occur after the block is released.
*/
GF_DIRTY = (1 << 1)
};
sector_t bcache_block_sectors(struct bcache *cache);
unsigned bcache_nr_cache_blocks(struct bcache *cache);
unsigned bcache_max_prefetches(struct bcache *cache);
/*
* Use the prefetch method to take advantage of asynchronous IO. For example,
* if you wanted to read a block from many devices concurrently you'd do
* something like this:
*
* dm_list_iterate_items (dev, &devices)
* bcache_prefetch(cache, dev->fd, block);
*
* dm_list_iterate_items (dev, &devices) {
* if (!bcache_get(cache, dev->fd, block, 0, &b))
* fail();
*
* process_block(b);
* }
*
* It's slightly suboptimal, since you may not run the gets in the order that
* they complete. But we're talking a very small difference, and it's worth it
* to keep callbacks out of this interface.
*/
void bcache_prefetch(struct bcache *cache, int fd, block_address index);
/*
* Returns true on success.
*/
bool bcache_get(struct bcache *cache, int fd, block_address index,
unsigned flags, struct block **result);
void bcache_put(struct block *b);
/*
* flush() does not attempt to writeback locked blocks. flush will fail
* (return false), if any unlocked dirty data cannot be written back.
*/
bool bcache_flush(struct bcache *cache);
/*
* Removes a block from the cache.
*
* If the block is dirty it will be written back first. If the writeback fails,
* false will be returned.
*
* If the block is currently held, false will be returned.
*/
bool bcache_invalidate(struct bcache *cache, int fd, block_address index);
/*
* Invalidates all blocks on the given descriptor. Call this before closing
* the descriptor to make sure everything is written back.
*/
bool bcache_invalidate_fd(struct bcache *cache, int fd);
/*
* Call this function if flush or invalidate fail and you do not
* wish to retry the writes. This will throw away any dirty data
* not written. If any blocks for fd are held, then it will call
* abort().
*/
void bcache_abort_fd(struct bcache *cache, int fd);
//----------------------------------------------------------------
// The next few functions are utilities written in terms of the above API.
// Prefetches the blocks necessary to satisfy a byte range.
void bcache_prefetch_bytes(struct bcache *cache, int fd, uint64_t start, size_t len);
// Reads, writes and zeroes bytes. Returns false if errors occur.
bool bcache_read_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data);
bool bcache_write_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, void *data);
bool bcache_zero_bytes(struct bcache *cache, int fd, uint64_t start, size_t len);
bool bcache_set_bytes(struct bcache *cache, int fd, uint64_t start, size_t len, uint8_t val);
bool bcache_invalidate_bytes(struct bcache *cache, int fd, uint64_t start, size_t len);
void bcache_set_last_byte(struct bcache *cache, int fd, uint64_t offset, int sector_size);
void bcache_unset_last_byte(struct bcache *cache, int fd);
//----------------------------------------------------------------
#endif


@@ -34,7 +34,7 @@ struct dev_iter {
struct dir_list {
struct dm_list list;
char dir[];
char dir[0];
};
static struct {
@@ -66,7 +66,6 @@ static void _dev_init(struct device *dev, int max_error_count)
dev->phys_block_size = -1;
dev->block_size = -1;
dev->fd = -1;
dev->bcache_fd = -1;
dev->read_ahead = -1;
dev->max_error_count = max_error_count;
@@ -74,6 +73,7 @@ static void _dev_init(struct device *dev, int max_error_count)
dev->ext.src = DEV_EXT_NONE;
dm_list_init(&dev->aliases);
dm_list_init(&dev->open_list);
}
void dev_destroy_file(struct device *dev)
@@ -275,8 +275,10 @@ static int _compare_paths(const char *path0, const char *path1)
if (slash1 < slash0)
return 1;
(void) dm_strncpy(p0, path0, sizeof(p0));
(void) dm_strncpy(p1, path1, sizeof(p1));
strncpy(p0, path0, sizeof(p0) - 1);
p0[sizeof(p0) - 1] = '\0';
strncpy(p1, path1, sizeof(p1) - 1);
p1[sizeof(p1) - 1] = '\0';
s0 = p0 + 1;
s1 = p1 + 1;
@@ -334,8 +336,10 @@ static int _add_alias(struct device *dev, const char *path)
/* Is name already there? */
dm_list_iterate_items(strl, &dev->aliases) {
if (!strcmp(strl->str, path))
if (!strcmp(strl->str, path)) {
log_debug_devs("%s: Already in device cache", path);
return 1;
}
}
sl->str = path;
@@ -343,7 +347,13 @@ static int _add_alias(struct device *dev, const char *path)
if (!dm_list_empty(&dev->aliases)) {
oldpath = dm_list_item(dev->aliases.n, struct dm_str_list)->str;
prefer_old = _compare_paths(path, oldpath);
}
log_debug_devs("%s: Aliased to %s in device cache%s (%d:%d)",
path, oldpath, prefer_old ? "" : " (preferred name)",
(int) MAJOR(dev->dev), (int) MINOR(dev->dev));
} else
log_debug_devs("%s: Added to device cache (%d:%d)", path,
(int) MAJOR(dev->dev), (int) MINOR(dev->dev));
if (prefer_old)
dm_list_add(&dev->aliases, &sl->list);
@@ -481,7 +491,7 @@ static struct device *_get_device_for_sysfs_dev_name_using_devno(const char *dev
return NULL;
}
devno = MKDEV(major, minor);
devno = MKDEV((dev_t)major, (dev_t)minor);
if (!(dev = (struct device *) btree_lookup(_cache.devices, (uint32_t) devno))) {
/*
* If we get here, it means the device is referenced in sysfs, but it's not yet in /dev.
@@ -659,28 +669,6 @@ struct dm_list *dev_cache_get_dev_list_for_lvid(const char *lvid)
return dm_hash_lookup(_cache.lvid_index, lvid);
}
/*
* Scanning code calls this when it fails to open a device using
* this path. The path is dropped from dev-cache. In the next
* dev_cache_scan it may be added again, but it could be for a
* different device.
*/
void dev_cache_failed_path(struct device *dev, const char *path)
{
struct dm_str_list *strl;
if (dm_hash_lookup(_cache.names, path))
dm_hash_remove(_cache.names, path);
dm_list_iterate_items(strl, &dev->aliases) {
if (!strcmp(strl->str, path)) {
dm_list_del(&strl->list);
break;
}
}
}
/*
* Either creates a new dev, or adds an alias to
* an existing dev.
@@ -688,8 +676,6 @@ void dev_cache_failed_path(struct device *dev, const char *path)
static int _insert_dev(const char *path, dev_t d)
{
struct device *dev;
struct device *dev_by_devt;
struct device *dev_by_path;
static dev_t loopfile_count = 0;
int loopfile = 0;
char *path_copy;
@@ -702,26 +688,8 @@ static int _insert_dev(const char *path, dev_t d)
loopfile = 1;
}
dev_by_devt = (struct device *) btree_lookup(_cache.devices, (uint32_t) d);
dev_by_path = (struct device *) dm_hash_lookup(_cache.names, path);
dev = dev_by_devt;
/*
* Existing device, existing path points to the same device.
*/
if (dev_by_devt && dev_by_path && (dev_by_devt == dev_by_path)) {
log_debug_devs("Found dev %d:%d %s - exists. %.8s",
(int)MAJOR(d), (int)MINOR(d), path, dev->pvid);
return 1;
}
/*
* No device or path found, add devt to cache.devices, add name to cache.names.
*/
if (!dev_by_devt && !dev_by_path) {
log_debug_devs("Found dev %d:%d %s - new.",
(int)MAJOR(d), (int)MINOR(d), path);
/* is this device already registered ? */
if (!(dev = (struct device *) btree_lookup(_cache.devices, (uint32_t) d))) {
if (!(dev = (struct device *) btree_lookup(_cache.sysfs_only_devices, (uint32_t) d))) {
/* create new device */
if (loopfile) {
@@ -736,126 +704,30 @@ static int _insert_dev(const char *path, dev_t d)
_free(dev);
return 0;
}
}
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!loopfile && !_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
if (dm_hash_lookup(_cache.names, path) == dev) {
/* Hash already has matching entry present */
log_debug("Path already cached %s.", path);
return 1;
}
/*
* Existing device, path is new, add path as a new alias for the device.
*/
if (dev_by_devt && !dev_by_path) {
log_debug_devs("Found dev %d:%d %s - new alias.",
(int)MAJOR(d), (int)MINOR(d), path);
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!loopfile && !_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
/*
* No existing device, but path exists and previously pointed
* to a different device.
*/
if (!dev_by_devt && dev_by_path) {
log_debug_devs("Found dev %d:%d %s - new device, path was previously %d:%d.",
(int)MAJOR(d), (int)MINOR(d), path,
(int)MAJOR(dev_by_path->dev), (int)MINOR(dev_by_path->dev));
if (!(dev = (struct device *) btree_lookup(_cache.sysfs_only_devices, (uint32_t) d))) {
/* create new device */
if (loopfile) {
if (!(dev = dev_create_file(path, NULL, NULL, 0)))
return_0;
} else if (!(dev = _dev_create(d)))
return_0;
}
if (!(btree_insert(_cache.devices, (uint32_t) d, dev))) {
log_error("Couldn't insert device into binary tree.");
_free(dev);
return 0;
}
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!loopfile && !_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
dm_hash_remove(_cache.names, path);
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
if (!loopfile && !_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
/*
* Existing device, and path exists and previously pointed to
* a different device.
*/
if (dev_by_devt && dev_by_path) {
log_debug_devs("Found dev %d:%d %s - existing device, path was previously %d:%d.",
(int)MAJOR(d), (int)MINOR(d), path,
(int)MAJOR(dev_by_path->dev), (int)MINOR(dev_by_path->dev));
if (!(path_copy = dm_pool_strdup(_cache.mem, path))) {
log_error("Failed to duplicate path string.");
return 0;
}
if (!loopfile && !_add_alias(dev, path_copy)) {
log_error("Couldn't add alias to dev cache.");
return 0;
}
dm_hash_remove(_cache.names, path);
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
return 1;
if (!dm_hash_insert(_cache.names, path_copy, dev)) {
log_error("Couldn't add name to hash in dev cache.");
return 0;
}
log_error("Found dev %d:%d %s - failed to use.", (int)MAJOR(d), (int)MINOR(d), path);
return 0;
return 1;
}
static char *_join(const char *dir, const char *name)
@@ -985,7 +857,7 @@ static int _dev_cache_iterate_sysfs_for_index(const char *path)
continue;
}
devno = MKDEV(major, minor);
devno = MKDEV((dev_t)major, (dev_t)minor);
if (!(dev = (struct device *) btree_lookup(_cache.devices, (uint32_t) devno)) &&
!(dev = (struct device *) btree_lookup(_cache.sysfs_only_devices, (uint32_t) devno))) {
if (!dm_device_get_name(major, minor, 1, devname, sizeof(devname)) ||
@@ -1193,25 +1065,28 @@ static int _insert(const char *path, const struct stat *info,
}
if (rec && !_insert_dir(path))
return 0;
return_0;
} else { /* add a device */
if (!S_ISBLK(info->st_mode))
if (!S_ISBLK(info->st_mode)) {
log_debug_devs("%s: Not a block device", path);
return 1;
}
if (!_insert_dev(path, info->st_rdev))
return 0;
return_0;
}
return 1;
}
void dev_cache_scan(void)
static void _full_scan(int dev_scan)
{
struct dir_list *dl;
log_debug_devs("Creating list of system devices.");
_cache.has_scanned = 1;
if (_cache.has_scanned && !dev_scan)
return;
log_debug_devs("Adding device paths to dev cache");
_insert_dirs(&_cache.dirs);
@@ -1219,6 +1094,11 @@ void dev_cache_scan(void)
dm_list_iterate_items(dl, &_cache.files)
_insert_file(dl->dir);
_cache.has_scanned = 1;
init_full_scan_done(1);
log_debug_devs("Added %d device paths to dev cache", dm_hash_get_num_entries(_cache.names));
}
int dev_cache_has_scanned(void)
@@ -1226,6 +1106,14 @@ int dev_cache_has_scanned(void)
return _cache.has_scanned;
}
void dev_cache_scan(int do_scan)
{
if (!do_scan)
_cache.has_scanned = 1;
else
_full_scan(1);
}
static int _init_preferred_names(struct cmd_context *cmd)
{
const struct dm_config_node *cn;
@@ -1289,6 +1177,7 @@ out:
int dev_cache_init(struct cmd_context *cmd)
{
_cache.names = NULL;
_cache.has_scanned = 0;
if (!(_cache.mem = dm_pool_create("dev_cache", 10 * 1024)))
return_0;
@@ -1345,8 +1234,7 @@ static int _check_for_open_devices(int close_immediate)
dev_name(dev), dev->open_count);
num_open++;
if (close_immediate)
if (!dev_close_immediate(dev))
stack;
dev_close_immediate(dev);
}
}
@@ -1509,7 +1397,6 @@ struct device *dev_cache_get(const char *name, struct dev_filter *f)
struct stat buf;
struct device *d = (struct device *) dm_hash_lookup(_cache.names, name);
int info_available = 0;
int ret = 1;
if (d && (d->flags & DEV_REGULAR))
return d;
@@ -1532,30 +1419,15 @@ struct device *dev_cache_get(const char *name, struct dev_filter *f)
_insert(name, info_available ? &buf : NULL, 0, obtain_device_list_from_udev());
d = (struct device *) dm_hash_lookup(_cache.names, name);
if (!d) {
dev_cache_scan();
_full_scan(0);
d = (struct device *) dm_hash_lookup(_cache.names, name);
}
}
if (!d)
return NULL;
if (d && (d->flags & DEV_REGULAR))
return d;
if (f && !(d->flags & DEV_REGULAR)) {
ret = f->passes_filter(f, d);
if (ret == -EAGAIN) {
log_debug_devs("get device by name defer filter %s", dev_name(d));
d->flags |= DEV_FILTER_AFTER_SCAN;
ret = 1;
}
}
if (f && !(d->flags & DEV_REGULAR) && !ret)
if (!d || (f && !(d->flags & DEV_REGULAR) && !(f->passes_filter(f, d))))
return NULL;
log_debug_devs("Using %s", dev_name(d));
return d;
}
@@ -1582,7 +1454,6 @@ struct device *dev_cache_get_by_devt(dev_t dev, struct dev_filter *f)
const char *sysfs_dir;
struct stat info;
struct device *d = _dev_cache_seek_devt(dev);
int ret;
if (d && (d->flags & DEV_REGULAR))
return d;
@@ -1591,47 +1462,38 @@ struct device *dev_cache_get_by_devt(dev_t dev, struct dev_filter *f)
sysfs_dir = dm_sysfs_dir();
if (sysfs_dir && *sysfs_dir) {
/* First check if dev is sysfs to avoid useless scan */
if (dm_snprintf(path, sizeof(path), "%sdev/block/%d:%d",
if (dm_snprintf(path, sizeof(path), "%s/dev/block/%d:%d",
sysfs_dir, (int)MAJOR(dev), (int)MINOR(dev)) < 0) {
log_error("dm_snprintf partition failed.");
return NULL;
}
if (lstat(path, &info)) {
log_debug("No sysfs entry for %d:%d errno %d at %s.",
(int)MAJOR(dev), (int)MINOR(dev), errno, path);
log_debug("No sysfs entry for %d:%d.",
(int)MAJOR(dev), (int)MINOR(dev));
return NULL;
}
}
dev_cache_scan();
_full_scan(0);
d = _dev_cache_seek_devt(dev);
}
if (!d)
return NULL;
if (d->flags & DEV_REGULAR)
return d;
if (!f)
return d;
ret = f->passes_filter(f, d);
if (ret == -EAGAIN) {
log_debug_devs("get device by number defer filter %s", dev_name(d));
d->flags |= DEV_FILTER_AFTER_SCAN;
ret = 1;
}
if (ret)
return d;
return NULL;
return (d && (!f || (d->flags & DEV_REGULAR) ||
f->passes_filter(f, d))) ? d : NULL;
}
struct dev_iter *dev_iter_create(struct dev_filter *f, int unused)
void dev_cache_full_scan(struct dev_filter *f)
{
if (f && f->wipe) {
f->wipe(f); /* might call _full_scan(1) */
if (!full_scan_done())
_full_scan(1);
} else
_full_scan(1);
}
struct dev_iter *dev_iter_create(struct dev_filter *f, int dev_scan)
{
struct dev_iter *di = dm_malloc(sizeof(*di));
@@ -1640,6 +1502,13 @@ struct dev_iter *dev_iter_create(struct dev_filter *f, int unused)
return NULL;
}
if (dev_scan && !trust_cache()) {
/* Flag gets reset between each command */
if (!full_scan_done())
dev_cache_full_scan(f);
} else
_full_scan(0);
di->current = btree_first(_cache.devices);
di->filter = f;
if (di->filter)
@@ -1664,27 +1533,13 @@ static struct device *_iter_next(struct dev_iter *iter)
struct device *dev_iter_get(struct dev_iter *iter)
{
struct dev_filter *f;
int ret;
while (iter->current) {
struct device *d = _iter_next(iter);
ret = 1;
f = iter->filter;
if (f && !(d->flags & DEV_REGULAR)) {
ret = f->passes_filter(f, d);
if (ret == -EAGAIN) {
log_debug_devs("get device by iter defer filter %s", dev_name(d));
d->flags |= DEV_FILTER_AFTER_SCAN;
ret = 1;
}
}
if (!f || (d->flags & DEV_REGULAR) || ret)
if (!iter->filter || (d->flags & DEV_REGULAR) ||
iter->filter->passes_filter(iter->filter, d)) {
log_debug_devs("Using %s", dev_name(d));
return d;
}
}
return NULL;


@@ -46,8 +46,10 @@ int dev_cache_exit(void);
*/
int dev_cache_check_for_open_devices(void);
void dev_cache_scan(void);
/* Trigger(1) or avoid(0) a scan */
void dev_cache_scan(int do_scan);
int dev_cache_has_scanned(void);
void dev_cache_full_scan(struct dev_filter *f);
int dev_cache_add_dir(const char *path);
int dev_cache_add_loopfile(const char *path);
@@ -64,12 +66,10 @@ void dev_set_preferred_name(struct dm_str_list *sl, struct device *dev);
* Object for iterating through the cache.
*/
struct dev_iter;
struct dev_iter *dev_iter_create(struct dev_filter *f, int unused);
struct dev_iter *dev_iter_create(struct dev_filter *f, int dev_scan);
void dev_iter_destroy(struct dev_iter *iter);
struct device *dev_iter_get(struct dev_iter *iter);
void dev_reset_error_count(struct cmd_context *cmd);
void dev_cache_failed_path(struct device *dev, const char *path);
#endif


@@ -100,6 +100,8 @@ const char *dev_ext_name(struct device *dev)
return _ext_registry[dev->ext.src].name;
}
static const char *_ext_attached_msg = "External handle attached to device";
struct dev_ext *dev_ext_get(struct device *dev)
{
struct dev_ext *ext;
@@ -108,10 +110,10 @@ struct dev_ext *dev_ext_get(struct device *dev)
handle_ptr = dev->ext.handle;
if (!(ext = _ext_registry[dev->ext.src].dev_ext_get(dev)))
log_error("%s: Failed to get external handle [%s].",
log_error("Failed to get external handle for device %s [%s].",
dev_name(dev), dev_ext_name(dev));
else if (handle_ptr != dev->ext.handle)
log_debug_devs("%s: External handle [%s:%p] attached", dev_name(dev),
log_debug_devs("%s %s [%s:%p]", _ext_attached_msg, dev_name(dev),
dev_ext_name(dev), dev->ext.handle);
return ext;
@@ -129,10 +131,10 @@ int dev_ext_release(struct device *dev)
handle_ptr = dev->ext.handle;
if (!(r = _ext_registry[dev->ext.src].dev_ext_release(dev)))
log_error("%s: Failed to release external handle [%s:%p]",
log_error("Failed to release external handle for device %s [%s:%p].",
dev_name(dev), dev_ext_name(dev), dev->ext.handle);
else
log_debug_devs("%s: External handle [%s:%p] detached",
log_debug_devs("External handle detached from device %s [%s:%p]",
dev_name(dev), dev_ext_name(dev), handle_ptr);
return r;
@@ -141,7 +143,7 @@ int dev_ext_release(struct device *dev)
int dev_ext_enable(struct device *dev, dev_ext_t src)
{
if (dev->ext.enabled && (dev->ext.src != src) && !dev_ext_release(dev)) {
log_error("%s: Failed to enable external handle [%s].",
log_error("Failed to enable external handle for device %s [%s].",
dev_name(dev), _ext_registry[src].name);
return 0;
}
@@ -158,7 +160,7 @@ int dev_ext_disable(struct device *dev)
return 1;
if (!dev_ext_release(dev)) {
log_error("%s: Failed to disable external handle [%s].",
log_error("Failed to disable external handle for device %s [%s].",
dev_name(dev), dev_ext_name(dev));
return 0;
}


@@ -16,7 +16,9 @@
#include "lib.h"
#include "device.h"
#include "metadata.h"
#include "lvmcache.h"
#include "memlock.h"
#include "locking.h"
#include <limits.h>
#include <sys/stat.h>
@@ -51,31 +53,14 @@
# endif
#endif
static DM_LIST_INIT(_open_devices);
static unsigned _dev_size_seqno = 1;
static const char *_reasons[] = {
"dev signatures",
"PV labels",
"VG metadata header",
"VG metadata content",
"extra VG metadata header",
"extra VG metadata content",
"LVM1 metadata",
"pool metadata",
"LV content",
"logging",
};
static const char *_reason_text(dev_io_reason_t reason)
{
return _reasons[(unsigned) reason];
}
/*-----------------------------------------------------------------
* The standard io loop that keeps submitting an io until it's
* all gone.
*---------------------------------------------------------------*/
static int _io(struct device_area *where, char *buffer, int should_write, dev_io_reason_t reason)
static int _io(struct device_area *where, char *buffer, int should_write)
{
int fd = dev_fd(where->dev);
ssize_t n = 0;
@@ -87,11 +72,6 @@ static int _io(struct device_area *where, char *buffer, int should_write, dev_io
return 0;
}
log_debug_io("%s %s:%8" PRIu64 " bytes (sync) at %" PRIu64 "%s (for %s)",
should_write ? "Write" : "Read ", dev_name(where->dev),
where->size, (uint64_t) where->start,
(should_write && test_mode()) ? " (test mode - suppressed)" : "", _reason_text(reason));
/*
* Skip all writes in test mode.
*/
@@ -135,60 +115,6 @@ static int _io(struct device_area *where, char *buffer, int should_write, dev_io
return (total == (size_t) where->size);
}
int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
unsigned int *logical_block_size)
{
int fd = dev->bcache_fd;
int do_close = 0;
unsigned int pbs = 0;
unsigned int lbs = 0;
if (dev->physical_block_size || dev->logical_block_size) {
*physical_block_size = dev->physical_block_size;
*logical_block_size = dev->logical_block_size;
return 1;
}
if (fd <= 0) {
if (!dev_open_readonly(dev))
return 0;
fd = dev_fd(dev);
do_close = 1;
}
#ifdef BLKPBSZGET /* not defined before kernel version 2.6.32 (e.g. rhel5) */
/*
* BLKPBSZGET from kernel comment for blk_queue_physical_block_size:
* "the lowest possible sector size that the hardware can operate on
* without reverting to read-modify-write operations"
*/
if (ioctl(fd, BLKPBSZGET, &pbs)) {
stack;
pbs = 0;
}
#endif
/*
* BLKSSZGET from kernel comment for blk_queue_logical_block_size:
* "the lowest possible block size that the storage device can address."
*/
if (ioctl(fd, BLKSSZGET, &lbs)) {
stack;
lbs = 0;
}
dev->physical_block_size = pbs;
dev->logical_block_size = lbs;
*physical_block_size = pbs;
*logical_block_size = lbs;
if (do_close && !dev_close_immediate(dev))
stack;
return 1;
}
/*-----------------------------------------------------------------
* LVM2 uses O_DIRECT when performing metadata io, which requires
* block size aligned accesses. If any io is not aligned we have
@@ -202,66 +128,57 @@ int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_
int dev_get_block_size(struct device *dev, unsigned int *physical_block_size, unsigned int *block_size)
{
const char *name = dev_name(dev);
int fd = dev->bcache_fd;
int do_close = 0;
int needs_open;
int r = 1;
if ((dev->phys_block_size > 0) && (dev->block_size > 0)) {
*physical_block_size = (unsigned int)dev->phys_block_size;
*block_size = (unsigned int)dev->block_size;
return 1;
}
needs_open = (!dev->open_count && (dev->phys_block_size == -1 || dev->block_size == -1));
if (fd <= 0) {
if (!dev->open_count) {
if (!dev_open_readonly(dev))
return_0;
do_close = 1;
}
fd = dev_fd(dev);
}
if (needs_open && !dev_open_readonly(dev))
return_0;
if (dev->block_size == -1) {
if (ioctl(fd, BLKBSZGET, &dev->block_size) < 0) {
if (ioctl(dev_fd(dev), BLKBSZGET, &dev->block_size) < 0) {
log_sys_error("ioctl BLKBSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Block size is %u bytes", name, dev->block_size);
log_debug_devs("%s: block size is %u bytes", name, dev->block_size);
}
#ifdef BLKPBSZGET
/* BLKPBSZGET is available in kernel >= 2.6.32 only */
if (dev->phys_block_size == -1) {
if (ioctl(fd, BLKPBSZGET, &dev->phys_block_size) < 0) {
if (ioctl(dev_fd(dev), BLKPBSZGET, &dev->phys_block_size) < 0) {
log_sys_error("ioctl BLKPBSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Physical block size is %u bytes", name, dev->phys_block_size);
log_debug_devs("%s: physical block size is %u bytes", name, dev->phys_block_size);
}
#elif defined (BLKSSZGET)
/* if we can't get physical block size, just use logical block size instead */
if (dev->phys_block_size == -1) {
if (ioctl(fd, BLKSSZGET, &dev->phys_block_size) < 0) {
if (ioctl(dev_fd(dev), BLKSSZGET, &dev->phys_block_size) < 0) {
log_sys_error("ioctl BLKSSZGET", name);
r = 0;
goto out;
}
log_debug_devs("%s: Physical block size can't be determined: Using logical block size of %u bytes", name, dev->phys_block_size);
log_debug_devs("%s: physical block size can't be determined, using logical "
"block size of %u bytes", name, dev->phys_block_size);
}
#else
/* if even BLKSSZGET is not available, use default 512b */
if (dev->phys_block_size == -1) {
dev->phys_block_size = 512;
log_debug_devs("%s: Physical block size can't be determined: Using block size of %u bytes instead", name, dev->phys_block_size);
log_debug_devs("%s: physical block size can't be determined, using block "
"size of %u bytes instead", name, dev->phys_block_size);
}
#endif
*physical_block_size = (unsigned int) dev->phys_block_size;
*block_size = (unsigned int) dev->block_size;
out:
if (do_close && !dev_close_immediate(dev))
if (needs_open && !dev_close(dev))
stack;
return r;
@@ -290,12 +207,11 @@ static void _widen_region(unsigned int block_size, struct device_area *region,
}
static int _aligned_io(struct device_area *where, char *buffer,
int should_write, dev_io_reason_t reason)
int should_write)
{
char *bounce, *bounce_buf;
unsigned int physical_block_size = 0;
unsigned int block_size = 0;
unsigned buffer_was_widened = 0;
uintptr_t mask;
struct device_area widened;
int r = 0;
@@ -306,18 +222,14 @@ static int _aligned_io(struct device_area *where, char *buffer,
if (!block_size)
block_size = lvm_getpagesize();
mask = block_size - 1;
_widen_region(block_size, where, &widened);
/* Did we widen the buffer? When writing, this means a read-modify-write. */
if (where->size != widened.size || where->start != widened.start) {
buffer_was_widened = 1;
log_debug_io("Widening request for %" PRIu64 " bytes at %" PRIu64 " to %" PRIu64 " bytes at %" PRIu64 " on %s (for %s)",
where->size, (uint64_t) where->start, widened.size, (uint64_t) widened.start, dev_name(where->dev), _reason_text(reason));
} else if (!((uintptr_t) buffer & mask))
/* Perform the I/O directly. */
return _io(where, buffer, should_write, reason);
/* Do we need to use a bounce buffer? */
mask = block_size - 1;
if (!memcmp(where, &widened, sizeof(widened)) &&
!((uintptr_t) buffer & mask))
return _io(where, buffer, should_write);
/* Allocate a bounce buffer with an extra block */
if (!(bounce_buf = bounce = dm_malloc((size_t) widened.size + block_size))) {
@@ -331,12 +243,10 @@ static int _aligned_io(struct device_area *where, char *buffer,
if (((uintptr_t) bounce) & mask)
bounce = (char *) ((((uintptr_t) bounce) + mask) & ~mask);
/* Do we need to read into the bounce buffer? */
if ((!should_write || buffer_was_widened) &&
!_io(&widened, bounce, 0, reason)) {
/* channel the io through the bounce buffer */
if (!_io(&widened, bounce, 0)) {
if (!should_write)
goto_out;
/* FIXME Handle errors properly! */
/* FIXME pre-extend the file */
memset(bounce, '\n', widened.size);
}
@@ -346,7 +256,7 @@ static int _aligned_io(struct device_area *where, char *buffer,
(size_t) where->size);
/* ... then we write */
if (!(r = _io(&widened, bounce, 1, reason)))
if (!(r = _io(&widened, bounce, 1)))
stack;
goto out;
@@ -392,8 +302,6 @@ static int _dev_get_size_file(struct device *dev, uint64_t *size)
static int _dev_get_size_dev(struct device *dev, uint64_t *size)
{
const char *name = dev_name(dev);
int fd = dev->bcache_fd;
int do_close = 0;
if (dev->size_seqno == _dev_size_seqno) {
log_very_verbose("%s: using cached size %" PRIu64 " sectors",
@@ -402,17 +310,13 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
return 1;
}
if (fd <= 0) {
if (!dev_open_readonly(dev))
return_0;
fd = dev_fd(dev);
do_close = 1;
}
if (!dev_open_readonly(dev))
return_0;
if (ioctl(fd, BLKGETSIZE64, size) < 0) {
if (ioctl(dev_fd(dev), BLKGETSIZE64, size) < 0) {
log_sys_error("ioctl BLKGETSIZE64", name);
if (do_close && !dev_close_immediate(dev))
stack;
if (!dev_close(dev))
log_sys_error("close", name);
return 0;
}
@@ -420,10 +324,10 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
dev->size = *size;
dev->size_seqno = _dev_size_seqno;
log_very_verbose("%s: size is %" PRIu64 " sectors", name, *size);
if (!dev_close(dev))
log_sys_error("close", name);
if (do_close && !dev_close_immediate(dev))
stack;
log_very_verbose("%s: size is %" PRIu64 " sectors", name, *size);
return 1;
}
@@ -431,24 +335,18 @@ static int _dev_get_size_dev(struct device *dev, uint64_t *size)
static int _dev_read_ahead_dev(struct device *dev, uint32_t *read_ahead)
{
long read_ahead_long;
int fd = dev->bcache_fd;
int do_close = 0;
if (dev->read_ahead != -1) {
*read_ahead = (uint32_t) dev->read_ahead;
return 1;
}
if (fd <= 0) {
if (!dev_open_readonly(dev))
return_0;
fd = dev_fd(dev);
do_close = 1;
}
if (!dev_open_readonly(dev))
return_0;
if (ioctl(fd, BLKRAGET, &read_ahead_long) < 0) {
if (ioctl(dev->fd, BLKRAGET, &read_ahead_long) < 0) {
log_sys_error("ioctl BLKRAGET", dev_name(dev));
if (do_close && !dev_close_immediate(dev))
if (!dev_close(dev))
stack;
return 0;
}
@@ -459,8 +357,8 @@ static int _dev_read_ahead_dev(struct device *dev, uint32_t *read_ahead)
log_very_verbose("%s: read_ahead is %u sectors",
dev_name(dev), *read_ahead);
if (do_close && !dev_close_immediate(dev))
log_sys_error("close", dev_name(dev));
if (!dev_close(dev))
stack;
return 1;
}
@@ -475,20 +373,18 @@ static int _dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64
discard_range[0] = offset_bytes;
discard_range[1] = size_bytes;
log_debug_devs("Discarding %" PRIu64 " bytes offset %" PRIu64 " bytes on %s. %s",
size_bytes, offset_bytes, dev_name(dev),
test_mode() ? " (test mode - suppressed)" : "");
if (!test_mode() && ioctl(dev->fd, BLKDISCARD, &discard_range) < 0) {
log_debug_devs("Discarding %" PRIu64 " bytes offset %" PRIu64 " bytes on %s.",
size_bytes, offset_bytes, dev_name(dev));
if (ioctl(dev->fd, BLKDISCARD, &discard_range) < 0) {
log_error("%s: BLKDISCARD ioctl at offset %" PRIu64 " size %" PRIu64 " failed: %s.",
dev_name(dev), offset_bytes, size_bytes, strerror(errno));
if (!dev_close_immediate(dev))
if (!dev_close(dev))
stack;
/* It doesn't matter if discard failed, so return success. */
return 1;
}
if (!dev_close_immediate(dev))
if (!dev_close(dev))
stack;
return 1;
@@ -567,15 +463,13 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
return 1;
}
if (dev->open_count && !need_excl)
log_debug_devs("%s: Already opened read-only. Upgrading "
if (dev->open_count && !need_excl) {
log_debug_devs("%s already opened read-only. Upgrading "
"to read-write.", dev_name(dev));
dev->open_count++;
}
/* dev_close_immediate will decrement this */
dev->open_count++;
if (!dev_close_immediate(dev))
stack;
dev_close_immediate(dev);
// FIXME: dev with DEV_ALLOCED is released
// but code is referencing it
}
@@ -626,7 +520,10 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
}
}
#endif
log_sys_debug("open", name);
if (quiet)
log_sys_debug("open", name);
else
log_sys_error("open", name);
dev->flags |= DEV_OPEN_FAILURE;
return 0;
@@ -653,8 +550,7 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
if (!(dev->flags & DEV_REGULAR) &&
((fstat(dev->fd, &buf) < 0) || (buf.st_rdev != dev->dev))) {
log_error("%s: fstat failed: Has device name changed?", name);
if (!dev_close_immediate(dev))
stack;
dev_close_immediate(dev);
return 0;
}
@@ -666,6 +562,8 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
if ((flags & O_CREAT) && !(flags & O_TRUNC))
dev->end = lseek(dev->fd, (off_t) 0, SEEK_END);
dm_list_add(&_open_devices, &dev->open_list);
log_debug_devs("Opened %s %s%s%s", dev_name(dev),
dev->flags & DEV_OPENED_RW ? "RW" : "RO",
dev->flags & DEV_OPENED_EXCL ? " O_EXCL" : "",
@@ -702,21 +600,27 @@ int dev_open_readonly_quiet(struct device *dev)
int dev_test_excl(struct device *dev)
{
int flags = 0;
int flags;
int r;
flags = vg_write_lock_held() ? O_RDWR : O_RDONLY;
flags |= O_EXCL;
flags |= O_RDWR;
return dev_open_flags(dev, flags, 1, 1);
r = dev_open_flags(dev, flags, 1, 1);
if (r)
dev_close_immediate(dev);
return r;
}
static void _close(struct device *dev)
{
if (close(dev->fd))
log_sys_debug("close", dev_name(dev));
log_sys_error("close", dev_name(dev));
dev->fd = -1;
dev->phys_block_size = -1;
dev->block_size = -1;
dm_list_del(&dev->open_list);
log_debug_devs("Closed %s", dev_name(dev));
@@ -726,6 +630,7 @@ static void _close(struct device *dev)
static int _dev_close(struct device *dev, int immediate)
{
if (dev->fd < 0) {
log_error("Attempt to close device '%s' "
"which is not open.", dev_name(dev));
@@ -744,7 +649,9 @@ static int _dev_close(struct device *dev, int immediate)
log_debug_devs("%s: Immediate close attempt while still referenced",
dev_name(dev));
if (immediate || (dev->open_count < 1))
/* Close unless device is known to belong to a locked VG */
if (immediate ||
(dev->open_count < 1 && !lvmcache_pvid_is_locked(dev->pvid)))
_close(dev);
return 1;
@@ -760,6 +667,18 @@ int dev_close_immediate(struct device *dev)
return _dev_close(dev, 1);
}
void dev_close_all(void)
{
struct dm_list *doh, *doht;
struct device *dev;
dm_list_iterate_safe(doh, doht, &_open_devices) {
dev = dm_list_struct_base(doh, struct device, open_list);
if (dev->open_count < 1)
_close(dev);
}
}
static inline int _dev_is_valid(struct device *dev)
{
return (dev->max_error_count == NO_DEV_ERROR_COUNT_LIMIT ||
@@ -774,7 +693,7 @@ static void _dev_inc_error_count(struct device *dev)
dev->max_error_count, dev_name(dev));
}
int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer)
int dev_read(struct device *dev, uint64_t offset, size_t len, void *buffer)
{
struct device_area where;
int ret;
@@ -789,7 +708,9 @@ int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t re
where.start = offset;
where.size = len;
ret = _aligned_io(&where, buffer, 0, reason);
// fprintf(stderr, "READ: %s, %lld, %d\n", dev_name(dev), offset, len);
ret = _aligned_io(&where, buffer, 0);
if (!ret)
_dev_inc_error_count(dev);
@@ -802,9 +723,9 @@ int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t re
* 'buf' should be len+len2.
*/
int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
uint64_t offset2, size_t len2, dev_io_reason_t reason, char *buf)
uint64_t offset2, size_t len2, char *buf)
{
if (!dev_read(dev, offset, len, reason, buf)) {
if (!dev_read(dev, offset, len, buf)) {
log_error("Read from %s failed", dev_name(dev));
return 0;
}
@@ -816,7 +737,7 @@ int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
if (!len2)
return 1;
if (!dev_read(dev, offset2, len2, reason, buf + len)) {
if (!dev_read(dev, offset2, len2, buf + len)) {
log_error("Circular read from %s failed",
dev_name(dev));
return 0;
@@ -830,14 +751,14 @@ int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
*/
/* FIXME pre-extend the file */
int dev_append(struct device *dev, size_t len, dev_io_reason_t reason, char *buffer)
int dev_append(struct device *dev, size_t len, char *buffer)
{
int r;
if (!dev->open_count)
return_0;
r = dev_write(dev, dev->end, len, reason, buffer);
r = dev_write(dev, dev->end, len, buffer);
dev->end += (uint64_t) len;
#ifndef O_DIRECT_SUPPORT
@@ -846,7 +767,7 @@ int dev_append(struct device *dev, size_t len, dev_io_reason_t reason, char *buf
return r;
}
int dev_write(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer)
int dev_write(struct device *dev, uint64_t offset, size_t len, void *buffer)
{
struct device_area where;
int ret;
@@ -857,25 +778,20 @@ int dev_write(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t r
if (!_dev_is_valid(dev))
return 0;
if (!len) {
log_error(INTERNAL_ERROR "Attempted to write 0 bytes to %s at " FMTu64, dev_name(dev), offset);
return 0;
}
where.dev = dev;
where.start = offset;
where.size = len;
dev->flags |= DEV_ACCESSED_W;
ret = _aligned_io(&where, buffer, 1, reason);
ret = _aligned_io(&where, buffer, 1);
if (!ret)
_dev_inc_error_count(dev);
return ret;
}
int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, int value)
int dev_set(struct device *dev, uint64_t offset, size_t len, int value)
{
size_t s;
char buffer[4096] __attribute__((aligned(8)));
@@ -894,7 +810,7 @@ int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t rea
memset(buffer, value, sizeof(buffer));
while (1) {
s = len > sizeof(buffer) ? sizeof(buffer) : len;
if (!dev_write(dev, offset, s, reason, buffer))
if (!dev_write(dev, offset, s, buffer))
break;
len -= s;
@@ -911,3 +827,324 @@ int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t rea
return (len == 0);
}
#ifdef AIO_SUPPORT
/*
* io_setup() wrapper:
* async_event_count is the max number of concurrent async
* i/os, i.e. the number of devices that can be read at once
*
* max_io_alloc_count: max number of aio structs to allocate,
* each with a buf_len size buffer.
*
* max_buf_alloc_bytes: max number of bytes to use for buffers
* attached to all aio structs; each aio struct gets a
* buf_len size buffer.
*
* When only max_io_alloc_count is set, it is used directly.
*
* When only max_buf_alloc_bytes is set, the number of aio
* structs is determined by this number divided by buf_len.
*
* When both are set, max_io_alloc_count is reduced, if needed,
* to whatever value max_buf_alloc_bytes would allow.
*
* When both are zero, there is no limit on the number of aio
* structs. If allocation fails for an aio struct or its buffer,
* the code should revert to synchronous io.
*/
struct dev_async_context *dev_async_context_setup(unsigned async_event_count,
unsigned max_io_alloc_count,
unsigned max_buf_alloc_bytes,
int buf_len)
{
struct dev_async_context *ac;
unsigned nr_events = DEFAULT_ASYNC_EVENTS;
int count;
int error;
if (async_event_count)
nr_events = async_event_count;
if (!(ac = malloc(sizeof(struct dev_async_context))))
return_NULL;
memset(ac, 0, sizeof(struct dev_async_context));
dm_list_init(&ac->unused_ios);
error = io_setup(nr_events, &ac->aio_ctx);
if (error < 0) {
log_warn("WARNING: async io setup error %d with %u events.", error, nr_events);
free(ac);
return_NULL;
}
if (!max_io_alloc_count && !max_buf_alloc_bytes)
count = 0;
else if (!max_io_alloc_count && max_buf_alloc_bytes)
count = max_buf_alloc_bytes / buf_len;
else if (max_io_alloc_count && max_buf_alloc_bytes) {
if (max_io_alloc_count * buf_len > max_buf_alloc_bytes)
count = max_buf_alloc_bytes / buf_len;
else
count = max_io_alloc_count;
} else
count = max_io_alloc_count;
ac->max_ios = count;
return ac;
}
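As a quick illustration of the sizing rules described in the comment above, here is a standalone sketch; the limits and buffer size are invented for the example and are not lvm defaults:

#include <stdio.h>

/* Standalone illustration of the aio sizing rules; the numbers
 * below are made up for the example. */
int main(void)
{
	unsigned max_io_alloc_count = 64;
	unsigned max_buf_alloc_bytes = 4 * 1024 * 1024;
	unsigned buf_len = 128 * 1024;
	unsigned count;

	if (!max_io_alloc_count && !max_buf_alloc_bytes)
		count = 0;	/* no limit: allocate on demand */
	else if (!max_io_alloc_count)
		count = max_buf_alloc_bytes / buf_len;
	else if (max_buf_alloc_bytes &&
		 max_io_alloc_count * buf_len > max_buf_alloc_bytes)
		count = max_buf_alloc_bytes / buf_len;	/* reduced by byte cap */
	else
		count = max_io_alloc_count;

	/* 64 ios would need 8MiB of buffers, over the 4MiB cap,
	 * so the count is reduced to 4MiB / 128KiB = 32. */
	printf("max aio structs: %u\n", count);
	return 0;
}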
void dev_async_context_destroy(struct dev_async_context *ac)
{
io_destroy(ac->aio_ctx);
free(ac);
}
static struct dev_async_io *_async_io_alloc(int buf_len)
{
struct dev_async_io *aio;
char *buf;
char **p_buf;
/*
* mem pool doesn't seem to work for this, probably because
* of the memalign that follows.
*/
if (!(aio = malloc(sizeof(struct dev_async_io))))
return_NULL;
memset(aio, 0, sizeof(struct dev_async_io));
buf = NULL;
p_buf = &buf;
if (posix_memalign((void *)p_buf, getpagesize(), buf_len)) {
free(aio);
return_NULL;
}
memset(buf, 0, buf_len);
aio->buf = buf;
aio->buf_len = buf_len;
return aio;
}
static void _async_io_free(struct dev_async_io *aio)
{
if (aio->buf)
free(aio->buf);
free(aio);
}
int dev_async_alloc_ios(struct dev_async_context *ac, int num, int buf_len, int *available)
{
struct dev_async_io *aio;
int count;
int i;
/* FIXME: check if num wants more preallocated? */
if (!dm_list_empty(&ac->unused_ios))
return 1;
/*
* When no limit is used and no pre-alloc number is set,
* then no ios are allocated up front, but they are
* allocated as needed in get().
*/
if (!ac->max_ios && !num) {
*available = 0;
return 1;
}
if (num && !ac->max_ios)
count = num;
else if (!num && ac->max_ios)
count = ac->max_ios;
else if (num > ac->max_ios)
count = ac->max_ios;
else if (num < ac->max_ios)
count = num;
else
count = ac->max_ios;
for (i = 0; i < count; i++) {
if (!(aio = _async_io_alloc(buf_len))) {
ac->num_ios = i;
*available = i;
return 1;
}
dm_list_add(&ac->unused_ios, &aio->list);
}
ac->num_ios = count;
*available = count;
return 1;
}
void dev_async_free_ios(struct dev_async_context *ac)
{
struct dev_async_io *aio, *aio2;
dm_list_iterate_items_safe(aio, aio2, &ac->unused_ios) {
dm_list_del(&aio->list);
_async_io_free(aio);
}
}
struct dev_async_io *dev_async_io_get(struct dev_async_context *ac, int buf_len)
{
struct dm_list *aio_item;
struct dev_async_io *aio;
if (!(aio_item = dm_list_first(&ac->unused_ios)))
goto alloc_new;
aio = dm_list_item(aio_item, struct dev_async_io);
dm_list_del(&aio->list);
return aio;
alloc_new:
/* alloc on demand if there is no max or we have used less than max */
if (!ac->max_ios || (ac->num_ios < ac->max_ios)) {
if ((aio = _async_io_alloc(buf_len))) {
ac->num_ios++;
return aio;
}
}
return NULL;
}
void dev_async_io_put(struct dev_async_context *ac, struct dev_async_io *aio)
{
if (!ac)
_async_io_free(aio);
else {
memset(aio->buf, 0, aio->buf_len);
aio->dev = NULL;
aio->len = 0;
aio->done = 0;
aio->result = 0;
dm_list_add(&ac->unused_ios, &aio->list);
}
}
/* io_submit() wrapper */
int dev_async_read_submit(struct dev_async_context *ac, struct dev_async_io *aio,
struct device *dev, uint32_t len, uint64_t offset, int *nospace)
{
struct iocb *iocb = &aio->iocb;
int error;
*nospace = 0;
if (len > aio->buf_len)
return_0;
aio->len = len;
iocb->data = aio;
iocb->aio_fildes = dev_fd(dev);
iocb->aio_lio_opcode = IO_CMD_PREAD;
iocb->u.c.buf = aio->buf;
iocb->u.c.nbytes = len;
iocb->u.c.offset = offset;
error = io_submit(ac->aio_ctx, 1, &iocb);
if (error == -EAGAIN)
*nospace = 1;
if (error < 0)
return 0;
return 1;
}
/* io_getevents() wrapper */
int dev_async_getevents(struct dev_async_context *ac, int wait_count, struct timespec *timeout,
int *done_count)
{
int wait_nr;
int rv;
int i;
*done_count = 0;
retry:
memset(&ac->events, 0, sizeof(ac->events));
if (wait_count >= MAX_GET_EVENTS)
wait_nr = MAX_GET_EVENTS;
else
wait_nr = wait_count;
rv = io_getevents(ac->aio_ctx, 1, wait_nr, (struct io_event *)&ac->events, timeout);
if (rv == -EINTR)
goto retry;
if (rv < 0)
return 0;
if (!rv)
return 1;
for (i = 0; i < rv; i++) {
struct iocb *iocb = ac->events[i].obj;
struct dev_async_io *aio = iocb->data;
aio->result = ac->events[i].res;
aio->done = 1;
}
*done_count = rv;
return 1;
}
#else /* AIO_SUPPORT */
struct dev_async_context *dev_async_context_setup(unsigned async_event_count,
unsigned max_io_alloc_count,
unsigned max_buf_alloc_bytes,
int buf_len)
{
return NULL;
}
void dev_async_context_destroy(struct dev_async_context *ac)
{
}
int dev_async_alloc_ios(struct dev_async_context *ac, int num, int buf_len, int *available)
{
return 0;
}
void dev_async_free_ios(struct dev_async_context *ac)
{
}
struct dev_async_io *dev_async_io_get(struct dev_async_context *ac, int buf_len)
{
return NULL;
}
void dev_async_io_put(struct dev_async_context *ac, struct dev_async_io *aio)
{
}
int dev_async_read_submit(struct dev_async_context *ac, struct dev_async_io *aio,
struct device *dev, uint32_t len, uint64_t offset, int *nospace)
{
return 0;
}
int dev_async_getevents(struct dev_async_context *ac, int wait_count, struct timespec *timeout,
int *done_count)
{
return 0;
}
#endif /* AIO_SUPPORT */


@@ -18,22 +18,27 @@
#define LUKS_SIGNATURE "LUKS\xba\xbe"
#define LUKS_SIGNATURE_SIZE 6
int dev_is_luks(struct device *dev, uint64_t *offset_found, int full)
int dev_is_luks(struct device *dev, uint64_t *offset_found)
{
char buf[LUKS_SIGNATURE_SIZE];
int ret = -1;
if (!scan_bcache)
return -EAGAIN;
if (!dev_open_readonly(dev)) {
stack;
return -1;
}
if (offset_found)
*offset_found = 0;
if (!dev_read_bytes(dev, 0, LUKS_SIGNATURE_SIZE, buf))
if (!dev_read(dev, 0, LUKS_SIGNATURE_SIZE, buf))
goto_out;
ret = memcmp(buf, LUKS_SIGNATURE, LUKS_SIGNATURE_SIZE) ? 0 : 1;
out:
if (!dev_close(dev))
stack;
return ret;
}


@@ -1,174 +0,0 @@
/*
* Copyright (C) 2018 Red Hat, Inc. All rights reserved.
*
* This file is part of LVM2.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU Lesser General Public License v.2.1.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "lib.h"
#include "dev-type.h"
#include "xlate.h"
/*
* These lvm1 structs just used NAME_LEN in the previous format1 lvm2 code, but
* NAME_LEN was defined as 128 in generic lvm2 code that was not lvm1-specific
* and not disk-format-specific.
*/
#define LVM1_NAME_LEN 128
struct data_area {
uint32_t base;
uint32_t size;
} __attribute__ ((packed));
struct pv_disk {
int8_t id[2];
uint16_t version; /* lvm version */
struct data_area pv_on_disk;
struct data_area vg_on_disk;
struct data_area pv_uuidlist_on_disk;
struct data_area lv_on_disk;
struct data_area pe_on_disk;
int8_t pv_uuid[LVM1_NAME_LEN];
int8_t vg_name[LVM1_NAME_LEN];
int8_t system_id[LVM1_NAME_LEN]; /* for vgexport/vgimport */
uint32_t pv_major;
uint32_t pv_number;
uint32_t pv_status;
uint32_t pv_allocatable;
uint32_t pv_size;
uint32_t lv_cur;
uint32_t pe_size;
uint32_t pe_total;
uint32_t pe_allocated;
/* only present on version == 2 pv's */
uint32_t pe_start;
} __attribute__ ((packed));
int dev_is_lvm1(struct device *dev, char *buf, int buflen)
{
struct pv_disk *pvd = (struct pv_disk *) buf;
uint32_t version;
int ret;
version = xlate16(pvd->version);
if (pvd->id[0] == 'H' && pvd->id[1] == 'M' &&
(version == 1 || version == 2))
ret = 1;
else
ret = 0;
return ret;
}
#define POOL_MAGIC 0x011670
#define POOL_NAME_SIZE 256
#define NSPMajorVersion 4
#define NSPMinorVersion 1
#define NSPUpdateLevel 3
/* When checking for version matching, the first two numbers **
** are important for metadata formats, a.k.a pool labels. **
** All the numbers are important when checking if the user **
** space tools match up with the kernel module............. */
#define POOL_VERSION (NSPMajorVersion << 16 | \
NSPMinorVersion << 8 | \
NSPUpdateLevel)
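As a worked example of this packing: POOL_VERSION = (4 << 16) | (1 << 8) | 3 = 0x040103, and dev_is_pool() below compares only pl_version >> 8 (0x0401), which is how the rightmost 8 bits (the update level) get ignored for the on-disk format check.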
struct pool_disk {
uint64_t pl_magic; /* Pool magic number */
uint64_t pl_pool_id; /* Unique pool identifier */
char pl_pool_name[POOL_NAME_SIZE]; /* Name of pool */
uint32_t pl_version; /* Pool version */
uint32_t pl_subpools; /* Number of subpools in this pool */
uint32_t pl_sp_id; /* Subpool number within pool */
uint32_t pl_sp_devs; /* Number of data partitions in this subpool */
uint32_t pl_sp_devid; /* Partition number within subpool */
uint32_t pl_sp_type; /* Partition type */
uint64_t pl_blocks; /* Number of blocks in this partition */
uint32_t pl_striping; /* Striping size within subpool */
/*
* If the number of DMEP devices is zero, then the next field **
* ** (pl_sp_dmepid) becomes the subpool ID for redirection. In **
* ** other words, if this subpool does not have the capability **
* ** to do DMEP, then it must specify which subpool will do it **
* ** in its place
*/
/*
* While the next 3 fields are no longer used, they must stay to keep **
* ** backward compatibility...........................................
*/
uint32_t pl_sp_dmepdevs;/* Number of dmep devices in this subpool */
uint32_t pl_sp_dmepid; /* Dmep device number within subpool */
uint32_t pl_sp_weight; /* if dmep dev, pref to using it */
uint32_t pl_minor; /* the pool minor number */
uint32_t pl_padding; /* reminder - think about alignment */
/*
* Even though we're zeroing out 8k at the front of the disk before
* writing the label, putting this in
*/
char pl_reserve[184]; /* bump the structure size out to 512 bytes */
};
#define CPIN_8(x, y, z) {memcpy((x), (y), (z));}
#define CPIN_16(x, y) {(x) = xlate16_be((y));}
#define CPIN_32(x, y) {(x) = xlate32_be((y));}
#define CPIN_64(x, y) {(x) = xlate64_be((y));}
static void pool_label_in(struct pool_disk *pl, void *buf)
{
struct pool_disk *bufpl = (struct pool_disk *) buf;
CPIN_64(pl->pl_magic, bufpl->pl_magic);
CPIN_64(pl->pl_pool_id, bufpl->pl_pool_id);
CPIN_8(pl->pl_pool_name, bufpl->pl_pool_name, POOL_NAME_SIZE);
CPIN_32(pl->pl_version, bufpl->pl_version);
CPIN_32(pl->pl_subpools, bufpl->pl_subpools);
CPIN_32(pl->pl_sp_id, bufpl->pl_sp_id);
CPIN_32(pl->pl_sp_devs, bufpl->pl_sp_devs);
CPIN_32(pl->pl_sp_devid, bufpl->pl_sp_devid);
CPIN_32(pl->pl_sp_type, bufpl->pl_sp_type);
CPIN_64(pl->pl_blocks, bufpl->pl_blocks);
CPIN_32(pl->pl_striping, bufpl->pl_striping);
CPIN_32(pl->pl_sp_dmepdevs, bufpl->pl_sp_dmepdevs);
CPIN_32(pl->pl_sp_dmepid, bufpl->pl_sp_dmepid);
CPIN_32(pl->pl_sp_weight, bufpl->pl_sp_weight);
CPIN_32(pl->pl_minor, bufpl->pl_minor);
CPIN_32(pl->pl_padding, bufpl->pl_padding);
CPIN_8(pl->pl_reserve, bufpl->pl_reserve, 184);
}
int dev_is_pool(struct device *dev, char *buf, int buflen)
{
struct pool_disk pd;
int ret;
pool_label_in(&pd, buf);
/* can ignore 8 rightmost bits for ondisk format check */
if ((pd.pl_magic == POOL_MAGIC) &&
(pd.pl_version >> 8 == POOL_VERSION >> 8))
ret = 1;
else
ret = 0;
return ret;
}


@@ -16,7 +16,6 @@
#include "lib.h"
#include "dev-type.h"
#include "xlate.h"
#include "crc.h"
#ifdef UDEV_SYNC_SUPPORT
#include <libudev.h> /* for MD detection using udev db records */
#include "dev-ext-udev-constants.h"
@@ -38,114 +37,53 @@ static int _dev_has_md_magic(struct device *dev, uint64_t sb_offset)
uint32_t md_magic;
/* Version 1 is little endian; version 0.90.0 is machine endian */
if (!dev_read_bytes(dev, sb_offset, sizeof(uint32_t), &md_magic))
return_0;
if ((md_magic == MD_SB_MAGIC) ||
((MD_SB_MAGIC != xlate32(MD_SB_MAGIC)) && (md_magic == xlate32(MD_SB_MAGIC))))
if (dev_read(dev, sb_offset, sizeof(uint32_t), &md_magic) &&
((md_magic == MD_SB_MAGIC) ||
((MD_SB_MAGIC != xlate32(MD_SB_MAGIC)) && (md_magic == xlate32(MD_SB_MAGIC)))))
return 1;
return 0;
}
#define IMSM_SIGNATURE "Intel Raid ISM Cfg Sig. "
#define IMSM_SIG_LEN (sizeof(IMSM_SIGNATURE) - 1)
static int _dev_has_imsm_magic(struct device *dev, uint64_t devsize_sectors)
{
char imsm_signature[IMSM_SIG_LEN];
uint64_t off = (devsize_sectors * 512) - 1024;
if (!dev_read_bytes(dev, off, IMSM_SIG_LEN, imsm_signature))
return_0;
if (!memcmp(imsm_signature, IMSM_SIGNATURE, IMSM_SIG_LEN))
return 1;
return 0;
}
#define DDF_MAGIC 0xDE11DE11
struct ddf_header {
uint32_t magic;
uint32_t crc;
char guid[24];
char revision[8];
char padding[472];
};
static int _dev_has_ddf_magic(struct device *dev, uint64_t devsize_sectors, uint64_t *sb_offset)
{
struct ddf_header hdr;
uint32_t crc, our_crc;
uint64_t off;
uint64_t devsize_bytes = devsize_sectors * 512;
if (devsize_bytes < 0x30000)
return 0;
/* 512 bytes before the end of device (from libblkid) */
off = ((devsize_bytes / 0x200) - 1) * 0x200;
if (!dev_read_bytes(dev, off, 512, &hdr))
return_0;
if ((hdr.magic == cpu_to_be32(DDF_MAGIC)) ||
(hdr.magic == cpu_to_le32(DDF_MAGIC))) {
crc = hdr.crc;
hdr.crc = 0xffffffff;
our_crc = calc_crc(0, (const uint8_t *)&hdr, 512);
if ((cpu_to_be32(our_crc) == crc) ||
(cpu_to_le32(our_crc) == crc)) {
*sb_offset = off;
return 1;
} else {
log_debug_devs("Found md ddf magic at %llu wrong crc %x disk %x %s",
(unsigned long long)off, our_crc, crc, dev_name(dev));
return 0;
}
}
/* 128KB before the end of device (from libblkid) */
off = ((devsize_bytes / 0x200) - 257) * 0x200;
if (!dev_read_bytes(dev, off, 512, &hdr))
return_0;
if ((hdr.magic == cpu_to_be32(DDF_MAGIC)) ||
(hdr.magic == cpu_to_le32(DDF_MAGIC))) {
crc = hdr.crc;
hdr.crc = 0xffffffff;
our_crc = calc_crc(0, (const uint8_t *)&hdr, 512);
if ((cpu_to_be32(our_crc) == crc) ||
(cpu_to_le32(our_crc) == crc)) {
*sb_offset = off;
return 1;
} else {
log_debug_devs("Found md ddf magic at %llu wrong crc %x disk %x %s",
(unsigned long long)off, our_crc, crc, dev_name(dev));
return 0;
}
}
return 0;
}
/*
* _udev_dev_is_md_component() only works if
* external_device_info_source="udev"
*
* but
*
* udev_dev_is_md_component() in dev-type.c only works if
* obtain_device_list_from_udev=1
*
* and neither of those config settings matches very well
* with what we're doing here.
* Calculate the position of the superblock.
* It is always aligned to a 4K boundary and
* depending on minor_version, it can be:
* 0: At least 8K, but less than 12K, from end of device
* 1: At start of device
* 2: 4K from start of device.
*/
typedef enum {
MD_MINOR_VERSION_MIN,
MD_MINOR_V0 = MD_MINOR_VERSION_MIN,
MD_MINOR_V1,
MD_MINOR_V2,
MD_MINOR_VERSION_MAX = MD_MINOR_V2
} md_minor_version_t;
static uint64_t _v1_sb_offset(uint64_t size, md_minor_version_t minor_version)
{
uint64_t sb_offset;
switch(minor_version) {
case MD_MINOR_V0:
sb_offset = (size - 8 * 2) & ~(4 * 2 - 1ULL);
break;
case MD_MINOR_V1:
sb_offset = 0;
break;
case MD_MINOR_V2:
sb_offset = 4 * 2;
break;
default:
log_warn(INTERNAL_ERROR "WARNING: Unknown minor version %d.",
minor_version);
return 0;
}
sb_offset <<= SECTOR_SHIFT;
return sb_offset;
}
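A worked example of the offsets above (the device size is invented for illustration): for a device of 2097152 sectors, MD_MINOR_V0 yields sb_offset = (2097152 - 16) & ~7ULL = 2097136 sectors, and the final shift by SECTOR_SHIFT makes that byte offset 2097136 * 512 = 1073733632; MD_MINOR_V1 is byte 0 and MD_MINOR_V2 is byte 4096.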
#ifdef UDEV_SYNC_SUPPORT
static int _udev_dev_is_md(struct device *dev)
@@ -171,13 +109,11 @@ static int _udev_dev_is_md(struct device *dev)
/*
* Returns -1 on error
*/
static int _native_dev_is_md(struct device *dev, uint64_t *offset_found, int full)
static int _native_dev_is_md(struct device *dev, uint64_t *offset_found)
{
uint64_t size, sb_offset = 0;
int ret;
if (!scan_bcache)
return -EAGAIN;
int ret = 1;
md_minor_version_t minor;
uint64_t size, sb_offset;
if (!dev_get_size(dev, &size)) {
stack;
@@ -187,124 +123,47 @@ static int _native_dev_is_md(struct device *dev, uint64_t *offset_found, int ful
if (size < MD_RESERVED_SECTORS * 2)
return 0;
/*
* Some md versions locate the magic number at the end of the device.
* Those checks can't be satisfied with the initial scan data, and
* require an extra read i/o at the end of every device. Issuing
* an extra read to every device in every command, just to check for
* the old md format is a bad tradeoff.
*
* When "full" is set, we check a the start and end of the device for
* md magic numbers. When "full" is not set, we only check at the
* start of the device for the magic numbers. We decide for each
* command if it should do a full check (cmd->use_full_md_check),
* and set it for commands that could possibly write to an md dev
* (pvcreate/vgcreate/vgextend).
*/
/*
* md superblock version 1.1 at offset 0 from start
*/
if (_dev_has_md_magic(dev, 0)) {
log_debug_devs("Found md magic number at offset 0 of %s.", dev_name(dev));
ret = 1;
goto out;
if (!dev_open_readonly(dev)) {
stack;
return -1;
}
/*
* md superblock version 1.2 at offset 4KB from start
*/
if (_dev_has_md_magic(dev, 4096)) {
log_debug_devs("Found md magic number at offset 4096 of %s.", dev_name(dev));
ret = 1;
goto out;
}
if (!full) {
ret = 0;
goto out;
}
/*
* Handle superblocks at the end of the device.
*/
/*
* md superblock version 0 at 64KB from end of device
* (after end is aligned to 64KB)
*/
/* Check if it is an md component device. */
/* Version 0.90.0 */
sb_offset = MD_NEW_SIZE_SECTORS(size) << SECTOR_SHIFT;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
if (_dev_has_md_magic(dev, sb_offset))
goto out;
}
/*
* md superblock version 1.0 at 8KB from end of device
*/
sb_offset = ((size - 8 * 2) & ~(4 * 2 - 1ULL)) << SECTOR_SHIFT;
if (_dev_has_md_magic(dev, sb_offset)) {
log_debug_devs("Found md magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
/*
* md imsm superblock 1K from end of device
*/
if (_dev_has_imsm_magic(dev, size)) {
log_debug_devs("Found md imsm magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
sb_offset = 1024;
ret = 1;
goto out;
}
/*
* md ddf superblock 512 bytes from end, or 128KB from end
*/
if (_dev_has_ddf_magic(dev, size, &sb_offset)) {
log_debug_devs("Found md ddf magic number at offset %llu of %s.", (unsigned long long)sb_offset, dev_name(dev));
ret = 1;
goto out;
}
minor = MD_MINOR_VERSION_MIN;
/* Version 1, try v1.0 -> v1.2 */
do {
sb_offset = _v1_sb_offset(size, minor);
if (_dev_has_md_magic(dev, sb_offset))
goto out;
} while (++minor <= MD_MINOR_VERSION_MAX);
ret = 0;
out:
if (!dev_close(dev))
stack;
if (ret && offset_found)
*offset_found = sb_offset;
return ret;
}
int dev_is_md(struct device *dev, uint64_t *offset_found, int full)
int dev_is_md(struct device *dev, uint64_t *offset_found)
{
int ret;
/*
* If non-native device status source is selected, use it
* only if offset_found is not requested, as this
* information is not in udev db.
*/
if ((dev->ext.src == DEV_EXT_NONE) || offset_found) {
ret = _native_dev_is_md(dev, offset_found, full);
if (!full) {
if (!ret || (ret == -EAGAIN)) {
if (udev_dev_is_md_component(dev))
return 1;
}
}
return ret;
}
if ((dev->ext.src == DEV_EXT_NONE) || offset_found)
return _native_dev_is_md(dev, offset_found);
if (dev->ext.src == DEV_EXT_UDEV)
return _udev_dev_is_md(dev);
@@ -514,26 +373,6 @@ unsigned long dev_md_stripe_width(struct dev_types *dt, struct device *dev)
return stripe_width_sectors;
}
int dev_is_md_with_end_superblock(struct dev_types *dt, struct device *dev)
{
char version_string[MD_MAX_SYSFS_SIZE];
const char *attribute = "metadata_version";
if (MAJOR(dev->dev) != dt->md_major)
return 0;
if (_md_sysfs_attribute_scanf(dt, dev, attribute,
"%s", &version_string) != 1)
return -1;
log_very_verbose("Device %s %s is %s.",
dev_name(dev), attribute, version_string);
if (!strcmp(version_string, "1.0") || !strcmp(version_string, "0.90"))
return 1;
return 0;
}
#else
int dev_is_md(struct device *dev __attribute__((unused)),


@@ -35,21 +35,23 @@ static int _swap_detect_signature(const char *buf)
return 0;
}
int dev_is_swap(struct device *dev, uint64_t *offset_found, int full)
int dev_is_swap(struct device *dev, uint64_t *offset_found)
{
char buf[10];
uint64_t size;
unsigned page;
int ret = 0;
if (!scan_bcache)
return -EAGAIN;
if (!dev_get_size(dev, &size)) {
stack;
return -1;
}
if (!dev_open_readonly(dev)) {
stack;
return -1;
}
for (page = 0x1000; page <= MAX_PAGESIZE; page <<= 1) {
/*
* skip 32k pagesize since this does not seem to be supported
@@ -58,7 +60,8 @@ int dev_is_swap(struct device *dev, uint64_t *offset_found, int full)
continue;
if (size < (page >> SECTOR_SHIFT))
break;
if (!dev_read_bytes(dev, page - SIGNATURE_SIZE, SIGNATURE_SIZE, buf)) {
if (!dev_read(dev, page - SIGNATURE_SIZE,
SIGNATURE_SIZE, buf)) {
ret = -1;
break;
}
@@ -70,6 +73,9 @@ int dev_is_swap(struct device *dev, uint64_t *offset_found, int full)
}
}
if (!dev_close(dev))
stack;
return ret;
}


@@ -17,7 +17,6 @@
#include "xlate.h"
#include "config.h"
#include "metadata.h"
#include "label.h"
#include <libgen.h>
#include <ctype.h>
@@ -214,9 +213,6 @@ int dev_subsystem_part_major(struct dev_types *dt, struct device *dev)
if (MAJOR(dev->dev) == dt->device_mapper_major)
return 1;
if (MAJOR(dev->dev) == dt->md_major)
return 1;
if (MAJOR(dev->dev) == dt->drbd_major)
return 1;
@@ -367,7 +363,7 @@ static int _has_partition_table(struct device *dev)
uint16_t magic;
} __attribute__((packed)) buf; /* sizeof() == SECTOR_SIZE */
if (!dev_read_bytes(dev, UINT64_C(0), sizeof(buf), &buf))
if (!dev_read(dev, UINT64_C(0), sizeof(buf), &buf))
return_0;
/* FIXME Check for other types of partition table too */
@@ -436,9 +432,6 @@ static int _native_dev_is_partitioned(struct dev_types *dt, struct device *dev)
{
int r;
if (!scan_bcache)
return -EAGAIN;
if (!_is_partitionable(dt, dev))
return 0;
@@ -446,8 +439,17 @@ static int _native_dev_is_partitioned(struct dev_types *dt, struct device *dev)
if ((MAJOR(dev->dev) == dt->dasd_major) && dasd_is_cdl_formatted(dev))
return 1;
if (!dev_open_readonly_quiet(dev)) {
log_debug_devs("%s: failed to open device, considering device "
"is partitioned", dev_name(dev));
return 1;
}
r = _has_partition_table(dev);
if (!dev_close(dev))
stack;
return r;
}
@@ -504,7 +506,7 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
*/
if ((parts = dt->dev_type_array[major].max_partitions) > 1) {
if ((residue = minor % parts)) {
*result = MKDEV(major, (minor - residue));
*result = MKDEV((dev_t)major, (dev_t)(minor - residue));
ret = 2;
} else {
*result = dev->dev;
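A worked example of the residue arithmetic above (driver numbers invented for illustration): if a whole-disk driver reserves max_partitions = 16 minors per disk, a partition device with minor 35 gives residue = 35 % 16 = 3, so the primary (whole-disk) device is minor 35 - 3 = 32, and ret = 2 signals that a parent device was derived.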
@@ -574,7 +576,7 @@ int dev_get_primary_dev(struct dev_types *dt, struct device *dev, dev_t *result)
path, buffer);
goto out;
}
*result = MKDEV(major, minor);
*result = MKDEV((dev_t)major, (dev_t)minor);
ret = 2;
out:
if (fp && fclose(fp))
@@ -673,7 +675,7 @@ static int _blkid_wipe(blkid_probe probe, struct device *dev, const char *name,
} else
log_verbose(_msg_wiping, type, name);
if (!dev_write_zeros(dev, offset_value, len)) {
if (!dev_set(dev, offset_value, len, 0)) {
log_error("Failed to wipe %s signature on %s.", type, name);
return 0;
}
@@ -746,12 +748,12 @@ out:
static int _wipe_signature(struct device *dev, const char *type, const char *name,
int wipe_len, int yes, force_t force, int *wiped,
int (*signature_detection_fn)(struct device *dev, uint64_t *offset_found, int full))
int (*signature_detection_fn)(struct device *dev, uint64_t *offset_found))
{
int wipe;
uint64_t offset_found;
wipe = signature_detection_fn(dev, &offset_found, 1);
wipe = signature_detection_fn(dev, &offset_found);
if (wipe == -1) {
log_error("Fatal error while trying to detect %s on %s.",
type, name);
@@ -770,7 +772,7 @@ static int _wipe_signature(struct device *dev, const char *type, const char *nam
}
log_print_unless_silent("Wiping %s on %s.", type, name);
if (!dev_write_zeros(dev, offset_found, wipe_len)) {
if (!dev_set(dev, offset_found, wipe_len, 0)) {
log_error("Failed to wipe %s on %s.", type, name);
return 0;
}
@@ -1003,23 +1005,25 @@ int dev_is_rotational(struct dev_types *dt, struct device *dev)
* failed already due to timeout in udev - in both cases the
* udev_device_get_is_initialized returns 0.
*/
#define UDEV_DEV_IS_COMPONENT_ITERATION_COUNT 100
#define UDEV_DEV_IS_COMPONENT_USLEEP 100000
#define UDEV_DEV_IS_MPATH_COMPONENT_ITERATION_COUNT 100
#define UDEV_DEV_IS_MPATH_COMPONENT_USLEEP 100000
static struct udev_device *_udev_get_dev(struct device *dev)
int udev_dev_is_mpath_component(struct device *dev)
{
struct udev *udev_context = udev_get_library_context();
struct udev_device *udev_device = NULL;
const char *value;
int initialized = 0;
unsigned i = 0;
int ret = 0;
if (!udev_context) {
log_warn("WARNING: No udev context available to check if device %s is multipath component.", dev_name(dev));
return NULL;
return 0;
}
while (1) {
if (i >= UDEV_DEV_IS_COMPONENT_ITERATION_COUNT)
if (i >= UDEV_DEV_IS_MPATH_COMPONENT_ITERATION_COUNT)
break;
if (udev_device)
@@ -1027,7 +1031,7 @@ static struct udev_device *_udev_get_dev(struct device *dev)
if (!(udev_device = udev_device_new_from_devnum(udev_context, 'b', dev->dev))) {
log_warn("WARNING: Failed to get udev device handler for device %s.", dev_name(dev));
return NULL;
return 0;
}
#ifdef HAVE_LIBUDEV_UDEV_DEVICE_GET_IS_INITIALIZED
@@ -1039,35 +1043,19 @@ static struct udev_device *_udev_get_dev(struct device *dev)
#endif
log_debug("Device %s not initialized in udev database (%u/%u, %u microseconds).", dev_name(dev),
i + 1, UDEV_DEV_IS_COMPONENT_ITERATION_COUNT,
i * UDEV_DEV_IS_COMPONENT_USLEEP);
i + 1, UDEV_DEV_IS_MPATH_COMPONENT_ITERATION_COUNT,
i * UDEV_DEV_IS_MPATH_COMPONENT_USLEEP);
usleep(UDEV_DEV_IS_COMPONENT_USLEEP);
usleep(UDEV_DEV_IS_MPATH_COMPONENT_USLEEP);
i++;
}
if (!initialized) {
log_warn("WARNING: Device %s not initialized in udev database even after waiting %u microseconds.",
dev_name(dev), i * UDEV_DEV_IS_COMPONENT_USLEEP);
dev_name(dev), i * UDEV_DEV_IS_MPATH_COMPONENT_USLEEP);
goto out;
}
out:
return udev_device;
}
int udev_dev_is_mpath_component(struct device *dev)
{
struct udev_device *udev_device;
const char *value;
int ret = 0;
if (!obtain_device_list_from_udev())
return 0;
if (!(udev_device = _udev_get_dev(dev)))
return 0;
value = udev_device_get_property_value(udev_device, DEV_EXT_UDEV_BLKID_TYPE);
if (value && !strcmp(value, DEV_EXT_UDEV_BLKID_TYPE_MPATH)) {
log_debug("Device %s is multipath component based on blkid variable in udev db (%s=\"%s\").",
@@ -1087,31 +1075,6 @@ out:
udev_device_unref(udev_device);
return ret;
}
int udev_dev_is_md_component(struct device *dev)
{
struct udev_device *udev_device;
const char *value;
int ret = 0;
if (!obtain_device_list_from_udev())
return 0;
if (!(udev_device = _udev_get_dev(dev)))
return 0;
value = udev_device_get_property_value(udev_device, DEV_EXT_UDEV_BLKID_TYPE);
if (value && !strcmp(value, DEV_EXT_UDEV_BLKID_TYPE_SW_RAID)) {
log_debug("Device %s is md raid component based on blkid variable in udev db (%s=\"%s\").",
dev_name(dev), DEV_EXT_UDEV_BLKID_TYPE, value);
ret = 1;
goto out;
}
out:
udev_device_unref(udev_device);
return ret;
}
#else
int udev_dev_is_mpath_component(struct device *dev)
@@ -1119,9 +1082,4 @@ int udev_dev_is_mpath_component(struct device *dev)
return 0;
}
int udev_dev_is_md_component(struct device *dev)
{
return 0;
}
#endif


@@ -17,7 +17,6 @@
#include "device.h"
#include "display.h"
#include "label.h"
#define NUMBER_OF_MAJORS 4096
@@ -26,7 +25,7 @@
#else
# define MAJOR(x) major((x))
# define MINOR(x) minor((x))
# define MKDEV(x,y) makedev((dev_t)(x),(dev_t)(y))
# define MKDEV(x,y) makedev((x),(y))
#endif
#define PARTITION_SCSI_DEVICE (1 << 0)
@@ -57,15 +56,11 @@ const char *dev_subsystem_name(struct dev_types *dt, struct device *dev);
int major_is_scsi_device(struct dev_types *dt, int major);
/* Signature/superblock recognition with position returned where found. */
int dev_is_md(struct device *dev, uint64_t *sb, int full);
int dev_is_swap(struct device *dev, uint64_t *signature, int full);
int dev_is_luks(struct device *dev, uint64_t *signature, int full);
int dev_is_md(struct device *dev, uint64_t *sb);
int dev_is_swap(struct device *dev, uint64_t *signature);
int dev_is_luks(struct device *dev, uint64_t *signature);
int dasd_is_cdl_formatted(struct device *dev);
int udev_dev_is_mpath_component(struct device *dev);
int udev_dev_is_md_component(struct device *dev);
int dev_is_lvm1(struct device *dev, char *buf, int buflen);
int dev_is_pool(struct device *dev, char *buf, int buflen);
/* Signature wiping. */
#define TYPE_LVM1_MEMBER 0x001
@@ -77,7 +72,6 @@ int wipe_known_signatures(struct cmd_context *cmd, struct device *dev, const cha
/* Type-specific device properties */
unsigned long dev_md_stripe_width(struct dev_types *dt, struct device *dev);
int dev_is_md_with_end_superblock(struct dev_types *dt, struct device *dev);
/* Partitioning */
int major_max_partitions(struct dev_types *dt, int major);


@@ -19,6 +19,7 @@
#include "uuid.h"
#include <fcntl.h>
#include <libaio.h>
#define DEV_ACCESSED_W 0x00000001 /* Device written to? */
#define DEV_REGULAR 0x00000002 /* Regular file? */
@@ -31,11 +32,6 @@
#define DEV_USED_FOR_LV 0x00000100 /* Is device used for an LV */
#define DEV_ASSUMED_FOR_LV 0x00000200 /* Is device assumed for an LV */
#define DEV_NOT_O_NOATIME 0x00000400 /* Don't use O_NOATIME */
#define DEV_IN_BCACHE 0x00000800 /* dev fd is open and used in bcache */
#define DEV_BCACHE_EXCL 0x00001000 /* bcache_fd should be open EXCL */
#define DEV_FILTER_AFTER_SCAN 0x00002000 /* apply filter after bcache has data */
#define DEV_FILTER_OUT_SCAN 0x00004000 /* filtered out during label scan */
#define DEV_BCACHE_WRITE 0x00008000 /* bcache_fd is open with RDWR */
/*
* Support for external device info.
@@ -67,18 +63,15 @@ struct device {
int open_count;
int error_count;
int max_error_count;
int phys_block_size; /* From either BLKPBSZGET or BLKSSZGET, don't use */
int block_size; /* From BLKBSZGET, returns bdev->bd_block_size, likely set by fs, probably don't use */
int physical_block_size; /* From BLKPBSZGET: lowest possible sector size that the hardware can operate on without reverting to read-modify-write operations */
int logical_block_size; /* From BLKSSZGET: lowest possible block size that the storage device can address */
int phys_block_size;
int block_size;
int read_ahead;
int bcache_fd;
uint32_t flags;
unsigned size_seqno;
uint64_t size;
uint64_t end;
struct dm_list open_list;
struct dev_ext ext;
const char *duplicate_prefer_reason;
const char *vgid; /* if device is an LV */
const char *lvid; /* if device is an LV */
@@ -87,22 +80,6 @@ struct device {
char _padding[7];
};
/*
* All I/O is annotated with the reason it is performed.
*/
typedef enum dev_io_reason {
DEV_IO_SIGNATURES = 0, /* Scanning device signatures */
DEV_IO_LABEL, /* LVM PV disk label */
DEV_IO_MDA_HEADER, /* Text format metadata area header */
DEV_IO_MDA_CONTENT, /* Text format metadata area content */
DEV_IO_MDA_EXTRA_HEADER, /* Header of any extra metadata areas on device */
DEV_IO_MDA_EXTRA_CONTENT, /* Content of any extra metadata areas on device */
DEV_IO_FMT1, /* Original LVM1 metadata format */
DEV_IO_POOL, /* Pool metadata format */
DEV_IO_LV, /* Content written to an LV */
DEV_IO_LOG /* Logging messages */
} dev_io_reason_t;
struct device_list {
struct dm_list list;
struct device *dev;
@@ -114,6 +91,32 @@ struct device_area {
uint64_t size; /* Bytes */
};
/*
* We'll collect the results of this many async reads
* in one system call. It shouldn't matter much what
* number is used here.
*/
#define MAX_GET_EVENTS 16
struct dev_async_context {
io_context_t aio_ctx;
struct io_event events[MAX_GET_EVENTS]; /* for processing completions */
struct dm_list unused_ios; /* unused/available aio structs */
int num_ios; /* number of allocated aio structs */
int max_ios; /* max number of aio structs to allocate */
};
struct dev_async_io {
struct dm_list list;
struct iocb iocb;
struct device *dev;
char *buf;
uint32_t buf_len; /* size of buf */
uint32_t len; /* size of submitted io */
int done;
int result;
};
/*
* Support for external device info.
*/
@@ -134,8 +137,6 @@ void dev_size_seqno_inc(void);
* All io should use these routines.
*/
int dev_get_block_size(struct device *dev, unsigned int *phys_block_size, unsigned int *block_size);
int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
unsigned int *logical_block_size);
int dev_get_size(struct device *dev, uint64_t *size);
int dev_get_read_ahead(struct device *dev, uint32_t *read_ahead);
int dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64_t size_bytes);
@@ -149,17 +150,18 @@ int dev_open_readonly_buffered(struct device *dev);
int dev_open_readonly_quiet(struct device *dev);
int dev_close(struct device *dev);
int dev_close_immediate(struct device *dev);
void dev_close_all(void);
int dev_test_excl(struct device *dev);
int dev_fd(struct device *dev);
const char *dev_name(const struct device *dev);
int dev_read(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer);
int dev_read(struct device *dev, uint64_t offset, size_t len, void *buffer);
int dev_read_circular(struct device *dev, uint64_t offset, size_t len,
uint64_t offset2, size_t len2, dev_io_reason_t reason, char *buf);
int dev_write(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, void *buffer);
int dev_append(struct device *dev, size_t len, dev_io_reason_t reason, char *buffer);
int dev_set(struct device *dev, uint64_t offset, size_t len, dev_io_reason_t reason, int value);
uint64_t offset2, size_t len2, char *buf);
int dev_write(struct device *dev, uint64_t offset, size_t len, void *buffer);
int dev_append(struct device *dev, size_t len, char *buffer);
int dev_set(struct device *dev, uint64_t offset, size_t len, int value);
void dev_flush(struct device *dev);
struct device *dev_create_file(const char *filename, struct device *dev,
@@ -169,4 +171,27 @@ void dev_destroy_file(struct device *dev);
/* Return a valid device name from the alias list; NULL otherwise */
const char *dev_name_confirmed(struct device *dev, int quiet);
struct dev_async_context *dev_async_context_setup(unsigned async_event_count,
unsigned max_io_alloc_count,
unsigned max_buf_alloc_bytes,
int buf_len);
void dev_async_context_destroy(struct dev_async_context *ac);
/* allocate aio structs (with buffers), up to the max specified during context setup */
int dev_async_alloc_ios(struct dev_async_context *ac, int num, int buf_len, int *available);
/* free aio structs (and buffers) */
void dev_async_free_ios(struct dev_async_context *ac);
/* get an available aio struct (with buffer) */
struct dev_async_io *dev_async_io_get(struct dev_async_context *ac, int buf_len);
/* make an aio struct (with buffer) available for use (by another get) */
void dev_async_io_put(struct dev_async_context *ac, struct dev_async_io *aio);
int dev_async_read_submit(struct dev_async_context *ac, struct dev_async_io *aio,
struct device *dev, uint32_t len, uint64_t offset, int *nospace);
int dev_async_getevents(struct dev_async_context *ac, int wait_count, struct timespec *timeout,
int *done_count);
#endif
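A minimal sketch of the intended call sequence for the declarations above, assuming AIO_SUPPORT is compiled in and that dev is already open (dev_async_read_submit() submits against dev_fd(dev)); the 128KiB buffer size and single-io flow are invented for the example:

/* Sketch only: one async read via the API above (assumes this header). */
static void example_async_read(struct device *dev)
{
	struct dev_async_context *ac;
	struct dev_async_io *aio;
	int available, nospace, done;

	/* No alloc limits, default event count, 128KiB buffers. */
	if (!(ac = dev_async_context_setup(0, 0, 0, 131072)))
		return;		/* no aio: caller falls back to sync io */

	if (!dev_async_alloc_ios(ac, 1, 131072, &available))
		goto out_destroy;

	if (!(aio = dev_async_io_get(ac, 131072)))
		goto out_free;

	/* Read 128KiB from offset 0 of dev. */
	if (dev_async_read_submit(ac, aio, dev, 131072, 0, &nospace))
		while (!aio->done)
			if (!dev_async_getevents(ac, 1, NULL, &done))
				break;

	/* On completion aio->result holds the byte count (or -errno)
	 * and the data is in aio->buf. */

	dev_async_io_put(ac, aio);
out_free:
	dev_async_free_ios(ac);
out_destroy:
	dev_async_context_destroy(ac);
}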


@@ -399,19 +399,17 @@ int lvdisplay_full(struct cmd_context *cmd,
void *handle __attribute__((unused)))
{
struct lvinfo info;
int inkernel, snap_active = 0, partial = 0, raid_is_avail = 1;
int inkernel, snap_active = 0;
char uuid[64] __attribute__((aligned(8)));
const char *access_str;
struct lv_segment *snap_seg = NULL, *mirror_seg = NULL;
struct lv_segment *seg = NULL;
int lvm1compat;
dm_percent_t snap_percent;
int thin_pool_active = 0;
dm_percent_t thin_data_percent = 0, thin_metadata_percent = 0;
int thin_data_active = 0, thin_metadata_active = 0;
dm_percent_t thin_data_percent, thin_metadata_percent;
int thin_active = 0;
dm_percent_t thin_percent = 0;
struct lv_status_thin *thin_status = NULL;
struct lv_status_thin_pool *thin_pool_status = NULL;
dm_percent_t thin_percent;
struct lv_status_cache *cache_status = NULL;
if (lv_is_historical(lv))
@@ -474,7 +472,7 @@ int lvdisplay_full(struct cmd_context *cmd,
snap_active ? "active" : "INACTIVE");
}
snap_seg = NULL;
} else if (lv_is_cow(lv) && (snap_seg = find_snapshot(lv))) {
} else if ((snap_seg = find_snapshot(lv))) {
if (inkernel &&
(snap_active = lv_snapshot_percent(snap_seg->cow,
&snap_percent)))
@@ -504,18 +502,15 @@ int lvdisplay_full(struct cmd_context *cmd,
if (seg->merge_lv)
log_print("LV merging to %s",
seg->merge_lv->name);
if (inkernel && (thin_active = lv_thin_status(lv, 0, &thin_status))) {
thin_percent = thin_status->usage;
dm_pool_destroy(thin_status->mem);
}
if (inkernel)
thin_active = lv_thin_percent(lv, 0, &thin_percent);
if (lv_is_merging_origin(lv))
log_print("LV merged with %s",
find_snapshot(lv)->lv->name);
} else if (lv_is_thin_pool(lv)) {
if ((thin_pool_active = lv_thin_pool_status(lv, 0, &thin_pool_status))) {
thin_data_percent = thin_pool_status->data_usage;
thin_metadata_percent = thin_pool_status->metadata_usage;
dm_pool_destroy(thin_pool_status->mem);
if (lv_info(cmd, lv, 1, &info, 1, 1) && info.exists) {
thin_data_active = lv_thin_pool_percent(lv, 0, &thin_data_percent);
thin_metadata_active = lv_thin_pool_percent(lv, 1, &thin_metadata_percent);
}
/* FIXME: display thin_pool targets transid for activated LV as well */
seg = first_seg(lv);
@@ -536,18 +531,11 @@ int lvdisplay_full(struct cmd_context *cmd,
log_print("LV Pool data %s", seg_lv(seg, 0)->name);
}
if (lv_is_partial(lv))
partial = 1;
if (lv_is_raid(lv))
raid_is_avail = raid_is_available(lv) ? 1 : 0;
if (inkernel && info.suspended)
log_print("LV Status suspended");
else if (activation())
log_print("LV Status %savailable%s",
(inkernel && raid_is_avail) ? "" : "NOT ",
partial ? " (partial)" : "");
log_print("LV Status %savailable",
inkernel ? "" : "NOT ");
/********* FIXME lv_number
log_print("LV # %u", lv->lv_number + 1);
@@ -581,12 +569,13 @@ int lvdisplay_full(struct cmd_context *cmd,
dm_pool_destroy(cache_status->mem);
}
if (thin_pool_active) {
if (thin_data_active)
log_print("Allocated pool data %s%%",
display_percent(cmd, thin_data_percent));
if (thin_metadata_active)
log_print("Allocated metadata %s%%",
display_percent(cmd, thin_metadata_percent));
}
if (thin_active)
log_print("Mapped size %s%%",
@@ -716,10 +705,13 @@ void vgdisplay_full(const struct volume_group *vg)
log_print("--- Volume group ---");
log_print("VG Name %s", vg->name);
log_print("System ID %s", (vg->system_id && *vg->system_id) ? vg->system_id : "");
log_print("System ID %s", (vg->system_id && *vg->system_id) ? vg->system_id : vg->lvm1_system_id ? : "");
log_print("Format %s", vg->fid->fmt->name);
log_print("Metadata Areas %d", vg_mda_count(vg));
log_print("Metadata Sequence No %d", vg->seqno);
if (vg->fid->fmt->features & FMT_MDAS) {
log_print("Metadata Areas %d",
vg_mda_count(vg));
log_print("Metadata Sequence No %d", vg->seqno);
}
access_str = vg->status & (LVM_READ | LVM_WRITE);
log_print("VG Access %s%s%s%s",
access_str == (LVM_READ | LVM_WRITE) ? "read/write" : "",


@@ -15,19 +15,14 @@
#include "lib.h"
#include "filter.h"
#include "device.h"
static int _and_p(struct dev_filter *f, struct device *dev)
{
struct dev_filter **filters;
int ret;
for (filters = (struct dev_filter **) f->private; *filters; ++filters) {
ret = (*filters)->passes_filter(*filters, dev);
if (!ret)
for (filters = (struct dev_filter **) f->private; *filters; ++filters)
if (!(*filters)->passes_filter(*filters, dev))
return 0; /* No 'stack': a filter, not an error. */
}
return 1;
}


@@ -74,7 +74,7 @@ struct dev_filter *internal_filter_create(void)
f->destroy = _destroy;
f->use_count = 0;
log_debug_devs("Internal filter initialised.");
log_debug_devs("internal filter initialised.");
return f;
}


@@ -16,98 +16,21 @@
#include "lib.h"
#include "filter.h"
/* See label.c comment about this hack. */
extern int use_full_md_check;
#ifdef __linux__
#define MSG_SKIPPING "%s: Skipping md component device"
/*
* The purpose of these functions is to ignore md component devices,
* e.g. if /dev/md0 is a raid1 composed of /dev/loop0 and /dev/loop1,
* lvm wants to deal with md0 and ignore loop0 and loop1. md0 should
* pass the filter, and loop0,loop1 should not pass the filter so lvm
* will ignore them.
*
* (This is assuming lvm.conf md_component_detection=1.)
*
* If lvm does *not* ignore the components, then lvm may read lvm
* labels from the component devs and potentially the md dev,
* which can trigger duplicate detection, and/or cause lvm to display
* md components as PVs rather than ignoring them.
*
* If scanning md components causes duplicates to be seen, then
* the lvm duplicate resolution will exclude the components.
*
* The lvm md filter has three modes:
*
* 1. look for md superblock at the start of the device
* 2. look for md superblock at the start and end of the device
* 3. use udev to detect components
*
* mode 1 will not detect and exclude components of md devices
* that use superblock version 0.9 or 1.0 which is at the end of the device.
*
* mode 2 will detect these, but mode 2 doubles the i/o done by label
* scan, since there's a read at both the start and end of every device.
*
* mode 3 is used when external_device_info_source="udev". It does
* not require any io from lvm, but this mode is not used by default
* because there have been problems getting reliable info from udev.
*
* lvm uses mode 2 when:
*
* - the command is pvcreate/vgcreate/vgextend, which format new
* devices, and if the user ran these commands on a component
* device of an md device 0.9 or 1.0, then it would cause problems.
* FIXME: this would only really need to scan the end of the
* devices being formatted, not all devices.
*
* - it sees an md device on the system using version 0.9 or 1.0.
* The point of this is just to avoid displaying md components
* from the 'pvs' command.
* FIXME: the cost (double i/o) may not be worth the benefit
* (not showing md components).
*/
/*
* Returns 0 if:
* the device is an md component and it should be ignored.
*
* Returns 1 if:
* the device is not md component and should not be ignored.
*
* The actual md device will pass this filter and should be used,
* it is the md component devices that we are trying to exclude
* that will not pass.
*/
static int _passes_md_filter(struct dev_filter *f, struct device *dev)
static int _ignore_md(struct dev_filter *f __attribute__((unused)),
struct device *dev)
{
int ret;
/*
* When md_component_dectection=0, don't even try to skip md
* components.
*/
if (!md_filtering())
return 1;
ret = dev_is_md(dev, NULL, use_full_md_check);
if (ret == -EAGAIN) {
/* let pass, call again after scan */
dev->flags |= DEV_FILTER_AFTER_SCAN;
log_debug_devs("filter md deferred %s", dev_name(dev));
return 1;
}
if (ret == 0)
return 1;
ret = dev_is_md(dev, NULL);
if (ret == 1) {
log_debug_devs("md filter full %d excluding md component %s", use_full_md_check, dev_name(dev));
if (dev->ext.src == DEV_EXT_NONE)
log_debug_devs(MSG_SKIPPING, dev_name(dev));
else
@@ -133,7 +56,7 @@ static void _destroy(struct dev_filter *f)
dm_free(f);
}
struct dev_filter *md_filter_create(struct cmd_context *cmd, struct dev_types *dt)
struct dev_filter *md_filter_create(struct dev_types *dt)
{
struct dev_filter *f;
@@ -142,7 +65,7 @@ struct dev_filter *md_filter_create(struct cmd_context *cmd, struct dev_types *d
return NULL;
}
f->passes_filter = _passes_md_filter;
f->passes_filter = _ignore_md;
f->destroy = _destroy;
f->use_count = 0;
f->private = dt;


@@ -21,18 +21,8 @@
static int _passes_partitioned_filter(struct dev_filter *f, struct device *dev)
{
struct dev_types *dt = (struct dev_types *) f->private;
int ret;
ret = dev_is_partitioned(dt, dev);
if (ret == -EAGAIN) {
/* let pass, call again after scan */
log_debug_devs("filter partitioned deferred %s", dev_name(dev));
dev->flags |= DEV_FILTER_AFTER_SCAN;
return 1;
}
if (ret) {
if (dev_is_partitioned(dt, dev)) {
if (dev->ext.src == DEV_EXT_NONE)
log_debug_devs(MSG_SKIPPING, dev_name(dev));
else


@@ -26,39 +26,12 @@ struct pfilter {
struct dev_types *dt;
};
/*
* The persistent filter is filter layer that sits above the other filters and
* caches the final result of those other filters. When a device is first
* checked against filters, it will not be in this cache, so this filter will
* pass the device down to the other filters to check it. The other filters
* will run and either include the device (good/pass) or exclude the device
* (bad/fail). That good or bad result propagates up through this filter which
* saves the result. The next time some code checks the filters against the
* device, this persistent/cache filter is checked first. This filter finds
* the previous result in its cache and returns it without reevaluating the
* other real filters.
*
* FIXME: a cache like this should not be needed. The fact it's needed is a
* symptom of code that should be fixed to not reevaluate filters multiple
* times. A device should be checked against the filter once, and then not
* need to be checked again. With scanning now controlled, we could probably
* do this.
*
* FIXME: "persistent" isn't a great name for this caching filter. This filter
* at one time saved its cache results to a file, which is how it got the name.
* That .cache file does not work well, causes problems, and is no longer used
* by default. The old code for it should be removed.
*/
static const char* _good_device = "good";
static const char* _bad_device = "bad";
/*
* The hash table holds one of these two states
* against each entry.
*/
#define PF_BAD_DEVICE ((void *) &_good_device)
#define PF_GOOD_DEVICE ((void *) &_bad_device)
#define PF_BAD_DEVICE ((void *) 1)
#define PF_GOOD_DEVICE ((void *) 2)
static int _init_hash(struct pfilter *pf)
{
@@ -75,7 +48,11 @@ static void _persistent_filter_wipe(struct dev_filter *f)
{
struct pfilter *pf = (struct pfilter *) f->private;
log_verbose("Wiping cache of LVM-capable devices");
dm_hash_wipe(pf->devices);
/* Trigger complete device scan */
dev_cache_scan(1);
}
static int _read_array(struct pfilter *pf, struct dm_config_tree *cft,
@@ -142,13 +119,21 @@ int persistent_filter_load(struct dev_filter *f, struct dm_config_tree **cft_out
if (!config_file_read(cft))
goto_out;
log_debug_devs("Loading persistent filter cache from %s", pf->file);
_read_array(pf, cft, "persistent_filter_cache/valid_devices",
PF_GOOD_DEVICE);
/* We don't gain anything by holding invalid devices */
/* _read_array(pf, cft, "persistent_filter_cache/invalid_devices",
PF_BAD_DEVICE); */
/* Did we find anything? */
if (dm_hash_get_num_entries(pf->devices)) {
/* We populated dev_cache ourselves */
dev_cache_scan(0);
if (!dev_cache_index_devs())
stack;
r = 1;
}
log_very_verbose("Loaded persistent filter cache from %s", pf->file);
out:
@@ -272,7 +257,8 @@ static int _persistent_filter_dump(struct dev_filter *f, int merge_existing)
goto_out;
if (rename(tmp_file, pf->file))
log_sys_debug("rename", tmp_file);
log_error("%s: rename to %s failed: %s", tmp_file, pf->file,
strerror(errno));
r = 1;
@@ -288,61 +274,29 @@ out:
static int _lookup_p(struct dev_filter *f, struct device *dev)
{
struct pfilter *pf = (struct pfilter *) f->private;
void *l;
void *l = dm_hash_lookup(pf->devices, dev_name(dev));
struct dm_str_list *sl;
int pass = 1;
if (dm_list_empty(&dev->aliases)) {
log_debug_devs("%d:%d: filter cache skipping (no name)",
(int)MAJOR(dev->dev), (int)MINOR(dev->dev));
return 0;
}
l = dm_hash_lookup(pf->devices, dev_name(dev));
/* Cached bad, skip dev */
/* Cached BAD? */
if (l == PF_BAD_DEVICE) {
log_debug_devs("%s: filter cache skipping (cached bad)", dev_name(dev));
log_debug_devs("%s: Skipping (cached)", dev_name(dev));
return 0;
}
/* Cached good, use dev */
if (l == PF_GOOD_DEVICE) {
log_debug_devs("%s: filter cache using (cached good)", dev_name(dev));
return 1;
/* Test dm devices every time, so cache them as GOOD. */
if (MAJOR(dev->dev) == pf->dt->device_mapper_major) {
if (!l)
dm_list_iterate_items(sl, &dev->aliases)
if (!dm_hash_insert(pf->devices, sl->str, PF_GOOD_DEVICE)) {
log_error("Failed to hash device to filter.");
return 0;
}
return pf->real->passes_filter(pf->real, dev);
}
/* Uncached, check filters and cache the result */
/* Uncached */
if (!l) {
dev->flags &= ~DEV_FILTER_AFTER_SCAN;
pass = pf->real->passes_filter(pf->real, dev);
if (!pass) {
/*
* A device that does not pass one filter is excluded
* even if the result of another filter is deferred,
* because the deferred result won't change the exclude.
*/
l = PF_BAD_DEVICE;
} else if ((pass == -EAGAIN) || (dev->flags & DEV_FILTER_AFTER_SCAN)) {
/*
* When the filter result is deferred, we let the device
* pass for now, but do not cache the result. We need to
* rerun the filters later. At that point the final result
* will be cached.
*/
log_debug_devs("filter cache deferred %s", dev_name(dev));
dev->flags |= DEV_FILTER_AFTER_SCAN;
pass = 1;
goto out;
} else if (pass) {
l = PF_GOOD_DEVICE;
}
log_debug_devs("filter caching %s %s", pass ? "good" : "bad", dev_name(dev));
l = pf->real->passes_filter(pf->real, dev) ? PF_GOOD_DEVICE : PF_BAD_DEVICE;
dm_list_iterate_items(sl, &dev->aliases)
if (!dm_hash_insert(pf->devices, sl->str, l)) {
@@ -350,8 +304,8 @@ static int _lookup_p(struct dev_filter *f, struct device *dev)
return 0;
}
}
out:
return pass;
return (l == PF_BAD_DEVICE) ? 0 : 1;
}
static void _persistent_destroy(struct dev_filter *f)
