#
# Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
# Copyright (C) 2004-2014 Red Hat, Inc. All rights reserved.
#
# This file is part of LVM2.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions
# of the GNU General Public License v.2.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
srcdir = @srcdir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
abs_srcdir = @abs_srcdir@
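
# Core sources compiled unconditionally into the internal library.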
SOURCES =\
activate/activate.c \
cache/lvmcache.c \
writecache/writecache.c \
integrity/integrity.c \
cache_segtype/cache.c \
commands/toolcontext.c \
config/config.c \
datastruct/btree.c \
datastruct/str_list.c \
device/bcache.c \
device/bcache-utils.c \
device/dev-cache.c \
device/device_id.c \
device/dev-ext.c \
device/dev-io.c \
device/dev-md.c \
device/dev-mpath.c \
device/dev-swap.c \
device/dev-type.c \
device/dev-luks.c \
device/dev-dasd.c \
device/dev-lvm1-pool.c \
device/online.c \
device/parse_vpd.c \
display/display.c \
error/errseg.c \
unknown/unknown.c \
filters/filter-composite.c \
filters/filter-persistent.c \
filters/filter-regex.c \
filters/filter-sysfs.c \
filters/filter-md.c \
filters/filter-fwraid.c \
filters/filter-mpath.c \
filters/filter-partitioned.c \
filters/filter-type.c \
filters/filter-usable.c \
filters/filter-signature.c \
filters/filter-deviceid.c \
format_text/archive.c \
format_text/archiver.c \
format_text/export.c \
format_text/flags.c \
format_text/format-text.c \
format_text/import.c \
format_text/import_vsn1.c \
format_text/text_label.c \
freeseg/freeseg.c \
label/label.c \
label/hints.c \
locking/file_locking.c \
locking/locking.c \
log/log.c \
metadata/cache_manip.c \
metadata/writecache_manip.c \
metadata/integrity_manip.c \
metadata/lv.c \
metadata/lv_manip.c \
metadata/merge.c \
metadata/metadata.c \
metadata/mirror.c \
metadata/pool_manip.c \
metadata/pv.c \
metadata/pv_list.c \
metadata/pv_manip.c \
metadata/pv_map.c \
metadata/raid_manip.c \
metadata/segtype.c \
metadata/snapshot_manip.c \
metadata/thin_manip.c \
metadata/vdo_manip.c \
metadata/vg.c \
mirror/mirrored.c \
2002-11-18 17:04:08 +03:00
misc/crc.c \
misc/lvm-exec.c \
misc/lvm-file.c \
misc/lvm-flock.c \
misc/lvm-globals.c \
misc/lvm-maths.c \
misc/lvm-signal.c \
misc/lvm-string.c \
misc/lvm-wrappers.c \
misc/lvm-percent.c \
misc/sharedlib.c \
mm/memlock.c \
notify/lvmnotify.c \
properties/prop_common.c \
raid/raid.c \
report/properties.c \
report/report.c \
snapshot/snapshot.c \
striped/striped.c \
thin/thin.c \
uuid/uuid.c \
zero/zero.c
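
# Device-mapper activation code, added only when configure enables
# device-mapper support (@DEVMAPPER@ is substituted by configure).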
ifeq ("@DEVMAPPER@", "yes")
SOURCES +=\
activate/dev_manager.c \
activate/fs.c
endif
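
# lvmpolld client code, added when lvmpolld support is configured.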
ifeq ("@BUILD_LVMPOLLD@", "yes")
SOURCES +=\
lvmpolld/lvmpolld-client.c
endif
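
# lvmlockd client code, added when lvmlockd support is configured.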
ifeq ("@BUILD_LVMLOCKD@", "yes")
SOURCES +=\
locking/lvmlockd.c
endif
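
# VDO segment type, included when configure selects the internal implementation.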
ifeq ("@VDO@", "internal")
SOURCES += vdo/vdo.c
endif
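
# Everything above is archived into the liblvm-internal static library.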
LIB_NAME = liblvm-internal
LIB_STATIC = $(LIB_NAME).a
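
# File list for cflow call-graph generation and extra CFLAGS; these are
# consumed by the shared build rules in make.tmpl, included below.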
CFLOW_LIST = $(SOURCES)
CFLOW_LIST_TARGET = $(LIB_NAME).cflow
PROGS_CFLAGS = $(BLKID_CFLAGS) $(UDEV_CFLAGS)
include $(top_builddir)/make.tmpl
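
# Subdirectory targets depend on the static library being built first.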
$(SUBDIRS): $(LIB_STATIC)
CLEAN_TARGETS += misc/configure.h misc/lvm-version.h