/*
 * Copyright (C) 2001-2004 Sistina Software, Inc. All rights reserved.
 * Copyright (C) 2004-2011 Red Hat, Inc. All rights reserved.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */
#ifndef _LVM_GLOBALS_H
#define _LVM_GLOBALS_H

#define VERBOSE_BASE_LEVEL _LOG_WARN
#define SECURITY_LEVEL 0
#define PV_MIN_SIZE_KB 512
enum dev_ext_e;

void init_verbose(int level);
void init_silent(int silent);
void init_test(int level);
void init_use_aio(int useaio);
void init_md_filtering(int level);
void init_internal_filtering(int level);
void init_fwraid_filtering(int level);
void init_pvmove(int level);
void init_external_device_info_source(enum dev_ext_e src);
void init_obtain_device_list_from_udev(int device_list_from_udev);
void init_debug(int level);
void init_debug_classes_logged(int classes);
void init_cmd_name(int status);
void init_log_command(int log_name, int log_pid);
void init_security_level(int level);
void init_mirror_in_sync(int in_sync);
void init_dmeventd_monitor(int reg);
void init_disable_dmeventd_monitoring(int disable);
void init_background_polling(int polling);
void init_ignore_suspended_devices(int ignore);
void init_ignore_lvm_mirrors(int scan);
void init_error_message_produced(int produced);
void init_is_static(unsigned value);
void init_udev_checking(int checking);
void init_pv_min_size(uint64_t sectors);
void init_activation_checks(int checks);
void init_retry_deactivation(int retry);
void init_unknown_device_name(const char *name);
void init_io_memory_size(int val);

void set_cmd_name(const char *cmd_name);
const char *get_cmd_name(void);
void set_sysfs_dir_path(const char *path);

int test_mode(void);
int use_aio(void);
int md_filtering(void);
int internal_filtering(void);
int fwraid_filtering(void);
int pvmove_mode(void);
int obtain_device_list_from_udev(void);
enum dev_ext_e external_device_info_source(void);
int verbose_level(void);
int silent_mode(void);
int debug_level(void);
int debug_class_is_logged(int class);
int security_level(void);
int mirror_in_sync(void);
int background_polling(void);
int ignore_suspended_devices(void);
int ignore_lvm_mirrors(void);
const char *log_command_info(void);
const char *log_command_file(void);
unsigned is_static(void);
int udev_checking(void);
const char *sysfs_dir_path(void);
uint64_t pv_min_size(void);
int activation_checks(void);
int retry_deactivation(void);
const char *unknown_device_name(void);
int io_memory_size(void);

#define DMEVENTD_MONITOR_IGNORE -1
int dmeventd_monitor_mode(void);

#endif