/*
 * Copyright (C) 2014-2015 Red Hat, Inc.
 *
 * This file is part of LVM2.
 *
 * This copyrighted material is made available to anyone wishing to use,
 * modify, copy, or redistribute it subject to the terms and conditions
 * of the GNU Lesser General Public License v.2.1.
 */

#ifndef _LVMLOCKD_H
#define _LVMLOCKD_H

#include "libdaemon/client/config-util.h"
#include "libdaemon/client/daemon-client.h"

#include "lib/metadata/metadata-exported.h"	/* is_lockd_type() */

#define LOCKD_SANLOCK_LV_NAME "lvmlock"

/* lockd_lv flags */
#define LDLV_MODE_NO_SH		0x00000001
#define LDLV_PERSISTENT		0x00000002
#define LDLV_SH_EXISTS_OK	0x00000004
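
/*
 * Minimal usage sketch (an assumption based on these declarations, not a
 * documented contract): the LDLV_* bits are passed in the flags argument
 * of lockd_lv(), e.g. a persistent lock that disallows shared mode:
 *
 *	if (!lockd_lv(cmd, lv, "ex", LDLV_PERSISTENT | LDLV_MODE_NO_SH))
 *		return_0;
 */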

/* lvmlockd result flags */
#define LD_RF_NO_LOCKSPACES	0x00000001
#define LD_RF_NO_GL_LS		0x00000002
#define LD_RF_WARN_GL_REMOVED	0x00000004
#define LD_RF_DUP_GL_LS		0x00000008
#define LD_RF_NO_LM		0x00000010
#define LD_RF_SH_EXISTS		0x00000020

/* lockd_state flags */
#define LDST_EX			0x00000001
#define LDST_SH			0x00000002
#define LDST_FAIL_REQUEST	0x00000004
#define LDST_FAIL_NOLS		0x00000008
#define LDST_FAIL_STARTING	0x00000010
#define LDST_FAIL_OTHER		0x00000020
#define LDST_FAIL		(LDST_FAIL_REQUEST | LDST_FAIL_NOLS | LDST_FAIL_STARTING | LDST_FAIL_OTHER)
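
/*
 * Sketch of checking the lockd_state value returned by lockd_vg() (assumed
 * usage; the policy applied to each failure bit is up to the caller):
 *
 *	uint32_t lockd_state = 0;
 *
 *	if (!lockd_vg(cmd, vg_name, "ex", 0, &lockd_state))
 *		return_0;
 *	if (lockd_state & LDST_FAIL_STARTING)
 *		log_warn("WARNING: lockspace is still starting.");
 *	else if (lockd_state & LDST_FAIL)
 *		log_warn("WARNING: VG lock was not acquired.");
 */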

/* --lockopt flags */
#define LOCKOPT_FORCE		0x00000001
#define LOCKOPT_SHUPDATE	0x00000002
#define LOCKOPT_NOREFRESH	0x00000004
#define LOCKOPT_SKIPGL		0x00000008
#define LOCKOPT_SKIPVG		0x00000010
#define LOCKOPT_SKIPLV		0x00000020
#define LOCKOPT_AUTO		0x00000040
#define LOCKOPT_NOWAIT		0x00000080
#define LOCKOPT_AUTONOWAIT	0x00000100
#define LOCKOPT_ADOPTLS		0x00000200
#define LOCKOPT_ADOPTGL		0x00000400
#define LOCKOPT_ADOPTVG		0x00000800
#define LOCKOPT_ADOPTLV		0x00001000
#define LOCKOPT_ADOPT		0x00002000
#define LOCKOPT_NODELAY		0x00004000

#ifdef LVMLOCKD_SUPPORT

void lockd_lockopt_get_flags(const char *str, uint32_t *flags);
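
/*
 * Sketch (assumed usage; lockopt_str is a hypothetical variable holding a
 * --lockopt value such as "force,nowait", and that string syntax is an
 * assumption): translate the option string into LOCKOPT_* bits once, then
 * test the bits where decisions are made:
 *
 *	uint32_t lockopt = 0;
 *
 *	lockd_lockopt_get_flags(lockopt_str, &lockopt);
 *	if (lockopt & LOCKOPT_NOWAIT)
 *		... do not block waiting for the lock ...
 */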

struct lvresize_params;
struct lvcreate_params;

/* lvmlockd connection and communication */

void lvmlockd_set_socket(const char *sock);
void lvmlockd_set_use(int use);
int lvmlockd_use(void);
void lvmlockd_init(struct cmd_context *cmd);
void lvmlockd_connect(void);
void lvmlockd_disconnect(void);
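
/*
 * Sketch of the expected call order (an assumption based on these
 * declarations, not a contract stated in this header; sock_path is a
 * hypothetical socket path):
 *
 *	lvmlockd_set_socket(sock_path);
 *	lvmlockd_set_use(1);
 *	lvmlockd_init(cmd);
 *	lvmlockd_connect();
 *	... issue lockd_*() requests ...
 *	lvmlockd_disconnect();
 */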

/* vgcreate/vgremove use init/free */

int lockd_init_vg(struct cmd_context *cmd, struct volume_group *vg, const char *lock_type, int lv_lock_count);
int lockd_free_vg_before(struct cmd_context *cmd, struct volume_group *vg, int changing, int yes);
void lockd_free_vg_final(struct cmd_context *cmd, struct volume_group *vg);

/* vgrename */

int lockd_rename_vg_before(struct cmd_context *cmd, struct volume_group *vg);
int lockd_rename_vg_final(struct cmd_context *cmd, struct volume_group *vg, int success);

/* start and stop the lockspace for a vg */

int lockd_start_vg(struct cmd_context *cmd, struct volume_group *vg, int *exists);
int lockd_stop_vg(struct cmd_context *cmd, struct volume_group *vg);
int lockd_start_wait(struct cmd_context *cmd);
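
/*
 * Sketch of starting a VG lockspace and waiting for it to become usable
 * (an assumed flow, in the spirit of vgchange --lockstart):
 *
 *	int exists = 0;
 *
 *	if (!lockd_start_vg(cmd, vg, &exists) && !exists)
 *		return_0;
 *	if (!lockd_start_wait(cmd))
 *		return_0;
 */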
/* locking */
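
/*
 * Global state in lvm is the list of VG names, the set of orphan PVs, and
 * the properties of orphan PVs.  lockd_global() acquires the distributed
 * global lock through lvmlockd (a no-op when lvmlockd is not in use):
 * commands changing global state take it in ex mode, commands reading
 * global state take it in sh mode.  lock_global() layers this on top of
 * the local flock taken by lockf_global().
 */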
int lockd_global_create(struct cmd_context *cmd, const char *def_mode, const char *vg_lock_type);
int lockd_global(struct cmd_context *cmd, const char *def_mode);
int lockd_vg(struct cmd_context *cmd, const char *vg_name, const char *def_mode,
	     uint32_t flags, uint32_t *lockd_state);
int lockd_vg_update(struct volume_group *vg);
int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
		  const char *lv_name, struct id *lv_id,
		  const char *lock_args, const char *def_mode, uint32_t flags);
int lockd_lv(struct cmd_context *cmd, struct logical_volume *lv,
	     const char *def_mode, uint32_t flags);
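
/*
 * Sketch of the lock ordering (an assumption reflecting the lvmlockd
 * design of global, then VG, then LV locks):
 *
 *	uint32_t lockd_state = 0;
 *
 *	if (!lockd_global(cmd, "sh"))
 *		return_0;
 *	if (!lockd_vg(cmd, vg_name, "ex", 0, &lockd_state))
 *		return_0;
 *	if (!lockd_lv(cmd, lv, "ex", 0))
 *		return_0;
 */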

int lockd_lv_resize(struct cmd_context *cmd, struct logical_volume *lv,
		    const char *def_mode, uint32_t flags, struct lvresize_params *lp);

/* lvcreate/lvremove use init/free */

int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg, struct logical_volume *lv,
		  struct lvcreate_params *lp);
int lockd_init_lv_args(struct cmd_context *cmd, struct volume_group *vg,
		       struct logical_volume *lv, const char *lock_type, const char **lock_args);
int lockd_free_lv(struct cmd_context *cmd, struct volume_group *vg,
		  const char *lv_name, struct id *lv_id, const char *lock_args);
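
/*
 * Freeing on-disk LV leases must be deferred until the command has decided
 * to remove the LVs and has written the VG metadata; freeing a lease for
 * an LV that is not ultimately removed leaves the LV unlockable
 * ("lock failed: error -221").
 */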
int lockd_free_lv_after_update(struct cmd_context *cmd, struct volume_group *vg,
			       const char *lv_name, struct id *lv_id, const char *lock_args);
void lockd_free_removed_lvs(struct cmd_context *cmd, struct volume_group *vg, int remove_success);

const char *lockd_running_lock_type(struct cmd_context *cmd, int *found_multiple);

int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg);
int lockd_lv_uses_lock(struct logical_volume *lv);
int lockd_lv_refresh(struct cmd_context *cmd, struct lvresize_params *lp);
int lockd_query_lv(struct cmd_context *cmd, struct logical_volume *lv, int *ex, int *sh);

#else /* LVMLOCKD_SUPPORT */

static inline void lockd_lockopt_get_flags(const char *str, uint32_t *flags)
{
}

static inline void lvmlockd_set_socket(const char *sock)
{
}

static inline void lvmlockd_set_use(int use)
{
}

static inline void lvmlockd_init(struct cmd_context *cmd)
{
}

static inline void lvmlockd_disconnect(void)
{
}

static inline void lvmlockd_connect(void)
{
}

static inline int lvmlockd_use(void)
{
	return 0;
}

static inline int lockd_init_vg(struct cmd_context *cmd, struct volume_group *vg, const char *lock_type, int lv_lock_count)
{
	return 1;
}

static inline int lockd_free_vg_before(struct cmd_context *cmd, struct volume_group *vg, int changing, int yes)
{
	return 1;
}

static inline void lockd_free_vg_final(struct cmd_context *cmd, struct volume_group *vg)
{
}

static inline int lockd_rename_vg_before(struct cmd_context *cmd, struct volume_group *vg)
{
	return 1;
}

static inline int lockd_rename_vg_final(struct cmd_context *cmd, struct volume_group *vg, int success)
{
	return 1;
}

static inline int lockd_start_vg(struct cmd_context *cmd, struct volume_group *vg, int *exists)
{
	return 0;
}

static inline int lockd_stop_vg(struct cmd_context *cmd, struct volume_group *vg)
{
	return 0;
}

static inline int lockd_start_wait(struct cmd_context *cmd)
{
	return 0;
}

static inline int lockd_global_create(struct cmd_context *cmd, const char *def_mode, const char *vg_lock_type)
{
	/*
	 * When lvm is built without lvmlockd support, creating a VG with
	 * a shared lock type should fail.
	 */
	if (is_lockd_type(vg_lock_type)) {
		log_error("Using a shared lock type requires lvmlockd.");
		return 0;
	}

	return 1;
}

static inline int lockd_global(struct cmd_context *cmd, const char *def_mode)
{
	return 1;
}

static inline int lockd_vg(struct cmd_context *cmd, const char *vg_name, const char *def_mode,
			   uint32_t flags, uint32_t *lockd_state)
{
	*lockd_state = 0;
	return 1;
}

static inline int lockd_vg_update(struct volume_group *vg)
{
	return 1;
}

static inline int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
				const char *lv_name, struct id *lv_id,
				const char *lock_args, const char *def_mode, uint32_t flags)
{
	return 1;
}

static inline int lockd_lv(struct cmd_context *cmd, struct logical_volume *lv,
			   const char *def_mode, uint32_t flags)
{
	return 1;
}

static inline int lockd_lv_resize(struct cmd_context *cmd, struct logical_volume *lv,
				  const char *def_mode, uint32_t flags, struct lvresize_params *lp)
{
	return 1;
}

static inline int lockd_init_lv(struct cmd_context *cmd, struct volume_group *vg,
				struct logical_volume *lv, struct lvcreate_params *lp)
{
	return 1;
}

static inline int lockd_init_lv_args(struct cmd_context *cmd, struct volume_group *vg,
				     struct logical_volume *lv, const char *lock_type, const char **lock_args)
{
	return 1;
}

static inline int lockd_free_lv(struct cmd_context *cmd, struct volume_group *vg,
				const char *lv_name, struct id *lv_id, const char *lock_args)
{
	return 1;
}

static inline int lockd_free_lv_after_update(struct cmd_context *cmd, struct volume_group *vg,
					     const char *lv_name, struct id *lv_id, const char *lock_args)
{
	return 1;
}

static inline void lockd_free_removed_lvs(struct cmd_context *cmd, struct volume_group *vg, int remove_success)
{
}

static inline const char *lockd_running_lock_type(struct cmd_context *cmd, int *found_multiple)
{
	log_error("Using a shared lock type requires lvmlockd.");
	return NULL;
}

static inline int handle_sanlock_lv(struct cmd_context *cmd, struct volume_group *vg)
{
	return 0;
}

static inline int lockd_lv_uses_lock(struct logical_volume *lv)
{
	return 0;
}

static inline int lockd_lv_refresh(struct cmd_context *cmd, struct lvresize_params *lp)
{
	return 0;
}

static inline int lockd_query_lv(struct cmd_context *cmd, struct logical_volume *lv, int *ex, int *sh)
{
	return 0;
}

#endif /* LVMLOCKD_SUPPORT */

#endif /* _LVMLOCKD_H */