mirror of git://sourceware.org/git/lvm2.git synced 2024-12-21 13:34:40 +03:00
Commit Graph

17806 Commits

Author SHA1 Message Date
David Teigland
71cb54d92f coverity cleanups 2021-06-16 13:42:51 -05:00
Tony Asleson
f70d97b916 lvmdbusd: Defer dbus object removal
When we walk the new lvm state, comparing it to the old state, we can
run into an issue where we remove a VG that is no longer present from the
object manager even though it is still needed by LVs that have yet to be
processed.  When we process the existing LVs to see if their state needs
to be updated, or if they need to be removed, we need to be able to
reference the VG that was associated with them.  However, if it has been
removed from the object manager we fail to find it, which results in:

Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/lvmdbusd/utils.py", line 666, in _run
  self.rc = self.f(*self.args)
File "/usr/lib/python3.6/site-packages/lvmdbusd/fetch.py", line 36, in _main_thread_load
  cache_refresh=False)[1]
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 146, in load_lvs
  lv_name, object_path, refresh, emit_signal, cache_refresh)
File "/usr/lib/python3.6/site-packages/lvmdbusd/loader.py", line 68, in common
  num_changes += dbus_object.refresh(object_state=o)
File "/usr/lib/python3.6/site-packages/lvmdbusd/automatedproperties.py", line 160, in refresh
  search = self.lvm_id
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 483, in lvm_id
  return self.state.lvm_id
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 173, in lvm_id
  return "%s/%s" % (self.vg_name_lookup(), self.Name)
File "/usr/lib/python3.6/site-packages/lvmdbusd/lv.py", line 169, in vg_name_lookup
  return cfg.om.get_object_by_path(self.Vg).Name

Instead of removing objects from the object manager immediately, we will
keep them in a list and remove them once we have processed all of the state.
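
A minimal sketch of this deferral pattern (the ObjectManager shape and
names here are hypothetical stand-ins, not the actual lvmdbusd API):

  class ObjectManager:
      def __init__(self):
          self._objects = {}          # object_path -> dbus object

      def get_object_by_path(self, path):
          return self._objects.get(path)

      def remove_object(self, path):
          self._objects.pop(path, None)

  def refresh_state(om, new_state, old_state):
      # Collect removals instead of applying them immediately, so LVs
      # processed later can still look up their (now stale) VG objects.
      to_remove = [p for p in old_state if p not in new_state]

      for path in new_state:
          pass  # refresh/create objects; may reference VGs queued above

      # Only after the whole state walk is done, drop the stale objects.
      for path in to_remove:
          om.remove_object(path)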

Ref:
https://bugzilla.redhat.com/show_bug.cgi?id=1968752
2021-06-16 12:19:02 -05:00
Tony Asleson
e8f3a63000 lvmdbusd: Don't setup search key unless needed
self.lvm_id is a property whose getter executes some code that doesn't
need to run every time the attribute is accessed.
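
A hypothetical illustration of the idea (not the actual lvmdbusd
change): compute the search key lazily and cache it, so the property's
work is done only when it is actually needed.

  class AutomatedProperties:
      def __init__(self):
          self._search_key = None

      @property
      def lvm_id(self):
          if self._search_key is None:
              self._search_key = self._expensive_lookup()
          return self._search_key

      def _expensive_lookup(self):
          # stand-in for the real name/path resolution
          return "vg0/lv0"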
2021-06-16 12:19:02 -05:00
Leo Yan
27abb03a0d tests: Fix building for IDM program
When executing the IDM tests, the command reports an error:

  /usr/bin/install: cannot stat ‘lib/idm_inject_failure’: No such file
  or directory

A stale program in my local environment meant the Makefile always used
the stale binary and never reported an issue.  A brand new repository
does not contain an idm_inject_failure program, and the Makefile doesn't
build it without the dependency being specified, so the test command
complains that the file 'idm_inject_failure' is not found.

This patch adds the dependency 'lib/idm_inject_failure' for the IDM
tests, so the injection program is built first and the error goes away.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-16 10:35:12 -05:00
Leo Yan
f25df0386e tests: stress: Change to use $SHARED for vgcreate
Use the variable $SHARED in place of "--shared" in vgcreate commands.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-16 10:35:12 -05:00
David Teigland
e5740e9646 tests: fix skip in stress_single_thread.sh 2021-06-16 09:37:04 -05:00
David Teigland
f8742b6df2 tests: add some LVM_TEST_LOCK_TYPE_IDM 2021-06-15 14:02:45 -05:00
David Teigland
440d6ae79f lvmdevices: add deviceidtype option
When adding a device to the devices file with --adddev, lvm
by default chooses the best device ID type for the new device.
The new --deviceidtype option allows the user to override the
built-in preference.  This is useful if there's a problem with
the default type, or if a secondary type is preferable.

If the specified deviceidtype does not produce a device ID,
then lvm falls back to the preference it would otherwise use.
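
A hedged sketch of that selection logic; the type names, ordering, and
helper below are illustrative, not lvm's internal API:

  PREFERRED_TYPES = ["sys_wwid", "sys_serial", "devname"]  # example order

  def choose_device_id(dev, requested_type=None):
      """Return (idtype, idvalue), honoring the override when it works."""
      if requested_type:
          value = read_device_id(dev, requested_type)
          if value:
              return requested_type, value
          # override produced nothing: fall back to built-in preference
      for idtype in PREFERRED_TYPES:
          value = read_device_id(dev, idtype)
          if value:
              return idtype, value
      return None, None

  def read_device_id(dev, idtype):
      # stand-in: e.g. read /sys/block/<dev>/device/wwid for "sys_wwid"
      return None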
2021-06-11 13:27:18 -05:00
Wu Guanghao
8331321070 pvck: add lock_global() before clean_hint_file()
Signed-off-by: Wu Guanghao <wuguanghao3@huawei.com>
2021-06-11 10:21:07 -05:00
Zdenek Kabelac
17b2746486 archive: avoid abuse of internal flag
Since archiving is now postponed, we use the internal variable 'changed'
to mark that we need to commit new metadata.
2021-06-09 16:18:20 +02:00
Zdenek Kabelac
bb45e33518 backup: automatically store data on vg_unlock
Previously an explicit call to backup was necessary (and was often
either forgotten or over-used).  With this patch, the need to store a
backup is remembered at vg_commit, and once the VG is unlocked, the
committed metadata is automatically stored in the backup file.

This may alter some messages printed by a command, since the backup
is now taken later.
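
A conceptual sketch of the new flow (Python pseudocode, not lvm's C
internals; the names are illustrative):

  class VG:
      def __init__(self, name):
          self.name = name
          self.needs_backup = False

  def vg_commit(vg):
      # ... metadata written and committed ...
      vg.needs_backup = True          # remember instead of backing up now

  def vg_unlock(vg):
      if vg.needs_backup:
          write_backup_file(vg)       # single, automatic backup point
          vg.needs_backup = False
      # ... release the VG lock ...

  def write_backup_file(vg):
      print("storing metadata backup for", vg.name)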
2021-06-09 14:56:13 +02:00
Zdenek Kabelac
ba3707d953 archiving: take archive automatically
Instead of calling explicit archive within the command processing logic,
move this step to the first vg_write() call, which will automatically
store an archive of the committed metadata.

This slightly changes some error paths: previously an archiving error
was detected earlier in the command, whereas now some ongoing command
'actions' may already have happened, but they will simply be scratched
on error (since the new metadata would not have been written anyway).

So the general effect should be only some reordering of command messages.
2021-06-09 14:56:13 +02:00
David Teigland
df27392c8c man/help: fix common option listing 2021-06-08 14:07:39 -05:00
David Teigland
ca930bd936 devices: don't use deleted loop backing file for device id
check for "(deleted)" in the backing_file string and
fall back to devname for id.

$ cat /sys/block/loop0/loop/backing_file
/root/looptmp (deleted)
2021-06-08 12:16:06 -05:00
Leo Yan
5e17203ff5 lvmlockd: Fix the compilation warning
The SUSE build tool reports the warning:

lvmlockd-core.c: In function 'client_thread_main':
lvmlockd-core.c:4959:37: warning: '%d' directive output may be truncated writing between 1 and 10 bytes into a region of size 6 [-Wformat-truncation=]
    snprintf(buf, sizeof(buf), "path[%d]", i);
                                     ^~
lvmlockd-core.c:4959:31: note: directive argument in the range [0, 2147483647]
    snprintf(buf, sizeof(buf), "path[%d]", i);
                               ^~~~~~~~~~

To dismiss the compilation warning, enlarge the array "buf" to 17
bytes to accommodate the maximum signed integer: 6 bytes for the string
format + 10 bytes for the signed integer + 1 byte for the terminating
'\0'.

Reported-by: Heming Zhao <heming.zhao@suse.com>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-08 09:33:26 -05:00
David Teigland
9759f915e7 tests: add writecache-cache-blocksize-2
Test an inconsistent physical block size between the devs used
for the main LV and the cache.
2021-06-07 15:40:40 -05:00
David Teigland
ff677aa69f tests: rename test 2021-06-07 12:12:33 -05:00
David Teigland
a7f334a532 tests: writecache-blocksize add dm-cache tests
Add the same tests for dm-cache as exist for dm-writecache;
dm-cache uses a different blocksize in a couple of cases.
2021-06-07 12:11:12 -05:00
David Teigland
c43f2f8ae0 fix empty mem pool leak
of "config" when LVM_SYSTEM_DIR=""
2021-06-03 14:46:33 -05:00
Leo Yan
fe05828e7e tests: multi-hosts: Test lease timeout with LV shareable mode
This patch tests timeout handling after activating an LV in shareable
mode.  It has the same logic as the test for LV exclusive mode, except
that it verifies the locking in shareable mode.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_sh_timeout_hostb.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
0a4d6d9d1d tests: multi-hosts: Test lease timeout with LV exclusive mode
This patch tests timeout handling after activating an LV in exclusive
mode.  It contains two scripts, for host A and host B separately.

The script on host A first creates VGs and LVs based on the passed
backing devices; every backing device gets a dedicated VG, and an LV is
created in each VG.  Afterwards, host A activates all the LVs, so it
acquires the leases for them.  Then the test is designed to fail on
host A.

After host A fails, host B runs the paired testing script.  It first
fails to activate the LVs since the leases are held by host A; after
the leases expire (70s), host B can acquire the leases for the LVs and
can operate on the LVs and VGs.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_ex_timeout_hostb.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
e9950efff1 tests: multi-hosts: Add LV testing
This patch adds LV testing on multiple hosts.  There are two scripts:
multi_hosts_lv_hosta.sh creates LVs on one host, and the second script
multi_hosts_lv_hostb.sh acquires the global lock and VG lock, and
removes the VGs.  The testing flow verifies the locking operations
between two hosts with lvmlockd and the backend locking manager.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_lv_hostb.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
e75bd71aae tests: multi-hosts: Add VG testing
This patch adds VG testing on multiple hosts.  There are two scripts:
multi_hosts_vg_hosta.sh creates VGs on one host, and the second script
multi_hosts_vg_hostb.sh afterwards acquires the global lock and VG
lock, and removes the VGs.  The testing flow verifies the locking
operations between two hosts with lvmlockd and the backend locking
manager.

  On the host A:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hosta.sh

  On the host B:
    make check_lvmlockd_idm \
      LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3,/dev/sdl3 \
      LVM_TEST_MULTI_HOST=1 T=multi_hosts_vg_hostb.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
92b47d8eb8 tests: idm: Add testing for IDM lock manager failure
If the IDM lock manager fails to access drives, either partially (e.g.
it fails to access one of three drives) or totally, it should handle
these cases properly.  When the drives partially fail, if the lock
manager can still renew the lease for the locking, then it doesn't need
to take any action for the drive failure; otherwise, if it detects that
it cannot renew the locking majority, it needs to immediately kill the
VG from lvmlockd.

This patch adds a test verifying the IDM lock manager's failure
handling; the command can be used as below:

  # make check_lvmlockd_idm \
    LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdl3,/dev/sdq3 \
    LVM_TEST_FAILURE=1 T=idm_ilm_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
38abd6bb2c tests: idm: Add testing for the fabric's half brain failure
If the fabric is broken instantly, some of the drives connected on the
fabric disappear from the system.  In this case, according to the idm
locking algorithm, the lease is not lost, since half of the drives are
still alive and the lease can be renewed on them.  On the other hand,
the VG lock requires acquiring a majority of the drives, and with half
of the drives failed a majority cannot be reached, so the VG lock
cannot be acquired and the VG metadata cannot be changed.

This patch adds a half-brain failure test for idm; the test command is
as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3,/dev/sdo3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_half_brain.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
91d3b56875 tests: idm: Add testing for the fabric failure and timeout
If the fabric is broken instantly, the drives connected on the fabric
disappear from the system.  In the worst case, the lease times out and
the drives cannot recover.  So a new test is added to emulate this
scenario: it uses a drive for LVM operations which is also used for the
locking scheme; if the drive and all its associated paths (if the drive
supports multiple paths) are disconnected, the lock manager should stop
the lockspace for the VG/LVs.

Afterwards, if the drive recovers, the VG/LVs resident on the drive
should be operable properly again.  The test command is as below:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdp3 LVM_TEST_FAILURE=1 \
	T=idm_fabric_failure_timeout.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
fc0495ea04 tests: idm: Add testing for the fabric failure
When a fabric failure occurs, the connection with the hosts is lost
instantly, and after a while it can recover so that the hosts can
continue to access the drives.

In this case, the lock manager should be reliable, handle the failure
dynamically, and allow the user to continue to use the VG/LV with the
associated locking scheme.

This patch adds a test that emulates a fabric failure and verifies LVM
commands in this case.  The testing usage is:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=idm_fabric_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
874001ee6e tests: Add testing for lvmlockd failure
After lvmlockd abnormally exits and the daemon is relaunched, if LVM
commands continue to run, lvmlockd and the backend lock manager (e.g.
the sanlock lock manager or IDM lock manager) should be able to
continue serving the requests from LVM commands.

This patch adds a test to emulate lvmlockd failure and verify the LVM
commands after lvmlockd recovers.  Below is an example of testing the
case:

  # make check_lvmlockd_idm \
	LVM_TEST_BACKING_DEVICE=/dev/sdo3,/dev/sdp3,/dev/sdp4 \
	LVM_TEST_FAILURE=1 T=lvmlockd_failure.sh

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
8c7b2df41f tests: Support idm failure injection
When a drive failure occurs, the IDM lock manager and lvmlockd should
handle it properly.  E.g. when the IDM lock manager detects a lease
renewal failure caused by I/O errors, it should invoke the kill path
which is predefined by lvmlockd, so that the kill path program (like
lvmlockctl) can send requests to lvmlockd to stop and drop the lock
for the relevant VG/LVs.

To verify the failure handling flow, this patch introduces an idm
failure injection program; it takes a "percentage" of drive failures
as input so that different failure cases can be emulated.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
f83e11ff43 tests: stress: Add multi-threads stress testing for PV/VG/LV
This patch adds a stress test which launches three threads: one for
creating/removing PVs, one for creating/removing VGs, and the last one
for LV operations.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
692fe7bb31 tests: stress: Add multi-threads stress testing for VG/LV
This patch adds a stress test which launches two threads; each thread
creates an LV, then activates and deactivates it in a loop.  This
exercises multi-threading in lvmlockd and its backend lock manager.
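
A rough Python equivalent of the per-thread loop (the VG name and LV
size here are made up; the actual test is a shell script):

  import subprocess, threading

  def stress(vg, lv, count=100):
      for _ in range(count):
          subprocess.run(["lvcreate", "-L", "4M", "-n", lv, vg], check=True)
          subprocess.run(["lvchange", "-ay", f"{vg}/{lv}"], check=True)
          subprocess.run(["lvchange", "-an", f"{vg}/{lv}"], check=True)
          subprocess.run(["lvremove", "-y", f"{vg}/{lv}"], check=True)

  threads = [threading.Thread(target=stress, args=("vgtest", f"lv{n}"))
             for n in range(2)]
  for t in threads: t.start()
  for t in threads: t.join()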

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
fe660467fa tests: stress: Add single thread stress testing
This patch adds a stress test which loops creating an LV, then
activating and deactivating it, in a single thread.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
5b361b197e tests: Add checking for lvmlockd log
Add checking of the lvmlockd log; this can be used by test cases
which are interested in the interaction with lvmlockd.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
2097c27c05 tests: Cleanup idm context when prepare devices
For testing the idm locking scheme, it's good to clean up the idm
context before running the test cases.  This gives the testing a clean
environment.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
759b0392d5 tests: Support multiple backing devices
In the current implementation, the option "LVM_TEST_BACKING_DEVICE"
supports specifying only one backing device; this patch extends the
option to support multiple backing devices, using a comma as separator,
e.g. the command below specifies two backing devices:

  make check_lvmlockd_idm LVM_TEST_BACKING_DEVICE=/dev/sdj3,/dev/sdk3

This allows the tests to work on multiple drives and verify that the
locking scheme works as expected in the multiple-drive case.  For
example, with the Seagate IDM locking scheme, if a VG uses two PVs and
every PV is resident on its own drive, the locking operations are sent
to the two drives respectively; so the extension of
"LVM_TEST_BACKING_DEVICE" helps to verify different drive
configurations for locking.
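
For illustration, splitting the comma-separated list is as simple as
the following (Python shown; the real test scripts do this in shell):

  backing = "/dev/sdj3,/dev/sdk3"
  devices = [d for d in backing.split(",") if d]
  # -> ['/dev/sdj3', '/dev/sdk3'], one backing device per drive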

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
Leo Yan
c64dbc7ee8 tests: Enable the testing for IDM locking scheme
This patch is to introduce testing option LVM_TEST_LOCK_TYPE_IDM, with
specifying this option, the Seagate IDM lock manager will be launched as
backend for testing.  Also add the prepare and remove shell scripts for
IDM.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-06-03 09:39:32 -05:00
David Teigland
2bce6faed0 pvchange: fix file locking deadlock
Calling clear_hint_file() to invalidate hints would acquire
the hints flock before the global flock which could cause deadlock.
The lock order requires the global lock to be taken first.

pvchange was always invalidating hints, which was unnecessary;
only invalidate hints when changing a PV uuid.  Because of the
lock ordering, take the global lock before clear_hint_file which
locks the hints file.
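
A sketch of the ordering rule using Python's fcntl.flock (the file
paths and helper are illustrative; lvm's real locking is in C):

  import fcntl

  def with_ordered_locks(global_path, hints_path, action):
      with open(global_path, "w") as g:
          fcntl.flock(g, fcntl.LOCK_EX)       # 1) global lock first
          with open(hints_path, "w") as h:
              fcntl.flock(h, fcntl.LOCK_EX)   # 2) hints flock second
              action()                        # e.g. invalidate hints
      # locks are dropped in reverse order as the files close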
2021-06-02 16:29:54 -05:00
David Teigland
e7f107c246 writecache: don't pvmove device used by writecache
The existing check didn't cover the unusual case where the
cachevol exists on the same device as the origin LV.
2021-06-02 11:12:20 -05:00
David Teigland
247f69f9aa writecache: fix lv_on_pmem
dev_is_pmem on pv->dev requires a pv segment or it could segfault.
2021-06-02 10:51:12 -05:00
Zdenek Kabelac
b725b5ea6e vdo: fix preload of kvdo
Commit 5bf1dba9eb broke loading of the kvdo
kernel module - correct it by loading kvdo instead of trying dm-vdo.
2021-05-26 16:12:20 +02:00
David Teigland
4a746f7ffc lvremove: fix removing thin pool with writecache on data 2021-05-24 16:09:35 -05:00
David Teigland
a65f8e0a62 enable command syntax for thin and writecache
Allow converting an LV with a writecache to thin pool data, in
addition to the previously supported attaching of a writecache to
thin pool data.
2021-05-24 16:09:35 -05:00
Leo Yan
102294f978 configure: Add macro LOCKDIDM_SUPPORT
The macro LOCKDIDM_SUPPORT is missing from the configure.h.in file, so
when the "configure" command is executed, it has no chance to add this
macro to the automatically generated header include/configure.h.

This patch adds the macro LOCKDIDM_SUPPORT to configure.h.in.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-21 09:27:30 -05:00
Leo Yan
8b904dc711 tools: Add support for "idm" lock type
This patch updates the comments and code to support the "idm" lock
type used by the LVM toolkit.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:05 -05:00
Leo Yan
affe1af148 lib: locking: Parse PV list for IDM locking
For shared VG or LV locking, the IDM locking scheme needs to use the
PV list associated with the VG or LV for sending SCSI commands, so it
requires a place to generate the PV list.

In reviewing the flow of LVM commands, the best place to generate the
PV list is in the locking lib, which is why this patch parses the PV
list as shown.  It iterates over all the PV nodes one by one and
compares them against the VG name or the LV prefix string.  If a PV
matches, it is added to the PV list.  Finally, the PV list is sent to
the lvmlockd daemon.

As mentioned, it compares the LV prefix string in the format
"lv_name_"; the reason is that it needs to find all relevant PVs, e.g.
a thin pool has LVs for metadata, pool, error, and the raw LV, so the
prefix string can find all the PVs belonging to the thin pool (see the
sketch below).
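
An illustrative sketch of that selection (the tuple layout and naming
convention follow the description above, not lvm's internal structs):

  def pvs_for_resource(pvs, vg_name, lv_name=None):
      """pvs: list of (pv_path, pv_vg, pv_lv) tuples."""
      prefix = lv_name + "_" if lv_name else None
      selected = []
      for pv_path, pv_vg, pv_lv in pvs:
          if pv_vg != vg_name:
              continue
          # VG lock: take every PV in the VG.  LV lock: match the
          # "lv_name_" prefix so hidden sub-LVs (thin metadata, pool,
          # error) are included too.
          if prefix is None or pv_lv == lv_name or \
                  (pv_lv or "").startswith(prefix):
              selected.append(pv_path)
      return selected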

The global lock is not covered in this patch.  To avoid a
chicken-and-egg issue, we need to prepare the global lock before any
locking can be used, so the global lock's PV list is established in the
lvmlockd daemon by iterating over all drives with a partition labeled
"propeller".

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:05 -05:00
Leo Yan
ef1c57e68f lib: locking: Add new type "idm"
We can consider the drive firmware a server handling the locking
requests from nodes; this is essentially a client-server model.  DLM
uses the kernel as a central place to manage locks, so it also follows
the client-server model for locking operations.  This is why the IDM
and DLM wrappers are similar to each other.

This patch largely works by generalizing the DLM code paths and then
providing degeneralized functions as wrappers for both IDM and DLM.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:05 -05:00
Leo Yan
d02f5392a0 lvmlockd: idm: Hook Seagate IDM wrapper APIs
To allow the IDM locking scheme to be used, this patch hooks up the
IDM wrapper; it also introduces a new locking type "idm", which we can
use for the global lock with the option '-g idm'.

To support the IDM locking type, the main change in the data structures
is to add a pvs path array.  The pvs list is transferred from the lvm
commands; when the lvmlockd core layer receives a message, it extracts
the paths using the keyword "path[idx]".  Finally, the pv list is
passed to the IDM lock manager as the target drives for sending IDM
SCSI commands.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:01:04 -05:00
Leo Yan
7a8b7b4add lvmlockd: idm: Introduce new locking scheme
Alongside the existing locking schemes of DLM and sanlock, this patch
introduces a new locking scheme: In-Drive-Mutex (IDM).

With IDM support in the drive, the locks reside in the drive, so the
locking lease is maintained in a central place: the drive firmware.  We
can consider this a typical client-server model: every host (or node)
in the cluster sends a mutex lease request to the drive firmware, which
works as an arbitrator, granting the mutex to a requester and rejecting
other applicants while the mutex is held.  To satisfy LVM activation in
its different modes, IDM supports two locking modes: exclusive and
shareable.

Every IDM is identified by two IDs: one is the host ID and the other
is the resource ID.  The resource ID is a unique identifier for the
resource being protected; in the integration with lvmlockd, the
resource ID combines the VG's UUID and the LV's UUID.  For the global
lock, the bytes of the resource ID are all zeros, and for a VG lock,
the LV's UUID is set to zero.  Every host can generate a random UUID
and use it as the host ID for the SCSI command; this ID clarifies the
ownership of the mutex.

To easily issue the IDM commands to the drive, as with other locking
schemes (e.g. sanlock), a daemon program named the IDM lock manager is
created; the detailed IDM SCSI commands are encapsulated in the daemon,
and lvmlockd uses wrapper APIs to communicate with the daemon program.

This patch introduces the IDM locking wrapper layer; it forwards the
locking requests from lvmlockd to the IDM lock manager and returns the
results from the drives' responses.

One thing that should be mentioned is IDM's LVB.  IDM supports an LVB
of at most 7 bytes when stored into the drive; the most significant
byte of the 8 bytes is reserved for control bits.  For this reason, the
patch maps a timestamp in microseconds to the cached LVB; essentially,
if the timestamp was updated by another node, that means the local LVB
is invalid.  When the timestamp is stored into the drive's LVB, a
time-going-backwards issue is possible, introduced by the limited time
precision or missing synchronization across multiple nodes.  So the IDM
wrapper fixes up the timestamp by incrementing the latest value by 1
and writing it back to the drive (see the sketch below).
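
A minimal sketch of that fix-up (the units and 7-byte payload mask
follow the description above; the function name is made up):

  import time

  def next_lvb_timestamp(drive_lvb_us):
      now_us = int(time.time() * 1_000_000)   # microsecond timestamp
      if now_us <= drive_lvb_us:
          now_us = drive_lvb_us + 1           # fix time-going-backwards
      return now_us & ((1 << 56) - 1)         # LVB payload is 7 bytes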

Currently the LVB is used to track VG changes; its purpose is to
trigger lvmetad cache invalidation when any metadata alteration is
detected.  But since lvmetad is no longer used for caching metadata,
the LVB doesn't really do anything.  It's possible that the LVB
functionality could be useful again in the future, so let's enable it
for IDM from the start.

Signed-off-by: Leo Yan <leo.yan@linaro.org>
2021-05-20 16:00:59 -05:00
Marian Csontos
3e2f09d78b post-release 2021-05-07 23:09:15 +02:00
Marian Csontos
01b05cf51d pre-release 2021-05-07 23:09:09 +02:00