With this change we now have vgcreate/vgextend liblvm functions.
Note that this changes the lock order of the following functions as the
orphan lock is now obtained first. With our policy of non-blocking
second locks, this should not be a problem.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Move the vg orphan lock inside vg_remove_single, now a complete liblvm
function. Note that this changes the order of the locks - originally
VG_ORPHAN was obtained first, then the vgname lock. With the current
policy of non-blocking second locks, this could mean we get a failure
obtaining the orphan lock. In the case of a vg with lvs being removed,
this could result in the lvs being removed but not the vg. Such a
scenario could have happened before, though, with a different failure.
Other tools were examined for side-effects, and no major problems
were noted.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Move check for active LVs outside of library function. The vgremove
liblvm function will fail if there are active LVs. It will
be the application's responsibility to check this condition and remove
the LVs individually before calling vgremove. Note also that we've
duplicated the EXPORTED_VG check in vgremove_single (tools) and
vg_remove_single (library). Duplication seemed the only option here
since we don't want to do the automatic removal of LVs (in the tools)
if the vg is exported, and we still need to protect the library call
from removal if the vg is exported.
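A rough sketch of the application-side flow this implies. The helpers
lv_is_active() and _remove_one_lv() are hypothetical placeholders for the
activity check and per-LV removal, not the actual liblvm calls:

/* Illustrative sketch only: lv_is_active() and _remove_one_lv() are
 * hypothetical placeholders, and the final vg_remove_single() call is
 * left out because its argument list is not shown here. */
static int _app_vgremove(struct cmd_context *cmd, struct volume_group *vg)
{
        struct lv_list *lvl;

        /* The library call will fail while any LV is still active. */
        dm_list_iterate_items(lvl, &vg->lvs)
                if (lv_is_active(lvl->lv))
                        return 0;

        /* Remove the LVs one by one, emptying the VG first. */
        while (!dm_list_empty(&vg->lvs)) {
                lvl = dm_list_item(dm_list_first(&vg->lvs), struct lv_list);
                if (!_remove_one_lv(cmd, lvl->lv))
                        return 0;
        }

        /* All LVs are gone: the VG itself can now be removed via
         * vg_remove_single() (arguments omitted here). */
        return 1;
}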
We still need to deal with the ORPHAN lock but vg_remove_single is now
very close to our liblvm function.
TODO: Refactor lvremove in a similar way.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
vg_t *vg_create(struct cmd_context *cmd, const char *vg_name);
This is the first step towards the API called to create a VG.
Call vg_lock_newname() inside this function. Use _vg_make_handle()
where possible.
Now we have 2 ways to construct a volume group:
1) vg_read: Used when constructing an existing VG from disks
2) vg_create: Used when constructing a new VG
Both of these interfaces obtain a lock, and return a vg_t *.
The usage of _vg_make_handle() inside vg_create() doesn't fit
perfectly but it's ok for now. Needs some cleanup though and I've
noted "FIXME" in the code.
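A minimal usage sketch of the new interface, assuming the vg_read_error()
check mentioned below for the tools; locking details and the exact cleanup
sequence are illustrative only:

/* Sketch only: create a new VG, check the handle via vg_read_error(),
 * then release.  Unlock details and the 'set' calls for non-default
 * parameters are omitted. */
static int _create_vg(struct cmd_context *cmd, const char *vg_name)
{
        vg_t *vg = vg_create(cmd, vg_name);

        if (vg_read_error(vg)) {        /* e.g. FAILED_EXIST, FAILED_LOCKING */
                vg_release(vg);
                return 0;
        }

        /* ... apply non-default parameters via the vg 'set' functions ... */

        vg_release(vg);
        return 1;
}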
Add the new vg_create() plus vg 'set' functions for non-default
VG parameters in the following tools:
- vgcreate: Fairly straightforward refactoring. We just moved
vg_lock_newname inside vg_create so we check the return via
vg_read_error.
- vgsplit: The refactoring here is a bit more tricky. Originally
we called vg_lock_newname and depending on the error code, we either
read the existing vg or created the new one. Now vg_create()
calls vg_lock_newname, so we first try to create the VG. If this
fails with FAILED_EXIST, we can then do the vg_read. If the
create succeeds, we check the input parameters and set any new
values on the VG, as sketched below.
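A rough sketch of that vgsplit flow. vg_read_for_update() stands in for
whichever read call the tool actually uses; unlock, release and parameter
validation are omitted:

/* Sketch only: try to create the destination VG first; if the name
 * already exists, fall back to reading the existing VG. */
static vg_t *_open_destination_vg(struct cmd_context *cmd,
                                  const char *vg_name_to, int *existing)
{
        vg_t *vg_to = vg_create(cmd, vg_name_to);

        *existing = 0;
        if (vg_read_error(vg_to) == FAILED_EXIST) {
                /* Name already taken: read the existing VG instead. */
                *existing = 1;
                vg_release(vg_to);
                vg_to = vg_read_for_update(cmd, vg_name_to, NULL, 0);
        }

        if (vg_read_error(vg_to)) {
                vg_release(vg_to);
                return NULL;
        }

        return vg_to;
}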
TODO in future patches:
1. The VG_ORPHAN lock needs some thought. We may want to treat
this as any other VG, and require the application to obtain a handle
and pass it to other API calls (for example, vg_extend). Or,
we may find that hiding the VG_ORPHAN lock inside other APIs is
the way to go. I thought of placing the VG_ORPHAN lock inside
vg_create() and tying it to the vg handle, but was not certain
this was the right approach.
2. Cleanup error paths. Integrate vg_read_error() with vg_create and
vg_read* error codes and/or the new error APIs.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
NOTE: vg_set_alloc_policy() returns success if you try to set a value that
is already stored. The behavior of vgchange is unchanged though - it still fails.
There is a fixme noted in the code about this inconsistency, which should
be resolved if possible.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
In liblvm, we will reserve the word 'change' to mean an API that
both sets one or more values, and commits to disk. This will be
consistent with the LVM commandline. The existing vg_change_pesize()
function does not commit to disk, but just changes the extent_size
and ensures all internal structures are updated. This logic should
be contained in a function that sets the value.
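A sketch of the convention this implies. The function names below
(vg_set_extent_size, vg_change_extent_size) are illustrative; vg_write()
and vg_commit() are the usual metadata commit calls:

/* Illustrative only: a 'set' function updates the in-memory VG; a
 * 'change' function additionally commits the metadata to disk. */
int vg_set_extent_size(vg_t *vg, uint32_t new_size);     /* set only */

static int vg_change_extent_size(vg_t *vg, uint32_t new_size)
{
        if (!vg_set_extent_size(vg, new_size))
                return 0;

        /* 'change' also commits the updated metadata to disk. */
        if (!vg_write(vg) || !vg_commit(vg))
                return 0;

        return 1;
}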
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
It would be nice to have one function that does all the validation
and setting of the VG's pesize. However, currently some checks
are in the higher-level function _vgchange_pesize(), and some
checks are in the lower function vg_change_pesize().
This patch moves most of the higher-level checks inside
vg_change_pesize. In one case a failure return code is
changed from ECMD_FAILED to EINVALID_CMD_LINE.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Remove unneeded LOCK_NONBLOCKING from vg_read() API and tools that
use it. We no longer need this flag anywhere since we now automatically
set LCK_NONBLOCK inside lock_vol() if vgs_locked().
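The policy amounts to roughly the following inside lock_vol() (a sketch;
the surrounding locking code is simplified):

/* If any VG is already locked, request this lock non-blocking so a
 * second VG lock can never block and deadlock. */
if (vgs_locked())
        flags |= LCK_NONBLOCK;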
For further details, see:
commit d52b3fd3fe
Author: Dave Wysochanski <dwysocha@redhat.com>
Date: Wed May 13 13:02:52 2009 +0000
Remove NON_BLOCKING lock flag from tools and set a policy to auto-set.
As a simplification to the tools and further liblvm, this patch pushes
the setting of NON_BLOCKING lock flag inside the lock_vol() call.
The policy we set is if any existing VGs are currently locked, we
set the NON_BLOCKING flag.
At some point it may make sense to add this flag back if we get an
RFE from a liblvm user, but for now let's keep it as simple as
possible.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Remove READ_CHECK_EXISTENCE and vg_might_exist().
This flag and API is no longer used now that we have a separate
API to check for existence.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Remove unneeded LOCK_KEEP from vg_read() interface.
Update comment to clarify cases where _vg_lock_and_read() may return
with an error but the lock held. It would be nice to make the vg_read()
interface consistent with regard to lock-held and error behavior.
Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Sun May 3 13:06:14 CEST 2009 Petr Rockai <me@mornfall.net>
* Don't segfault in vg_release when vg->cmd is NULL.
Author: Petr Rockai <prockai@redhat.com>
Committer: Dave Wysochanski <dwysocha@redhat.com>
Sun May 3 12:32:30 CEST 2009 Petr Rockai <me@mornfall.net>
* Rework the toollib interface (process_each_*) on top of new vg_read.
Rebased 6/26/09 by Dave W.
- Add skipping message to process_each_lv
- Remove inconsistent_t; callers can instead check the status flag
after reading the volume group. Thus, there is no need to set the flag
in vg_read() or clear it later before calling _vg_bad_status_bits().
Also, add back in the !lockingfailed() as part of the CLUSTERED flag check.
It's unclear why it was removed when the check was moved from
_vg_bad_status_bits() to inside _vg_lock_and_read().
There was an open question, raised with a previously submitted patch, about
the last check in the 'if' stmt for lockingfailed(). However, I would
defer that for now as it is a separate item, and this patch should
introduce no functional change by including the !lockingfailed().
Petr acked this patch on 5/26; I just forgot to check it in.
Acked-by: Petr Rockai <prockai@redhat.com>
Various tools need to check for existence of a VG before doing something
(vgsplit, vgrename, vgcreate). Currently we don't have an interface to
check for existence, but the existence check is part of the vg_read* call(s).
This patch is an attempt to pull out some of that functionality into a
separate function, and hopefully simplify our vg_read interface, and
move those patches along.
vg_lock_newname() is only concerned about checking whether a vg exists in
the system. Unfortunately, we cannot just scan the system, but we must first
obtain a lock. Since we are reserving a vgname, we take a WRITE lock on
the vgname. Once obtained, we scan the system to ensure the name does
not exist. The return codes and behavior are described in the function header.
You might think of this function as similar to an open() call with
O_CREAT and O_EXCL flags (returns failure with -EEXIST if file already
exists).
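A usage sketch based on that analogy. SUCCESS and the FAILED_* codes are
the new API's error codes; the exact call sequence and cleanup paths here
are illustrative:

/* Reserve a new VG name before creating the VG, analogous to
 * open(..., O_CREAT | O_EXCL).  Cleanup paths are abbreviated. */
if (vg_lock_newname(cmd, vg_name) != SUCCESS)
        return 0;       /* name exists already, or locking failed */

/* ... the name is now reserved; go on to create the VG ... */

unlock_vg(cmd, vg_name);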
NOTE: I think including the word "lock" in the function name is important,
as it clearly states the function obtains a lock and makes the code more
readable, especially when it comes to cleanup / unlocking. The ultimate
function name is somewhat open for debate though so later we may rename.
During vgreduce, a failed mirror image is replaced with an error segment;
this segment type always has area_count == 0.
The current code expects that there is at least one area with a device;
the patch fixes it with an additional check (fixing a segfault during vgreduce).
Also, do not calculate readahead in every lv_info call; we only need
to cache PV readahead before activation calls, which lock memory.
When we stack an LV over a device which for some reason has an
increased read_ahead (e.g. MD RAID), the read_ahead hint passed
to libdevmapper is wrong (it is zero).
If the calculated read_ahead hint is zero, the patch uses the read_ahead of
the underlying device (if the first segment is a PV) when setting
DM_READ_AHEAD_MINIMUM_FLAG.
Because we are using dev-cache, it also stores this value in the cache for
future use (if several LVs sit over one PV, BLKRAGET is called only once
for the underlying device).
This should fix all the remaining problems with readahead mismatch reported
for DM-over-MD configurations (and similar cases).
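A sketch of the fallback logic as described. Identifier names are
approximate, and read_ahead / read_ahead_flags are assumed local variables,
so treat this as illustrative rather than the exact patch:

/* If the calculated hint is zero and the LV's first segment sits
 * directly on a PV, fall back to the underlying device's read_ahead
 * and mark it as a minimum for libdevmapper. */
struct lv_segment *seg = first_seg(lv);

if (!read_ahead && seg && seg_type(seg, 0) == AREA_PV) {
        /* The device read_ahead (BLKRAGET) is cached in dev-cache, so
         * several LVs over one PV query the device only once. */
        if (dev_get_read_ahead(pv_dev(seg_pv(seg, 0)), &read_ahead) &&
            read_ahead)
                read_ahead_flags |= DM_READ_AHEAD_MINIMUM_FLAG;
}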
We can temporarily violate max_lv during mirror conversion etc.
(If the operation fails, orphan mirror images are visible to the administrator
for manual removal, for example. Not that this should ever happen :-)
Enforce the limit only for the lvcreate (and vg merge) command.
The patch also adds simple max_lv tests to the testsuite.
The seg variable is a temporary variable for the list iterator;
the code cannot expect that it remains NULL after iteration
(it contains a non-NULL pointer here if the list is empty).
The patch fixes the first_seg function so it now correctly returns NULL
for an empty segment list.
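A minimal sketch of the corrected helper, assuming the dm_list API used by
the tree; field and type names are approximate:

/* Return the first segment of an LV, or NULL when the segment list is
 * empty (previously the iterator left a bogus non-NULL value behind). */
static inline struct lv_segment *first_seg(const struct logical_volume *lv)
{
        if (dm_list_empty(&lv->segments))
                return NULL;

        return dm_list_item(dm_list_first(&lv->segments), struct lv_segment);
}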
From now on, all code reading a volume group is responsible for releasing
the allocated memory by calling vg_release(vg).
(For simplicity of use, vg_release can be called with vg == NULL,
with the same semantics as free(NULL).)
Also provide a simple macro for unlocking & releasing in one step;
the tools usually use this approach.
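A sketch of such a macro; the exact name and form in the tree may differ
slightly:

/* Drop the VG lock and release the VG memory in one step. */
#define unlock_and_release_vg(cmd, vg, vol) \
        do { \
                unlock_vg(cmd, vol); \
                vg_release(vg); \
        } while (0)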
The global memory pool (cmd->mem) should be used only for global
physical volume operations.
This patch has to be applied together with all subsequent patches to complete
the per-VG memory pool logic.
Using a separate memory pool has quite a significant memory-saving impact when
using large VGs; this is mainly needed when we have to use
preallocated and locked memory (and should not overflow that
memory space).
The all_pvs list, used in vg_read, should make its own private
copy of the pv structures; otherwise (once each vg uses its own pool)
it will point into a released memory pool.
The same applies to the get_pvs() call.
The patch adds a pv_list copy helper and adds an explicit memory pool
parameter to _copy_pv.
(Please note that all these helper functions cannot guarantee that
vg-related fields are valid - a proper vg read & lock must be used
if that is required.)
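A rough sketch of what such a copy helper might look like. The name
_copy_pv_list is an assumption, and only a shallow copy is shown; the real
helper copies pv sub-fields into the pool as well:

/* Duplicate a list of pv_list entries into 'mem' so callers no longer
 * reference memory owned by another pool. */
static struct dm_list *_copy_pv_list(struct dm_pool *mem, struct dm_list *pvsl)
{
        struct dm_list *copy;
        struct pv_list *pvl, *new_pvl;

        if (!(copy = dm_pool_alloc(mem, sizeof(*copy))))
                return NULL;
        dm_list_init(copy);

        dm_list_iterate_items(pvl, pvsl) {
                if (!(new_pvl = dm_pool_zalloc(mem, sizeof(*new_pvl))) ||
                    !(new_pvl->pv = dm_pool_alloc(mem, sizeof(*new_pvl->pv))))
                        return NULL;
                /* Shallow copy for illustration only. */
                memcpy(new_pvl->pv, pvl->pv, sizeof(*pvl->pv));
                dm_list_add(copy, &new_pvl->list);
        }

        return copy;
}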
The patch fixes these problems:
- During the snapshot creation process, 2 LVs need to be created:
one is the COW volume, the second becomes the snapshot.
If the code fails in vg_add_snapshot, lvcreate will not remove
the COW LV.
- If max_lv is set and the VG contains a snapshot, it can happen that
during activation lv_count is temporarily increased over the limit
and the VG metadata is not properly processed;
see https://bugzilla.redhat.com/show_bug.cgi?id=490298
- vgcfgrestore allows a restore with max_lv set to a lower value than the
actual LV count. This later leads to a situation where max_lv is completely
ignored.
- vgck doesn't call vg_validate(). It should at least try :-)
Signed-off-by: Milan Broz <mbroz@redhat.com>
This patch is not fully tested and leaves some related bugs unfixed.
Intended behaviour of the code now:
pe_start in the lvm2 format PV label header is set only by pvcreate (or
vgconvert -M2) and then preserved in *all* operations thereafter.
In some specialist cases, after the PV is added to a VG, the pe_start
field in the VG metadata may hold a different value and if so, it
overrides the other one for as long as the PV is in such a VG.
Currently, the field storing the size of the data area in the PV label
header always holds 0. As it only has meaning in the context of a
volume group, it is calculated whenever the PV is added to a VG (and can
be derived from extent_size and pe_count in the VG metadata).
Fixes a problem where, after downconverting to lvm1, the VG is broken:
# lvcreate -n lv1 -l 4 vg_test
Invalid LV in extent map (PV /dev/sdb1, PE 0, LV 0, LE 0)
...
Fix missing VG unlocks in some pvchange error paths.
Add some missing validation of VG names.
Rename validate_vg_name() to validate_new_vg_name().
Change orphan lock to VG_ORPHANS.
Change format1 to use ORPHAN as orphan VG name.
Fix some memory leaks in error paths found by coverity.
Use C99 struct initialisers.
Move DEFS into configure.h.
Clean-ups to remove miscellaneous compiler warnings.
[Some activation-related features will stop working for a while now.
Some types of activation are getting split into two steps, with the
first step using the precommitted metadata.]
Clear many compiler warnings (i386) & associated bugs - hopefully without
introducing too many new bugs:-) (Same exercise required for other archs.)
Default compilation has optimisation - or else use ./configure --enable-debug
Lots of changes/very little testing so far => there'll be bugs!
Use 'vgcreate -M text' to create a volume group with its metadata stored
in text files. Text format metadata changes should be reasonably atomic,
with a (basic) automatic recovery mechanism if the system crashes while a
change is in progress.
Add a metadata section to lvm.conf to specify multiple directories if
you want (recommended) to keep multiple copies of the metadata (eg on
different filesystems).
e.g.
    metadata {
        dirs = ["/etc/lvm/metadata1", "/usr/local/lvm/metadata2"]
    }
Plenty of refinements still in the pipeline.
from lock_vol() - otherwise it now attempts to acquire the lock and then
immediately releases it.
o Extend the id field in struct logical_volume to hold VG uuid + LV uuid
for format1. This unique lvid can be used directly when calling lock_vol().
o Add the VG uuid to vgcache to make VG uuid lookups possible. (Another
step towards using them instead of VG names internally.)
and add severe warning if it's used to make a device seem bigger than
it really is. This is not an option people should be using as it
breaks metadata integrity.
o Use uint64_t throughout (rather than unsigned long long)
o Convert a few messages that contain pathnames into the more common form:
pathname: message
struct pv_list {
        struct list list;
        struct physical_volume pv;
};

to

struct pv_list {
        struct list list;
        struct physical_volume *pv;
};
o New function in toollib, 'create_pv_list', which creates a list of PVs
from a given command-line array of PVs.
o Changed lvcreate/lvextend to use this (fixes the lvextend [pv list] bug).
onto a new device). uuid specified must not already exist on the system.
o More message tidying.
o When checking for label, only read PV metadata.
o Add ataraid. [Needs moving into config/defaults files.]
is active in the device-mapper.
o Many operations can be carried out regardless of whether the VG is
active or not.
o vgscan does not activate anything - use vgchange.
o Change lvrename to support renaming of active LVs.
o Remove '//' appearing in some pathnames.
o Dummy lv_check_segments() for compilation.
o Full signed arguments to lvreduce/lvextend
o Consistent lv_number/pe map use
o Populate pv->pe_allocated
o Fixes for allocation/writing of multiple LVs