Mirror of git://sourceware.org/git/lvm2.git
Commit Graph

1838 Commits

Author SHA1 Message Date
Alasdair Kergon
bce2869d92 Attempt to fix non-ALLOC_ANYWHERE allocation code after recent changes broke
the preference given to the PVs with the largest free areas.
2010-03-31 20:26:04 +00:00
Milan Broz
d7cbaae1fd Always use blocking lock for VGs and orphan locks.
Because we now have a strong rule for lock ordering:
 - VG locks must be taken in alphabetical order
 - the ORPHAN lock must be taken last
vgs_locked() is no longer needed.

This fixes a problem with orphan locking, e.g.
vgremove VG1    |    vgremove VG2
lock(VG1)       |    lock(VG2)
lock(ORPHAN)    |    lock(ORPHAN) -> fail, non-blocking

https://bugzilla.redhat.com/show_bug.cgi?id=578413

(There are more similar places in the code.)
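
As an illustration only, here is a minimal C sketch of the ordering rule; lock_vg() and lock_orphans() are hypothetical stand-ins, not the real LVM2 locking API:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the real (blocking) locking calls. */
static int lock_vg(const char *vg_name) { (void) vg_name; return 1; }
static int lock_orphans(void) { return 1; }

static int _cmp_name(const void *a, const void *b)
{
        return strcmp(*(const char * const *) a, *(const char * const *) b);
}

/* Take VG locks in alphabetical order, then the orphan lock last. */
int lock_vgs_then_orphans(const char **vg_names, size_t n)
{
        size_t i;

        qsort(vg_names, n, sizeof(*vg_names), _cmp_name);

        for (i = 0; i < n; i++)
                if (!lock_vg(vg_names[i]))
                        return 0;

        return lock_orphans();
}

With every caller ordering its locks this way, two concurrent commands can no longer deadlock on the orphan lock, so the locks can safely be blocking.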
2010-03-31 17:23:56 +00:00
Milan Broz
6733116a19 Ensure all segment memory is allocated from the VG private mempool.
Physical segments were still allocated from the global
command context mempool.

This led to very high memory usage when
activating a large VG (vgchange).
(Memory usage was about 2 GB with >3000 LVs.)

Fix it by properly using the vg->vgmem private pool,
so all the memory is released early.

A new memory pool parameter is needed here for the
pv_split_segment function.

Also fix the same problem in some minor allocations
(VG description, LV segment split).
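
As a rough illustration, this is the per-VG pool pattern using libdevmapper's dm_pool API; how vg->vgmem is actually threaded through the metadata code is not shown, and the demo function is purely hypothetical:

#include <libdevmapper.h>

/* Allocate per-VG data from a private pool so that destroying the pool
 * releases everything at once, instead of letting allocations pile up
 * in the long-lived command context pool. */
int demo_private_pool(void)
{
        struct dm_pool *vgmem;
        char *desc;

        if (!(vgmem = dm_pool_create("vg_demo", 16 * 1024)))
                return 0;

        /* e.g. the VG description or a split PV segment */
        if (!(desc = dm_pool_strdup(vgmem, "demo description"))) {
                dm_pool_destroy(vgmem);
                return 0;
        }

        /* ... use desc and other per-VG allocations ... */

        dm_pool_destroy(vgmem); /* releases all pool allocations early */
        return 1;
}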
2010-03-31 17:23:18 +00:00
Milan Broz
0423887528 Do not traverse the PV segment list twice.
In addition to the previous patch, we really do not need
to search for the segment which was just allocated in
a split request.

Make the pv_split_segment function also return the newly
allocated (split) segment.

(So after this patch, there is only one user
of the slow find_peg_by_pe.)
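
A generic sketch of the idea follows; the real pv_split_segment() operates on LVM2's pv_segment structures and pools, so the types and names below are hypothetical, and the new node is preallocated by the caller to keep the sketch pool-free:

#include <stddef.h>

/* Split an extent range [start, start + len) at 'pe' and return the
 * newly created second half, so the caller never has to search for it. */
struct extent_range {
        unsigned start;
        unsigned len;
        struct extent_range *next;
};

struct extent_range *split_range(struct extent_range *r, unsigned pe,
                                 struct extent_range *new_half)
{
        if (pe <= r->start || pe >= r->start + r->len)
                return NULL;            /* nothing to split */

        new_half->start = pe;
        new_half->len = r->start + r->len - pe;
        new_half->next = r->next;

        r->len = pe - r->start;
        r->next = new_half;

        return new_half;                /* caller gets the split segment directly */
}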
2010-03-31 17:22:26 +00:00
Milan Broz
80b96a8974 Optimise the PV segment search.
The function find_peg_by_pe is incredibly inefficient
for PVs with many segments.

In the shiny future there should be a binary (or interval) tree
instead of a sorted linked list (volunteers?).

Anyway, for now, we can use a dirty trick to optimise this case:

 - Allocations are usually applied from the beginning
 of the PV (we have no allocation policy which allocates areas
 "backwards").

 - The only user of find_peg_by_pe is the pv_split_segment()
 call. In *most* cases it needs to split the *last* PV segment.

So if we search the sorted PV segment list backwards, we
hit the requested segment immediately.

This patch applies this tiny change
(and saves >30% of processing time when the segments of >3000 LVs are on one PV!).

To discourage other code from using this inefficient function,
it is moved to pv_manip.c and made static for now :-)
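
A hedged illustration of the backwards walk using libdevmapper's dm_list macros; the struct below is a simplified stand-in for the real pv_segment, so its field names are assumptions:

#include <stdint.h>
#include <libdevmapper.h>

struct seg {                    /* simplified stand-in for pv_segment */
        struct dm_list list;
        uint32_t pe;            /* first physical extent covered */
        uint32_t len;           /* length in extents */
};

/* Segments are kept sorted by 'pe'; since splits usually hit the last
 * segment, walking the list backwards finds the target almost at once. */
static struct seg *find_seg_by_pe(struct dm_list *segs, uint32_t pe)
{
        struct seg *s;

        dm_list_iterate_back_items(s, segs)
                if (pe >= s->pe && pe < s->pe + s->len)
                        return s;

        return NULL;
}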
2010-03-31 17:21:40 +00:00
Milan Broz
c8b0988586 Remove the vg_validate call when parsing cached metadata.
The vg_validate call is a candidate for optimisation; it is very
inefficient and slow.

Anyway, we should call it only before writing data to disk.

The call in lvmcache was just temporary validation;
we really do not need to revalidate cached metadata
every time.
(Actually, I added it there just to prove that the cache works
properly and forgot to remove it.)

The patch removes it from lvmcache completely; this could only catch
an internal bug in the export function (and such a bug must
be detected by any vg_write call anyway).
2010-03-31 17:20:44 +00:00
Milan Broz
d59a2b6109 Use a hash table for quick LV reference when reading metadata.
_read_vg already uses a hash for PVs to optimise
reading of large VGs and avoid repeated PV list traversal.

Use the same approach to speed up parsing a VG with many LVs.
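
For illustration, a minimal sketch of the lookup pattern with libdevmapper's dm_hash API; how _read_vg actually wires the hash into metadata parsing is not shown here, and the demo function is made up:

#include <libdevmapper.h>

/* Build a name -> LV pointer hash once, then resolve LV references in
 * constant time instead of rescanning the LV list for every lookup. */
int demo_lv_hash(void)
{
        struct dm_hash_table *lvs;
        void *lv;

        if (!(lvs = dm_hash_create(64)))
                return 0;

        /* insert each LV under its name while parsing the VG */
        if (!dm_hash_insert(lvs, "lvol0", (void *) "parsed-lv-placeholder")) {
                dm_hash_destroy(lvs);
                return 0;
        }

        /* later references to "lvol0" become a single hash lookup */
        lv = dm_hash_lookup(lvs, "lvol0");

        dm_hash_destroy(lvs);
        return lv != NULL;
}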
2010-03-31 17:20:02 +00:00
Mikulas Patocka
655849fb14 A missing space in the error message.
Add missing parentheses to an error message.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2010-03-31 12:06:30 +00:00
Zdenek Kabelac
5de52e34f0 Count only readable size for memlock stats.
As we mlock() only readable pages, compute statistics only
for readable bytes.
2010-03-30 14:41:58 +00:00
Zdenek Kabelac
3f67b36b35 Update memlock
The code moves initialisation of the stats values to _memlock_maps().
For dmeventd we need to use mlockall() - so avoid reading the config value
and go with the _use_mlockall code path.

The patch assumes dmeventd uses the C locale!
The patch needs the call of memlock_inc_daemon() before memlock_inc()
(which is our common use case).

Some minor code cleanup for _un/_lock_mem_if_needed().
2010-03-30 14:41:23 +00:00
Alasdair Kergon
1dee5eb625 Fix --alloc contiguous policy only to allocate one set of parallel areas. 2010-03-29 17:59:46 +00:00
Zdenek Kabelac
b7be589ed0 Fixing another set of distclean problems where we left some generated files
in clvmd, dmeventd, man, tests.

Don't include dependency files for the cflow and cscope.out targets.

Improve dependency tracking for dmeventd and liblvm2cmd sources.
2010-03-29 14:17:59 +00:00
Zdenek Kabelac
b41f5924bf Update cflow file generation - support the build dir and use $(top_srcdir)
to obtain sources. Create a make.tmpl target for
simpler generation of cflow files with the help of
CFLOW_LIST, CFLOW_LIST_TARGET, CFLOW_TARGET.
cflow usage is still not perfect.
2010-03-29 14:11:17 +00:00
Zdenek Kabelac
1a91d0914e distclean fixes
Move the daemons/ and lib/ subtargets to their Makefiles so we don't get
a double cleanup error during execution of the distclean target.
Instead of duplicating the clean target inside the distclean target,
just use it as a subtarget and avoid duplicating code.
2010-03-29 14:09:25 +00:00
Jonathan Earl Brassow
7a369d3704 Add ability to create mirrored logs for mirror LVs.
This check-in enables the 'mirrored' log type.  It can be specified
by using the '--mirrorlog' option as follows:
#> lvcreate -m1 --mirrorlog mirrored -L 5G -n lv vg

I've also included a couple of updates to the testsuite.  These updates
include tests for the new log type, and fixes to some of the
*lvconvert* tests.
2010-03-26 22:15:43 +00:00
Mike Snitzer
7b0f529d3e Fix clvmd cluster propagation of dmeventd monitoring mode.
clvmd's do_lock_lv() already properly controls dmeventd monitoring based
on LCK_DMEVENTD_MONITOR_MODE in lock_flags -- though one small fix was
needed for this to work: _lock_for_cluster() must treat
dmeventd_monitor_mode()'s return as a tri-state value.

Also clean up do_lock_lv() to:
- explicitly init_dmeventd_monitor() based on LCK_DMEVENTD_MONITOR_MODE
- no longer reset init_dmeventd_monitor() to default at the end of
  do_lock_lv() -- it is unnecessary
2010-03-26 15:40:13 +00:00
Alasdair Kergon
2abbc07f3c Allow ALLOC_ANYWHERE to split contiguous areas. 2010-03-25 21:19:26 +00:00
Alasdair Kergon
a7ca334681 Add some assertions to allocation code. 2010-03-25 18:16:54 +00:00
Alasdair Kergon
f4cea344b1 improve a few comments in last check-in 2010-03-25 02:40:09 +00:00
Alasdair Kergon
8d6722c8ad Introduce pv_area_used into allocation algorithm and add debug messages.
This is the next preparatory step towards better --alloc anywhere
support and is not intended to break anything that currently works so
please report any problems - segfaults, bogus data in the new debug
messages, or if the code now chooses bizarre allocation layouts.
2010-03-25 02:31:48 +00:00
Mike Snitzer
a6bc975a24 Improve activation monitoring option processing
. Add "monitoring" option to "activation" section of lvm.conf
. Have clvmd consult the lvm.conf "activation/monitoring" too.
. Introduce toollib.c:get_activation_monitoring_mode().
. Error out when both --monitor and --ignoremonitoring are provided.
. Add --monitor and --ignoremonitoring support to lvcreate.  Update
  lvcreate man page accordingly.
. Clarify that '--monitor' controls the start and stop of monitoring in
  the {vg,lv}change man pages.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2010-03-23 22:30:18 +00:00
Petr Rockai
a2b6bbdfb2 Also honour abort_on_internal_errors when log_fn is set. 2010-03-23 18:18:49 +00:00
Alasdair Kergon
36f9d53b60 Allow dynamic extension of array of areas selected as allocation candidates. 2010-03-23 15:07:55 +00:00
Peter Rajnoha
5161ecb98d Autoreconf.
(Strictly require libudev if udev_sync is used)
2010-03-23 14:44:42 +00:00
Dave Wysochanski
15fdc8d3ee Avoid scanning all PVs in the system if operating on a device with mdas.
When we pv_read() a device that has an orphan vgname, we might need to scan
the system to be sure this is true.  However, if the PV has mdas, there's
no way it can have an orphan vgname unless it is a true orphan.
Some areas of the code were optimized to take advantage of this fact, while
others were not (we would still do the expensive scan if a device had mdas
but had an orphan VG).

This patch unifies the code so that every place we are operating on such
a PV, we skip the expensive scan if there are mdas.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Acked-by: Petr Rockai <prockai@redhat.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
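
A hedged sketch of the short-circuit; the struct and helper below are stand-ins for the internal lvmcache/metadata types, not the real ones:

#include <libdevmapper.h>

struct pv_probe {               /* stand-in for the internal PV info */
        struct dm_list mdas;    /* metadata areas found on the device */
        int looks_orphan;       /* device currently carries an orphan vgname */
};

/* A PV with its own metadata areas cannot be claimed by another device,
 * so an orphan vgname on it is definitive and the full scan is skipped. */
int needs_full_scan(const struct pv_probe *pv)
{
        if (!pv->looks_orphan)
                return 0;

        return dm_list_empty(&pv->mdas);        /* scan only when there are no mdas */
}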
2010-03-18 17:29:12 +00:00
Alasdair Kergon
1091650a79 autoreconf & add missing WHATS_NEW entry 2010-03-18 13:24:35 +00:00
Petr Rockai
649c45078f Add infrastructure for running the functional testsuite with locking_type set
to 3, using a local (singlenode) clvmd.
2010-03-18 09:19:30 +00:00
Milan Broz
acb4b5e4de Fix pvcreate device check.
If a user tries to vgcreate or vgextend a non-existent VG,
these messages appear:

# vgcreate xxx /dev/xxx
  Internal error: Volume Group xxx was not unlocked
  Device /dev/xxx not found (or ignored by filtering).
  Unable to add physical volume '/dev/xxx' to volume group 'xxx'.
  Internal error: Attempt to unlock unlocked VG xxx.

(the same with existing VG and non-existing PV & vgextend)
# vgextend vg_test /dev/xxx
...

It is caused by the code trying to "refresh" the cache
using cache destroy if the md filter is switched on.

But we can now change filters and rescan even without this
machinery; just use refresh_filters
(and reset the md filter afterwards).

(The patch also uncovers a cache alias bug in the vgsplit test;
fix it by using a better filter line.)
2010-03-17 14:44:18 +00:00
Alasdair Kergon
0a5182fc97 Suppress repeated errors about the same missing PV uuids.
Bypass full device scans when using internally-cached VG metadata.
2010-03-17 02:11:18 +00:00
Alasdair Kergon
f168869702 fix last checkin 2010-03-16 19:06:57 +00:00
Alasdair Kergon
b1f9a2f5d1 Only do one full device scan during each read of text format metadata. 2010-03-16 17:30:00 +00:00
Alasdair Kergon
38220f9fe9 Remove unnecessary full_scan parameter from get_vgids and get_vgnames calls. 2010-03-16 16:57:03 +00:00
Alasdair Kergon
cccae7e633 Look up missing PVs by uuid not dev_name in _pvs_single to avoid invalid stat.
Make find_pv_in_vg_by_uuid() return same type as related functions.
2010-03-16 15:30:48 +00:00
Alasdair Kergon
770dc81b8e Introduce is_missing_pv(). 2010-03-16 14:37:38 +00:00
Alasdair Kergon
415feb2f44 some missing debug messages 2010-03-09 12:31:51 +00:00
Zdenek Kabelac
d0c3da55a0 Update comments for selecting maps
Use dm_snprintf and check the result to verify we create a correct /proc path name.
2010-03-09 10:25:50 +00:00
Alasdair Kergon
f0f43bc093 Misc cleanups in the new mlock code, incl. improving some variable names
& messages; using more statics (for now) to avoid redundant
recalculation; validating config file just once on loading; keeping maps
file open.
2010-03-09 03:16:11 +00:00
Zdenek Kabelac
4e27a85a2b Use mlock() only on 'r' memory maps 2010-03-08 17:14:21 +00:00
Zdenek Kabelac
26ade5f27e Unconditionally ignore the Virtual Dynamically-linked Shared Object as well
(VDSO on 32-bit is VSyscall on 64-bit).
It seems it could be locked on 64-bit kernels running 32-bit binaries,
but it causes trouble on real 32-bit machines, where mlock() returns an
error when trying to lock such a map area (0xffffe000).
Behavior of mlockall() seems to be similar.
2010-03-08 15:55:52 +00:00
Zdenek Kabelac
c900819b4e Use '_' prefix for local static variable. 2010-03-05 15:14:03 +00:00
Zdenek Kabelac
18b82048e4 mlockall() -> mlock()
This patch adds a new implementation of the locking function to replace
mlockall(), which may lock far too much memory (>100MB).
The new function instead uses the mlock() system call and selectively locks
memory areas from /proc/self/maps, trying to avoid locking areas
unused during the locked state.

The patch also adds struct cmd_context to all memlock() calls to have
access to the configuration.

For backward compatibility, the mlockall() functionality
is preserved behind the "activation/use_mlockall" flag.

As a simple check, locking and unlocking count the amount of memory
and compare whether the values match.
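
A self-contained sketch of the idea; the real implementation also honours the use_mlockall fallback and other reserved areas, so treat this as an assumption-laden illustration only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Walk /proc/self/maps, mlock() only readable mappings, and return the
 * number of bytes locked so the lock/unlock totals can be compared. */
long lock_readable_maps(void)
{
        FILE *f;
        char line[512];
        unsigned long start, end;
        char perms[8];
        long locked = 0;

        if (!(f = fopen("/proc/self/maps", "r")))
                return -1;

        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "%lx-%lx %7s", &start, &end, perms) != 3)
                        continue;
                if (perms[0] != 'r')            /* skip non-readable areas */
                        continue;
                if (strstr(line, "[vdso]") || strstr(line, "[vsyscall]"))
                        continue;               /* cannot be locked on some kernels */
                if (!mlock((const void *) (uintptr_t) start, end - start))
                        locked += (long) (end - start);
        }

        fclose(f);
        return locked;
}

An unlock pass built the same way around munlock() would return a second total, and comparing the two gives the simple consistency check the commit message describes.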
2010-03-05 14:48:33 +00:00
Zdenek Kabelac
539f4a7728 Readline linking update
Modify linking of the readline library. Create a new substituted variable
READLINE_LIBS - the readline library is linked ONLY with tools that really use
it - i.e. lvm. (Static lvm does not use readline.)
The previous behaviour put this library into the variable LIBS and thus
linked it with all created object files of the lvm project (i.e. plugins...).

READLINE detection is simplified.

The termcap library is linked in only if the readline library doesn't have its own
dependency on it (i.e. old distributions).
2010-03-04 11:19:15 +00:00
Zdenek Kabelac
814aebc4e9 Use $(top_builddir) for inclusion of make.tmpl in Makefiles. 2010-03-04 09:51:37 +00:00
Mike Snitzer
c485fe183e Handle a misaligned device that reports a -1 alignment_offset.
The kernel's blk_stack_limits() function may flag a device as
'misaligned'.  If it does, the alignment_offset will be -1.

Update set_pe_align_offset() to accommodate this corner case.
2010-03-02 21:56:14 +00:00
Alasdair Kergon
16d9293bd7 Extend core allocation code in preparation for mirrored log areas. 2010-03-01 20:00:20 +00:00
Milan Broz
65752052e1 Remove the lvs_in_vg_activated_by_uuid_only call.
There is now no difference from lvs_in_vg_activated,
so convert all users to that call.
2010-02-24 20:01:40 +00:00
Milan Broz
ab9663f394 Always query the device by UUID only.
LVM2 devices always have a UUID set, even if imported from lvm1 metadata.

The patch removes the name argument from the dev_manager_info call and converts
all activation-related calls to query by UUID.

It also simplifies the mknode call (which is the only user of the mknodes parameter).
2010-02-24 20:00:56 +00:00
Dave Wysochanski
3c23ff0f2e Add dm_pool_strdup to allocate memory and copy a tag in {lv|vg}_change_tag()
We need to allocate memory for the tag and copy the tag value before we
add it to the list of tags.  We could put this inside lvm2app since the
tools keep their memory around until vg_write/vg_commit is called, but
we put it inside the internal library to minimize code in lvm2app.
We need to copy the tag passed in by the caller to ensure the lifetime of
the memory until the {vg|lv} handle is released.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
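
A minimal illustration of the pattern with libdevmapper's pool API; the actual tag-list handling inside {vg|lv}_change_tag() is omitted and the helper name is made up:

#include <libdevmapper.h>

/* Copy the caller's tag into the handle's own pool so the string stays
 * valid for the lifetime of the VG/LV handle, regardless of what the
 * caller later does with the original buffer. */
const char *add_tag_copy(struct dm_pool *mem, const char *tag)
{
        char *copy;

        if (!(copy = dm_pool_strdup(mem, tag)))
                return NULL;

        /* ... append 'copy' to the VG/LV tag list here ... */

        return copy;
}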
2010-02-24 18:15:57 +00:00
Dave Wysochanski
cd69ee7453 Refactor lvchange_tag() to call lv_change_tag() library function.
Similar refactoring to vgchange: pull out the common parts and put them into
a library function for reuse.  There should be no functional change.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
2010-02-24 18:15:49 +00:00
Dave Wysochanski
e17bcc7432 Refactor _vgchange_tag() to vg_change_tag() library function.
Pull out common code to be called from tools as well as lvm2app.
Leave archive() at the tool level so we can use it from vgcreate
as well as vgchange.  There should be no functional change.
- Add a stack macro in vgchange.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
2010-02-24 18:15:05 +00:00