.TH LVCREATE 8 "LVM TOOLS #VERSION#" "Sistina Software UK" \" -*- nroff -*-
.
.\" Use 1st. parameter with \% to fix 'man2html' rendering on same line!
.de SIZE_G
.  IR \\$1 \c
.  RB [ b | B | s | S | k | K | m | M | g | G ]
..
.de SIZE_E
.  IR \\$1 \c
.  RB [ b | B | s | S | k | K | m | M | \c
.  BR g | G | t | T | p | P | e | E ]
..
.
.SH NAME
.
lvcreate \- create a logical volume in an existing volume group
.
.SH SYNOPSIS
.
.ad l
.B lvcreate
.RB [ \-a | \-\-activate
.RB [ a ][ e | l | s ]{ y | n }]
.RB [ \-\-addtag
.IR Tag ]
.RB [ \-\-alloc
.IR Allocation\%Policy ]
.RB [ \-A | \-\-autobackup
.RB { y | n }]
.RB [ \-H | \-\-cache ]
.RB [ \-\-cachemode
.RB { passthrough | writeback | writethrough }]
.RB [ \-\-cachepolicy
.IR Policy ]
.RB \%[ \-\-cachepool
.IR CachePoolLogicalVolume ]
.RB [ \-\-cachesettings
.IR Key \fB= Value ]
.RB [ \-c | \-\-chunksize
.IR ChunkSize ]
.RB [ \-\-commandprofile
.IR ProfileName ]
.RB \%[ \-C | \-\-contiguous
.RB { y | n }]
.RB [ \-d | \-\-debug ]
.RB [ \-\-discards
.RB \%{ ignore | nopassdown | passdown }]
.RB [ \-\-errorwhenfull
.RB { y | n }]
.RB [{ \-l | \-\-extents
.BR \fILogicalExtents\%Number [ % { FREE | PVS | VG }]
.RB |
.BR \-L | \-\-size
.BR \fILogicalVolumeSize }
.RB [ \-i | \-\-stripes
.IR Stripes
.RB [ \-I | \-\-stripesize
.IR StripeSize ]]]
.RB [ \-h | \-? | \-\-help ]
.RB [ \-K | \-\-ignoreactivationskip ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-minor
.IR Minor
.RB [ \-j | \-\-major
.IR Major ]]
.RB [ \-\-metadataprofile
.IR Profile\%Name ]
.RB [ \-m | \-\-mirrors
.IR Mirrors
.RB [ \-\-corelog | \-\-mirrorlog
.RB { disk | core | mirrored }]
.RB [ \-\-nosync ]
.RB [ \-R | \-\-regionsize
.BR \fIMirrorLogRegionSize ]]
.RB [ \-\-monitor
.RB { y | n }]
.RB [ \-n | \-\-name
.IR Logical\%Volume ]
.RB [ \-\-noudevsync ]
.RB [ \-p | \-\-permission
.RB { r | rw }]
.RB [ \-M | \-\-persistent
.RB { y | n }]
.\" .RB [ \-\-pooldatasize
.\" .I DataVolumeSize
.RB \%[ \-\-poolmetadatasize
.IR MetadataVolumeSize ]
.RB [ \-\-poolmetadataspare
.RB { y | n }]
.RB [ \-\-[raid]maxrecoveryrate
.IR Rate ]
.RB [ \-\-[raid]minrecoveryrate
.IR Rate ]
.RB [ \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }]
.RB [ \-\-reportformat
.RB { basic | json }]
.RB \%[ \-k | \-\-setactivationskip
.RB { y | n }]
.RB [ \-s | \-\-snapshot ]
.RB [ \-V | \-\-virtualsize
.IR VirtualSize ]
.RB [ \-t | \-\-test ]
.RB [ \-T | \-\-thin ]
.RB [ \-\-thinpool
.IR ThinPoolLogicalVolume ]
.RB [ \-\-type
.IR SegmentType ]
.RB [ \-v | \-\-verbose ]
.RB [ \-W | \-\-wipesignatures
.RB { y | n }]
.RB [ \-Z | \-\-zero
.RB { y | n }]
.RI [ VolumeGroup
.RI |
.RI \%{ ExternalOrigin | Origin | Pool } LogicalVolume
.RI \%[ PhysicalVolumePath [ \fB: \fIPE \fR[ \fB\- PE ]]...]]
.LP
.B lvcreate
.RB [ \-l | \-\-extents
.BR \fILogicalExtentsNumber [ % { FREE | ORIGIN | PVS | VG }]
|
.BR \-L | \-\-size
.\" | \-\-pooldatasize
.IR LogicalVolumeSize ]
.RB [ \-c | \-\-chunksize
.IR ChunkSize ]
.RB \%[ \-\-commandprofile
.IR Profile\%Name ]
.RB [ \-\-noudevsync ]
.RB [ \-\-ignoremonitoring ]
.RB [ \-\-metadataprofile
.IR Profile\%Name ]
.RB \%[ \-\-monitor
.RB { y | n }]
.RB [ \-n | \-\-name
.IR SnapshotLogicalVolumeName ]
.RB [ \-\-reportformat
.RB { basic | json }]
.BR \-s | \-\-snapshot | \-H | \-\-cache
.RI \%{[ VolumeGroup \fB/\fP] OriginalLogicalVolume
.RB \%[ \-V | \-\-virtualsize
.IR VirtualSize ]}
.ad b
.
.SH DESCRIPTION
.
2012-04-11 16:42:10 +04:00
lvcreate creates a new logical volume in a volume group (see
.BR vgcreate "(8), " vgchange (8))
by allocating logical extents from the free physical extent pool
2002-01-04 23:35:19 +03:00
of that volume group. If there are not enough free physical extents then
2012-04-11 16:42:10 +04:00
the volume group can be extended (see
.BR vgextend (8))
with other physical volumes or by reducing existing logical volumes
of this volume group in size (see
.BR lvreduce (8)).
If you specify one or more PhysicalVolumes, allocation of physical
2009-10-26 16:41:13 +03:00
extents will be restricted to these volumes.
.br
2002-01-04 23:35:19 +03:00
.br
2012-04-11 16:42:10 +04:00
The second form supports the creation of snapshot logical volumes which
2002-01-04 23:35:19 +03:00
keep the contents of the original logical volume for backup purposes.
2015-09-23 12:28:54 +03:00
.
.SH OPTIONS
.
See
.BR lvm (8)
for common options.
.
.HP
.BR \-a | \-\-activate
.RB [ a ][ l | e | s ]{ y | n }
.br
Controls the availability of the Logical Volumes for immediate use after
the command finishes running.
By default, new Logical Volumes are activated (\fB\-ay\fP).
If it is technically possible, \fB\-an\fP will leave the new Logical
Volume inactive. For example, snapshots of an active origin can only be
created in the active state, so \fB\-an\fP cannot be used with
\fB\-\-type snapshot\fP. This does not apply to thin volume snapshots,
which are by default created with a flag to skip their activation
(\fB\-ky\fP).
Normally the \fB\-\-zero n\fP argument has to be supplied too, because
zeroing (the default behaviour) also requires activation.
If the autoactivation option is used (\fB\-aay\fP), the logical volume is
activated only if it matches an item in the
\fBactivation/auto_activation_volume_list\fP
set in \fBlvm.conf\fP(5).
For autoactivated logical volumes, \fB\-\-zero n\fP and
\fB\-\-wipesignatures n\fP are always assumed and cannot
be overridden. If clustered locking is enabled,
\fB\-aey\fP will activate exclusively on one node and
.BR \-a { a | l } y
will activate only on the local node.
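.br
For example, a hypothetical invocation creating a 100MiB volume that is left
inactive and unzeroed (volume and group names are placeholders):
.br
.B lvcreate \-an \-Zn \-L 100M \-n lv_test vg00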
.
.HP
.BR \-H | \-\-cache
.br
Creates a cache or cache pool logical volume.
.\" or both.
Specifying the optional argument \fB\-\-extents\fP or \fB\-\-size\fP
will cause the creation of the cache logical volume.
.\" Specifying the optional argument \fB\-\-pooldatasize\fP will cause
.\" the creation of the cache pool logical volume.
.\" Specifying both arguments will cause the creation of cache with its
.\" cache pool volume.
When the Volume Group name is specified together with an existing logical
volume name which is NOT a cache pool name, such a volume is treated
as the cache origin volume and a cache pool is created. In this case
\fB\-\-extents\fP or \fB\-\-size\fP specifies the size of the cache pool volume.
See \fBlvmcache\fP(7) for more information about caching support.
Note that the cache segment type requires a dm-cache kernel module version
1.3.0 or greater.
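.br
For example, given an existing volume vg00/lv_data, a hypothetical command
adding a 1GiB cache pool to it (names are placeholders):
.br
.B lvcreate \-H \-L 1G vg00/lv_data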
.
.HP
.BR \-\-cachemode
.RB { passthrough | writeback | writethrough }
.br
Specifying a cache mode determines when the writes to a cache LV
are considered complete. When \fBwriteback\fP is specified, a write is
considered complete as soon as it is stored in the cache pool LV.
If \fBwritethrough\fP is specified, a write is considered complete only
when it has been stored in both the cache pool LV and on the origin LV.
While \fBwritethrough\fP may be slower for writes, it is more
resilient if something should happen to a device associated with the
cache pool LV. With \fBpassthrough\fP mode, all reads are served
from the origin LV (all reads miss the cache) and all writes are
forwarded to the origin LV; additionally, write hits cause cache
block invalidates. See \fBlvmcache\fP(7) for more details.
.
.HP
.BR \-\-cachepolicy
.IR Policy
.br
Only applicable to cached LVs; see also \fBlvmcache\fP(7). Sets
the cache policy. \fBmq\fP is the basic policy name. \fBsmq\fP is a more
advanced version available in newer kernels.
.
.HP
.BR \-\-cachepool
.IR CachePoolLogicalVolume { Name | Path }
.br
Specifies the name of the cache pool volume. Alternatively, the pool name
can be appended to the Volume Group name argument.
.
.HP
.BR \-\-cachesettings
.IB Key = Value
.br
Only applicable to cached LVs; see also \fBlvmcache\fP(7). Sets
the cache tunable settings. In most use-cases, default values should be adequate.
The special string value \fBdefault\fP switches a setting back to its default
kernel value and removes it from the list of settings stored in lvm2 metadata.
.
.HP
.BR \-c | \-\-chunksize
.SIZE_G \%ChunkSize
.br
Gives the size of chunk for snapshot, cache pool and thin pool logical volumes.
Default unit is kilobytes.
.br
For snapshots the value must be a power of 2 between 4KiB and 512KiB
and the default value is 4KiB.
.br
For cache pools the value must be a multiple of 32KiB
between 32KiB and 1GiB. The default is 64KiB.
When the size is specified while caching a volume, it may not be smaller
than the chunk size used at cache pool creation.
.br
For thin pools the value must be a multiple of 64KiB
between 64KiB and 1GiB.
If the pool metadata size is not specified, the default value starts at
64KiB and grows until the pool metadata fits within 128MiB.
See the
.BR lvm.conf (5)
setting \fBallocation/thin_pool_chunk_size_policy\fP
to select a different calculation policy.
Thin pool target version <1.4 requires this value to be a power of 2.
For target version <1.5 discard is not supported for non power of 2 values.
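.br
For example, a hypothetical 10GiB thin pool using a 256KiB chunk size
(names are placeholders):
.br
.B lvcreate \-T \-L 10G \-c 256K vg00/pool0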
.
.HP
.BR \-C | \-\-contiguous
.RB { y | n }
.br
Sets or resets the contiguous allocation policy for
logical volumes. Default is no contiguous allocation, based
on a next-free principle.
.
.HP
.BR \-\-corelog
.br
This is a shortcut for the option \fB\-\-mirrorlog core\fP.
.
.HP
.BR \-\-discards
.RB { ignore | nopassdown | passdown }
.br
Sets discards behavior for thin pool.
Default is \fBpassdown\fP.
.
.HP
.BR \-\-errorwhenfull
.RB { y | n }
.br
Configures thin pool behaviour when data space is exhausted.
Default is \fBn\fPo.
The device will queue I/O operations until the target timeout
(see the dm-thin-pool kernel module option \fBno_space_timeout\fP)
expires. A system configured this way thus has time to e.g. extend
the size of the thin pool data device.
When set to \fBy\fPes, I/O operations are immediately errored.
.
.HP
.BR \-K | \-\-ignoreactivationskip
.br
Ignore the flag to skip Logical Volumes during activation.
Use the \fB\-\-setactivationskip\fP option to set or reset the
activation skipping flag persistently for a logical volume.
.
.HP
.BR \-\-ignoremonitoring
.br
Make no attempt to interact with dmeventd unless \fB\-\-monitor\fP
is specified.
.
.HP
.BR \-l | \-\-extents
.IR LogicalExtentsNumber \c
.RB [ % { VG | PVS | FREE | ORIGIN }]
.br
Specifies the size of the new LV in logical extents. The number of
physical extents allocated may be different, and depends on the LV type.
Certain LV types require more physical extents for data redundancy or
metadata. An alternate syntax allows the size to be determined indirectly
as a percentage of the size of a related VG, LV, or set of PVs. The
suffix \fB%VG\fP denotes the total size of the VG, the suffix \fB%FREE\fP
the remaining free space in the VG, and the suffix \fB%PVS\fP the free
space in the specified Physical Volumes. For a snapshot, the size
can be expressed as a percentage of the total size of the Origin Logical
Volume with the suffix \fB%ORIGIN\fP (\fB100%ORIGIN\fP provides space for
the whole origin).
When expressed as a percentage, the size defines an upper limit for the
number of logical extents in the new LV. The precise number of logical
extents in the new LV is not determined until the command has completed.
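.br
For example, a hypothetical volume occupying all remaining free space in a
volume group (names are placeholders):
.br
.B lvcreate \-l 100%FREE \-n lv_big vg00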
.
.HP
.BR \-j | \-\-major
.IR Major
.br
Sets the major number.
Major numbers are not supported with pool volumes.
This option is supported only on older systems
(kernel version 2.4) and is ignored on modern Linux systems where major
numbers are dynamically assigned.
.
.HP
.BR \-\-metadataprofile
.IR ProfileName
.br
Uses and attaches the \fIProfileName\fP configuration profile to the logical
volume metadata. Whenever the logical volume is processed next time,
the profile is automatically applied. If the volume group has another
profile attached, the logical volume profile is preferred.
See \fBlvm.conf\fP(5) for more information about \fBmetadata profiles\fP.
.
.HP
.BR \-\-minor
.IR Minor
.br
Sets the minor number.
Minor numbers are not supported with pool volumes.
.
.HP
.BR \-m | \-\-mirrors
.IR Mirrors
.br
Creates a mirrored logical volume with \fIMirrors\fP copies.
For example, specifying \fB\-m 1\fP
would result in a mirror with two sides; that is,
a linear volume plus one copy.

Specifying the optional argument \fB\-\-nosync\fP will cause the creation
of the mirror LV to skip the initial resynchronization. Any data written
afterwards will be mirrored, but the original contents will not be copied.
This is useful for skipping a potentially long and resource-intensive initial
sync of an empty mirrored RaidLV.

There are two implementations of mirroring which can be used and correspond
to the "\fIraid1\fP" and "\fImirror\fP" segment types.
The default is "\fIraid1\fP". See the
\fB\-\-type\fP option for more information if you would like to use the
legacy "\fImirror\fP" segment type. See the
.BR lvm.conf (5)
settings \fBglobal/mirror_segtype_default\fP
and \fBglobal/raid10_segtype_default\fP
to configure the default mirror segment type.
The options
\fB\-\-mirrorlog\fP and \fB\-\-corelog\fP apply
to the legacy "\fImirror\fP" segment type only.
Note the current maxima for mirrors are 7 for "mirror" providing
8 mirror legs and 9 for "raid1" providing 10 legs.
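.br
For example, a hypothetical two-sided 500MiB mirror (names are placeholders):
.br
.B lvcreate \-m 1 \-L 500M \-n lv_mirror vg00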
.
.HP
.BR \-\-mirrorlog
.RB { disk | core | mirrored }
.br
Specifies the type of log to be used for logical volumes utilizing
the legacy "\fImirror\fP" segment type.
.br
The default is \fBdisk\fP, which is persistent and requires
a small amount of storage space, usually on a separate device from the
data being mirrored.
.br
Using \fBcore\fP means the mirror is regenerated by copying the data
from the first device each time the logical volume is activated,
for example after every reboot.
.br
Using \fBmirrored\fP will create a persistent log that is itself mirrored.
.
.HP
.BR \-\-monitor
.RB { y | n }
.br
Starts or avoids monitoring a mirrored, snapshot or thin pool logical volume
with dmeventd, if it is installed.
If a device used by a monitored mirror reports an I/O error,
the failure is handled according to
\fBactivation/mirror_image_fault_policy\fP
and \fBactivation/mirror_log_fault_policy\fP
set in \fBlvm.conf\fP(5).
.
.HP
.BR \-n | \-\-name
.IR LogicalVolume { Name | Path }
.br
Sets the name for the new logical volume.
.br
Without this option, a default name of "lvol#" will be generated, where
# is the LVM internal number of the logical volume.
.
.HP
.BR \-\-nosync
.br
Causes the creation of mirror, raid1, raid4, raid5 and raid10 volumes to skip
the initial resynchronization. In the case of mirror, raid1 and raid10, any
data written afterwards will be mirrored, but the original contents will not
be copied. In the case of raid4 and raid5, no parity blocks will be written,
though any data written afterwards will cause parity blocks to be stored.
.br
This is useful for skipping a potentially long and resource-intensive initial
sync of an empty mirror/raid1/raid4/raid5 or raid10 LV.
.br
This option is not valid for raid6, because raid6 relies on proper parity
(P and Q syndromes) being created during initial synchronization in order
to reconstruct proper user data in case of device failures.
raid0 and raid0_meta don't provide any data copies or parity support
and thus don't support initial resynchronization.
.
.HP
.BR \-\-noudevsync
.br
Disables udev synchronisation. The
process will not wait for notification from udev.
It will continue irrespective of any possible udev processing
in the background. You should only use this if udev is not running
or has rules that ignore the devices LVM2 creates.
.
.HP
.BR \-p | \-\-permission
.RB { r | rw }
.br
Sets access permissions to read-only (\fBr\fP) or read and write (\fBrw\fP).
.br
Default is read and write.
.
.HP
.BR \-M | \-\-persistent
.RB { y | n }
.br
Set to \fBy\fP to make the specified minor number persistent.
Pool volumes cannot have persistent major and minor numbers.
Defaults to \fBy\fPes only when a major or minor number is specified.
Otherwise it is \fBn\fPo.
.\" .HP
.\" .IR \fB\-\-pooldatasize " " PoolDataVolumeSize [ bBsSkKmMgGtTpPeE ]
.\" Sets the size of pool's data logical volume.
.\" For thin pools you may also specify the size
.\" with the option \fB\-\-size\fP.
.\"
.
.HP
.BR \-\-poolmetadatasize
.SIZE_G \%MetadataVolumeSize
.br
Sets the size of the pool's metadata logical volume.
Supported values are in the range between 2MiB and 16GiB for thin pools,
and up to 16GiB for cache pools. The minimum value is computed from the
pool's data size.
The default value for thin pools is (Pool_LV_size / Pool_LV_chunk_size * 64b).
To work with a thin pool, there should be at least 25% of free space
when the size of the metadata is smaller than 16MiB,
or at least 4MiB of free space otherwise.
Default unit is megabytes.
.
.HP
.BR \-\-poolmetadataspare
.RB { y | n }
.br
Controls creation and maintenance of the pool metadata spare logical volume
that will be used for automated pool recovery.
Only one such volume is maintained within a volume group,
with the size of the biggest pool metadata volume.
Default is \fBy\fPes.
.
.HP
.BR \-\-[raid]maxrecoveryrate
.SIZE_G \%Rate
.br
Sets the maximum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
.
.HP
.BR \-\-[raid]minrecoveryrate
.SIZE_G \%Rate
.br
Sets the minimum recovery rate for a RAID logical volume. \fIRate\fP
is specified as an amount per second for each device in the array.
If no suffix is given, then KiB/sec/device is assumed. Setting the
recovery rate to 0 means it will be unbounded.
.
.HP
.BR \-r | \-\-readahead
.RB { \fIReadAheadSectors | auto | none }
.br
Sets the read-ahead sector count of this logical volume.
For volume groups with metadata in lvm1 format, this must
be a value between 2 and 120.
The default value is \fBauto\fP, which allows the kernel to choose
a suitable value automatically.
\fBnone\fP is equivalent to specifying zero.
.
.HP
.BR \-R | \-\-regionsize
.SIZE_G \%MirrorLogRegionSize
.br
A mirror is divided into regions of this size (in MiB), and the mirror log
uses this granularity to track which regions are in sync.
.
.HP
.BR \-k | \-\-setactivationskip
.RB { y | n }
.br
Controls whether Logical Volumes are persistently flagged to be skipped during
activation. By default, thin snapshot volumes are flagged for activation skip.
See the
.BR lvm.conf (5)
setting \fBactivation/auto_set_activation_skip\fP
for how to change this default behaviour.
To activate such volumes, the extra \fB\-\-ignoreactivationskip\fP
option must be used. The flag is not applied during deactivation. Use the
\fBlvchange \-\-setactivationskip\fP
command to change the skip flag for existing volumes.
To see whether the flag is attached, use the \fBlvs\fP command,
where the state of the flag is reported within the \fBlv_attr\fP bits.
.
.HP
.BR \-L | \-\-size
.SIZE_E \%LogicalVolumeSize
.br
Gives the size to allocate for the new logical volume.
A size suffix of \fBB\fP for bytes, \fBS\fP for sectors of 512 bytes,
\fBK\fP for kilobytes, \fBM\fP for megabytes,
\fBG\fP for gigabytes, \fBT\fP for terabytes, \fBP\fP for petabytes
or \fBE\fP for exabytes is optional.
.br
Default unit is megabytes.
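.br
For example, the following hypothetical invocations are equivalent,
the second relying on the default unit of megabytes (names are placeholders):
.br
.B lvcreate \-L 1.5G \-n lv_home vg00
.br
.B lvcreate \-L 1536 \-n lv_home vg00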
.
.HP
.BR \-s | \-\-snapshot
.IR OriginalLogicalVolume { Name | Path }
.br
Creates a snapshot logical volume (or snapshot) for an existing, so-called
original logical volume (or origin).
Snapshots provide a 'frozen image' of the contents of the origin
while the origin can still be updated. They enable consistent
backups and online recovery of removed/overwritten data/files.
.br
A thin snapshot is created when the origin is a thin volume and
the size IS NOT specified. A thin snapshot shares the same blocks within
the thin pool volume.
A non-thin snapshot with a specified size does not need
the same amount of storage the origin has. In a typical scenario,
15-20% might be enough. In case the snapshot runs out of storage, use
.BR lvextend (8)
to grow it. Shrinking a snapshot is supported by
.BR lvreduce (8)
as well. Run
.BR lvs (8)
on the snapshot in order to check how much data is allocated to it.
Note: a small amount of the space you allocate to the snapshot is
used to track the locations of the chunks of data, so you should
allocate slightly more space than you actually need and monitor
(\fB\-\-monitor\fP) the rate at which the snapshot data is growing
so you can \fBavoid\fP running out of space.
If \fB\-\-thinpool\fP is specified, a thin volume is created that will
use the given original logical volume as an external origin that
serves unprovisioned blocks.
Only read-only volumes can be used as external origins.
To make the volume an external origin, lvm expects the volume to be inactive.
An external origin volume can be used/shared by many thin volumes,
even from different thin pools. See
.BR lvconvert (8)
for online conversion to thin volumes with external origin.
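.br
For example, a hypothetical 200MiB snapshot of an existing volume
(names are placeholders):
.br
.B lvcreate \-s \-L 200M \-n lv_snap vg00/lv_data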
.
.HP
.BR \-i | \-\-stripes
.IR Stripes
.br
Gives the number of stripes.
This is equal to the number of physical volumes to scatter
the logical volume data across. When creating a RAID 4/5/6 logical volume,
the extra devices which are necessary for parity are
internally accounted for. Specifying \fB\-i 3\fP
would use 3 devices for striped and RAID 0 logical volumes,
4 devices for RAID 4/5, 5 devices for RAID 6 and 6 devices for RAID 10.
Alternatively, if the \fB\-i\fP argument is omitted, RAID 0 will stripe
across 2 devices, RAID 4/5 across 3 PVs, RAID 6 across 5 PVs and
RAID 10 across 4 PVs in the volume group.
In order to stripe across all PVs of the VG when the \fB\-i\fP argument is
omitted, set raid_stripe_all_devices=1 in the allocation
section of
.BR lvm.conf (5)
or add
.br
\fB\-\-config allocation/raid_stripe_all_devices=1\fP
.br
to the command.
Note the current maxima for stripes depend on the created RAID type:
for raid10, the maximum number of stripes is 32;
for raid0, it is 64;
for raid4/5, it is 63;
and for raid6, it is 62.

See the \fB\-\-nosync\fP option to optionally avoid initial synchronization of RaidLVs.

Two implementations of basic striping are available in the kernel.
The original device-mapper implementation is the default and should
normally be used. The alternative implementation using MD, available
since version 1.7 of the RAID device-mapper kernel target (kernel
version 4.2), is provided to facilitate the development of new RAID
features. It may be accessed with \fB\-\-type raid0[_meta]\fP, but is best
avoided at present because of assorted restrictions on resizing and converting
such devices.
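.br
For example, a hypothetical 3GiB volume striped across 3 physical volumes
with a 64KiB stripe size (names are placeholders):
.br
.B lvcreate \-i 3 \-I 64 \-L 3G \-n lv_stripe vg00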
.
.HP
.BR \-I | \-\-stripesize
.IR StripeSize
.br
Gives the number of kilobytes for the granularity of the stripes.
.br
StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1 format.
For metadata in LVM2 format, the stripe size may be a larger
power of 2 but must not exceed the physical extent size.
.
.HP
.BR \-T | \-\-thin
.br
Creates a thin pool or thin logical volume, or both.
Specifying the optional argument \fB\-\-size\fP or \fB\-\-extents\fP
will cause the creation of the thin pool logical volume.
Specifying the optional argument \fB\-\-virtualsize\fP will cause
the creation of the thin logical volume from the given thin pool volume.
Specifying both arguments will cause the creation of both
the thin pool and the thin volume using this pool.
See \fBlvmthin\fP(7) for more information about thin provisioning support.
Thin provisioning requires the device-mapper kernel driver
from kernel 3.2 or greater.
2015-10-06 15:55:09 +03:00
.
.HP
.BR \-\-thinpool
.IR ThinPoolLogicalVolume { Name | Path }
.br
Specifies the name of the thin pool volume. Alternatively, the pool
name can be specified by appending it to the volume group name argument.
.
.HP
.BR \-\-type
.IR SegmentType
.br
Creates a logical volume with the specified segment type.
Supported types are:
.BR cache ,
.BR cache-pool ,
.BR error ,
.BR linear ,
.BR mirror ,
.BR raid0 ,
.BR raid1 ,
.BR raid4 ,
.BR raid5_la ,
.BR raid5_ls
.RB (=
.BR raid5 ),
.BR raid5_ra ,
.BR raid5_rs ,
.BR raid6_nc ,
.BR raid6_nr ,
.BR raid6_zr
.RB (=
.BR raid6 ),
.BR raid10 ,
.BR snapshot ,
.BR striped ,
.BR thin ,
.BR thin-pool
or
.BR zero .
A segment type may have a command line switch alias that will
enable its use.
When the type is not explicitly specified, an implicit type
is selected from the combination of options:
.BR \-H | \-\-cache | \-\-cachepool
(cache or cachepool),
.BR \-T | \-\-thin | \-\-thinpool
(thin or thinpool),
.BR \-m | \-\-mirrors
(raid1 or mirror),
.BR \-s | \-\-snapshot | \-V | \-\-virtualsize
(snapshot or thin),
.BR \-i | \-\-stripes
(striped).
The default segment type is \fBlinear\fP.
.
.HP
.BR \-V | \-\-virtualsize
.SIZE_E \%VirtualSize
.br
Creates a thinly provisioned device or a sparse device of the given size (in MiB by default).
See the \fBglobal/sparse_segtype_default\fP setting in
.BR lvm.conf (5)
to configure the default sparse segment type.
See \fBlvmthin\fP(7) for more information about thin provisioning support.
Anything written to a sparse snapshot will be returned when reading from it.
Reading from other areas of the device will return blocks of zeros.
A virtual snapshot (sparse snapshot) is implemented by creating
a hidden virtual device of the requested size using the zero target.
A suffix of _vorigin is used for this device.
Note: using sparse snapshots is not efficient for larger
device sizes (GiB); thin provisioning should be used in this case.
.
.HP
.BR \-W | \-\-wipesignatures
.RB { y | n }
.br
Controls detection and subsequent wiping of signatures on a newly created
Logical Volume. There's a prompt for each signature detected to confirm
its wiping (unless \fB\-\-yes\fP is used, in which case LVM assumes a 'yes'
answer for each prompt automatically). If this option is not specified, then by
default \fB\-W\fP | \fB\-\-wipesignatures y\fP is assumed each time
zeroing is done (\fB\-Z\fP | \fB\-\-zero y\fP). This default behaviour
can be controlled by the \fB\%allocation/wipe_signatures_when_zeroing_new_lvs\fP
setting found in
.BR lvm.conf (5).
.br
If blkid wiping is used (\fBallocation/use_blkid_wiping\fP setting in
.BR lvm.conf (5))
and LVM2 is compiled with blkid wiping support, then the \fBblkid\fP(8) library is used
to detect the signatures (use the \fBblkid \-k\fP command to list the signatures that are recognized).
Otherwise, native LVM2 code is used to detect signatures (MD RAID, swap and LUKS
signatures are detected only in this case).
.br
The logical volume is not wiped if the read-only flag is set.
.
.HP
.BR \-Z | \-\-zero
.RB { y | n }
.br
Controls zeroing of the first 4KiB of data in the new logical volume.
Default is \fBy\fPes.
Snapshot COW volumes are always zeroed.
The logical volume is not zeroed if the read-only flag is set.
.br
Warning: trying to mount an unzeroed logical volume can cause the system to
hang.
.
.SH Examples
.
Creates a striped logical volume with 3 stripes, a stripe size of 8KiB
and a size of 100MiB in the volume group named vg00.
The logical volume name will be chosen by lvcreate:
.sp
.B lvcreate \-i 3 \-I 8 \-L 100M vg00
2002-01-04 23:35:19 +03:00
2011-10-29 00:36:05 +04:00
Creates a mirror logical volume with 2 sides with a useable size of 500 MiB.
2013-04-05 14:24:32 +04:00
This operation would require 3 devices (or option
2015-10-06 15:55:09 +03:00
\fB \- \- alloc \% anywhere\fP ) - two for the mirror
devices and one for the disk log:
2011-10-29 00:36:05 +04:00
.sp
2012-04-11 16:42:10 +04:00
.B lvcreate \- m1 \- L 500 M vg00

Creates a mirror logical volume with 2 sides with a useable size of 500 MiB.
This operation would require 2 devices - the log is "in-memory":
.sp
.B lvcreate \-m1 \-\-mirrorlog core \-L 500M vg00

Creates a snapshot logical volume named "vg00/snap" which has access to the
contents of the original logical volume named "vg00/lvol1"
at snapshot logical volume creation time. If the original logical volume
contains a file system, you can mount the snapshot logical volume on an
arbitrary directory in order to access the contents of the filesystem to run
a backup while the original filesystem continues to get updated:
.sp
.B lvcreate \-\-size 100m \-\-snapshot \-\-name snap /dev/vg00/lvol1

Creates a snapshot logical volume named "vg00/snap" sized to hold
changes to 20% of the original logical volume named "vg00/lvol1":
.sp
.B lvcreate \-s \-l 20%ORIGIN \-\-name snap vg00/lvol1
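
As an aside, the arithmetic behind a \-l 20%ORIGIN allocation can be sketched in plain shell (the extent numbers below are made-up examples, not queried from LVM):

```shell
# Illustrative arithmetic for '-l 20%ORIGIN': the snapshot is allocated
# 20% of the origin's logical extents (values below are made up).
origin_extents=2560   # e.g. a 10GiB origin with 4MiB extents
pct=20
snap_extents=$(( origin_extents * pct / 100 ))
echo "$snap_extents"  # prints "512" (2GiB worth of 4MiB extents)
```

Real allocations are rounded to whole physical extents by lvcreate.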

Creates a sparse device named /dev/vg1/sparse of size 1TiB with space for just
under 100MiB of actual data on it:
.sp
.B lvcreate \-\-virtualsize 1T \-\-size 100M \-\-snapshot \-\-name sparse vg1

Creates a linear logical volume "vg00/lvol1" using physical extents
/dev/sda:0\-7 and /dev/sdb:0\-7 for allocation of extents:
.sp
.B lvcreate \-L 64M \-n lvol1 vg00 /dev/sda:0\-7 /dev/sdb:0\-7

Creates a 5GiB RAID5 logical volume "vg00/my_lv", with 3 stripes (plus
a parity drive for a total of 4 devices) and a stripesize of 64KiB:
.sp
.B lvcreate \-\-type raid5 \-L 5G \-i 3 \-I 64 \-n my_lv vg00
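
The device arithmetic used in the RAID5 example above generalizes roughly as follows (an illustrative shell sketch, not LVM code): raid5 needs stripes + 1 devices, raid6 needs stripes + 2, and raid10 needs stripes times (mirrors + 1).

```shell
# Rough device-count arithmetic for common RAID types:
# raid5: stripes + 1 parity device; raid6: stripes + 2;
# raid10: stripes * (mirrors + 1).
raid_devices() {
    case "$1" in
        raid5*) echo $(( $2 + 1 )) ;;
        raid6*) echo $(( $2 + 2 )) ;;
        raid10) echo $(( $2 * ($3 + 1) )) ;;
    esac
}

raid_devices raid5 3      # prints "4" (matches the example above)
raid_devices raid6 3      # prints "5"
raid_devices raid10 2 1   # prints "4" (2 stripes, 2-way mirrors)
```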

Creates a RAID5 logical volume "vg00/my_lv", using all of the free
space in the VG and spanning all the PVs in the VG (note that the command
will fail if there are more than 8 PVs in the VG, in which case \fB\-i 7\fP
has to be used to get to the currently possible maximum of
8 devices including parity for RaidLVs):
.sp
.B lvcreate \-\-config allocation/raid_stripe_all_devices=1 \-\-type raid5 \-l 100%FREE \-n my_lv vg00

Creates a 5GiB RAID10 logical volume "vg00/my_lv", with 2 stripes on
2 2-way mirrors. Note that the \fB\-i\fP and \fB\-m\fP arguments behave
differently.
The \fB\-i\fP specifies the number of stripes.
The \fB\-m\fP specifies the number of
.B additional
copies:
.sp
.B lvcreate \-\-type raid10 \-L 5G \-i 2 \-m 1 \-n my_lv vg00

Creates a 100MiB thin pool logical volume
built with 2 stripes of 64KiB and a chunk size of 256KiB, together with
a 1TiB thin provisioned logical volume "vg00/thin_lv":
.sp
.B lvcreate \-i 2 \-I 64 \-c 256 \-L 100M \-T vg00/pool \-V 1T \-\-name thin_lv

Creates a thin snapshot volume "thinsnap" of the thin volume "thinvol" that
will share the same blocks within the thin pool.
Note: the size MUST NOT be specified, otherwise a non-thin snapshot
is created instead:
.sp
.B lvcreate \-s vg00/thinvol \-\-name thinsnap

Creates a thin snapshot volume of the read-only inactive volume "origin",
which then becomes the thin external origin for the thin snapshot volume
in vg00 that will use the existing thin pool "vg00/pool":
.sp
.B lvcreate \-s \-\-thinpool vg00/pool origin

Creates a cache pool LV that can later be used to cache one
logical volume:
.sp
.B lvcreate \-\-type cache-pool \-L 1G \-n my_lv_cachepool vg /dev/fast1

If there is an existing cache pool LV, creates the large slow
device (i.e. the origin LV) and links it to the supplied cache pool LV,
creating a cache LV:
.sp
.B lvcreate \-\-cache \-L 100G \-n my_lv vg/my_lv_cachepool /dev/slow1

If there is an existing logical volume, creates the small and fast
cache pool LV and links it to the supplied existing logical
volume (i.e. the origin LV), creating a cache LV:
.sp
.B lvcreate \-\-type cache \-L 1G \-n my_lv_cachepool vg/my_lv /dev/fast1

.\" Create a 1G cached LV "lvol1" with 10M cache pool "vg00/pool".
.\" .sp
.\" .B lvcreate \-\-cache \-L 1G \-n lv \-\-pooldatasize 10M vg00/pool
.
.SH SEE ALSO
.
.nh
.BR lvm (8),
.BR lvm.conf (5),
.BR lvmcache (7),
.BR lvmthin (7),
.BR lvconvert (8),
.BR lvchange (8),
.BR lvextend (8),
.BR lvreduce (8),
.BR lvremove (8),
.BR lvrename (8),
.BR lvs (8),
.BR lvscan (8),
.BR vgcreate (8),
.BR blkid (8)