# VDO - Compression and deduplication.
Currently device stacking looks like this:

    Physical x [multipath] x [partition] x [mdadm] x [LUKS] x [LVS] x [LUKS] x [FS|Database|...]

Adding VDO:

    Physical x [multipath] x [partition] x [mdadm] x [LUKS] x [LVS] x [LUKS] x VDO x [LVS] x [FS|Database|...]
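
As a hedged illustration of the second stack, creating a VDO volume on top of an LV with the standalone `vdo` manager could look roughly like this (VG name, LV names and sizes are hypothetical):

```sh
# Backing LV for VDO - any of the backing stacks listed below works here.
lvcreate -L 1T -n vdo_backing vg0

# Create the VDO volume on top of the LV; the logical size may exceed the
# physical size when counting on deduplication and compression.
vdo create --name=vdo0 --device=/dev/vg0/vdo_backing --vdoLogicalSize=3T

# The VDO device appears as /dev/mapper/vdo0 and can carry a filesystem,
# a database, or another LVM layer. -K skips the (slow) discard at mkfs time.
mkfs.xfs -K /dev/mapper/vdo0
```
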
## Where VDO fits (and where it does not):
### Backing devices for VDO volumes:
1. Physical x [multipath] x [partition] x [mdadm].
2. LUKS over (1) - full disk encryption.
3. LVs (raids|mirror|stripe|linear) x [cache] over (1).
4. LUKS over (3) - especially when using raids.

Usual limitations apply:

- Never layer LUKS over another LUKS - it makes no sense.
- LUKS is better placed above raid than below it.

Devices which are less suitable as a backing device:

- thin volumes - at the moment it is not possible to take a snapshot of an
  active VDO volume on top of a thin volume.
### Using VDO as a PV:
1. under tdata
   - The best fit - it will deduplicate additional redundancies among all
     snapshots and will reduce the footprint.
   - Risks: resize! dmeventd is not able to handle resizing of the tpool at the moment.
2. under corig
   - This is useful to keep the most frequently used data in cache
     uncompressed or without deduplication, if that happens to be a bottleneck.
   - Cache may fit better under the VDO device, depending on compressibility and
     the amount of duplicates, because:
     - compression reduces the amount of data, effectively increasing the
       size of the cache,
     - deduplication may emphasize hotspots.
   - Performance testing of your particular workload is strongly recommended.
3. under (multiple) linear LVs - e.g. used for VMs.
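
A hedged sketch of case (3): the VDO device from the earlier example is turned into a PV, and multiple linear LVs (e.g. VM images) then share the deduplicated space (all names are hypothetical):

```sh
# Use the VDO device as a PV and build a VG on top of it.
pvcreate /dev/mapper/vdo0
vgcreate vg_vdo /dev/mapper/vdo0

# Linear LVs for individual VMs; duplicate blocks across the images are
# deduplicated by the VDO layer underneath.
lvcreate -L 20G -n vm1_disk vg_vdo
lvcreate -L 20G -n vm2_disk vg_vdo
```
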
### And where VDO does not fit:
- *never* use VDO under LUKS volumes
  - these are effectively random data and neither compress nor deduplicate well,
- *never* use VDO under cmeta and tmeta LVs
  - these are effectively random data and neither compress nor deduplicate well,
- under raids
  - raid{4,5,6} scrambles data, so it does not deduplicate well,
  - raid{1,4,5,6,10} also increases the amount of data, so more work (which is
    duplicated in the case of raid{1,10}) has to be done to find fewer duplicates.
### And where it could be useful:
- under a snapshot CoW device - when there are multiple of these, VDO could
  deduplicate the data among them.
## Development
### Things to decide
- under integrity devices
  - VDO should work well for data blocks,
  - but hashes are mostly unique and not compressible; if it were possible, it
    would make sense to have separate imeta and idata volumes for integrity
    devices.
### Future Integration of VDO into LVM:
One issue is using both LUKS and RAID under VDO. We have two options:
- use mdadm x LUKS x VDO+LV
- use LV RAID x LUKS x VDO+LV

In both cases dmeventd is currently unable to resize the volume.

Another issue is the duality of VDO - it can be used as a top-level LV (with a
filesystem on top), but it can also be used as a "pool" for multiple devices.

This will be solved in a similar way to how thin pools allow multiple volumes.
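
As a sketch of what such a thin-pool-like interface could look like (modeled on the lvmvdo-style syntax; the exact options are an assumption here, not part of this design note):

```sh
# One command creates a VDO pool LV with 1T of physical space and a 3T
# virtual LV inside it, exposed to the filesystem (overprovisioned).
lvcreate --type vdo -L 1T -V 3T -n vdo_lv vg0/vdopool0

mkfs.xfs -K /dev/vg0/vdo_lv
```
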
Also, VDO has two sizes - its physical size and its virtual size - and when
overprovisioning, just like with a tpool, we face the same problem: VDO can get
full without the FS above noticing it. dmeventd monitoring will be needed.
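
Until that monitoring exists, physical fullness has to be checked by hand; with the standalone tooling something like the following shows it (device name hypothetical):

```sh
# Physical (post-dedup/compression) usage of the VDO volume - the FS on top
# only ever sees the logical size, so it will not notice the pool filling up.
vdostats --human-readable /dev/mapper/vdo0
```
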
Another possible RFE is to split data and metadata - keep data on HDD and metadata on SSD.
## Issues / Testing
- fstrim/discard pass down - does it work with VDO?
- VDO can run in synchronous vs. asynchronous mode (see the sketch after this list):
  - synchronous for devices where a write is safe once it has been confirmed -
    though some devices lie about this,
  - asynchronous for devices requiring a flush.
- Multiple devices under VDO - need to find and expose common properties, or
  not allow grouping them together. (This is the same for all volumes with
  multiple physical devices below.)
- pvmove changing characteristics of underlying device.
- autoactivation during boot?
- Q: can we use VDO for RootFS? Dracut!
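
Regarding the write-policy item above, a hedged sketch of inspecting and switching the mode on a standalone VDO volume (device name hypothetical; `auto` lets VDO choose based on whether the device reports a volatile write cache):

```sh
# Show the current configuration, including the write policy.
vdo status --name=vdo0

# sync: a write is considered safe once the device acknowledges it.
# async: the device has a volatile cache and needs flushes to make writes durable.
vdo changeWritePolicy --name=vdo0 --writePolicy=sync
```
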