Bug 773482
| Field | Value |
|---|---|
| Summary | Support (non-clustered) thinly-provisioned snapshots in LVM |
| Product | Red Hat Enterprise Linux 6 |
| Reporter | Tom Coughlan <coughlan> |
| Component | lvm2 |
| Assignee | Zdenek Kabelac <zkabelac> |
| Status | CLOSED ERRATA |
| QA Contact | Cluster QE <mspqa-list> |
| Severity | high |
| Priority | high |
| Version | 6.1 |
| CC | agk, borgan, cmarthal, coughlan, djuran, dwysocha, heinzm, jbrassow, mbroz, msnitzer, nperic, prajnoha, prockai, snagar, syeghiay, thornber, xiaoli, zkabelac |
| Target Milestone | beta |
| Keywords | FutureFeature, TechPreview |
| Target Release | --- |
| Hardware | All |
| OS | Linux |
| Fixed In Version | lvm2-2.02.95-1.el6 |
| Doc Type | Technology Preview |
Doc Text:

Title: LVM support for (non-clustered) thinly-provisioned snapshots

A new implementation of LVM copy-on-write (COW) snapshots is available in Red Hat Enterprise Linux 6.3 as a Technology Preview. The main advantage of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This implementation also supports an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots, and so on).

This feature is for use on a single system. It is not available for multi-system access in cluster environments.

For more information, refer to the documentation of the -s/--snapshot option in the lvcreate man page.
| Field | Value |
|---|---|
| Story Points | --- |
| Clone Of | 636037 |
| Last Closed | 2012-06-20 15:00:49 UTC |
| Bug Depends On | 723018 |
| Bug Blocks | 666674, 533057, 636037, 637693, 655920, 666675, 697866, 749672, 756082 |
Comment 1
Tom Coughlan
2012-01-11 22:59:49 UTC
The thin provisioning device-mapper target is being included in the kernel via bug 723018. This bugzilla covers extensions to LVM2 that use that target to create snapshots of thinly-provisioned logical volumes. This is an alternative to the existing snapshot implementation, optimised for different types of use. The new snapshots are designed to be efficient when many snapshots are created of the same origin.

To create a thin snapshot, use this command:

    lvcreate -s vg/thinvolume

Warning: there is no 'size' parameter. If a size parameter is given, the regular old-style snapshot is created instead; i.e.

    lvcreate -s vg/thinvolume -L10M

creates a 10 MB old-style snapshot which does not use the pool for storing blocks.

New snapshots behave much like any other thin LV (they can be independently activated, resized, renamed, removed, snapshotted, ...). Unlike the old snapshots, the new thin snapshots bring many new possibilities:

- A snapshot can itself be used as an LV origin for another snapshot.
- A snapshot does not need to be activated together with its origin, so a user may keep only the origin active while having 100 inactive snapshot volumes of that origin.
- Instead of merging a snapshot, a user may simply delete the origin and start using the snapshot as the origin.
- There should be no major slowdown as the number of snapshots of the origin increases.
- Considerably better memory usage of the device driver with larger snapshot sizes.
- Snapshots are usable with exclusive activation from version 2.02.90.

It is said in comment #3:

> There should be no major slowdown when number of snapshots of the origin is increasing.

What is considered as no major slowdown?
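The thin-versus-old-style creation rules described above can be sketched as a shell session. This is a hedged illustration only: the volume group name `vg`, the LV names, and all sizes are assumptions, and the commands require root privileges and an existing volume group.

```shell
# Assumed setup: a volume group named "vg" already exists (requires root).
# Create a thin pool, then a thin volume inside it:
lvcreate -L 1G -T vg/pool                  # thin pool "pool" with 1 GiB of data space
lvcreate -V 500M -T vg/pool -n thinvolume  # 500 MiB thin volume backed by the pool

# Thin snapshot: NO size argument -- blocks are shared inside the pool.
lvcreate -s vg/thinvolume -n thinsnap

# Old-style COW snapshot: giving a size (-L) switches to the old
# implementation, which stores copied blocks in its own 10 MiB area,
# not in the thin pool.
lvcreate -s vg/thinvolume -L 10M -n oldsnap

# A thin snapshot can itself serve as the origin of a further snapshot:
lvcreate -s vg/thinsnap -n thinsnap2
```

Because these commands modify block devices, they are shown as a sketch rather than something to paste blindly; on a test machine, `lvs -a vg` afterwards would list the pool, the thin volume, and the snapshots.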
Here are some time measurements of LVM commands while working with snapshots (a lot of snapshots in the VG).

Creating 50 snapshots of the previous snapshot (a chain 50 snapshots deep):

    for i in {1..50}; do lvcreate -s vgforthin/chost$((i++)) -n zhost$i; done

    real 3m23.489s
    user 0m10.456s
    sys  0m3.661s

Any LVM display command needs approximately 4 seconds to return output:

    (06:57:46) [root@node01:~]$ time vgs
      VG        #PV #LV #SN Attr   VSize  VFree
      VolGroup    1   2   0 wz--n-  9.51g      0
      vgforthin   4 239   0 wz--n- 19.05g 17.04g

    real 0m3.802s
    user 0m0.159s
    sys  0m0.072s

    (09:44:05) [root@node01:~]$ time vgextend vgforthin /dev/sdd1
      Volume group "vgforthin" successfully extended

    real 0m7.598s
    user 0m0.235s
    sys  0m0.108s

Deactivating 50 snapshots of snapshots:

    (09:49:34) [root@node01:~]$ time for i in {1..51}; do lvchange -an /dev/vgforthin/zhost$i; done
      One or more specified logical volume(s) not found.

    real 3m10.098s
    user 0m9.162s
    sys  0m2.938s

I know that there are a lot of LVs in this VG: one is the thin pool, three are thin LVs, and the rest are snapshots (235 snapshots in total).

Removal of 1 thin LV and 130 snapshots:

    real 7m0.882s
    user 0m24.791s
    sys  0m6.978s

The commands show significant improvement (down to a third of the time needed previously):

    (10:10:07) [root@node01:~]$ time vgs
      VG        #PV #LV #SN Attr   VSize  VFree
      VolGroup    1   2   0 wz--n-  9.51g      0
      vgforthin   7 108   0 wz--n- 33.33g 31.33g

    real 0m1.043s
    user 0m0.138s
    sys  0m0.022s

    (10:10:41) [root@node01:~]$ time vgextend vgforthin /dev/sdd2
      Volume group "vgforthin" successfully extended

    real 0m2.113s
    user 0m0.148s
    sys  0m0.050s

    (10:11:11) [root@node01:~]$ time for i in {1..51}; do lvchange -aey /dev/vgforthin/zhost$i; done
      One or more specified logical volume(s) not found.

    real 1m25.664s
    user 0m7.010s
    sys  0m1.995s

Is this the performance impact which was expected? Comment 3 was about THIN snapshots.
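A cleaner way to express the 50-deep snapshot chain being timed above is a loop that explicitly carries the previous snapshot forward as the next origin. This is an illustrative sketch: the VG name `vgforthin` comes from the measurements above, but the starting thin volume name `chain0` is an assumption, and the loop requires root and an existing thin volume.

```shell
# Build a 50-deep chain of thin snapshots, each a snapshot of the previous.
# Assumes VG "vgforthin" with an existing thin volume "chain0" (needs root).
prev=chain0
for i in $(seq 1 50); do
    lvcreate -s "vgforthin/$prev" -n "chain$i"   # snapshot of the previous link
    prev="chain$i"
done
```

Wrapping the loop in `time` reproduces the kind of wall-clock measurement quoted above.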
Old NON-THIN snapshots were never meant to be used in such large numbers: their simple implementation copies each block into every snapshot, i.e. if you have 50 snapshots of the same origin, you get 50 extra write operations per write. This is not really usable unless you practically never write to the origin. I'd say even 10 snapshots of the same origin is too much...

Anyway, making 50 thin snapshots on my ~4 year old laptop takes about 10 seconds with the command from comment 16. Further optimization is possible by grouping multiple operations into one command, which might eventually hit upstream in the future. Old snapshots are not going to be accelerated (IMHO it could be seen as a feature that it takes so long, so the user will not try to make too many of them).

(In reply to comment #17) Comment #16 was only about thinp snapshots; no old snapshots were being used. I used snapshot creation from a thin LV origin without specifying the size of the snapshot, which should create the NEW thinp type of snapshot. No old snapshots were created or used in the test.

In comment #3 I wrote: "The new snapshots are designed to be efficient when many snapshots are created of the same origin." What that means is that the *data I/O performance* when *using* those snapshots (i.e. writing to new parts of them) is efficient compared to the old implementation. Three minutes to create 50 snapshots is fine for now; it's a marginal usage scenario. (Most people will create snapshots one at a time, with long gaps between.) We have several other bugzillas open to improve the performance further.

Thank you for the clarification; that was actually my question, whether this speed is expected.
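The write amplification described above for old-style snapshots can be made concrete with a little shell arithmetic. The numbers are illustrative only, not measurements from this bug:

```shell
# Old COW snapshots: every block written to the origin must first be copied
# into each snapshot, so copy work multiplies with the snapshot count.
snapshots=50
origin_writes=100                              # blocks newly written to the origin
extra_copies=$((snapshots * origin_writes))
echo "$extra_copies extra copy operations"     # prints: 5000 extra copy operations

# Thin snapshots share blocks in the pool instead: a write to the origin
# allocates one new block, no matter how many snapshots exist.
thin_allocs=$origin_writes
echo "$thin_allocs block allocations"          # prints: 100 block allocations
```

This is the core design difference: old snapshots scale copy cost with the number of snapshots, thin snapshots keep it constant per written block.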
As far as the snapshots themselves go, the snapshot functionality works as advertised:

- creating multiple snapshots of an origin
- creating snapshots of snapshots
- having a snapshot active while the origin is inactive or deleted
- having a cascading series of snapshots of snapshots
- resizing snapshots with lvextend/lvreduce

It still looks strange that something which takes ~10 seconds on my machine takes ~230 seconds on RHEL 6.3. Can I ask for a trace of the 1st and 50th snapshot creation: `strace -tttt lvcreate -s`

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html