| Summary: | lvm automatically unmounts docker thin devices when thin pool is full | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Vivek Goyal <vgoyal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | dmeventd | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | Milan Navratil <mnavrati> |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, mnavrati, msnitzer, prajnoha, prockai, rbednar, thornber, zkabelac |
| Version: | 7.3 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.159-1.el7 | Doc Type: | Release Note |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-04 04:20:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |

**Doc Text:**

> *LVM no longer applies LV policies on external volumes*
>
> Previously, LVM disruptively applied its own policy for LVM thin logical volumes (LVs) to external volumes as well, which could result in unexpected behavior. With this update, external users of a thin pool can manage their external thin volumes themselves, and LVM no longer applies LV policies to such volumes.
Description

Vivek Goyal 2016-04-21 13:21:56 UTC

Should be handled better with upstream commit:
https://www.redhat.com/archives/lvm-devel/2016-June/msg00186.html

2f638e07e814617152d617a2ca7c8acdae41968a + adc1fe4b3f84b341383d9737162397426e1a1295

Adding QA ack for 7.3. Testing should consist of:

1. Set thin_pool_autoextend_threshold to less than 95%.
2. Create a thin pool and a thin LV in it.
3. Mount the thin LV.
4. Start filling the thin LV beyond the threshold.
5. Check that it stays mounted after autoextension.

Addition to step 2 from comment 6: the thin LV has to be created without using LVM (directly by dmsetup) to simulate the way docker uses it. (Standard LVM volumes should still get unmounted when the pool is over 95%.)

Verified. Non-LVM thin devices now stay mounted even though the thin pool is full.
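Step 1 of the test plan refers to the auto-extension settings in the `activation` section of `/etc/lvm/lvm.conf`. A minimal sketch of the relevant fragment, with illustrative values not taken from this report:

```
# /etc/lvm/lvm.conf -- activation section (illustrative values)
activation {
    # Auto-extend a thin pool once it is this % full.
    # Must be below 95 for the test above; 100 disables auto-extension.
    thin_pool_autoextend_threshold = 70

    # Grow the pool by this % of its current size on each extension.
    thin_pool_autoextend_percent = 20
}
```

With these values, dmeventd extends the pool at 70% usage, so a mounted thin LV should cross the threshold without ever reaching a full pool.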
```
# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ...
  POOL            vg  twi-aotz-- 256.00m             99.98  3.81
  [POOL_tdata]    vg  Twi-ao---- 256.00m
  [POOL_tmeta]    vg  ewi-ao----   4.00m
  lvm_thin        vg  Vwi-aotz-- 232.00m POOL        58.27
  [lvol0_pmspare] vg  ewi-------   4.00m

# dd if=/dev/zero of=file20 bs=100 count=10
10+0 records in
10+0 records out
1000 bytes (1.0 kB) copied, 0.000599041 s, 1.7 MB/s
```
```
# lvs -a
  LV              VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root            rhel_virt-283 -wi-ao----   6.92g
  swap            rhel_virt-283 -wi-ao---- 848.00m
  POOL            vg            twi-aotz-- 256.00m             100.00 3.81
  [POOL_tdata]    vg            Twi-ao---- 256.00m
  [POOL_tmeta]    vg            ewi-ao----   4.00m
  lvm_thin        vg            Vwi-aotz-- 232.00m POOL        58.27
  [lvol0_pmspare] vg            ewi-------   4.00m
```
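The jump from 99.98% to 100.00% after a 1000-byte `dd` is consistent with chunk-granularity accounting: the pool's thin-pool table uses 128-sector (64 KiB) chunks, so the 256 MiB pool holds 4096 chunks, and any write touching an unprovisioned chunk allocates a whole one. A small sketch of that arithmetic (the exact chunk counts and two-decimal rounding are assumptions for illustration, not values from the report):

```python
# Thin-pool Data% at chunk granularity: the pool allocates whole
# 64 KiB chunks, regardless of how few bytes the write contains.

POOL_SECTORS = 524288   # 256 MiB data device, in 512-byte sectors
CHUNK_SECTORS = 128     # 64 KiB chunk size, from the thin-pool table

total_chunks = POOL_SECTORS // CHUNK_SECTORS  # 4096 chunks in the pool

def data_percent(used_chunks: int) -> float:
    """Data% as a two-decimal figure, like the lvs column."""
    return round(100.0 * used_chunks / total_chunks, 2)

print(total_chunks)                    # 4096
print(data_percent(total_chunks - 1))  # 99.98 -- one free chunk left
print(data_percent(total_chunks))      # 100.0 -- the tiny dd write
                                       # claimed the last free chunk
```

So even a 1000-byte write is enough to take the pool from one free chunk to completely full.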
```
# mount | grep dm_thin
/dev/mapper/dm_thin on /mnt/dm_thin type ext4 (rw,relatime,seclabel,stripe=64,data=ordered)
```
```
# dmsetup table
vg-POOL: 0 204800 linear 253:4 0
vg-POOL-tpool: 0 524288 thin-pool 253:2 253:3 128 0 0
vg-POOL_tdata: 0 524288 linear 8:0 10240
vg-POOL_tmeta: 0 8192 linear 8:64 2048
dm_thin: 0 262144 thin 253:4 0
...
vg-lvm_thin: 0 475136 thin 253:4 1
```
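The `dm_thin` device above is the non-LVM thin volume from the test addition: it maps into vg's pool but has no `lvs` entry. A dry-run sketch of how such a device is created directly with `dmsetup`, the way docker does (device name, size, and thin id are read off the table above; the commands are only echoed, since actually running them needs root and the live pool):

```shell
#!/bin/sh
# Dry run: print the dmsetup commands that create a thin device
# directly on an existing thin pool, bypassing LVM metadata.

POOL=/dev/mapper/vg-POOL-tpool   # the pool's dm node (vg-POOL-tpool above)
DEV_ID=0                         # thin device id inside the pool
SECTORS=$((128 * 2048))          # 128 MiB in 512-byte sectors -> 262144

# Allocate a new thin device id in the pool's metadata.
echo dmsetup message "$POOL" 0 "create_thin $DEV_ID"

# Map it as a visible device; this yields the dm_thin table line above.
echo dmsetup create dm_thin --table "0 $SECTORS thin $POOL $DEV_ID"
```

Because the device was never registered with LVM, dmeventd's 95% unmount policy must not touch it, which is exactly what this fix guarantees.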
```
3.10.0-505.el7.x86_64

lvm2-2.02.165-2.el7                        BUILT: Wed Sep 14 16:01:43 CEST 2016
lvm2-libs-2.02.165-2.el7                   BUILT: Wed Sep 14 16:01:43 CEST 2016
lvm2-cluster-2.02.165-2.el7                BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-1.02.134-2.el7               BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-libs-1.02.134-2.el7          BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-event-1.02.134-2.el7         BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-event-libs-1.02.134-2.el7    BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 12:29:13 CEST 2016
cmirror-2.02.165-2.el7                     BUILT: Wed Sep 14 16:01:43 CEST 2016
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html