Red Hat Bugzilla – Bug 1329235
lvm automatically unmounts docker thin devices when thin pool is full
Last modified: 2016-11-04 00:20:26 EDT
Description of problem: lvm automatically unmounts docker thin devices when the thin pool is full, so these mount points disappear from the host. One of these mount points was created by containerd/runc and I suspect that will create issues. I see the following messages in the journal:

Apr 21 12:40:40 vm4-f23.localdomain lvm[1539]: Unmounting thin volume docker--vg-docker--pool from /var/lib/docker/devicemapper/mnt/c96c27a61c5b3bf7e5fd436137f14a6ae6b21b4f924bf7927db80a9450b7005f.
Apr 21 12:40:40 vm4-f23.localdomain lvm[1539]: Unmounting thin volume docker--vg-docker--pool from /run/docker/libcontainerd/ac6543a92d43840a0c5e280123449419b3514cc60a7aed43ae5b7663a2e87152/rootfs.

The mount point under /run/docker/libcontainerd/.. is a bind mount of the original thin device mount. So there are two issues here:

- Provide a facility so that docker can opt in to or out of the unmount behavior when the thin pool is full. As of now I am not sure what will break when automatic unmounting is done.
- The log message is confusing. It says "thin volume docker--vg-docker--pool", but this is the thin pool LV, not a thin volume.
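For context: the dm name docker--vg-docker--pool decodes to VG docker-vg, LV docker-pool (double dashes in dm names encode literal dashes). The unmounting is done by lvm's dmeventd monitoring once the pool crosses its fill threshold; the fill level can be watched with something like the following (an illustration only, using the decoded names from the journal lines above):

# lvs -o lv_name,data_percent,metadata_percent docker-vg/docker-pool   # data/metadata usage of the pool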
Should be handled better with the upstream commits: https://www.redhat.com/archives/lvm-devel/2016-June/msg00186.html (2f638e07e814617152d617a2ca7c8acdae41968a and adc1fe4b3f84b341383d9737162397426e1a1295)
Adding QA ack for 7.3. Testing should consist of (see the command sketch below):
1) set thin_pool_autoextend_threshold to less than 95%
2) create thin pool and thin lv in it
3) mount thin lv
4) start filling thin lv beyond threshold
5) check that it stays mounted after autoextension
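A minimal sketch of these steps as commands (the PV /dev/sdb, the names vg/POOL/thinlv, and the sizes are illustrative placeholders, not taken from this bug): first set thin_pool_autoextend_threshold below 95 (e.g. 70) and a non-zero thin_pool_autoextend_percent in the activation section of lvm.conf, then:

# vgcreate vg /dev/sdb
# lvcreate -L 256m -T vg/POOL                          # create the thin pool
# lvcreate -V 232m -T vg/POOL -n thinlv                # create a thin lv in it
# mkfs.ext4 /dev/vg/thinlv
# mount /dev/vg/thinlv /mnt/thinlv
# dd if=/dev/zero of=/mnt/thinlv/fill bs=1M count=200  # fill past the threshold
# lvs vg                                               # pool should have autoextended
# mount | grep thinlv                                  # and the lv should still be mounted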
Addition to Step 2 from Comment 6: the thin lv has to be created without using lvm (directly by dmsetup) to simulate the way docker uses it. (Standard lvm volumes should still get unmounted when the pool is over 95%.)
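Creating the thin device behind lvm's back could look like this (a sketch; the pool device name matches the verification output below, while the device id 0 and the 262144-sector size are illustrative):

# dmsetup message /dev/mapper/vg-POOL-tpool 0 "create_thin 0"                  # reserve thin device id 0 in the pool
# dmsetup create dm_thin --table "0 262144 thin /dev/mapper/vg-POOL-tpool 0"   # map it without any lvm metadata
# mkfs.ext4 /dev/mapper/dm_thin
# mount /dev/mapper/dm_thin /mnt/dm_thin

Because dm_thin is unknown to lvm, dmeventd should now leave its mount alone when the pool goes over the threshold.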
Verified. Non-LVM thin devices now stay mounted even though the thin pool is full.

# lvs -a
  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ...
  POOL            vg   twi-aotz-- 256.00m             99.98  3.81
  [POOL_tdata]    vg   Twi-ao---- 256.00m
  [POOL_tmeta]    vg   ewi-ao----   4.00m
  lvm_thin        vg   Vwi-aotz-- 232.00m POOL        58.27
  [lvol0_pmspare] vg   ewi-------   4.00m

# dd if=/dev/zero of=file20 bs=100 count=10
10+0 records in
10+0 records out
1000 bytes (1.0 kB) copied, 0.000599041 s, 1.7 MB/s

# lvs -a
  LV              VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root            rhel_virt-283 -wi-ao----   6.92g
  swap            rhel_virt-283 -wi-ao---- 848.00m
  POOL            vg            twi-aotz-- 256.00m            100.00  3.81
  [POOL_tdata]    vg            Twi-ao---- 256.00m
  [POOL_tmeta]    vg            ewi-ao----   4.00m
  lvm_thin        vg            Vwi-aotz-- 232.00m POOL        58.27
  [lvol0_pmspare] vg            ewi-------   4.00m

# mount | grep dm_thin
/dev/mapper/dm_thin on /mnt/dm_thin type ext4 (rw,relatime,seclabel,stripe=64,data=ordered)

# dmsetup table
vg-POOL: 0 204800 linear 253:4 0
vg-POOL-tpool: 0 524288 thin-pool 253:2 253:3 128 0 0
vg-POOL_tdata: 0 524288 linear 8:0 10240
vg-POOL_tmeta: 0 8192 linear 8:64 2048
dm_thin: 0 262144 thin 253:4 0
...
vg-lvm_thin: 0 475136 thin 253:4 1

3.10.0-505.el7.x86_64

lvm2-2.02.165-2.el7                         BUILT: Wed Sep 14 16:01:43 CEST 2016
lvm2-libs-2.02.165-2.el7                    BUILT: Wed Sep 14 16:01:43 CEST 2016
lvm2-cluster-2.02.165-2.el7                 BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-1.02.134-2.el7                BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-libs-1.02.134-2.el7           BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-event-1.02.134-2.el7          BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-event-libs-1.02.134-2.el7     BUILT: Wed Sep 14 16:01:43 CEST 2016
device-mapper-persistent-data-0.6.3-1.el7   BUILT: Fri Jul 22 12:29:13 CEST 2016
cmirror-2.02.165-2.el7                      BUILT: Wed Sep 14 16:01:43 CEST 2016
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1445.html