Red Hat Bugzilla – Bug 903411
lvchange activation issues when dealing w/ thin pool volumes and discard options
Last modified: 2013-11-21 18:19:49 EST
Description of problem:
SCENARIO - [convert_pool_discard_opts]
Creating [ignore] discards thinpool volume and converting it to [nopassdown] discards
lvcreate --thinpool POOL --discards ignore -L 2G snapper_thinp
lvcreate --virtualsize 500M --thinpool snapper_thinp/POOL -n origin
lvcreate -s /dev/snapper_thinp/origin -n snap
Deactivating LV before conversion
lvchange --discards nopassdown snapper_thinp/POOL
device-mapper: reload ioctl on failed: Invalid argument
LV activation failed
Jan 23 17:00:15 qalvm-01 kernel: device-mapper: table: 253:4: thin-pool: Discard support cannot be disabled once enabled
Jan 23 17:00:15 qalvm-01 kernel: device-mapper: ioctl: error adding target to table
vgchange however works:
[root@qalvm-01 ~]# vgchange -an snapper_thinp
0 logical volume(s) in volume group "snapper_thinp" now active
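As a side note, the pool's current discards mode and activation state can be inspected with the lvs report. A minimal sketch, assuming the `discards` report field available in this lvm2 version and using the VG name from the scenario:

```shell
# Sketch: report discards mode and activation state for a VG's LVs.
# The "discards" report field and the VG name "snapper_thinp" are taken
# from this report's lvm2 version and scenario.
show_pool_discards() {
    # -a also lists hidden sub-LVs (pool data/metadata devices)
    lvs -a -o lv_name,attr,discards "$1"
}
# Example (on the affected host): show_pool_discards snapper_thinp
```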
Version-Release number of selected component (if applicable):
lvm2-2.02.98-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6 BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6 BUILT: Wed Jan 23 10:06:55 CST 2013
(In reply to comment #0)
> Description of problem:
> SCENARIO - [convert_pool_discard_opts]
> Creating [ignore] discards thinpool volume and converting it to [nopassdown]
> lvcreate --thinpool POOL --discards ignore -L 2G snapper_thinp
> lvcreate --virtualsize 500M --thinpool snapper_thinp/POOL -n origin
> lvcreate -s /dev/snapper_thinp/origin -n snap
> Deactivating LV before conversion
> lvchange --discards nopassdown snapper_thinp/POOL
> device-mapper: reload ioctl on failed: Invalid argument
> LV activation failed
> Jan 23 17:00:15 qalvm-01 kernel: device-mapper: table: 253:4: thin-pool:
> Discard support cannot be disabled once enabled
> Jan 23 17:00:15 qalvm-01 kernel: device-mapper: ioctl: error adding target
> to table
The kernel error indicates that the pool was activated with discards enabled and that the discards mode change was then attempted via a table reload without the pool first being deactivated.
Such a transition is not allowed by the kernel.
So if you're attempting to deactivate the pool first, then change the discards mode, then reactivate the pool, I'd start by verifying (via 'dmsetup table') that the pool is no longer active in the kernel at the point where you expect it to be inactive.
I cannot be more precise with my guidance because I don't understand the implicit deactivations associated with the lvm commands you've listed (Zdenek should be able to). But none of the commands clearly and explicitly deactivates the pool before the discards mode change is attempted.
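One way to do that check, sketched as a small shell helper. The VG/pool names, and the device-mapper name mangling (dashes within VG/LV names are doubled in dm device names), are assumptions here for illustration:

```shell
# Sketch: check whether the kernel still has any dm table for the pool.
# Note the pool's hidden "-tpool" device can stay loaded while the pool LV
# itself looks inactive to lvm. Names are illustrative, from this report.
check_pool_inactive() {
    local vg=$1 pool=$2
    # matches both "vg-pool" and its hidden "vg-pool-tpool" device
    if dmsetup table 2>/dev/null | grep -q "^${vg}-${pool}"; then
        echo "pool ${vg}/${pool} still has a device-mapper table loaded"
        return 1
    fi
    echo "pool ${vg}/${pool} is inactive in the kernel"
}
# Example (as root): check_pool_inactive snapper_thinp POOL
```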
Yep - this is a bug that needs some thinking - cluster support makes it harder.
For now there is an 'incorrect' check for whether the vg/pool is active - but the 'real' pool is in the table as the -tpool device. So when just the pool is deactivated, the LV/lock representing the pool is deactivated - yet any other thin volumes using the pool keep the 'real' -tpool device active. lvchange therefore allows the discards change, since it thinks the pool is inactive - but the next activation of the committed metadata will hit the unacceptable ioctl, whatever command triggers it.
The current workaround is to manually deactivate all thin volumes related to the thin pool being changed before running the lvchange command.
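Sketched as a shell sequence (requires root; the VG/pool names and the nopassdown target come from this report's scenario), the workaround might look like:

```shell
# Workaround sketch: tear down the whole VG so the hidden -tpool device is
# removed along with every thin volume, change discards, then reactivate.
# Names are illustrative, taken from the reported scenario.
change_pool_discards() {
    local vg=$1 pool=$2 mode=$3
    vgchange -an "$vg" &&                       # deactivates pool AND thin LVs
    lvchange --discards "$mode" "$vg/$pool" &&  # allowed now: nothing active
    vgchange -ay "$vg"                          # reactivate with the new mode
}
# Example (as root): change_pool_discards snapper_thinp POOL nopassdown
```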
Fixed upstream with these patches:
All thin volumes of a thin pool are now checked before any change of the discards mode is allowed.
Ran the test multiple times with different configurations, without a hitch (raid1/raid10 as origins as well):
snapper_thinp -o virt-011 -e convert_pool_discard_opts -i 3
(and -t raid1 and raid10)
Marking this VERIFIED with:
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.