Description of problem:
Activation and deactivation logic for the 'local' flag is not working properly.

Current state:
For some volume types (snapshots/origins, thins, raids), local activation is implicitly converted into exclusive activation. This is a bug: the user requested local activation, yet the device may end up exclusively activated on a different node selected via lvm.conf tags.

Desired state:
Local activation should be implicitly converted (for the selected types) to local exclusive activation, which may fail to activate exclusively if e.g. the tags setting prevents exclusive activation on the local node.

--

We hit a similar problem on deactivation as well, where we even influence a non-clustered VG. Local deactivation is refused in a non-clustered VG, and in a clustered VG it is converted to plain deactivation.

Desired state:
In a non-clustered VG, deactivation always needs to work (-aln == -an).
In a clustered VG, we may deactivate an LV only if it is activated locally, so an exclusively activated snapshot on a different node must stay running for 'lvchange -aln' and the command needs to return an error.

Version-Release number of selected component (if applicable):
<=2.02.102

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
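To make the desired semantics concrete, here is a small sketch (not taken from this report; the names 'vg', 'cvg', 'lvol0' and 'snap' are hypothetical, assuming 'vg' is a non-clustered VG and 'cvg' is a clustered VG with a snapshot exclusively active on another node):

  # Non-clustered VG: local deactivation should simply behave like plain deactivation.
  lvchange -aln vg/lvol0      # expected to work the same as: lvchange -an vg/lvol0

  # Clustered VG: local activation of a snapshot should become *local exclusive*
  # activation, and fail if lvm.conf tags forbid exclusive activation on this node.
  lvchange -aly cvg/snap

  # Clustered VG: local deactivation must not touch an LV that is exclusively
  # active on a different node; the command should return an error instead.
  lvchange -aln cvg/snap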
This has been improved with upstream patch: https://www.redhat.com/archives/lvm-devel/2013-November/msg00002.html
Tested activation and deactivation of raid1, thin pools, ordinary LVs, and snapshots of ordinary LVs and thin LVs.

There was a small issue with LVM behaviour when an already inactive LV was told to deactivate; I opened a separate bug for that (Bug #1124766).

Additional issues with commands handling activation/deactivation are:

-aen == -an, which may not be what a user wants (it deactivates the LV on ALL the nodes, regardless of the fact that there may be no exclusively activated LVs).

-aly fails silently and returns 0 (even though it doesn't do anything) in case the volume_list check does not pass:

[root@virt-064 ~]# grep " volume_list" /etc/lvm/lvm.conf
    volume_list = [ "vg1", "@tag1", "cluster/linear" ]
[root@virt-064 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  linear  cluster    -wi-------   1.00g
  lvol1   cluster    Vwi---tz-k  10.00g pool thin_lv
  pool    cluster    twi---tz--   2.00g              0.00   1.17
  raid1   cluster    rwi---r---   2.00g
  thin_lv cluster    Vwi-a-tz--  10.00g pool         0.00
  lv_root vg_virt064 -wi-ao----   6.71g
  lv_swap vg_virt064 -wi-ao---- 816.00m
[root@virt-064 ~]# lvchange -aly cluster/raid1
[root@virt-064 ~]# echo $?
0
[root@virt-064 ~]# lvs
  LV      VG         Attr       LSize   Pool Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  linear  cluster    -wi-------   1.00g
  lvol1   cluster    Vwi---tz-k  10.00g pool thin_lv
  pool    cluster    twi---tz--   2.00g              0.00   1.17
  raid1   cluster    rwi---r---   2.00g
  thin_lv cluster    Vwi-a-tz--  10.00g pool         0.00
  lv_root vg_virt064 -wi-ao----   6.71g
  lv_swap vg_virt064 -wi-ao---- 816.00m
[root@virt-064 ~]#

It should try and at least warn the user that it didn't actually DO anything.

Should this all be split into more bug reports, or can it be handled inside this one, which is related to handling the availability changes?
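Until the command warns about this, one way to catch the silent no-op is to check the reported activation state rather than trusting the exit code. A small sketch, reusing the LV names from the session above:

  lvchange -aly cluster/raid1
  lvs -o lv_name,vg_name,lv_attr cluster/raid1
  # the 5th character of lv_attr stays '-' (inactive) instead of 'a' (active)
  # when the volume_list check silently blocked the activation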
Changing the needinfo to another address.
Closing this bug as VERIFIED since the intended behaviour can be observed. However, I will open a new bug for the odd behaviour of the CLI arguments.

Marking it verified with:

lvm2-2.02.109-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
lvm2-libs-2.02.109-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
lvm2-cluster-2.02.109-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.88-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-libs-1.02.88-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-event-1.02.88-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-event-libs-1.02.88-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr 4 15:43:06 CEST 2014
cmirror-2.02.109-1.el6    BUILT: Tue Aug 5 17:36:23 CEST 2014
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html