Description of problem:
When an LV is already activated exclusively on one node and another node then tries to activate it exclusively, the LV does not become active on the second node, but the vgchange command still returns success.

Version-Release number of selected component (if applicable):
2.6.32-114.0.1.el6.x86_64

lvm2-2.02.83-2.el6                      BUILT: Tue Feb  8 10:10:57 CST 2011
lvm2-libs-2.02.83-2.el6                 BUILT: Tue Feb  8 10:10:57 CST 2011
lvm2-cluster-2.02.83-2.el6              BUILT: Tue Feb  8 10:10:57 CST 2011
udev-147-2.33.el6                       BUILT: Wed Feb  9 09:56:24 CST 2011
device-mapper-1.02.62-2.el6             BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-libs-1.02.62-2.el6        BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-event-1.02.62-2.el6       BUILT: Tue Feb  8 10:10:57 CST 2011
device-mapper-event-libs-1.02.62-2.el6  BUILT: Tue Feb  8 10:10:57 CST 2011
cmirror-2.02.83-2.el6                   BUILT: Tue Feb  8 10:10:57 CST 2011

How reproducible:
Every time

Steps to Reproduce:
1. On node A: vgchange -aye $VG
2. On node B: vgchange -aye $VG   <- this should return non-zero

Actual results:
[root@dash-01 audit]# lvs
  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Conv
  linear_9_55810 linear_9_5581 -wima- 685.68g
  lv_home        vg_dash01     -wi-ao  31.87g
  lv_root        vg_dash01     -wi-ao  35.29g
  lv_swap        vg_dash01     -wi-ao   6.86g

[root@dash-02 audit]# lvs
  LV             VG            Attr   LSize   Origin Snap%  Move Log Copy%  Conv
  linear_9_55810 linear_9_5581 -wim-- 685.68g
  lv_home        vg_dash02     -wi-ao  31.87g
  lv_root        vg_dash02     -wi-ao  35.29g
  lv_swap        vg_dash02     -wi-ao   6.86g

[root@dash-02 audit]# vgchange -aye linear_9_5581; echo $?
  0 logical volume(s) in volume group "linear_9_5581" now active
0

Expected results:
vgchange should return non-zero

Additional info:
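The expected exit-status contract from step 2 can be sketched as a small test wrapper: a zero exit from the second exclusive activation is exactly the failure being reported. This is a hypothetical illustration only; the stub function below stands in for the real clustered vgchange (which would need clvmd and a second node holding the lock), and the VG name is taken from the report.

```shell
#!/bin/sh
# Stub standing in for the real clustered vgchange: simulate node A
# already holding the exclusive lock, so a second exclusive activation
# is refused with a non-zero exit status (the behaviour the report expects).
vgchange() {
    echo "Error locking on node dash-01: Volume is busy on another node" >&2
    return 5
}

VG=linear_9_5581

# On a real cluster this would run on node B after node A has done
# "vgchange -aye $VG". Success here is the bug being reported.
if vgchange -aey "$VG" 2>/dev/null; then
    echo "FAIL: vgchange returned success despite the remote exclusive lock"
else
    echo "PASS: exclusive activation refused with a non-zero exit status"
fi
```

With the stub in place the wrapper prints the PASS line; against the lvm2-2.02.83-2.el6 build described above, the real command would instead take the FAIL branch.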
That old chestnut! If it's already active, the command has nothing to do, so should it therefore fail? Or is it enough to say that you wanted it active, it is active, so return success? I'm not sure we're ever going to resolve this to everyone's satisfaction.
(BTW Remember that vgchange -a is a clustered command which acts symmetrically on all nodes unless 'l' is used. vgchange -aey means activate it exclusively on any one node, subject to any tag and lvm.conf constraints. We don't support '-aely' yet.)
The "0 LVs active" message only queries local LVs. We probably do have the infrastructure available now to include remotely active LVs in those totals.
Alasdair, this is a regression. We ran this test throughout the RHEL6.0 process. Here is the test output from the RHEL6.0-20100818.0 tree, which contained lvm2-2.02.72-8.el6.x86_64:

EXCLUSIVE VOLUME GROUP LOCKING
deactivating volume group
grabing the exclusive lock on dash-01
attempting to also grab an exclusive lock on dash-02
  Error locking on node dash-02: Volume is busy on another node
attempting to grab a non exclusive lock on dash-02
  Error locking on node dash-02: Volume is busy on another node
  Error locking on node dash-03: Volume is busy on another node
  Error locking on node dash-01: Device or resource busy
attempting to also grab an exclusive lock on dash-03
  Error locking on node dash-03: Volume is busy on another node
attempting to grab a non exclusive lock on dash-03
  Error locking on node dash-03: Volume is busy on another node
  Error locking on node dash-02: Volume is busy on another node
  Error locking on node dash-01: Device or resource busy
releasing the exclusive lock on dash-01
Does anyone know why this behavior appears to have changed between 6.0 and 6.1?
Looking through my test logs shows that this behavior was fixed at some point during the release cycle. Testing against lvm2-2.02.83-3.el6.x86_64 passed this part of our tests.
Nate, do I read comment #12 correctly, that it is in fact fixed in current 6.1?