Description of problem:
-----------------------
clvmd is running on a new 4-node RHEL 5.6 cluster. It sees all the LVs that
exist in a given VG. After creating a new LV in the VG, all the other nodes
show nothing of the new LV. The creating node also reports the space being
taken up from the VG (using vgdisplay) while the other nodes continue to show
that space still available in the VG. Removing an LV is reflected only on the
node on which it was removed, while the other nodes continue to list the LV
using 'lvs'.

Component Version-Release:
--------------------------
kernel-2.6.18-238.el5 x86_64
lvm2-2.02.74-5.el5 x86_64
lvm2-cluster-2.02.74-3.el5 x86_64
cman-2.0.115-68.el5 x86_64
rgmanager-2.0.52-9.el5 x86_64

How reproducible:
-----------------
consistent

Steps to Reproduce:
-------------------
1) build a new cluster
2) activate the existing clustered VG
3) start clvmd ... 'clustered VG reports 22 logical volume(s) in volume group
   "vg_sas" now active'
4) make any change to vg_sas (e.g., add/remove an LV, etc.)
5) if adding a new LV, 'lvs' on the changing node will list the new LV while
   'lvs' on the other nodes will not
6) if removing an LV, 'lvs' on the changing node will not show the LV but it
   will still be listed on the other nodes
(A shell sketch of steps 2-6 appears at the end of this report.)

Actual results:
---------------
- Changes to lvm are reflected on the changing node only.

Expected results:
-----------------
- Changes to lvm are reflected on all clustered nodes.

Additional info:
----------------
1) 'clvmd -R' has no apparent effect and as such does not help.

2) 'service clvmd status' appears to be accurate in its listing of active LVs
   while the 'lvs' command does not. 'lvs' will continue to list the removed
   LV (on nodes that did not perform the removal) while the service status
   check correctly reflects the absence of that active LV on all nodes.

3) commands such as vgscan or lvscan have no apparent effect

4) the messages file contains no obvious errors

5) The VG in use (vg_sas) was originally created in a RHEL 6.1 cluster. It
   was later activated by a RHEL 5.4 cluster without issues, and the existing
   LVs as well as new ones functioned as expected. Now the same clustered VG
   is activated in a RHEL 5.6 cluster, but changes to the VG are apparently
   not propagating.

6) On nodes that did not perform the LV removal, lvdisplay without args will
   continue to list the removed LV ... but an lvdisplay of that specific LV
   will fail.

   root@bills:~ # lvdisplay
   ...
     --- Logical volume ---
     LV Name                /dev/vg_sas/lv_share_ASUITEoutput
     VG Name                vg_sas
     LV UUID                Gw5idW-wpFY-HZQn-0J78-Hhn2-gGGx-G37Ohi
     LV Write Access        read/write
     LV Status              NOT available
     LV Size                600.00 GB
     Current LE             153600
     Segments               2
     Allocation             inherit
     Read ahead sectors     auto

   root@bills:~ # lvdisplay /dev/vg_sas/lv_share_ASUITEoutput
     One or more specified logical volume(s) not found.

7) For some reason, the 'lvs -o+stripes,stripesize' command is listing
   existing LVs multiple times:

   ...
   oracle  vg1  -wi-    80.00G  1  0
   oracle  vg1  -wi-a-  80.00G  1  0
   oracle  vg1  -wi-a-  80.00G  1  0
   root    vg1  -wi-a-  14.00G  1  0
   root    vg1  -wi-a-  14.00G  1  0
   tmp     vg1  -wi-a-   7.00G  1  0
   tmp     vg1  -wi-a-   7.00G  1  0
   ...

   where the 'lvs' command without args does not:

   ...
   oracle  vg1  -wi-a-  80.00G
   root    vg1  -wi-a-  14.00G
   tmp     vg1  -wi-a-   7.00G

8) lvm.conf has locking_type=3
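For reference, a minimal shell sketch of steps 2-6 above, run on one node of
the started cluster. The names lv_test and node2 are hypothetical, not from
this cluster, and the exact activation order may differ from a given setup:

root@bills:~ # service clvmd start                # steps 2-3: starting clvmd activates the clustered VG
root@bills:~ # lvcreate -L 10G -n lv_test vg_sas  # step 4: change the VG (lv_test is illustrative)
root@bills:~ # lvs vg_sas                         # step 5: lv_test is listed on this (creating) node
root@bills:~ # ssh node2 lvs vg_sas               # symptom: lv_test is missing on the other nodes
root@bills:~ # lvremove -f vg_sas/lv_test         # step 6: removal is likewise visible only locally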
Can you please update to the RHEL 5.7 version or to 5.6.z and try again? This
is possibly a problem already fixed in lvm2-cluster-2.02.74-3.el5_6.1; see
https://rhn.redhat.com/errata/RHBA-2011-0288.html
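A minimal sketch of that update, assuming the nodes are subscribed to a
channel carrying the 5.6.z erratum (the package version comes from the
erratum linked above):

root@bills:~ # yum update lvm2 lvm2-cluster   # should pull in lvm2-cluster-2.02.74-3.el5_6.1 or later
root@bills:~ # rpm -q lvm2-cluster            # verify the installed version
root@bills:~ # reboot                         # or at least restart clvmd on every node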
After upgrading to 5.6.z and rebooting, the clustered VG vg_sas behaves as
expected. Changes to the VG on one node are reflected accordingly on other
nodes. That resolves the primary reason for this bz.

The only strangeness I still see is when adding the stripe info to the lvs
command ...

  lvs -o+stripes,stripesize

Using these args outputs doubled listings of some LVs where the same command
without args does not (as included below).

root@bills:~ # lvs
  Found duplicate PV FBWYl5Eveoi0ZSjyCWwoa72G3fAROMTH: using /dev/sdr not /dev/sdaw
  Found duplicate PV 06I6gfMqXuJg8H6SO45badT4FHfeK03p: using /dev/sdfv not /dev/sdcs
  LV                    VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00              VolGroup00 -wi-a- 273.84G
  LogVol01              VolGroup00 -wi-ao   5.41G
  LogVol00              VolGroup01 -wi-ao 230.12G
  LogVol01              VolGroup01 -wi-ao  49.12G
  home                  vg1        -wi-a-   4.00G
  oracle                vg1        -wi-a-  80.00G
  root                  vg1        -wi-a-  14.00G
  tmp                   vg1        -wi-a-   7.00G
  usr                   vg1        -wi-a-   4.00G
  var                   vg1        -wi-a-   2.00G
  lv_home               vg_bills   -wi-a-  60.61G
  lv_root               vg_bills   -wi-a-  50.00G
  lv_swap               vg_bills   -wi-ao  25.59G
  lv_b_ASUITE           vg_sas     -wi-a-   8.02G
  lv_b_ASUITEinput      vg_sas     -wi-a- 150.00G
  lv_b_ASUITEoutput     vg_sas     -wi-a- 150.00G
  lv_b_ASUITEwork       vg_sas     -wi-a- 150.00G
  lv_d_ASUITE           vg_sas     -wi-a-   8.02G
  lv_d_ASUITEinput      vg_sas     -wi-a- 150.00G
  lv_d_ASUITEoutput     vg_sas     -wi-a- 150.00G
  lv_d_ASUITEwork       vg_sas     -wi-a- 150.00G
  lv_j_ASUITE           vg_sas     -wi-a-   8.02G
  lv_j_ASUITEinput      vg_sas     -wi-a- 150.00G
  lv_j_ASUITEoutput     vg_sas     -wi-a- 150.00G
  lv_j_ASUITEwork       vg_sas     -wi-a- 150.00G
  lv_ora1               vg_sas     -wi-a-  70.00G
  lv_ora2               vg_sas     -wi-a-  70.00G
  lv_ora3               vg_sas     -wi-a-  70.00G
  lv_ora4               vg_sas     -wi-a-  70.00G
  lv_ora5               vg_sas     -wi-a-  70.00G
  lv_p_ASUITE           vg_sas     -wi-a-   8.02G
  lv_p_ASUITEinput      vg_sas     -wi-a- 150.00G
  lv_p_ASUITEoutput     vg_sas     -wi-a- 150.00G
  lv_p_ASUITEwork       vg_sas     -wi-a- 150.00G
  lv_share_ASUITEoutput vg_sas     -wi-a- 600.00G

root@bills:~ # lvs -o+stripes,stripesize
  Found duplicate PV FBWYl5Eveoi0ZSjyCWwoa72G3fAROMTH: using /dev/sdr not /dev/sdaw
  Found duplicate PV 06I6gfMqXuJg8H6SO45badT4FHfeK03p: using /dev/sdfv not /dev/sdcs
  LV                    VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert #Str Stripe
  LogVol00              VolGroup00 -wi-a- 273.84G                                          1      0
  LogVol01              VolGroup00 -wi-ao   5.41G                                          1      0
  LogVol00              VolGroup01 -wi-ao 230.12G                                          1      0
  LogVol01              VolGroup01 -wi-ao  49.12G                                          1      0
  home                  vg1        -wi-a-   4.00G                                          1      0
  oracle                vg1        -wi-a-  80.00G                                          1      0
  oracle                vg1        -wi-a-  80.00G                                          1      0
  oracle                vg1        -wi-a-  80.00G                                          1      0
  root                  vg1        -wi-a-  14.00G                                          1      0
  root                  vg1        -wi-a-  14.00G                                          1      0
  tmp                   vg1        -wi-a-   7.00G                                          1      0
  tmp                   vg1        -wi-a-   7.00G                                          1      0
  usr                   vg1        -wi-a-   4.00G                                          1      0
  var                   vg1        -wi-a-   2.00G                                          1      0
  lv_home               vg_bills   -wi-a-  60.61G                                          1      0
  lv_root               vg_bills   -wi-a-  50.00G                                          1      0
  lv_swap               vg_bills   -wi-ao  25.59G                                          1      0
  lv_b_ASUITE           vg_sas     -wi-a-   8.02G                                         12 64.00K
  lv_b_ASUITEinput      vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_b_ASUITEoutput     vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_b_ASUITEwork       vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_d_ASUITE           vg_sas     -wi-a-   8.02G                                         12 64.00K
  lv_d_ASUITEinput      vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_d_ASUITEoutput     vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_d_ASUITEwork       vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_j_ASUITE           vg_sas     -wi-a-   8.02G                                         12 64.00K
  lv_j_ASUITEinput      vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_j_ASUITEoutput     vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_j_ASUITEwork       vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_ora1               vg_sas     -wi-a-  70.00G                                          8  1.00M
  lv_ora2               vg_sas     -wi-a-  70.00G                                          8  1.00M
  lv_ora3               vg_sas     -wi-a-  70.00G                                          8  1.00M
  lv_ora4               vg_sas     -wi-a-  70.00G                                          8  1.00M
  lv_ora5               vg_sas     -wi-a-  70.00G                                          8  1.00M
  lv_p_ASUITE           vg_sas     -wi-a-   8.02G                                         12 64.00K
  lv_p_ASUITEinput      vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_p_ASUITEoutput     vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_p_ASUITEwork       vg_sas     -wi-a- 150.00G                                         12 64.00K
  lv_share_ASUITEoutput vg_sas     -wi-a- 600.00G                                         12 64.00K
  lv_share_ASUITEoutput vg_sas     -wi-a- 600.00G                                         12 64.00K
(In reply to comment #2)
> After upgrading to 5.6.z and rebooting, the clustered VG vg_sas behaves as
> expected. Changes to the VG on one node are reflected accordingly on other
> nodes. That resolves the primary reason for this bz.

ok

> The only strangeness I still see is when adding the stripe info to the lvs
> command ...
>   lvs -o+stripes,stripesize
> Using these args outputs doubled listings of some LVs where the same command
> without args does not

Perhaps those LVs have several segments. Anyway, please create a new bugzilla
if you think it is a bug, thanks.

(Maybe also compare the lvs output to lvs --segments / lvdisplay /
lvdisplay -m; a sketch of that comparison follows.)
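A minimal sketch of that comparison, using the vg1 LV 'oracle' from the output
above as an illustrative target (any of the doubled LVs would serve):

root@bills:~ # lvs --segments -o+stripes,stripesize vg1
    # reports one row per segment, so a multi-segment LV legitimately shows
    # several rows whenever segment fields such as #Str/Stripe are requested
root@bills:~ # lvdisplay -m /dev/vg1/oracle
    # the '--- Segments ---' section shows how many segments the LV has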