Description of problem:
In system-config-cluster, when you add an LVM resource, the fields in the GUI for 'Name' and 'Logical Volume Name' are backwards; that is, the value entered for 'Name' is written to cluster.conf as the 'lv_name' attribute, and the value for 'Logical Volume Name' is written as the 'name' attribute. Additionally, since resource names must be unique across all resources, system-config-cluster incorrectly checks the value entered for 'Logical Volume Name' for uniqueness.

Version-Release number of selected component (if applicable):
system-config-cluster-1.0.50-1.3

How reproducible:
Every time.

Steps to Reproduce:
1. Create an LVM resource in system-config-cluster.

Actual results:
For 'Name' I entered 'name'; for 'Volume Group Name' I entered 'vg_name'; and for 'Logical Volume Name' I entered 'lv_name'. The output in cluster.conf was:

<lvm lv_name="name" name="lv_name" vg_name="vg_name"/>

Expected results:
<lvm lv_name="lv_name" name="name" vg_name="vg_name"/>
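The swap described above can be sketched as follows. This is a minimal illustration of the reported behavior, not the actual ResourceHandler.py code; the function names are hypothetical:

```python
def lvm_attrs_buggy(name, vg_name, lv_name):
    # Reported (buggy) behavior: the GUI 'Name' value lands in the
    # lv_name attribute and 'Logical Volume Name' lands in name.
    return {"lv_name": name, "name": lv_name, "vg_name": vg_name}

def lvm_attrs_expected(name, vg_name, lv_name):
    # Expected behavior: each GUI field maps to the matching attribute.
    return {"lv_name": lv_name, "name": name, "vg_name": vg_name}

# Reproducing the reporter's input:
print(lvm_attrs_buggy("name", "vg_name", "lv_name"))
# -> {'lv_name': 'name', 'name': 'lv_name', 'vg_name': 'vg_name'}
```

With the swapped mapping, any uniqueness check on the 'name' attribute ends up validating the logical volume name instead of the resource name, which matches the second symptom in the report.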
Thanks for this bug! This is a very simple fix.
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
Setting QA_ACK, clear test case.
Oops - I might have spoken too soon. It works for me on version 1.0.51-1.3. Above, you mention that you are running 1.0.50-1.3 - was this a typo? If not, please upgrade to the latest version and try again.
That wasn't a typo. As far as I can tell (I don't have an AP or RHCS provision), 1.0.51-1.3 hasn't been released yet. It doesn't show up in Red Hat's SRPMs, at least.
You are running a RHEL4 cluster, though, correct?
No, RHEL 5. The 'Version' dropdown in Bugzilla only provides 3 and 4 as options; there was no way to choose 5, so I assumed that was some sort of overall RHCS version number and just chose the newest one.
Ah - good thing I asked.

One more thought - are you looking at the Python code for this resource in the ResourceHandler.py file? In the val_lvm method, the text field names are a bit confusing, but they try to stay consistent with the text field names in the other resource glade files; for example, the 'name' of the resource for lvm is mapped to the text field 'lv_name', and the LV name is mapped to a text field called 'lv_lvname'. I checked the code and it is correct - and the correct resource name field value is being validated for uniqueness. *shrug*

Anyway, I just built this package for Fedora. The version number is system-config-cluster-1.0.53-1.0. You should be able to retrieve this from the fedoraproject.org site. If you have any more problems, please file against the Fedora version.

I really truly cannot reproduce this with your stated input above, and I have even given the code a good look. Good luck.
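The widget-to-attribute mapping described in this comment could be sketched like so. The widget names 'lv_name' and 'lv_lvname' come from the comment above; 'lv_vgname' is an assumed name for the volume-group field, and the code is illustrative, not the actual val_lvm implementation:

```python
# Illustrative mapping from glade text-field names to cluster.conf
# attributes, per the comment above. 'lv_vgname' is an assumption.
FIELD_TO_ATTR = {
    "lv_name": "name",       # resource 'Name' widget -> name attribute
    "lv_lvname": "lv_name",  # 'Logical Volume Name' widget -> lv_name attribute
    "lv_vgname": "vg_name",  # assumed: 'Volume Group Name' widget -> vg_name
}

def collect_lvm_attrs(widget_values):
    # widget_values: {glade field name: text entered in the GUI}
    return {FIELD_TO_ATTR[f]: v for f, v in widget_values.items()}
```

Note how the widget named 'lv_name' holds the resource name, not the LV name - exactly the kind of naming mismatch where a swapped read of the two widgets could produce the reported behavior.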
Created attachment 315495 [details] Patch for /usr/share/system-config-cluster/ResourceHandler.py
Created attachment 315496 [details] Screenshot showing the error when creating LVM resource
I've attached a screenshot showing that the problem is in system-config-cluster before it even writes to cluster.conf. The screenshot shows s-c-c complaining that I haven't supplied a name for the resource when I quite clearly have (havg1). However, if I put the 'havg1' string in the 'Logical Volume Name:' field, it happily accepts that as the resource name.

I've also attached a patch that fixes the problem (at least for me), but there might be another/better way to fix it.

This is all from system-config-cluster-1.0.54-2.0 on RHEL4u7. The same problem has been observed with system-config-cluster-1.0.51-2.0 on RHEL4u6 too.

Thanks, Mark
This is not fixed - reopening.
The original filer of this bug should have filed it against RHEL5 instead of RHCS 4, as comment #7 suggests, so I am moving this to the proper product/component. It looks like this has been resolved as of system-config-cluster-1.0.55-1.0 in the RHEL5 updates. If someone finds that this is not the case we can reopen the bug, but I am closing it for now as CURRENTRELEASE.