PM ACK for U2.
Implemented in 1.0.17
Features to test:
- the 'mount' checkbox in the LV new/edit dialog should be enabled for local GFS only if GFS is supported by the kernel, either as a loadable (loaded or not) or built-in module
- in addition to kernel support, clustered GFS should be mountable only if the cluster is quorate, the clustername matches the GFS table, and both use the same locking mechanism (dlm or gulm)
- the app should offer creation of GFS only if the GFS software is installed (it comes in a different rpm than kernel GFS support)
- if a VG is marked as clustered, the app should offer creation of clustered GFS only; otherwise, both clustered and local GFS should be offered
- at creation of clustered GFS, a properties dialog should pop up prompting for clustername, unique GFS name, number of journals, and locking type (dlm or gulm). Clustername and locking type should be pre-populated if the node is a cluster member (has a valid /etc/cluster/cluster.conf)
- when an LV or uninitialized disk entity is selected, the properties pane on the right side should display the filesystem type
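The kernel-support check in the first item could be approximated as below. This is only an illustrative sketch, not the code system-config-lvm actually uses; the function names are invented. A filesystem registered in /proc/filesystems is either built in or its module is loaded; `modprobe -n` (dry run) covers the loadable-but-not-loaded case.

```python
import subprocess

def kernel_supports_fs(proc_filesystems_text, fsname):
    """Return True if fsname appears in the given /proc/filesystems
    contents (i.e. the filesystem is built in or its module is loaded)."""
    for line in proc_filesystems_text.splitlines():
        fields = line.split()
        if fields and fields[-1] == fsname:
            return True
    return False

def gfs_supported(fsname="gfs"):
    """True if the kernel supports GFS: already registered in
    /proc/filesystems, or available as a loadable (not yet loaded) module."""
    try:
        with open("/proc/filesystems") as f:
            if kernel_supports_fs(f.read(), fsname):
                return True
    except OSError:
        pass
    # 'modprobe -n' does a dry run: it exits 0 if a loadable module exists.
    return subprocess.call(["modprobe", "-n", fsname],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0
```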
In #14 you mention that the mount checkbox should only be enabled for local GFS filesystems. I see that it is enabled and working for clustered GFS too. Is the comment in #14 correct, or is this not supposed to work?
Clustered GFS IS supposed to work (sorry for any confusion). Here is how it is supposed to work:
- at creation of clustered GFS, a properties dialog should pop up prompting for clustername, unique GFS name, number of journals, and locking type (dlm or gulm). Clustername and locking type should be pre-populated if the node is a cluster member (has a valid /etc/cluster/cluster.conf)
- if a VG is marked as clustered, the app should offer creation of clustered GFS only; otherwise, both clustered and local GFS should be offered
- clustered GFS should be mountable only if the cluster is quorate, the clustername matches the GFS table, both use the same locking mechanism (dlm or gulm), and GFS is supported by the kernel
There's still a problem when the cluster is not quorate. Trying to do anything that causes s-c-lvm to reload or check the lvm status will hang. When the cluster is not quorate, the lvm commands s-c-lvm uses do not return.
Test with 1.0.19-1.0. The app should gracefully fail, on startup, if the cluster is not quorate. There still remains a problem if quorum is lost in the middle of an operation; a hang until quorum is regained should be an acceptable solution, at least for now.
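One way to fail gracefully on startup instead of hanging forever on a non-returning lvm command is to run it under a timeout. A minimal sketch, assuming a wrapper like this; the timeout value and error handling are assumptions, not what system-config-lvm actually does:

```python
import subprocess

def run_cmd(cmd, timeout=10):
    """Run a command, raising instead of hanging forever.

    When the cluster has lost quorum, lvm commands such as 'lvs' block
    indefinitely; killing the child after a timeout lets the caller
    report the failure and exit cleanly on startup.
    """
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout)
    except subprocess.TimeoutExpired:
        raise RuntimeError("%s did not return within %ds; "
                           "cluster may not be quorate" % (cmd[0], timeout))
    return result.stdout
```

For example, the startup scan would call `run_cmd(["lvm", "lvs", "--all"])` and show an error dialog if `RuntimeError` is raised.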
This doesn't work. The first thing it tries is an `lvs`, which hangs.

[root@tank-01 ~]# strace -f -e execve system-config-lvm
execve("/usr/sbin/system-config-lvm", ["system-config-lvm"], [/* 21 vars */]) = 0
Process 12432 attached
[pid 12432] execve("/bin/bash", ["/bin/bash", "-c", "LANG=C /usr/sbin/lvm lvs --all"], [/* 21 vars */]) = 0
[pid 12432] execve("/usr/sbin/lvm", ["/usr/sbin/lvm", "lvs", "--all"], [/* 21 vars */]) = 0
[* HANG *]
Ugh, didn't have 1.0.19 installed. It works much better now.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2006-0528.html