Description of problem:

One feature to enhance cinder availability via RHS would be to allow mount options to be specified. I work around this by issuing the following command, since the one option I care about is backupvolfile-server:

ssh ${CONFIG_CINDER_HOST} "sed -i 's/^172.31.143.91:\\/OSTACKcinder\$/172.31.143.91:\\/OSTACKcinder -o backupvolfile-server=172.31.143.92/' /etc/cinder/shares.conf"

See packstack BZ: 1030069

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
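For reference, the intent of that workaround is to turn the existing share entry in /etc/cinder/shares.conf from:

172.31.143.91:/OSTACKcinder

into:

172.31.143.91:/OSTACKcinder -o backupvolfile-server=172.31.143.92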
It looks like this is NFS-specific, and we could expose the NFS mount options [1] in Foreman.

[1] https://github.com/stackforge/puppet-cinder/commit/209a5bd36af085cf306cf084ed1af1bf0fc9f41d#diff-6908ab4596dc8266e0b555baad4a32b6R12
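If the option were exposed through puppet-cinder, usage might look roughly like the following sketch (the class and parameter names should be checked against the linked commit; the server address and option string here are only placeholders):

class { 'cinder::volume::nfs':
  nfs_servers       => ['nfsserver:/cinder'],
  nfs_mount_options => 'vers=3,noatime',
}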
This is not just NFS; it would also apply to RHS/Gluster.
Gluster mount options (as described in https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_GlusterFS) are now exposed by an upstream pull request: https://github.com/redhat-openstack/astapor/pull/165#issuecomment-40386287
To verify Gluster share mount options, enter a GlusterFS mount line (as described in [1]) into the glusterfs_shares array parameter, e.g.:

["gluster1:/cinder -o backupvolfile-server=gluster2"]

After Puppet runs on the HA Controller node, the file /etc/cinder/shares.conf should contain the mount line, including the options.

[1] https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_GlusterFS
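Concretely, with the example value above, the expected content of /etc/cinder/shares.conf after the Puppet run would be:

gluster1:/cinder -o backupvolfile-server=gluster2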
Merged upstream.
I configured this:

glustrfs_shares = ["10.35.64.106:/cinder -o backupvolfile-server=10.35.102.17"]

and ran puppet agent -t -v on the client. The OpenStack installation failed:

Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster openstack]/returns: executed successfully
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Error: unable to start cman
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting Cluster...
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting cluster:
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Checking if cluster has been disabled at boot... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Checking Network Manager... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Global setup... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Loading kernel modules... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Mounting configfs... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting cman... Cannot find node name in cluster.conf
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Unable to get the configuration
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Cannot find node name in cluster.conf
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: cman_tool: corosync daemon didn't start Check cluster logs for details
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: [FAILED]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping cluster:
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Leaving fence domain... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping gfs_controld... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping dlm_controld... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping fenced... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping cman... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Unloading kernel modules... [ OK ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Unmounting configfs... [ OK ]
Error: /usr/sbin/pcs cluster start returned 1 instead of one of [0]
Error: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: change from notrun to 0 failed: /usr/sbin/pcs cluster start returned 1 instead of one of [0]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Skipping because of failed dependencies
Notice: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Skipping because of failed dependencies
Notice: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Skipping because of failed dependencies
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Skipping because of failed dependencies
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Skipping because of failed dependencies
Notice: /File[/root/.my.cnf]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/root/.my.cnf]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Common/File[ha-all-in-one-util-bash-tests]/ensure: defined content as '{md5}73541c24158939763c0bddefb71a52e8'
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Skipping because of failed dependencies
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Skipping because of failed dependencies
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Triggered 'refresh' from 4 events
Notice: /File[/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Info: Concat[/etc/haproxy/haproxy.cfg]: Scheduling refresh of Service[haproxy]
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Service[haproxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Skipping because of failed dependencies
Notice: Finished catalog run in 680.64 seconds
Hi Nathan, the issue you posted is probably not related to this BZ. It looks like corosync failed to start; this line hints at what might be wrong:

Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting cman... Cannot find node name in cluster.conf

Maybe the cause is a typo when filling in the pacemaker_cluster_members variable? It might also be worth checking whether /etc/cluster/cluster.conf looks correct.
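One way to check (illustrative commands, assuming a standard cman/cluster.conf setup) is to confirm that the node's own hostname appears as a clusternode entry:

hostname
grep clusternode /etc/cluster/cluster.conf

The "Starting cman... Cannot find node name in cluster.conf" message typically means the local node name does not match any of the clusternode entries generated from pacemaker_cluster_members.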
Got this information from Steve Reichard on May 27:

> I edit this parameters
>
> glusterfs_shars = ["10.35.64.106:/cinder -o backupvolfile-server=10.35.102.17"]

backupvolfile-server is no longer a supported option; it should be backup-volfile-servers, and if you have more than one backup server, separate them with ':'.

> pacemaker_cluster_men = 10.35.163.52 10.35.162.34
> admin_password= qum5net
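Based on that correction, the glusterfs_shares value would presumably need to look like this (the second backup address below is purely illustrative):

glusterfs_shares = ["10.35.64.106:/cinder -o backup-volfile-servers=10.35.102.17"]

or, with more than one backup server:

glusterfs_shares = ["10.35.64.106:/cinder -o backup-volfile-servers=10.35.102.17:10.35.102.18"]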
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2014-0517.html