Bug 1062699 - [RFE] foreman should allow for cinder share mount options on HA Controller
Summary: [RFE] foreman should allow for cinder share mount options on HA Controller
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: z4
Target Release: 4.0
Assignee: Jiri Stransky
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks: 1040649 1045196
 
Reported: 2014-02-07 18:18 UTC by Steve Reichard
Modified: 2022-07-09 06:38 UTC
CC List: 10 users

Fixed In Version: openstack-foreman-installer-1.0.7-1.el6ost
Doc Type: Enhancement
Doc Text:
Feature: The HA All In One Controller host group should allow setting Block Storage backend mount options.
Reason: This enhances Block Storage availability via RHS.
Result: Gluster shares for Block Storage are specified using the "glusterfs_shares" parameter on the quickstack::pacemaker::cinder class. The value should be an array of Gluster shares and can contain mount options. Example: ["glusterserver1:/cinder -o backupvolfile-server=glusterserver2"]
Clone Of:
Clones: 1092072
Environment:
Last Closed: 2014-05-29 20:30:36 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-16427 0 None None None 2022-07-09 06:38:52 UTC
Red Hat Product Errata RHSA-2014:0517 0 normal SHIPPED_LIVE Moderate: openstack-foreman-installer security, bug fix, and enhancement update 2014-05-30 00:26:29 UTC

Description Steve Reichard 2014-02-07 18:18:22 UTC
Description of problem:


One feature to enhance Cinder availability via RHS would be to allow mount options to be specified for the backend shares.

I work around this by issuing this command, since the one option I care about is backupvolfile-server.

ssh ${CONFIG_CINDER_HOST} "sed -i 's/^172.31.143.91:\\/OSTACKcinder\$/172.31.143.91:\\/OSTACKcinder  -o backupvolfile-server=172.31.143.92/' /etc/cinder/shares.conf"
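
The sed above is intended to leave /etc/cinder/shares.conf with a share line like the following (addresses taken from the expression above):

  172.31.143.91:/OSTACKcinder  -o backupvolfile-server=172.31.143.92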


See packstack BZ: 1030069

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Crag Wolfe 2014-02-12 23:40:50 UTC
It looks like this is NFS-specific, and we could expose the NFS mount options [1] in Foreman.
[1] https://github.com/stackforge/puppet-cinder/commit/209a5bd36af085cf306cf084ed1af1bf0fc9f41d#diff-6908ab4596dc8266e0b555baad4a32b6R12

Comment 3 Steve Reichard 2014-02-13 14:09:31 UTC
This is not just NFS; it would also apply to RHS/Gluster.

Comment 4 Jiri Stransky 2014-04-14 16:36:39 UTC
Gluster mount options are exposed according to

https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_GlusterFS

in a pull request upstream:

https://github.com/redhat-openstack/astapor/pull/165#issuecomment-40386287

Comment 6 Jiri Stransky 2014-04-22 13:21:58 UTC
To verify the Gluster share mount options, enter a GlusterFS mount line (as described in [1]) into the glusterfs_shares array parameter, e.g.:

["gluster1:/cinder -o backupvolfile-server=gluster2"]

After Puppet runs on the HA Controller node, the file /etc/cinder/shares.conf should contain the mount line, including the options (a quick check is sketched below).

[1] https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_GlusterFS
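
As a quick check on the controller after the Puppet run (a sketch, using the hypothetical hostnames from the example above):

  grep -F 'backupvolfile-server' /etc/cinder/shares.conf
  # expected output, matching the configured share line:
  gluster1:/cinder -o backupvolfile-server=gluster2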

Comment 11 Jiri Stransky 2014-05-05 13:28:35 UTC
Merged upstream.

Comment 13 nlevinki 2014-05-20 08:19:03 UTC
I configured the following:
glustrfs_shares = ["10.35.64.106:/cinder -o backupvolfile-server=10.35.102.17"]
and ran puppet agent -t -v on the client.
The OpenStack installation failed:

Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster openstack]/returns: executed successfully                                    
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Error: unable to start cman                               
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting Cluster...                                       
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Starting cluster:                                         
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Checking if cluster has been disabled at boot... [  OK  ]
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Checking Network Manager... [  OK  ]                     
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Global setup... [  OK  ]                                 
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Loading kernel modules... [  OK  ]                       
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Mounting configfs... [  OK  ]                            
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Starting cman... Cannot find node name in cluster.conf   
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Unable to get the configuration                             
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Cannot find node name in cluster.conf                       
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: cman_tool: corosync daemon didn't start Check cluster logs for details                                                                                                                                              
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: [FAILED]                                                          
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping cluster:                                                 
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Leaving fence domain... [  OK  ]                               
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping gfs_controld... [  OK  ]                              
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping dlm_controld... [  OK  ]                              
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping fenced... [  OK  ]                                    
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping cman... [  OK  ]                                      
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Unloading kernel modules... [  OK  ]                           
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Unmounting configfs... [  OK  ]                                
Error: /usr/sbin/pcs cluster start returned 1 instead of one of [0]                                                                               
Error: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: change from notrun to 0 failed: /usr/sbin/pcs cluster start returned 1 instead of one of [0]                                                                                                                         
Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Dependency Exec[Start Cluster openstack] has failures: true                       
Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Skipping because of failed dependencies                                          
Notice: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Dependency Exec[Start Cluster openstack] has failures: true                   
Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Skipping because of failed dependencies                                      
Notice: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Dependency Exec[Start Cluster openstack] has failures: true                        
Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Skipping because of failed dependencies                                           
Notice: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Dependency Exec[Start Cluster openstack] has failures: true      
Warning: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Skipping because of failed dependencies                         
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Dependency Exec[Start Cluster openstack] has failures: true        
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Skipping because of failed dependencies                           
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                                
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Skipping because of failed dependencies                 
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                         
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Skipping because of failed dependencies          
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Dependency Exec[Start Cluster openstack] has failures: true         
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Skipping because of failed dependencies                            
Notice: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Dependency Exec[Start Cluster openstack] has failures: true     
Warning: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Skipping because of failed dependencies                        
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Skipping because of failed dependencies 
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                     
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Skipping because of failed dependencies                                                                                                                                        
Notice: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Dependency Exec[Start Cluster openstack] has failures: true       
Warning: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Skipping because of failed dependencies                          
Notice: /File[/root/.my.cnf]: Dependency Exec[Start Cluster openstack] has failures: true                                                         
Warning: /File[/root/.my.cnf]: Skipping because of failed dependencies                                                                            
Notice: /Stage[main]/Quickstack::Pacemaker::Common/File[ha-all-in-one-util-bash-tests]/ensure: defined content as '{md5}73541c24158939763c0bddefb71a52e8'                                                                                                                                           
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                   
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Skipping because of failed dependencies    
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                                 
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Skipping because of failed dependencies                  
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                      
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies                                                                                                                         
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                            
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies                                                                                                               
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Skipping because of failed dependencies 
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Dependency Exec[Start Cluster openstack] has failures: true                
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Skipping because of failed dependencies                                   
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                  
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Skipping because of failed dependencies   
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Triggered 'refresh' from 4 events
Notice: /File[/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Info: Concat[/etc/haproxy/haproxy.cfg]: Scheduling refresh of Service[haproxy]
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Service[haproxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Skipping because of failed dependencies
Notice: Finished catalog run in 680.64 seconds

Comment 14 Jiri Stransky 2014-05-20 11:38:03 UTC
Hi Nathan,

the issue you posted is probably not related to this BZ. It looks like corosync failed to start, and this line gives a hint about what might be wrong:

Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Starting cman... Cannot find node name in cluster.conf   

The cause might be a typo made when filling in the pacemaker_cluster_members variable. It would also be worth checking whether /etc/cluster/cluster.conf looks correct.
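
A minimal sanity check (a sketch, assuming the standard RHEL 6 cman cluster.conf layout) is to compare the node's hostname against the node names registered in cluster.conf:

  hostname
  grep 'clusternode name' /etc/cluster/cluster.conf

The hostname should match one of the clusternode entries; "Starting cman... Cannot find node name in cluster.conf" typically indicates that it does not.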

Comment 16 nlevinki 2014-05-28 06:47:53 UTC
Got this info from Steve Reichard on May 27:
> I edit this parameters
>
> glusterfs_shars = ["10.35.64.106:/cinder -o backupvolfile-server=10.35.102.17"]

backupvolfile-server is no longer a supported option; it should be backup-volfile-servers, and if you have more than one backup server, separate them with ':'.
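
For example, with the addresses from the quoted configuration above (the second backup server address below is hypothetical), the corrected value would look like:

  glusterfs_shares = ["10.35.64.106:/cinder -o backup-volfile-servers=10.35.102.17"]

or, with two backup volfile servers:

  glusterfs_shares = ["10.35.64.106:/cinder -o backup-volfile-servers=10.35.102.17:10.35.102.18"]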


> pacemaker_cluster_men = 10.35.163.52 10.35.162.34
> admin_password= qum5net

Comment 19 errata-xmlrpc 2014-05-29 20:30:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0517.html

