Bug 1064050 - [RFE] foreman should allow the configuration of glance using RHS/NFS and direct file access
Summary: [RFE] foreman should allow the configuration of glance using RHS/NFS and direct file access
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: z4
Target Release: 4.0
Assignee: Crag Wolfe
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks: 1040649 1045196 1082785
 
Reported: 2014-02-11 21:59 UTC by Steve Reichard
Modified: 2022-07-09 06:38 UTC
CC List: 10 users

Fixed In Version: openstack-foreman-installer-1.0.6-1.el6ost
Doc Type: Enhancement
Doc Text:
On the HA-all-in-one controller, glance file access to shared storage can be configured with these parameters: backend (must be 'file'), pcmk_fs_manage (must be 'true' for pacemaker to manage the file system), pcmk_fs_device (the shared storage device), and pcmk_fs_options (any needed mount options). For cinder, the relevant parameters are: volume_backend ('glusterfs' or 'nfs'), glusterfs_shares (if using gluster), nfs_shares (if using NFS), and nfs_mount_options (if using NFS).
Clone Of:
: 1082785
Environment:
Last Closed: 2014-05-29 20:30:56 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-16425 0 None None None 2022-07-09 06:38:02 UTC
Red Hat Product Errata RHSA-2014:0517 0 normal SHIPPED_LIVE Moderate: openstack-foreman-installer security, bug fix, and enhancement update 2014-05-30 00:26:29 UTC

Description Steve Reichard 2014-02-11 21:59:41 UTC
Description of problem:

Foreman installs should allow RHS or NFS to be used to hold images, and also allow for direct file access to those images. 



Comment 3 Steve Reichard 2014-03-28 19:40:13 UTC
Glance and cinder should be handled separately.

For glance, let's separate this into two categories:


File:

I think the only differences between NFS and GlusterFS are the mount type, mount options, and required packages.

These are the commands I use on my current controller:

  ssh ${CONFIG_GLANCE_HOST} "mkdir -p /var/lib/glance;chown 161.161 /var/lib/glance"
  ssh ${CONFIG_GLANCE_HOST} "echo 'ra-rhs-srv1-10g.storage.se.priv:/OSTACKglance /var/lib/glance glusterfs _netdev,selinux,backup-volfile-servers=ra-rhs-srv2-10g.storage.se.priv 0 0' >> /etc/fstab"
  ssh ${CONFIG_GLANCE_HOST} "mount /var/lib/glance"
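
For comparison, an NFS mount should differ only in the fstype, the mount options, and the required packages; a rough sketch (the server name, export path, and mount options here are placeholders):

  ssh ${CONFIG_GLANCE_HOST} "mkdir -p /var/lib/glance;chown 161.161 /var/lib/glance"
  ssh ${CONFIG_GLANCE_HOST} "echo 'nfs-server.example.com:/OSTACKglance /var/lib/glance nfs _netdev,vers=3 0 0' >> /etc/fstab"
  ssh ${CONFIG_GLANCE_HOST} "mount /var/lib/glance"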





Direct file:

Enabling this means the images will not be transferred over HTTP through the API endpoint, but instead copied 'locally' from the file.

This requires the following on the controller:

  ssh ${CONFIG_GLANCE_HOST} "openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True"
  ssh ${CONFIG_NOVA_API_HOST} "openstack-config --set /etc/nova/nova.conf DEFAULT allowed_direct_url_schemes \[file\]"


And this on the computes:

   ssh $i "mkdir -p /var/lib/glance;chown 161.161 /var/lib/glance"
   ssh $i "echo 'ra-rhs-srv1-10g.storage.se.priv:/OSTACKglance /var/lib/glance glusterfs _netdev,selinux,backup-volfile-servers=ra-rhs-srv2-10g.storage.se.priv 0 0' >> /etc/fstab"
   ssh $i "mount /var/lib/glance"
   ssh $i "openstack-config --set /etc/nova/nova.conf DEFAULT allowed_direct_url_schemes \[file\]"
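
To sanity-check the direct-file settings afterwards, the same keys can be read back with openstack-config --get (a small sketch reusing the hosts above):

   ssh ${CONFIG_GLANCE_HOST} "openstack-config --get /etc/glance/glance-api.conf DEFAULT show_image_direct_url"
   ssh $i "openstack-config --get /etc/nova/nova.conf DEFAULT allowed_direct_url_schemes"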

Comment 4 Steve Reichard 2014-03-28 19:44:43 UTC
I just remembered, this is how I do the mount using pcs:

ssh $HA_FIRST "pcs resource create glance-fs Filesystem device=\"ra-rhs-srv1-10g.storage.se.priv:/OSTACKglance\" directory=\"/var/lib/glance/\" fstype=\"glusterfs\" options=\"selinux,backup-volfile-servers=ra-rhs-srv2-10g.storage.se.priv\" --clone"
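
The NFS case should presumably only need a different device, fstype, and options; a hedged sketch (server, export path, and mount options are placeholders):

ssh $HA_FIRST "pcs resource create glance-fs Filesystem device=\"nfs-server.example.com:/OSTACKglance\" directory=\"/var/lib/glance/\" fstype=\"nfs\" options=\"vers=3\" --clone"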

Comment 5 Steve Reichard 2014-03-31 20:24:16 UTC
This BZ was cloned to split the two requests.

One request was for NFS/RHS enablement for glance. That half is staying with this BZ and should remain with RHOS 4 A4. I believe this will be needed for HA.

The direct file enablement has been moved to BZ 1082785, which is now targeted at RHOS 5.

Comment 10 nlevinki 2014-05-19 14:39:01 UTC
Configured Foreman and changed these parameters:
1) pcmk_fs_device = 10.35.64.106://nlevinki_glance
2) pcmk_fs_type   = glusterfs
3) backend        = file (this is the default config)
4) pcmk_fs_manage = true (this is the default config)
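
For reference, a rough sketch of what these parameters would presumably translate to, assuming (as in comment 4) the installer creates a pacemaker Filesystem clone resource; the device value is copied verbatim from the list above:

pcs resource create glance-fs Filesystem device="10.35.64.106://nlevinki_glance" directory="/var/lib/glance/" fstype="glusterfs" --clone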

Installation of OpenStack failed:

Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: [FAILED]                                                          
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: Stopping cluster:                                                 
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Leaving fence domain... [  OK  ]                               
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping gfs_controld... [  OK  ]                              
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping dlm_controld... [  OK  ]                              
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping fenced... [  OK  ]                                    
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Stopping cman... [  OK  ]                                      
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Unloading kernel modules... [  OK  ]                           
Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns:    Unmounting configfs... [  OK  ]                                
Error: /usr/sbin/pcs cluster start returned 1 instead of one of [0]                                                                               
Error: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster openstack]/returns: change from notrun to 0 failed: /usr/sbin/pcs cluster start returned 1 instead of one of [0]                                                                                                                         
Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Dependency Exec[Start Cluster openstack] has failures: true                       
Warning: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]: Skipping because of failed dependencies                                          
Notice: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Dependency Exec[Start Cluster openstack] has failures: true                   
Warning: /Stage[main]/Pacemaker::Corosync/Notify[pacemaker settled]: Skipping because of failed dependencies                                      
Notice: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Dependency Exec[Start Cluster openstack] has failures: true                        
Warning: /Stage[main]/Pacemaker::Stonith/Exec[Disable STONITH]: Skipping because of failed dependencies                                           
Notice: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Dependency Exec[Start Cluster openstack] has failures: true      
Warning: /Stage[main]/Quickstack::Pacemaker::Common/Exec[stonith-setup-complete]: Skipping because of failed dependencies                         
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Dependency Exec[Start Cluster openstack] has failures: true        
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-fs-to-be-active]: Skipping because of failed dependencies                           
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                                
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[sleep-so-really-sure-fs-is-mounted]: Skipping because of failed dependencies                 
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                         
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[create-socket-symlink-if-we-own-the-mount]: Skipping because of failed dependencies          
Notice: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Dependency Exec[Start Cluster openstack] has failures: true         
Warning: /Stage[main]/Quickstack::Hamysql::Node/Exec[wait-for-mysql-to-start]: Skipping because of failed dependencies                            
Notice: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Dependency Exec[Start Cluster openstack] has failures: true     
Warning: /Stage[main]/Quickstack::Hamysql::Node/File[are-we-running-mysql-script]: Skipping because of failed dependencies                        
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-00_mysql_listen_block]: Skipping because of failed dependencies 
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                     
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-mysql-mysql_mysql_balancermember_mysql]: Skipping because of failed dependencies                                                                                                                                        
Notice: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Dependency Exec[Start Cluster openstack] has failures: true       
Warning: /Stage[main]/Quickstack::Hamysql::Mysql::Rootpw/Exec[set_mysql_rootpw]: Skipping because of failed dependencies                          
Notice: /File[/root/.my.cnf]: Dependency Exec[Start Cluster openstack] has failures: true                                                         
Warning: /File[/root/.my.cnf]: Skipping because of failed dependencies                                                                            
Notice: /Stage[main]/Quickstack::Pacemaker::Common/File[ha-all-in-one-util-bash-tests]/ensure: defined content as '{md5}73541c24158939763c0bddefb71a52e8'                                                                                                                                           
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                   
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[pcs-memcached-server-set-up-on-this-node]: Skipping because of failed dependencies    
Notice: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                                 
Warning: /Stage[main]/Quickstack::Pacemaker::Memcached/Exec[all-memcached-nodes-are-up]: Skipping because of failed dependencies                  
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                      
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies                                                                                                                         
Notice: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                            
Warning: /Stage[main]/Quickstack::Load_balancer::Common/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Exec[exec_sysctl_net.ipv4.ip_nonlocal_bind]: Skipping because of failed dependencies                                                                                                               
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-stats-00_stats_listen_block]: Skipping because of failed dependencies 
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Dependency Exec[Start Cluster openstack] has failures: true                
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Qpid_user[openstack]: Skipping because of failed dependencies                                   
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Dependency Exec[Start Cluster openstack] has failures: true                                                                                                                                  
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-00_qpid_listen_block]: Skipping because of failed dependencies   
Notice: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/var/lib/puppet/concat/_etc_haproxy_haproxy.cfg/fragments/20-qpid-qpid_qpid_balancermember_qpid]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Concat[/etc/haproxy/haproxy.cfg]/Exec[concat_/etc/haproxy/haproxy.cfg]: Triggered 'refresh' from 4 events
Notice: /File[/etc/haproxy/haproxy.cfg]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /File[/etc/haproxy/haproxy.cfg]: Skipping because of failed dependencies
Info: Concat[/etc/haproxy/haproxy.cfg]: Scheduling refresh of Service[haproxy]
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[pcs-qpid-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Qpid/Exec[all-qpid-nodes-are-up]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Haproxy/Service[haproxy]: Skipping because of failed dependencies
Notice: /Stage[main]/Haproxy/Service[haproxy]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[pcs-haproxy-server-set-up-on-this-node]: Skipping because of failed dependencies
Notice: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Dependency Exec[Start Cluster openstack] has failures: true
Warning: /Stage[main]/Quickstack::Pacemaker::Load_balancer/Exec[all-haproxy-nodes-are-up]: Skipping because of failed dependencies
Notice: Finished catalog run in 731.18 seconds
[root@dhcp160-217 ~]#

Comment 14 errata-xmlrpc 2014-05-29 20:30:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0517.html

