Bug 1093759 - [HA] HA-installed cluster - services under PCS control are still chkconfig on
Summary: [HA] HA-installed cluster - services under PCS control are still chkconfig on
Keywords:
Status: CLOSED DUPLICATE of bug 1123303
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: z5
Target Release: 4.0
Assignee: Jason Guiditta
QA Contact: Ami Jeain
URL:
Whiteboard:
Depends On:
Blocks: 1040649
 
Reported: 2014-05-02 14:56 UTC by Steve Reichard
Modified: 2014-09-08 17:43 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-08 17:43:58 UTC
Target Upstream Version:
Embargoed:



Description Steve Reichard 2014-05-02 14:56:27 UTC
Description of problem:

My understanding is that, in a cluster, a service under Pacemaker control should not try to start on boot.

This is the output from a nova-network deployment:

[root@ospha1 conf.d(openstack_admin)]# pcs status
Cluster name: openstack
Last updated: Fri May  2 10:51:30 2014
Last change: Thu May  1 21:50:04 2014 via crmd on 10.16.139.32
Stack: cman
Current DC: 10.16.139.31 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured
70 Resources configured


Online: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]

Full list of resources:

 stonith-ipmilan-10.16.143.61	(stonith:fence_ipmilan):	Started 10.16.139.31 
 Resource Group: db
     fs-varlibmysql	(ocf::heartbeat:Filesystem):	Started 10.16.139.32 
     mysql-ostk-mysql	(ocf::heartbeat:mysql):	Started 10.16.139.32 
 stonith-ipmilan-10.16.143.62	(stonith:fence_ipmilan):	Started 10.16.139.33 
 stonith-ipmilan-10.16.143.63	(stonith:fence_ipmilan):	Started 10.16.139.31 
 Clone Set: lsb-memcached-clone [lsb-memcached]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.2	(ocf::heartbeat:IPaddr2):	Started 10.16.139.33 
 ip-10.16.139.3	(ocf::heartbeat:IPaddr2):	Started 10.16.139.31 
 Clone Set: lsb-qpidd-clone [lsb-qpidd]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.18	(ocf::heartbeat:IPaddr2):	Started 10.16.139.32 
 Clone Set: lsb-haproxy-clone [lsb-haproxy]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.19	(ocf::heartbeat:IPaddr2):	Started 10.16.139.33 
 ip-10.16.139.4	(ocf::heartbeat:IPaddr2):	Started 10.16.139.31 
 Clone Set: lsb-openstack-keystone-clone [lsb-openstack-keystone]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.5	(ocf::heartbeat:IPaddr2):	Started 10.16.139.32 
 Clone Set: fs-varlibglanceimages-clone [fs-varlibglanceimages]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-glance-api-clone [lsb-openstack-glance-api]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-glance-registry-clone [lsb-openstack-glance-registry]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.7	(ocf::heartbeat:IPaddr2):	Started 10.16.139.33 
 Clone Set: lsb-openstack-nova-scheduler-clone [lsb-openstack-nova-scheduler]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-nova-consoleauth-clone [lsb-openstack-nova-consoleauth]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-nova-conductor-clone [lsb-openstack-nova-conductor]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-nova-api-clone [lsb-openstack-nova-api]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-nova-novncproxy-clone [lsb-openstack-nova-novncproxy]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.6	(ocf::heartbeat:IPaddr2):	Started 10.16.139.32 
 Clone Set: lsb-openstack-cinder-api-clone [lsb-openstack-cinder-api]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-cinder-scheduler-clone [lsb-openstack-cinder-scheduler]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 ip-10.16.139.10	(ocf::heartbeat:IPaddr2):	Started 10.16.139.33 
 ip-10.16.139.17	(ocf::heartbeat:IPaddr2):	Started 10.16.139.31 
 Clone Set: lsb-openstack-heat-api-cloudwatch-clone [lsb-openstack-heat-api-cloudwatch]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-heat-api-cfn-clone [lsb-openstack-heat-api-cfn]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-httpd-clone [lsb-httpd]
     Stopped: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Clone Set: lsb-openstack-heat-api-clone [lsb-openstack-heat-api]
     Started: [ 10.16.139.31 10.16.139.32 10.16.139.33 ]
 Resource Group: heat
     lsb-openstack-heat-engine	(lsb:openstack-heat-engine):	Started 10.16.139.32 

Failed actions:
    lsb-httpd_start_0 on 10.16.139.31 'unknown error' (1): call=434, status=complete, last-rc-change='Fri May  2 08:55:56 2014', queued=54ms, exec=0ms
    lsb-openstack-heat-engine_monitor_30000 on 10.16.139.32 'not running' (7): call=725, status=complete, last-rc-change='Fri May  2 08:56:11 2014', queued=0ms, exec=0ms
    lsb-httpd_start_0 on 10.16.139.32 'unknown error' (1): call=446, status=complete, last-rc-change='Thu May  1 21:50:05 2014', queued=55ms, exec=0ms
    lsb-openstack-nova-consoleauth_monitor_30000 on 10.16.139.33 'not running' (7): call=183, status=complete, last-rc-change='Fri May  2 08:58:19 2014', queued=0ms, exec=0ms
    lsb-openstack-nova-api_monitor_30000 on 10.16.139.33 'not running' (7): call=199, status=complete, last-rc-change='Fri May  2 08:58:20 2014', queued=0ms, exec=0ms
    lsb-httpd_start_0 on 10.16.139.33 'unknown error' (1): call=374, status=complete, last-rc-change='Thu May  1 21:58:05 2014', queued=51ms, exec=0ms



[root@ospha1 conf.d(openstack_admin)]# chkconfig | grep -e openstack -e http -e neutron -e mysql -e haproxy | grep ":on"
haproxy        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
httpd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-cinder-api	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-cinder-scheduler	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-cinder-volume	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-glance-api	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-glance-registry	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-heat-api	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-heat-api-cfn	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-heat-api-cloudwatch	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-keystone	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-api	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-cert	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-conductor	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-consoleauth	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-novncproxy	0:off	1:off	2:on	3:on	4:on	5:on	6:off
openstack-nova-scheduler	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@ospha1 conf.d(openstack_admin)]# 
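
As a manual workaround (not something the installer currently does), boot-time start could be disabled on each cluster node for the services Pacemaker already manages; a minimal sketch, using the service names from the chkconfig output above (whether each one is actually Pacemaker-managed should be confirmed against pcs status first):

for svc in haproxy httpd openstack-cinder-api openstack-cinder-scheduler \
           openstack-cinder-volume openstack-glance-api openstack-glance-registry \
           openstack-heat-api openstack-heat-api-cfn openstack-heat-api-cloudwatch \
           openstack-keystone openstack-nova-api openstack-nova-cert \
           openstack-nova-conductor openstack-nova-consoleauth \
           openstack-nova-novncproxy openstack-nova-scheduler; do
    chkconfig "$svc" off    # leave runtime control to Pacemaker
done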




Version-Release number of selected component (if applicable):


[root@ospha-foreman nova_network]# yum list installed | grep -e foreman -e puppet
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
foreman.noarch                       1.3.0.4-1.el6sat      @RHOS-4.0            
foreman-installer.noarch             1:1.3.0-1.el6sat      @RHOS-4.0            
foreman-mysql.noarch                 1.3.0.4-1.el6sat      @RHOS-4.0            
foreman-mysql2.noarch                1.3.0.4-1.el6sat      @RHOS-4.0            
foreman-proxy.noarch                 1.3.0-3.el6sat        @RHOS-4.0            
foreman-selinux.noarch               1.3.0-1.el6sat        @RHOS-4.0            
openstack-foreman-installer.noarch   1.0.7-1.el6ost        @/openstack-foreman-installer-1.0.7-1.el6ost.noarch
openstack-puppet-modules.noarch      2013.2-9.el6ost       @RHOS-4.0            
puppet.noarch                        3.2.4-3.el6_5         @RHOS-4.0            
puppet-server.noarch                 3.2.4-3.el6_5         @RHOS-4.0            
ruby193-rubygem-foreman_openstack_simplify.noarch
rubygem-foreman_api.noarch           0.1.6-1.el6sat        @RHOS-4.0            
[root@ospha-foreman nova_network]# 



How reproducible:

Seen on all three installs I've done.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Mike Orazi 2014-05-05 20:10:24 UTC
After a bit of review, this does not appear to be the root cause of the issue we thought it was.  Moving to A5, as it will require some coordination with other upstream puppet modules to correctly expose the ability to specify that a given service should start but not be chkconfig'ed.
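
In shell terms, the behavior those modules need to expose maps roughly to the split below; in a Puppet service resource this would presumably be ensure => running combined with enable => false. Illustrative sketch only, not the actual installer change:

# Start the service now, but keep it disabled at boot so that
# only Pacemaker brings it up after a reboot.
service openstack-nova-api start
chkconfig openstack-nova-api off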

Comment 3 Mike Burns 2014-09-08 17:43:58 UTC
Similar to bug 1123303, where a lot of work has already been done.

*** This bug has been marked as a duplicate of bug 1123303 ***

