Bug 1368533

Summary: OSP-9/10 upgrade fails to restart nova-conductor.
Product: Red Hat OpenStack
Component: openstack-tripleo-heat-templates
Version: 10.0 (Newton)
Target Release: 10.0 (Newton)
Target Milestone: rc
Status: CLOSED ERRATA
Reporter: Sofer Athlan-Guyot <sathlang>
Assignee: Jiri Stransky <jstransk>
QA Contact: Omri Hochman <ohochman>
CC: jcoufal, jschluet, mburns, mlammon, rhel-osp-director-maint
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Keywords: Triaged
Type: Bug
Fixed In Version: openstack-tripleo-heat-templates-5.0.0-0.20160907212643.90c852e.1.el7ost
Last Closed: 2016-12-14 15:52:27 UTC
Bug Blocks: 1337794

Description Sofer Athlan-Guyot 2016-08-19 16:45:26 UTC
Description of problem: Everything is in the upstream bug.

Comment 2 Sofer Athlan-Guyot 2016-08-19 16:46:58 UTC
Adding link to upstream review.

Comment 3 Sofer Athlan-Guyot 2016-08-29 21:16:43 UTC
Hi,

After the major upgrade of the overcloud, nova-conductor fails to restart:

    2016-08-19 15:58:35.784 20259 CRITICAL nova [req-a6f2c2dc-99af-43ed-bcf2-c4d2afdefce5 - - - - -] ConfigFileValueError: Value for option scheduler_host_manager is not valid: Valid values are [host_manager, ironic_host_manager], but found 'nova.scheduler.host_manager.HostManager'
    2016-08-19 15:58:35.784 20259 ERROR nova Traceback (most recent call last):
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/bin/nova-conductor", line 10, in <module>
    2016-08-19 15:58:35.784 20259 ERROR nova sys.exit(main())
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/nova/cmd/conductor.py", line 47, in main
    2016-08-19 15:58:35.784 20259 ERROR nova service.wait()
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/nova/service.py", line 415, in wait
    2016-08-19 15:58:35.784 20259 ERROR nova _launcher.wait()
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 568, in wait
    2016-08-19 15:58:35.784 20259 ERROR nova self.conf.log_opt_values(LOG, logging.DEBUG)
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2630, in log_opt_values
    2016-08-19 15:58:35.784 20259 ERROR nova _sanitize(opt, getattr(self, opt_name)))
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2275, in __getattr__
    2016-08-19 15:58:35.784 20259 ERROR nova return self._get(name)
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2680, in _get
    2016-08-19 15:58:35.784 20259 ERROR nova value = self._do_get(name, group, namespace)
    2016-08-19 15:58:35.784 20259 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2723, in _do_get
    2016-08-19 15:58:35.784 20259 ERROR nova % (opt.name, str(ve)))
    2016-08-19 15:58:35.784 20259 ERROR nova ConfigFileValueError: Value for option scheduler_host_manager is not valid: Valid values are [host_manager, ironic_host_manager], but found 'nova.scheduler.host_manager.HostManager'
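
For context, the failure comes from oslo.config's lazy validation: in Newton the option only accepts the short names listed in the error message, while the configuration written for the previous release still carries the full class path. A minimal sketch of that behaviour (this is not Nova's actual option registration, and nova-test.conf is just a throwaway file for the illustration):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opt(cfg.StrOpt(
        'scheduler_host_manager',
        choices=['host_manager', 'ironic_host_manager'],
        default='host_manager'))

    # Simulate a nova.conf left over from the old release, still holding
    # the full class path instead of one of the allowed names.
    with open('nova-test.conf', 'w') as f:
        f.write('[DEFAULT]\n'
                'scheduler_host_manager = '
                'nova.scheduler.host_manager.HostManager\n')

    conf(args=[], default_config_files=['nova-test.conf'])

    # The value is only converted and validated when it is read, which is
    # why the service starts and then dies with ConfigFileValueError while
    # logging its option values.
    print(conf.scheduler_host_manager)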

Setting scheduler_host_manager to host_manager fixes it.
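
For reference, the manual workaround on an affected controller is that one-line change in /etc/nova/nova.conf (the section is assumed to be [DEFAULT], matching the unprefixed option name in the traceback), followed by a restart of openstack-nova-conductor:

    [DEFAULT]
    scheduler_host_manager = host_manager

With the Fixed In Version of openstack-tripleo-heat-templates listed above, the upgrade should write an accepted value itself, so the manual edit is only a stop-gap for environments upgraded with older templates.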

Comment 6 mlammon 2016-11-15 19:31:03 UTC
Deployed the latest RHOS 9, then upgraded to RHOS 10 with the latest puddle (2016-11-14.1).

I no longer see this issue.

[stack@undercloud-0 ~]$ ssh heat-admin.2.10
Last login: Tue Nov 15 19:04:39 2016 from gateway
[heat-admin@controller-0 ~]$ sudo -i
[root@controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: controller-2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Nov 15 19:08:40 2016		Last change: Tue Nov 15 01:10:37 2016 by root via crm_resource on controller-0

3 nodes and 19 resources configured

Online: [ controller-0 controller-1 controller-2 ]

Full list of resources:

 ip-fd00.fd00.fd00.4000..10	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-192.0.2.6	(ocf::heartbeat:IPaddr2):	Started controller-1
 Clone Set: haproxy-clone [haproxy]
     Started: [ controller-0 controller-1 controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ controller-0 controller-1 controller-2 ]
 ip-2620.52.0.13b8.5054.ff.fe3e.1	(ocf::heartbeat:IPaddr2):	Started controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ controller-0 controller-1 controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ controller-0 ]
     Slaves: [ controller-1 controller-2 ]
 ip-fd00.fd00.fd00.3000..10	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-fd00.fd00.fd00.2000..10	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-fd00.fd00.fd00.2000..11	(ocf::heartbeat:IPaddr2):	Started controller-2
 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started controller-0

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Comment 9 errata-xmlrpc 2016-12-14 15:52:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html