Bug 1294683 - instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: RDO
Classification: Community
Component: rdo-manager
Version: Liberty
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: Liberty
Assignee: Hugh Brock
QA Contact: Shai Revivo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-29 16:40 UTC by Alexander Chuzhoy
Modified: 2017-06-18 06:11 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-06-18 06:11:30 UTC
Embargoed:


Attachments
messages and audit.log from the undercloud machine. (63.51 KB, application/x-gzip)
2015-12-29 16:42 UTC, Alexander Chuzhoy

Description Alexander Chuzhoy 2015-12-29 16:40:39 UTC
instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.

Environment:
instack-undercloud-2.1.3-1.el7.noarch
openstack-selinux-0.6.43-1.el7ost.noarch

Steps to reproduce:
1. Follow https://repos.fedorapeople.org/repos/openstack-m/rdo-manager-docs/liberty/installation/installing.html on RHEL7.1 
2. Attempt to run "openstack undercloud install".

Result:
The undercloud deployment throws various errors and then gets stuck.



Error: Could not start Service[ironic-inspector-dnsmasq]: Execution of '/bin/systemctl start openstack-ironic-inspector-dnsmasq' returned 1: Job for openstack-ironic-inspector-dnsmasq.service failed because the control process exited with error code. See "systemctl status openstack-ironic-inspector-dnsmasq.service" and "journalctl -xe" for details.                                                                            
Wrapped exception:                           
Execution of '/bin/systemctl start openstack-ironic-inspector-dnsmasq' returned 1: Job for openstack-ironic-inspector-dnsmasq.service failed because the control process exited with error code. See "systemctl status openstack-ironic-inspector-dnsmasq.service" and "journalctl -xe" for details.                                                                                                                                      
Error: /Stage[main]/Ironic::Inspector/Service[ironic-inspector-dnsmasq]/ensure: change from stopped to running failed: Could not start Service[ironic-inspector-dnsmasq]: Execution of '/bin/systemctl start openstack-ironic-inspector-dnsmasq' returned 1: Job for openstack-ironic-inspector-dnsmasq.service failed because the control process exited with error code. See "systemctl status openstack-ironic-inspector-dnsmasq.service" and "journalctl -xe" for details.                                                                                                  
Notice: /Stage[main]/Main/File[/etc/keystone/ssl/private/signing_key.pem]/content: content changed '{md5}7ef9ad4d9c7880dbbe415583e71de2b3' to '{md5}ebaadb8f4b28a33c72577d3de06be6aa'                                
Notice: /Stage[main]/Main/File[/etc/keystone/ssl/certs/ca.pem]/content: content changed '{md5}8803cfe79fa940d01de97de45cb7cf9c' to '{md5}c7cf8393ba10881c8aff28bc8d6458f5'                                           
Notice: /Stage[main]/Main/File[/etc/keystone/ssl/certs/signing_cert.pem]/content: content changed '{md5}37086304f93c4042e14756ffdc8a9ec4' to '{md5}9cb461cd55b4bda353971599009185de'                                 
Notice: /Stage[main]/Swift::Storage::Account/Service[swift-account-reaper]/ensure: ensure changed 'stopped' to 'running'                                                                                             
Notice: /Stage[main]/Swift::Storage::Account/Service[swift-account-auditor]/ensure: ensure changed 'stopped' to 'running'                                                                                            
Notice: /Stage[main]/Swift::Storage::Container/Service[swift-container-auditor]/ensure: ensure changed 'stopped' to 'running'                                                                                        
Notice: /Stage[main]/Swift::Storage::Object/Service[swift-object-auditor]/ensure: ensure changed 'stopped' to 'running'                                                                                              
Notice: /Stage[main]/Swift::Storage::Object/Service[swift-object-updater]/ensure: ensure changed 'stopped' to 'running'                                                                                              
Notice: /Stage[main]/Swift::Proxy/Service[swift-proxy]/ensure: ensure changed 'stopped' to 'running'                                                                                                                 
Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Service[swift-account-replicator]/ensure: ensure changed 'stopped' to 'running'                                                        
Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/Service[swift-account]/ensure: ensure changed 'stopped' to 'running'                                                                   
Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Service[swift-container-replicator]/ensure: ensure changed 'stopped' to 'running'                                                  
Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/Service[swift-container]/ensure: ensure changed 'stopped' to 'running'                                                             
Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Service[swift-object]/ensure: ensure changed 'stopped' to 'running'                                                                      
Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/Service[swift-object-replicator]/ensure: ensure changed 'stopped' to 'running'                                                           
Notice: /Stage[main]/Swift::Storage::Container/Service[swift-container-updater]/ensure: ensure changed 'stopped' to 'running'                                                                                        
Error: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a fatal signal was delivered to the control process. See "systemctl status neutron-server.service" and "journalctl -xe" for details.                                                                                                                                         
Wrapped exception:                                                                                                                                                                                                   
Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a fatal signal was delivered to the control process. See "systemctl status neutron-server.service" and "journalctl -xe" for details.                                                                                                                                                                                         
Error: /Stage[main]/Neutron::Server/Service[neutron-server]/ensure: change from stopped to running failed: Could not start Service[neutron-server]: Execution of '/bin/systemctl start neutron-server' returned 1: Job for neutron-server.service failed because a fatal signal was delivered to the control process. See "systemctl status neutron-server.service" and "journalctl -xe" for details.                                     
Notice: /Stage[main]/Neutron::Agents::Dhcp/Service[neutron-dhcp-service]/ensure: ensure changed 'stopped' to 'running'                                                                                               
Error: Could not start Service[keystone]: Execution of '/bin/systemctl start openstack-keystone' returned 1: Job for openstack-keystone.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-keystone.service" and "journalctl -xe" for details.                                                                                                                                   
Wrapped exception:                                   
Execution of '/bin/systemctl start openstack-keystone' returned 1: Job for openstack-keystone.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-keystone.service" and "journalctl -xe" for details.                                                                                                                                                                             
Error: /Stage[main]/Keystone::Service/Service[keystone]/ensure: change from stopped to running failed: Could not start Service[keystone]: Execution of '/bin/systemctl start openstack-keystone' returned 1: Job for openstack-keystone.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-keystone.service" and "journalctl -xe" for details.                                   
Notice: /Stage[main]/Keystone::Service/Service[keystone]: Triggered 'refresh' from 3 events          
Notice: /Stage[main]/Keystone/Anchor[keystone_started]: Dependency Service[keystone] has failures: true              
Warning: /Stage[main]/Keystone/Anchor[keystone_started]: Skipping because of failed dependencies                                     
Notice: Puppet::Provider::Openstack: project service is unavailable. Will retry for up to 9 seconds.      
Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `each' for nil:NilClass       
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]: Dependency Service[keystone] has failures: true                              
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]: Skipping because of failed dependencies                                                                                                         
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[services]: Dependency Service[keystone] has failures: true                                                                                               
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[services]: Skipping because of failed dependencies                                                                                                      
Notice: Puppet::Provider::Openstack: role service is unavailable. Will retry for up to 10 seconds.                                                                                                                   
Error: Could not prefetch keystone_role provider 'openstack': undefined method `collect' for nil:NilClass                                                                                                            
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]: Dependency Service[keystone] has failures: true                                      
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]: Skipping because of failed dependencies                           
Notice: Puppet::Provider::Openstack: user service is unavailable. Will retry for up to 9 seconds.                            
Error: Could not prefetch keystone_user provider 'openstack': undefined method `each' for nil:NilClass            
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Dependency Service[keystone] has failures: true              
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Skipping because of failed dependencies                                      
Notice: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Dependency Service[keystone] has failures: true                                                                
Warning: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Skipping because of failed dependencies                                
Notice: Puppet::Provider::Openstack: domain service is unavailable. Will retry for up to 10 seconds.          
Error: Could not prefetch keystone_domain provider 'openstack': undefined method `collect' for nil:NilClass                          
Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat_domain]: Dependency Service[keystone] has failures: true                                                
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat_domain]: Skipping because of failed dependencies                                                                                                   
Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_domain_admin]: Dependency Service[keystone] has failures: true                                                                                        
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_domain_admin]: Skipping because of failed dependencies                                                                                               
Notice: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat_stack]: Dependency Service[keystone] has failures: true                                                                             
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin@::heat_stack]: Skipping because of failed dependencies                                                                                    
Error: Could not start Service[nova-cert]: Execution of '/bin/systemctl start openstack-nova-cert' returned 1: Job for openstack-nova-cert.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-cert.service" and "journalctl -xe" for details.                                                                                                                               
Wrapped exception:                                          
Execution of '/bin/systemctl start openstack-nova-cert' returned 1: Job for openstack-nova-cert.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-cert.service" and "journalctl -xe" for details.
Error: /Stage[main]/Nova::Cert/Nova::Generic_service[cert]/Service[nova-cert]/ensure: change from stopped to running failed: Could not start Service[nova-cert]: Execution of '/bin/systemctl start openstack-nova-cert' returned 1: Job for openstack-nova-cert.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-cert.service" and "journalctl -xe" for details.
Error: Could not start Service[nova-api]: Execution of '/bin/systemctl start openstack-nova-api' returned 1: Job for openstack-nova-api.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-api.service" and "journalctl -xe" for details.
Wrapped exception:
Execution of '/bin/systemctl start openstack-nova-api' returned 1: Job for openstack-nova-api.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-api.service" and "journalctl -xe" for details.
Error: /Stage[main]/Nova::Api/Nova::Generic_service[api]/Service[nova-api]/ensure: change from stopped to running failed: Could not start Service[nova-api]: Execution of '/bin/systemctl start openstack-nova-api' returned 1: Job for openstack-nova-api.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-api.service" and "journalctl -xe" for details.
Error: Could not start Service[nova-scheduler]: Execution of '/bin/systemctl start openstack-nova-scheduler' returned 1: Job for openstack-nova-scheduler.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-scheduler.service" and "journalctl -xe" for details.
Wrapped exception:
Execution of '/bin/systemctl start openstack-nova-scheduler' returned 1: Job for openstack-nova-scheduler.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-scheduler.service" and "journalctl -xe" for details.
Error: /Stage[main]/Nova::Scheduler/Nova::Generic_service[scheduler]/Service[nova-scheduler]/ensure: change from stopped to running failed: Could not start Service[nova-scheduler]: Execution of '/bin/systemctl start openstack-nova-scheduler' returned 1: Job for openstack-nova-scheduler.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-scheduler.service" and "journalctl -xe" for details.
Error: Could not start Service[nova-conductor]: Execution of '/bin/systemctl start openstack-nova-conductor' returned 1: Job for openstack-nova-conductor.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-conductor.service" and "journalctl -xe" for details.
Wrapped exception:
Execution of '/bin/systemctl start openstack-nova-conductor' returned 1: Job for openstack-nova-conductor.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-conductor.service" and "journalctl -xe" for details.
Error: /Stage[main]/Nova::Conductor/Nova::Generic_service[conductor]/Service[nova-conductor]/ensure: change from stopped to running failed: Could not start Service[nova-conductor]: Execution of '/bin/systemctl start openstack-nova-conductor' returned 1: Job for openstack-nova-conductor.service failed because a fatal signal was delivered to the control process. See "systemctl status openstack-nova-conductor.service" and "journalctl -xe" for details.
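
A quick way to confirm that SELinux is what is blocking these service starts is to check the enforcement mode and look for fresh AVC denials in the audit log (generic diagnostic commands, assumed here rather than taken from this report):

  # Check whether SELinux is currently enforcing
  getenforce
  # List recent AVC denials recorded by auditd
  ausearch -m AVC -ts recent
  # Translate the denials into human-readable explanations
  grep denied /var/log/audit/audit.log | audit2allow -w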



Expected result:
The undercloud installation should complete successfully without turning off SELinux.
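
As a workaround sketch only (the proper resolution should come through an updated openstack-selinux policy), the install can usually be unblocked either by temporarily switching to permissive mode or by building a local policy module from the recorded denials; the module name below is arbitrary:

  # Temporary: run the install in permissive mode, then restore enforcing
  setenforce 0
  openstack undercloud install
  setenforce 1

  # Or: generate and load a local policy module from the denials in audit.log
  # ("local_undercloud" is an arbitrary, hypothetical module name)
  grep denied /var/log/audit/audit.log | audit2allow -M local_undercloud
  semodule -i local_undercloud.pp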

Comment 1 Alexander Chuzhoy 2015-12-29 16:42:20 UTC
Created attachment 1110280 [details]
messages and audit.log from the undercloud machine.

Comment 2 Alexander Chuzhoy 2015-12-29 17:07:07 UTC
Note: The issue was observed on RHEL 7.1.

Comment 5 Christopher Brown 2017-06-17 19:05:44 UTC
This is long since fixed and can be closed.

