Bug 1258897 - "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6",
Summary: "status_reason": "deploy_status_code : Deployment exited with non-zero status...
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: unspecified
Target Milestone: y2
Target Release: 7.0 (Kilo)
Assignee: Emilien Macchi
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-09-01 13:27 UTC by Tzach Shefi
Modified: 2016-04-18 06:55 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-15 23:59:17 UTC
Target Upstream Version:
Embargoed:


Attachments
keystone log from controller-0 (13.90 MB, text/plain)
2015-09-02 08:37 UTC, Tzach Shefi

Description Tzach Shefi 2015-09-01 13:27:30 UTC
Description of problem: Followed the OSPD lab doc at http://10.33.11.10/pub/director-training/ and then ran:

openstack overcloud deploy --templates     --ntp-server 10.5.26.10 --control-scale 3 --compute-scale 2     --neutron-tunnel-types vxlan --neutron-network-type vxlan
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
ERROR: openstack Heat Stack update failed.
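
One way to narrow this kind of failure down from the undercloud (a sketch for the Kilo-era heat client; the --nested-depth option and the grep pattern are assumptions about this environment, and <deployment-id> is a placeholder) is to list the failed nested resources and then dump the failing software deployment, which is where the deployment-show output further down comes from:

[stack@undercloud ~]$ heat resource-list --nested-depth 5 overcloud | grep -i failed
[stack@undercloud ~]$ heat deployment-show <deployment-id>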

Version-Release number of selected component (if applicable):
rhel7.1
openstack-tripleo-puppet-elements-0.0.1-4.el7ost.noarch
openstack-heat-common-2015.1.0-6.el7ost.noarch

How reproducible:
Unsure; this is the first time I've hit this.

Steps to Reproduce:
1. Follow the guide until you get to the openstack overcloud deploy step.

Actual results:
ERROR: openstack Heat Stack update failed.

Expected results:
The overcloud deployment should complete.

Additional info:

ControllerNodesPostDeployment               |
| ControllerOvercloudServicesDeployment_Step7 | 3ebdbf51-a507-4043-9b5e-c0a812d22502          | OS::Heat::StructuredDeployments                   | UPDATE_FAILED   | 2015-09-01T12:10:55Z | ControllerNodesPostDeployment               |
| 1                                           | 56c68ea8-0115-4018-b1b4-a42833254602          | OS::Heat::StructuredDeployment                    | CREATE_FAILED   | 2015-09-01T12:10:58Z | ControllerOvercloudServicesDeployment_Step7 |
+---------------------------------------------+-----------------------------------------------+---------------------------------------------------+-----------------+----------------------+---------------------------------------------+

[stack@undercloud ~]$ heat deployment-show 56c68ea8-0115-4018-b1b4-a42833254602 
.....
....
... release.
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/share/ruby/vendor_ruby/puppet/type.rb:816:in `set_default')
Error: Could not prefetch keystone_tenant provider 'openstack': undefined method `collect' for nil:NilClass
Error: Could not prefetch keystone_role provider 'openstack': undefined method `collect' for nil:NilClass
Error: Could not prefetch keystone_user provider 'openstack': undefined method `collect' for nil:NilClass
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user_role[admin@admin]: Could not evaluate: undefined method `empty?' for nil:NilClass
Warning: /Stage[main]/Heat::Keystone::Domain/Exec[heat_domain_create]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Heat_domain_id_setter[heat_domain_id]: Skipping because of failed dependencies
",
    "deploy_status_code": 6
  }, 
  "creation_time": "2015-09-01T12:10:59Z", 
  "updated_time": "2015-09-01T12:15:49Z", 
  "input_values": {}, 
  "action": "CREATE", 
  "status_reason": "deploy_status_code : Deployment exited with non-zero status code: 6", 
  "id": "56c68ea8-0115-4018-b1b4-a42833254602"
}

The system is still up if logs or access are needed.
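
For anyone picking this up while the system is still up, a minimal sketch of where to look on the failing controller, assuming the Kilo-era os-collect-config/heat-config layout (the unit name and path are assumptions, not verified on this system):

$ sudo journalctl -u os-collect-config --no-pager | tail -n 200   # agent that polls Heat and runs the deployments
$ sudo ls /var/lib/heat-config/deployed/                          # per-deployment results, if any were written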

Comment 3 Tzach Shefi 2015-09-02 08:37:15 UTC
Created attachment 1069299 [details]
keystone log from controller-0

Comment 4 Tzach Shefi 2015-09-02 08:38:15 UTC
Some more info, if it helps:

It's a single bare-metal host running all the VMs on top: an HA setup with 3 controllers + 2 compute nodes.

On the undercloud:
[stack@undercloud ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+----------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks             |
+--------------------------------------+------------------------+--------+------------+-------------+----------------------+
| f0cfafa0-839a-47dc-8d64-d31669dd2420 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=172.16.0.27 |
| d99c828f-2833-49a6-8bf2-f08c8ff0e275 | overcloud-compute-1    | ACTIVE | -          | Running     | ctlplane=172.16.0.25 |
| db6a26e1-2cc6-4780-80fb-b22da60b634b | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=172.16.0.28 |
| 1fc3fb03-34cd-43ed-8b24-66f387419831 | overcloud-controller-1 | ACTIVE | -          | Running     | ctlplane=172.16.0.26 |
| 054c6108-bbe5-49e6-b8f3-4484141e8dca | overcloud-controller-2 | ACTIVE | -          | Running     | ctlplane=172.16.0.24 |
+--------------------------------------+------------------------+--------+------------+-------------+----------------------+


SSHed into overcloud-controller-0.

I can see the OpenStack packages installed with rpm -qa | grep openstack.
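
A quick sanity check that keystone itself is answering on the controller could look like the following sketch (in OSP 7 keystone still runs as its own service rather than under httpd, but the unit name and port here are assumptions worth verifying):

$ sudo systemctl status openstack-keystone
$ curl -s http://127.0.0.1:5000/v2.0 | python -m json.tool   # should return the v2.0 version document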

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test               |
+--------------------+
2 rows in set (0.07 sec)
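
Only information_schema and test being present suggests keystone's database was never created/synced on this node, or Galera isn't healthy. A sketch of how one might check, assuming the standard OSP 7 HA layout with a pacemaker-managed galera resource (the resource name and the wsrep variable are assumptions to verify):

$ sudo pcs status | grep -i -A 3 galera
$ mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"   # should match the number of controllers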

Grepping the keystone log, I found these errors:

2015-08-30 07:20:48.148 26541 DEBUG oslo_db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py:513

2015-08-30 07:22:31.271 26542 DEBUG keystone.common.sql.core [-] Conflict domain: (_mysql_exceptions.IntegrityError) (1062, "Duplicate entry 'heat_stack' for key 'ixu_domain_name'") [SQL: u'INSERT INTO domain (id, name, enabled, extra) VALUES (%s, %s, %s, %s)'] [parameters: ('4783f89cbab74c4e913b1f1547bb88a4', 'heat_stack', 1, '{"description": "Contains users and projects created by heat"}')] wrapper /usr/lib/python2.7/site-packages/keystone/common/sql/core.py:408
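
The "Duplicate entry 'heat_stack'" conflict means the heat_stack domain row already existed when this INSERT ran (possibly created from another controller). It can be checked directly on whichever controller actually holds the replicated keystone database; the table and column names below are taken from the INSERT statement above:

$ mysql keystone -e "SELECT id, name, enabled FROM domain WHERE name='heat_stack';"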

Attaching the keystone log; apologies for the size (about 14 MB compressed), but I didn't want to cut out any clues. The deployment has been stuck for a few days.

Comment 6 Emilien Macchi 2015-10-01 20:53:01 UTC
I haven't been able to find what's wrong with your deployment. Could you tell us if you're still facing this issue?

Comment 7 Tzach Shefi 2015-10-13 12:41:33 UTC
The server is gone; I'll update the bug if I hit this again.

Comment 8 Mike Burns 2015-10-15 23:59:17 UTC
Closing for now. Please reopen if you encounter this again.

