Bug 1390317
| Summary: | Mitaka to Newton upgrade fails during controller upgrade step for neutron related migration 'File "/usr/bin/neutron-db-manage"' | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Marios Andreou <mandreou> |
| Component: | openstack-neutron | Assignee: | Assaf Muller <amuller> |
| Status: | CLOSED DUPLICATE | QA Contact: | Toni Freger <tfreger> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 10.0 (Newton) | CC: | amuller, chrisw, jcoufal, nyechiel, ohochman, sathlang, srevivo |
| Target Milestone: | ga | | |
| Target Release: | 10.0 (Newton) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-10-31 17:44:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Created attachment 1215864 [details]
debug info from running list_nodes_status (output of heat deployment stdout or err)
Update from Sofer, who tested the fix from BZ 1389987 - marking this as a duplicate. (Sorry for the noise; we are keen to avoid as much last-minute pain as possible and eager to file bugs and get them looked at as soon as we hit them. Thanks to mcornea and Sofer for the extra info.)

*** This bug has been marked as a duplicate of bug 1389987 ***
Created attachment 1215863 [details]
full os-collect-config output from controller-0

Description of problem:

Mitaka to Newton upgrade fails during controller upgrade step for neutron related migration 'File "/usr/bin/neutron-db-manage"'.

After a successful ceilometer migration and upgrade init, the controller upgrade step fails. Initial debugging shows this error on the controllers (on controller-0), which seems to be related to a migration (fuller logs attached):

78116:Oct 31 14:54:25 overcloud-controller-0.localdomain os-collect-config[5462]: hon2.7/site-packages/pymysql/connections.py\", line 942, in _read_query_result\n result.read()\n File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 1138, in read\n first_packet = self.connection._read_packet()\n File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 906, in _read_packet\n packet.check_error()\n File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 367, in check_error\n err.raise_mysql_exception(self._data)\n File \"/usr/lib/python2.7/site-packages/pymysql/err.py\", line 120, in raise_mysql_exception\n _check_mysql_exception(errinfo)\n File \"/usr/lib/python2.7/site-packages/pymysql/err.py\", line 115, in _check_mysql_exception\n raise InternalError(errno, errorvalue)\noslo_db.exception.DBError: (pymysql.err.InternalError) (1067, u\"Invalid default value for 'created_at'\") [SQL: u\"\\nCREATE TABLE opendaylightjournal_new (\\n\\tseqnum BIGINT NOT NULL AUTO_INCREMENT, \\n\\tobject_type VARCHAR(36) NOT NULL, \\n\\tobject_uuid VARCHAR(36) NOT NULL, \\n\\toperation VARCHAR(36) NOT NULL, \\n\\tdata BLOB, \\n\\tstate ENUM('pending','processing','failed','completed') NOT NULL, \\n\\tretry_count INTEGER, \\n\\tcreated_at DATETIME DEFAULT now(), \\n\\tlast_retried TIMESTAMP NULL DEFAULT now(), \\n\\tPRIMARY KEY (seqnum)\\n)ENGINE=InnoDB\\n\\n\"]\n", "deploy_status_code": 1}

78316:Oct 31 14:54:25 overcloud-controller-0.localdomain os-collect-config[5462]: oslo_db.exception.DBError: (pymysql.err.InternalError) (1067, u"Invalid default value for 'created_at'") [SQL: u"\nCREATE TABLE opendaylightjournal_new (\n\tseqnum BIGINT NOT NULL AUTO_INCREMENT, \n\tobject_type VARCHAR(36) NOT NULL, \n\tobject_uuid VARCHAR(36) NOT NULL, \n\toperation VARCHAR(36) NOT NULL, \n\tdata BLOB, \n\tstate ENUM('pending','processing','failed','completed') NOT NULL, \n\tretry_count INTEGER, \n\tcreated_at DATETIME DEFAULT now(), \n\tlast_retried TIMESTAMP NULL DEFAULT now(), \n\tPRIMARY KEY (seqnum)\n)ENGINE=InnoDB\n\n"]

Steps to Reproduce:
1. Deploy OSP9
2. Run ceilometer migration
3. Run upgrade init
4. Run controller upgrade - hit the error here

"UPGRADE CONTROLLERS":

openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates -e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e network_env.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml

Started Mistral Workflow. Execution ID: 6efa37d6-064b-4697-a67a-ad729d54514d
2016-10-31 12:51:32Z [BlockStorage]: UPDATE_IN_PROGRESS state changed
...
2016-10-31 14:54:31Z [overcloud]: UPDATE_FAILED resources.UpdateWorkflow: Error: resources.ControllerPacemakerUpgradeDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1

I will attach the output of list_nodes_status, which contains more debug information, as well as the full os-collect-config run output from controller-0 as another file.

I've heard of one other person who has hit this using the latest puddle (Sofer); filing the bug as an action item from today's scrum.
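For context on the failure itself: the migration tries to create opendaylightjournal_new with a created_at DATETIME DEFAULT now() column. The MariaDB 5.5 series, which the pre-upgrade Mitaka overcloud is typically still running at this point, only allows CURRENT_TIMESTAMP/now() defaults on TIMESTAMP columns, so the CREATE TABLE is rejected with error 1067 "Invalid default value"; MariaDB 10.x accepts it. This is a guess at the root cause based on the error text, not something confirmed in this bug. A minimal sketch for checking it on the affected controller is below; the scratch database name is made up for this check, and the usual root client credentials on the controller are assumed.

```
# Sketch only: check the running MariaDB version and try the rejected DDL shape.
# The scratch database name (bz1390317_scratch) is hypothetical.
mysql --version

mysql <<'SQL'
CREATE DATABASE IF NOT EXISTS bz1390317_scratch;
USE bz1390317_scratch;
-- Same column shape as the failing migration. On MariaDB 5.5 this fails with
-- ERROR 1067 (42000): Invalid default value for 'created_at';
-- on MariaDB 10.x it succeeds.
CREATE TABLE odl_journal_check (
    seqnum BIGINT NOT NULL AUTO_INCREMENT,
    created_at DATETIME DEFAULT now(),
    last_retried TIMESTAMP NULL DEFAULT now(),
    PRIMARY KEY (seqnum)
) ENGINE=InnoDB;
DROP DATABASE bz1390317_scratch;
SQL
```

Similarly, the failing step can be exercised by hand on the controller, outside of Heat/os-collect-config, to confirm whether the migration still fails once the fix from bug 1389987 is in place. The config file paths below are the standard TripleO layout and are an assumption, not taken from this environment's logs.

```
# Sketch: invoke the neutron schema migration directly on controller-0.
# Config file paths are the usual TripleO defaults and may differ.
neutron-db-manage --config-file /etc/neutron/neutron.conf \
                  --config-file /etc/neutron/plugin.ini \
                  upgrade heads
```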