Bug 1390317 - Mitaka to Newton upgrade fails during controller upgrade step for neutron related migration 'File "/usr/bin/neutron-db-manage"'
Summary: Mitaka to Newton upgrade fails during controller upgrade step for neutron rel...
Keywords:
Status: CLOSED DUPLICATE of bug 1389987
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ga
Target Release: 10.0 (Newton)
Assignee: Assaf Muller
QA Contact: Toni Freger
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-31 16:59 UTC by Marios Andreou
Modified: 2016-10-31 17:44 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-31 17:44:11 UTC
Target Upstream Version:
Embargoed:


Attachments
full os-collect-config output from controller-0 (15.81 KB, text/plain)
2016-10-31 16:59 UTC, Marios Andreou
debug info from running list_nodes_status (output of heat deployment stdout or err) (17.07 KB, text/plain)
2016-10-31 17:01 UTC, Marios Andreou

Description Marios Andreou 2016-10-31 16:59:27 UTC
Created attachment 1215863 [details]
full os-collect-config output from controller-0

Description of problem:

Mitaka to Newton upgrade fails during the controller upgrade step on a neutron-related migration ('File "/usr/bin/neutron-db-manage"' appears in the traceback).

After a successful ceilometer migration and upgrade init, the controller upgrade step fails. Initial debugging shows the following error on controller-0, which appears to come from a neutron database migration (fuller logs attached):



        78116:Oct 31 14:54:25 overcloud-controller-0.localdomain os-collect-config[5462]: hon2.7/site-packages/pymysql/connections.py\", line 942, in _read_query_result\n    result.read()\n  File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 1138, in read\n    first_packet = self.connection._read_packet()\n  File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 906, in _read_packet\n    packet.check_error()\n  File \"/usr/lib/python2.7/site-packages/pymysql/connections.py\", line 367, in check_error\n    err.raise_mysql_exception(self._data)\n  File \"/usr/lib/python2.7/site-packages/pymysql/err.py\", line 120, in raise_mysql_exception\n    _check_mysql_exception(errinfo)\n  File \"/usr/lib/python2.7/site-packages/pymysql/err.py\", line 115, in _check_mysql_exception\n    raise InternalError(errno, errorvalue)\noslo_db.exception.DBError: (pymysql.err.InternalError) (1067, u\"Invalid default value for 'created_at'\") [SQL: u\"\\nCREATE TABLE opendaylightjournal_new (\\n\\tseqnum BIGINT NOT NULL AUTO_INCREMENT, \\n\\tobject_type VARCHAR(36) NOT NULL, \\n\\tobject_uuid VARCHAR(36) NOT NULL, \\n\\toperation VARCHAR(36) NOT NULL, \\n\\tdata BLOB, \\n\\tstate ENUM('pending','processing','failed','completed') NOT NULL, \\n\\tretry_count INTEGER, \\n\\tcreated_at DATETIME DEFAULT now(), \\n\\tlast_retried TIMESTAMP NULL DEFAULT now(), \\n\\tPRIMARY KEY (seqnum)\\n)ENGINE=InnoDB\\n\\n\"]\n", "deploy_status_code": 1}
        78316:Oct 31 14:54:25 overcloud-controller-0.localdomain os-collect-config[5462]: oslo_db.exception.DBError: (pymysql.err.InternalError) (1067, u"Invalid default value for 'created_at'") [SQL: u"\nCREATE TABLE opendaylightjournal_new (\n\tseqnum BIGINT NOT NULL AUTO_INCREMENT, \n\tobject_type VARCHAR(36) NOT NULL, \n\tobject_uuid VARCHAR(36) NOT NULL, \n\toperation VARCHAR(36) NOT NULL, \n\tdata BLOB, \n\tstate ENUM('pending','processing','failed','completed') NOT NULL, \n\tretry_count INTEGER, \n\tcreated_at DATETIME DEFAULT now(), \n\tlast_retried TIMESTAMP NULL DEFAULT now(), \n\tPRIMARY KEY (seqnum)\n)ENGINE=InnoDB\n\n"]
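
For reference, the failing statement can be replayed directly against MariaDB on controller-0 to confirm the error outside of the upgrade run. This is only a sketch: the CREATE TABLE body is copied verbatim from the error above, but the scratch database name is made up for safety, root DB access on the controller is assumed, and whether it reproduces will depend on the MariaDB version and sql_mode in the overcloud:

        # Run on controller-0 (assumes root access to the local galera node).
        # 'bz1390317_test' is a throwaway scratch database, not the neutron DB.
        mysql -e "CREATE DATABASE IF NOT EXISTS bz1390317_test"
        mysql bz1390317_test -e "CREATE TABLE opendaylightjournal_new (
            seqnum BIGINT NOT NULL AUTO_INCREMENT,
            object_type VARCHAR(36) NOT NULL,
            object_uuid VARCHAR(36) NOT NULL,
            operation VARCHAR(36) NOT NULL,
            data BLOB,
            state ENUM('pending','processing','failed','completed') NOT NULL,
            retry_count INTEGER,
            created_at DATETIME DEFAULT now(),
            last_retried TIMESTAMP NULL DEFAULT now(),
            PRIMARY KEY (seqnum)
        ) ENGINE=InnoDB"
        # Expected (per the log above): ERROR 1067 (42000): Invalid default value for 'created_at'
        mysql -e "DROP DATABASE IF EXISTS bz1390317_test"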


Steps to Reproduce:

1. Deploy OSP9
2. Run ceilometer migration 
3. Run upgrade init
4. Run the controller upgrade - the error is hit here (command and output below)


        " UPGRADE CONTROLLERS ":
        openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates -e  /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e network_env.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml
        Started Mistral Workflow. Execution ID: 6efa37d6-064b-4697-a67a-ad729d54514d
        2016-10-31 12:51:32Z [BlockStorage]: UPDATE_IN_PROGRESS  state changed
        ...
        2016-10-31 14:54:31Z [overcloud]: UPDATE_FAILED  resources.UpdateWorkflow: Error: resources.ControllerPacemakerUpgradeDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
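
To get the full traceback without re-running the whole stack update, the failing migration step can be re-run by hand on controller-0. The config-file paths below are the usual packaged defaults and are an assumption; they may not match exactly what the upgrade script passes:

        # On controller-0: re-run the neutron DB migration by hand.
        # Config-file paths are the packaged defaults and may differ from what
        # the upgrade script actually uses.
        neutron-db-manage --config-file /etc/neutron/neutron.conf \
                          --config-file /etc/neutron/plugin.ini upgrade heads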


I will attach the output of list_nodes_status, which contains more debug information, as well as the full os-collect-config output from controller-0 as a separate file.

I've heard of one other person who has hit this using the latest puddle (Sofer); filing this bug as an action item from today's scrum.

Comment 1 Marios Andreou 2016-10-31 17:01:13 UTC
Created attachment 1215864 [details]
debug info from running list_nodes_status (output of heat deployment stdout or err)

Comment 3 Marios Andreou 2016-10-31 17:44:11 UTC
Update from Sofer, who tested the fix from BZ 1389987 - marking this as a duplicate. (Sorry for the noise; we are keen to avoid as much last-minute pain as possible and eager to file bugs and get them looked at as soon as we hit them. Thanks to mcornea and Sofer for the extra info.)

*** This bug has been marked as a duplicate of bug 1389987 ***

