Description of problem:

OSP10->11->12 upgrade: upgrade fails with:

Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: neutron-db-manage upgrade heads returned 1 instead of one of [0]

Checking the logs on the controller:

Dec 08 12:11:39 controller-0 os-collect-config[3356]: "Error: /Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: neutron-db-manage upgrade heads returned 1 instead of one of [0]",

[root@controller-0 heat-admin]# neutron-db-manage upgrade heads
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
OK
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Traceback (most recent call last):
  File "/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 687, in main
    return_val |= bool(CONF.command.func(config, CONF.command.name))
  File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 206, in do_upgrade
    run_sanity_checks(config, revision)
  File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 671, in run_sanity_checks
    script_dir.run_env()
  File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 416, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/usr/lib/python2.7/site-packages/networking_bigswitch/plugins/bigswitch/db/migration/alembic_migrations/env.py", line 86, in <module>
    run_migrations_online()
  File "/usr/lib/python2.7/site-packages/networking_bigswitch/plugins/bigswitch/db/migration/alembic_migrations/env.py", line 77, in run_migrations_online
    context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 807, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in run_migrations
    for step in self._migrations_fn(heads, self):
  File "/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 662, in check_sanity
    revision, rev, implicit_base=True):
  File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 641, in _iterate_revisions
    requested_lowers = self.get_revisions(lower)
  File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 298, in get_revisions
    return sum([self.get_revisions(id_elem) for id_elem in id_], ())
  File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 303, in get_revisions
    for rev_id in resolved_id)
  File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 303, in <genexpr>
    for rev_id in resolved_id)
  File "/usr/lib/python2.7/site-packages/alembic/script/revision.py", line 358, in _revision_for_ident
    resolved_id)
alembic.script.revision.ResolutionError: No such revision or branch '2dc6f1b7c0a1'

Version-Release number of selected component (if applicable):
openstack-neutron-bigswitch-lldp-11.0.0-1.el7ost.noarch
python-networking-bigswitch-11.0.0-1.el7ost.noarch
openstack-neutron-bigswitch-agent-11.0.0-1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy OSP10 with 3 controller + 2 compute + 3 ceph nodes
2. Upgrade to OSP11
3. Upgrade to OSP12

Actual results:
major-upgrade-composable-steps-docker fails while running neutron db-sync

Expected results:
Upgrade completes fine.

Additional info:
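The failure mode can be illustrated without a database. Alembic stores the last applied revision in the alembic_version table and resolves it against the migration scripts shipped by the installed packages; if the installed python-networking-bigswitch predates that revision (here, a package missing the stable/pike migrations), the lookup fails with ResolutionError. The following is a minimal pure-Python sketch of that mechanism; the ResolutionError class and the shipped_revisions set are illustrative stand-ins, not alembic's actual internals:

```python
class ResolutionError(Exception):
    """Stand-in for alembic.script.revision.ResolutionError."""


# Illustrative: revision IDs shipped as migration scripts by the
# (outdated) installed package. '2dc6f1b7c0a1' is NOT among them.
shipped_revisions = {"abc123", "def456"}


def resolve_revision(rev_id):
    """Mimic alembic resolving a stored revision against shipped scripts."""
    if rev_id not in shipped_revisions:
        raise ResolutionError("No such revision or branch '%s'" % rev_id)
    return rev_id


# The database records the schema at '2dc6f1b7c0a1' (from the traceback),
# but the package does not ship that migration script:
try:
    resolve_revision("2dc6f1b7c0a1")
except ResolutionError as exc:
    print(exc)
```

This is why rebuilding the package with the stable/pike migration files (rather than touching the database) resolves the error.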
One workaround here is to remove python-networking-bigswitch from the controller nodes if bigswitch is not in use:

yum remove -y python-networking-bigswitch
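Before applying the workaround, it is worth confirming that nothing under the neutron configuration actually references the bigswitch plugin. A hypothetical helper sketch (the /etc/neutron path is the usual default; adapt to your deployment):

```python
import os


def bigswitch_referenced(conf_dir="/etc/neutron"):
    """Return True if any file under conf_dir mentions 'bigswitch'.

    Illustrative helper only; a real check might also inspect the
    ML2 mechanism_drivers and service_plugins options explicitly.
    """
    for root, _dirs, files in os.walk(conf_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, errors="ignore") as fh:
                    if "bigswitch" in fh.read():
                        return True
            except (IOError, OSError):
                continue
    return False


if __name__ == "__main__":
    if bigswitch_referenced():
        print("bigswitch is configured; do NOT remove the package")
    else:
        print("bigswitch not referenced; the workaround should be safe:")
        print("  yum remove -y python-networking-bigswitch")
```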
This sounds like it would impact all customers, regardless of whether bigswitch is used. We would need to force-remove the bigswitch packages from the installed nodes during upgrade to avoid this, but that would introduce problems for people actually running bigswitch: removing the packages would actively break them, even if they had fixed builds. I think the only solution here is to stop people from going 10->11->12 at GA and get a fix from bigswitch ASAP.
Hi, this seems like fallout from networking-bigswitch missing the stable/pike branch. Let me create that branch and see if it resolves this.
Hi Mike, I've created a stable/pike branch and pushed a new tag, v11.0.1, to PyPI. I'm able to fetch it now, so your build process should proceed without blockage. There's an open review to change the default_branch and test-requirements.txt [1] which has yet to merge; that should not block the testing, though. Let me know if you still face the issue. - Aditya [1] https://review.openstack.org/#/c/527190/
Correction: plugin v11.0.2 has the patches included. It also includes the minor fix that went into stable/pike with patch [1]. Because stable/pike was branched late, it picked up some neutron-lib changes from master that are not part of stable/pike; patch [1] fixes that. [1] https://review.openstack.org/#/c/527190/
Hi, the migration file is already in the downstream stable branch rhos-12.0-patches [1], so I think we just have to rebuild the package (python-networking-bigswitch) and it should be fine. [1] https://code.engineering.redhat.com/gerrit/gitweb?p=python-networking-bigswitch.git;a=tree;f=networking_bigswitch/plugins/bigswitch/db/migration/alembic_migrations/versions;h=63d295903bdd9c17065d27bbd0d9cc138b116931;hb=refs/heads/rhos-12.0-patches
Thierry, can you make sure this is rebuilt for z1? Thanks!
Verified: neutron-db-manage upgrade heads completes fine when upgrading from 10 to 12.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0617