Bug 1874474 - [osp13][update] Octavia containers fail during update until converge fixes them.
Summary: [osp13][update] Octavia containers fail during update until converge fixes them.
Keywords:
Status: CLOSED DUPLICATE of bug 1869587
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: z13
Target Release: 13.0 (Queens)
Assignee: Brent Eagles
QA Contact: David Rosenfeld
URL:
Whiteboard:
Duplicates: 1877814
Depends On:
Blocks:
 
Reported: 2020-09-01 12:53 UTC by Sofer Athlan-Guyot
Modified: 2023-12-15 19:07 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-18 20:30:31 UTC
Target Upstream Version:
Embargoed:



Description Sofer Athlan-Guyot 2020-09-01 12:53:11 UTC
Description of problem: During an OSP13 update, one of our customers had a failure during the update run on the controllers. The Octavia containers were not starting correctly and failed with this kind of error:


INFO:__main__:Setting permission for /var/log/octavia/octavia.log
++ cat /run_command
+ CMD='/usr/bin/octavia-worker --config-file /usr/share/octavia/octavia-dist.conf --config-file /etc/octavia/octavia.conf --log-file /var/log/octavia/worker.log --config-file /etc/octavia/post-deploy.conf --config-dir /etc/octavia/conf.d/octavia-worker'
+ ARGS=
+ [[ ! -n '' ]]
+ . kolla_extend_start
++ set -o errexit
++ OCTAVIA_LOG_DIR=/var/log/kolla/octavia
++ [[ ! -d /var/log/kolla/octavia ]]
+++ stat -c %a /var/log/kolla/octavia
++ [[ 2755 != \7\5\5 ]]
++ chmod 755 /var/log/kolla/octavia
++ . /usr/local/bin/kolla_octavia_extend_start
+ echo 'Running command: '\''/usr/bin/octavia-worker --config-file /usr/share/octavia/octavia-dist.conf --config-file /etc/octavia/octavia.conf --log-file /var/log/octavia/worker.log --config-file /etc/octavia/post-deploy.conf --config-dir /etc/octavia/conf.d/octavia-worker'\'''
Running command: '/usr/bin/octavia-worker --config-file /usr/share/octavia/octavia-dist.conf --config-file /etc/octavia/octavia.conf --log-file /var/log/octavia/worker.log --config-file /etc/octavia/post-deploy.conf --config-dir /etc/octavia/conf.d/octavia-worker'
+ exec /usr/bin/octavia-worker --config-file /usr/share/octavia/octavia-dist.conf --config-file /etc/octavia/octavia.conf --log-file /var/log/octavia/worker.log --config-file /etc/octavia/post-deploy.conf --config-dir /etc/octavia/conf.d/octavia-worker
Traceback (most recent call last):
  File "/usr/bin/octavia-worker", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/octavia/cmd/octavia_worker.py", line 30, in main
    octavia_service.prepare_service(sys.argv)
  File "/usr/lib/python2.7/site-packages/octavia/common/service.py", line 25, in prepare_service
    config.init(argv[1:])
  File "/usr/lib/python2.7/site-packages/octavia/common/config.py", line 631, in init
    **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2504, in __call__
    raise ConfigFilesNotFoundError(self._namespace._files_not_found)
oslo_config.cfg.ConfigFilesNotFoundError: Failed to find some config files: /etc/octavia/post-deploy.conf


Then, when converge was applied (that is, the deploy command re-applied), the container configuration was fixed and the containers started.

This is certainly related to https://review.opendev.org/#/c/691936 and the related https://review.opendev.org/#/c/691935/ . In those patches we change the default configuration path, but those tasks are only run during deployment, so the command fails during update and is only settled during converge. The "octavia-*" roles seem to be handled by Mistral directly; I am not sure exactly when during the converge (deploy) process this is triggered.
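
For reference, a quick way to confirm which controllers are affected is to check for the file both on the host side and inside the worker container. This is only a sketch: the host-side path and the container name below are assumptions based on a default OSP13 layout, not values taken from the customer environment.

# Sketch only: verify whether the config file the worker expects is present.
# The host-side path and container name are assumptions for a default
# OSP13 deployment; adjust them to the actual environment.
HOST_CONF=/var/lib/config-data/puppet-generated/octavia/etc/octavia/post-deploy.conf

# Present in the host-side config directory?
ls -l "$HOST_CONF" || echo "post-deploy.conf missing on $(hostname)"

# Visible from inside the worker container (only works while it is running)?
docker exec octavia_worker ls -l /etc/octavia/post-deploy.conf \
  || echo "post-deploy.conf missing inside octavia_worker"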

We need to be able to make those adjustments during update, either with specific update tasks or in the Puppet run (not sure Puppet is used to create the new file, though).

With an update task we run it on the host, so it won't be easy to create; a Puppet configuration would be much easier.
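
As a purely hypothetical stop-gap on nodes already hit by this (not a replacement for the update task or Puppet fix discussed above), making the file exist is enough for oslo.config to get past ConfigFilesNotFoundError, since only a missing --config-file aborts startup; the worker may still lack the real post-deploy settings until converge regenerates them. The path and container name are the same assumptions as in the check above.

# Hypothetical stop-gap sketch, under the same path assumptions as above:
# an empty post-deploy.conf lets the worker process start, and converge
# later rewrites the real contents when the deploy command is re-applied.
HOST_CONF=/var/lib/config-data/puppet-generated/octavia/etc/octavia/post-deploy.conf
[ -f "$HOST_CONF" ] || sudo touch "$HOST_CONF"
sudo docker restart octavia_worker   # repeat for other octavia_* containers if they failed the same way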

Comment 2 Brent Eagles 2020-09-10 15:36:17 UTC
*** Bug 1877814 has been marked as a duplicate of this bug. ***

Comment 6 Brent Eagles 2020-11-18 20:30:31 UTC

*** This bug has been marked as a duplicate of bug 1869587 ***

