Bug 1594367 - OSP13 minor update: all containers restarted even when Docker RPM is not updated
Summary: OSP13 minor update: all containers restarted even when Docker RPM is not updated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z3
Target Release: 13.0 (Queens)
Assignee: Jiri Stransky
QA Contact: Raviv Bar-Tal
URL:
Whiteboard:
Depends On: 1589684
Blocks:
 
Reported: 2018-06-22 17:43 UTC by Assaf Muller
Modified: 2018-11-13 22:27 UTC
CC List: 23 users

Fixed In Version: openstack-tripleo-heat-templates-8.0.7-2.el7ost
Doc Type: Bug Fix
Doc Text:
In prior versions, the conditions for checking whether the Docker daemon requires a restart were too strict. As a result, the Docker daemon and all containers were restarted whenever the Docker configuration changed or the Docker RPM was updated. With this release, the conditions are relaxed and the "live restore" functionality is used for configuration changes, so the Docker daemon and all containers are restarted when the Docker RPM is updated, but not when only the Docker configuration changes. (See the configuration sketch after the Links section below.)
Clone Of: 1589684
Environment:
Last Closed: 2018-11-13 22:26:39 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1777146 0 None None None 2018-06-22 17:43:41 UTC
OpenStack gerrit 601590 0 None MERGED [Queens] Don't stop docker on config changes 2021-01-28 19:26:32 UTC
Red Hat Product Errata RHBA-2018:3587 0 None None None 2018-11-13 22:27:19 UTC
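
The "live restore" functionality referred to in the Doc Text is Docker's daemon-level option that keeps containers running while the daemon itself is stopped or restarted. As a rough illustration only (the exact mechanism is implemented by the tripleo-heat-templates patch linked above, not by hand-editing a node), enabling it on a RHEL 7 node could look like this:

    # Illustrative sketch: enable Docker live restore so containers survive
    # a daemon restart. The file path and option name are standard Docker,
    # but editing them by hand is not part of the OSP13 minor update workflow.
    cat /etc/docker/daemon.json
    # {
    #   "live-restore": true
    # }

    # Restart only the daemon; with live-restore enabled, running containers
    # stay up and the daemon re-attaches to them when it comes back.
    systemctl restart docker
    docker ps   # container uptimes should be unchanged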

Comment 4 Jiri Stransky 2018-06-26 12:50:31 UTC
Currently the containers are stopped if we detect either a config change or an RPM update for Docker.

Perhaps we could make the stops/failovers occur less frequently by one of the following:

A) Only checking for an RPM update. (Config changes alone probably wouldn't leave Docker unable to manage the previously running containers after its restart. (?)) A rough sketch of this kind of check follows this list.

B) Going a bit further and actually trying to parse what kind of Docker RPM change we are making during the update. Perhaps we could get away with keeping the containers running if we only change the patch number of the RPM, but not the version number. But can we rely on patch-number-only RPM updates being "safe" and never causing us to lose the ability to manage the containers that were left running?

C) There was also a suggestion that we'd only stop containers managed by Paunch and Pacemaker. (We'd also have to think about software managed by external installers like ceph-ansible, so the approach would probably end up being "stop everything except the Neutron-managed containers".) I think we haven't yet completely ruled out the possibility of the persisting containers still becoming unmanageable, though.
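
For illustration, a minimal shell sketch of the kind of check option A describes (the package name and the update command are placeholders; the real logic lives in the tripleo-heat-templates update tasks):

    # Hypothetical sketch of option A: restart the Docker daemon (and thus
    # the containers) only when the docker RPM itself changed during the
    # minor update, not on configuration-only changes.
    docker_before=$(rpm -q docker)
    yum -y update docker        # placeholder for the minor update package step
    docker_after=$(rpm -q docker)

    if [ "$docker_before" != "$docker_after" ]; then
        # The RPM changed, so a full daemon restart is unavoidable.
        systemctl restart docker
    fi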

Comment 10 Jiri Stransky 2018-09-11 15:20:16 UTC
Posted a Queens-only patch (Rocky+ handles this differently and already doesn't seem to stop containers when the Docker config changes).

I tested by editing the Docker config by hand and running a minor update. The config got set back to the state dictated by Puppet, the Docker service got restarted (visible in `systemctl status docker` uptime), while the containers remained up (visible in `docker ps` uptime). There didn't seem to be any duplicate services running (checked e.g. via `pgrep -a nova-compute`).
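
Roughly, the verification above amounts to the following steps (the edited file and the checked process name are illustrative; any Puppet-managed Docker option would do):

    # 1. Hand-edit some Puppet-managed Docker option on the node
    #    (file and option are illustrative).
    vi /etc/sysconfig/docker

    # 2. Run the OSP13 minor update for this node.

    # 3. Check the results:
    systemctl status docker     # daemon uptime resets: the service was restarted
    docker ps                   # container uptimes unchanged: containers kept running
    pgrep -a nova-compute       # no duplicate service processes expected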

Comment 19 errata-xmlrpc 2018-11-13 22:26:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3587

