Bug 2101495 - podman 4.1.1 changes default "ipc" value
Summary: podman 4.1.1 changes default "ipc" value
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ansible-collection-containers-podman
Version: 17.0 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: beta
Target Release: 17.0
Assignee: OSP Team
QA Contact: Joe H. Rahme
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-27 15:42 UTC by Cédric Jeanneret
Modified: 2022-09-21 12:23 UTC

Fixed In Version: ansible-collection-containers-podman-1.9.4-1.el9ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-21 12:23:00 UTC
Target Upstream Version:
Embargoed:




Links
- Github: containers/ansible-podman-collections pull 444 (Merged): Change IpcMode default to shareable (2022-06-29 06:10:44 UTC)
- OpenStack gerrit: 847774 (master, MERGED): tripleo-ansible: Update for new podman output (Iad3054bd384f7875f111dc0a72830c9e0e9fda9a) (2022-06-30 21:18:24 UTC)
- Red Hat Issue Tracker: OSP-16054 (2022-06-27 15:43:18 UTC)
- Red Hat Product Errata: RHEA-2022:6543 (2022-09-21 12:23:24 UTC)

Description Cédric Jeanneret 2022-06-27 15:42:46 UTC
With podman 4.1.1 as shipped in the el9 family (at least CentOS Stream 9), we're facing a breaking change:

The "healthcheck" key in the "podman inspect" output has been renamed to "health". This change would lead to a major service outage during day-2 operations, since every container with a configured healthcheck would be restarted, even when it wasn't supposed to be.


We must ensure the tripleo-ansible content knows about this change, so that comparisons between running containers (showing "health") and configured containers (listing "healthcheck") stay consistent.
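The rename can be absorbed by accepting both key names when parsing the inspect output. A minimal sketch; the helper name and the sample data are hypothetical, not the actual collection code:

```python
def get_state_health(inspect_entry):
    """Return the health section of a `podman inspect` entry,
    accepting both the new State.Health key (podman >= 4.1.1)
    and the old State.Healthcheck key."""
    state = inspect_entry.get("State", {})
    return state.get("Health", state.get("Healthcheck"))

# Simulated inspect output for both podman versions:
old_style = {"State": {"Healthcheck": {"Status": "healthy"}}}
new_style = {"State": {"Health": {"Status": "healthy"}}}

# Both forms resolve to the same health section.
assert get_state_health(old_style) == get_state_health(new_style)
```

A tolerant lookup like this keeps the comparison stable across podman versions instead of pinning the code to one key name.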

Comment 9 Cédric Jeanneret 2022-06-28 15:08:28 UTC
Note: it's the State.Healthcheck key that was renamed to State.Health; the Config.Healthcheck key, supposedly used for idempotency (as is all of Config), hasn't changed (yet).

I'll do a deploy+redeploy on my cs9 env and check the inspect output for both. It should re-create the container, at least that's what we see in upstream CI, meaning something differs at some point, detected either in podman-collections or in tripleo-ansible (or related).

I'd tend to think it's within the tripleo codebase, not the collection, but I hope to know more tomorrow.

Here, we can see how the container is re-created during a molecule run:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_79b/847774/2/check/tripleo-ansible-centos-stream-molecule-tripleo_container_manage/79b561f/reports.html

"""
TASK [Assert that fedora container has not been re-created] ********************
fatal: [instance]: FAILED! => changed=false
  assertion: fedora_infos_new['containers'][0]['Id'] == fedora_infos_old['containers'][0]['Id']
  evaluated_to: false
  msg: fedora container was wrongly re-created

PLAY RECAP *********************************************************************
instance                   : ok=47   changed=15   unreachable=0    failed=1    skipped=12   rescued=0    ignored=0
"""

Needless to say, this shouldn't happen.

Comment 11 Cédric Jeanneret 2022-07-08 12:12:38 UTC
Here's a way to verify this issue:

Needed resources: 1 undercloud
Steps:
- deploy the undercloud
- take note of the running containers (for instance: sudo podman ps > first-deploy.list)
- re-deploy the undercloud
- take note of the running containers (for instance: sudo podman ps > second-deploy.list)
- compare the two listings: containers shouldn't be recreated, meaning you should see the same container IDs in both files.
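The comparison step can be scripted. A sketch assuming the default `podman ps` table layout, where the first whitespace-separated column is the container ID; the listings below are fabricated sample data, not real output:

```python
def container_ids(listing):
    """Extract the container-ID column from `podman ps` output."""
    lines = listing.strip().splitlines()[1:]  # skip the header row
    return {line.split()[0] for line in lines}

# Fabricated sample captures of first-deploy.list / second-deploy.list:
first = """CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
aaa111  img1  cmd  1h  Up  -  svc_a
bbb222  img2  cmd  1h  Up  -  svc_b"""
second = first  # a clean redeploy should yield identical IDs

# Symmetric difference: IDs present in one listing but not the other.
recreated = container_ids(first) ^ container_ids(second)
assert not recreated, f"containers were recreated: {recreated}"
```

In a real check you would read the two files captured above (`first-deploy.list`, `second-deploy.list`) instead of inline strings; any non-empty symmetric difference means a container was recreated.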

Comment 14 David Rosenfeld 2022-07-12 14:40:46 UTC
Followed the procedure in Comment 11 and saw that the container IDs were the same before and after the undercloud was redeployed.

Comment 20 errata-xmlrpc 2022-09-21 12:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:6543

