With podman 4.1.1 as shipped in the el9 family (at least cs9), we're facing a breaking change: the "healthcheck" key in the "podman inspect" output has been renamed to "health". This change would lead to a major service outage during day-2 operations, since every container with a configured healthcheck would be restarted even though it isn't supposed to be. We must make sure the tripleo-ansible content knows about this change, so that the comparison between running containers (now showing "health") and their configuration (still listing "healthcheck") stays consistent.
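As an illustration of the kind of normalization this implies (not the actual tripleo-ansible code; "keystone" is just a placeholder container name), a consumer can read the runtime health status regardless of which key the installed podman exposes, e.g. with jq's alternative operator:

  # returns State.Health on podman >= 4.1, falls back to State.Healthcheck on older versions
  sudo podman inspect keystone | jq '.[0].State.Health // .[0].State.Healthcheck'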
Note: it's State.Healthcheck that was renamed to State.Health - Config.Healthcheck, supposedly used for idempotency (as is all of Config), hasn't changed (yet). I'll do a deploy+redeploy on my cs9 env and check the inspect output for both.

It does re-create the container, at least that's what we can see in upstream CI, meaning something differs at some point, either detected in the podman collection or in tripleo-ansible (or related). I'd tend to think it's within the tripleo codebase rather than the collection, but I hope to know more tomorrow.

Here we can see the container being re-created during a molecule run:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_79b/847774/2/check/tripleo-ansible-centos-stream-molecule-tripleo_container_manage/79b561f/reports.html
"""
TASK [Assert that fedora container has not been re-created] ********************
fatal: [instance]: FAILED! => changed=false
  assertion: fedora_infos_new['containers'][0]['Id'] == fedora_infos_old['containers'][0]['Id']
  evaluated_to: false
  msg: fedora container was wrongly re-created

PLAY RECAP *********************************************************************
instance : ok=47 changed=15 unreachable=0 failed=1 skipped=12 rescued=0 ignored=0
"""
Needless to say, this shouldn't happen.
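For reference, the inspect check mentioned above can be done like this on a deployed node (illustrative only, "keystone" again being a placeholder container name), to see which keys the installed podman exposes under State and Config:

  # does this podman report "Health" or "Healthcheck" for the runtime state?
  sudo podman inspect keystone | jq '.[0].State | keys'
  # the configured healthcheck, used for idempotency, should still be under Config
  sudo podman inspect keystone | jq '.[0].Config | keys'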
Here's a way to verify this issue.
Needed resources: 1 undercloud
Steps:
- deploy the undercloud
- take note of the running containers (for instance: sudo podman ps > first-deploy.list)
- re-deploy the undercloud
- take note of the running containers (for instance: sudo podman ps > second-deploy.list)
- compare the two listings (a sketch of this comparison is shown below)
Containers shouldn't be re-created, meaning you should see the same container IDs in both files.
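One possible way to do that comparison (illustrative only; any equivalent diffing of the two files works):

  # before the re-deploy
  sudo podman ps --format '{{.ID}} {{.Names}}' | sort > first-deploy.list
  # ... re-deploy the undercloud ...
  sudo podman ps --format '{{.ID}} {{.Names}}' | sort > second-deploy.list
  # empty diff output means no container was re-created
  diff first-deploy.list second-deploy.list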
Used the procedure in Comment 11 and confirmed the container IDs were the same before and after the undercloud was redeployed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2022:6543