Fixed by https://github.com/openshift/openshift-ansible/pull/8423, fix is available in openshift-ansible-3.10.0-0.69.0
We're planning to use PR https://github.com/openshift/openshift-ansible/pull/8684 to address this issue, right?
But I see that the proposed PRs for 3.10 and 3.9 are both not merged yet.
So how about leaving this bug to track 3.9, and reopening BZ#1587825 to track the new PR for 3.10?
Please correct me if I got something wrong, thanks.
(In reply to Gaoyun Pei from comment #5)
> We're planning to use PR
> https://github.com/openshift/openshift-ansible/pull/8684 to address this
> issue, right?
PR #8684 would enable this setting for all containers - however httpd pod for CFME already has it enabled by https://github.com/openshift/openshift-ansible/pull/8423, so the bug is actually testable now.
Moving back to ON_QA
(In reply to Vadim Rutkovsky from comment #6)
> (In reply to Gaoyun Pei from comment #5)
> > We're planning to use PR
> > https://github.com/openshift/openshift-ansible/pull/8684 to address this
> > issue, right?
> PR #8684 would enable this setting for all containers - however httpd pod
> for CFME already has it enabled by
> https://github.com/openshift/openshift-ansible/pull/8423, so the bug is
> actually testable now.
> Moving back to ON_QA
Ok, actually the verification of https://github.com/openshift/openshift-ansible/pull/8423 on OCP 3.10 was already done in https://bugzilla.redhat.com/show_bug.cgi?id=1587825#c6.
This bug was cloned specifically for 3.9, so I think it should be verified on OCP 3.9. PR https://github.com/openshift/openshift-ansible/pull/8839/ has already been merged into the release-3.9 branch; waiting for a new 3.9 openshift-ansible RPM package to verify it.
Moving to MODIFIED until a new build is ready.
Verified this bug with openshift-ansible-3.9.33-1.git.56.19ba16e.el7.noarch.
After a fresh installation, the container_manage_cgroup sebool was set to "on" on the nodes.
[root@ip-172-18-3-148 ~]# getsebool -a |grep container_manage_cgroup
container_manage_cgroup --> on
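For reference, a minimal sketch of how an installer playbook could enable this boolean persistently, using Ansible's seboolean module (this is an illustrative assumption, not the exact task from the openshift-ansible PR):

```yaml
# Hedged sketch: persistently enable the SELinux boolean that lets
# systemd inside a container manage its own cgroup hierarchy.
- name: Enable container_manage_cgroup SELinux boolean
  seboolean:
    name: container_manage_cgroup
    state: yes
    persistent: yes
```

Manually, the same effect can be had on a node with `setsebool -P container_manage_cgroup on`.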
After running the CFME deployment playbook, all the pods were running well.
[root@ip-172-18-10-128 ~]# oc get pod -n openshift-management
NAME                 READY     STATUS    RESTARTS   AGE
cloudforms-0         1/1       Running   0          10m
httpd-1-fjsd7        1/1       Running   0          9m
memcached-1-gprh8    1/1       Running   0          10m
postgresql-1-pfrk7   1/1       Running   0          10m
The CloudForms 4.6 web console is also available. Moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.