Description of problem:
The curator pod is always in Error status and no indices are deleted. Curator pod logs:
$ oc logs curator-1557378900-7wk8m
2019-05-09 05:15:17,357 INFO Found curator configuration in [/etc/curator/settings/config.yaml]
2019-05-09 05:15:17,363 INFO Converting config file.
Traceback (most recent call last):
File "/usr/bin/curator", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 3007, in <module>
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 728, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 626, in resolve
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy logging using downstream images.
2. Set the curator cronjob schedule to `*/5 * * * *`, wait a while, then check the curator pod status.
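For reference, step 2 can be done with a patch like the following; the cronjob name `curator` and the `openshift-logging` namespace are assumptions and may need adjusting to the deployed names:

```shell
# Hypothetical example: make the curator cronjob run every 5 minutes
# (cronjob name and namespace are assumptions; adjust to your deployment)
oc patch cronjob/curator -n openshift-logging \
  -p '{"spec":{"schedule":"*/5 * * * *"}}'
```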
Also tested with the upstream image (quay.io/openshift/origin-logging-curator5@sha256:4d7a56d09666abdd606c737c6d79b011cab7c8608ab9aa569d8f5e67a0afa413), which does not have this issue.
Copying Josef's comment from Slack here:
The package `python-elasticsearch-5.5.5-2.el7` in `rhel-7-server-ose-4.1-rpms` installs the same files twice, which causes resolution problems.
The package `python-elasticsearch-5.4.0-1.el7` from `rhel-7-server-ose-3.11-rpms` installs correctly.
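The duplicate-file symptom can be spotted by counting the paths a package installs (for example, the output of `rpm -ql python-elasticsearch`). A minimal sketch; the file list below is hypothetical:

```python
from collections import Counter

def duplicate_files(paths):
    """Return paths that appear more than once in a package's file list."""
    counts = Counter(paths)
    return sorted(p for p, n in counts.items() if n > 1)

# Hypothetical output of `rpm -ql python-elasticsearch` on an affected host
files = [
    "/usr/lib/python2.7/site-packages/elasticsearch/__init__.py",
    "/usr/lib/python2.7/site-packages/elasticsearch/client.py",
    "/usr/lib/python2.7/site-packages/elasticsearch/__init__.py",
]
print(duplicate_files(files))
# → ['/usr/lib/python2.7/site-packages/elasticsearch/__init__.py']
```

An empty result means every file was installed exactly once, as with the 5.4.0 package above.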
Verified in openshift/ose-logging-curator5:v4.1.0-201905121530
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.