Created attachment 1565542 [details]
Attachment contains log output of the journalctl -xe command

Description of problem:
The ceph-mgr service does not activate when SELinux is in enforcing mode.

Version-Release number of selected component (if applicable):
ceph-selinux-14.2.1-0.el8cp.x86_64
ceph-ansible-4.0.0-0.1.rc5.el8cp.noarch
ceph-common-14.2.1-0.el8cp.x86_64
ceph-mgr-14.2.1-0.el8cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set SELinux to enforcing mode
2. Deploy an RHCS 4.0 cluster on RHEL 8
3. Run the ceph -s command

Actual results:

$ sudo ceph -s
  cluster:
    id:     e507dd68-d877-489c-8ccb-c97e5d7cd452
    health: HEALTH_WARN
            no active mgr
            clock skew detected on mon.magna006, mon.magna019

  services:
    mon: 3 daemons, quorum magna004,magna006,magna019 (age 4m)
    mgr: no daemons active
    osd: 9 osds: 9 up (since 91s), 9 in (since 91s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Expected results:
The mgr should be up and running in enforcing mode.

Additional info:
Workaround: switch SELinux to permissive mode and restart the mgr service.
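The workaround can be sketched as the following commands, run as root on the affected mgr node. The unit name ceph-mgr@magna004 is an example based on the hostnames in this report; substitute your own mgr instance.

```
# Switch SELinux to permissive mode for the running system
# (denials are logged but no longer block the daemon).
setenforce 0

# Restart the mgr daemon so it can come up without the denial.
systemctl restart ceph-mgr@magna004.service

# Confirm the mode and check that an active mgr appears.
getenforce
ceph -s
```

Note that setenforce 0 does not persist across reboots; this is a temporary mitigation, not a fix for the ceph-selinux policy.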
Hey Uday, could you also attach the /var/log/audit/audit.log from the ceph-mgr node?
Hi. I just deployed RHCS 4 with slightly newer packages than those used in the original report. While all the other daemons came up, the mgr daemon didn't:

[root@jb-rhel-mon ~]# ceph -s
  cluster:
    id:     6cbdca86-e7e6-47d3-aafc-de338be62ee7
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum jb-rhel-mon (age 16m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 16s), 3 in (since 16s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

journalctl output:

May 31 14:46:30 jb-rhel-mon setroubleshoot[10323]: SELinux is preventing /usr/bin/ceph-mgr from using the nnp_transition access on a process. For complete SELinux messages run: sealert -l b1e37801-382b-404e-a8ab-eef15220a3d4
May 31 14:46:30 jb-rhel-mon platform-python[10323]: SELinux is preventing /usr/bin/ceph-mgr from using the nnp_transition access on a process.

    ***** Plugin catchall (100. confidence) suggests **************************

    If you believe that ceph-mgr should be allowed nnp_transition access on processes labeled ceph_t by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do allow this access for now by executing:
    # ausearch -c 'ceph-mgr' --raw | audit2allow -M my-cephmgr
    # semodule -X 300 -i my-cephmgr.pp

May 31 14:46:33 jb-rhel-mon setroubleshoot[10323]: SELinux is preventing /usr/bin/ceph-mgr from read access on the file keyring. For complete SELinux messages run: sealert -l 256be181-9015-405f-a5a6-9fb1f097aaa7
May 31 14:46:33 jb-rhel-mon platform-python[10323]: SELinux is preventing /usr/bin/ceph-mgr from read access on the file keyring.

    ***** Plugin catchall (100. confidence) suggests **************************

    If you believe that ceph-mgr should be allowed read access on the keyring file by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do allow this access for now by executing:
    # ausearch -c 'ceph-mgr' --raw | audit2allow -M my-cephmgr
    # semodule -X 300 -i my-cephmgr.pp

May 31 14:46:40 jb-rhel-mon systemd[1]: ceph-mgr: Service RestartSec=10s expired, scheduling restart.

I'm attaching the audit.log from the mon/mgr node.

[root@jb-rhel-mon ~]# ceph -v
ceph version 14.2.1-124-g35e6f59 (35e6f599741d153210217828daf1fdfd058d1db3) nautilus (stable)

[root@jb-rhel-mon ~]# rpm -qa | grep ceph
python3-ceph-argparse-14.2.1-124.g35e6f59.el8cp.x86_64
libcephfs2-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-common-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-14.2.1-124.g35e6f59.el8cp.x86_64
python3-cephfs-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-base-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mon-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-dashboard-14.2.1-124.g35e6f59.el8cp.noarch
ceph-selinux-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-diskprediction-local-14.2.1-124.g35e6f59.el8cp.noarch
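For reference, the ausearch/audit2allow pipeline suggested by setroubleshoot would produce a local policy module roughly like the sketch below. This is illustrative only: the actual rules depend on the AVC records in your audit.log, and the assumption here is that the nnp_transition denial is raised when systemd (init_t) launches ceph-mgr into the ceph_t domain under NoNewPrivileges.

```
# my-cephmgr.te -- illustrative local policy module, NOT verified output.
# Source domain init_t is an assumption; check the scontext in your AVC.
module my-cephmgr 1.0;

require {
    type init_t;
    type ceph_t;
    class process2 nnp_transition;
}

# With NoNewPrivileges=true in the unit file, the domain transition from
# systemd to ceph_t additionally requires the nnp_transition permission.
allow init_t ceph_t:process2 nnp_transition;
```

The semodule -X 300 -i my-cephmgr.pp step from the suggestion then installs the compiled module at priority 300. The proper fix, of course, is for the ceph-selinux package to ship this permission in its own policy rather than relying on a local module.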
Created attachment 1575809 [details]
jbrier audit.log from mon/mgr node
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312