Bug 1707762 - ceph mgr service is not activating in enforcing mode
Summary: ceph mgr service is not activating in enforcing mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Mgr Plugins
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.0
Assignee: Boris Ranto
QA Contact: subhash
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-05-08 09:44 UTC by Uday kurundwade
Modified: 2020-01-31 12:46 UTC
CC List: 10 users

Fixed In Version: ceph-14.2.1-508.g0f9d32b.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-31 12:46:15 UTC
Embargoed:


Attachments
Attachment contains log output of journalctl -xe command (13.85 KB, text/plain), 2019-05-08 09:44 UTC, Uday kurundwade
jbrier audit.log from mon/mgr node (1.50 MB, text/plain), 2019-05-31 18:58 UTC, John Brier


Links
Github ceph ceph pull 28511: closed, "nautilus: selinux: Update the policy for RHEL8", last updated 2021-01-25 17:04:06 UTC
Red Hat Product Errata RHBA-2020:0312, last updated 2020-01-31 12:46:51 UTC

Description Uday kurundwade 2019-05-08 09:44:02 UTC
Created attachment 1565542 [details]
Attachment contains log output of journalctl -xe command

Description of problem:
ceph mgr service is not activating in enforcing mode

Version-Release number of selected component (if applicable):
ceph-selinux-14.2.1-0.el8cp.x86_64
ceph-ansible-4.0.0-0.1.rc5.el8cp.noarch
ceph-common-14.2.1-0.el8cp.x86_64
ceph-mgr-14.2.1-0.el8cp.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Set SELinux to enforcing mode
2. Deploy an RHCS 4.0 cluster on RHEL 8
3. Run the ceph -s command (a reproduction sketch follows below)
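
A minimal reproduction sketch; the setenforce/getenforce calls are standard, while the inventory file name (hosts) and playbook (site.yml) are the usual ceph-ansible conventions rather than details taken from this report:

$ sudo setenforce 1                     # or set SELINUX=enforcing in /etc/selinux/config and reboot
$ getenforce                            # should print "Enforcing"
$ ansible-playbook -i hosts site.yml    # deploy the RHCS cluster with ceph-ansible
$ sudo ceph -s                          # when the bug triggers, mgr reports "no daemons active"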

Actual results:
$ sudo ceph -s
  cluster:
    id:     e507dd68-d877-489c-8ccb-c97e5d7cd452
    health: HEALTH_WARN
            no active mgr
            clock skew detected on mon.magna006, mon.magna019
 
  services:
    mon: 3 daemons, quorum magna004,magna006,magna019 (age 4m)
    mgr: no daemons active
    osd: 9 osds: 9 up (since 91s), 9 in (since 91s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     


Expected results:
mgr should be up and running in enforcing mode

Additional info:
The workaround is to switch SELinux to permissive mode and restart the mgr service; a sketch follows below.
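
A sketch of that workaround, assuming a systemd-managed deployment; ceph-mgr.target restarts all mgr instances on the node (use ceph-mgr@<id>.service for a single instance):

$ sudo setenforce 0                       # switch SELinux to permissive mode
$ sudo systemctl restart ceph-mgr.target  # restart the local mgr daemon(s)
$ sudo ceph -s                            # mgr should now show an active daemon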

Comment 1 Boris Ranto 2019-05-13 13:16:55 UTC
Hey Uday,

could you also attach the /var/log/audit/audit.log from the ceph-mgr node?
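
For reference, a quick way to pull just the ceph-mgr denials from that log (assuming auditd is running with its default configuration):

# ausearch -m AVC -c 'ceph-mgr' -i

The -i flag translates numeric ids in the audit records into readable names.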

Comment 7 John Brier 2019-05-31 18:57:37 UTC
Hi. I just deployed RHCS 4 with slightly newer packages than those used in the original report, and while all the other daemons came up, the mgr daemon didn't:

[root@jb-rhel-mon ~]# ceph -s
  cluster:
    id:     6cbdca86-e7e6-47d3-aafc-de338be62ee7
    health: HEALTH_WARN
            no active mgr
 
  services:
    mon: 1 daemons, quorum jb-rhel-mon (age 16m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 16s), 3 in (since 16s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

journalctl output:

May 31 14:46:30 jb-rhel-mon setroubleshoot[10323]: SELinux is preventing /usr/bin/ceph-mgr from using the nnp_transition access on a process. For complete SELinux messages run: sealert -l b1e37801-382b-404e-a8ab-eef15220a3d4
May 31 14:46:30 jb-rhel-mon platform-python[10323]: SELinux is preventing /usr/bin/ceph-mgr from using the nnp_transition access on a process.
                                                    
                                                    *****  Plugin catchall (100. confidence) suggests   **************************
                                                    
                                                    If you believe that ceph-mgr should be allowed nnp_transition access on processes labeled ceph_t by default.
                                                    Then you should report this as a bug.
                                                    You can generate a local policy module to allow this access.
                                                    Do
                                                    allow this access for now by executing:
                                                    # ausearch -c 'ceph-mgr' --raw | audit2allow -M my-cephmgr
                                                    # semodule -X 300 -i my-cephmgr.pp
                                                    
May 31 14:46:33 jb-rhel-mon setroubleshoot[10323]: SELinux is preventing /usr/bin/ceph-mgr from read access on the file keyring. For complete SELinux messages run: sealert -l 256be181-9015-405f-a5a6-9fb1f097aaa7
May 31 14:46:33 jb-rhel-mon platform-python[10323]: SELinux is preventing /usr/bin/ceph-mgr from read access on the file keyring.
                                                    
                                                    *****  Plugin catchall (100. confidence) suggests   **************************
                                                    
                                                    If you believe that ceph-mgr should be allowed read access on the keyring file by default.
                                                    Then you should report this as a bug.
                                                    You can generate a local policy module to allow this access.
                                                    Do
                                                    allow this access for now by executing:
                                                    # ausearch -c 'ceph-mgr' --raw | audit2allow -M my-cephmgr
                                                    # semodule -X 300 -i my-cephmgr.pp
                                                    
May 31 14:46:40 jb-rhel-mon systemd[1]: ceph-mgr: Service RestartSec=10s expired, scheduling restart.



I'm attaching the audit.log from the mon/mgr node.

 
[root@jb-rhel-mon ~]# ceph -v
ceph version 14.2.1-124-g35e6f59 (35e6f599741d153210217828daf1fdfd058d1db3) nautilus (stable)
[root@jb-rhel-mon ~]# rpm -qa | grep ceph
python3-ceph-argparse-14.2.1-124.g35e6f59.el8cp.x86_64
libcephfs2-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-common-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-14.2.1-124.g35e6f59.el8cp.x86_64
python3-cephfs-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-base-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mon-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-dashboard-14.2.1-124.g35e6f59.el8cp.noarch
ceph-selinux-14.2.1-124.g35e6f59.el8cp.x86_64
ceph-mgr-diskprediction-local-14.2.1-124.g35e6f59.el8cp.noarch
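
Until the policy fix lands (the linked nautilus PR updates the SELinux policy for RHEL 8), the setroubleshoot suggestion above can serve as a temporary local module. This is a sketch, and audit2allow may need to be re-run if new denials appear once the first batch is allowed:

# ausearch -c 'ceph-mgr' --raw | audit2allow -M my-cephmgr  # build a local policy module from the logged denials
# semodule -X 300 -i my-cephmgr.pp                          # install it at priority 300
# systemctl restart ceph-mgr.target                         # restart mgr, then re-check with ceph -s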

Comment 8 John Brier 2019-05-31 18:58:10 UTC
Created attachment 1575809 [details]
jbrier audit.log from mon/mgr node

Comment 18 errata-xmlrpc 2020-01-31 12:46:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

