Bug 2024176

Summary: [Workload-DFG][CephQE] Cephadm - not able to deploy all the OSDs in a given cluster
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Sunil Kumar Nagaraju <sunnagar>
Component: Cephadm    Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA QA Contact: Sunil Kumar Nagaraju <sunnagar>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 5.1    CC: agunn, amsyedha, ceph-eng-bugs, ceph-qe-bugs, ckulal, dwalsh, epuertat, gabrioux, gsitlani, mgowri, mmurthy, pdhiran, psathyan, sangadi, sunnagar, tserlin, twilkins, vereddy, vumrao
Target Milestone: ---    Keywords: Automation, AutomationBlocker, Regression
Target Release: 5.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-16.2.7-6.el8cp Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-04 10:22:55 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Comment 5 Sebastian Wagner 2021-11-18 14:23:42 UTC
Hi Sunil, I meant Ceph's audit log. That's either in the MGR's log or in a dedicated log. I'd like to know which MON commands were issued by the cluster. The MON log should also be OK.
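
For reference, one way to pull recent audit entries in a cephadm deployment (a rough sketch; the `<fsid>` and `<host>` values below are placeholders, and the exact location depends on whether the daemons log to files or to journald):

$ cephadm shell -- ceph log last 100 info audit                 # last 100 audit-channel entries from the cluster log
$ journalctl -u ceph-<fsid>@mgr.<host>.service | grep audit     # MGR log when daemons log to journald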

Comment 6 Sunil Kumar Nagaraju 2021-11-18 14:33:12 UTC
(In reply to Sebastian Wagner from comment #5)
> Hi Sunil, I meant Ceph's audit log. That's either in the MGR's log or in
> a dedicated log. I'd like to know which MON commands were issued by the
> cluster. The MON log should also be OK.

Hi Sebastian,

The MON log files are already attached in the `Ceph logs` attachment.

Comment 7 Yaniv Kaul 2021-11-23 09:39:21 UTC
Any updates? This is a blocker for QE's CI.

Comment 8 Sebastian Wagner 2021-11-23 09:57:03 UTC
https://chat.google.com/room/AAAAHvpVoDg/lkg7VaW4qys

Comment 16 Sebastian Wagner 2021-11-26 14:04:34 UTC
PR is in upstream QA.

Comment 17 Guillaume Abrioux 2021-12-01 19:50:10 UTC
*** Bug 2028132 has been marked as a duplicate of this bug. ***

Comment 21 Sebastian Wagner 2021-12-08 16:25:59 UTC
Workaround: disable SELinux for now.
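
A minimal sketch of that workaround on each cluster host (assuming RHEL 8; permissive mode is usually enough for testing and is easier to revert than a full disable):

$ sudo setenforce 0            # permissive until the next reboot
$ getenforce                   # should now report Permissive
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persist across reboots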

Comment 29 Daniel Walsh 2021-12-09 18:48:47 UTC
Containers cannot tell whether or not they are confined, so running getenforce inside a container could be correct, or the SELinux library could be lying.

$ getenforce 
Enforcing
$ podman run fedora id -Z
id: --context (-Z) works only on an SELinux-enabled kernel

This shows that inside the container, it thinks SELinux is disabled, but the host is clearly in enforcing mode.
Bottom line: all that matters is how the host is set. SELinux does not change per container, only for the host.
Containers can run with different types: container_t is a confined type, while spc_t is an unconfined type.

$ podman run fedora cat /proc/self/attr/current 
system_u:system_r:container_t:s0:c36,c918
$ podman run --privileged fedora cat /proc/self/attr/current 
unconfined_u:system_r:spc_t:s0
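
To see from the host which type the Ceph daemon containers are actually running with (a suggestion, not part of the original comment; `ps -eZ` prints each process's SELinux context in the first column):

$ ps -eZ | grep ceph           # look for container_t (confined) vs spc_t (unconfined) on the daemon processes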

Comment 30 Daniel Walsh 2021-12-09 18:49:20 UTC
BTW is there a reason this is being blamed on SELinux?

Comment 41 errata-xmlrpc 2022-04-04 10:22:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174