Bug 2024176 - [Workload-DFG][CephQE]Cephadm - not able to deploy all the OSD's in a given cluster
Summary: [Workload-DFG][CephQE]Cephadm - not able to deploy all the OSD's in a given cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 5.1
Assignee: Guillaume Abrioux
QA Contact: Sunil Kumar Nagaraju
URL:
Whiteboard:
Duplicates: 2028132
Depends On:
Blocks:
 
Reported: 2021-11-17 13:59 UTC by Sunil Kumar Nagaraju
Modified: 2022-04-04 10:23 UTC
CC List: 19 users

Fixed In Version: ceph-16.2.7-6.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:22:55 UTC
Embargoed:


Links
  Ceph Project Bug Tracker 53397 (last updated 2021-12-01 09:05:03 UTC)
  GitHub ceph/ceph pull 44104, open: "cephadm: pass `CEPH_VOLUME_SKIP_RESTORECON=yes`" (last updated 2021-12-08 09:13:06 UTC)
  Red Hat Issue Tracker RHCEPH-2366 (last updated 2021-11-17 14:01:16 UTC)
  Red Hat Product Errata RHSA-2022:1174 (last updated 2022-04-04 10:23:22 UTC)
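
Per the title of PR 44104 above, the fix has cephadm export `CEPH_VOLUME_SKIP_RESTORECON=yes` when it runs ceph-volume, so that ceph-volume does not attempt its restorecon calls from inside the container. Purely as an illustrative sketch (not the cephadm change itself), the same variable can be set when running ceph-volume by hand on a host where it is installed:

$ CEPH_VOLUME_SKIP_RESTORECON=yes ceph-volume lvm list   # run a ceph-volume subcommand with restorecon calls skipped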

Comment 5 Sebastian Wagner 2021-11-18 14:23:42 UTC
Hi Sunil, I meant Ceph's audit log. That's either in the MGR's log or in a dedicated log. I'd like to know which MON commands were issued by the cluster. The MON log should also be OK.
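
For reference, a rough sketch of pulling that audit trail on a cephadm cluster; the paths and unit names below are the usual defaults and may differ per deployment (<fsid> and <hostname> are placeholders):

$ ceph log last 50 info audit                        # recent audit-channel entries straight from the monitors
$ less /var/log/ceph/<fsid>/ceph.audit.log           # if file logging is enabled, on a MON host
$ journalctl -u ceph-<fsid>@mon.<hostname>.service   # with the default journald logging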

Comment 6 Sunil Kumar Nagaraju 2021-11-18 14:33:12 UTC
(In reply to Sebastian Wagner from comment #5)
> Hi Sunil, I meant Ceph's audit log. That's either in the MGR's log or in
> a dedicated log. I'd like to know which MON commands were issued by the
> cluster. The MON log should also be OK.

Hi Sebastian,

MON log files are already attached in the `Ceph logs` attachment.

Comment 7 Yaniv Kaul 2021-11-23 09:39:21 UTC
Any updates? This is a blocker for QE's CI.

Comment 8 Sebastian Wagner 2021-11-23 09:57:03 UTC
https://chat.google.com/room/AAAAHvpVoDg/lkg7VaW4qys

Comment 16 Sebastian Wagner 2021-11-26 14:04:34 UTC
PR is in upstream QA.

Comment 17 Guillaume Abrioux 2021-12-01 19:50:10 UTC
*** Bug 2028132 has been marked as a duplicate of this bug. ***

Comment 21 Sebastian Wagner 2021-12-08 16:25:59 UTC
Workaround: disable SELinux for now.
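
For completeness, the usual way to apply that workaround on the affected hosts (a temporary measure only, until a build with the fix is installed):

$ sudo setenforce 0                                                              # permissive immediately, not persistent
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config     # persist across reboots
$ getenforce                                                                     # confirm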

Comment 29 Daniel Walsh 2021-12-09 18:48:47 UTC
Containers cannot tell whether or not they are confined, so running getenforce inside a container could be correct, or the SELinux library could be lying.

$ getenforce 
Enforcing
$ podman run fedora id -Z
id: --context (-Z) works only on an SELinux-enabled kernel

This shows that inside the container it looks as if SELinux is disabled, but the host is clearly in enforcing mode.
Bottom line: all that matters is how the host is set. SELinux does not change per container, only for the host.
Containers can run with different types: container_t is a confined type, while spc_t is an unconfined type.

$ podman run fedora cat /proc/self/attr/current 
system_u:system_r:container_t:s0:c36,c918
$ podman run --privileged fedora cat /proc/self/attr/current 
unconfined_u:system_r:spc_t:s0
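
Building on that, the labels are easiest to check from the host side; for example, assuming podman and a running Ceph container (<container-name> is a placeholder):

$ ps -eZ | grep ceph                                              # SELinux contexts of the ceph processes as the host sees them
$ podman inspect --format '{{ .ProcessLabel }}' <container-name>  # label podman assigned to a specific container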

Comment 30 Daniel Walsh 2021-12-09 18:49:20 UTC
BTW is there a reason this is being blamed on SELinux?

Comment 41 errata-xmlrpc 2022-04-04 10:22:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

