Bug 1858877

Summary: Module 'cephadm' has failed: auth get failed: failed to find client.crash.`hostname -s` in keyring retval: -2
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: jolmomar, pnataraj, sewagner, tserlin, vereddy
Target Milestone: ---
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-16.0.0-7209.el8cp
Doc Type: No Doc Update
Last Closed: 2021-08-30 08:26:18 UTC
Type: Bug

Description Vikhyat Umrao 2020-07-20 16:34:25 UTC
Description of problem:
Module 'cephadm' has failed: auth get failed: failed to find client.crash.`hostname -s` in keyring retval: -2

Version-Release number of selected component (if applicable):
# cephadm version
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

It looks like this was fixed upstream in 15.2.5 (https://tracker.ceph.com/issues/45726); does it need a backport?
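For reference, the failing lookup can be checked directly with the Ceph CLI (a minimal sketch; the short hostname is simply whatever the local node reports, not a value taken from this report):

# Reproduce the lookup cephadm is failing on; on an affected cluster this
# returns ENOENT because the per-host crash key was never created
ceph auth get client.crash.$(hostname -s)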



## cluster status


[root@dell-per630-13 ~]# ceph -s
  cluster:
    id:     c365eda6-c766-11ea-8cfb-b083fee95e35
    health: HEALTH_ERR
            Module 'cephadm' has failed: auth get failed: failed to find client.crash.dell-per630-13 in keyring retval: -2
 
  services:
    mon: 3 daemons, quorum dell-per630-13.gsslab.pnq2.redhat.com,dell-per630-12,dell-per630-11 (age 3d)
    mgr: dell-per630-13.gsslab.pnq2.redhat.com.ubgekg(active, since 4d), standbys: dell-per630-12.awkxnp
    osd: 6 osds: 6 up (since 3d), 6 in (since 3d)
    rgw: 3 daemons active (test.us-east.dell-per630-11.dbcagc, test.us-east.dell-per630-12.kovdgi, test.us-east.dell-per630-13.yikuae)
 
  task status:
 
  data:
    pools:   6 pools, 137 pgs
    objects: 205 objects, 5.3 KiB
    usage:   6.1 GiB used, 1.7 TiB / 1.7 TiB avail
    pgs:     137 active+clean
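Until the backport lands, a possible manual workaround (a sketch only, not taken from this BZ; the caps mirror the upstream crash-module documentation) is to create the missing per-host key by hand and restart the cephadm module so the health error clears:

# Create the key cephadm expects for the crash daemon on this host
ceph auth get-or-create client.crash.$(hostname -s) mon 'profile crash' mgr 'profile crash'

# Bounce the orchestrator module so it retries and the HEALTH_ERR clears
ceph mgr module disable cephadm
ceph mgr module enable cephadm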

Comment 1 RHEL Program Management 2020-07-20 16:34:33 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Yaniv Kaul 2020-09-09 11:56:48 UTC
Juan, are you looking at this?

Comment 3 Juan Miguel Olmo 2020-09-09 12:00:58 UTC
This was solved upstream a couple of months ago:
https://tracker.ceph.com/issues/45726
https://github.com/ceph/ceph/pull/35274

Comment 4 Vikhyat Umrao 2020-09-09 12:23:38 UTC
Thanks. Moving it to 5.0.

Comment 6 Preethi 2020-11-19 11:43:16 UTC
@Juan, the issue is no longer seen with the latest downstream image.

[root@magna105 ubuntu]# ./cephadm version
Using recent ceph image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest
ceph version 16.0.0-7209.el8cp (dc005a4e27b091d75a4fd83f9972f7fcdf9f2e18) pacific (dev)
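
A quick way to confirm the fix on a rebuilt cluster (commands assumed for illustration, not taken from the QA run) is to check that the module failure is gone and that the per-host crash key now resolves:

# No "Module 'cephadm' has failed" entry should remain in the health output
ceph health detail

# The per-host crash key should now be found in the keyring
ceph auth get client.crash.$(hostname -s)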

Comment 9 errata-xmlrpc 2021-08-30 08:26:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294