Bug 1858877 - Module 'cephadm' has failed: auth get failed: failed to find client.crash.`hostname -s` in keyring retval: -2
Summary: Module 'cephadm' has failed: auth get failed: failed to find client.crash.`hostname -s` in keyring retval: -2
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-20 16:34 UTC by Vikhyat Umrao
Modified: 2021-08-30 08:26 UTC
CC List: 5 users

Fixed In Version: ceph-16.0.0-7209.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:18 UTC
Embargoed:




Links
Ceph Project Bug Tracker 45726 - 2020-07-20 16:34:25 UTC
Github ceph/ceph pull 35274 (closed): cephadm: error trying to get ceph auth entry for crash daemon - 2021-01-26 08:03:34 UTC
Red Hat Issue Tracker RHCEPH-1060 - 2021-08-27 05:18:57 UTC
Red Hat Product Errata RHBA-2021:3294 - 2021-08-30 08:26:30 UTC

Description Vikhyat Umrao 2020-07-20 16:34:25 UTC
Description of problem:
Module 'cephadm' has failed: auth get failed: failed to find client.crash.`hostname -s` in keyring retval: -2

Version-Release number of selected component (if applicable):
# cephadm version
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

It looks like this is fixed upstream in 15.2.5 - https://tracker.ceph.com/issues/45726 - and needs a backport?



## cluster status


[root@dell-per630-13 ~]# ceph -s
  cluster:
    id:     c365eda6-c766-11ea-8cfb-b083fee95e35
    health: HEALTH_ERR
            Module 'cephadm' has failed: auth get failed: failed to find client.crash.dell-per630-13 in keyring retval: -2
 
  services:
    mon: 3 daemons, quorum dell-per630-13.gsslab.pnq2.redhat.com,dell-per630-12,dell-per630-11 (age 3d)
    mgr: dell-per630-13.gsslab.pnq2.redhat.com.ubgekg(active, since 4d), standbys: dell-per630-12.awkxnp
    osd: 6 osds: 6 up (since 3d), 6 in (since 3d)
    rgw: 3 daemons active (test.us-east.dell-per630-11.dbcagc, test.us-east.dell-per630-12.kovdgi, test.us-east.dell-per630-13.yikuae)
 
  task status:
 
  data:
    pools:   6 pools, 137 pgs
    objects: 205 objects, 5.3 KiB
    usage:   6.1 GiB used, 1.7 TiB / 1.7 TiB avail
    pgs:     137 active+clean

Comment 1 RHEL Program Management 2020-07-20 16:34:33 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Yaniv Kaul 2020-09-09 11:56:48 UTC
Juan, are you looking at this?

Comment 3 Juan Miguel Olmo 2020-09-09 12:00:58 UTC
This was solved upstream a couple of months ago:
https://tracker.ceph.com/issues/45726
https://github.com/ceph/ceph/pull/35274

Comment 4 Vikhyat Umrao 2020-09-09 12:23:38 UTC
Thanks. Moving it to 5.0.

Comment 6 Preethi 2020-11-19 11:43:16 UTC
@Juan, the issue is not seen with the latest downstream image:

[root@magna105 ubuntu]# ./cephadm version
Using recent ceph image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest
ceph version 16.0.0-7209.el8cp (dc005a4e27b091d75a4fd83f9972f7fcdf9f2e18) pacific (dev)

Comment 9 errata-xmlrpc 2021-08-30 08:26:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

