
Bug 2370541

Summary: [Test Build] NFS service deployment failing on freshly installed cluster - grace tool failed: rados_pool_create: -1
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: high
Docs Contact:
Priority: unspecified
Version: 8.0
CC: adking, bkunal, cephqe-warriors, pdhange, shbhosal, tserlin, vdas
Target Milestone: ---
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.1-223
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 2372431
Environment:
Last Closed: 2026-01-29 06:49:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2394541
Bug Blocks: 2372431

Description Manisha Saini 2025-06-06 05:30:56 UTC
Description of problem:
======================

Deployed a cluster with the hotfix build from https://bugzilla.redhat.com/show_bug.cgi?id=2364414#c10
NFS service deployment is failing with the error below:


# ceph health detail
HEALTH_WARN Failed to place 3 daemon(s)
[WRN] CEPHADM_DAEMON_PREPARE_CREATE_FAIL: Failed to place 3 daemon(s)
    Failed to prepare for creation of nfs.nfs6.1.1.ceph-manisaini-pa8ozw-node1-installer.izytsk on ceph-manisaini-pa8ozw-node1-installer: grace tool failed: rados_pool_create: -1
Can't connect to cluster: -1
terminate called after throwing an instance of 'std::runtime_error'
  what():  EVP_DecryptInit_ex failed

    Failed to prepare for creation of nfs.nfs6.2.1.ceph-manisaini-pa8ozw-node3.nfrjia on ceph-manisaini-pa8ozw-node3: grace tool failed: rados_pool_create: -1
Can't connect to cluster: -1

    Failed to prepare for creation of nfs.nfs6.0.1.ceph-manisaini-pa8ozw-node2.huicrp on ceph-manisaini-pa8ozw-node2: grace tool failed: rados_pool_create: -1
Can't connect to cluster: -1
terminate called after throwing an instance of 'std::runtime_error'
  what():  EVP_DecryptInit_ex failed
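
For triage, a couple of read-only checks can help narrow down whether the pool creation itself or the grace tool's librados connection is the part that fails. This is a suggested sketch, not output from the reported run; ".nfs" is assumed to be the default RADOS pool used for NFS here, and the log query follows the standard cephadm troubleshooting commands:

# ceph osd pool ls detail | grep -i nfs      # was the .nfs pool created at all?
# ceph log last 100 debug cephadm            # cephadm's own record of the failed daemon prepare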

-------
# ceph -s
  cluster:
    id:     372303a2-428f-11f0-8f6b-fa163eb3b23a
    health: HEALTH_WARN
            Failed to place 3 daemon(s)

  services:
    mon: 3 daemons, quorum ceph-manisaini-pa8ozw-node1-installer,ceph-manisaini-pa8ozw-node2,ceph-manisaini-pa8ozw-node3 (age 52m)
    mgr: ceph-manisaini-pa8ozw-node1-installer.uksiim(active, since 55m), standbys: ceph-manisaini-pa8ozw-node2.jifrju
    mds: 1/1 daemons up, 1 standby
    osd: 18 osds: 18 up (since 50m), 18 in (since 51m)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 689 pgs
    objects: 253 objects, 456 KiB
    usage:   1.2 GiB used, 269 GiB / 270 GiB avail
    pgs:     689 active+clean
-----

# ceph orch ls | grep nfs
nfs.nfs1                   ?:2049           0/3  -          14m  *
nfs.nfs2                   ?:2059           0/3  -          14m  *
nfs.nfs3                   ?:2069           0/3  -          14m  *
nfs.nfs4                   ?:2079           0/3  -          14m  *
nfs.nfs5                   ?:2089           0/3  -          14m  *
nfs.nfs6                   ?:2099           0/3  -          14m  *

-------
[ceph: root@ceph-manisaini-pa8ozw-node1-installer /]# ceph orch ps | grep nfs
[ceph: root@ceph-manisaini-pa8ozw-node1-installer /]#
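
The orchestrator also records why each daemon could not be placed as per-service events; dumping the NFS services in YAML should surface the same grace-tool error there (a hedged example, not output from the reported run):

# ceph orch ls nfs --format yaml             # includes an "events" section per service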

Version-Release number of selected component (if applicable):
===================
# ceph --version
ceph version 19.2.0-137.3.hotfix.bz2364414.el9cp (dfe1e9413a6a62d8312014a16de082b958cc7913) squid (stable)


How reproducible:
==============
Twice


Steps to Reproduce:
==================
1. Create a Ceph cluster with the hotfix build.
2. Deploy the NFS services in parallel using the spec file (reproduced under Additional info below).
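
When reproducing for triage, it can also help to raise cephadm's logging before applying the spec so the failed prepare/create is captured in detail. This is an optional extra step, not part of the original reproduction, using the standard cephadm troubleshooting settings:

# ceph config set mgr mgr/cephadm/log_to_cluster_level debug
# ceph -W cephadm --watch-debug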

Actual results:
===============
NFS service deployment failed with:

---
    Failed to prepare for creation of nfs.nfs6.1.1.ceph-manisaini-pa8ozw-node1-installer.izytsk on ceph-manisaini-pa8ozw-node1-installer: grace tool failed: rados_pool_create: -1
Can't connect to cluster: -1
terminate called after throwing an instance of 'std::runtime_error'
  what():  EVP_DecryptInit_ex failed
----


Expected results:
===============
NFS cluster deployment should succeed.


Additional info:

========
Before running the test, the cluster was healthy (HEALTH_OK):
-----
# ceph -s
  cluster:
    id:     372303a2-428f-11f0-8f6b-fa163eb3b23a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-manisaini-pa8ozw-node1-installer,ceph-manisaini-pa8ozw-node2,ceph-manisaini-pa8ozw-node3 (age 37m)
    mgr: ceph-manisaini-pa8ozw-node1-installer.uksiim(active, since 41m), standbys: ceph-manisaini-pa8ozw-node2.jifrju
    mds: 1/1 daemons up, 1 standby
    osd: 18 osds: 18 up (since 35m), 18 in (since 36m)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 689 pgs
    objects: 253 objects, 456 KiB
    usage:   1.2 GiB used, 269 GiB / 270 GiB avail
    pgs:     689 active+clean


# cat /var/lib/ceph/nfs.yaml
service_type: nfs
service_id: nfs1
service_name: nfs.nfs1
placement:
  host_pattern: '*'
spec:
  port: 2049
  monitoring_port: 3049
---
service_type: nfs
service_id: nfs2
service_name: nfs.nfs2
placement:
  host_pattern: '*'
spec:
  port: 2059
  monitoring_port: 3059
---
service_type: nfs
service_id: nfs3
service_name: nfs.nfs3
placement:
  host_pattern: '*'
spec:
  port: 2069
  monitoring_port: 3069
---
service_type: nfs
service_id: nfs4
service_name: nfs.nfs4
placement:
  host_pattern: '*'
spec:
  port: 2079
  monitoring_port: 3079
---
service_type: nfs
service_id: nfs5
service_name: nfs.nfs5
placement:
  host_pattern: '*'
spec:
  port: 2089
  monitoring_port: 3089
---
service_type: nfs
service_id: nfs6
service_name: nfs.nfs6
placement:
  host_pattern: '*'
spec:
  port: 2099
  monitoring_port: 3099
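
Before scheduling anything, the same multi-document spec can be sanity-checked and its planned placement previewed. This is a hedged suggestion rather than a step from the reported run; --dry-run only reports what the orchestrator would do:

# ceph orch apply -i /var/lib/ceph/nfs.yaml --dry-run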


# ceph orch apply -i /var/lib/ceph/nfs.yaml
Scheduled nfs.nfs1 update...
Scheduled nfs.nfs2 update...
Scheduled nfs.nfs3 update...
Scheduled nfs.nfs4 update...
Scheduled nfs.nfs5 update...
Scheduled nfs.nfs6 update...
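
On a successful run, each of the six services should converge to 3/3 daemons. A couple of checks that would confirm this (a hedged sketch reusing the commands from above plus the nfs module listing):

# ceph orch ls | grep nfs        # expect 3/3 for nfs.nfs1 .. nfs.nfs6
# ceph orch ps | grep nfs        # one nfs daemon per host per service
# ceph nfs cluster ls            # expect nfs1 .. nfs6 listed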

Comment 13 errata-xmlrpc 2026-01-29 06:49:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536