
Bug 2402035

Summary: [NFS-Ganesha] NFS Ganesha daemons are marked as “stray” by cephadm after deployment using a spec file, changing cluster health from HEALTH_OK to HEALTH_WARN.
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: NFS-Ganesha
NFS-Ganesha sub component: Ceph
Version: 9.0
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Reporter: Manisha Saini <msaini>
Assignee: Sreedhar Agraharam <sragraha>
QA Contact: Manisha Saini <msaini>
CC: akane, cephqe-warriors, kkeithle, ngangadh, shbhosal, spunadik, sragraha, tserlin, vereddy
Flags: msaini: needinfo? (sragraha)
Fixed In Version: nfs-ganesha-7.0-0.6.8.el9cp
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2026-01-29 07:01:17 UTC

Description Manisha Saini 2025-10-07 03:35:39 UTC
Description of problem:
========================
After deploying the NFS Ganesha clusters using a spec file, the deployment initially appears successful and all NFS daemons report as running. However, after some time, cephadm automatically marks the NFS daemons as “stray daemon(s) not managed by cephadm”, causing the overall cluster health status to degrade from HEALTH_OK to HEALTH_WARN.

Cephadm is losing track of the deployed NFS services, even though the daemons continue to run.
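
To see exactly which daemon names are being flagged, the health detail output can be compared against what the orchestrator reports (this assumes the warning is the standard CEPHADM_STRAY_DAEMON health check; the detail output was not captured here):

# ceph health detail
# ceph orch ps --daemon-type nfs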

[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph -s
  cluster:
    id:     c7275c42-a2bf-11f0-883a-fa163e83ae9c
    health: HEALTH_WARN
            3 stray daemon(s) not managed by cephadm

  services:
    mon:         3 daemons, quorum ceph-msaini-znmfam-node1-installer,ceph-msaini-znmfam-node2,ceph-msaini-znmfam-node3 (age 13h) [leader: ceph-msaini-znmfam-node1-installer]
    mgr:         ceph-msaini-znmfam-node1-installer.lmirgp(active, since 13h), standbys: ceph-msaini-znmfam-node2.nomwwi
    mds:         1/1 daemons up, 1 standby
    osd:         18 osds: 18 up (since 13h), 18 in (since 13h)
    nfs-ganesha: 3 daemons active (2 hosts)
    rgw:         2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   9 pools, 721 pgs
    objects: 301 objects, 461 KiB
    usage:   1.2 GiB used, 269 GiB / 270 GiB avail
    pgs:     721 active+clean

  io:
    client:   767 B/s rd, 0 op/s rd, 0 op/s wr

[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph orch ps | grep nfs
nfs.mani1.0.0.ceph-msaini-znmfam-node3.urhqkm            ceph-msaini-znmfam-node3            *:50051           running (71m)     7m ago  71m    84.2M        -  7.0              3f8a2f8937a5  cfa487f658fd
nfs.mani1.1.0.ceph-msaini-znmfam-node1-installer.tucvmw  ceph-msaini-znmfam-node1-installer  *:50051           running (71m)     6m ago  71m     107M        -  7.0              3f8a2f8937a5  eb84fac0a180
nfs.mani1.2.0.ceph-msaini-znmfam-node2.qplacf            ceph-msaini-znmfam-node2            *:50051           running (71m)     7m ago  71m    86.8M        -  7.0              3f8a2f8937a5  84cb2e052a8b
nfs.mani2.0.0.ceph-msaini-znmfam-node3.xqtmhl            ceph-msaini-znmfam-node3            *:50052           running (71m)     7m ago  71m    86.1M        -  7.0              3f8a2f8937a5  6bd1c794b4ee
nfs.mani2.1.0.ceph-msaini-znmfam-node1-installer.pafkai  ceph-msaini-znmfam-node1-installer  *:50052           running (71m)     6m ago  71m     110M        -  7.0              3f8a2f8937a5  42527e117ac0
nfs.mani2.2.0.ceph-msaini-znmfam-node2.jazwkb            ceph-msaini-znmfam-node2            *:50052           running (71m)     7m ago  71m    86.0M        -  7.0              3f8a2f8937a5  dd247cc43a86
nfs.mani3.0.0.ceph-msaini-znmfam-node3.owzqxe            ceph-msaini-znmfam-node3            *:50053           running (71m)     7m ago  71m    86.8M        -  7.0              3f8a2f8937a5  60e0a1463a2c
nfs.mani3.1.0.ceph-msaini-znmfam-node1-installer.nfoqwo  ceph-msaini-znmfam-node1-installer  *:50053           running (71m)     6m ago  71m     106M        -  7.0              3f8a2f8937a5  9e7366f69be7
nfs.mani3.2.0.ceph-msaini-znmfam-node2.vxawka            ceph-msaini-znmfam-node2            *:50053           running (71m)     7m ago  71m    85.5M        -  7.0              3f8a2f8937a5  09e0a1929247
[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph orch ls | grep nfs
nfs.mani1                  ?:50051          3/3  7m ago     71m  *
nfs.mani2                  ?:50052          3/3  7m ago     71m  *
nfs.mani3                  ?:50053          3/3  7m ago     71m  *


Version-Release number of selected component (if applicable):
============================================================
# ceph --version
ceph version 20.1.0-30.el9cp (c67d9ef2f8ecdabc8dbb6436bd938caff22c954a) tentacle (rc)
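
To confirm which nfs-ganesha build is actually running inside an NFS daemon (useful when later verifying against the "Fixed In Version" field above), the package can be queried inside one of the containers listed by "ceph orch ps". A minimal check, assuming podman as the container runtime (the cephadm default), run on the host that owns the container (ceph-msaini-znmfam-node3 for container cfa487f658fd below):

# podman exec cfa487f658fd rpm -q nfs-ganesha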

How reproducible:
==================
Always


Steps to Reproduce:
=================
1. Deploy 3 NFS clusters using a spec file

[ceph: root@ceph-msaini-znmfam-node1-installer /]# cat /var/lib/ceph/deploy_nfs_spec_file.yaml
placement:
  host_pattern: '*'
service_id: mani1
service_type: nfs
spec:
  monitoring_port: 60051
  port: 50051

---
placement:
  host_pattern: '*'
service_id: mani2
service_type: nfs
spec:
  monitoring_port: 60052
  port: 50052

---
placement:
  host_pattern: '*'
service_id: mani3
service_type: nfs
spec:
  monitoring_port: 60053
  port: 50053

[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph orch apply -i /var/lib/ceph/deploy_nfs_spec_file.yaml
Scheduled nfs.mani1 update...
Scheduled nfs.mani2 update...
Scheduled nfs.mani3 update...

2. Check "ceph -s" status after sometime


Actual results:
===============
Cephadm marks the NFS daemons as “stray”, reporting them as unmanaged.
Cluster health changes to HEALTH_WARN, despite the daemons running normally.
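
Not a fix, but while the daemons themselves keep running the warning can be silenced temporarily to bring the cluster back to HEALTH_OK, assuming the health code involved is CEPHADM_STRAY_DAEMON (the standard code for this message):

# ceph health mute CEPHADM_STRAY_DAEMON 1h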


Expected results:
==================
NFS Ganesha daemons deployed via spec file should remain managed by cephadm.
Cluster health should remain in HEALTH_OK state as long as all daemons are running properly.


Additional info:
===============

[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph orch ps | grep nfs
nfs.mani1.0.0.ceph-msaini-znmfam-node3.urhqkm            ceph-msaini-znmfam-node3            *:50051           running (71m)     7m ago  71m    84.2M        -  7.0              3f8a2f8937a5  cfa487f658fd
nfs.mani1.1.0.ceph-msaini-znmfam-node1-installer.tucvmw  ceph-msaini-znmfam-node1-installer  *:50051           running (71m)     6m ago  71m     107M        -  7.0              3f8a2f8937a5  eb84fac0a180
nfs.mani1.2.0.ceph-msaini-znmfam-node2.qplacf            ceph-msaini-znmfam-node2            *:50051           running (71m)     7m ago  71m    86.8M        -  7.0              3f8a2f8937a5  84cb2e052a8b
nfs.mani2.0.0.ceph-msaini-znmfam-node3.xqtmhl            ceph-msaini-znmfam-node3            *:50052           running (71m)     7m ago  71m    86.1M        -  7.0              3f8a2f8937a5  6bd1c794b4ee
nfs.mani2.1.0.ceph-msaini-znmfam-node1-installer.pafkai  ceph-msaini-znmfam-node1-installer  *:50052           running (71m)     6m ago  71m     110M        -  7.0              3f8a2f8937a5  42527e117ac0
nfs.mani2.2.0.ceph-msaini-znmfam-node2.jazwkb            ceph-msaini-znmfam-node2            *:50052           running (71m)     7m ago  71m    86.0M        -  7.0              3f8a2f8937a5  dd247cc43a86
nfs.mani3.0.0.ceph-msaini-znmfam-node3.owzqxe            ceph-msaini-znmfam-node3            *:50053           running (71m)     7m ago  71m    86.8M        -  7.0              3f8a2f8937a5  60e0a1463a2c
nfs.mani3.1.0.ceph-msaini-znmfam-node1-installer.nfoqwo  ceph-msaini-znmfam-node1-installer  *:50053           running (71m)     6m ago  71m     106M        -  7.0              3f8a2f8937a5  9e7366f69be7
nfs.mani3.2.0.ceph-msaini-znmfam-node2.vxawka            ceph-msaini-znmfam-node2            *:50053           running (71m)     7m ago  71m    85.5M        -  7.0              3f8a2f8937a5  09e0a1929247

[ceph: root@ceph-msaini-znmfam-node1-installer /]# ceph orch ls | grep nfs
nfs.mani1                  ?:50051          3/3  7m ago     71m  *
nfs.mani2                  ?:50052          3/3  7m ago     71m  *
nfs.mani3                  ?:50053          3/3  7m ago     71m  *

Comment 19 errata-xmlrpc 2026-01-29 07:01:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536