Bug 2372821

Summary: [Hotfix] [NFS-Ganesha] Ganesha deployment with HA fails with "Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)"
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: hacharya
Severity: urgent
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 8.0
CC: cephqe-warriors, mobisht, rpollack
Target Milestone: ---
Flags: mobisht: needinfo+
Target Release: 8.1z1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-19.2.1-223
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2025-08-18 14:01:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Manisha Saini 2025-06-13 23:38:59 UTC
Description of problem:
=====================

Deploy a Ganesha cluster with HA:

[ceph: root@cali013 /]# ceph nfs cluster create nfsganesha "1 cali016 cali020" --ingress --virtual-ip 10.8.130.191/22

[ceph: root@cali013 /]# ceph nfs cluster info nfsganesha
{
  "nfsganesha": {
    "backend": [],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.130.191"
  }
}
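
Note that "backend" is an empty list even though the cluster reports a virtual IP: no ganesha daemons were actually placed. A minimal sketch of detecting that condition programmatically, using the exact JSON captured above (the cluster name nfsganesha is from this report; this is only an illustration of the check, not cephadm's own logic):

```python
import json

# Output of `ceph nfs cluster info nfsganesha` as captured in this report.
info_json = """
{
  "nfsganesha": {
    "backend": [],
    "monitor_port": 9049,
    "port": 2049,
    "virtual_ip": "10.8.130.191"
  }
}
"""

info = json.loads(info_json)
backends = info["nfsganesha"]["backend"]
if not backends:
    # A virtual IP with no backends means the nfs/ingress daemons never
    # came up, consistent with the 0/1 and 0/2 RUNNING counts shown by
    # `ceph orch ls` later in this report.
    print("no NFS backends deployed")
```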

===================
# ceph -s
  cluster:
    id:     d62df802-4759-11f0-b17e-b49691cee574
    health: HEALTH_ERR
            Failed to place 1 daemon(s)
            Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)

  services:
    mon: 5 daemons, quorum cali013,cali016,cali015,cali020,cali019 (age 4m)
    mgr: cali013.napmfg(active, since 4m), standbys: cali016.tjreee
    mds: 1/1 daemons up, 1 standby
    osd: 28 osds: 28 up (since 26h), 28 in (since 40h)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 1073 pgs
    objects: 493 objects, 198 MiB
    usage:   31 GiB used, 69 TiB / 69 TiB avail
    pgs:     1073 active+clean

===================
# ceph health detail
HEALTH_ERR Failed to place 1 daemon(s); Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
[WRN] CEPHADM_DAEMON_PREPARE_CREATE_FAIL: Failed to place 1 daemon(s)
    Failed to prepare for creation of keepalived.nfs.nfsganesha.cali016.ppnhgv on cali016: Failed to generate keepalived.conf: No daemons deployed for ingress.nfs.nfsganesha
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
    Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
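
The text "Expecting value: line 1 column 1 (char 0)" is the message Python's json module raises when asked to parse an empty (or otherwise non-JSON) string, which suggests the cephadm mgr module ran json.loads on command output that came back empty. A minimal reproduction of where that exact message comes from (illustration only; this is not the actual cephadm code path):

```python
import json

try:
    json.loads("")  # empty input instead of a JSON document
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```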


==================
# ceph orch ls
NAME                       PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094                 1/1  6m ago     40h  count:1
ceph-exporter                                          5/5  6m ago     40h  *
crash                                                  5/5  6m ago     40h  *
grafana                    ?:3000                      1/1  6m ago     40h  count:1
ingress.nfs.nfsganesha     10.8.130.191:2049,9049      0/2  -          5m   cali016;cali020;count:1
mds.cephfs                                             2/2  6m ago     25h  count:2
mgr                                                    2/2  6m ago     40h  count:2
mon                                                    5/5  6m ago     40h  count:5
nfs.nfsganesha             ?:12049                     0/1  -          5m   cali016;cali020;count:1
node-exporter              ?:9100                      5/5  6m ago     40h  *
osd.all-available-devices                               28  6m ago     40h  *
prometheus                 ?:9095                      1/1  6m ago     40h  count:1

Version-Release number of selected component (if applicable):


How reproducible:
================
2/2


Steps to Reproduce:
===================
1. Create NFS Ganesha cluster with HA


Actual results:
==============
Ganesha cluster deployment with HA fails with HEALTH_ERR: the keepalived daemon cannot be prepared and the cephadm module crashes with the JSON parse error above.


Expected results:
================
Ganesha cluster deployment with HA should succeed.


Additional info:

Comment 6 errata-xmlrpc 2025-08-18 14:01:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.1 security and bug fix updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:14015