Bug 2300273

Summary: [RHCS 8.0][NFS-Ganesha] cmount_path param is missing from export file
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: avan <athakkar>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: high
Priority: unspecified
Version: 8.0
CC: akraj, athakkar, cephqe-warriors, kkeithle, tonay, tserlin, vdas
Target Milestone: ---
Keywords: Regression
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-19.1.0-61.el9cp
Doc Type: Enhancement
Doc Text:
.New `cmount_path` option with a unique user ID generated for CephFS
With this enhancement, you can add the optional `cmount_path` option and generate a unique user ID for each Ceph File System. Unique user IDs allow CephFS clients to be shared across multiple Ganesha exports. Sharing clients across exports also reduces the memory usage of a single CephFS client.
Last Closed: 2024-11-25 09:04:07 UTC
Type: Bug
Bug Depends On: 2308414    
Bug Blocks: 2317218    

Description Manisha Saini 2024-07-28 23:13:21 UTC
Description of problem:
=============

The cmount_path param is missing from the export file. Ref BZ - https://bugzilla.redhat.com/show_bug.cgi?id=2246077#c9

This parameter is required to fix the memory consumption issue in NFS referenced below.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2236325

RHCS 8.0
================

Export file --->

# ceph nfs export info nfsganesha /ganesha2
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfsganesha",
  "export_id": 2,
  "fsal": {
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfsganesha.2"
  },
  "path": "/",
  "protocols": [
    4
  ],
  "pseudo": "/ganesha2",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}


RHCS 7.1
============
Export file

# ceph nfs export info nfsganesha /export_1
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfsganesha",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfsganesha.cephfs"
  },
  "path": "/volumes/ganeshagroup/ganesha1/39f13294-c062-43c2-97d6-77c875e1b020",
  "protocols": [
    3,
    4
  ],
  "pseudo": "/export_1",
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
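
Note the difference in the fsal block: with cmount_path present, RHCS 7.1 generates a per-filesystem user_id (nfs.nfsganesha.cephfs) that can be shared by every export on that filesystem, whereas RHCS 8.0 generates a per-export user_id (nfs.nfsganesha.2), giving each export its own CephFS client.

The 7.1-style behavior can also be requested explicitly by passing cmount_path in the FSAL block of an export spec. A minimal sketch, assuming the JSON spec format accepted by `ceph nfs export apply` (the pseudo path and volume path here are illustrative, not taken from this cluster):

# ceph nfs export apply nfsganesha -i - <<'EOF'
{
  "pseudo": "/export_1",
  "path": "/volumes/ganeshagroup/ganesha1",
  "cluster_id": "nfsganesha",
  "fsal": {
    "name": "CEPH",
    "fs_name": "cephfs",
    "cmount_path": "/"
  }
}
EOF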

Version-Release number of selected component (if applicable):
============================================================
# rpm -qa | grep nfs
libnfsidmap-2.5.4-25.el9.x86_64
nfs-utils-2.5.4-25.el9.x86_64
nfs-ganesha-selinux-5.9-1.el9cp.noarch
nfs-ganesha-5.9-1.el9cp.x86_64
nfs-ganesha-rgw-5.9-1.el9cp.x86_64
nfs-ganesha-ceph-5.9-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.9-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.9-1.el9cp.x86_64

# ceph --version
ceph version 19.1.0-3.el9cp (2032ce88d0c0820de58f7fde69952e9e1c790f42) squid (rc)




How reproducible:
===============
Every time


Steps to Reproduce:
===============
1. Create a Ganesha cluster.
2. Create a CephFS filesystem.
3. Create an export from the filesystem.
4. List the info of the NFS export (a command-level sketch follows below):

# ceph nfs export info <nfs_cluster> /<export>
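
A minimal end-to-end sketch of these steps, assuming the flag-style `ceph nfs export create cephfs` syntax of recent releases (cluster, filesystem, and pseudo-path names taken from the outputs above):

# ceph nfs cluster create nfsganesha
# ceph fs volume create cephfs
# ceph nfs export create cephfs --cluster-id nfsganesha --pseudo-path /ganesha2 --fsname cephfs
# ceph nfs export info nfsganesha /ganesha2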

Actual results:
=============

cmount_path is missing from the default export file.


Expected results:
==============
cmount_path should be added to the export file by default to fix the memory consumption issue.
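
For reference, a sketch of the fsal block expected once cmount_path is emitted by default, assuming the shared per-filesystem user ID is generated as in RHCS 7.1:

  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.nfsganesha.cephfs"
  }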


Additional info:

Comment 11 errata-xmlrpc 2024-11-25 09:04:07 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

Comment 12 Red Hat Bugzilla 2025-03-26 04:25:49 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.