Bug 2246077

Summary: Need to add the cmount_path in ganesha export block to fix the NFS memory consumption issue
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manisha Saini <msaini>
Component: Cephadm
Assignee: avan <athakkar>
Status: CLOSED ERRATA
QA Contact: Manisha Saini <msaini>
Severity: urgent
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 7.0
CC: adking, akraj, athakkar, cephqe-warriors, kdreyer, tserlin, vdas
Target Milestone: ---
Target Release: 7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-18.2.0-108.el9cp
Doc Type: Enhancement
Doc Text:
.Add `cmount_path` option and generate unique user ID
With this enhancement, you can add the optional `cmount_path` option and generate a unique user ID for each Ceph File System to allow sharing CephFS clients across multiple Ganesha exports, thereby reducing the memory usage for a single CephFS client.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-12-13 15:24:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2236325, 2239769, 2247214

Description Manisha Saini 2023-10-25 09:45:48 UTC
Description of problem:
===========

This is the tracker BZ for the cephadm-side changes needed for NFS.

    # Exporting FSAL
    FSAL {
        Name = CEPH;
        cmount_path = "/";
    }

The cmount_path parameter needs to be added to the ganesha export file created by cephadm in order to fix the two NFS memory consumption issues below:



[1] https://bugzilla.redhat.com/show_bug.cgi?id=2236325
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2239769
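For reference, a minimal sketch of how an export spec carrying cmount_path could be applied and checked through the ceph nfs interface once cephadm writes the parameter. The spec fields mirror the export format shown later in comment 7; the file name export_0.json and the pseudo path /export_0 are illustrative assumptions, not steps taken from this BZ.

# Illustrative only: export spec with cmount_path in the FSAL block
cat > export_0.json <<'EOF'
{
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "path": "/",
  "pseudo": "/export_0",
  "access_type": "RW",
  "squash": "none",
  "protocols": [4],
  "transports": ["TCP"],
  "fsal": {
    "name": "CEPH",
    "fs_name": "cephfs",
    "cmount_path": "/"
  }
}
EOF

# Apply the spec and confirm cmount_path is persisted in the export
ceph nfs export apply cephfs-nfs -i export_0.json
ceph nfs export get cephfs-nfs /export_0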



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 7 Manisha Saini 2023-11-07 03:17:39 UTC
Verified this BZ with

# rpm -qa | grep nfs
libnfsidmap-2.5.4-18.el9.x86_64
nfs-utils-2.5.4-18.el9.x86_64
nfs-ganesha-selinux-5.6-3.el9cp.noarch
nfs-ganesha-5.6-3.el9cp.x86_64
nfs-ganesha-rgw-5.6-3.el9cp.x86_64
nfs-ganesha-ceph-5.6-3.el9cp.x86_64
nfs-ganesha-rados-grace-5.6-3.el9cp.x86_64
nfs-ganesha-rados-urls-5.6-3.el9cp.x86_64

# ceph --version
ceph version 18.2.0-113.el9cp (32cbda69435c7145d09eeaf5b5016e5d46370a5d) reef (stable)

# ceph nfs export get cephfs-nfs /export_0 
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "cephfs-nfs",
  "export_id": 1,
  "fsal": {
    "cmount_path": "/",
    "fs_name": "cephfs",
    "name": "CEPH",
    "user_id": "nfs.cephfs-nfs.cephfs"
  },
  "path": "/",
  "protocols": [
    4
  ],
  "pseudo": "/export_0",
  "sectype": [
    "krb5"
  ],
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}


Param "cmount_path" is now added in the nfs export file.Moving this BZ to verified state,

Comment 10 errata-xmlrpc 2023-12-13 15:24:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780