Bug 2049272
| Summary: | mgr/nfs: allow dynamic updates of CephFS NFS exports | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ram Raja <rraja> |
| Component: | Ceph-Mgr Plugins | Assignee: | Ram Raja <rraja> |
| Sub component: | orchestrator | QA Contact: | Hemanth Kumar <hyelloji> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | adking, akraj, ceph-eng-bugs, fpantano, gfarnum, gfidente, ngangadh, tserlin, vereddy |
| Version: | 5.1 | Keywords: | Rebase |
| Target Milestone: | --- | | |
| Target Release: | 5.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.8-2.el8cp | Doc Type: | Enhancement |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-09 17:37:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1961115, 2071977, 2102272 | | |

Doc Text:

.CephFS NFS exports can be dynamically updated using the `ceph nfs export apply` command

Previously, when a CephFS NFS export was updated, the NFS-Ganesha servers were always restarted. This temporarily disrupted all client connections served by those Ganesha servers, including clients of exports that were not being updated.

With this release, a CephFS NFS export can be dynamically updated using the `ceph nfs export apply` command, and the NFS servers are no longer restarted every time a CephFS NFS export is updated.
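For context on the Doc Text above: the export definition that `ceph nfs export get` emits, and that `ceph nfs export apply` consumes, is a JSON document along the following lines in this release line. This is an illustrative sketch, not output captured for this bug; the field values (export ID, user_id, paths) are placeholders, and the exact field set can vary by release.

    # ceph nfs export get nfs-ganesha /cephfs
    {
      "export_id": 1,
      "path": "/volumes/_nogroup/subvol00/ce8c002b-5f6e-4db5-8189-7bec6aceb39f",
      "cluster_id": "nfs-ganesha",
      "pseudo": "/cephfs",
      "access_type": "RW",
      "squash": "no_root_squash",
      "security_label": true,
      "protocols": [4],
      "transports": ["TCP"],
      "fsal": {
        "name": "CEPH",
        "user_id": "nfs.nfs-ganesha.1",
        "fs_name": "a"
      },
      "clients": []
    }

Changing "access_type" from "RW" to "RO" in this file and feeding it back through `ceph nfs export apply` is exactly what the sed step in the test plan below does.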
Description (Ram Raja, 2022-02-01 20:06:02 UTC)
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

The feature was merged in master and pacific (it should be available in 16.2.8). The Quincy backport is pending. See the following tracker tickets for more details:

https://tracker.ceph.com/issues/54025
https://tracker.ceph.com/issues/54987

Steps to test:

- Create an NFS cluster using cephadm:

      # ceph nfs cluster create nfs-ganesha

- Create a CephFS subvolume:

      # ceph fs volume create a
      # ceph fs subvolume create a subvol00
      # ceph fs subvolume getpath a subvol00
      /volumes/_nogroup/subvol00/ce8c002b-5f6e-4db5-8189-7bec6aceb39f

- Create a read-write NFS export for the subvolume path:

      # ceph nfs export create cephfs nfs-ganesha /cephfs a /volumes/_nogroup/subvol00/ce8c002b-5f6e-4db5-8189-7bec6aceb39f

- Mount the export, where 192.168.0.14 is the ganesha server IP and /cephfs is the pseudo path of the export:

      # mount.nfs4 192.168.0.14:/cephfs /mnt/nfs/

- Perform writes from the mount point.

- Note the PID of the NFS server.

- Update the NFS export's access type from read-write to read-only:

      # ceph nfs export get nfs-ganesha /cephfs > export1.conf
      # sed -i 's/RW/RO/g' export1.conf
      # ceph nfs export apply nfs-ganesha -i export1.conf

- Try writing from the NFS mount point. It should fail with a "read-only file system" error.

- Check the PID of the NFS server. It should not have changed, which confirms that the NFS server was not restarted when the NFS export was reloaded. (A scripted version of this check is sketched at the end of this report.)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997
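As a closing note, the manual PID check in the test plan above can be scripted. This is a minimal sketch, not part of the original report: it assumes the cluster name nfs-ganesha, the pseudo path /cephfs, a mount at /mnt/nfs, and that pgrep on the Ganesha host can see the ganesha.nfsd process; all names and paths are illustrative.

    #!/bin/sh
    # Minimal sketch (assumptions as noted above): verify that applying an
    # export change does not restart the NFS-Ganesha daemon.

    # Record the ganesha PID before the export update.
    pid_before=$(pgrep -o ganesha.nfsd)

    # Dump the export, flip its access type from RW to RO, and apply it.
    ceph nfs export get nfs-ganesha /cephfs > export1.conf
    sed -i 's/RW/RO/g' export1.conf
    ceph nfs export apply nfs-ganesha -i export1.conf
    sleep 5  # give ganesha a moment to pick up the updated export

    # A write through the NFS mount should now fail with EROFS.
    if touch /mnt/nfs/write-probe 2>/dev/null; then
        echo "unexpected: export is still writable"
    fi

    # The PID must be unchanged: the export was reloaded in place.
    pid_after=$(pgrep -o ganesha.nfsd)
    if [ "$pid_before" = "$pid_after" ]; then
        echo "OK: export updated dynamically (pid $pid_before unchanged)"
    else
        echo "FAIL: ganesha restarted ($pid_before -> $pid_after)"
    fi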