Bug 1724526 - snapshot clone volume is not exported via NFS-Ganesha
Summary: snapshot clone volume is not exported via NFS-Ganesha
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.5
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Srijan Sivakumar
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-27 08:36 UTC by Soumya Koduri
Modified: 2021-07-05 07:26 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-05 07:12:09 UTC
Embargoed:



Description Soumya Koduri 2019-06-27 08:36:40 UTC
Description of problem:

When a snapshot is taken of a volume exported via NFS-Ganesha and that snapshot is then cloned, the resulting clone volume should also be exported via NFS-Ganesha. This is not happening.


Version-Release number of selected component (if applicable):
glusterfs-6.0-6.el7rhgs.x86_64
nfs-ganesha-2.7.3-5.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Export a volume via NFS-Ganesha
2. Create a snapshot (say 'snap1') of that volume
3. Now clone that snapshot. 
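For reference, a minimal command sequence covering the above steps (the volume name 'vol1' and snapshot name 'snap1' here are placeholders, not taken from this report):

# gluster volume set vol1 ganesha.enable on        <-- step 1: export the volume via NFS-Ganesha
# gluster snapshot create snap1 vol1 no-timestamp  <-- step 2: snapshot the exported volume
# gluster snapshot clone snap1_clone snap1         <-- step 3: clone the snapshot
# showmount -e localhost                           <-- the clone volume is absent from the export list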

Actual results:

The clone volume has ganesha.enable set to 'on', but it is not exported via NFS-Ganesha.


[root@dhcp41-180 ~]# gluster snapshot clone snap1_clone snap1_notimestamp
snapshot clone: success: Clone snap1_clone created successfully
[root@dhcp41-180 ~]# 
[root@dhcp41-180 ~]# showmount -e localhost
Export list for localhost:
/cpu_vol  (everyone)
/snap_vol (everyone)
[root@dhcp41-180 ~]# gluster v info snap1_clone
 
Volume Name: snap1_clone
Type: Distributed-Replicate
Volume ID: 2f5cb9b8-90c5-42b1-8e13-41200162e8d6
Status: Created
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick1/s1
Brick2: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick2/s1
Brick3: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick3/s1
Brick4: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick4/s2
Brick5: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick5/s2
Brick6: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick6/s2
Brick7: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick7/s3
Brick8: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick8/s3
Brick9: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick9/s3
Brick10: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick10/s4
Brick11: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick11/s4
Brick12: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick12/s4
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
[root@dhcp41-180 ~]# 

[root@dhcp41-180 ~]# ls /var/run/gluster/shared_storage/nfs-ganesha/exports/
export.cpu_vol.conf  export.snap_vol.conf
[root@dhcp41-180 ~]# 

[root@dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable on
volume set: failed: ganesha.enable is already 'on'.
[root@dhcp41-180 ~]#

[root@dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable off
volume set: failed: Dynamic export addition/deletion failed. Please see log file for details
[root@dhcp41-180 ~]#

Expected results:

snap1_clone (snapshot clone volume) should be exported via NFS-Ganesha

Additional info:

For this volume, ganesha.enable can neither be turned on nor off. To work around the issue, either edit the glusterd options directly or create an export config file and export the volume manually (a sketch follows below).
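As a rough sketch of the manual-export workaround (the export ID, paths, and option values below are illustrative assumptions modeled on the export files the glusterfs ganesha scripts normally generate; they are not taken from this report):

1. Create an export config for the clone volume under shared storage, e.g. /var/run/gluster/shared_storage/nfs-ganesha/exports/export.snap1_clone.conf:

EXPORT {
    Export_Id = 3;                     # any export ID not already in use
    Path = "/snap1_clone";
    Pseudo = "/snap1_clone";
    Access_Type = RW;
    Squash = "No_root_squash";
    Disable_ACL = true;
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";
        Volume = "snap1_clone";
    }
}

2. Ask the running ganesha daemon to pick up the new export dynamically over D-Bus (AddExport is the standard nfs-ganesha export-manager method):

# dbus-send --system --print-reply --dest=org.ganesha.nfsd \
      /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
      string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.snap1_clone.conf \
      string:'EXPORT(Path=/snap1_clone)'

3. Verify with 'showmount -e localhost'; /snap1_clone should now appear in the export list.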

Comment 4 Mohammed Rafi KC 2020-01-21 13:38:52 UTC
A patch has been posted to address the issue: https://review.gluster.org/#/c/glusterfs/+/24050/.

