Bug 1793490 - snapshot clone volume is not exported via NFS-Ganesha
Summary: snapshot clone volume is not exported via NFS-Ganesha
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-21 13:34 UTC by Mohammed Rafi KC
Modified: 2020-06-30 19:30 UTC (History: 3 users)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-12 14:24:46 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 24050 0 None Open snapshot/ganesha: Modify ganesha export file while creating clone 2020-07-14 09:14:30 UTC

Description Mohammed Rafi KC 2020-01-21 13:34:52 UTC
This bug was initially created as a copy of Bug #1724526

I am copying this bug because: 



Description of problem:

If a snapshot is taken of a volume exported via NFS-Ganesha and that snapshot is then cloned, the resulting clone volume should also be exported via NFS-Ganesha, but it is not.


Version-Release number of selected component (if applicable):
glusterfs-6.0-6.el7rhgs.x86_64
nfs-ganesha-2.7.3-5.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Export a volume via NFS-Ganesha
2. Create a snapshot (say 'snap1') of that volume
3. Clone that snapshot (say 'snap1_clone')

Actual results:

The clone volume has ganesha.enable set to 'on' but is not exported via NFS-Ganesha.


[root@dhcp41-180 ~]# gluster snapshot clone snap1_clone snap1_notimestamp
snapshot clone: success: Clone snap1_clone created successfully
[root@dhcp41-180 ~]# 
[root@dhcp41-180 ~]# showmount -e localhost
Export list for localhost:
/cpu_vol  (everyone)
/snap_vol (everyone)
[root@dhcp41-180 ~]# gluster v info snap1_clone
 
Volume Name: snap1_clone
Type: Distributed-Replicate
Volume ID: 2f5cb9b8-90c5-42b1-8e13-41200162e8d6
Status: Created
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick1/s1
Brick2: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick2/s1
Brick3: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick3/s1
Brick4: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick4/s2
Brick5: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick5/s2
Brick6: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick6/s2
Brick7: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick7/s3
Brick8: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick8/s3
Brick9: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick9/s3
Brick10: dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick10/s4
Brick11: dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick11/s4
Brick12: dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick12/s4
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
[root@dhcp41-180 ~]# 

[root@dhcp41-180 ~]# ls /var/run/gluster/shared_storage/nfs-ganesha/exports/
export.cpu_vol.conf  export.snap_vol.conf
[root@dhcp41-180 ~]# 

[root@dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable on
volume set: failed: ganesha.enable is already 'on'.
[root@dhcp41-180 ~]#

[root@dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable off
volume set: failed: Dynamic export addition/deletion failed. Please see log file for details
[root@dhcp41-180 ~]#

Expected results:

snap1_clone (snapshot clone volume) should be exported via NFS-Ganesha

Additional info:

For this volume, ganesha.enable can be neither turned on nor off. To work around the issue, one has to either edit the glusterd options manually or create an export config file and export the volume by hand.
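A sketch of the manual-export workaround mentioned above, based on the export files that Gluster's Ganesha scripts normally generate; the Export_Id value (3 here), the paths, and the hostname are assumptions and must be adapted to the cluster, and the steps must be applied on every node running NFS-Ganesha:

```
# /var/run/gluster/shared_storage/nfs-ganesha/exports/export.snap1_clone.conf
# Export_Id must be unique across all export files; 3 is an assumed free id.
EXPORT {
    Export_Id = 3;
    Path = "/snap1_clone";
    Pseudo = "/snap1_clone";
    Access_Type = RW;
    Squash = No_root_squash;
    Disable_ACL = true;
    Protocols = "3", "4";
    Transports = "UDP", "TCP";
    SecType = "sys";
    FSAL {
        Name = "GLUSTER";
        Hostname = localhost;
        Volume = "snap1_clone";
    }
}
```

Ganesha can then be asked to pick up the new export dynamically over its DBus interface, e.g.:

```shell
# Dynamically add the export without restarting nfs-ganesha
dbus-send --system --print-reply --dest=org.ganesha.nfsd \
  /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
  string:/var/run/gluster/shared_storage/nfs-ganesha/exports/export.snap1_clone.conf \
  string:'EXPORT(Path=/snap1_clone)'
```

Note this only papers over the symptom; glusterd's own bookkeeping for the clone is still inconsistent, which is why the ganesha.enable on/off toggles above fail.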

Comment 1 Sunny Kumar 2020-02-04 09:21:24 UTC
Upstream patch:
https://review.gluster.org/#/c/glusterfs/+/24050/.

Comment 2 Worker Ant 2020-03-12 14:24:46 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/1043 and will be tracked there from now on. See the GitHub issue for further details.

