Bug 1450595 - [GDEPLOY+GANESHA] Volume gets exported by ganesha even when not present in nfs-ganesha export block in gdeploy conf file
Summary: [GDEPLOY+GANESHA] Volume gets exported by ganesha even when not present in nfs-ganesha export block in gdeploy conf file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Sachidananda Urs
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-05-13 14:46 UTC by Manisha Saini
Modified: 2017-09-21 04:49 UTC
CC List: 7 users

Fixed In Version: gdeploy-2.0.2-8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:49:50 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2777 0 normal SHIPPED_LIVE gdeploy bug fix and enhancement update for RHEL7 2017-09-21 08:23:08 UTC

Description Manisha Saini 2017-05-13 14:46:04 UTC
Description of problem:

The last volume in the gdeploy conf file gets automatically exported via ganesha even when it is not present in the [nfs-ganesha] export block.

Version-Release number of selected component (if applicable):

gdeploy-2.0.1-7.el7rhgs.noarch
ansible-2.2.1.0-2.el7.noarch


How reproducible:
Consistently

Steps to Reproduce:
1. Install gdeploy
2. Create a gdeploy conf file to create 3 volumes and a ganesha cluster:

[hosts]
dhcp37-102.lab.eng.blr.redhat.com
dhcp37-92.lab.eng.blr.redhat.com
dhcp37-119.lab.eng.blr.redhat.com
dhcp37-122.lab.eng.blr.redhat.com


[backend-setup]
devices=/dev/sdb,/dev/sdc,/dev/sdd
vgs=vg1,vg2,vg3
pools=pool1,pool2,pool3
lvs=lv1,lv2,lv3
mountpoints=/gluster/brick1,/gluster/brick2,/gluster/brick3
#brick_dirs=/gluster/brick1/1,/gluster/brick2/1,/gluster/brick3/1

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota

[volume1]
action=create
volname=ganesha
transport=tcp
replica_count=2
brick_dirs=/gluster/brick1/1,/gluster/brick2/1,/gluster/brick3/1
force=yes

[volume2]
action=create
volname=ganesha1
transport=tcp
replica_count=2
brick_dirs=/gluster/brick1/2,/gluster/brick2/2,/gluster/brick3/2
force=yes

[volume3]
action=create
volname=ganesha2
transport=tcp
replica_count=2
brick_dirs=/gluster/brick1/3,/gluster/brick2/3,/gluster/brick3/3
force=yes



[nfs-ganesha]
action=create-cluster
ha-name=ganesha-ha-360
cluster-nodes=dhcp37-102.lab.eng.blr.redhat.com,dhcp37-92.lab.eng.blr.redhat.com,dhcp37-119.lab.eng.blr.redhat.com,dhcp37-122.lab.eng.blr.redhat.com
vip=10.70.36.217,10.70.36.218,10.70.36.219,10.70.36.220


Actual results:

The last volume, i.e. ganesha2, gets exported via nfs-ganesha.

# showmount -e localhost
Export list for localhost:
/ganesha2 (everyone)


Expected results:

A volume should not get exported when it is not present under the export block of the [nfs-ganesha] section.

Additional info:
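
For contrast, the expected way to export a volume is an explicit export block in the conf file. A minimal sketch of such a block (assuming the export-volume action and the volname key supported by the [nfs-ganesha] section; the volume name here is only illustrative) would be:

[nfs-ganesha]
# export only the volumes explicitly listed here
action=export-volume
volname=ganesha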

Comment 2 Atin Mukherjee 2017-05-13 15:19:26 UTC
(In reply to Manisha Saini from comment #0)
> Description of problem:
> 
> Last volume in gdeploy conf file gets automatically exported via ganesha
> even when not present in [nfs-ganesha] export block.
> 
> Version-Release number of selected component (if applicable):
> 
> gdeploy-2.0.1-7.el7rhgs.noarch

What gDeploy build is this? Did you mean 2.0.2-7?


Comment 3 Manisha Saini 2017-05-13 15:54:20 UTC
Sorry, my bad.
It's with the latest gdeploy version:

# rpm -qa | grep gdeploy
gdeploy-2.0.2-7.el7rhgs.noarch

Comment 5 Sachidananda Urs 2017-05-14 02:15:39 UTC
Commit: https://github.com/gluster-deploy/gdeploy/commit/cba45e458 should fix the issue.

Comment 9 Manisha Saini 2017-06-12 12:34:03 UTC
Verified this on 

# rpm -qa | grep gdeploy
gdeploy-2.0.2-10.el7rhgs.noarch

The volume is not exported via ganesha unless it is present under the [nfs-ganesha] block.
Moving the bug to Verified state.
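
As a side note for anyone hitting this on an older build, the unwanted export could presumably be cleaned up with an unexport block along these lines (a sketch assuming the unexport-volume action mirrors export-volume; not verified here):

[nfs-ganesha]
# hypothetical cleanup of the unintentionally exported volume
action=unexport-volume
volname=ganesha2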

Comment 11 errata-xmlrpc 2017-09-21 04:49:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2777

