Bug 962727

Summary: nfs mount - able to NFS mount a volume (and create files and directories) which does not exist, and unable to NFS mount a volume which does exist.
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: glusterfs
Version: 2.1
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Rachana Patel <racpatel>
Assignee: rjoseph
QA Contact: amainkar
CC: rhs-bugs, rjoseph, sdharane, spradhan, vagarwal, vbellur
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-09-23 22:35:33 UTC

Description Rachana Patel 2013-05-14 10:37:12 UTC
Description of problem:
nfs mount - able to mount a volume which does not exist, and unable to mount a volume which does exist.

Version-Release number of selected component (if applicable):
3.4.0.7rhs-1.el6rhs.x86_64

How reproducible:


Steps to Reproduce:
1. Had a cluster of 3 RHS servers and upgraded the RPMs. Due to Bug 962692, had to remove the files under /var/lib/glusterd/peers and /var/lib/glusterd/vols and restart glusterd on all RHS servers (see the restart sketch below):
rm -rf /var/lib/glusterd/peers/*
rm -rf /var/lib/glusterd/vols/*

(Previously there was a volume named '1' with a brick on each of the 3 servers mia, fred, and fan; brick location /rhs/brick1/1 on all servers.)
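
A minimal restart sketch for step 1, assuming the RHEL 6 init script; the exact commands were not captured in this report (per comment 4 the glusterd processes were killed and the service started again, with no server reboot):

# on each RHS server (assumed invocation, not taken verbatim from the report)
pkill glusterd
service glusterd start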

2. Peer probed the servers again to re-create the trusted storage pool.
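
The exact probe invocations were not captured; a minimal sketch using the hostnames from this report, run from one of the servers (e.g. fred):

# rebuild the trusted pool, then confirm peer state
gluster peer probe fan.lab.eng.blr.redhat.com
gluster peer probe mia.lab.eng.blr.redhat.com
gluster peer status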

3. Created 2 volumes as below and started them (the start commands were not captured; a sketch follows the create output):
[root@fred peers]# gluster volume create sanity fan.lab.eng.blr.redhat.com:/rhs/brick1/sanity mia.lab.eng.blr.redhat.com:/rhs/brick1/sanity fred.lab.eng.blr.redhat.com:/rhs/brick1/sanity
volume create: sanity: success: please start the volume to access data

[root@fred peers]# gluster volume create t1 fan.lab.eng.blr.redhat.com:/rhs/brick1/t1 mia.lab.eng.blr.redhat.com:/rhs/brick1/t1 fred.lab.eng.blr.redhat.com:/rhs/brick1/t1
 volume create: t1: success: please start the volume to access data
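
A minimal sketch of the start step, assuming the standard CLI as prompted by the create output above:

gluster volume start sanity
gluster volume start t1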

4. Verified volume status:
[root@fred glusterd]# gluster volume status
Status of volume: sanity
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick fan.lab.eng.blr.redhat.com:/rhs/brick1/sanity	49154	Y	22863
Brick mia.lab.eng.blr.redhat.com:/rhs/brick1/sanity	49154	Y	23414
Brick fred.lab.eng.blr.redhat.com:/rhs/brick1/sanity	49152	Y	15005
NFS Server on localhost					2049	Y	15273
NFS Server on fdcb0533-eeb3-4054-8265-26558e92e65a	2049	Y	23104
NFS Server on d665808d-a42a-4eac-bf05-ca53c595486d	2049	Y	23641
 
There are no active volume tasks
Status of volume: t1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick fan.lab.eng.blr.redhat.com:/rhs/brick1/t1		49155	Y	23093
Brick mia.lab.eng.blr.redhat.com:/rhs/brick1/t1		49155	Y	23630
Brick fred.lab.eng.blr.redhat.com:/rhs/brick1/t1	49153	Y	15263
NFS Server on localhost					2049	Y	15273
NFS Server on fdcb0533-eeb3-4054-8265-26558e92e65a	2049	Y	23104
NFS Server on d665808d-a42a-4eac-bf05-ca53c595486d	2049	Y	23641
 
There are no active volume tasks


5.

a) Tried to NFS mount volume 't1' from the client via the fan server and the mount failed, while a FUSE mount of the same volume succeeded:

[root@rhsauto037 ~]# mkdir /mnt/nfst
[root@rhsauto037 ~]# mount -t nfs -o vers=3 -o nolock  fan.lab.eng.blr.redhat.com:/t1 /mnt/nfst
mount.nfs: mounting fan.lab.eng.blr.redhat.com:/t1 failed, reason given by server: No such file or directory

[root@rhsauto037 ~]# mount -t glusterfs   fan.lab.eng.blr.redhat.com:/t1 /mnt/t




b) Checked showmount against all 3 RHS servers; 2 of the 3 show the old volume '1' and not the new volume 't1':


[root@rhsauto037 ~]# showmount -e mia.lab.eng.blr.redhat.com
Export list for mia.lab.eng.blr.redhat.com:
/1      *
/sanity *
[root@rhsauto037 ~]# showmount -e fan.lab.eng.blr.redhat.com
Export list for fan.lab.eng.blr.redhat.com:
/1      *
/sanity *
[root@rhsauto037 ~]# showmount -e fred.lab.eng.blr.redhat.com
Export list for fred.lab.eng.blr.redhat.com:
/sanity *
/t1     *



c) Able to NFS mount volume '1', which does not exist, and also able to create a directory on it:

client:-
[root@rhsauto037 ~]# mount -t nfs -o vers=3 -o nolock  fan.lab.eng.blr.redhat.com:/1 /mnt/nfst

[root@rhsauto037 ~]# mkdir /mnt/nfst/dir1
[root@rhsauto037 ~]# ls /mnt/nfst
dir1

server:-
[root@fan peers]# ls /rhs/brick1/t1
dir

[root@mia peers]# ls /rhs/brick1/1
dir1

[root@fred peers]# ls /rhs/brick1/1


d) On the servers the volume files are updated and do not show the old volume:

[root@fan peers]# ls /var/lib/glusterd/vols/
sanity  t1

[root@fred peers]# ls /var/lib/glusterd/vols/
sanity  t1

[root@mia peers]# ls /var/lib/glusterd/vols/
sanity  t1

On all 3 servers - less /var/lib/glusterd/nfs/nfs-server.vol:

<snip>
 option rpc-auth.addr.sanity.allow *
    option nfs.nlm on
    option nfs.dynamic-volumes on
    subvolumes sanity t1
<snip>
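
Given that the on-disk nfs-server.vol lists the new volumes while showmount on two servers still lists the old one, the running gluster NFS server apparently did not pick up the new exports. A hedged way to check the per-volume NFS server status (standard CLI, not taken from this report):

gluster volume status t1 nfs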

Actual results:
a) Not able to NFS mount volume 't1', which exists.
b) showmount output is not updated on all servers.
c) Able to NFS mount volume '1', which does not exist, and able to create a directory on it.
d) On all servers /var/lib/glusterd/vols/ and /var/lib/glusterd/nfs/nfs-server.vol are updated.



Expected results:
Only volumes that currently exist and are started should be exported over NFS; the showmount output and the NFS server's exports should match the contents of /var/lib/glusterd/vols/.

Additional info:

Comment 3 rjoseph 2013-06-10 17:40:51 UTC
I need some more information to investigate the issue.


1) How reproducible is this problem?
2) Did you restart all the servers after you deleted the servers' vol and peer info files?
3) Were any errors seen in the log files when the servers were restarted?
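
(For reference, the logs such errors would normally land in can be checked as below; the paths are the assumed defaults for this release, not taken from this report:

less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd log
less /var/log/glusterfs/nfs.log                          # gluster NFS server log
)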

Comment 4 Rachana Patel 2013-06-11 07:27:13 UTC
(In reply to rjoseph from comment #3)
> I need some more information to investigate the issue.
> 
> 
> 1) How reproducible is this problem?
Not able to reproduce; encountered it once.

> 2) Did you restart all the servers after you deleted the servers' vol and
> peer info files?

I did not restart the servers. As mentioned in step 1, I killed all the glusterd processes and restarted glusterd.

> 3) Were any errors seen in the log files when the servers were restarted?
No; the servers were not restarted.

Comment 5 Vivek Agarwal 2013-06-17 06:56:59 UTC
Since this defect is not reproducible by Rachana, moving this to ON_QA. If it persists, the defect can be reopened.

Comment 6 Rachana Patel 2013-06-18 06:25:09 UTC
Not able to reproduce with 3.4.0.9rhs-1.el6.x86_64.

Comment 7 Scott Haines 2013-09-23 22:35:33 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html