Description of problem:
Once quota is enabled and a limit is set, the first execution of "gluster volume quota $volname list" creates a glusterfs auxiliary mount on the server for that volume. The problem is that even after this volume is stopped, the glusterfs mount for that volume still exists, and "df -h" reports "Transport endpoint is not connected" for it.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.38rhs-1

How reproducible:
Always

Steps to Reproduce:
1. Enable quota on a volume and set a limit on "/" of the volume
2. gluster volume quota $volname list
3. df -h
4. gluster volume stop $volname
5. df -h

Actual results:
After step 3, a new mount related to $volname can be seen. After step 5, "df -h" reports "Transport endpoint is not connected" for that same mount.

Expected results:
The auxiliary mount should be unmounted when the volume is stopped.

Additional info:
The problem I am facing with this issue is that even after the volume is stopped and deleted, the mount point is still there. If I create a new volume with the same name, the stale mount information for that name persists, so "gluster volume quota $volname list" fails, as can be seen in this example:

[root@quota7 ~]# gluster volume quota dist-rep list
quota: Could not start quota auxiliary mount
[root@quota7 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_quota7-lv_root   42G  6.3G   33G  17% /
tmpfs                          4.0G     0  4.0G   0% /dev/shm
/dev/vda1                      485M   52M  408M  12% /boot
df: `/var/run/gluster/dist-rep': Transport endpoint is not connected
df: `/var/run/gluster/dist-rep1': Transport endpoint is not connected
/dev/mapper/RHS_vgvdb-RHS_lv1  1.5T   36M  1.5T   1% /rhs/brick1
/dev/mapper/RHS_vgvdc-RHS_lv2  1.6T   35M  1.6T   1% /rhs/brick2

The volumes in question are dist-rep and dist-rep1, and "gluster volume quota dist-rep list" failed because the earlier mount information is still there. It works only after a manual "umount". This is not acceptable from a user-experience point of view.
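As a rough sketch of the manual workaround described above, the stale auxiliary mount paths can be recovered from df's error lines and then cleared with umount. This is only an illustration, not the fix merged upstream; the sample lines are copied from the transcript above, and in practice they would be captured with `df -h 2>&1`.

```shell
#!/bin/sh
# Sample df error lines as seen in the report; normally these would be
# captured live with: df -h 2>&1 | grep 'Transport endpoint'
df_errors=$(cat <<'EOF'
df: `/var/run/gluster/dist-rep': Transport endpoint is not connected
df: `/var/run/gluster/dist-rep1': Transport endpoint is not connected
EOF
)

# Extract just the stale auxiliary mount paths under /var/run/gluster.
stale=$(printf '%s\n' "$df_errors" | sed -n "s/^df: .\(\/var\/run\/gluster\/[^']*\)'.*/\1/p")
printf '%s\n' "$stale"
```

Each extracted path could then be cleared manually with `umount -l "$path"` (lazy unmount, since the transport is already gone), which is the step the reporter had to perform by hand.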
*** Bug 1034880 has been marked as a duplicate of this bug. ***
Upstream Fix : http://review.gluster.org/6656
Marking for Denali as it got merged as part of the rebase.
Fixed in version : glusterfs-3.6.0.5-1.el6rhs
[root@nfs1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_nfs1-lv_root   50G  1.9G   45G   4% /
tmpfs                        4.0G     0  4.0G   0% /dev/shm
/dev/vda1                    485M   34M  426M   8% /boot
/dev/mapper/vg_nfs1-rhs      425G  3.1G  422G   1% /bricks
localhost:dist-rep           2.5T   16G  2.5T   1% /var/run/gluster/dist-rep
[root@nfs1 ~]# gluster volume stop dist-rep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist-rep: success
[root@nfs1 ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_nfs1-lv_root   50G  1.9G   45G   4% /
tmpfs                        4.0G     0  4.0G   0% /dev/shm
/dev/vda1                    485M   34M  426M   8% /boot
/dev/mapper/vg_nfs1-rhs      425G  3.1G  422G   1% /bricks

Hence, moving the BZ to verified.
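The verification above can be sketched as a simple check: after the volume is stopped, df output should no longer contain any /var/run/gluster auxiliary mount. The sample below reuses the post-stop df output from the transcript; the variable names are illustrative, not part of any gluster tooling.

```shell
#!/bin/sh
# Post-stop df output, copied from the verification transcript above.
df_after=$(cat <<'EOF'
/dev/mapper/vg_nfs1-lv_root   50G 1.9G  45G  4% /
tmpfs                        4.0G    0 4.0G  0% /dev/shm
/dev/vda1                    485M  34M 426M  8% /boot
/dev/mapper/vg_nfs1-rhs      425G 3.1G 422G  1% /bricks
EOF
)

# With the fix, no /var/run/gluster entry should remain after volume stop.
if printf '%s\n' "$df_after" | grep -q '/var/run/gluster/'; then
    msg="stale aux mount present"
else
    msg="aux mount cleaned up"
fi
echo "$msg"
```

On a live system the same check would be run against `df -h 2>&1` directly after `gluster volume stop`.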
Hi Susant, Please review the edited doc text for technical accuracy and sign off.
Doc looks good to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1278.html