Bug 860315 - Two running processes found for single mount of single gluster volume on a single client
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: shishir gowda
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-09-25 15:00 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:56 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-05 07:28:04 UTC
Embargoed:



Description M S Vishwanath Bhat 2012-09-25 15:00:59 UTC
Description of problem:
While using a gluster FUSE mountpoint as the VM store for RHEV-M, two glusterfs processes with different PIDs are running for a single mount of a single gluster volume on a single client machine. Neither of these mounts is manual; the volume is mounted by RHEV-M.
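
A quick way to confirm the symptom (a sketch, not part of the original report; the volume name vmstore and the --volfile-id string are taken from the ps output further below) is to compare the glusterfs client processes against the kernel mount table, which lists the FUSE mountpoint only once:

# glusterfs client processes for the vmstore volume
# (the [g] keeps grep from matching itself)
ps -eo pid,lstart,args | grep '[g]lusterfs --volfile-id=vmstore'

# the kernel mount table should show the mountpoint only once
grep 'fuse.glusterfs' /proc/mounts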

Version-Release number of selected component (if applicable):
glusterfs 3.3.0rhsvirt1 built on Sep 20 2012 03:26:22
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:
Hit once. 

Steps to Reproduce:
1. Created a 3x2 distributed-replicated volume.
2. Did a FUSE mount and used it as a storage domain for RHEV-M to create VMs.
3. After successful creation of a few VMs, one of the servers became unresponsive, so it was rebooted. After the reboot, since the automount entry in fstab was not configured, / reached 100% usage. Later the fstab entries were added and the server was rebooted again. Details are here: https://engineering.redhat.com/trac/rhs-tests/ticket/226#comment:2
4. After that, the back-end bricks present on that server were removed and the disks were remounted. glusterd was then restarted and self-heal was triggered.
5. Launch RHEV-M and activate this storage domain.
6. Execute "ps aux | grep gluster" on the hypervisor.
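
A rough shell sketch of the steps above (illustration only; server names, brick paths and the fstab entry are hypothetical, the volume name vmstore comes from the ps output below, and steps 2 and 5 are normally driven by RHEV-M rather than run by hand):

# 1. Create and start a 3x2 distributed-replicated volume (brick paths are made up).
gluster volume create vmstore replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server3:/bricks/b2 server4:/bricks/b2 \
    server5:/bricks/b3 server6:/bricks/b3
gluster volume start vmstore

# 2. FUSE mount used as the storage domain (RHEV-M issues an equivalent mount).
mount -t glusterfs inception.lab.eng.blr.redhat.com:/vmstore /mnt/vmstore

# 3. Persist the brick mounts in /etc/fstab on the servers so a reboot does not fill /.
echo '/dev/vg_bricks/b1  /bricks/b1  xfs  defaults  0 0' >> /etc/fstab

# 4. After re-attaching the bricks, restart glusterd and trigger self-heal.
service glusterd restart
gluster volume heal vmstore full

# 6. Look for duplicate client processes on the hypervisor.
ps aux | grep gluster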
  
Actual results:
[root@rhs-gp-srv14 /]# ps -aef | grep gluster
root      3770     1  0 Sep22 ?        00:00:00 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
root     14712 19774  0 20:25 pts/2    00:00:00 grep gluster
root     24131     1  4 17:58 ?        00:06:11 /usr/sbin/glusterfs --volfile-id=vmstore --volfile-server=inception.lab.eng.blr.redhat.com /rhev/data-center/mnt/inception.lab.eng.blr.redhat.com:vmstore
root     27826     1  3 Sep12 ?        10:58:38 /usr/sbin/glusterfs --volfile-id=vmstore --volfile-server=inception.lab.eng.blr.redhat.com /rhev/data-center/mnt/inception.lab.eng.blr.redhat.com:vmstore

There are two glusterfs processes, with PIDs 24131 and 27826 respectively, and both use the same mountpoint. I was unable to find two different log files though. One of them seems to be a stale process.

Also, please note that neither of these was mounted manually; they were mounted by RHEV-M when the storage domain was re-activated.
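
One way to tell which of the two PIDs still backs the FUSE mount (a hypothetical check, not something that was run for this report; the PIDs are the ones from the ps output above) is to look at each process's open file descriptors, since a live client keeps /dev/fuse and its log file under /var/log/glusterfs open:

for pid in 24131 27826; do
    echo "== PID $pid =="
    # does it still hold the FUSE device?
    ls -l /proc/$pid/fd 2>/dev/null | grep '/dev/fuse'
    # which client log does it have open?
    ls -l /proc/$pid/fd 2>/dev/null | grep '/var/log/glusterfs'
done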

Expected results:
There should be only one process running.
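
A simple way to check for the expected state (an illustrative one-liner, assuming the mountpoint is the last argument of the client command line, as in the ps output above) is to count glusterfs client processes per mountpoint; every mountpoint should appear exactly once:

# one glusterfs client per mountpoint is expected; a count of 2 reproduces this bug
ps -eo args | awk '/^\/usr\/sbin\/glusterfs /{print $NF}' | sort | uniq -c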

Additional info:

Not sure which log entries are required. I'm keeping the machine in the same state.

Comment 1 Amar Tumballi 2012-09-25 15:05:03 UTC
Running the 'glusterfs' command directly (not through the mount command) on a machine succeeds, so I'm not sure if we should treat this as a bug.

Comment 3 M S Vishwanath Bhat 2012-09-25 15:17:44 UTC
(In reply to comment #1)
> Running the 'glusterfs' command directly (not through the mount command) on
> a machine succeeds, so I'm not sure if we should treat this as a bug.

Actually, we aren't mounting it manually. The gluster volume is mounted by RHEV-M when the storage domain is activated.
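
For reference, the mount performed on domain activation should be roughly equivalent to the following manual command (an assumption based on the --volfile-server and mountpoint visible in the ps output; the management stack may pass additional options):

mount -t glusterfs inception.lab.eng.blr.redhat.com:/vmstore \
    /rhev/data-center/mnt/inception.lab.eng.blr.redhat.com:vmstore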

Comment 4 shishir gowda 2012-10-05 07:28:04 UTC
Not able to reproduce the bug. One of the processes might be a stale one.
Please re-open the bug if you hit it again.

