Created a distribute volume and started it; glustershd was not running. Then created a replicate volume and started it, and found that glustershd had been started. I then stopped the replicate volume. When I checked the running gluster processes, glustershd was still running in spite of there being no started replicate volume:

ps aux | grep gluster
root      5319  0.1  2.7 226356 107308 ?      Ssl  15:36   0:00 glusterd
root      5512  0.0  1.9 276448  76312 ?      Ssl  15:38   0:00 /usr/local/sbin/glusterfsd -s localhost --volfile-id vol.hyperspace.mnt-sda7-export1 -p /etc/glusterd/vols/vol/run/hyperspace-mnt-sda7-export1.pid -S /tmp/a1d6bbc397f2dbe2b2bea5d7556c1723.socket --brick-name /mnt/sda7/export1 -l /usr/local/var/log/glusterfs/bricks/mnt-sda7-export1.log --brick-port 24011 --xlator-option vol-server.listen-port=24011
root      5517  0.0  1.9 276448  76268 ?      Ssl  15:38   0:00 /usr/local/sbin/glusterfsd -s localhost --volfile-id vol.hyperspace.mnt-sda8-export1 -p /etc/glusterd/vols/vol/run/hyperspace-mnt-sda8-export1.pid -S /tmp/8768514db5b27f126ff2cebf4b3a4fb7.socket --brick-name /mnt/sda8/export1 -l /usr/local/var/log/glusterfs/bricks/mnt-sda8-export1.log --brick-port 24012 --xlator-option vol-server.listen-port=24012
root      5594  0.5  2.5 225980 100236 ?      Ssl  15:41   0:00 /usr/local/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /usr/local/var/log/glusterfs/nfs.log
root      5600  0.3  1.9 182752  75572 ?      Ssl  15:41   0:00 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /etc/glusterd/glustershd/run/glustershd.pid -l /usr/local/var/log/glusterfs/glustershd.log -S /tmp/c1e275cc4fd258634235306bd2c1eeed.socket
root      5610  0.0  0.0  13124   1068 pts/8  S+   15:41   0:00 grep --color=auto gluster
raghu    11568  0.0  0.0  45084   2864 pts/7  S+   10:36   0:00 ssh shell.gluster.com -l raghavendrabhat

root@hyperspace:~# gluster volume info

Volume Name: vol
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export1
Brick2: hyperspace:/mnt/sda8/export1

Volume Name: mirror
Type: Replicate
Status: Stopped
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export3
Brick2: hyperspace:/mnt/sda8/export3

When I checked the glustershd volfile, it contained the graph of the distribute volume:

volume vol-client-0
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda7/export1
    option transport-type tcp
end-volume

volume vol-client-1
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda8/export1
    option transport-type tcp
end-volume

volume vol-replicate-0
    type cluster/replicate
    option background-self-heal-count 0
    option data-self-heal on
    option self-heal-daemon on
    subvolumes vol-client-0
end-volume

volume vol-replicate-1
    type cluster/replicate
    option background-self-heal-count 0
    option data-self-heal on
    option self-heal-daemon on
    subvolumes vol-client-1
end-volume

volume glustershd
    type debug/io-stats
    subvolumes vol-replicate-0 vol-replicate-1
end-volume
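For reference, a minimal command sequence that should reproduce this (a sketch, assuming a single host named hyperspace with the brick paths shown above):

# Create and start a plain distribute volume; glustershd should not start.
gluster volume create vol hyperspace:/mnt/sda7/export1 hyperspace:/mnt/sda8/export1
gluster volume start vol
ps aux | grep glustershd    # no self-heal daemon expected at this point

# Create and start a replicate volume; glustershd starts along with it.
gluster volume create mirror replica 2 hyperspace:/mnt/sda7/export3 hyperspace:/mnt/sda8/export3
gluster volume start mirror
ps aux | grep glustershd    # self-heal daemon now running

# Stop the replicate volume; glustershd should go away, but does not.
gluster volume stop mirror
ps aux | grep glustershd    # bug: glustershd is still running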
CHANGE: http://review.gluster.com/535 (Change-Id: I1bb83342bc0fa883ede527527ec8fd6ee470f781) merged in master by Vijay Bellur (vijay)