Bug 1111041 - SNAPSHOT[USS]: gluster volume status <volume-name> does not show the snapd port after USS is enabled.
Summary: SNAPSHOT[USS]: gluster volume status <volume-name> does not show the snapd port after USS is enabled.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard:
Depends On: 1110864
Blocks:
 
Reported: 2014-06-19 06:17 UTC by Raghavendra Bhat
Modified: 2014-11-11 08:35 UTC
CC: 5 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1110864
Environment:
Last Closed: 2014-11-11 08:35:22 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra Bhat 2014-06-19 06:17:06 UTC
Description of problem:
gluster volume status <vol-name> does not show the snapd process after the USS feature is enabled.

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. Enable USS for the volume.
3. Check whether the snapd process is running.
4. Issue gluster volume status <volume-name> (a minimal transcript of these steps is sketched below).
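
A rough shell transcript of these steps, reconstructed from the volume name, hosts, and brick paths shown in the output below (the exact create command is not in this report, so treat it as illustrative):

[root@snapshot09 ~]# gluster volume create newvol replica 2 host1:/brick2/newvol host2:/brick2/newvol host3:/brick2/newvol host4:/brick2/newvol
[root@snapshot09 ~]# gluster volume start newvol
[root@snapshot09 ~]# gluster volume set newvol features.uss enable
[root@snapshot09 ~]# ps -ef | grep snapd
[root@snapshot09 ~]# gluster volume status newvol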

Actual results:
gluster volume status <volume-name> does not show the snapd process or the port it is listening on.

Expected results:
gluster volume status <volume-name> should show the snapd process and the port it is listening on, for better debuggability.


Additional info:
[root@snapshot09 ~]# gluster volume info newvol
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: cadc8635-d42b-4715-8447-d4fed9537f6a
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: host1:/brick2/newvol
Brick2: host2:/brick2/newvol
Brick3: host3:/brick2/newvol
Brick4: host4:/brick2/newvol
Options Reconfigured:
features.uss: enable
features.barrier: disable
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@snapshot09 ~]# 

[root@snapshot09 ~]# gluster peer status
Number of Peers: 3

Hostname: host3
Uuid: 901bbb7b-faaa-4285-a71f-c2fdd9fd0148
State: Peer in Cluster (Connected)

Hostname: host4
Uuid: 22b95754-f5c0-40e8-9763-a6dbfe134536
State: Peer in Cluster (Connected)

Hostname: host2
Uuid: 81051960-41d5-48c6-abde-0466fe846450
State: Peer in Cluster (Connected)
[root@snapshot09 ~]# 

[root@snapshot09 ~]# gluster snapshot info
Snapshot                  : snap1
Snap UUID                 : 88e8ca57-7e94-436a-bbe4-6fb7086a0c09
Created                   : 2014-06-18 15:56:56
Snap Volumes:

	Snap Volume Name          : 125f10844c6a44f298f4ad7ab82df8c6
	Origin Volume name        : testvol
	Snaps taken for testvol      : 3
	Snaps available for testvol  : 253
	Status                    : Started
 
Snapshot                  : snap2
Snap UUID                 : 17fd72e5-5c59-4a0a-85eb-b4875869e05d
Created                   : 2014-06-18 16:16:10
Snap Volumes:

	Snap Volume Name          : 8ea6d24570e2434d80b87b7e1d6d06e9
	Origin Volume name        : testvol
	Snaps taken for testvol      : 3
	Snaps available for testvol  : 253
	Status                    : Started
 
Snapshot                  : snap4
Snap UUID                 : 7e15d300-1f6e-4c29-ac12-1f8b07445ed5
Created                   : 2014-06-18 16:49:13
Snap Volumes:

	Snap Volume Name          : 6201be1b5c634d4ca8c2bfa8aab8b2c7
	Origin Volume name        : testvol
	Snaps taken for testvol      : 3
	Snaps available for testvol  : 253
	Status                    : Started
 
Snapshot                  : new_snap1
Snap UUID                 : 91b8865a-11e0-4c7f-9991-9d7d8facfffe
Created                   : 2014-06-18 18:57:05
Snap Volumes:

	Snap Volume Name          : 4ae18af52cd24c62a2377da83149bf01
	Origin Volume name        : newvol
	Snaps taken for newvol      : 2
	Snaps available for newvol  : 254
	Status                    : Started
 
Snapshot                  : new_snap2
Snap UUID                 : 94888284-22a5-4787-a2a8-ce700978d999
Created                   : 2014-06-18 18:58:21
Snap Volumes:

	Snap Volume Name          : cc645c35f8704f79bdf65f1fca24a7f2
	Origin Volume name        : newvol
	Snaps taken for newvol      : 2
	Snaps available for newvol  : 254
	Status                    : Started
 
[root@snapshot09 ~]# ps -ef | grep snapd
root     18300     1  0 18:46 ?        00:00:01 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/testvol -p /var/lib/glusterd/vols/testvol/run/testvol-snapd.pid -l /var/log/glusterfs/testvol-snapd.log --brick-name snapd-testvol -S /var/run/01a048c227f57cefd087f71a1d63acdd.socket --brick-port 49161 --xlator-option testvol-server.listen-port=49161
root     18898     1  0 18:58 ?        00:00:01 /usr/sbin/glusterfsd -s localhost --volfile-id snapd/newvol -p /var/lib/glusterd/vols/newvol/run/newvol-snapd.pid -l /var/log/glusterfs/newvol-snapd.log --brick-name snapd-newvol -S /var/run/fc8c9af66fbd88a16637d6778ff7086e.socket --brick-port 49165 --xlator-option newvol-server.listen-port=49165
root     19606 19570  0 21:08 pts/0    00:00:00 grep snapd
[root@snapshot09 ~]# mount
/dev/mapper/vg_snapshot09-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/vda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/VolGroup0-thin_vol0 on /brick0 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol1 on /brick1 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol2 on /brick2 type xfs (rw)
/dev/mapper/VolGroup0-thin_vol3 on /brick3 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol4 on /brick4 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol5 on /brick5 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol6 on /brick6 type xfs (rw)
/dev/mapper/VolGroup1-thin_vol7 on /brick7 type xfs (rw)
/dev/mapper/VolGroup0-125f10844c6a44f298f4ad7ab82df8c6_0 on /var/run/gluster/snaps/125f10844c6a44f298f4ad7ab82df8c6/brick1 type xfs (rw)
/dev/mapper/VolGroup0-8ea6d24570e2434d80b87b7e1d6d06e9_0 on /var/run/gluster/snaps/8ea6d24570e2434d80b87b7e1d6d06e9/brick1 type xfs (rw)
/dev/mapper/VolGroup0-6201be1b5c634d4ca8c2bfa8aab8b2c7_0 on /var/run/gluster/snaps/6201be1b5c634d4ca8c2bfa8aab8b2c7/brick1 type xfs (rw)
10.70.44.62:/testvol on /mnt/test1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.70.44.62:/newvol on /mnt/newvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
/dev/mapper/VolGroup0-4ae18af52cd24c62a2377da83149bf01_0 on /var/run/gluster/snaps/4ae18af52cd24c62a2377da83149bf01/brick1 type xfs (rw)
/dev/mapper/VolGroup0-cc645c35f8704f79bdf65f1fca24a7f2_0 on /var/run/gluster/snaps/cc645c35f8704f79bdf65f1fca24a7f2/brick1 type xfs (rw)
[root@snapshot09 ~]# gluster volume status newvol
Status of volume: newvol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick host1:/brick2/newvol			49162	Y	18540
Brick host2:/brick2/newvol			49160	Y	12418
Brick host3:/brick2/newvol			49160	Y	3236
Brick host4:/brick2/newvol			49160	Y	4498
NFS Server on localhost					2049	Y	18905
Self-heal Daemon on localhost				N/A	Y	18566
NFS Server on host2				2049	Y	12645
Self-heal Daemon on host2			N/A	Y	12439
NFS Server on host4				2049	Y	4723
Self-heal Daemon on host4			N/A	Y	4519
NFS Server on host3				2049	Y	3484
Self-heal Daemon on host3			N/A	Y	3258
 
Task Status of Volume newvol
------------------------------------------------------------------------------
There are no active volume tasks
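
Note that the snapd daemons are running (see the ps output above) even though volume status does not list them. Until the fix, the port snapd listens on can be pulled from the daemon's command line; an illustrative one-liner for the newvol daemon shown above would be:

[root@snapshot09 ~]# ps -ef | grep '[v]olfile-id snapd/newvol' | grep -o 'brick-port [0-9]*'
brick-port 49165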

Comment 1 Anand Avati 2014-06-19 11:00:10 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

Comment 2 Anand Avati 2014-06-20 07:22:22 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#2) for review on master by Raghavendra Bhat (raghavendra)

Comment 3 Anand Avati 2014-06-20 07:27:32 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#3) for review on master by Raghavendra Bhat (raghavendra)

Comment 4 Anand Avati 2014-06-23 06:23:44 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#4) for review on master by Raghavendra Bhat (raghavendra)

Comment 5 Anand Avati 2014-06-23 06:30:17 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#5) for review on master by Raghavendra Bhat (raghavendra)

Comment 6 Anand Avati 2014-06-23 11:42:03 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#6) for review on master by Raghavendra Bhat (raghavendra)

Comment 7 Anand Avati 2014-06-24 10:48:06 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#7) for review on master by Raghavendra Bhat (raghavendra)

Comment 8 Anand Avati 2014-06-24 11:50:58 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#8) for review on master by Raghavendra Bhat (raghavendra)

Comment 9 Anand Avati 2014-06-30 07:41:13 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#9) for review on master by Raghavendra Bhat (raghavendra)

Comment 10 Anand Avati 2014-06-30 18:49:50 UTC
REVIEW: http://review.gluster.org/8114 (mgmt/glusterd: display snapd status as part of volume status) posted (#10) for review on master by Raghavendra Bhat (raghavendra)

Comment 11 Anand Avati 2014-07-01 05:31:07 UTC
COMMIT: http://review.gluster.org/8114 committed in master by Kaushal M (kaushal) 
------
commit c6f040524d75011c44dcc9afdfef80c60c78f7f7
Author: Raghavendra Bhat <raghavendra>
Date:   Thu Jun 19 15:51:39 2014 +0530

    mgmt/glusterd: display snapd status as part of volume status
    
    * Made changes to save the port used by snapd in the info file for the volume
      i.e. <glusterd-working-directory>/vols/<volname>/info
    
    This is how the gluster volume status of a volume would look like for which the
    uss feature is enabled.
    
    [root@tatooine ~]# gluster volume status vol
    Status of volume: vol
    Gluster process                                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick tatooine:/export1/vol                             49155   Y       5041
    Snapshot Daemon on localhost                            49156   Y       5080
    NFS Server on localhost                                 2049    Y       5087
    
    Task Status of Volume vol
    ------------------------------------------------------------------------------
    There are no active volume tasks
    
    Change-Id: I8f3e5d7d764a728497c2a5279a07486317bd7c6d
    BUG: 1111041
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8114
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>
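
With this change the snapd port is persisted in the volume's info file. On a build carrying the fix, it should therefore be possible to confirm the saved port on disk; the exact key name is not quoted in the commit message, so the sketch below simply greps for "snapd" and assumes the /var/lib/glusterd working directory seen in the paths earlier in this bug:

[root@snapshot09 ~]# grep -i snapd /var/lib/glusterd/vols/newvol/info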

Comment 12 Anand Avati 2014-07-01 06:35:07 UTC
REVIEW: http://review.gluster.org/8209 (mgmt/glusterd: use the right rpc for snapd while getting pending node rpc) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

Comment 13 Anand Avati 2014-07-01 08:59:06 UTC
COMMIT: http://review.gluster.org/8209 committed in master by Kaushal M (kaushal) 
------
commit 991dd5e4709296d80358d6d076507635c6b3b1e1
Author: Raghavendra Bhat <raghavendra>
Date:   Tue Jul 1 11:57:19 2014 +0530

    mgmt/glusterd: use the right rpc for snapd while getting pending node rpc
    
    * Also changed the testcase bug-1111041.t to correctly get the snapshot
      daemon's pid
    
    Change-Id: I22c09a1e61f049f21f1886f8baa5ff421af3f8fa
    BUG: 1111041
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8209
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Sachin Pandit <spandit>
    Reviewed-by: Kaushal M <kaushal>
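
For reference, the snapshot daemon's pid can also be read directly from the pid file glusterd passes to it (the path follows the -p argument visible in the ps output in the description), for example:

[root@snapshot09 ~]# cat /var/lib/glusterd/vols/newvol/run/newvol-snapd.pid
18898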

Comment 14 Niels de Vos 2014-09-22 12:43:12 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 15 Niels de Vos 2014-11-11 08:35:22 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

