Bug 1289439 - snapd doesn't come up automatically after node reboot.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assigned To: Avra Sengupta
QA Contact: Anil Shah
Keywords: Patch, Triaged, ZStream
Depends On: 1322765
Blocks: 1299184 1316437 1316806
Reported: 2015-12-08 01:41 EST by Shashank Raj
Modified: 2016-11-07 22:52 EST (History)
7 users

See Also:
Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1316437
Environment:
Last Closed: 2016-06-23 00:58:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Shashank Raj 2015-12-08 01:41:16 EST
Description of problem:

snapd doesn't come up automatically after node reboot.

Version-Release number of selected component (if applicable):

glusterfs-3.7.5-9

How reproducible:

Always

Steps to Reproduce:
1. Create a volume and start it.
2. Enable USS on the volume.
3. Make sure snapd is running on all the nodes in the cluster.
4. Reboot any of the nodes in the cluster.
5. Observe that once the node is up, snapd is no longer running on that node.
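
The steps above can be sketched as a shell session. The volume name (`testvol`) and peer/brick names (`node1:/rhs/brick1/b1`, etc.) are illustrative, not from this report, and the commands are guarded so the script is a harmless no-op on hosts without the gluster CLI:

```shell
# Reproduction sketch, assuming a four-node trusted storage pool with
# bricks under /rhs/brick1 on each peer.
if command -v gluster >/dev/null 2>&1; then
    # Steps 1-2: create a 2 x 2 distributed-replicate volume and start it.
    gluster volume create testvol replica 2 \
        node1:/rhs/brick1/b1 node2:/rhs/brick1/b2 \
        node3:/rhs/brick1/b3 node4:/rhs/brick1/b4
    gluster volume start testvol
    # Step 3: enable USS; glusterd should spawn snapd on every node.
    gluster volume set testvol features.uss enable
    # Check: expect a "Snapshot Daemon" line with Online=Y for each node.
    gluster volume status testvol | grep 'Snapshot Daemon'
    # Steps 4-5: reboot one node (e.g. `init 6`), then re-run
    # `gluster volume status` from another node and check snapd there.
    repro_ran=yes
else
    repro_ran=skipped
    echo "gluster CLI not found; skipping reproduction"
fi
```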

Actual results:

snapd is not running once the node comes up after reboot.

Expected results:

snapd should start automatically on the rebooted node once it comes up.
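
A quick post-reboot check for the expected behavior can be sketched as below. The volume name `testvol` is illustrative, and the script no-ops on hosts without the gluster CLI:

```shell
# Count the nodes on which snapd is reported Online (Y) after the reboot;
# with the fix, this should equal the number of nodes in the pool.
if command -v gluster >/dev/null 2>&1; then
    snapd_up=$(gluster volume status testvol | grep -c 'Snapshot Daemon.*Y')
    echo "snapd online on $snapd_up node(s)"
else
    snapd_up=0
    echo "gluster CLI not found; skipping check"
fi
```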

Additional info:
Comment 3 Avra Sengupta 2016-03-10 04:48:23 EST
Master URL: http://review.gluster.org/#/c/13665/ (IN REVIEW)
Comment 6 Anil Shah 2016-04-18 02:57:52 EDT
Verification of this bug is blocked by an introduced bug, https://bugzilla.redhat.com/show_bug.cgi?id=1322765.
Waiting for the next build with the fix for BZ#1322765; only then can this bug be verified.
Comment 7 Anil Shah 2016-04-25 04:21:11 EDT
[root@dhcp46-4 ~]# gluster v set newvol uss enable
volume set: success
[root@dhcp46-4 ~]# gluster v info
 
Volume Name: newvol
Type: Distributed-Replicate
Volume ID: d5bd98a8-a03d-495b-8686-b372d7afb290
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/rhs/brick1/b1
Brick2: 10.70.47.46:/rhs/brick1/b2
Brick3: 10.70.46.213:/rhs/brick1/b3
Brick4: 10.70.46.148:/rhs/brick1/b4
Options Reconfigured:
features.uss: enable
features.quota-deem-statfs: on
features.barrier: disable
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
[root@dhcp46-4 ~]# gluster v status
Status of volume: newvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.4:/rhs/brick1/b1             49174     0          Y       3156 
Brick 10.70.47.46:/rhs/brick1/b2            49174     0          Y       17225
Brick 10.70.46.213:/rhs/brick1/b3           49174     0          Y       3650 
Brick 10.70.46.148:/rhs/brick1/b4           49174     0          Y       8247 
Snapshot Daemon on localhost                49180     0          Y       1828 
NFS Server on localhost                     2049      0          Y       1836 
Self-heal Daemon on localhost               N/A       N/A        Y       3137 
Quota Daemon on localhost                   N/A       N/A        Y       4377 
Snapshot Daemon on 10.70.46.148             49180     0          Y       30712
NFS Server on 10.70.46.148                  2049      0          Y       30720
Self-heal Daemon on 10.70.46.148            N/A       N/A        Y       8276 
Quota Daemon on 10.70.46.148                N/A       N/A        Y       9274 
Snapshot Daemon on 10.70.46.213             49180     0          Y       15712
NFS Server on 10.70.46.213                  2049      0          Y       15720
Self-heal Daemon on 10.70.46.213            N/A       N/A        Y       4785 
Quota Daemon on 10.70.46.213                N/A       N/A        Y       23759
Snapshot Daemon on 10.70.47.46              49180     0          Y       7514 
NFS Server on 10.70.47.46                   2049      0          Y       7522 
Self-heal Daemon on 10.70.47.46             N/A       N/A        Y       17254
Quota Daemon on 10.70.47.46                 N/A       N/A        Y       18267
==========================================
After Node reboot

[root@dhcp46-4 ~]# init 6
Connection to 10.70.46.4 closed by remote host.
Connection to 10.70.46.4 closed.
[ashah@localhost ~]$ ssh root@10.70.46.4
root@10.70.46.4's password: 
Last login: Mon Apr 25 17:45:18 2016
[root@dhcp46-4 ~]# gluster v status
Status of volume: newvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.4:/rhs/brick1/b1             49174     0          Y       2621 
Brick 10.70.47.46:/rhs/brick1/b2            49174     0          Y       17225
Brick 10.70.46.213:/rhs/brick1/b3           49174     0          Y       3180 
Brick 10.70.46.148:/rhs/brick1/b4           49174     0          Y       8247 
Snapshot Daemon on localhost                49180     0          Y       2664 
NFS Server on localhost                     2049      0          Y       2564 
Self-heal Daemon on localhost               N/A       N/A        Y       2583 
Quota Daemon on localhost                   N/A       N/A        Y       2595 
Snapshot Daemon on 10.70.47.46              49180     0          Y       7514 
NFS Server on 10.70.47.46                   2049      0          Y       7522 
Self-heal Daemon on 10.70.47.46             N/A       N/A        Y       17254
Quota Daemon on 10.70.47.46                 N/A       N/A        Y       18267
Snapshot Daemon on 10.70.46.148             49180     0          Y       30712
NFS Server on 10.70.46.148                  2049      0          Y       30720
Self-heal Daemon on 10.70.46.148            N/A       N/A        Y       8276 
Quota Daemon on 10.70.46.148                N/A       N/A        Y       9274 
Snapshot Daemon on 10.70.46.213             49180     0          Y       3201 
NFS Server on 10.70.46.213                  2049      0          Y       3156 
Self-heal Daemon on 10.70.46.213            N/A       N/A        Y       3163 
Quota Daemon on 10.70.46.213                N/A       N/A        Y       3171 
 
Bug verified on build glusterfs-3.7.9-2.el7rhgs.x86_64
Comment 9 errata-xmlrpc 2016-06-23 00:58:54 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
