Description of problem:
snapd does not come up automatically after a node reboot.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and start it.
2. Enable USS on the volume.
3. Make sure snapd is running on all the nodes in the cluster.
4. Reboot any of the nodes in the cluster.
5. Observe that once the node is up, snapd is no longer running on that node.

Actual results:
snapd is not running once the node comes up after reboot.

Expected results:
snapd should start automatically on the rebooted node once it comes up.

Additional info:
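A minimal shell sketch of the reproduction (the volume name "testvol", host names, and brick paths below are hypothetical; the gluster CLI calls themselves are standard):

# 1. Create a volume and start it
gluster volume create testvol replica 2 host1:/bricks/b1 host2:/bricks/b2
gluster volume start testvol
# 2. Enable USS on the volume
gluster volume set testvol features.uss enable
# 3. Confirm snapd is running on all nodes (Online column should show Y)
gluster volume status testvol | grep "Snapshot Daemon"
# 4. Reboot any one node; 5. once it is back up, its Snapshot Daemon
#    entry is missing from the status output:
gluster volume status testvol | grep "Snapshot Daemon"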
Master URL: http://review.gluster.org/#/c/13665/ (IN REVIEW)
Master URL: http://review.gluster.org/#/c/13665/
Release 3.7 URL: http://review.gluster.org/#/c/13675/
RHGS 3.1.3 URL: https://code.engineering.redhat.com/gerrit/#/c/70553/
Verification of this bug is blocked by the newly introduced bug https://bugzilla.redhat.com/show_bug.cgi?id=1322765. Waiting for the next build in which BZ#1322765 is Fixed; only then can this bug be verified.
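Once that build is available, the installed package set can be confirmed before re-running the verification, for example:

# sketch: confirm the node carries the expected glusterfs build
rpm -qa | grep glusterfs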
features.inode-quota: on
[root@dhcp46-4 ~]# gluster v set newvol uss enable
volume set: success
[root@dhcp46-4 ~]# gluster v info

Volume Name: newvol
Type: Distributed-Replicate
Volume ID: d5bd98a8-a03d-495b-8686-b372d7afb290
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.4:/rhs/brick1/b1
Brick2: 10.70.47.46:/rhs/brick1/b2
Brick3: 10.70.46.213:/rhs/brick1/b3
Brick4: 10.70.46.148:/rhs/brick1/b4
Options Reconfigured:
features.uss: enable
features.quota-deem-statfs: on
features.barrier: disable
cluster.entry-change-log: enable
changelog.capture-del-path: on
changelog.changelog: on
storage.build-pgfid: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on

[root@dhcp46-4 ~]# gluster v status
Status of volume: newvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.4:/rhs/brick1/b1             49174     0          Y       3156
Brick 10.70.47.46:/rhs/brick1/b2            49174     0          Y       17225
Brick 10.70.46.213:/rhs/brick1/b3           49174     0          Y       3650
Brick 10.70.46.148:/rhs/brick1/b4           49174     0          Y       8247
Snapshot Daemon on localhost                49180     0          Y       1828
NFS Server on localhost                     2049      0          Y       1836
Self-heal Daemon on localhost               N/A       N/A        Y       3137
Quota Daemon on localhost                   N/A       N/A        Y       4377
Snapshot Daemon on 10.70.46.148             49180     0          Y       30712
NFS Server on 10.70.46.148                  2049      0          Y       30720
Self-heal Daemon on 10.70.46.148            N/A       N/A        Y       8276
Quota Daemon on 10.70.46.148                N/A       N/A        Y       9274
Snapshot Daemon on 10.70.46.213             49180     0          Y       15712
NFS Server on 10.70.46.213                  2049      0          Y       15720
Self-heal Daemon on 10.70.46.213            N/A       N/A        Y       4785
Quota Daemon on 10.70.46.213                N/A       N/A        Y       23759
Snapshot Daemon on 10.70.47.46              49180     0          Y       7514
NFS Server on 10.70.47.46                   2049      0          Y       7522
Self-heal Daemon on 10.70.47.46             N/A       N/A        Y       17254
Quota Daemon on 10.70.47.46                 N/A       N/A        Y       18267

==========================================
After node reboot

[root@dhcp46-4 ~]# init 6
Connection to 10.70.46.4 closed by remote host.
Connection to 10.70.46.4 closed.
[ashah@localhost ~]$ ssh root@10.70.46.4
root@10.70.46.4's password:
Last login: Mon Apr 25 17:45:18 2016
[root@dhcp46-4 ~]# gluster v status
Status of volume: newvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.4:/rhs/brick1/b1             49174     0          Y       2621
Brick 10.70.47.46:/rhs/brick1/b2            49174     0          Y       17225
Brick 10.70.46.213:/rhs/brick1/b3           49174     0          Y       3180
Brick 10.70.46.148:/rhs/brick1/b4           49174     0          Y       8247
Snapshot Daemon on localhost                49180     0          Y       2664
NFS Server on localhost                     2049      0          Y       2564
Self-heal Daemon on localhost               N/A       N/A        Y       2583
Quota Daemon on localhost                   N/A       N/A        Y       2595
Snapshot Daemon on 10.70.47.46              49180     0          Y       7514
NFS Server on 10.70.47.46                   2049      0          Y       7522
Self-heal Daemon on 10.70.47.46             N/A       N/A        Y       17254
Quota Daemon on 10.70.47.46                 N/A       N/A        Y       18267
Snapshot Daemon on 10.70.46.148             49180     0          Y       30712
NFS Server on 10.70.46.148                  2049      0          Y       30720
Self-heal Daemon on 10.70.46.148            N/A       N/A        Y       8276
Quota Daemon on 10.70.46.148                N/A       N/A        Y       9274
Snapshot Daemon on 10.70.46.213             49180     0          Y       3201
NFS Server on 10.70.46.213                  2049      0          Y       3156
Self-heal Daemon on 10.70.46.213            N/A       N/A        Y       3163
Quota Daemon on 10.70.46.213                N/A       N/A        Y       3171

Bug verified on build glusterfs-3.7.9-2.el7rhgs.x86_64
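As a quick spot check (a sketch using the volume name from this run), the snapd state on the rebooted node can also be confirmed with:

# after reboot, every node should again list an online Snapshot Daemon
gluster volume status newvol | grep "Snapshot Daemon"
# and the snapd process itself should be present on the rebooted node
pgrep -lf snapd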
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240