Bug 1163030 - [USS]: snapd process does not start on newly attached nodes in the cluster
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: 3.0
Hardware: x86_64 Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.3
Assigned To: Sachin Pandit
QA Contact: Rahul Hinduja
Whiteboard: USS
Keywords: ZStream
Depends On:
Blocks: 1162694
 
Reported: 2014-11-12 04:21 EST by Rahul Hinduja
Modified: 2016-09-17 09:00 EDT
CC: 8 users

See Also:
Fixed In Version: glusterfs-3.6.0.33-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-01-15 08:42:16 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---




External Trackers
Tracker ID: Red Hat Product Errata RHBA-2015:0038
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage 3.0 enhancement and bug fix update #3
Last Updated: 2015-01-15 13:35:28 EST

Description Rahul Hinduja 2014-11-12 04:21:14 EST
Description of problem:
=======================

If a new node is attached to a cluster where USS is enabled, the snapd process does not start on the newly attached node. This makes the snapshot world (.snaps) unreachable for any client that is mounted against this newly attached node.

[root@inception ~]# gluster peer status
Number of Peers: 2

Hostname: rhs-arch-srv2.lab.eng.blr.redhat.com
Uuid: b51831ee-139e-4b02-83dc-cd1f140b8e5f
State: Peer in Cluster (Connected)

Hostname: rhs-arch-srv3.lab.eng.blr.redhat.com
Uuid: b2943c56-5622-43d2-adea-77e79108cdd3
State: Peer in Cluster (Connected)
[root@inception ~]# 


The newly attached node is: rhs-arch-srv3.lab.eng.blr.redhat.com

[root@rhs-arch-srv3 ~]# ps -eaf | grep snapd
root     10181  6085  0 09:17 pts/0    00:00:00 grep snapd
[root@rhs-arch-srv3 ~]# 

[root@rhs-arch-srv3 ~]# gluster v i | grep uss
features.uss: on
[root@rhs-arch-srv3 ~]# 
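
Besides the process list, gluster volume status can serve as a cross-check: when USS is enabled and snapd is up, a "Snapshot Daemon" row is expected for each node. A minimal check along these lines (volume name testvol is an illustrative placeholder; exact output layout may differ by build):

[root@rhs-arch-srv3 ~]# gluster volume status testvol | grep -i "Snapshot Daemon"   # no row here for the newly attached node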


Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.6.0.32-1.el6rhs.x86_64


How reproducible:
=================
always


Steps to Reproduce:
===================
1. Create a 2-node cluster
2. Create a 2x2 (distributed-replicate) volume from the 2-node cluster
3. Enable USS
4. Verify that the snapd process is running on both nodes
5. Add another node to the cluster (see the console sketch below)
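
A minimal console sketch of these steps, with hostnames node1/node2/node3, brick paths, and volume name testvol as illustrative placeholders:

[root@node1 ~]# gluster peer probe node2
[root@node1 ~]# gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b1 node1:/bricks/b2 node2:/bricks/b2
[root@node1 ~]# gluster volume start testvol
[root@node1 ~]# gluster volume set testvol features.uss enable
[root@node1 ~]# ps -ef | grep snapd     # snapd should now be running on node1 and node2
[root@node1 ~]# gluster peer probe node3
[root@node3 ~]# ps -ef | grep snapd     # with this bug, no snapd process appears on node3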

Actual results:
===============

snapd is not started on the newly attached node.


Expected results:
=================

snapd should start on the newly attached node, because that node could serve as the primary server for a client; if snapd is not running there, such a client cannot enter the snapshot world at all.
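
For context, the client-side check of the snapshot world might look like the following; the mount point and volume name are illustrative, and .snaps is the default USS entry directory:

[root@client ~]# mount -t glusterfs rhs-arch-srv3.lab.eng.blr.redhat.com:/testvol /mnt/glusterfs
[root@client ~]# ls /mnt/glusterfs/.snaps    # fails when snapd is not running on the server the client mounted against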
Comment 3 Sachin Pandit 2014-11-17 03:47:41 EST
This issue is resolved, and the patch that fixes it is under review upstream. I'll send the corresponding patch downstream once the upstream patch is merged.
Comment 4 Sachin Pandit 2014-11-17 06:42:30 EST
https://code.engineering.redhat.com/gerrit/#/c/36772/ fixes the issue
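Until the fixed downstream build is available, a possible interim workaround (not verified in this report) is to toggle USS, so that glusterd respawns snapd on every node, including the newly attached one; volume name testvol is an illustrative placeholder:

[root@inception ~]# gluster volume set testvol features.uss disable
[root@inception ~]# gluster volume set testvol features.uss enable
[root@rhs-arch-srv3 ~]# ps -ef | grep snapd    # re-check that snapd now runs on the new node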
Comment 8 errata-xmlrpc 2015-01-15 08:42:16 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html
