Bug 1163030 - [USS]: snapd process is not getting start on the newly attached nodes in cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.0.3
Assignee: Sachin Pandit
QA Contact: Rahul Hinduja
URL:
Whiteboard: USS
Depends On:
Blocks: 1162694
 
Reported: 2014-11-12 09:21 UTC by Rahul Hinduja
Modified: 2016-09-17 13:00 UTC
8 users

Fixed In Version: glusterfs-3.6.0.33-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-01-15 13:42:16 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0038 0 normal SHIPPED_LIVE Red Hat Storage 3.0 enhancement and bug fix update #3 2015-01-15 18:35:28 UTC

Description Rahul Hinduja 2014-11-12 09:21:14 UTC
Description of problem:
=======================

If a new node is attached to a cluster where USS is enabled, the snapd process does not start on the newly attached node. As a result, the snapshot world (the .snaps directory) is unreachable for any client mounted via this newly attached node.

[root@inception ~]# gluster peer status
Number of Peers: 2

Hostname: rhs-arch-srv2.lab.eng.blr.redhat.com
Uuid: b51831ee-139e-4b02-83dc-cd1f140b8e5f
State: Peer in Cluster (Connected)

Hostname: rhs-arch-srv3.lab.eng.blr.redhat.com
Uuid: b2943c56-5622-43d2-adea-77e79108cdd3
State: Peer in Cluster (Connected)
[root@inception ~]# 


Newly attached node is: rhs-arch-srv3.lab.eng.blr.redhat.com

[root@rhs-arch-srv3 ~]# ps -eaf | grep snapd
root     10181  6085  0 09:17 pts/0    00:00:00 grep snapd
[root@rhs-arch-srv3 ~]# 

[root@rhs-arch-srv3 ~]# gluster v i | grep uss
features.uss: on
[root@rhs-arch-srv3 ~]# 


Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.6.0.32-1.el6rhs.x86_64


How reproducible:
=================
always


Steps to Reproduce:
===================
1. Create 2 node cluster
2. Create 2*2 volume from 2 node cluster
3. Enable USS
4. Verify that the snapd process is running on both nodes
5. Add another node to the cluster
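The steps above can be sketched with the gluster CLI; host names, the volume name, and brick paths below are illustrative, not taken from the report:

```shell
# On node1 (node1 and node2 form the initial 2-node cluster)
gluster peer probe node2

# Create a 2x2 distributed-replicated volume (brick paths are illustrative)
gluster volume create testvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 \
    node1:/bricks/b2 node2:/bricks/b2
gluster volume start testvol

# Enable USS; snapd should now be running on node1 and node2
gluster volume set testvol features.uss on
ps -eaf | grep '[s]napd'      # verify on each node

# Attach a third node; with this bug, snapd does not start on node3
gluster peer probe node3
```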

Actual results:
===============

snapd is not started on the newly attached node.


Expected results:
=================

snapd should start on the newly attached node, since that node could serve as the primary server for a client; if snapd is not running there, the client cannot enter the snap world at all.
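The expected behaviour can be checked as follows; the host name, volume name, and mount point are illustrative:

```shell
# On the newly attached node, snapd should appear once it has joined
# a cluster whose volume has features.uss enabled
ps -eaf | grep '[s]napd'

# From a client mounted against the new node, the snapshot world
# should be reachable under the hidden .snaps directory
mount -t glusterfs node3:/testvol /mnt/testvol
ls /mnt/testvol/.snaps
```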

Comment 3 Sachin Pandit 2014-11-17 08:47:41 UTC
This issue is resolved and the patch which fixes this issue is under review upstream. I'll send a relevant patch downstream once the patch gets merged upstream.

Comment 4 Sachin Pandit 2014-11-17 11:42:30 UTC
https://code.engineering.redhat.com/gerrit/#/c/36772/ fixes the issue

Comment 8 errata-xmlrpc 2015-01-15 13:42:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html

