Bug 1168606 - [USS]: setting the uss option to on fails when volume is in stopped state
Summary: [USS]: setting the uss option to on fails when volume is in stopped state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Avra Sengupta
QA Contact: Anil Shah
URL:
Whiteboard: USS
Depends On:
Blocks: 1168830 1351522
 
Reported: 2014-11-27 11:59 UTC by Rahul Hinduja
Modified: 2017-03-23 05:21 UTC (History)
5 users

Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1168830
Environment:
Last Closed: 2017-03-23 05:21:03 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Rahul Hinduja 2014-11-27 11:59:02 UTC
Description of problem:
=======================

Generally, volume set operations work even when the volume is in the stopped state. In the case of USS, however, if the volume is stopped, setting uss to "on" fails while setting it to "off" succeeds. The volume set operation for uss should be independent of volume state: setting it to "on" should be allowed while the volume is stopped, and the snapd process should be started once the volume is started.


[root@inception ~]# gluster v stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol1: success
[root@inception ~]# gluster v set vol1 uss on
volume set: failed: Commit failed on localhost. Please check the log file for more details.
[root@inception ~]# gluster v set vol0 uss on
volume set: failed: Commit failed on localhost. Please check the log file for more details.
[root@inception ~]# gluster v set vol0 uss off
volume set: success
[root@inception ~]# gluster v set vol1 uss off
volume set: success
[root@inception ~]# 
[root@inception ~]# gluster v info vol0 | grep "Status:"
Status: Stopped
[root@inception ~]# gluster v info vol1 | grep "Status:"
Status: Stopped
[root@inception ~]# 


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0.34-1.el6rhs.x86_64


How reproducible:
=================

always

Steps to Reproduce:
===================
1. create a cluster
2. create a volume
3. set the volume option uss to on

Actual results:
===============

It fails with "volume set: failed: Commit failed on localhost. Please check the log file for more details."


Expected results:
=================

It should succeed, and once the volume is started, the snapd process should be started.


Additional info:
================

When the volume is in the stopped state, we can set volume options such as:

self-heal-daemon on/off
uss off
nfs.disable 0/1
etc...
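On a stopped volume, the asymmetry looks like this (an illustrative session; the volume name is a placeholder, it assumes glusterd is up, and the outcomes in the comments are those reported above):

```shell
# vol1 is assumed to exist and be in the Stopped state.
gluster volume set vol1 cluster.self-heal-daemon off   # succeeds
gluster volume set vol1 nfs.disable on                 # succeeds
gluster volume set vol1 features.uss off               # succeeds
gluster volume set vol1 features.uss on                # fails: "Commit failed on localhost"
```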

Comment 3 Avra Sengupta 2016-07-27 09:02:46 UTC
Has been fixed in master as part of http://review.gluster.org/#/c/9206

Comment 4 Avra Sengupta 2016-07-27 10:03:19 UTC
Marking this as MODIFIED, as this will be part of the next build

Comment 5 Atin Mukherjee 2016-07-29 07:46:38 UTC
Until the rebase happens, the bug should be kept in the POST state.

Comment 7 Atin Mukherjee 2016-09-17 15:37:15 UTC
Upstream mainline : http://review.gluster.org/9206
Upstream 3.8 : Available as part of branching from mainline

And the fix is available in rhgs-3.2.0 as part of rebase to GlusterFS 3.8.4.

Comment 12 errata-xmlrpc 2017-03-23 05:21:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

