Bug 978310

Summary: [RHSC] When trying to reset all options set on the volume, it still retains one option called user.cifs=on on 2.0U5 nodes.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: RamaKasturi <knarra>
Component: rhsc
Assignee: Sahina Bose <sabose>
Status: CLOSED DUPLICATE
QA Contact: RamaKasturi <knarra>
Severity: unspecified
Docs Contact:
Priority: high
Version: 2.1
CC: dtsang, knarra, mmahoney, pprakash, rhs-bugs, ssampat
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-03 09:54:11 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: Attaching engine and vdsm logs. (no flags)

Description RamaKasturi 2013-06-26 10:41:43 UTC
Created attachment 765506 [details]
Attaching engine and vdsm logs.

Description of problem:
When trying to reset all options set on the volume, it still retains one option called user.cifs=on.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0.10rhs-1.el6rhs.x86_64
vdsm-4.9.6-24.el6rhs.x86_64
rhsc-2.1.0-0.bb4.el6rhs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a 3.1 cluster and add 2.0U5 nodes to it.
2. Create a volume from rhsc UI
3. Now click on the created volume, open the Volume Options subtab, and click the "Reset All" link.
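The residual option can also be checked outside the UI. The sketch below is hypothetical: since a live gluster cluster is not assumed here, it parses a sample capture of `gluster volume info Vol4` output (volume and option names taken from this report) and flags any options still listed under "Options Reconfigured:" after the reset.

```shell
# Sample capture of `gluster volume info Vol4` taken after "Reset All"
# (hypothetical output; field layout follows the usual gluster CLI format).
sample_output='Volume Name: Vol4
Type: Distribute
Status: Started
Options Reconfigured:
user.cifs: on'

# Keep everything after the "Options Reconfigured:" header line.
leftover=$(printf '%s\n' "$sample_output" \
  | sed -n '/^Options Reconfigured:/,$p' \
  | tail -n +2)

if [ -n "$leftover" ]; then
  echo "BUG: options remain after Reset All -> $leftover"
else
  echo "OK: all options reset"
fi
```

With the sample above, the script reports the leftover user.cifs option, matching the behavior described in this bug.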

Actual results:
It removes all the options except the user.cifs option, and the Events tab says "volume options has been reset" and "Detected new option user.cifs=on on volume Vol4 of cluster Cluster_anshi, and added it to engine DB."

Expected results:
It should remove all the available options from the volume.

Additional info:

Comment 2 Sahina Bose 2013-07-02 06:15:28 UTC
Possibly a glusterfs bug

Comment 3 Sahina Bose 2013-07-03 09:54:11 UTC

*** This bug has been marked as a duplicate of bug 880058 ***