Bug 1265281

Summary: 'ignore_deletes' and 'use_meta_volume' values displayed incorrectly
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Harold Miller <hamiller>
Component: rhsc
Assignee: Sahina Bose <sabose>
Status: CLOSED WONTFIX
QA Contact: Sweta Anandpara <sanandpa>
Severity: medium
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: anbabu, bkunal, hamiller, nlevinki, rhs-bugs, sabose, sanandpa, sankarshan
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1265284 (view as bug list)
Environment:
Last Closed: 2018-06-15 17:13:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1265284
Attachments: Screenshot of issue (flags: none)

Description Harold Miller 2015-09-22 14:30:15 UTC
Description of problem: The values shown in the console for 'ignore_deletes' and 'use_meta_volume' are the opposite of the values shown in the CLI.


Version-Release number of selected component (if applicable): RHS-c 3.1


How reproducible: Every time


Steps to Reproduce:
1. Set values in the CLI for 'ignore_deletes' and 'use_meta_volume' (see the example commands below)
2. Inspect the values in the Console
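
A minimal reproduction sketch using the gluster geo-replication CLI; the master volume and slave names below are placeholders, not taken from this report:

# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config ignore_deletes false
# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config use_meta_volume true

Then compare these values against what the console shows for the same geo-rep session.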

Actual results: In the CLI, ignore_deletes=false and use_meta_volume=true; in the console, ignore_deletes=true and use_meta_volume=false.


Expected results: The values shown in the console should match the values set in the CLI.


Additional info:

Comment 2 Harold Miller 2015-09-24 15:56:18 UTC
RPM Files - Versions
rhsc-setup-3.1.0-0.62.el6.noarch
rhsc-setup-base-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-common-3.1.0-0.62.el6.noarch
rhsc-setup-plugins-3.1.0-3.el6rhs.noarch

Comment 6 Sahina Bose 2015-12-24 11:33:28 UTC
It looks like the default boolean value is not set correctly in the UI. Anmol, can you confirm?

Comment 7 anmol babu 2015-12-31 13:03:34 UTC
The issue in the UI was fixed as part of the fix for

https://bugzilla.redhat.com/show_bug.cgi?id=1233171

and in fact the patch that fixes this bug, i.e.,

https://gerrit.ovirt.org/#/c/42459/ 

defaults the boolean to false.

Could you please confirm whether the information was synced (it takes roughly 5 minutes),
or try clicking the Sync button to sync manually, and then try opening the config dialog.

Let me know if it still doesn't work.

Comment 8 Harold Miller 2015-12-31 16:17:19 UTC
Anmol,

Has the fix been released? Or am I supposed to build a custom reproducer to test this?
Please let me know,

Harold Miller
RHGS Support
Emerging Technologies

Comment 9 Harold Miller 2016-01-05 18:50:25 UTC
Created attachment 1111924 [details]
Screenshot of issue

Comment 10 Harold Miller 2016-01-05 18:51:22 UTC
Customer confirms that the GUI does not match values changed in CLI. Screenshot attached.

Comment 11 Harold Miller 2016-01-19 19:25:20 UTC
Update please?

Comment 12 Ramesh N 2016-01-20 08:36:36 UTC
I am not able to reproduce this issue with the RHGSC 3.1.2 build. I tried setting the 'ignore_deletes' and 'use_meta_volume' options to every combination (true/true, true/false, false/true, and false/false); the UI always shows the same values set in the CLI.

Do we know the RHSC version at the customer site? Also, as explained in comment #7, you have to wait for 5 minutes or click 'Sync' for the changes made in the CLI to be synced to RHSC.

Comment 16 Sahina Bose 2016-01-28 08:42:11 UTC
The network communication error seems to indicate some underlying issue. Are the hosts shown as UP in the console? Are other volume management actions successful?

Could you attach the engine and vdsm logs from when you get the error?

If the slave cluster is not managed by RHS-C, how was the geo-rep session set up? Was it created outside of the console and then synced?

Comment 19 Sahina Bose 2016-03-16 06:29:21 UTC
Can you check whether all options are synced when the slave cluster is created via the CLI?
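
For example, the full set of session options can be listed from the gluster CLI and compared with what the console shows (the master volume and slave names below are placeholders):

# gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config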

Comment 20 Sahina Bose 2016-03-17 08:28:03 UTC
Looking at the vdsm logs, there seem to be communication errors in the vdsm logs of all nodes (glfs-brick01 - glfs-brick04). Are there any errors in communication between the engine and the nodes? Are other volume operations successful?
For instance, is volume capacity monitoring being updated correctly?

I also do not see the geo-rep configuration being queried from vdsm.
Either the scheduler that runs the geoRepSyncjob is not running, or there is an exception. We will need the engine logs to check this; I don't see them attached to the case.

Can the customer click on "Sync" on the geo-rep sub tab to make sure that sync completes successfully?
If the scheduler that periodically syncs geo-rep is not running, the customer may need to restart the ovirt-engine service:
# service ovirt-engine restart
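
After the restart, a quick way to check whether the periodic geo-rep sync job is running is to look at the engine log (a sketch, assuming the default oVirt engine log location; the grep pattern is only a guess at the relevant entries):

# grep -i georep /var/log/ovirt-engine/engine.log | tail -n 20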

Comment 22 Sweta Anandpara 2018-01-19 03:38:30 UTC
Cancelled hamiller's needinfo as well. Apologies.

Comment 23 Sweta Anandpara 2018-01-25 05:08:46 UTC
Setting the needinfo on myself that got cleared by mistake.

Comment 24 Bipin Kunal 2018-05-30 12:04:59 UTC
@Sahina: can we close this bug? I don't see a point in keeping it open.

Comment 25 Sahina Bose 2018-06-15 17:13:32 UTC
(In reply to Bipin Kunal from comment #24)
> @Sahina: can we close this bug? I don't see a point in keeping it open.

Done