Bug 1265281 - 'ignore_deletes' and 'use_meta_volume' values displayed incorrectly [NEEDINFO]
Status: ASSIGNED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 3.1
Hardware: x86_64 Linux
Priority: medium    Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Sahina Bose
QA Contact: RHS-C QE
Keywords: ZStream
Depends On:
Blocks: 1265284
Reported: 2015-09-22 10:30 EDT by Harold Miller
Modified: 2017-03-25 12:25 EDT
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1265284
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
sabose: needinfo? (trao)
sabose: needinfo? (hamiller)


Attachments
Screenshot of issue (89.61 KB, image/png)
2016-01-05 13:50 EST, Harold Miller

Description Harold Miller 2015-09-22 10:30:15 EDT
Description of problem: The values shown in the console are the opposite of the values shown in the CLI for 'ignore_deletes' and 'use_meta_volume'.


Version-Release number of selected component (if applicable): RHS-C 3.1


How reproducible: Every time


Steps to Reproduce:
1. Set values in the CLI for 'ignore_deletes' and 'use_meta_volume'
2. Inspect the values in the console

Actual results: In the CLI, ignore_deletes=false and use_meta_volume=true; in the console, ignore_deletes=true and use_meta_volume=false.
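
For reference, a rough sketch of the gluster CLI commands typically used to set and check these options (the names 'mastervol', 'slavehost' and 'slavevol' are placeholders, not from this case, and the exact option spelling can vary by gluster version):

# set the options on the geo-rep session
gluster volume geo-replication mastervol slavehost::slavevol config ignore_deletes false
gluster volume geo-replication mastervol slavehost::slavevol config use_meta_volume true

# list the session configuration to compare against what the console shows
gluster volume geo-replication mastervol slavehost::slavevol config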


Expected results: The values should be the same in the CLI and the console.


Additional info:
Comment 2 Harold Miller 2015-09-24 11:56:18 EDT
RPM Files - Versions
rhsc-setup-3.1.0-0.62.el6.noarch
rhsc-setup-base-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-common-3.1.0-0.62.el6.noarch
rhsc-setup-plugins-3.1.0-3.el6rhs.noarch
Comment 6 Sahina Bose 2015-12-24 06:33:28 EST
It looks like the default boolean value is not set correctly in the UI. Anmol, can you confirm?
Comment 7 anmol babu 2015-12-31 08:03:34 EST
The issue in the UI was fixed as part of the fix for

https://bugzilla.redhat.com/show_bug.cgi?id=1233171

and in fact the patch that fixes this bug, i.e.,

https://gerrit.ovirt.org/#/c/42459/ 

defaults the boolean to false.

Could you please confirm whether the information was synced (it takes about 5 minutes), or try clicking the Sync button to sync manually and then reopen the config dialog.

Let me know if it still doesn't work.
Comment 8 Harold Miller 2015-12-31 11:17:19 EST
Anmol,

Has the fix been released, or am I supposed to build a custom reproducer to test this?
Please let me know,

Harold Miller
RHGS Support
Emerging Technologies
Comment 9 Harold Miller 2016-01-05 13:50 EST
Created attachment 1111924 [details]
Screenshot of issue
Comment 10 Harold Miller 2016-01-05 13:51:22 EST
Customer confirms that the GUI does not match the values changed in the CLI. Screenshot attached.
Comment 11 Harold Miller 2016-01-19 14:25:20 EST
Update please?
Comment 12 Ramesh N 2016-01-20 03:36:36 EST
I am not able to reproduce this issue in the RHGSC 3.1.2 build. I tried setting the options 'ignore_deletes' and 'use_meta_volume' to true/true, true/false, false/true and false/false, and the UI always shows the same values as given in the CLI.

Do we know the RHSC version at the customer site? Also, as explained in comment #7, you have to wait 5 minutes or click 'Sync' to sync the changes made in the CLI to RHSC.
Comment 16 Sahina Bose 2016-01-28 03:42:11 EST
The network communication error seems to indicate some underlying issue. Are the hosts shown as UP in the console? Are other volume management actions successful?

Could you attach the engine and vdsm logs from when you get the error?

If the slave cluster is not managed by RHS-C, how was the geo-rep session set up? Was it created outside the console and then synced?
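
For context, a geo-rep session created outside the console would typically be set up with gluster CLI commands along these lines (volume and host names are placeholders, not taken from this case):

# generate and distribute the pem keys, then create and start the session
gluster system:: execute gsec_create
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

If the session was created this way, RHS-C would only pick it up through the periodic sync or a manual 'Sync'.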
Comment 19 Sahina Bose 2016-03-16 02:29:21 EDT
Can you check whether all options are synced when the slave cluster is created via the CLI?
Comment 20 Sahina Bose 2016-03-17 04:28:03 EDT
Looking at the vdsm logs, there seem to be communication errors in the vdsm logs of all nodes (glfs-brick01 - glfs-brick04). Are there errors in communication between the engine and the nodes - are other volume operations successful? For instance, is volume capacity monitoring being updated correctly?

I also do not see the geo-rep configuration being queried from vdsm. Either the scheduler that runs the geoRepSyncjob is not running, or there is an exception. I will need the engine logs to check this; I don't see them attached to the case.

Can the customer click 'Sync' on the geo-rep sub-tab to make sure that the sync completes successfully? If the schedule that periodically syncs geo-rep is not running, the customer may need to restart the ovirt-engine service:
# service ovirt-engine restart
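
For reference, the logs mentioned above are normally found at the default locations below (these paths are the usual defaults, so verify on the affected systems):

# on the RHS-C / engine server
/var/log/ovirt-engine/engine.log

# on each gluster node (vdsm)
/var/log/vdsm/vdsm.log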
