Bug 1265281
Summary: 'ignore_deletes' and 'use_meta_volume' values displayed incorrectly

| Field | Value |
|---|---|
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | rhsc |
| Status | CLOSED WONTFIX |
| Severity | medium |
| Priority | medium |
| Version | rhgs-3.1 |
| Hardware | x86_64 |
| OS | Linux |
| Reporter | Harold Miller <hamiller> |
| Assignee | Sahina Bose <sabose> |
| QA Contact | Sweta Anandpara <sanandpa> |
| CC | anbabu, bkunal, hamiller, nlevinki, rhs-bugs, sabose, sanandpa, sankarshan |
| Keywords | ZStream |
| Target Milestone | --- |
| Target Release | --- |
| Doc Type | Bug Fix |
| Clones | 1265284 (view as bug list) |
| Bug Blocks | 1265284 |
| Type | Bug |
| Last Closed | 2018-06-15 17:13:32 UTC |
Description
Harold Miller, 2015-09-22 14:30:15 UTC
RPM Files - Versions:
rhsc-setup-3.1.0-0.62.el6.noarch
rhsc-setup-base-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-3.1.0-0.62.el6.noarch
rhsc-setup-plugin-ovirt-engine-common-3.1.0-0.62.el6.noarch
rhsc-setup-plugins-3.1.0-3.el6rhs.noarch

It looks like the default boolean value is not set correctly in the UI. Anmol, can you confirm?

The issue in the UI was fixed as part of the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1233171; in fact, the patch that fixes this bug, https://gerrit.ovirt.org/#/c/42459/, defaults the boolean to false. Could you please confirm whether the information was synced (it takes about 5 minutes), or try clicking the Sync button to sync manually and then open the config dialog again. Let me know if it still doesn't work.

Anmol, has the fix been released? Or am I supposed to build a custom reproducer to test this? Please let me know.
Harold Miller
RHGS Support Emerging Technologies

Created attachment 1111924 [details]
Screenshot of issue
Customer confirms that the GUI does not match values changed in the CLI. Screenshot attached.

Update please?

I am not able to reproduce this issue in the RHGSC 3.1.2 build. I tried setting the options 'ignore_deletes' and 'use_meta_volume' to every combination of true and false (true/true, true/false, false/true, false/false); the UI always shows the same values given in the CLI. Do we know the RHSC version at the customer site? Also, as explained in comment#7, you have to wait 5 minutes or click 'Sync' to sync the changes made in the CLI to RHSC.

The network communication error seems to indicate some underlying issue. Are the hosts shown as UP in the console? Are other volume management actions successful? Could you attach the engine and vdsm logs from when you get the error?

If the slave cluster is not managed by RHS-C, how was the geo-rep session set up? Outside of the console and then synced? Can you check, when the slave cluster is created via the CLI, whether all options are synced?

Looking at the vdsm logs, there seem to be communication errors in the vdsm logs of all nodes (glfs-brick01 - glfs-brick04). Are there communication errors between the engine and the nodes; are other volume operations successful? For instance, is volume capacity monitoring being updated correctly? I also do not see the geo-rep configuration being queried from vdsm. Either the scheduler that runs the geoRepSync job is not running, or there is an exception. I will need the engine logs to check this; I don't see them attached to the case. Can the customer click "Sync" on the geo-rep sub-tab to make sure that the sync completes successfully?

If the schedule that periodically syncs geo-rep is not running, the customer may need to restart the ovirt-engine service:
# service ovirt-engine restart

Cancelled hamiller's needinfo as well. Apologies. Setting the needinfo on myself that got cleared by mistake.

@Sahina: can we close this bug? I don't see a point in keeping it open.

(In reply to Bipin Kunal from comment #24)
> @Sahina: can we close this bug? I don't see a point in keeping it open.

Done
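For reference, the CLI-side values the console is being compared against can be inspected with the geo-replication `config` subcommand. This is a sketch only: the volume and slave names below are placeholders (not taken from the bug), and the commands require a live Gluster geo-replication session, so they cannot be run standalone.

```shell
# Placeholder names -- substitute the customer's master volume,
# slave host, and slave volume.
MASTER_VOL=mastervol
SLAVE=slavehost::slavevol

# Show the current value of the two options in question.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" config ignore_deletes
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" config use_meta_volume

# Change a value from the CLI (what the customer did). The console
# should reflect this after its periodic sync (~5 minutes) or a
# manual click on the Sync button in the geo-rep sub-tab.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" config use_meta_volume true
```

If the console still shows stale values after a manual Sync, that points at the engine-side scheduler or engine-to-vdsm communication rather than the CLI, which is what the log requests above are meant to narrow down.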