Bug 1313497 - Enabling Gluster Service post-facto on HE does not update brick info
Summary: Enabling Gluster Service post-facto on HE does not update brick info
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Gluster
Version: 3.6.1.3
Hardware: x86_64
OS: Linux
Severity: medium
Priority: medium
Target Milestone: ovirt-4.0.5
Target Release: 4.0.5
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2
 
Reported: 2016-03-01 17:26 UTC by Will Dennis
Modified: 2016-12-15 10:16 UTC (History)
4 users

Fixed In Version: 4.0.5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-15 10:16:28 UTC
oVirt Team: Gluster
rule-engine: ovirt-4.0.z+
ylavi: planning_ack+
sabose: devel_ack+
sasundar: testing_ack+


Attachments: none


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 62108 master MERGED engine: Update hosts when gluster service enabled 2016-09-01 08:35:23 UTC
oVirt gerrit 63864 ovirt-engine-4.0 MERGED engine: Update hosts when gluster service enabled 2016-09-20 06:55:12 UTC

Description Will Dennis 2016-03-01 17:26:39 UTC
Description of problem:
I had deployed Hosted Engine via the OVF image in the "ovirt-engine-appliance" package on my hyperconverged setup (2 virt hosts, which also serve bricks for the two Gluster volumes backing the hosted_storage and vm_storage storage domains). Some time after deployment I found out that the Gluster volume management integration isn't enabled by default in this OVF image. Based on a mailing list thread response (http://lists.ovirt.org/pipermail/users/2016-March/038188.html), I executed the SQL update statement that enables the Gluster integration in the UI. After restarting the "ovirt-engine" service on the HE host, I could see the Gluster volume nodes in the UI and browse to them for info/management purposes, but noticed that the "Number of Bricks" attribute read "0" and nothing was showing in the "Bricks" tab.

Based on a follow-up mailing list thread response, setting each host to "Maintenance" and then activating it again updated the Gluster bricks on a per-host basis; once I had done this for every host in my setup, I saw a true and correct list of bricks per volume.

Version-Release number of selected component (if applicable):
ovirt-engine-appliance-3.6-20160126.1.el7.centos.noarch

How reproducible:
Do an automatic install of HE from the OVF, then update the Postgres table "vdc_options", setting the value of ApplicationMode to '255'.

Steps to Reproduce:
1. Deploy Hosted Engine automatically from the OVF image.
2. Update the Postgres table "vdc_options", setting ApplicationMode to '255', and restart the "ovirt-engine" service.
3. In the UI, open a Gluster volume and check the "Number of Bricks" attribute and the "Bricks" tab.
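The reproduction described above can be sketched as follows. This is a hedged example: the UPDATE statement matches the one circulated on the mailing list, but the database name ("engine") is the default and should be verified against your deployment before running anything.

```shell
# On the Hosted Engine VM: flip the engine's application mode to
# "both virt and gluster" (255) directly in the engine database.
# The database name "engine" is the default; verify it on your setup.
sudo -u postgres psql engine -c \
  "UPDATE vdc_options SET option_value = '255' WHERE option_name = 'ApplicationMode';"

# The engine reads vdc_options at startup, so restart it to pick up the change.
systemctl restart ovirt-engine
```

After the restart, the Gluster volumes appear in the UI, which is where the missing brick info described in this bug becomes visible.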

Actual results:
The Gluster volume info showed up, but the number of bricks per volume showed as "0".
I had to go through the oVirt hosts one by one, putting each into Maintenance mode and then activating it, to get the brick info to show in the UI.

Expected results:
When the Gluster service is enabled post-facto, not only the volume info but also the underlying brick info per volume should be populated.

Additional info:

Comment 1 Will Dennis 2016-03-01 17:28:55 UTC
ERRATA: The line above stating "2 virt hosts, also servicing bricks" should read "3 virt hosts, also servicing bricks"

Comment 2 Sahina Bose 2016-08-09 08:56:10 UTC
Re-targeting to 4.0.4, as this bug prevents adding HC hosts to the cluster (if the gluster service is enabled after the first host is added).

Comment 3 SATHEESARAN 2016-10-21 08:23:28 UTC
Tested with RHV 4.0.5.1-0.1.el7ev:

1. Set up the hosted engine on one of the hypervisors (this host also serves gluster volumes).
2. Once the hosted engine was up, updated the engine-config to enable the gluster capability on the cluster.
3. From the RHV UI, edited the cluster to check the 'gluster' capability.

Once the gluster capability is enabled, bricks, volumes, peers and other information are updated.
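The verification flow above might look like the following. This is a sketch under assumptions: whether the ApplicationMode key is exposed through engine-config in a given version should be checked first (engine-config -l lists the available keys); the SQL route from the original description is the documented fallback.

```shell
# On the engine VM: enable combined virt+gluster application mode (255).
# Assumption: ApplicationMode is settable via engine-config in this
# version -- verify with `engine-config -l` before relying on it.
engine-config -s ApplicationMode=255
systemctl restart ovirt-engine

# Then, in the Administration Portal, edit the cluster and enable the
# Gluster service; with the fix in 4.0.5, bricks, volumes and peers
# populate without cycling each host through Maintenance.
```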

