Bug 1313497

Summary: Enabling Gluster Service post-facto on HE does not update brick info
Product: [oVirt] ovirt-engine Reporter: Will Dennis <willarddennis>
Component: BLL.Gluster Assignee: Sahina Bose <sabose>
Status: CLOSED CURRENTRELEASE QA Contact: SATHEESARAN <sasundar>
Severity: medium Docs Contact:
Priority: medium    
Version: 3.6.1.3 CC: bugs, gveitmic, sabose, ylavi
Target Milestone: ovirt-4.0.5 Flags: rule-engine: ovirt-4.0.z+
ylavi: planning_ack+
sabose: devel_ack+
sasundar: testing_ack+
Target Release: 4.0.5   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: 4.0.5 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-12-15 10:16:28 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Gluster RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1277939    

Description Will Dennis 2016-03-01 17:26:39 UTC
Description of problem:
I had deployed Hosted Engine via the OVF image in package "ovirt-engine-appliance" on my hyperconverged setup (2 virt hosts, also servicing bricks for the two Gluster volumes used for hosted_storage and vm_storage storage domains.) I found out some time after deployment that the Gluster volume management integration isn't enabled by default in this OVF image. So based on a mailing list thread response (http://lists.ovirt.org/pipermail/users/2016-March/038188.html) I executed the needful SQL update statement that enabled the Gluster integration in the UI. After restarting the "ovirt-engine" service in the HE host, I could then see the Gluster volume nodes in the UI, and browse to them for info/management purposes, but noticed that the "Number of Bricks" attribute was reading "0", and nothing was showing in the "Bricks" tab.

Based on a following mailing list thread response, setting each host to "Maintenance" and then activating them again worked to update the Gluster bricks on a per-host basis, and when I had done this process to all hosts in my setup, I now see a true and correct list of bricks per volume.

Version-Release number of selected component (if applicable):
ovirt-engine-appliance-3.6-20160126.1.el7.centos.noarch

How reproducible:
Do an automatic install of HE from the OVF, then update the Postgres table "vdc_options", setting the value of ApplicationMode to '255'
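The database workaround from the mailing-list thread can be sketched as the following SQL, run against the engine database (a sketch only; the `option_name`/`option_value` column names are assumed from the oVirt `vdc_options` schema):

```sql
-- Sketch (assumed schema): enable all application modes
-- (255 = Virt + Gluster) in the engine database.
UPDATE vdc_options
   SET option_value = '255'
 WHERE option_name = 'ApplicationMode';
```

As described above, the "ovirt-engine" service must be restarted afterwards for the change to take effect.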

Steps to Reproduce:
1. Deploy Hosted Engine automatically from the OVF in the "ovirt-engine-appliance" package.
2. Update the Postgres table "vdc_options", setting the value of ApplicationMode to '255'.
3. Restart the "ovirt-engine" service and check the brick info for the Gluster volumes in the UI.

Actual results:
The Gluster volume info showed up, but the number of bricks per volume was showing as "0".
I had to go through the oVirt hosts one by one, putting each into Maintenance mode and then activating it again, to get the brick info to show in the UI.

Expected results:
When the Gluster service is enabled post-facto, not only the volume info but also the underlying brick info for each volume should be populated.

Additional info:

Comment 1 Will Dennis 2016-03-01 17:28:55 UTC
ERRATA: The line above stating "2 virt hosts, also servicing bricks" should read "3 virt hosts, also servicing bricks"

Comment 2 Sahina Bose 2016-08-09 08:56:10 UTC
Re-targeting to 4.0.4, as this bug prevents adding HC hosts to the cluster (if the gluster service is enabled after the first host is added).

Comment 3 SATHEESARAN 2016-10-21 08:23:28 UTC
Tested with RHV 4.0.5.1-0.1.el7ev 

1. Set up the hosted engine on one of the hypervisors (this host also serves gluster volumes).
2. Once the hosted engine is up, update engine-config to enable the gluster capability on the cluster.
3. From the RHV UI, edit the cluster to check the 'gluster' capability.

Once the gluster capability is enabled, bricks, volumes, peers, and other information are updated.
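For reference, step 2 above with the engine-config tool would look roughly like this (a sketch run on the engine VM; the value 255 enabling both Virt and Gluster modes is taken from the original workaround):

```shell
# Sketch: enable Gluster alongside Virt mode via engine-config,
# then restart the engine so the setting takes effect.
engine-config -s ApplicationMode=255
systemctl restart ovirt-engine
```

This avoids editing the vdc_options table directly, which was the workaround originally suggested on the mailing list.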