Bug 1313497 - Enabling Gluster Service post-facto on HE does not update brick info
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.Gluster
Version: 3.6.1.3
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Target Milestone: ovirt-4.0.5
Target Release: 4.0.5
Assigned To: Sahina Bose
QA Contact: SATHEESARAN
Blocks: Gluster-HC-2
Reported: 2016-03-01 12:26 EST by Will Dennis
Modified: 2016-12-15 05:16 EST (History)

Fixed In Version: 4.0.5
Doc Type: Bug Fix
Last Closed: 2016-12-15 05:16:28 EST
Type: Bug
oVirt Team: Gluster
rule-engine: ovirt-4.0.z+
ylavi: planning_ack+
sabose: devel_ack+
sasundar: testing_ack+


External Trackers
Tracker       ID     Branch            Status  Summary                                             Last Updated
oVirt gerrit  62108  master            MERGED  engine: Update hosts when gluster service enabled  2016-09-01 04:35 EDT
oVirt gerrit  63864  ovirt-engine-4.0  MERGED  engine: Update hosts when gluster service enabled  2016-09-20 02:55 EDT

Description Will Dennis 2016-03-01 12:26:39 EST
Description of problem:
I had deployed Hosted Engine via the OVF image in the "ovirt-engine-appliance" package on my hyperconverged setup (2 virt hosts, also servicing bricks for the two Gluster volumes used for the hosted_storage and vm_storage storage domains). I found out some time after deployment that the Gluster volume management integration isn't enabled by default in this OVF image. So, based on a mailing list thread response (http://lists.ovirt.org/pipermail/users/2016-March/038188.html), I executed the SQL update statement that enables the Gluster integration in the UI. After restarting the "ovirt-engine" service on the HE host, I could see the Gluster volume nodes in the UI and browse to them for info/management purposes, but noticed that the "Number of Bricks" attribute read "0" and nothing showed in the "Bricks" tab.
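For reference, the change described in that thread boils down to flipping the ApplicationMode option in the engine's "vdc_options" table to '255' (see "How reproducible" below). A minimal sketch in Python with psycopg2, assuming the engine database is named "engine" and is reachable locally as the "engine" DB user; the exact statement in the mailing list post may differ:

import psycopg2

# Sketch only: DB name, user, and host are assumptions; the real
# credentials live in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf.
conn = psycopg2.connect(dbname="engine", user="engine",
                        password="...", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE vdc_options SET option_value = %s WHERE option_name = %s",
        ("255", "ApplicationMode"),  # 255 = show both virt and gluster features
    )  # the 'with conn' block commits on success
conn.close()
# Restart the ovirt-engine service afterwards, as described above.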

Based on a follow-up mailing list response, setting each host to "Maintenance" and then activating it again updated the Gluster bricks on a per-host basis; once I had done this for all hosts in my setup, I saw a true and correct list of bricks per volume.
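The same per-host Maintenance/Activate cycle can be scripted against the REST API instead of clicking through the UI. A hedged sketch with the Python SDK (ovirtsdk4), assuming the engine URL and credentials shown; as above, hosts are cycled one at a time so the hosted engine and gluster volumes stay available:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Assumed connection details; replace with your engine's URL/credentials.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="...",
    insecure=True,  # lab setup; prefer ca_file in production
)
hosts_service = connection.system_service().hosts_service()
for host in hosts_service.list():
    host_service = hosts_service.host_service(host.id)
    host_service.deactivate()  # move the host to Maintenance
    while host_service.get().status != types.HostStatus.MAINTENANCE:
        time.sleep(5)
    host_service.activate()    # reactivate; brick info resyncs on activation
    while host_service.get().status != types.HostStatus.UP:
        time.sleep(5)
connection.close()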

Version-Release number of selected component (if applicable):
ovirt-engine-appliance-3.6-20160126.1.el7.centos.noarch

How reproducible:
Do an automatic install of HE from the OVF, then subsequently update the Postgres table "vdc_options", setting the value of ApplicationMode to '255' (detailed steps below).

Steps to Reproduce:
1. Deploy Hosted Engine automatically from the OVF appliance image on a hyperconverged setup.
2. Update the Postgres table "vdc_options", setting ApplicationMode to '255', and restart the "ovirt-engine" service.
3. In the UI, browse to the Gluster volumes and check the "Number of Bricks" attribute and the "Bricks" tab.

Actual results:
The Gluster volume info showed up, but the number of bricks per volume showed as "0".
Had to go through the oVirt hosts one by one, putting each into Maintenance mode and then activating it, to get brick info to show in the UI.

Expected results:
When the Gluster service is enabled post-facto, not only the volume info but also the underlying brick info per volume should be populated.

Additional info:
Comment 1 Will Dennis 2016-03-01 12:28:55 EST
ERRATA: The line above stating "2 virt hosts, also servicing bricks" should read "3 virt hosts, also servicing bricks"
Comment 2 Sahina Bose 2016-08-09 04:56:10 EDT
Re-targeting to 4.0.4 as this bug prevents adding HC hosts to the cluster (if the gluster service is enabled after the first host is added)
Comment 3 SATHEESARAN 2016-10-21 04:23:28 EDT
Tested with RHV 4.0.5.1-0.1.el7ev

1. Set up hosted-engine on one of the hypervisors (this host also serves gluster volumes).
2. Once the hosted engine is up, update the engine configuration to enable the gluster capability on the cluster.
3. From the RHV UI, edit the cluster to check the 'gluster' capability.

Once the gluster capability is enabled, brick, volume, peer, and other information is updated.
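For completeness, step 3 can also be done through the Python SDK (ovirtsdk4) rather than the UI; a sketch under the assumption that the cluster is named "Default":

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Assumed connection details; replace as appropriate.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="...",
    insecure=True,
)
clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search="name=Default")[0]  # assumed cluster name
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(gluster_service=True)  # enable the gluster capability
)
connection.close()

With the 4.0.5 fix (gerrit 62108/63864, "engine: Update hosts when gluster service enabled"), enabling this flag triggers a host update, so brick, volume, and peer info is populated without a maintenance cycle.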
