Description of problem:
Brick status does not get updated in the UI when the brick is killed from the gluster CLI.

Version-Release number of selected component (if applicable):
ovirt-engine-4.1.0-0.3.beta2.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install the latest RHEV 4.1 bits.
2. Bring down one of the bricks by running the command 'kill <PID of the brick>' (see the command sketch under Additional info).
3. Check the brick status in the Bricks sub-tab in the UI.

Actual results:
There is an event message in the UI which says "Detected that the brick is down", but the brick status in the Bricks sub-tab remains in the 'UP' state.

Expected results:
Brick status should be shown as "Down" in the UI.

Additional info:
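As a reference for step 2, a minimal sketch of bringing a brick down from the gluster CLI (assuming the vmstore volume seen in the engine logs below; substitute your own volume name):

# List brick processes for the volume; the output includes a PID column
gluster volume status vmstore

# Kill the brick process using the PID from the status output
kill <PID of the brick>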
The following is seen in engine.log:
=====================================
2017-01-13 05:02:47,875-05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler4) [5202f72c] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Detected change in status of brick 10.70.36.82:/rhgs/brick1/vmstore of volume vmstore of cluster Default from UP to DOWN.
2017-01-13 05:02:47,876-05 WARN  [org.ovirt.engine.core.compat.backendcompat.TypeCompat] (DefaultQuartzScheduler4) [5202f72c] Unable to get value of property: 'clusterName' for class org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase: null
2017-01-13 05:02:47,876-05 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [5202f72c] Error while refreshing brick statuses for volume 'vmstore' of cluster 'Default': null
2017-01-13 05:02:47,877-05 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [5202f72c] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = hosted_engine3, GlusterVolumeAdvancedDetailsVDSParameters:{runAsync='true', hostId='87a03787-4bf5-4b01-9059-c5281c5a3eb2', volumeName='data'}), log id: 17c3877
sosreports can be found in the link below:
=================================================
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1412973/
Hi Ramesh,

I do see that the brick status gets updated properly for replicate volumes, but it does not get updated for arbiter volumes. After some time (i.e., during the next sync interval), the brick which is down moves to the 'Unknown' state. Is this the expected behavior with arbiter volumes?

Thanks,
kasturi
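For anyone reproducing the arbiter case specifically, an arbiter volume can be created from the gluster CLI roughly as below (a sketch; the volume name, hostnames, and brick paths are illustrative placeholders, not taken from this report):

# replica 3 arbiter 1: the third brick holds only file metadata
gluster volume create arbvol replica 3 arbiter 1 host1:/rhgs/brick1/arbvol host2:/rhgs/brick1/arbvol host3:/rhgs/brick1/arbvol
gluster volume start arbvol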
Verified and works fine with build ovirt-engine-4.1.0.2-0.1.el7.noarch. I created two new volumes, replica and arbiter. When one of the bricks is brought down, I see that the brick status changes in the UI and an event is displayed which says "Status of brick 10.70.36.82:/b11 of volume newvol on cluster Default is down."
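For the record, the brick state can also be cross-checked from the gluster side while watching the UI ('newvol' is the volume from this comment); the 'Online' column of the status output should show N for the killed brick:

gluster volume status newvol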
This issue is not found in a GA'ed product, so doc text is not required.