Description of problem:
After killing a brick process, the brick status still shows the brick as "UP".

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Manager Version: 3.2.0-10.20.master.el6ev

How reproducible:
100%

Steps to Reproduce:
1. Create a "Gluster Cluster"
2. Add one or more hosts
3. Create a volume and start it
4. Find the PIDs of the bricks using "gluster volume status <VOLNAME>" and kill one or more bricks using # kill -9 <PID>
5. Check the brick status from the UI

Actual results:
The brick status is not updated immediately.

Expected results:
The brick status in the UI should be reflected immediately and change to "DOWN" when the brick goes down, and vice versa.

Additional info:
Easily reproducible. However, if you still need the logs, please let me know.
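For anyone scripting the reproduction steps above, the brick PIDs can be pulled out of the `gluster volume status` output before issuing the kill. The parsing sketch below assumes the tabular output format of glusterfs 3.x-era releases; the embedded sample text is illustrative, not captured from a live cluster:

```python
import re

def parse_brick_pids(status_output):
    """Extract (brick, online, pid) tuples from `gluster volume status` output."""
    bricks = []
    for line in status_output.splitlines():
        # Brick lines look like:
        # Brick host:/path    49152    0    Y    1234
        m = re.match(r"Brick\s+(\S+)\s+(\S+)\s+(\S+)\s+([YN])\s+(\d+|N/A)", line)
        if m:
            brick, _tcp, _rdma, online, pid = m.groups()
            bricks.append((brick, online == "Y", None if pid == "N/A" else int(pid)))
    return bricks

# Illustrative sample of the CLI output format (gluster 3.x style)
SAMPLE = """\
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/bricks/b1                    49152     0          Y       1234
Brick server2:/bricks/b2                    49153     0          N       N/A
"""

for brick, online, pid in parse_brick_pids(SAMPLE):
    print(brick, "UP" if online else "DOWN", pid)
```

A downed brick shows `N` in the Online column and `N/A` for its PID, which is exactly the state the UI fails to reflect promptly in this bug.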
Brick status is updated every 5 minutes. Have you checked waiting for 5 minutes?
(In reply to comment #1) > Brick status is updated every 5 minutes. Have you checked waiting for 5 > minutes? Yes, I know it gets updated every 5 minutes, and I've tested and confirmed that too. But as I already mentioned under "Actual results", the status of critical processes like this should be updated, and the admin notified, immediately and without much delay. A brick being down can affect part or all of the volume's functioning, depending on the volume type. This can even affect production if all the bricks in a volume go down and no alerts or notifications are sent to the admin for 5 minutes. I know this is a resource-consuming task, but we should find a suitable solution to mitigate this issue. As a minimum requirement, we should at least provide a manual refresh button for all the sub-tabs, especially for bricks. On refresh, this should fetch the latest status from all the servers and update the fields accordingly.
Still planned for 3.4?
Yes - a manual refresh option to be provided for 3.4
The patch introducing the sync button did not make it into 3.4, so retargeting to 3.5.
3.5.1 is already full of bugs (over 80), and since none of these bugs were marked as urgent for the 3.5.1 release in the tracker bug, moving to 3.5.2.
Moving to 3.5.4 due to capacity planning for 3.5.3. If you believe this should remain in 3.5.3, please sync with PM/dev/QE and get a full triple ack for it. Also, ensure priority is set accordingly.
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone it to a later release. All bugs not postponed by the GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high
Retargeting to coincide with the alerting mechanism from gluster
(In reply to Sahina Bose from comment #12) > Retargeting to coincide with alerting mechanism from gluster Please use target milestone and flags to change to 4.0 from now on.
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.
Moving to MODIFIED as the dependent RFE is now merged.
If there has been no code change since Jan 31, this should probably be ON_QA. Can you check?
Yes, moved to ON_QA
Sahina, on which version of ovirt-engine has this been fixed?
ovirt-engine-4.2.2.2 - this bug was dependent on bug 1379309
Tested with RHV 4.2.3 and gluster 3.12. RHV makes use of the eventing mechanism, and the RHV UI syncs much faster. One problem observed is that RHV still uses the CLI-based polling method as well, and that takes some time to respond (as expected). Sometimes brick-kill events are only picked up via the CLI-based polling. I will track this issue in a separate bug.
This bugzilla is included in oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.