+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1741102 +++
======================================================================

Description of problem:
When a host providing gluster services for RHHI is activated, the RHV-M initiates a restart of the gluster services. This causes gluster volumes to lose quorum and stop working.

Version-Release number of selected component (if applicable):
rhvm 4.3.3

How reproducible:
100%

Steps to Reproduce:
1. Put a gluster node into maintenance
2. Set the debug log level:
   vdsm-client Host setLogLevel level=DEBUG name=jsonrpc
3. Activate the node again

Actual results:
2019-08-14 09:30:00,866+0000 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling 'GlusterService.action' in bridge with {u'action': u'restart', u'serviceNames': [u'glusterd']} (__init__:329)

Expected results:
No restart happens. Keep in mind that the node can be put into maintenance mode without stopping the gluster services.

(Originally by Roman Hodain)
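The unwanted restart shows up as the jsonrpc DEBUG line quoted above. A minimal sketch of how such lines can be detected when scanning a vdsm log (the helper name is my own; the pattern is taken from the log line in this report):

```python
import re

# Matches the vdsm jsonrpc DEBUG line emitted when RHV-M requests a
# gluster service restart through the bridge, e.g.:
#   Calling 'GlusterService.action' in bridge with
#   {u'action': u'restart', u'serviceNames': [u'glusterd']}
RESTART_RE = re.compile(
    r"Calling 'GlusterService\.action' in bridge with .*u'action': u'restart'"
)

def is_gluster_restart(log_line):
    """Return True if the line records a gluster service restart request."""
    return RESTART_RE.search(log_line) is not None

# The exact line from the report above:
sample = (
    "2019-08-14 09:30:00,866+0000 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] "
    "Calling 'GlusterService.action' in bridge with "
    "{u'action': u'restart', u'serviceNames': [u'glusterd']} (__init__:329)"
)
print(is_gluster_restart(sample))  # True
```

With the fix in place, activating a host that was put into maintenance without stopping gluster services should produce no such line in the log.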
sync2jira (Originally by Daniel Gur)
Verified with RHV 4.3.6.6 using the following steps:
1. Created a HC cluster
2. Moved the HC node into MAINTENANCE, without stopping gluster services
3. Noted the PIDs of the glusterd process and the brick processes
4. Activated the host back

Observed that the gluster process PIDs remained the same after the host was activated.
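The PID check in the verification above can be sketched as a simple snapshot comparison (function name and example PIDs are illustrative, not from the report):

```python
def restarted_services(before, after):
    """Given {service_name: pid} snapshots taken before maintenance and
    after reactivation, return the services whose PID changed, i.e. the
    ones that were restarted in between."""
    return sorted(
        name for name, pid in before.items()
        if name in after and after[name] != pid
    )

# Hypothetical snapshots: glusterd kept its PID across the maintenance
# cycle (the verified behaviour), while a brick process restarted.
before = {"glusterd": 4321, "glusterfsd": 5678}
after = {"glusterd": 4321, "glusterfsd": 9012}
print(restarted_services(before, after))  # ['glusterfsd']
```

An empty result, as observed during verification, means no gluster process was restarted by the activation.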
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3010