+++ This bug was initially created as a clone of Bug #1741102 +++

Description of problem:
When a host providing gluster services for RHHI is activated, RHV-M initiates a restart of the gluster services. This causes gluster volumes to lose quorum and stop working.

Version-Release number of selected component (if applicable):
rhvm 4.3.3

How reproducible:
100%

Steps to Reproduce:
1. Put a gluster node into maintenance
2. Set the debug log level:
   vdsm-client Host setLogLevel level=DEBUG name=jsonrpc
3. Activate the node again

Actual results:
2019-08-14 09:30:00,866+0000 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling 'GlusterService.action' in bridge with {u'action': u'restart', u'serviceNames': [u'glusterd']} (__init__:329)

Expected results:
No restart happens. Keep in mind that the node can be put into maintenance mode without stopping the gluster services.
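To confirm whether the restart request is being issued, the JSON-RPC debug entry can be grepped out of the VDSM log on the host after activation. A minimal sketch, assuming the default VDSM log location /var/log/vdsm/vdsm.log:

    # Enable JSON-RPC debug logging on the host (as in step 2 above)
    vdsm-client Host setLogLevel level=DEBUG name=jsonrpc

    # After activating the node, look for an unexpected glusterd restart request
    grep "GlusterService.action" /var/log/vdsm/vdsm.log | grep restart

If the bug is present, the grep shows the same "u'action': u'restart'" entry quoted in the Actual results above.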
Verified with RHV Version 4.4.0-0.33.master.el8ev

0. Note down the process ID (PID) of the glusterd process
1. Move the host to maintenance without stopping gluster services
2. Activate the host back
3. Note down the PID of the glusterd process again

The PID of glusterd remains the same, which means the glusterd service is not restarted when a host that was put into maintenance without stopping glusterd is activated again.

Before the host was put into maintenance (without stopping gluster services):
[root@ ~]# pidof glusterd
650894

After the host is activated:
[root@ ~]# pidof glusterd
650894
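The same PID comparison can be scripted on the host. This is a minimal sketch under the assumption that the maintenance and activation steps are still performed from the RHV-M UI or API; the script only records and compares the glusterd PID:

    #!/bin/bash
    # Record the glusterd PID before the host is moved to maintenance,
    # then compare it after the host has been activated again.

    pid_before=$(pidof glusterd)
    echo "glusterd PID before maintenance: ${pid_before}"

    read -p "Move the host to maintenance, activate it again, then press Enter..."

    pid_after=$(pidof glusterd)
    echo "glusterd PID after activation:   ${pid_after}"

    if [ "${pid_before}" = "${pid_after}" ]; then
        echo "PASS: glusterd was not restarted"
    else
        echo "FAIL: glusterd was restarted"
    fi

A matching PID before and after activation indicates the fix is working as verified above.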
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314