Description of problem:
After installing a three-node gluster cluster with a self-hosted engine and deploying the engine on all cluster members, the members not running the engine repeatedly log "VM HostedEngine is down with error. Exit message: resource busy: Failed to acquire lock: Lease is held by another host."

Version-Release number of selected component (if applicable):
ovirt-node-ng-installer-master-2017121109
4.2.1-0.0.master.20171210113630.git504d08f.el7.centos

How reproducible:
100%

Steps to Reproduce:
1. Install a three-node cluster using gluster and install the self-hosted engine
2. Complete the cluster setup
3. Complete the storage domain setup
4. Set a member into local maintenance and trigger a reinstall with engine deployment for members not running the engine

Actual results:
Repeated lock errors

Expected results:
No errors

Additional info:
Can you please attach logs?
Please excuse me: What kind of logs do you need?
(In reply to Bernhard Seidl from comment #2)
> Please excuse me: What kind of logs do you need?

Usually, the ovirt-hosted-engine HA logs, which are somewhere under /var/log/ovirt-hosted-engine-*
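[Note: the following is a minimal sketch, not part of the original exchange, of how those logs could be bundled for attachment, assuming the default log location on an oVirt node:]

    # Bundle the hosted-engine HA agent and broker logs for attachment
    # (default location on oVirt nodes; adjust the path if it differs)
    tar czf /tmp/hosted-engine-ha-logs-$(hostname).tgz \
        /var/log/ovirt-hosted-engine-ha/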
Created attachment 1369323 [details] agent and broker logs from /var/log/ovirt-hosted-engine-ha
Can you please add the logs from the other nodes? This one looks like it comes from the host that currently runs the VM. You can also attach the output of hosted-engine --vm-status from all the nodes. This is a potential duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1527394
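[Note: a small sketch of how that status could be pulled from every node in one go; the host names node1..node3 are placeholders and not taken from this report:]

    # node1..node3 are placeholder host names; replace with the real
    # cluster members. Prints each host's view of the hosted-engine HA state.
    for h in node1 node2 node3; do
        echo "== $h =="
        ssh root@"$h" 'hosted-engine --vm-status'
    done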
And if it is the same issue, then a simple systemctl restart ovirt-ha-broker on all nodes might fix it.
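[Note: a sketch of that workaround applied to all nodes, using the same placeholder host names as above:]

    # Restart the HA broker on every cluster member, then re-check the
    # hosted-engine HA status on each host
    for h in node1 node2 node3; do
        ssh root@"$h" 'systemctl restart ovirt-ha-broker && hosted-engine --vm-status'
    done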
Unfortunately, I have already reinstalled the setup. I was unable to reproduce this with 4.2.0. I will try again with the master branch in the first week of next year.
I just tested, and this seems to be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1527394. Restarting ovirt-ha-broker on all nodes fixed it.

Test version: ovirt-node-ng-installer-master-2018010109.iso
Thanks for checking. We will handle it as a duplicate then.

*** This bug has been marked as a duplicate of bug 1527394 ***