Bug 1817892 - [libvirtd] HE deployment failed because libvirtd failed to restart
Summary: [libvirtd] HE deployment failed because libvirtd failed to restart
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.8
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1835176
Blocks: RHHI-V-1.8-Engineering-Inflight-BZs
 
Reported: 2020-03-27 08:38 UTC by milind
Modified: 2020-08-04 14:52 UTC
CC List: 3 users

Fixed In Version: gluster-ansible-infra-1.0.4-10.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-04 14:52:07 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2020:3314 (last updated 2020-08-04 14:52:25 UTC)

Description milind 2020-03-27 08:38:22 UTC
Description of problem:
HE deployment fails because the libvirtd service cannot be restarted.

Version-Release number of selected component (if applicable):

[node.example.com ~]#  imgbase w
You are on rhvh-4.4.0.14-0.20200325.0+1

How reproducible:
Always

Steps to Reproduce:
1. Start Gluster deployment.
2. Start HE deployment; this step fails.


Actual results:
HE deployment fails; the libvirtd service cannot be started (see the error below).

Expected results:
Deployment should complete successfully.

Additional info:

2020-03-27 07:11:49,578+0000 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": false,
        "invocation": {
            "module_args": {
                "daemon_reexec": false,
                "daemon_reload": false,
                "enabled": true,
                "force": null,
                "masked": null,
                "name": "libvirtd",
                "no_block": false,
                "scope": null,
                "state": "started",
                "user": null
            }
        },
        "msg": "Unable to start service libvirtd: Job for libvirtd.service failed because the control process exited with error code.\nSee \"systemctl status libvirtd.service\" and \"journalctl -xe\" for details.\n"
    },
    "ansible_task": "Start libvirt",
    "ansible_type": "task",
    "status": "FAILED",
    "task_duration": 1
}
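
The error above only reports that the libvirtd control process exited with an error; the actual cause has to be read from the host. A minimal triage sketch, using the standard systemd commands the message itself points to (the --since timestamp is an illustrative value, not taken from the sosreport):

# last start attempt and exit status of the unit named in the error
systemctl status libvirtd.service
# journal entries for the unit, with explanatory catalog text, jumping to the end
journalctl -xe -u libvirtd.service
# narrow to the deployment window (timestamp is an example)
journalctl -u libvirtd.service --since "2020-03-27 07:00:00"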

Comment 4 Yaniv Kaul 2020-04-27 08:35:07 UTC
- How is this a RHHI-V issue?
- What do you see in the sosreport that explains what did not start and why?

Comment 6 SATHEESARAN 2020-05-13 10:28:43 UTC
RCA for this issue is out.
The current deployment logic generates the multipath configuration with the command
'vdsm-tool configure --force', which configures various components of the host,
such as vdsm, libvirt, and multipath.

However, it is enough to configure only the multipath component, using the command:
# vdsm-tool configure --module multipath
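
For comparison, a sketch of the intended change (the actual task in gluster-ansible-infra-1.0.4-10.el8rhgs may be worded differently):

# What the deployment logic ran before the fix: reconfigures multiple host
# components (vdsm, libvirt, multipath, ...), which is what later leaves
# libvirtd unable to start during HE deployment.
vdsm-tool configure --force

# What is sufficient here: configure only the multipath module and leave the
# libvirt configuration untouched.
vdsm-tool configure --module multipath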

Comment 8 SATHEESARAN 2020-06-06 11:21:44 UTC
Verified with gluster-ansible-infra-1.0.4-10.el8rhgs and RHVH 4.4.1 with the following steps:

1. RHHI-V deployment completed with the vdsm configuration applied only for the multipath module

<snip_from_log>
Jun  3 18:16:06 newhost platform-python[16543]: ansible-command Invoked with _raw_params=vdsm-tool configure --module multipath _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
</snip_from_log>
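
A minimal post-deployment check along the same lines; this is a sketch that assumes the 'is-configured' verb accepts --module the same way 'configure' does, and that a VDSM-managed multipath.conf starts with a "# VDSM ..." header, as on RHV 4.4:

# confirm the multipath configurator has been applied (assumed verb/option behaviour)
vdsm-tool is-configured --module multipath
# VDSM-managed multipath.conf carries a VDSM header in its first lines (assumption)
head -n 2 /etc/multipath.conf
# libvirtd should be active once HE deployment has finished
systemctl is-active libvirtd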

Comment 10 errata-xmlrpc 2020-08-04 14:52:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314

