Bug 1560666
| Field | Value |
|---|---|
| Summary | Hosted Engine VM (deployed in the past) fails to reboot with 'libvirtError: internal error: failed to format device alias for PTY retrieval' due to an error in console device in libvirt XML generated by the engine |
| Product | [oVirt] ovirt-hosted-engine-ha |
| Component | Agent |
| Version | 2.2.5 |
| Status | CLOSED CURRENTRELEASE |
| Severity | high |
| Priority | urgent |
| Reporter | Simone Tiraboschi <stirabos> |
| Assignee | Andrej Krejcir <akrejcir> |
| QA Contact | Nikolai Sednev <nsednev> |
| CC | akrejcir, bugs, gveitmic, jiyan, mavital, mgoldboi, nsednev, ratamir, stirabos |
| Target Milestone | ovirt-4.2.2 |
| Target Release | 2.2.10 |
| Keywords | Triaged |
| Flags | rule-engine: ovirt-4.2+, rule-engine: blocker+ |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | ovirt-hosted-engine-ha-2.2.10-1.el7ev |
| Doc Type | No Doc Update |
| Clones | 1560976 |
| Last Closed | 2018-04-27 07:22:03 UTC |
| Type | Bug |
| oVirt Team | SLA |
| Bug Depends On | 1556971, 1566072, 1566111 |
| Bug Blocks | 1458711, 1504606, 1560976 |
Description
Simone Tiraboschi
2018-03-26 16:59:08 UTC
Hi, Simone.

Versions:
vdsm-4.20.23-1.el7ev.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64
kernel-3.10.0-862.el7.x86_64
libvirt-3.9.0-14.virtcov.el7_5.2.x86_64

Could you please help me with how to configure a "pty" console in the RHV web UI? I can only see a 'unix' console generated:

    ...
    <console type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/df899f5c-db94-48b2-867a-e0c266b59b7a.sock'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    ...

And another doubt: I could not see a 'ua-' alias for the console, serial and channel devices when checking the dumped XML of the VM on the registered host:

    ...
    <serial type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/df899f5c-db94-48b2-867a-e0c266b59b7a.sock'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/df899f5c-db94-48b2-867a-e0c266b59b7a.sock'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/df899f5c-db94-48b2-867a-e0c266b59b7a.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/df899f5c-db94-48b2-867a-e0c266b59b7a.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    ...

In the vintage hosted-engine deployment, hosted-engine-setup was directly creating a VM over vdsm and the engine was importing it (adding aliases and so on); I'm not sure you can reproduce this properly by creating a VM from the admin UI.

To reproduce on hosted-engine:
1. Deploy hosted-engine with the vintage flow (4.1, or 4.2 passing the --noansible option to ovirt-hosted-engine-setup).
2. Connect to the engine and add another storage domain.
3. Wait for the engine VM to be imported by the engine.
4. Wait for the OVF_STORE disks to appear (normally after 60 minutes; you can force them to appear earlier by editing the engine VM configuration from the engine).

I tried reproducing this with master vdsm and ovirt-hosted-engine (without the workaround patch) on CentOS 7.4. The VM starts successfully; there is no error.

The XML generated by the engine contains:

    <console type="pty">
      <target type="virtio" port="0" />
      <alias name="ua-ba3264b3-1a04-4e7b-a590-9c4528d63ac6" />
    </console>

And after the HE VM starts, 'virsh -r dumpxml HostedEngine' shows:

    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='virtio' port='0'/>
      <alias name='console0'/>
    </console>

Versions:
libvirt - 3.2.0-14.el7_4.9
vdsm - 4.30.0-176.gitb930fd4.el7.centos
ovirt-hosted-engine-ha - 2.3.0-0.0.master.20180323105559.20180323105555.git558fa11.el7.centos
ovirt-hosted-engine-setup - 2.3.0-0.0.master.20180323165102.git5a3d63d.el7.centos

I will try it again with a newer version of libvirt.

(In reply to Andrej Krejcir from comment #3)
> I tried reproducing this with master vdsm and ovirt-hosted-engine (without
> the workaround patch) on centos 7.4.

As far as I understood, it is specific to RHEL 7.5 with a fresher libvirt.

The fix is wrong. We should fix the alias when starting the VM instead of disabling a feature that serves as a base for a dozen other fixes.
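For anyone triaging a similar report, the comparison above comes down to which alias the console device ends up with at runtime. A rough sketch of the check is below; the ovirt-hosted-engine-setup --noansible flag and the read-only virsh call are taken from the comments above, while the grep filter is only an illustration and not part of the actual fix.

    # Vintage (non-ansible) deployment flow mentioned in the reproduction steps:
    ovirt-hosted-engine-setup --noansible

    # After the engine has imported the HE VM and the OVF_STORE disks exist,
    # restart the VM and inspect the alias libvirt assigned to the console device:
    virsh -r dumpxml HostedEngine | grep -A 4 '<console'
    # Affected hosts never get this far; the VM start fails with:
    #   libvirtError: internal error: failed to format device alias for PTY retrieval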
Verification steps:
1. Deployed hosted-engine with the vintage flow over iSCSI.
2. Connected to the engine and added another NFS storage domain.
3. Waited for the engine VM to be imported by the engine.
4. Waited for the OVF_STORE disks to appear (I forced them to appear earlier by editing the engine VM in the UI and attaching 2 additional VCPUs).
5. Set the host into global maintenance and powered off the engine.
6. Removed global maintenance.
7. Waited for the engine to be started by ha-agent.
8. The engine started and I connected to its UI.

virsh -r dumpxml HostedEngine:

    </interface>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='virtio' port='0'/>
      <alias name='ua-633d97eb-5b89-4774-8c'/>
    </console>

Versions:
libvirt-client-3.9.0-14.el7_5.3.x86_64
ovirt-hosted-engine-setup-2.2.18-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.10-1.el7ev.noarch
rhvm-appliance-4.2-20180420.0.el7.noarch
vdsm-common-4.20.26-1.el7ev.noarch
Linux 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Works for me using the vintage deployment flow on the latest components, as per the previous comment #6. Moving to verified. Please feel free to reopen if you are still able to reproduce.

This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.
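For reference, the global-maintenance power cycle used in the verification steps above maps roughly to the commands below; this is a sketch assuming the standard hosted-engine tooling, and the verifier drove some of these steps from the web UI instead.

    # Step 5: enter global maintenance and power off the engine VM
    hosted-engine --set-maintenance --mode=global
    hosted-engine --vm-shutdown
    # Step 6: leave global maintenance so ha-agent restarts the VM
    hosted-engine --set-maintenance --mode=none
    # Step 7: watch until the engine VM is up again
    hosted-engine --vm-status
    # Step 8 sanity check: the console device should carry a 'ua-' alias
    # instead of failing PTY retrieval at VM start
    virsh -r dumpxml HostedEngine | grep -A 4 '<console'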