Bug 1746404 - [s390x] Importing qcow2 image fails with libvirt.libvirtError: operation failed: domain 'rhel8.1' is already being removed
Summary: [s390x] Importing qcow2 image fails with libvirt.libvirtError: operation failed: domain 'rhel8.1' is already being removed
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virt-manager
Version: 8.1
Hardware: s390x
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 8.1
Assignee: Pavel Hrdina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1746399
Blocks:
 
Reported: 2019-08-28 11:52 UTC by Alexander Todorov
Modified: 2020-02-21 22:06 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-21 22:06:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments
lorax-composer.log for qcow2 image build (81.38 KB, text/plain), 2019-08-29 08:23 UTC, smitterl
lorax-program.log for qcow2 image build (1.59 KB, text/plain), 2019-08-29 08:29 UTC, smitterl

Description Alexander Todorov 2019-08-28 11:52:46 UTC
Description of problem:

I've built a qcow2 image with lorax-composer/composer-cli. Then I tried to start a VM from this image with virt-manager to see whether it works, because our automated tests are failing with Bug #1746399.

Upon completing all steps in virt-manager I get an exception.

Version-Release number of selected component (if applicable):
libvirt-4.5.0-32.module+el8.1.0+4010+d6842f29.s390x
virt-manager-2.2.1-2.el8.noarch

How reproducible:
Always

Steps to Reproduce:
1. In virt-manager, import the existing qcow2 image and try to start a VM from it

Actual results:

An exception is shown in the UI. The relevant virt-manager.log excerpt is below:

[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:850) Using storage pool events
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:868) Using node device events
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:749) storage pool refresh event: pool=lorax
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:1113) interface=enca00 status=Active added
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:1113) interface=lo status=Active added
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:1113) network=default status=Active added
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:1113) pool=lorax status=Active added
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:749) storage pool refresh event: pool=default
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:1113) pool=default status=Active added
[Wed, 28 Aug 2019 07:46:50 virt-manager 2256] DEBUG (connection:533) conn=qemu:///system changed to state=Active
[Wed, 28 Aug 2019 07:47:10 virt-manager 2256] DEBUG (xmleditor:15) Using GtkSource 3.0
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (createvm:197) Showing new vm wizard
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (createvm:692) Guest type set to os_type=hvm, arch=s390x, dom_type=kvm
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (createvm:692) Guest type set to os_type=hvm, arch=s390x, dom_type=qemu
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (createvm:692) Guest type set to os_type=hvm, arch=s390x, dom_type=kvm
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (storage:140) Found default pool name=default target=/var/lib/libvirt/images
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (storage:140) Found default pool name=default target=/var/lib/libvirt/images
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (engine:391) window counter incremented to 2
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (connection:749) storage pool refresh event: pool=default
[Wed, 28 Aug 2019 07:47:12 virt-manager 2256] DEBUG (connection:749) storage pool refresh event: pool=default
[Wed, 28 Aug 2019 07:47:24 virt-manager 2256] DEBUG (storagebrowse:36) Showing storage browser
[Wed, 28 Aug 2019 07:47:24 virt-manager 2256] DEBUG (storage:140) Found default pool name=default target=/var/lib/libvirt/images
[Wed, 28 Aug 2019 07:47:29 virt-manager 2256] DEBUG (storagebrowse:119) Chosen volume XML:
<volume type="file">
  <name>d6d7c1ba-64d4-41b2-b184-2f781da0521e-disk.qcow2</name>
  <key>/root/lorax/d6d7c1ba-64d4-41b2-b184-2f781da0521e-disk.qcow2</key>
  <source>
  </source>
  <capacity unit="bytes">4275044352</capacity>
  <allocation unit="bytes">1565917184</allocation>
  <physical unit="bytes">1565917184</physical>
  <target>
    <path>/root/lorax/d6d7c1ba-64d4-41b2-b184-2f781da0521e-disk.qcow2</path>
    <format type="qcow2"/>
    <permissions>
      <mode>0644</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:admin_home_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1566992611.670586461</atime>
      <mtime>1566992131.943279539</mtime>
      <ctime>1566992489.513339575</ctime>
    </timestamps>
    <compat>1.1</compat>
    <features/>
  </target>
</volume>

[Wed, 28 Aug 2019 07:47:29 virt-manager 2256] DEBUG (storagebrowse:49) Closing storage browser
[Wed, 28 Aug 2019 07:47:35 virt-manager 2256] DEBUG (guest:463) Setting Guest osinfo name <_OsVariant name=rhel8.1>
[Wed, 28 Aug 2019 07:47:35 virt-manager 2256] DEBUG (osdict:326) No recommended value found for key='n-cpus', using minimum=1 * 2
[Wed, 28 Aug 2019 07:47:35 virt-manager 2256] DEBUG (createvm:1662) Recommended resources for os=rhel8.1: ram=1073741824 ncpus=2 storage=10737418240
[Wed, 28 Aug 2019 07:47:38 virt-manager 2256] DEBUG (createvm:1954) Starting create finish() sequence
[Wed, 28 Aug 2019 07:47:38 virt-manager 2256] DEBUG (createvm:2088) Starting background install process
[Wed, 28 Aug 2019 07:47:38 virt-manager 2256] DEBUG (installer:442) Generated install XML: None required
[Wed, 28 Aug 2019 07:47:38 virt-manager 2256] DEBUG (installer:443) Generated boot XML: 
<domain type="kvm">
  <name>rhel8.1</name>
  <uuid>350b3ec3-0302-4ecb-a917-0bc7a54e8db1</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://redhat.com/rhel/8.1"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch="s390x" machine="s390-ccw-virtio">hvm</type>
    <boot dev="hd"/>
  </os>
  <clock offset="utc"/>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/root/lorax/d6d7c1ba-64d4-41b2-b184-2f781da0521e-disk.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="network">
      <source network="default"/>
      <mac address="52:54:00:45:b9:9e"/>
      <model type="virtio"/>
    </interface>
    <console type="pty">
      <target type="sclp"/>
    </console>
    <channel type="unix">
      <source mode="bind"/>
      <target type="virtio" name="org.qemu.guest_agent.0"/>
    </channel>
    <memballoon model="virtio"/>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
  </devices>
</domain>

[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:765) node device lifecycle event: nodedev=net_vnet0_fe_54_00_45_b9_9e state=VIR_NODE_DEVICE_EVENT_CREATED reason=0
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:705) domain agent lifecycle event: domain=rhel8.1 state=VIR_CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_DISCONNECTED reason=1
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:690) domain lifecycle event: domain=rhel8.1 state=VIR_DOMAIN_EVENT_RESUMED reason=VIR_DOMAIN_EVENT_RESUMED_UNPAUSED
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:690) domain lifecycle event: domain=rhel8.1 state=VIR_DOMAIN_EVENT_STARTED reason=VIR_DOMAIN_EVENT_STARTED_BOOTED
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:690) domain lifecycle event: domain=rhel8.1 state=VIR_DOMAIN_EVENT_SUSPENDED reason=VIR_DOMAIN_EVENT_SUSPENDED_PAUSED
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:690) domain lifecycle event: domain=rhel8.1 state=VIR_DOMAIN_EVENT_CRASHED reason=VIR_DOMAIN_EVENT_CRASHED_PANICKED
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:765) node device lifecycle event: nodedev=net_vnet0_fe_54_00_45_b9_9e state=VIR_NODE_DEVICE_EVENT_DELETED reason=0
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:690) domain lifecycle event: domain=rhel8.1 state=VIR_DOMAIN_EVENT_STOPPED reason=VIR_DOMAIN_EVENT_STOPPED_CRASHED
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:492) Domain XML inactive flag not supported.
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:498) Domain XML secure flag not supported.
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (libvirtobject:196) Error initializing libvirt state for <vmmDomain name=rhel8.1 id=0x3ff94c723a8>
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 193, in init_libvirt_state
    self._init_libvirt_state()
  File "/usr/share/virt-manager/virtManager/object/domain.py", line 218, in _init_libvirt_state
    info = self._backend.info()
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1356, in info
    if ret is None: raise libvirtError ('virDomainGetInfo() failed', dom=self)
libvirt.libvirtError: Domain not found: no domain with matching uuid '350b3ec3-0302-4ecb-a917-0bc7a54e8db1' (rhel8.1)
[Wed, 28 Aug 2019 07:47:39 virt-manager 2256] DEBUG (connection:1083) nodedev=net_vnet0_fe_54_00_45_b9_9e removed
[Wed, 28 Aug 2019 07:47:40 virt-manager 2256] DEBUG (error:84) error dialog message:
summary=Unable to complete install: 'operation failed: domain 'rhel8.1' is already being removed'
details=Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createvm.py", line 2089, in _do_async_install
    guest.installer_instance.start_install(guest, meter=meter)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 544, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 493, in _create_guest
    domain = self.conn.defineXML(final_xml)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: operation failed: domain 'rhel8.1' is already being removed

[Wed, 28 Aug 2019 07:47:40 virt-manager 2256] DEBUG (connection:1095) Blacklisting domain=rhel8.1
[Wed, 28 Aug 2019 07:47:40 virt-manager 2256] DEBUG (connection:1098) Object added in blacklist, count=1


Expected results:
VM is created and it tries to boot.

Additional info:

Comment 6 zhoujunqin 2019-08-29 05:26:22 UTC
Hi all,
I cannot reproduce this issue on the x86_64 platform with these package versions:
virt-manager-2.2.1-2.el8.noarch
libvirt-4.5.0-33.module+el8.1.0+4066+0f1aadab.x86_64
qemu-kvm-2.12.0-85.module+el8.1.0+4066+0f1aadab.x86_64

so I'm changing the Hardware field to s390x. Thanks.

BR,
juzhou.

Comment 7 smitterl 2019-08-29 08:20:43 UTC
Reproduced:
lorax-composer-28.14.30-1.el8.s390x
libvirt-4.5.0-32.module+el8.1.0+4010+d6842f29.s390x
virt-install-2.2.1-2.el8.noarch

Steps
1. composer-cli compose start example-atlas qcow2
2. Copy image to libvirt storage pool
3. virt-install --name rhel8.1 --memory 2048 --vcpus 1 --disk /var/lib/libvirt/images/examples.atlas.s390x.lorax-composed.qcow2 --import --os-variant rhel8.1
>>> ERROR    operation failed: domain 'rhel8.1' is already being removed
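
For steps 1-2, a sketch of fetching the composed image end to end (the compose UUID
is a placeholder and the downloaded file name may differ):

  composer-cli compose start example-atlas qcow2
  composer-cli compose status              # wait until the compose shows FINISHED
  composer-cli compose image <uuid>        # downloads <uuid>-disk.qcow2
  cp <uuid>-disk.qcow2 /var/lib/libvirt/images/examples.atlas.s390x.lorax-composed.qcow2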


virt-install.log:

[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (cli:208) Launched with command line: /usr/share/virt-manager/virt-install --name rhel8.1 --memory 2048 --vcpus 1 --disk /var/lib/libvirt/images/examples.atlas.s390x.lorax-composed.qcow2 --import --os-variant rhel8.1
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (virt-install:207) Distilled --network options: ['default']
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (virt-install:139) Distilled --disk options: ['/var/lib/libvirt/images/examples.atlas.s390x.lorax-composed.qcow2']
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (cli:224) Requesting libvirt URI default
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (cli:227) Received libvirt URI qemu:///system
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (guest:463) Setting Guest osinfo name <_OsVariant name=generic>
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (installer:396) No media for distro detection.
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (installer:398) installer.detect_distro returned=None
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (guest:463) Setting Guest osinfo name <_OsVariant name=rhel8.1>
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (osdict:326) No recommended value found for key='n-cpus', using minimum=1 * 2
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (virt-install:648) Guest.has_install_phase: False
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (cli:272) 
Starting install...
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (installer:442) Generated install XML: None required
[Thu, 29 Aug 2019 04:18:24 virt-install 56875] DEBUG (installer:443) Generated boot XML: 
<domain type="kvm">
  <name>rhel8.1</name>
  <uuid>725b7e88-6ecb-4c6a-bdf7-ad0e236b918c</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://redhat.com/rhel/8.1"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch="s390x" machine="s390-ccw-virtio">hvm</type>
    <boot dev="hd"/>
  </os>
  <clock offset="utc"/>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/examples.atlas.s390x.lorax-composed.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="network">
      <source network="default"/>
      <mac address="52:54:00:2e:a4:be"/>
      <model type="virtio"/>
    </interface>
    <console type="pty">
      <target type="sclp"/>
    </console>
    <channel type="unix">
      <source mode="bind"/>
      <target type="virtio" name="org.qemu.guest_agent.0"/>
    </channel>
    <memballoon model="virtio"/>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
  </devices>
</domain>

[Thu, 29 Aug 2019 04:18:25 virt-install 56875] DEBUG (cli:263)   File "/usr/share/virt-manager/virt-install", line 1005, in <module>
    sys.exit(main())
  File "/usr/share/virt-manager/virt-install", line 999, in main
    start_install(guest, installer, options)
  File "/usr/share/virt-manager/virt-install", line 686, in start_install
    fail(e, do_exit=False)
  File "/usr/share/virt-manager/virtinst/cli.py", line 263, in fail
    log.debug("".join(traceback.format_stack()))

[Thu, 29 Aug 2019 04:18:25 virt-install 56875] ERROR (cli:264) operation failed: domain 'rhel8.1' is already being removed
[Thu, 29 Aug 2019 04:18:25 virt-install 56875] DEBUG (cli:266) 
Traceback (most recent call last):
  File "/usr/share/virt-manager/virt-install", line 659, in start_install
    transient=options.transient)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 544, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 493, in _create_guest
    domain = self.conn.defineXML(final_xml)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: operation failed: domain 'rhel8.1' is already being removed
[Thu, 29 Aug 2019 04:18:25 virt-install 56875] DEBUG (cli:278) Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start rhel8.1
otherwise, please restart your installation.

Anaconda compose log attached.

Comment 8 smitterl 2019-08-29 08:23:15 UTC
Created attachment 1609302 [details]
lorax-composer.log for qcow2 image build

Adding compose log as requested on related issue https://bugzilla.redhat.com/show_bug.cgi?id=1746399

Comment 9 smitterl 2019-08-29 08:29:48 UTC
Created attachment 1609303 [details]
lorax-program.log for qcow2 image build

Comment 11 Pavel Hrdina 2019-08-30 08:08:37 UTC
I've debugged this issue and the conclusion is that it's neither a virt-manager nor
a libvirt bug.  What happens is that the guest crashes and libvirt receives a panic
event from QEMU.

Now the question is whether it's a QEMU bug or a lorax-composer bug.

When I downloaded the rhel8 qcow image [1] and used that image instead of the lorax
qcow2 image, everything worked fine.  But when I tried to use an empty qcow2 image
created using

  qemu-img create -f qcow2 test.qcow2 100M

it also panicked the same way as with the lorax image.  I'm not familiar with the
s390 architecture and have no idea why it would panic with an empty or possibly
incorrectly built image, so I'm moving this BZ to QEMU.

The same thing happens if I run /usr/libexec/qemu-kvm /path/to/image: the rhel image
[1] works fine, but the empty qcow2 image and the lorax one exit immediately.


[1] <http://download.devel.redhat.com/rhel-8/rel-eng/RHEL-8/latest-RHEL-8/compose/BaseOS/s390x/images/rhel-guest-image-8.1-167.s390x.qcow2>
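
For reference, a minimal sketch of that comparison run directly against QEMU with no
libvirt involved (the extra flags here are only illustrative; the bare
"/usr/libexec/qemu-kvm /path/to/image" form above behaves the same):

  # an empty image: the s390x guest firmware finds nothing to IPL from, shuts
  # the guest down, and libvirt surfaces that as a panic/crash event
  qemu-img create -f qcow2 /tmp/empty.qcow2 100M
  /usr/libexec/qemu-kvm -nographic -drive file=/tmp/empty.qcow2,format=qcow2,if=virtio

  # the known-good guest image [1] boots with the same invocation
  /usr/libexec/qemu-kvm -nographic -drive file=rhel-guest-image-8.1-167.s390x.qcow2,format=qcow2,if=virtio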

Comment 12 Thomas Huth 2019-08-30 16:22:30 UTC
(In reply to Pavel Hrdina from comment #11)
> I've debugged this issue and the conclusion is that it's neither a virt-manager
> nor a libvirt bug.  What happens is that the guest crashes and libvirt receives
> a panic event from QEMU.
> 
> Now the question is whether it's a QEMU bug or a lorax-composer bug.
> 
> When I downloaded the rhel8 qcow image [1] and used that image instead of the
> lorax qcow2 image, everything worked fine.  But when I tried to use an empty
> qcow2 image created using
> 
>   qemu-img create -f qcow2 test.qcow2 100M
> 
> it also panicked the same way as with the lorax image.  I'm not familiar with
> the s390 architecture and have no idea why it would panic with an empty or
> possibly incorrectly built image, so I'm moving this BZ to QEMU.

It's the "expected" behavior of the guest firmware on s390x that it shuts down the guest (with a panic message, IIRC). Normal x86 or ppc64 firmware simply drops the user to the firmware prompt or menu when there is no way to boot, but the guest firmware on s390x is non-interactive and thus simply shuts down the guest when it fails to boot.

So the main question is: why is the guest image not bootable here? We should have a closer look at BZ 1746399, I guess...

Anyway, the error messages from libvirt / virt-install do not really sound helpful here ... could that be improved somehow?

Comment 13 Thomas Huth 2019-09-02 10:48:23 UTC
After having a look at an image created with "composer-cli", it's indeed the zipl
bootloader that is missing. So this bug is just a follow-on error of BZ 1746399.

There is nothing we can do here from the qemu-kvm side. The only real difference on
s390x is that the guest firmware shuts down the guest if the disk is unbootable,
while the x86 BIOS simply waits forever after printing a "No bootable device" message.

I think virt-install should be able to handle the guest shutdown more gracefully and
print a more appropriate error message instead, so I'm assigning the BZ back to
virt-manager.
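
One way to confirm the missing bootloader is to inspect the composed image with
libguestfs; a sketch, assuming the image's filesystems are otherwise intact (the
disk file name is a placeholder):

  # mount the image read-only and look for zipl artifacts
  guestfish --ro -a composed-disk.qcow2 -i exists /etc/zipl.conf
  guestfish --ro -a composed-disk.qcow2 -i ls /boot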

Comment 14 Thomas Huth 2019-09-02 11:00:19 UTC
FWIW, there also seems to be a race here; I do not always get the strange error message. Sometimes I get:

# qemu-img create -f qcow2 /var/lib/libvirt/images/empty.qcow2 8G
Formatting '/var/lib/libvirt/images/empty.qcow2', fmt=qcow2 size=8589934592 cluster_size=65536 lazy_refcounts=off refcount_bits=16
# virt-install --name testguest --memory 2048 --vcpus 1 --disk /var/lib/libvirt/images/empty.qcow2 --import --os-variant rhel8.1 --debug
[...]
[Mon, 02 Sep 2019 06:56:10 virt-install 180715] DEBUG (cli:404) Connecting to text console
[Mon, 02 Sep 2019 06:56:10 virt-install 180715] DEBUG (cli:370) Running: virsh --connect qemu:///system console testguest
error: The domain is not running
[Mon, 02 Sep 2019 06:56:10 virt-install 180715] DEBUG (virt-install:705) Domain state after install: 5
[Mon, 02 Sep 2019 06:56:10 virt-install 180715] DEBUG (cli:272) Domain creation completed.
Domain creation completed.
[Mon, 02 Sep 2019 06:56:10 virt-install 180715] DEBUG (cli:272) You can restart your domain by running:
  virsh --connect qemu:///system start testguest
You can restart your domain by running:
  virsh --connect qemu:///system start testguest

But often I also get that confusing error message (that IMHO should be fixed):

# virt-install --name testguest --memory 2048 --vcpus 1 --disk /var/lib/libvirt/images/empty.qcow2 --import --os-variant rhel8.1 --debug
[...]
[Mon, 02 Sep 2019 06:58:36 virt-install 180932] DEBUG (cli:263)   File "/usr/share/virt-manager/virt-install", line 1005, in <module>
    sys.exit(main())
  File "/usr/share/virt-manager/virt-install", line 999, in main
    start_install(guest, installer, options)
  File "/usr/share/virt-manager/virt-install", line 686, in start_install
    fail(e, do_exit=False)
  File "/usr/share/virt-manager/virtinst/cli.py", line 263, in fail
    log.debug("".join(traceback.format_stack()))
[Mon, 02 Sep 2019 06:58:36 virt-install 180932] ERROR (cli:264) operation failed: domain 'testguest' is already being removed
[Mon, 02 Sep 2019 06:58:36 virt-install 180932] DEBUG (cli:266) 
Traceback (most recent call last):
  File "/usr/share/virt-manager/virt-install", line 659, in start_install
    transient=options.transient)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 544, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 493, in _create_guest
    domain = self.conn.defineXML(final_xml)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirt.libvirtError: operation failed: domain 'testguest' is already being removed
[Mon, 02 Sep 2019 06:58:36 virt-install 180932] DEBUG (cli:278) Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start testguest
otherwise, please restart your installation.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
  virsh --connect qemu:///system start testguest
otherwise, please restart your installation.
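
Since the failure looks like a race (defineXML arriving while libvirtd is still
tearing down the crashed transient domain), a possible workaround sketch is to
retry the define once the teardown has finished; the XML path is a placeholder
for whatever virt-install generated (see the "Generated boot XML" above):

  for i in 1 2 3 4 5; do
      virsh --connect qemu:///system define /tmp/testguest.xml && break
      sleep 1
  done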

Comment 15 Pavel Hrdina 2019-09-02 11:31:11 UTC
Thanks Thomas for debugging this from the QEMU POV.  I'm removing the blocker flag
from this BZ because even if we fix the error reported by virt-manager, installing
the VM will still fail, since a different lorax bug is the root cause of the issue.

If the issue needs to be fixed so that the qcow2 image boots successfully, please
escalate the referenced lorax BZ 1746399 as a blocker.

Comment 17 Alexander Todorov 2020-02-21 22:06:05 UTC
I've retested with a recent snapshot and the zipl error is gone. I am able to boot the VM from the newly built image, and there is no traceback in virt-manager. The VM itself fails with a dracut timeout, but that is unrelated. Closing.

