Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Probably a dupe of Bug 1031943, which we can't reproduce. We can't reproduce this one either... QE: please try to reproduce it once more and, if you succeed, give us more details about your environment.
(In reply to Ademar Reis from comment #2)
> Probably a dupe of Bug 1031943, which we can't reproduce. We can't reproduce
> this one either... QE: please try to reproduce it once more and, if you
> succeed, give us more details about your environment.
Same as bug 1031943. I tested on a guest with no healthy OS installed;
I can't reproduce it with a healthy guest.
The GlusterFS servers are installed with the latest glusterfs packages:
glusterfs-server-3.4.0.59rhs-1.el6rhs.x86_64
on host
# rpm -q libvirt qemu-kvm-rhev glusterfs
libvirt-1.1.1-22.el7.x86_64
qemu-kvm-rhev-1.5.3-45.el7.x86_64
glusterfs-3.4.0.59rhs-1.el7.x86_64
1. Prepare a guest with a glusterfs volume as the source disk:
# virsh dumpxml rhel6|grep disk -A 4
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/test.img'>
    <host name='10.66.5.78' port='24007'/>
  </source>
--
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>
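As a side note, the gluster URI passed to qemu-img below is assembled from exactly the pieces in the disk XML above. A minimal, illustrative sketch using Python's standard `xml.etree` module (the XML string is copied from the dumpxml output; nothing here is libvirt API):

```python
import xml.etree.ElementTree as ET

# Disk definition as dumped by `virsh dumpxml rhel6` above
disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/test.img'>
    <host name='10.66.5.78' port='24007'/>
  </source>
</disk>
"""

disk = ET.fromstring(disk_xml)
source = disk.find('source')
host = source.find('host')

# Rebuild the URI form qemu-img accepts for gluster-backed images:
# gluster://<host>/<volume>/<path>
uri = "gluster://{}/{}".format(host.get('name'), source.get('name'))
print(uri)  # gluster://10.66.5.78/gluster-vol1/test.img
```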
# qemu-img info gluster://10.66.5.78/gluster-vol1/test.img
image: gluster://10.66.5.78/gluster-vol1/test.img
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 194K
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
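As a sanity check on the qemu-img output above, the human-readable size and the byte count agree (qemu-img's "100G" is 100 GiB):

```python
# qemu-img prints both forms; 100 GiB in bytes:
virtual_size_bytes = 100 * 1024**3
print(virtual_size_bytes)  # 107374182400
```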
2. Start the guest, then save and restore it:
# virsh start rhel6
Domain rhel6 started
# virsh save rhel6 /tmp/rhel6.save
Domain rhel6 saved to /tmp/rhel6.save
# virsh restore /tmp/rhel6.save
error: Failed to restore domain from /tmp/rhel6.save
error: internal error: early end of file from monitor: possible problem:
qemu-kvm: VQ 2 size 0x80 Guest index 0x0 inconsistent with Host index 0x100: delta 0xff00
qemu: warning: error while loading state for instance 0x0 of device '0000:00:08.0/virtio-scsi'
load of migration failed
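For what it's worth, the "delta" in the error message above is consistent with a 16-bit wraparound difference between the two ring indices (virtio ring indices are 16-bit and wrap). A quick arithmetic check with the values from the error line; this only illustrates the reported numbers, not QEMU's actual source:

```python
# Values taken from the error message above
vq_size = 0x80        # VQ 2 size
guest_index = 0x0     # avail index read back from guest memory
host_index = 0x100    # index QEMU had recorded at save time

# virtio ring indices are 16-bit, so the difference wraps modulo 2**16
delta = (guest_index - host_index) & 0xFFFF
print(hex(delta))  # 0xff00

# The indices are inconsistent: the delta is nonzero and far larger
# than the queue size, so the restore is rejected.
print(delta != 0 and delta > vq_size)  # True
```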
(In reply to Shanzhi Yu from comment #3)
> (In reply to Ademar Reis from comment #2)
> > Probably a dupe of Bug 1031943, which we can't reproduce. We can't reproduce
> > this one either... QE: please try to reproduce it once more and if you
> > succeed, give us more details about your environment.
>
> Same as bug 1031943. I tested on a guest with no healthy OS installed;
> I can't reproduce it with a healthy guest.
Closing as a duplicate then. Thanks.
*** This bug has been marked as a duplicate of bug 1031943 ***
Description of problem:
Fail to restore a guest when the guest uses a glusterfs volume

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-1.5.3-19.el7.x86_64
libvirt-1.1.1-12.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a guest with a glusterfs volume
# virsh dumpxml rhel6
..
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/rhel6-qcow2.img'>
    <host name='10.66.106.22' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>
..

2. Start and save the guest
# virsh start rhel6; virsh save rhel6 /tmp/rhel6.save
Domain rhel6 started
Domain rhel6 saved to /tmp/rhel6.save

3. Restore the guest
# virsh restore /tmp/rhel6.save
error: Failed to restore domain from /tmp/rhel6.save
error: internal error: early end of file from monitor: possible problem:
qemu-kvm: VQ 2 size 0x80 Guest index 0x0 inconsistent with Host index 0x100: delta 0xff00
qemu: warning: error while loading state for instance 0x0 of device '0000:00:08.0/virtio-scsi'
load of migration failed

4. Change the guest target disk from vda to sda
# virsh dumpxml rhel6
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/rhel6-qcow2.img'>
    <host name='10.66.106.22' port='24007'/>
  </source>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>

5. Do steps 2 and 3 again; the restore succeeds without any problem
# virsh restore rhel6.save
Domain restored from rhel6.save

Actual results:
As above

Expected results:

Additional info:
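The workaround in step 4 (retargeting the disk from virtio-blk `vda` to virtio-scsi `sda`) can also be expressed as a small XML transformation. A hypothetical sketch with Python's standard `xml.etree`; in practice one would apply the result via `virsh edit`/`virsh define`, and the `<address>` element would also need to change from `type='pci'` to `type='drive'` as shown in the dumpxml output above:

```python
import xml.etree.ElementTree as ET

# Disk definition from step 1, trimmed to the <disk> element
disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gluster-vol1/rhel6-qcow2.img'>
    <host name='10.66.106.22' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
"""

disk = ET.fromstring(disk_xml)
target = disk.find('target')

# Step 4 of the reproducer: move the disk onto the virtio-scsi controller
target.set('dev', 'sda')
target.set('bus', 'scsi')

print(ET.tostring(disk).decode())
```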