Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
"domjobinfo --completed" displays different "Time elapsed" and "Time elapsed w/o network" values on the source and destination hosts after migration.
This appears to be a regression: it does not occur with libvirt-1.3.2-1.el7.x86_64.
Version-Release number of selected component (if applicable):
libvirt-1.3.3-1.el7.x86_64
qemu-kvm-rhev-2.5.0-4.el7.x86_64
How reproducible:
100%
Steps to Reproduce:
1. Set up NFS shared storage on both hosts.
On remote host:
# df -k
Filesystem                                            1K-blocks     Used Available Use% Mounted on
10.66.5.225:/usr/share/avocado/data/avocado-vt/images  52403200 33089536  19313664  64% /var/lib/libvirt/migrate
Guest XML:
<devices>
...
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/libvirt/migrate/jeos-21-64.qcow2'/>
<target dev='vda' bus='virtio'/>
...
</disk>
2. Run migration command:
# virsh migrate avocado-vt-vm-ci --live --verbose --unsafe qemu+ssh://10.66.4.167:22/system
Migration: [100 %]
Migration succeeds.
3. On the local host, run domjobinfo:
# /bin/virsh domjobinfo avocado-vt-vm-ci --completed
Job type: Completed
Time elapsed: 2950 ms
Time elapsed w/o network: 2945 ms
Data processed: 231.416 MiB
Data remaining: 0.000 B
Data total: 1.016 GiB
Memory processed: 231.416 MiB
Memory remaining: 0.000 B
Memory total: 1.016 GiB
Memory bandwidth: 93.107 MiB/s
Dirty rate: 0 pages/s
Iteration: 4
Constant pages: 209288
Normal pages: 58668
Normal data: 229.172 MiB
Total downtime: 118 ms
Downtime w/o network: 113 ms
Setup time: 8 ms
4. On the remote host, run domjobinfo:
# /bin/virsh domjobinfo avocado-vt-vm-ci --completed
Job type: Completed
Time elapsed: 2949 ms
Time elapsed w/o network: 2944 ms
Data processed: 231.416 MiB
Data remaining: 0.000 B
Data total: 1.016 GiB
Memory processed: 231.416 MiB
Memory remaining: 0.000 B
Memory total: 1.016 GiB
Memory bandwidth: 93.107 MiB/s
Dirty rate: 0 pages/s
Iteration: 4
Constant pages: 209288
Normal pages: 58668
Normal data: 229.172 MiB
Total downtime: 118 ms
Downtime w/o network: 113 ms
Setup time: 8 ms
Actual results:
"Time elapsed" and "Time elapsed w/o network" differ between the source and destination hosts.
Expected results:
Both values should be equal on the two hosts.
Additional info:
This is actually expected: there is no way for the source libvirtd to send its statistics to the destination libvirtd once migration is complete. The statistics are sent when the destination libvirtd is asked to start the newly migrated domain, but the source libvirtd keeps measuring the total time until it kills the domain on the source host.
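For scripted comparisons of this behavior, the two `virsh domjobinfo --completed` dumps can be parsed and diffed. A minimal sketch, assuming the colon-separated field layout shown above; the helper names are illustrative and the embedded samples are abbreviated copies of the outputs from steps 3 and 4:

```python
def parse_domjobinfo(text):
    """Parse `virsh domjobinfo` output into a {field: raw value} dict."""
    stats = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        stats[key.strip()] = value.strip()
    return stats

def elapsed_ms(stats, field="Time elapsed"):
    """Extract the numeric millisecond value of a parsed field."""
    return int(stats[field].split()[0])

# Abbreviated samples from the source and destination hosts above.
SOURCE = """\
Job type:                 Completed
Time elapsed:             2950         ms
Time elapsed w/o network: 2945         ms
"""
DEST = """\
Job type:                 Completed
Time elapsed:             2949         ms
Time elapsed w/o network: 2944         ms
"""

src, dst = parse_domjobinfo(SOURCE), parse_domjobinfo(DEST)
for field in ("Time elapsed", "Time elapsed w/o network"):
    delta = elapsed_ms(src, field) - elapsed_ms(dst, field)
    print(f"{field}: source exceeds destination by {delta} ms")
```

On the data above both fields report a 1 ms difference, consistent with the source continuing to count time after the destination has recorded its statistics.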