Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
The destination host fails to switch to active status while migration is active on the source host.
Version-Release number of selected component (if applicable):
the latest RHEL 9.2.0 (qemu-kvm-7.2)
How reproducible:
1/50
Steps to Reproduce:
1. Start zerocopy migration (multifd enabled)
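Step 1 above can be sketched as the QMP command sequence below (a sketch only; "multifd" and "zero-copy-send" are the QEMU capability names, the function name and destination URI are illustrative, and zero-copy send requires multifd to be enabled):

```python
import json

def migration_setup_commands(dst_uri):
    """Build the QMP commands for a zerocopy migration with multifd
    enabled, as in step 1 of the reproducer. Returns the commands as
    JSON strings, in the order they would be sent on the source."""
    caps = {
        "execute": "migrate-set-capabilities",
        "arguments": {"capabilities": [
            # zero-copy-send only works together with multifd
            {"capability": "multifd", "state": True},
            {"capability": "zero-copy-send", "state": True},
        ]},
    }
    migrate = {"execute": "migrate", "arguments": {"uri": dst_uri}}
    return [json.dumps(caps), json.dumps(migrate)]
```

The destination side would symmetrically enable the same capabilities before issuing migrate-incoming.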
Actual results:
During zerocopy migration, once the migration status is "active" on the source host, switching to the destination host and checking the migration status there for 20 minutes shows that the migration never becomes active.
Throughout those 20 minutes, the destination host's migration status always looks like the last log line below (only "socket-address" is returned, with no status field).
Shall we file a new bug for the issue?
2023-02-07-07:53:11: Host(10.19.241.182) Sending qmp command : {"execute": "migrate-incoming", "arguments": {"uri": "tcp:[::]:4000"}, "id": "x0o5K1Uv"}
2023-02-07-07:53:12: Host(10.19.241.182) Responding qmp command: {"return": {}, "id": "x0o5K1Uv"}
2023-02-07-07:53:12: Host(10.19.241.172) Sending qmp command : {"execute": "migrate", "arguments": {"uri": "tcp:10.19.241.182:4000", "blk": false, "inc": false, "detach": true, "resume": false}, "id": "gcRVq85Q"}
2023-02-07-07:53:12: Host(10.19.241.172) Responding qmp command: {"return": {}, "id": "gcRVq85Q"}
2023-02-07-07:53:12: Host(10.19.241.172) Sending qmp command : {"execute": "query-migrate", "id": "U9X5ug18"}
2023-02-07-07:53:12: Host(10.19.241.172) Responding qmp command: {"return": {"expected-downtime": 300, "status": "active", "setup-time": 6, "total-time": 7, "ram": {"total": 4429328384, "postcopy-requests": 0, "dirty-sync-count": 1, "multifd-bytes": 5376, "pages-per-second": 0, "downtime-bytes": 0, "page-size": 4096, "remaining": 4429328384, "postcopy-bytes": 0, "mbps": 0, "transferred": 5376, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 0, "duplicate": 0, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 0, "normal": 0}}, "id": "U9X5ug18"}
2023-02-07-07:53:12: Host(10.19.241.182) Sending qmp command : {"execute": "query-migrate", "id": "9PJKTJ5W"}
2023-02-07-07:53:12: Host(10.19.241.182) Responding qmp command: {"return": {"socket-address": [{"port": "4000", "ipv6": true, "host": "::", "type": "inet"}]}, "id": "9PJKTJ5W"}
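The symptom in the log above is that the destination's query-migrate reply carries no "status" key at all. A minimal sketch of the check the test performs (the function name is illustrative):

```python
import json

def migration_status(reply_json):
    """Extract the migration status from a raw query-migrate reply.

    Returns the status string, or None when the reply has no status
    field -- which is exactly what the destination host keeps
    returning in the failing case (only "socket-address" is present).
    """
    ret = json.loads(reply_json).get("return", {})
    if not isinstance(ret, dict):
        return None
    return ret.get("status")
```

The test loop would poll this on the destination and fail after 20 minutes if it never returns "active".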
Expected results:
The destination host should switch to active status immediately once migration is active on the source host.
Additional info:
Reproduced this bug on aarch64; will try to reproduce it on x86.
Did not reproduce this bug on x86 (kernel-5.14.0-284.4.1.el9_2.x86_64 && qemu-kvm-7.2.0-14.el9_2.x86_64) after repeating 550 times.
So changing Hardware to aarch64.
Hi,
Could you try to see whether you can reproduce this without zerocopy? The bug is certainly strange, and it will help to triage it if we know that it only happens with zerocopy.
Could you also post the command lines that you are using for migration and for launching QEMU?
Thanks, Juan
Hi Juan,
Since repeated tests on the latest RHEL 9.3.0 (qemu-kvm-8.0.0-9.el9.aarch64) all pass, can we close this bug as CURRENTRELEASE?
1. Repeated 600 times with zerocopy migration (test steps same as in the Description): all pass.
2. Repeated 300 times with multifd migration: also pass.
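The repeated runs above follow a simple repeat-until-failure pattern, which can be sketched as follows (the driver function is a placeholder for one full migration round trip, not the actual test harness):

```python
def repeat_test(run_once, times=600):
    """Run one migration test repeatedly, as in the 600x / 300x
    verification runs above.

    run_once: callable returning True on a passing iteration.
    Returns the 1-based iteration number of the first failure,
    or None if all iterations pass.
    """
    for i in range(1, times + 1):
        if not run_once():
            return i
    return None
```

With a roughly 1/50 reproduction rate, several hundred passing iterations give reasonable confidence the bug is gone in the newer build.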