Bug 1565064 - Much more failures when migration back concurrently hit error: "unable to connect to server: Connection timed out"
Keywords:
Status: CLOSED DUPLICATE of bug 1614182
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: chhu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-09 09:53 UTC by chhu
Modified: 2018-09-06 13:24 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-06 13:24:58 UTC
Target Upstream Version:
Embargoed:


Attachments
mig_3.log (8.78 KB, text/plain), 2018-04-09 09:55 UTC, chhu
mig_4.log (12.17 KB, text/plain), 2018-04-09 09:56 UTC, chhu
Time out related log in libvirtd_source log (31.63 KB, text/plain), 2018-04-09 10:28 UTC, chhu
libvirtd_log.tar.gz.0 (19.00 MB, application/x-gzip), 2018-04-12 01:46 UTC, chhu
libvirtd_log.tar.gz.1 (19.00 MB, application/octet-stream), 2018-04-12 02:04 UTC, chhu
libvirtd_log.tar.gz.2 (19.00 MB, application/octet-stream), 2018-04-12 02:05 UTC, chhu
libvirtd_log.tar.gz.3 (19.00 MB, application/octet-stream), 2018-04-12 02:07 UTC, chhu
libvirtd_log.tar.gz.4 (19.00 MB, application/octet-stream), 2018-04-12 02:09 UTC, chhu
libvirtd_log.tar.gz.5 (19.00 MB, application/octet-stream), 2018-04-12 02:10 UTC, chhu
libvirtd_log.tar.gz.6 (19.00 MB, application/octet-stream), 2018-04-12 02:12 UTC, chhu
libvirtd_log.tar.gz.7 (19.00 MB, application/octet-stream), 2018-04-12 02:13 UTC, chhu
libvirtd_log.tar.gz.8 (11.66 MB, application/octet-stream), 2018-04-12 02:14 UTC, chhu

Description chhu 2018-04-09 09:53:10 UTC
Description of problem:
Migrate 60 guests concurrently, 10 times (600 guests total), from the source host to the target, then migrate them back. Many more migrations fail on the way back, hitting the error: "unable to connect to server ****: Connection timed out".

Version-Release number of selected component (if applicable):
libvirt-3.9.0-14.el7_5.2.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64
kernel-3.10.0-861.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create 600 guests on source host
2. Configure the following on both the source and target hosts:
- /etc/libvirt/libvirtd.conf
listen_tls = 1
auth_tls = "none"
max_clients = 5000
max_queued_clients = 1000
min_workers = 500
max_workers = 1000
max_client_requests = 1000
keepalive_interval = -1

- /etc/libvirt/qemu.conf
lock_manager = "lockd"
max_processes = 65535
max_files = 65535
keepalive_interval = -1

- Restart libvirtd service
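
[Editor's note: a quick sanity check, not part of the original report; the helper name is my own. It lists the settings that are actually uncommented in the given files, which is useful for confirming the tuning above before restarting libvirtd.]

```shell
# List the active (uncommented) "key = value" settings in the given files,
# e.g.: show_tuned /etc/libvirt/libvirtd.conf /etc/libvirt/qemu.conf
show_tuned() {
    grep -E '^[a-z_]+ *=' "$@"
}
```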

3. Migrate 60 guests concurrently, 10 times (600 guests total), from the source host to the target:
# cat migcon60.sh
#!/bin/sh
for i in $(seq 0 9); do
    j=$((i * 60 + 1))
    k=$((j + 60))
    while [ "$j" -lt "$k" ]; do
        virsh migrate --live --p2p --undefinesource --persistent --verbose guest-$j qemu+tls://****/system &
        j=$((j + 1))
    done
    wait
done
# sh migcon60.sh

4. Check mig_3.log: 4 guests failed to migrate due to a timeout, and they are still shown as running on the source host.
Migration: [ 96 %]error: unable to connect to server at '****:49166': Connection timed out

5. Migrate the remaining 4 guests to the target host manually.

6. Migrate 60 guests concurrently, 10 times (600 guests total), back to the source host.

7. Check mig_4.log: 22 guests failed to migrate due to a timeout.


Actual results:
In step 7, many more guests fail when migrating back concurrently, hitting the error: "unable to connect to server ****: Connection timed out".

Expected results:
In step 7, all guests migrate back successfully.

Additional info:
 - Files: mig_3.log, mig_4.log, libvirtd logs
 - In the current environment the test was run 4 times:
    - Run 1: from hostA to hostB, 1 guest failed to migrate
    - Run 2: migrate back, 26 guests failed to migrate
    - Run 3: from hostA to hostB, 4 guests failed to migrate
    - Run 4: migrate back, 22 guests failed to migrate
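
[Editor's note: a small triage sketch, not part of the original report; the helper name is my own. The per-run failure counts above can be recomputed from whichever file a run's virsh output was redirected to.]

```shell
# Count how many migrations in a run's log failed with a connection timeout.
# Usage: count_timeouts mig_4.log
count_timeouts() {
    grep -c 'Connection timed out' "$1"
}
```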

Comment 2 chhu 2018-04-09 09:55:19 UTC
Created attachment 1419172 [details]
mig_3.log

Comment 3 chhu 2018-04-09 09:56:37 UTC
Created attachment 1419173 [details]
mig_4.log

Comment 4 chhu 2018-04-09 10:28:59 UTC
Created attachment 1419180 [details]
Time out related log in libvirtd_source log

Comment 5 Jiri Denemark 2018-04-09 10:31:54 UTC
So apparently the hosts are quite loaded, and so is the network. Either the
network is so loaded that the packets do not reach the destination libvirtd
and return in time, or libvirtd is not able to accept the connection in time.
Could you please capture the TCP connection attempts on both hosts and attach
the pcap files and the corresponding libvirtd debug logs from both hosts?

Comment 6 chhu 2018-04-12 01:46:24 UTC
Created attachment 1420615 [details]
libvirtd_log.tar.gz.0

Comment 7 chhu 2018-04-12 02:04:12 UTC
Created attachment 1420620 [details]
libvirtd_log.tar.gz.1

Comment 8 chhu 2018-04-12 02:05:52 UTC
Created attachment 1420621 [details]
libvirtd_log.tar.gz.2

Comment 9 chhu 2018-04-12 02:07:20 UTC
Created attachment 1420622 [details]
libvirtd_log.tar.gz.3

Comment 10 chhu 2018-04-12 02:09:25 UTC
Created attachment 1420623 [details]
libvirtd_log.tar.gz.4

Comment 11 chhu 2018-04-12 02:10:57 UTC
Created attachment 1420624 [details]
libvirtd_log.tar.gz.5

Comment 12 chhu 2018-04-12 02:12:24 UTC
Created attachment 1420625 [details]
libvirtd_log.tar.gz.6

Comment 13 chhu 2018-04-12 02:13:46 UTC
Created attachment 1420626 [details]
libvirtd_log.tar.gz.7

Comment 14 chhu 2018-04-12 02:14:59 UTC
Created attachment 1420628 [details]
libvirtd_log.tar.gz.8

Comment 15 chhu 2018-04-12 02:17:23 UTC
Hi, Jiri

I sent you an email about collecting more libvirtd logs from the environment. I'll try to capture the TCP connection attempts on both hosts and attach the pcap files later, thank you!


Regards,
chhu

Comment 16 Jiri Denemark 2018-09-06 13:24:58 UTC
We're still waiting for the pcap files.

Anyway, this seems to be pretty similar to bug 1614182, where some packets
were delayed a lot. If it's the packet which is initiating a new connection,
the connection times out.
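
[Editor's note: a minimal sketch, not from the original report, illustrating the mechanism described above. The client-side symptom is generic TCP behavior: if the packet that initiates the connection is never answered, the attempt times out. 192.0.2.1 is a reserved TEST-NET-1 address (RFC 5737) used here only so the probe can never succeed.]

```shell
# Probe a TCP endpoint, giving up after 2 seconds.  Probing an address that
# never answers reproduces the client-side connection failure virsh reports
# as "unable to connect to server ... Connection timed out".
probe() {  # usage: probe <host> <port>
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "connected to $1:$2"
    else
        echo "connection to $1:$2 failed"
    fi
}
probe 192.0.2.1 16514
```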

*** This bug has been marked as a duplicate of bug 1614182 ***

