Bug 1254971 - RFE: support setting 'reconnect' parameter on TCP chardev backends (for USB redir and all other chardev users)
Summary: RFE: support setting 'reconnect' parameter on TCP chardev backends (for USB r...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Pavel Hrdina
QA Contact: jiyan
URL:
Whiteboard:
Depends On:
Blocks: 1401400
 
Reported: 2015-08-19 11:13 UTC by Fangge Jin
Modified: 2018-04-10 10:35 UTC
CC: 23 users

Fixed In Version: libvirt-3.7.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:33:22 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
dmesg of guest (26.14 KB, text/plain), 2015-08-19 11:14 UTC, Fangge Jin
qemu log on source host (7.04 KB, text/plain), 2015-08-19 11:14 UTC, Fangge Jin
qemu log on target host (2.62 KB, text/plain), 2015-08-19 11:15 UTC, Fangge Jin
coredump file (6.97 KB, text/plain), 2016-01-15 06:02 UTC, yafu


Links
Red Hat Product Errata RHEA-2018:0704 (last updated 2018-04-10 10:35:51 UTC)

Description Fangge Jin 2015-08-19 11:13:26 UTC
Description of problem:
The guest has a redirected USB device of type='tcp'. After migrating the guest from one RHEL7.2 host to another, the usbredir device is disconnected.

# usbredirserver -p 4000 0951:1625
usbredirparser: Peer version: qemu usb-redir guest 2.3.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00


Test versions:
libvirt-1.2.17-5.el7.x86_64
qemu-kvm-rhev-2.3.0-18.el7.x86_64
usbredir-server-0.6-7.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
0.Prepare two RHEL7.2 hosts: source(10.66.6.6) and target(10.66.4.141).

1.Plug a USB device on source host(10.66.4.208), and start usbredirserver:
# lsusb
Bus 002 Device 002: ID 0951:1625 Kingston Technology DataTraveler 101 II
.....

# usbredirserver -p 4000 0951:1625

2.Prepare a running RHEL6 guest with a GUI on the source host, configured with a USB controller, SPICE graphics and a usb-redir device in connect mode:
...
<redirdev bus='usb' type='tcp'>
<source mode='connect' host='10.66.4.208' service='4000'/>
<protocol type='raw'/>
</redirdev>

3.Connect to the guest via spice:
# remote-viewer spice://10.66.6.6:5901

The USB device appears on the desktop of the guest.

4.Migrate the guest to target host
# virsh migrate --live rhel6.6-GUI qemu+ssh://10.66.4.141/system --verbose
root.6.6's password:
Migration: [ 100 %]

Then I found that the USB device disappeared from the guest; it is disconnected:
# usbredirserver -p 4000 0951:1625
usbredirparser: Peer version: qemu usb-redir guest 2.3.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00

Actual results:
The usbredir device disconnected after migration.

Expected results:
The USB device stays connected after migration.

Comment 1 Fangge Jin 2015-08-19 11:14:17 UTC
Created attachment 1064754 [details]
dmesg of guest

Comment 2 Fangge Jin 2015-08-19 11:14:57 UTC
Created attachment 1064755 [details]
qemu log on source host

Comment 3 Fangge Jin 2015-08-19 11:15:41 UTC
Created attachment 1064757 [details]
qemu log on target host

Comment 5 Fangge Jin 2015-08-19 11:19:43 UTC
Sorry, there is a typo in step 1; it should be:

1.Plug a USB device on **a** host(10.66.4.208),

Comment 7 Pavel Hrdina 2016-01-06 16:06:34 UTC
Moving to qemu-kvm; libvirt starts the new qemu with the correct arguments.  QEMU disconnects from usbredirserver but never connects again after migration is finished.

Comment 8 Gerd Hoffmann 2016-01-08 07:54:32 UTC
Is usbredirserver actually supported in the first place?
As far as I know it is more of a debug tool ...

spice usb redirection supports live migration (and puts quite some effort into making it work).

Comment 9 Fangge Jin 2016-01-11 12:44:50 UTC
(In reply to Gerd Hoffmann from comment #8)
> Is usbredirserver actually supported in the first place?
> As far I know it is more a debug tool ...
> 
> spice usb redirection supports live migration (and puts quite some effort in
> to make it work).


If usbredirserver is not supported, how do we test the usbredir tcp mode?

The usbredirserver works well before migration.

Comment 10 Gerd Hoffmann 2016-01-11 13:56:32 UTC
I don't think tcp mode is supported either, and thus it doesn't need special testing.

The main difference between tcp and spice mode is that (a) spice supports migration and (b) a different network transport is used.  So apart from migration support there shouldn't be much of a difference.

So, for QE purposes it might be useful to use tcp mode instead of spice mode simply because it is probably easier to use usbredirserver in automated testing.  Other than that I see little reason to pay much attention to usbredirserver.

Comment 11 Ademar Reis 2016-01-12 16:16:55 UTC
(In reply to Gerd Hoffmann from comment #10)
> I don't think tcp mode is supported either and thus doesn't needs special
> testing.
> 
> The main difference between tcp and spice mode is that (a) spice supports
> migration and (b) a different network transport is used.  So apart from
> migration support there shouldn't be much of a difference.
> 
> So, for QE purposes it might be useful to use tcp mode instead of spice mode
> simply because it is probably easier to use usbredirserver in automated
> testing.  Other than that I see little reason to pay much attention to
> usbredirserver.

I'm closing as WONTFIX and will follow up with the doc team to check if this should be documented somewhere.

Comment 12 yafu 2016-01-15 05:54:27 UTC
(In reply to Ademar Reis from comment #11)
> (In reply to Gerd Hoffmann from comment #10)
> > I don't think tcp mode is supported either and thus doesn't needs special
> > testing.
> > 
> > The main difference between tcp and spice mode is that (a) spice supports
> > migration and (b) a different network transport is used.  So apart from
> > migration support there shouldn't be much of a difference.
> > 
> > So, for QE purposes it might be useful to use tcp mode instead of spice mode
> > simply because it is probably easier to use usbredirserver in automated
> > testing.  Other than that I see little reason to pay much attention to
> > usbredirserver.
> 
> I'm closing as WONTFIX and will follow up with the doc team to check if this
> should be documented somewhere.


Maybe it would be better to refuse migration with usbredir tcp mode. I hit another issue when doing migration with usbredir tcp mode: when migrating a guest with both a virtio disk and a usbredir tcp device, the guest crashed on the target host as soon as it tried to mount the USB device after migration.


Test version
libvirt-0.10.2-55.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.482.el6.x86_64
usbredir-server-0.5.1-3.el6.x86_64

Steps to reproduce:
1.Plug a USB device on the host, and start usbredirserver:
# lsusb
...
Bus 001 Device 003: ID 0951:1624 Kingston Technology DataTraveler 101 II

# usbredirserver -p 4000 0951:1624

2.Start a guest with virtio disk and usb-redir source mode:
  ...
   <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
  ...
  <redirdev bus='usb' type='tcp'>
      <source mode='connect' host='10.66.4.148' service='4000'/>
      <protocol type='raw'/>
  </redirdev>
  ...

3.Do migration:
  # virsh migrate guest qemu+ssh://10.66.4.148/system --live --verbose

4.Open the guest in virt-manager; the guest crashed as soon as it tried to mount the USB device.

5.Check the qemu log; the guest crashed because an assertion failed:
  ...
  qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.12.1.2/usb-redir.c:1024: usbredir_chardev_read: Assertion `dev->read_buf == ((void *)0)' failed.

6.The coredump file is in the attachment.

Comment 13 yafu 2016-01-15 06:02:00 UTC
Created attachment 1115031 [details]
coredump file

Comment 14 Ademar Reis 2016-01-15 23:22:24 UTC
(In reply to yafu from comment #12)
> (In reply to Ademar Reis from comment #11)
> > (In reply to Gerd Hoffmann from comment #10)
> > > I don't think tcp mode is supported either and thus doesn't needs special
> > > testing.
> > > 
> > > The main difference between tcp and spice mode is that (a) spice supports
> > > migration and (b) a different network transport is used.  So apart from
> > > migration support there shouldn't be much of a difference.
> > > 
> > > So, for QE purposes it might be useful to use tcp mode instead of spice mode
> > > simply because it is probably easier to use usbredirserver in automated
> > > testing.  Other than that I see little reason to pay much attention to
> > > usbredirserver.
> > 
> > I'm closing as WONTFIX and will follow up with the doc team to check if this
> > should be documented somewhere.
> 
> 
> Maybe it's better to refuse migration with usbredir tcp mode. I met another
> issue when do migration with usbredir tcp mode. Migrate the guest with both
> virtio disk and usebredir tcp mode, the guest crashed on the target host
> once it trying to mount the usb device after migration.
> 

Makes sense, I agree. Gerd, can this be done?

Comment 15 Gerd Hoffmann 2016-05-12 09:24:27 UTC
  Hi,

> > Maybe it's better to refuse migration with usbredir tcp mode. I met another
> > issue when do migration with usbredir tcp mode. Migrate the guest with both
> > virtio disk and usebredir tcp mode, the guest crashed on the target host
> > once it trying to mount the usb device after migration.
> > 
> 
> Makes sense, I agree. Gerd, can this be done?

Hmm, not so easy, as things are handled indirectly via the chardev.
usbredir doesn't know what the actual transport is ...

Comment 16 Ademar Reis 2016-06-10 12:45:14 UTC
(In reply to Gerd Hoffmann from comment #15)
>   Hi,
> 
> > > Maybe it's better to refuse migration with usbredir tcp mode. I met another
> > > issue when do migration with usbredir tcp mode. Migrate the guest with both
> > > virtio disk and usebredir tcp mode, the guest crashed on the target host
> > > once it trying to mount the usb device after migration.
> > > 
> > 
> > Makes sense, I agree. Gerd, can this be done?
> 
> Hmm, not so easy as things are handled indirectly via chardev.
> usbredir doesn't know what the actual transport is ...

So, given that this is not a valid use case for customers (tcp here should be used only for debugging) and the fix is complex, I'm closing it again.

Comment 17 Dmitry Melekhov 2016-08-12 07:58:33 UTC
> So given this is not a valid use-case for customers 

Really, this can be a real use case.
There is an article about it, in Russian though:
https://habrahabr.ru/post/265065/

It is possible to use usbredirserver to export USB keys, which are still widely used by Windows applications, into Windows guests, so tcp migration can be extremely useful.

Comment 18 Dmitry Melekhov 2016-08-12 08:02:43 UTC
So, I'd like this bug reopened and fixed :-)

Comment 19 Gerd Hoffmann 2016-08-12 15:11:30 UTC
(In reply to Need Real Name from comment #17)
> > So given this is not a valid use-case for customers 
> 
> Really, this can be real use case.
> There is article about it, in russian though
> https://habrahabr.ru/post/265065/
> 
> It is possible to use usbredirserver to export usb keys , which are still
> widely in use for windows applications, into windows guests, so tcp
> migration can be extremilu useful.

There is usb-host (for usb keys connected to the virtualization host).
There is spice redirection (for usb keys connected to the users machine).
Why tcp redirection?

Comment 20 Dmitry Melekhov 2016-08-13 19:01:45 UTC
There is no user's machine here, i.e. the USB key is not plugged into one.
The application server is protected by a USB key, or, in another scenario, the application runs on a Windows terminal server.
So currently we plug the keys into the host and pass them through to VMs, but this prevents migration.
There is http://www.digi.com/products/usb-and-serial-connectivity/usb-over-ip-hubs/anywhereusb
but its cost is high enough that buying a spare is a problem...
So we found usbredirserver, which lets us use one of the Linux servers we already have as a USB key server, and in case of its failure plug the keys into another server.
But! There is no live migration, so there is no difference from just plugging the keys into the host...

Thank you!

Comment 21 Dmitry Melekhov 2016-09-08 05:28:37 UTC
Hello!

Could you tell me whether my explanation is clear enough?

Thank you!

Comment 22 Dmitry Melekhov 2016-09-08 05:31:40 UTC
By the way, somebody uses this with Proxmox:
https://github.com/kvaps/usbredirtools

Comment 23 Gerd Hoffmann 2016-09-08 08:00:04 UTC
(In reply to Need Real Name from comment #21)
> Hello!
> 
> Could you tell me is my explanation clear enough?
> 
> Thank you!

Yes, they build a kind of licensing server which exports all those USB dongles.

The problem is that there is no easy way to make this fully guest-transparent.  spice puts quite some effort into this.

What happens if you unplug the USB dongle, then re-plug it after a short time?
I.e., would "unplug -> live migrate -> plug" work?

Comment 24 Dmitry Melekhov 2016-09-08 08:22:00 UTC
(In reply to Gerd Hoffmann from comment #23)

> What happens if you unplug the usb dongle, then re-plug it after a short
> time?
> i.e. would "unplug -> live migrate -> plug" work?

I asked a colleague to test this.

he replied:

unplug:
virsh qemu-monitor-command --hmp manzan device_del usbredirdev1


then live migration

then on new host plug:

virsh qemu-monitor-command --hmp manzan chardev-add socket,id=usbredirdev1,port=4000,host=192.168.22.31
virsh qemu-monitor-command --hmp manzan device_add usb-redir,chardev=usbredirdev1,id=usbredirdev1,bus=usb.0

works.

The problem here is that this requires additional scripting on both hosts, which makes ready-to-use scripts, say VirtualDomain from Pacemaker, useless :-(

Thank you!

Comment 25 Gerd Hoffmann 2016-09-08 08:56:51 UTC
> unplug:
> then on new host plug:
> works.

Good.

> Problem here is that this require additional scripting , on both hosts,
> which makes ready-to-use scripts, let's say virtualdomain from pacemaker,
> useless :-(

Sure.  The manual scripting issue is probably solvable by moving the unplug + usb-device-reset and re-plug into qemu.  We do something similar in usb-host already.  I'll have a look.

Comment 26 Gerd Hoffmann 2017-01-09 13:49:11 UTC
I waded through the code.  The redirection code already seems to do the correct thing when it sees CHR_EVENT_{OPENED,CLOSED} events.

Typically qemu will run in server mode and listen for connects, or it gets passed file handles from libvirt.  tcp chardevs can also connect to a peer at startup, but that is rarely used and not supported very well.  So qemu doesn't really have a concept of re-connecting: if a connection goes down, bad luck; you have to power-cycle the virtual machine to reconnect (and that is without live migration involved yet).

Daniel, you rewrote much of the socket code in qemu.  Any opinion on this?  Can we implement something like -chardev tcp,reconnect_interval=3sec ?

Comment 27 Daniel Berrangé 2017-01-09 13:55:06 UTC
The tcp chardev already has the ability to do reconnects when operating as a client.  I don't think that is wired up into libvirt yet, though.

Comment 28 Gerd Hoffmann 2017-01-10 07:22:40 UTC
(In reply to Daniel Berrange from comment #27)
> The tcp chardev already has ability to do reconnects when operating has a
> client. I don't think that is wired up into libvirt yet though.

--verbose please.  How can I kick off a reconnect?  Via the monitor?

Comment 29 Daniel Berrangé 2017-02-20 12:25:52 UTC
Just set the 'reconnect' flag on the chardev backend: it sets the timeout for reconnecting on non-server sockets when the remote end goes away.  qemu will delay this many seconds and then attempt to reconnect.  It defaults to 0, hence qemu does not reconnect by default.  Libvirt doesn't expose the 'reconnect' attribute in XML, though.
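For illustration, the resulting qemu command line could look roughly like this (a sketch only; the chardev id, host and port are taken from the reproduction steps, and the 10-second delay is an arbitrary example):

  -chardev socket,id=charredir0,host=10.66.4.208,port=4000,reconnect=10 \
  -device usb-redir,chardev=charredir0,id=redir0

With reconnect=10, qemu waits 10 seconds after the connection drops and then tries to connect to the usbredirserver again; with the default of 0 it never reconnects.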

Comment 30 Dmitry Melekhov 2017-02-22 07:43:40 UTC
It looks like reconnect works (just tested it on one host), but because it is not supported by libvirt (at least in the RHEL7 version), we can't use it for live migration :-(

Comment 31 Dmitry Melekhov 2017-02-22 07:48:39 UTC
Maybe it is possible to set the default reconnect value to something other than 0?

Comment 32 Gerd Hoffmann 2017-04-25 09:24:55 UTC
Changing the default isn't a good idea, as this has a high chance of just trading one issue for another.

There is a special libvirt syntax to pass additional command-line switches to qemu; see http://blog.vmsplice.net/2011/04/how-to-pass-qemu-command-line-options.html

You can try this to tweak the reconnect value:

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='chardev.charredir0.reconnect=10'/>
  </qemu:commandline>
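Note that the <qemu:commandline> passthrough only takes effect if the qemu XML namespace is declared on the <domain> root element; without that declaration libvirt drops the extra elements when the XML is saved.  A sketch (the chardev id 'charredir0' is illustrative and has to match the id qemu actually assigns, and the value 10 is just an example):

  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    ...
    <qemu:commandline>
      <qemu:arg value='-set'/>
      <qemu:arg value='chardev.charredir0.reconnect=10'/>
    </qemu:commandline>
  </domain>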

In any case, making the reconnect value configurable is libvirt territory; reassigning ...

Comment 33 Dmitry Melekhov 2017-05-05 05:53:57 UTC
Hello!

Unfortunately we failed to set the reconnect parameter using <qemu:commandline>; maybe we are doing something wrong, but...

Thank you!

Comment 34 Pavel Hrdina 2017-08-28 12:57:25 UTC
Upstream patches posted:

https://www.redhat.com/archives/libvir-list/2017-August/msg00818.html

Comment 35 Pavel Hrdina 2017-08-30 15:00:13 UTC
Upstream commit:

commit 3ba6b532d11736fe82fedd53244f3c334e911b7c
Author: Pavel Hrdina <phrdina>
Date:   Fri Aug 25 18:57:15 2017 +0200

    qemu: implement chardev source reconnect

v3.6.0-223-g3ba6b532d1

Comment 37 jiyan 2017-10-25 04:10:55 UTC
Test env components:
qemu-kvm-rhev-2.10.0-3.el7.x86_64
kernel-3.10.0-742.el7.x86_64
libvirt-3.8.0-1.el7.x86_64

Test Scenario:
1. 'tcp' USB redirected device with 'connect' mode and 'reconnect' element ( 'enabled' equals 'yes' and 'timeout' is set)
2. 'tcp' USB redirected device with 'connect' mode and 'reconnect' element ( 'enabled' equals 'no')
3. 'tcp' channel device with 'connect' mode and without configuration for 'reconnect' element

Scenario-1: USB redirected device with 'connect' mode, 'enabled' equals 'yes' and 'timeout' is set
1. Prepare physical hostA with USB device plugged
(hostA)# lsusb 
Bus 002 Device 051: ID 058f:6387 Alcor Micro Corp. Flash Drive
(hostA)# usbredirserver -p 4000 058f:6387

2. Prepare physical hostB, used as the migration source machine, with a VM named 'pc' that has a USB redirected device configured; start the VM:
(hostB)# virsh dumpxml pc --inactive |grep redir -A5
    <redirdev bus='usb' type='tcp'>
      <source mode='connect' host='hostA IP' service='4000'>
        <reconnect enabled='yes' timeout='5'/>
      </source>
      <protocol type='raw'/>
      <address type='usb' bus='0' port='1'/>
    </redirdev>

(hostB)# virsh start pc
Domain pc started

3. After starting VM 'pc', check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids

4. Prepare physical hostC, used as the migration destination machine, and migrate VM 'pc' from hostB to hostC:
(hostB)# virsh migrate --live pc qemu+ssh://hostC IP/system --verbose
root@hostC IP's password: 
Migration: [100 %]

5. Check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00

6. Check the VM 'pc' in hostC
(hostC) # virsh list --all |grep pc
 13    pc                             running
(hostC) # virsh console pc
Connected to domain pc
Escape character is ^]
Last login: Tue Oct 24 17:26:01 on tty1
# lsusb 
Bus 001 Device 003: ID 058f:6387 Alcor Micro Corp. Flash Drive

7. Check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids

It shows that reconnection succeeds, per the configuration in the 'reconnect' element.

Scenario-2: USB redirected device with 'connect' mode, 'enabled' equals 'no' 
1. Prepare physical hostA with USB device plugged
(hostA)# lsusb 
Bus 002 Device 051: ID 058f:6387 Alcor Micro Corp. Flash Drive
(hostA)# usbredirserver -p 4000 058f:6387

2. Prepare physical hostB, used as the migration source machine, with a VM named 'pc' that has a USB redirected device configured; start the VM:
(hostB)# virsh dumpxml pc --inactive |grep redir -A5
    <redirdev bus='usb' type='tcp'>
      <source mode='connect' host='hostA IP' service='4000'>
        <reconnect enabled='no'/>
      </source>
      <protocol type='raw'/>
      <address type='usb' bus='0' port='1'/>
    </redirdev>

(hostB)# virsh start pc
Domain pc started

3. After starting VM 'pc', check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids

4. Prepare physical hostC, used as the migration destination machine, and migrate VM 'pc' from hostB to hostC:
(hostB)# virsh migrate --live pc qemu+ssh://hostC IP/system --verbose
root@hostC IP's password: 
Migration: [100 %]

5. Check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00

6. Check the VM 'pc' in hostC
(hostC) # virsh list --all |grep pc
 13    pc                             running
(hostC) # virsh console pc
Connected to domain pc
Escape character is ^]
Last login: Tue Oct 24 17:26:01 on tty1
# lsusb 
No USB redirected device

7. Check the info returned by command  'usbredirserver' in hostA
(hostA)# usbredirserver -p 4000 058f:6387
usbredirparser: Peer version: qemu usb-redir guest 2.10.0, using 64-bits ids
usbredirhost: device disconnected
usbredirparser: error data len 33 != header len 0 ep 00

It shows that reconnection fails, per the configuration in the 'reconnect' element.

Scenario-3: 'tcp' channel device with 'connect' mode and without configuration for the 'reconnect' element
1. Prepare physical hostA with a server socket program running (the code is in the "Additional info" section below):
(hostA)# ./server

2. Prepare physical hostB, used as the migration source machine, with a VM named 'pc' that has a channel device configured; start the VM:
(hostB)# virsh dumpxml pc --inactive |grep channel -A6
    <channel type='tcp'>
      <source mode='connect' host='hostA' service='2445'/>
      <protocol type='raw'/>
      <target type='virtio' name='test1'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>

(hostB)# virsh start pc
Domain pc started

3. After starting VM 'pc', check the info returned by program in hostA
(hostA)#  ./server
connect from hostB IP

(hostA)# netstat -anlp |grep 2445
tcp        0      0 0.0.0.0:2445            0.0.0.0:*               LISTEN      16188/./server      
tcp        0      0 HostA:2445        hostB:56830        ESTABLISHED 16188/./server      

4. Kill the 'server' program
# kill -9 16188
# netstat -anlp |grep 2445
No output

5. Restart the 'server' program
(hostA)# ./server

6. Try to send several characters from the guest to hostA through the channel:
(hostB) # virsh console pc
Connected to domain pc
Escape character is ^]
Last login: Wed Oct 25 11:10:02 on ttyS0
# echo abdadsafds >/dev/vport0p1
-bash: echo: write error: Interrupted system call

It shows that reconnection fails, as there is no configuration for the 'reconnect' element.

All the results are as expected, move this bug to be verified.

Additional info:
# cat server.c 
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define PORT 2445
#define MAXSOCKFD 10	/* all fds are expected to stay below this */

int main(void)
{
	int sockfd, newsockfd, fd;
	int is_connected[MAXSOCKFD];
	struct sockaddr_in addr;
	socklen_t addr_len;
	fd_set readfds;
	char buffer[256];
	char msg[] = "Welcome to server!";

	if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
		perror("socket");
		exit(1);
	}
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(PORT);
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		exit(1);
	}
	if (listen(sockfd, 3) < 0) {
		perror("listen");
		exit(1);
	}
	for (fd = 0; fd < MAXSOCKFD; fd++)
		is_connected[fd] = 0;
	while (1) {
		FD_ZERO(&readfds);
		FD_SET(sockfd, &readfds);
		for (fd = 0; fd < MAXSOCKFD; fd++)
			if (is_connected[fd])
				FD_SET(fd, &readfds);
		/* nfds = MAXSOCKFD is fine here because no fd reaches that value */
		if (select(MAXSOCKFD, &readfds, NULL, NULL, NULL) <= 0)
			continue;
		for (fd = 0; fd < MAXSOCKFD; fd++) {
			if (!FD_ISSET(fd, &readfds))
				continue;
			if (fd == sockfd) {	/* new client connecting */
				addr_len = sizeof(addr);
				if ((newsockfd = accept(sockfd, (struct sockaddr *)&addr, &addr_len)) < 0) {
					perror("accept");
					continue;
				}
				write(newsockfd, msg, sizeof(msg));
				is_connected[newsockfd] = 1;
				printf("connect from %s\n", inet_ntoa(addr.sin_addr));
			} else {	/* data (or EOF) from an existing client */
				memset(buffer, 0, sizeof(buffer));
				if (read(fd, buffer, sizeof(buffer)) <= 0) {
					printf("connect closed.\n");
					is_connected[fd] = 0;
					close(fd);
				} else {
					printf("%s", buffer);
				}
			}
		}
	}
}
# gcc server.c -o server

Comment 38 jiyan 2017-10-25 06:46:21 UTC
Hi Pavel, I tested another scenario when verifying this bug: migrate a VM with a 'tcp' channel in 'connect' mode but without a 'reconnect' configuration from hostB to hostC. After migration, the VM can still send messages to hostA, which runs a server socket program. Could you help check whether this is normal or a bug? Thanks in advance.
Steps to reproduce:
1. Prepare physical hostA with a server socket program running; the program is shown in comment 37:
(hostA)# ./server

2. Prepare physical hostB, used as the migration source machine, with a VM named 'pc' that has a channel device configured; start the VM:
(hostB)# virsh dumpxml pc --inactive |grep channel -A6
    <channel type='tcp'>
      <source mode='connect' host='hostA' service='2445'/>
      <protocol type='raw'/>
      <target type='virtio' name='test1'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>

(hostB)# virsh start pc
Domain pc started

3. After starting VM 'pc', check the info returned by program in hostA
(hostA)#  ./server
connect from hostB IP

(hostA)# netstat -anlp |grep 2445
tcp        0      0 0.0.0.0:2445            0.0.0.0:*               LISTEN      16188/./server      
tcp        0      0 HostA:2445        hostB:56830        ESTABLISHED 16188/./server      

4. Try to send several characters from the guest to hostA through the channel and check the info on hostA:
(hostB) # virsh console pc
Connected to domain pc
Escape character is ^]
Last login: Wed Oct 25 11:10:02 on ttyS0
# echo abdadsafds >/dev/vport0p1

(hostA)# ./server 
connect from hostB IP
dsdankdnasiofew

5. Prepare physical hostC, used as the migration destination machine, and migrate VM 'pc' from hostB to hostC:
(hostB) # virsh migrate --live pc qemu+ssh://hostC IP/system --verbose
root@hostC IP's password: 
Migration: [100 %]

6. Check the info in hostA
# ./server 
connect from hostB IP
dsdankdnasiofew
connect from hostC IP
connect closed.

7. Check the VM 'pc' on hostC and try to send several characters from the guest to hostA through the channel:
(hostC) # virsh list --all |grep pc
 13    pc                             running
(hostC) # virsh console pc
Connected to domain pc
Escape character is ^]
Last login: Tue Oct 24 17:26:01 on tty1
# echo xewqicrecervqf >/dev/vport0p1

8. Check the info in hostA
# ./server 
connect from hostB IP
dsdankdnasiofew
connect from hostC IP
connect closed.
xewqicrecervqf

Comment 39 Pavel Hrdina 2017-11-24 13:34:29 UTC
I wouldn't say that's a bug.  If you don't configure reconnect at all, it's
up to the hypervisor to use some default.  In the case of migration, QEMU probably tries to reconnect to the server.

Comment 47 errata-xmlrpc 2018-04-10 10:33:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

