Bug 663283 - [virtio serial] The connection does not close in guest when closed nc in host side
Keywords:
Status: CLOSED DUPLICATE of bug 621484
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 6.1
Assignee: Amit Shah
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 580954
 
Reported: 2010-12-15 09:35 UTC by Qunfang Zhang
Modified: 2013-01-09 23:25 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-01-06 02:16:54 UTC
Target Upstream Version:
Embargoed:



Description Qunfang Zhang 2010-12-15 09:35:27 UTC
Description of problem:
Boot a RHEL 6 guest with a virtio serial device. Use nc to connect to QEMU from the host, and do reads and writes between host and guest.
Then close nc on the host; the connection on the guest side does not close normally.
As a result, when connecting to the guest again, the host cannot receive the messages sent by the guest.
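
For reference, the host-side nc interaction can be sketched as a small Python client. This is a hypothetical stand-in, not part of the reproduction; the host/port values are assumptions matching the -chardev option in the steps below:

```python
import socket

# Hypothetical stand-in for `nc 127.0.0.1 12345` on the host: connect to the
# virtio-serial TCP chardev, read whatever the guest writes, then disconnect.
def read_from_guest(host="127.0.0.1", port=12345, timeout=5.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(4096)   # one read, like nc printing guest output
        except socket.timeout:
            return b""            # nothing arrived within the timeout
```

Closing this connection (leaving the `with` block) is the host-side event whose guest-side handling this bug is about.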

Version-Release number of selected component (if applicable):
2.6.32-91.el6.x86_64
qemu-kvm-0.12.1.2-2.125.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1.Boot a guest with virtio serial:
 /usr/libexec/qemu-kvm -m 1G -smp 1 -cpu cpu64-rhel6,+x2apic -usbdevice tablet -drive file=/home/RHEL-Server-6.0-64-virtio.qcow2,format=qcow2,if=none,id=drive-virtio0,boot=on,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-virtio0,id=virtio-blk-pci0 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,mac=00:10:43:30:8a:10,bus=pci.0,addr=0x4 -boot c -uuid 046e6d5f-d6c5-4c48-ae1b-7c1248b1abc4  -rtc-td-hack -no-kvm-pit-reinjection -monitor stdio -name win2k8-64 -vnc :2 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=4,bus=pci.0,addr=0x7 -chardev socket,id=channel0,host=127.0.0.1,port=12345,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=0,chardev=channel0,name=com.redhat.rhevm.vdsm

2. On the host:
#nc 127.0.0.1 12345

3. Check the connection on the host:
[root@dhcp-91-152 ~]# netstat -a -n | grep 12345
tcp        0      0 127.0.0.1:12345             0.0.0.0:*                   LISTEN      
tcp        0      0 127.0.0.1:12345             127.0.0.1:50890             ESTABLISHED 
tcp        0      0 127.0.0.1:50890             127.0.0.1:12345             ESTABLISHED 

4. Write data to the virtio serial port in the guest:
#echo aaa > /dev/vport0p0

5. Close nc on the host by pressing Ctrl+C.

6. Check the connection on the host again:
[root@dhcp-91-152 ~]# netstat -a -n | grep 12345
tcp        0      0 127.0.0.1:12345             0.0.0.0:*                   LISTEN      
tcp        1      0 127.0.0.1:12345             127.0.0.1:50890             CLOSE_WAIT

(In fact, even after waiting a few minutes, the status is still CLOSE_WAIT.)
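
The lingering CLOSE_WAIT is ordinary TCP behavior when the peer has closed but the local side has not yet called close(). A minimal Python sketch (unrelated to QEMU itself, purely to illustrate the state):

```python
import socket

# A TCP connection whose peer has closed stays in CLOSE_WAIT until the
# local side calls close() as well.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

cli.close()            # peer closes, like pressing Ctrl+C in nc
eof = conn.recv(16)    # EOF (b"") is visible immediately on the other side...
print(eof == b"")      # ...but conn's TCP state remains CLOSE_WAIT
conn.close()           # only an explicit local close() leaves CLOSE_WAIT
srv.close()
```

In this bug, the side stuck in CLOSE_WAIT is QEMU's chardev socket, which suggests the guest-side port is not being told the host disconnected.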

7. On the host, connect again:
#nc 127.0.0.1 12345

8. Write data to the virtio serial port in the guest again:
#echo aaa > /dev/vport0p0


Actual results:
After step 4, "aaa" is received on the host side.
At step 6, the connection on the guest side is not closed.
After step 8, the host cannot receive the message anymore.

Expected results:
The guest-side connection should be closed after nc is closed on the host.

Additional info:

Comment 2 Amit Shah 2011-01-04 16:54:38 UTC
Can you try the same with unix sockets instead of tcp sockets on the host?

Comment 3 Qunfang Zhang 2011-01-05 03:22:18 UTC
(In reply to comment #2)
> Can you try the same with unix sockets instead of tcp sockets on the host?

This issue also exists when using unix sockets.
Steps:
1. Boot a guest with a virtio serial device:
/usr/libexec/qemu-kvm -enable-kvm -m 2G -smp 2 -uuid ef242d8b-9faa-4785-b48b-d0a13bba094c -rtc base=utc,clock=host,driftfix=slew -boot c -drive file=/home/RHEL-Server-6.0-64-virtio.qcow2,if=none,id=drive-virtio-disk0,boot=on,format=qcow2 -device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,mac=00:10:12:20:77:0c,bus=pci.0,addr=0x7,id=hostnet0 -usb -device usb-tablet,id=input0 -monitor stdio -vnc :1 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=4,bus=pci.0,addr=0x4 -chardev socket,id=channel0,path=/var/lib/libvirt/qemu/rhel6.channel0,server,nowait -device virtserialport,chardev=channel0,name=org.linux-kvm.port.0,bus=virtio-serial0.0,id=port0

2. On the host:
nc -U /var/lib/libvirt/qemu/rhel6.channel0

3. In guest:
echo aaa > /dev/vport0p1

4. Close the nc connection on the host and then connect again.

5. Repeat step 3.

Result:
After step 3, "aaa" is displayed on the host.
But after step 5, nothing is displayed anymore.

Comment 4 Amit Shah 2011-01-05 09:17:40 UTC
First, I see you're using /dev/vport0p0 in the original comment.  Don't do that; port 0 is reserved for console ports and has a special meaning.

You should also not explicitly set the 'vectors' value unless you know for sure how many ports are going to be used.  And if the vectors value is less than max_ports, MSI will be disabled for that device.

Can you try this in the guest:

while true; do echo aaaa > /dev/vport0p1; done;

after you disconnect and re-connect nc in the host?  After a while, the output should start pouring in.

If that happens, this is a dup of the Bug 621484.

BTW I can't reproduce this with unix sockets, only with tcp sockets.

Comment 5 Qunfang Zhang 2011-01-06 02:16:54 UTC
(In reply to comment #4)
> First, I see you're using /dev/vport0p0 in the original comment.  Don't do
> that; port 0 is reserved for console ports and has a special meaning.
OK
> 
> You should also not explicitly set the 'vectors' value unless you know for sure
> how many ports are going to be used.  And if the vectors value is less than
> max_ports, MSI will be disabled for that device.
> 
> Can you try this in the guest:
> 
> while true; do echo aaaa > /dev/vport0p1; done;
> 
> after you disconnect and re-connect nc in the host?  After a while, the output
> should start pouring in.
> 
> If that happens, this is a dup of the Bug 621484.
Using "while true; do echo aaaa > /dev/vport0p1; done;" in the guest, I can see many "aaaa" lines displayed on the host side for both tcp and unix sockets after disconnecting and re-connecting nc.

When repeatedly entering "echo aaaa > /dev/vport0p1", or using "while true; do echo aaaa > /dev/vport0p1; sleep 1; done;" in the guest after disconnecting and then reconnecting nc on the host, I find that many "aaaa" lines are lost on the host side, and the output only appears on the host "after a while".

> 
> BTW I can't reproduce this with unix sockets, only with tcp sockets.
I can still reproduce it with both unix and tcp sockets using the steps in comment 3.

I will close this as a duplicate of Bug 621484.


Thanks.

*** This bug has been marked as a duplicate of bug 621484 ***

