Bug 613537
| Summary: | [LXC] Fail to start vm that have multi network interfaces. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Johnny Liu <jialiu> |
| Component: | libvirt | Assignee: | Daniel Berrangé <berrange> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.0 | CC: | ajia, dallan, dyuan, eblake, jyang, llim, ozaki.ryota, rwu, xen-maint |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-06-20 06:23:51 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
This issue was proposed while only blocker issues are being considered for the current Red Hat Enterprise Linux release, and it has been denied for the current release. ** If you would still like this issue considered for the current release, ask your support representative to file it as a blocker on your behalf. Otherwise, ask that it be considered for the next Red Hat Enterprise Linux release. **

Should be fixed in the 0.8.2 release (commit d2ac3c2fdde7ad397f0abbd65d6c5e0129fd2236).

Since RHEL 6.1 External Beta has begun and this bug remains unresolved, it has been rejected, as it was not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.

Multi-network works with libvirt-0.9.4-23.el6.x86_64, while LXC fails to start with libvirt-0.9.8-1.el6.x86_64, even with a single network interface. Do I need to file a new bug, or just track the issue in this one?
# cat single_toy.xml
<domain type='lxc'>
  <name>single_toy</name>
  <uuid>386f5b25-43ee-9d62-4ce2-62c3809e47c1</uuid>
  <memory>500000</memory>
  <currentMemory>500000</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
  </devices>
</domain>
# virsh -c lxc:/// define single_toy.xml
Domain single_toy defined from single_toy.xml
# virsh -c lxc:/// start single_toy
error: Failed to start domain single_toy
error: internal error guest failed to start: 2011-12-21 03:31:00.026+0000: 18012: info : libvirt version: 0.9.8, package: 1.el6 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2011-12-08-10:01:37, x86-008.build.bos.redhat.com)
2011-12-21 03:31:00.026+0000: 18012: error : lxcControllerRun:1393 : unsupported configuration: Expected exactly one TTY fd
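For the "Expected exactly one TTY fd" failure above, a quick check is whether the defined guest really carries a single <console> device, and what the per-guest controller log reports. The following is a hedged triage sketch, not taken from the original report; the log path is the usual default for the libvirt LXC driver and may differ on a given installation.

# Hedged triage sketch (not from the original report):
# count <console> elements in the defined guest XML
virsh -c lxc:/// dumpxml single_toy | grep -c '<console'
# inspect the LXC controller log for the guest (default path is an assumption)
tail -n 50 /var/log/libvirt/lxc/single_toy.log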
(In reply to comment #11)

Typo: the XML for a single network interface should be:

# cat single_toy.xml
<domain type='lxc'>
  <name>single_toy</name>
  <uuid>386f5b25-43ee-9d62-4ce2-62c3809e47c1</uuid>
  <memory>500000</memory>
  <currentMemory>500000</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
  </devices>
</domain>

(In reply to comment #11)
> Multi-network works with libvirt-0.9.4-23.el6.x86_64, while LXC fails to start
> with libvirt-0.9.8-1.el6.x86_64, even with a single network interface. Do I
> need to file a new bug, or just track the issue in this one?

Ugh, yes, please open a new BZ. Let's keep this one open until we confirm that multi-network works properly with the 0.9.8 builds.

Are LXC guests starting with the 0.9.10-rc builds?

(In reply to comment #14)
> Are LXC guests starting with the 0.9.10-rc builds?

It still fails to start an LXC guest with either multiple or a single network interface:

1. Multiple network interfaces

# virsh -c lxc:/// start multi_toy
error: Failed to start domain multi_toy
error: internal error Child process (PATH=/sbin:/usr/sbin:/bin:/usr/bin LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc --name multi_toy --console 22 --handshake 25 --background --veth veth1 --veth veth3 --veth veth5) status unexpected: exit status 1

2. Single network interface

# virsh -c lxc:/// start single_toy
error: Failed to start domain single_toy
error: internal error Child process (PATH=/sbin:/usr/sbin:/bin:/usr/bin LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc --name single_toy --console 22 --handshake 25 --background --veth veth1) status unexpected: exit status 1

Note, however, that these errors are different from those in comment 11. Dan, are you seeing this?

Nope, I have no trouble starting LXC guests with multiple interfaces. Please provide the full guest config you are testing with.

(In reply to comment #17)
> Nope, I have no trouble starting LXC guests with multiple interfaces. Please
> provide the full guest config you are testing with.

Hi Daniel,

I only followed the XML configuration above (please see comments 11 and 12), and I will double-check this on another host; maybe I have an environment issue.

Thanks, Alex

libvirt-0.9.4-23.el6.x86_64 with single/multiple network interfaces works well for me. I then upgraded directly from 0.9.4-23 to 0.9.10-0rc1; the test result is the same as in comment 15, and libvirtd.log says:

2012-02-08 16:26:53.085+0000: 7495: error : virCommandWait:2308 : internal error Child process (PATH=/sbin:/usr/sbin:/bin:/usr/bin LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc --name multi_toy --console 19 --handshake 22 --background --veth veth1 --veth veth3 --veth veth5) status unexpected: exit status 1
2012-02-08 16:27:42.892+0000: 8346: info : libvirt version: 0.9.10, package: 0rc1.el6 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2012-02-06-03:43:55, x86-004.build.bos.redhat.com)
2012-02-08 16:27:42.892+0000: 8346: error : virDomainObjParseNode:8417 : XML error: unexpected root element <domain>, expecting <domstatus>
2012-02-08 16:27:46.664+0000: 8337: error : virCommandWait:2308 : internal error Child process (PATH=/sbin:/usr/sbin:/bin:/usr/bin LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc --name multi_toy --console 19 --handshake 22 --background --veth veth1 --veth veth3 --veth veth5) status unexpected: exit status 1
2012-02-08 16:28:49.568+0000: 8339: error : virCommandWait:2308 : internal error Child process (PATH=/sbin:/usr/sbin:/bin:/usr/bin LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc --name single_toy --console 19 --handshake 22 --background --veth veth1) status unexpected: exit status 1

The patch has been ACKed and pushed upstream:
commit d474dbaddebfce8a2f6cfc4d2c4a9c50c2fab6df
Author: Daniel P. Berrange <berrange>
Date: Wed Feb 8 14:21:28 2012 +0000
Populate /dev/std{in,out,err} symlinks in LXC containers
Some applications expect /dev/std{in,out,err} to exist. Populate
them during container startup as symlinks to /proc/self/fd
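As a quick sanity check of this fix, the snippet below (a hedged sketch, not taken from the original thread) verifies the symlinks from inside a running container's console; the exact /proc/self/fd targets are an assumption based on the usual Linux layout.

# Hedged verification sketch: run inside the container's console.
ls -l /dev/stdin /dev/stdout /dev/stderr
# Expected, roughly (assumption based on the typical layout):
#   /dev/stdin  -> /proc/self/fd/0
#   /dev/stdout -> /proc/self/fd/1
#   /dev/stderr -> /proc/self/fd/2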
And an LXC guest with a single or multiple network interfaces can be started successfully with the patch.
Tested with libvirt-0.9.10-1.el6; the VM starts successfully with multiple network interfaces, so it should be fixed now. Run setenforce 0 before starting the VM (according to comment 25 of bug 607496); then we can see the interfaces via "ifconfig -a".

Define the VM with the XML in the bug description.

virsh # start toy
Domain toy started

virsh # console toy2
Connected to domain toy2
Escape character is ^]
sh-4.1# service network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:
Determining IP information for eth0... done.               [  OK  ]
sh-4.1# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 52:54:00:EB:EE:66
          inet addr:192.168.122.21  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:feeb:ee66/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2604 (2.5 KiB)  TX bytes:1526 (1.4 KiB)

eth1      Link encap:Ethernet  HWaddr 52:54:00:B3:A3:D0
          inet6 addr: fe80::5054:ff:feb3:a3d0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2970 (2.9 KiB)  TX bytes:552 (552.0 b)

eth2      Link encap:Ethernet  HWaddr 52:54:00:57:DE:5D
          inet6 addr: fe80::5054:ff:fe57:de5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2970 (2.9 KiB)  TX bytes:552 (552.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

sh-4.1# ping 10.66.4.198
PING 10.66.4.198 (10.66.4.198) 56(84) bytes of data.
64 bytes from 10.66.4.198: icmp_seq=1 ttl=63 time=1.55 ms
64 bytes from 10.66.4.198: icmp_seq=2 ttl=63 time=0.465 ms

(In reply to comment #21)
> Tested with libvirt-0.9.10-1.el6; the VM starts successfully with multiple
> network interfaces, so it should be fixed now.

Can you change the status? Thanks, Dave

(In reply to comment #22)
> Can you change the status? Thanks, Dave

Moving the bug to VERIFIED status based on Comment 20 and Comment 21.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-0748.html
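The verification flow above can be condensed into a short script. This is a hedged sketch based only on the commands in this comment: the domain name "toy" and the temporary setenforce 0 workaround come from the thread, while the host-side veth check is purely illustrative.

# Hedged sketch of the verification flow described above (assumes the guest
# 'toy' is already defined from the XML in the bug description).
setenforce 0                  # temporary SELinux workaround per bug 607496 comment 25
virsh -c lxc:/// start toy    # should print "Domain toy started"
ip link show | grep veth      # host side: expect one veth device per <interface> (illustrative check)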
Description of problem:
When trying to start an LXC VM that has multiple network interfaces, the following error is seen:

# virsh -c lxc:/// start toy
error: Failed to start domain toy
error: internal error Failed to create veth device pair: 512

Version-Release number of selected component (if applicable):
libvirt-0.8.1-13.el6.x86_64
kernel-2.6.32-44.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Define an LXC guest with multiple network interfaces.

# cat lxc_vm.xml
<domain type='lxc'>
  <name>toy</name>
  <uuid>386f5b25-43ee-9d62-4ce2-58c3809e47c1</uuid>
  <memory>500000</memory>
  <currentMemory>500000</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
  </devices>
</domain>

# virsh -c lxc:/// define lxc_vm.xml
Domain toy defined from lxc_vm.xml

2. Try to start the LXC guest.

# virsh -c lxc:/// start toy
error: Failed to start domain toy
error: internal error Failed to create veth device pair: 512

Actual results:
Fails to start the VM with multiple network interfaces.

Expected results:
Should start successfully.

Additional info:
If the VM has only one network interface, the domain starts successfully.
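Since the reported failure is in veth pair creation, one quick host-side check is to create and then delete a few veth pairs by hand with iproute2. This is a hedged triage sketch only: the device names are arbitrary, and this is not necessarily the exact command the libvirt LXC driver runs internally.

# Hedged triage sketch (not from the original report): confirm the host can
# create several veth pairs; names below are arbitrary test names.
for i in 0 1 2; do
    ip link add testveth${i} type veth peer name testpeer${i} || echo "pair ${i} failed"
done
ip link show | grep testveth
# Clean up the test devices afterwards (deleting one end removes the pair).
for i in 0 1 2; do
    ip link del testveth${i}
done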