Bug 962411 - [LXC] libvirtd buffer overflow when starting 144+ containers
Summary: [LXC] libvirtd buffer overflow when starting 144+ containers
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Daniel Berrangé
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-05-13 12:00 UTC by Monson Shao
Modified: 2014-09-21 22:51 UTC (History)
12 users

Fixed In Version: libvirt-1.0.6-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-13 11:24:14 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd log in journald (175.49 KB, text/plain)
2013-05-13 12:00 UTC, Monson Shao
no flags

Description Monson Shao 2013-05-13 12:00:28 UTC
Created attachment 747195 [details]
libvirtd log in journald

Description of problem:
When starting more than 144 containers, libvirtd terminated with the following log:

May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: *** buffer overflow detected ***: /usr/sbin/libvirtd terminated
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: ======= Backtrace: =========
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libc.so.6(__fortify_fail+0x37)[0x7ff1790fc907]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libc.so.6(+0x10aad0)[0x7ff1790faad0]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libc.so.6(+0x10c877)[0x7ff1790fc877]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(virNetlinkCommand+0x10b)[0x7ff17bf5978b]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(virNetDevMacVLanCreate+0x1af)[0x7ff17bf54bbf]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(virNetDevMacVLanCreateWithVPortProfile+0x20b)[0x7ff17bf5538b]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so(virLXCProcessSetupInterfaceDirect+0xb6)[0x7ff16adedf46]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so(virLXCProcessStart+0x929)[0x7ff16adef259]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so(+0x1b1d4)[0x7ff16adf31d4]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(virDomainCreateXML+0x97)[0x7ff17bfd1107]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /usr/sbin/libvirtd(+0x31f8f)[0x7ff17c9d9f8f]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(virNetServerProgramDispatch+0x367)[0x7ff17c034527]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(+0x147738)[0x7ff17c02f738]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(+0x80235)[0x7ff17bf68235]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libvirt.so.0(+0x7fcc1)[0x7ff17bf67cc1]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libpthread.so.0(+0x7c53)[0x7ff1797c7c53]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: /lib64/libc.so.6(clone+0x6d)[0x7ff1790e506d]
May 13 06:38:44 hp-dl585g7-03.rhts.eng.nay.redhat.com libvirtd[4088]: ======= Memory map: ========
<skip>

Note that some resource limits have been overridden in libvirtd.service:
LimitNOFILE=20480
LimitSTACK=81920

This bug may be a regression, since it does not happen with libvirt-sandbox-0.1.2 and libvirt-daemon-1.0.2.


Version-Release number of selected component (if applicable):
libvirt-daemon-1.0.4-1.1.el7.x86_64
libvirt-sandbox-0.2.0-1.el7.x86_64
systemd-202-3.el7.x86_64
kernel-3.9.0-0.55.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Set up 144+ containers with httpd.
2. num=0800; for i in $(seq -w 1 $num); do echo "[$i]..."; virt-sandbox-service start httpd$i >log/httpd$i.stdout 2>log/httpd$i.stderr; done
  
Actual results:
libvirtd terminated

Expected results:
libvirtd does not terminate, and all containers run normally.

Additional info:

Comment 1 Daniel Berrangé 2013-05-13 12:06:27 UTC
Please capture a full stack trace of all threads with GDB, ensuring all -debuginfo RPMs are present.

Comment 4 Monson Shao 2013-05-13 13:29:29 UTC
BTW, in this test I use macvlan instead of bridge for networking.

gdb backtrace:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ff2cffaf700 (LWP 18875)]
0x00007ff4c38bd9a9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56	  return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
(gdb) bt
#0  0x00007ff4c38bd9a9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ff4c38bf0b8 in __GI_abort () at abort.c:90
#2  0x00007ff4c38fdcd7 in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x7ff4c3a048de "*** %s ***: %s terminated\n") at ../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x00007ff4c3994907 in __GI___fortify_fail (msg=msg@entry=0x7ff4c3a04884 "buffer overflow detected") at fortify_fail.c:31
#4  0x00007ff4c3992ad0 in __GI___chk_fail () at chk_fail.c:28
#5  0x00007ff4c3994877 in __fdelt_chk (d=d@entry=1058) at fdelt_chk.c:25
#6  0x00007ff4c67f178b in virNetlinkCommand (nl_msg=nl_msg@entry=0x7ff2bc0cf5b0, respbuf=respbuf@entry=0x7ff2cffadf48, respbuflen=respbuflen@entry=0x7ff2cffadf44, src_pid=src_pid@entry=0, 
    dst_pid=dst_pid@entry=0, protocol=protocol@entry=0, groups=groups@entry=0) at util/virnetlink.c:246
#7  0x00007ff4c67ecbbf in virNetDevMacVLanCreate (ifname=ifname@entry=0x7ff2cffae020 "macvlan4", type=type@entry=0x7ff4c6956714 "macvlan", macaddress=macaddress@entry=0x7ff2bc0d45a4, 
    srcdev=srcdev@entry=0x7ff2bc0cf040 "p11p1", macvlan_mode=macvlan_mode@entry=4, retry=retry@entry=0x7ff2cffae010) at util/virnetdevmacvlan.c:166
#8  0x00007ff4c67ed38b in virNetDevMacVLanCreateWithVPortProfile (tgifname=<optimized out>, macaddress=macaddress@entry=0x7ff2bc0d45a4, linkdev=0x7ff2bc0cf040 "p11p1", 
    mode=mode@entry=VIR_NETDEV_MACVLAN_MODE_BRIDGE, withTap=withTap@entry=false, vnet_hdr=vnet_hdr@entry=0, 
    vmuuid=vmuuid@entry=0x7ff2bc0d2c28 "\344\016\t\027$hQ\366\324\377\343\324\246", <incomplete sequence \341\270>, virtPortProfile=virtPortProfile@entry=0x0, res_ifname=res_ifname@entry=0x7ff2cffae138, 
    vmOp=vmOp@entry=VIR_NETDEV_VPORT_PROFILE_OP_CREATE, stateDir=stateDir@entry=0x7ff4b020c270 "/var/run/libvirt/lxc", bandwidth=bandwidth@entry=0x0) at util/virnetdevmacvlan.c:898
#9  0x00007ff4b5685f46 in virLXCProcessSetupInterfaceDirect (conn=conn@entry=0x7ff330000de0, def=def@entry=0x7ff2bc0d2c20, net=0x7ff2bc0d45a0) at lxc/lxc_process.c:396
#10 0x00007ff4b5687259 in virLXCProcessSetupInterfaces (veths=<optimized out>, nveths=<optimized out>, def=<optimized out>, conn=<optimized out>) at lxc/lxc_process.c:511
#11 virLXCProcessStart (conn=conn@entry=0x7ff330000de0, driver=driver@entry=0x7ff4b0205e30, vm=vm@entry=0x7ff2bc0c2120, autoDestroy=autoDestroy@entry=true, reason=reason@entry=1) at lxc/lxc_process.c:1140
#12 0x00007ff4b568b1d4 in lxcDomainCreateAndStart (conn=0x7ff330000de0, xml=<optimized out>, flags=2) at lxc/lxc_driver.c:1095
#13 0x00007ff4c6869107 in virDomainCreateXML (conn=0x7ff330000de0, 
    xmlDesc=0x7ff2bc0c6140 "<domain type=\"lxc\">\n  <name>httpd0097</name>\n  <memory unit=\"KiB\">524288</memory>\n  <features>\n    <privnet/>\n  </features>\n  <os>\n    <type arch=\"x86_64\">exe</type>\n    <init>/usr/libexec/libvirt-san"..., flags=2) at libvirt.c:1988
#14 0x00007ff4c7271f8f in remoteDispatchDomainCreateXML (server=<optimized out>, msg=<optimized out>, ret=0x7ff2bc0c20f0, args=0x7ff2bc0c2090, rerr=0x7ff2cffaec90, client=0x7ff4c97bf900)
    at remote_dispatch.h:1172
#15 remoteDispatchDomainCreateXMLHelper (server=<optimized out>, client=0x7ff4c97bf900, msg=<optimized out>, rerr=0x7ff2cffaec90, args=0x7ff2bc0c2090, ret=0x7ff2bc0c20f0) at remote_dispatch.h:1152
#16 0x00007ff4c68cc527 in virNetServerProgramDispatchCall (msg=0x7ff4c97eff70, client=0x7ff4c97bf900, server=0x7ff4c8f7eae0, prog=0x7ff4c8f79570) at rpc/virnetserverprogram.c:439
#17 virNetServerProgramDispatch (prog=0x7ff4c8f79570, server=server@entry=0x7ff4c8f7eae0, client=0x7ff4c97bf900, msg=0x7ff4c97eff70) at rpc/virnetserverprogram.c:305
#18 0x00007ff4c68c7738 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7ff4c8f7eae0) at rpc/virnetserver.c:162
#19 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7ff4c8f7eae0) at rpc/virnetserver.c:183
#20 0x00007ff4c6800235 in virThreadPoolWorker (opaque=opaque@entry=0x7ff4c97d18a0) at util/virthreadpool.c:144
#21 0x00007ff4c67ffcc1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#22 0x00007ff4c405fc53 in start_thread (arg=0x7ff2cffaf700) at pthread_create.c:308
#23 0x00007ff4c397d06d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb)

Comment 5 Daniel Berrangé 2013-05-13 13:34:12 UTC
Arrggh, file util/virnetlink.c:246 resolves to this:

    FD_SET(fd, &readfds);

The select() fdset size is limited to 1024, and almost certainly the file descriptor you have will exceed that.

This select() code needs to die and be replaced by poll()

Comment 6 Daniel Berrangé 2013-05-13 14:33:37 UTC
Patch available upstream

https://www.redhat.com/archives/libvir-list/2013-May/msg00914.html

Comment 7 zhe peng 2013-05-15 06:06:30 UTC
Can reproduce with build:
libvirt-1.0.4-1.1.el7.x86_64
libvirt-sandbox-0.1.2-1.el7.x86_64
kernel-3.9.0-0.55.el7.x86_64
systemd-204-2.el7.1.x86_64

Comment 8 Daniel Berrangé 2013-05-21 12:49:24 UTC
commit 8845d8dfa340f3065d7ee1e6e51cfb1ec9028ee6
Author: Daniel P. Berrange <berrange>
Date:   Mon May 13 14:43:20 2013 +0100

    Remove & ban use of select() for waiting for I/O

Comment 9 Alex Jia 2013-06-05 10:52:27 UTC
Although I no longer see any libvirtd buffer overflow in libvirtd.log, I still hit some issues; maybe I should file a new bug to track them. For details, please see the following.

[root@amd-9600b-8-1 ~]# for i in {1..200}; do virt-sandbox-service create -C -u httpd.service apache$i; done


[root@amd-9600b-8-1 ~]# for i in {1..200}; do virt-sandbox-service start apache$i & done

<slice>

Set hostname to <apache14>.
Default target could not be isolated, starting instead: Operation refused, unit may not be isolated.
[  OK  ] Reached target Paths.
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Listening on Journal Socket.
[  OK  ] Reached target Swap.
[  OK  ] Reached target Local File Systems.
         Starting Recreate Volatile Files and Directories...
         Starting Journal Service...
[  OK  ] Started Journal Service.
[  OK  ] Started Recreate Volatile Files and Directories.
[  OK  ] Reached target System Initialization.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Timers.
[  OK  ] Reached target Basic System.
         Starting The Apache HTTP Server...
         Starting Cleanup of Temporary Directories...
[  OK  ] Started Cleanup of Temporary Directories.
httpd.service: main process exited, code=exited, status=1/FAILURE
[FAILED] Failed to start The Apache HTTP Server.
See 'systemctl status httpd.service' for details.
MESSAGE=Unit httpd.service entered failed state.
[  OK  ] Reached target Sandbox multi-user target.
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
systemd 202 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
Detected virtualization 'lxc-libvirt'.

Welcome to Linux!

Set hostname to <apache9>.
Default target could not be isolated, starting instead: Operation refused, unit may not be isolated.
[  OK  ] Reached target Paths.
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Listening on Journal Socket.
[  OK  ] Reached target Swap.
[  OK  ] Reached target Local File Systems.
         Starting Recreate Volatile Files and Directories...
         Starting Journal Service...
[  OK  ] Started Journal Service.
[  OK  ] Started Recreate Volatile Files and Directories.
[  OK  ] Reached target System Initialization.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Timers.
[  OK  ] Reached target Basic System.
         Starting The Apache HTTP Server...
         Starting Cleanup of Temporary Directories...
[  OK  ] Started Cleanup of Temporary Directories.
httpd.service: main process exited, code=exited, status=1/FAILURE
[FAILED] Failed to start The Apache HTTP Server.
See 'systemctl status httpd.service' for details.
MESSAGE=Unit httpd.service entered failed state.
[  OK  ] Reached target Sandbox multi-user target.
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer
Unable to open connection: Unable to open lxc:///: Cannot recv data: Connection reset by peer

</slice>


I also got some errors in libvirtd.log.

<slice>

2013-06-05 09:39:50.050+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.051+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.051+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.052+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.053+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.053+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.054+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.055+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.055+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.056+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.056+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.057+0000: 6497: error : virNetServerAddClient:263 : Too many active clients (20), dropping connection from 127.0.0.1;0
2013-06-05 09:39:50.213+0000: 6497: error : virLXCProcessGetNsInode:658 : Unable to stat /proc/21737/ns/pid: No such file or directory
2013-06-05 09:39:50.213+0000: 6497: warning : virLXCProcessMonitorInitNotify:692 : Cannot obtain pid NS inode for 21737: Unable to stat /proc/21737/ns/pid: No such file or directory
2013-06-05 09:39:50.478+0000: 6497: error : virLXCProcessGetNsInode:658 : Unable to stat /proc/21786/ns/pid: No such file or directory
2013-06-05 09:39:50.478+0000: 6497: warning : virLXCProcessMonitorInitNotify:692 : Cannot obtain pid NS inode for 21786: Unable to stat /proc/21786/ns/pid: No such file or directory

</slice>

In addition, when I created containers again, I got the following errors:

[root@amd-9600b-8-1 ~]# for i in {1..5}; do virt-sandbox-service create -C -u httpd.service httpd$i; done
/usr/bin/virt-sandbox-service: [Errno 2] No such file or directory: '/var/lib/libvirt/filesystems/httpd1/etc/machine-id'
/usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer
/usr/bin/virt-sandbox-service: [Errno 2] No such file or directory: '/var/lib/libvirt/filesystems/httpd2/etc/machine-id'
/usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer
/usr/bin/virt-sandbox-service: [Errno 2] No such file or directory: '/var/lib/libvirt/filesystems/httpd3/etc/machine-id'
/usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer
/usr/bin/virt-sandbox-service: [Errno 2] No such file or directory: '/var/lib/libvirt/filesystems/httpd4/etc/machine-id'
/usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer
/usr/bin/virt-sandbox-service: [Errno 2] No such file or directory: '/var/lib/libvirt/filesystems/httpd5/etc/machine-id'
/usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer

BTW, I have already tried changing max_clients and max_workers to 256 in libvirtd.conf and restarting the libvirtd service, but it still doesn't work.

[root@amd-9600b-8-1 ~]# ps -ef|grep lxc|grep -v grep
root     20407 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache1
root     20408 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache2
root     20409 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache3
root     20410 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache4
root     20412 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache6
root     20413 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache7
root     20414 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache8
root     20415 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache9
root     20416 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache10
root     20419 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache13
root     20420 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache14
root     20423 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache17
root     20424 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache18
root     20425 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache19
root     20431 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache25
root     20432 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache26
root     20440 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache34
root     20451 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache45
root     20505 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache99
root     20521 11596  0 17:39 pts/4    00:00:00 virt-sandbox-service-util -c lxc:/// -s apache115
root     20638     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache1 --console 25 --security=selinux --handshake 28 --background
root     20692     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache4 --console 79 --security=selinux --handshake 83 --background
root     20738     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache9 --console 79 --security=selinux --handshake 89 --background
root     20787     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache13 --console 79 --security=selinux --handshake 92 --background
root     20841     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache18 --console 79 --security=selinux --handshake 95 --background
root     20895     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache25 --console 79 --security=selinux --handshake 98 --background
root     21007     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache8 --console 79 --security=selinux --handshake 104 --background
root     21060     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache6 --console 79 --security=selinux --handshake 107 --background
root     21111     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache19 --console 79 --security=selinux --handshake 110 --background
root     21166     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache7 --console 79 --security=selinux --handshake 113 --background
root     21219     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache3 --console 79 --security=selinux --handshake 116 --background
root     21271     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache34 --console 79 --security=selinux --handshake 119 --background
root     21324     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache2 --console 79 --security=selinux --handshake 122 --background
root     21378     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache17 --console 79 --security=selinux --handshake 125 --background
root     21484     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache26 --console 79 --security=selinux --handshake 131 --background
root     21538     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache10 --console 139 --security=selinux --handshake 142 --background
root     21590     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache45 --console 139 --security=selinux --handshake 145 --background
root     21643     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache14 --console 141 --security=selinux --handshake 150 --background
root     21735     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache115 --console 101 --security=selinux --handshake 128 --background
root     21781     1  0 17:39 ?        00:00:00 /usr/libexec/libvirt_lxc --name apache99 --console 102 --security=selinux --handshake 154 --background

Note that, in fact, not all of the containers started successfully.


[root@amd-9600b-8-1 ~]# rpm -q libvirt libvirt-sandbox kernel systemd
libvirt-1.0.6-1.el7.x86_64
libvirt-sandbox-0.2.0-1.el7.x86_64
kernel-3.7.0-0.36.el7.x86_64
systemd-202-2.el7.x86_64

Comment 10 Daniel Berrangé 2013-06-05 11:04:06 UTC
> /usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer

That almost certainly means you're hitting the 'max_clients' limit in the libvirtd.conf file. If increasing max_clients doesn't fix it, please file a separate bug report.
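For reference, the relevant settings live in /etc/libvirt/libvirtd.conf; a minimal fragment (values illustrative, raise them to taste before restarting libvirtd):

```ini
# /etc/libvirt/libvirtd.conf -- illustrative values
# Maximum number of concurrent client connections.
max_clients = 256
# Size of the worker thread pool servicing those clients.
max_workers = 256
```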

Comment 11 Monson Shao 2013-06-06 02:46:31 UTC
Alex,

Starting many containers in the background simultaneously will fail because libvirtd cannot run in parallel in some critical region(?); please try a sleep(3) after each process has gone into the background.

Comment 12 Alex Jia 2013-06-07 03:23:28 UTC
(In reply to Daniel Berrange from comment #10)
> > /usr/bin/virt-sandbox-service: Unable to open lxc:///: Cannot recv data: Connection reset by peer
> 
> That almost certainly means you're hitting the 'max_clients' limit in
> libvirtd.conf file. If increasing max_clients doesn't fix it, please file a
> separate bug report.

Daniel, thanks for your comments, I will double check it then file a new bug.

Comment 13 Alex Jia 2013-06-07 03:25:16 UTC
(In reply to Monson Shao from comment #11)
> Alex,
> 
> Starting many containers in background simultaneously will fail because
> libvirtd cannot run parallelly in some critical region(?), try sleep(3)
> after each process running into background please.

Monson, got it, thanks. BTW, is libvirt-1.0.6-1.el7 okay for you? If so, I will close the bug later, thanks.

Comment 14 Monson Shao 2013-06-07 04:20:04 UTC
(In reply to Alex Jia from comment #13)
>
> Monson, got it, thanks. btw, Is the libvirt-1.0.6-1.el7 okay for you? if so,
> I will close the bug later, thanks.

yes, libvirt-1.0.6-1.el7 works for me.

Comment 15 Alex Jia 2013-06-13 03:40:19 UTC
The bug has been verified on libvirt-1.0.6-1.el7 based on Comment 9 and Comment 14, so I am moving the bug to VERIFIED status.

Comment 16 Ludek Smid 2014-06-13 11:24:14 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

