Bug 1070221 - Introduce locking to virNetDevVethCreate
Summary: Introduce locking to virNetDevVethCreate
Keywords:
Status: CLOSED DUPLICATE of bug 1014604
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: TRACKER-bugs-affecting-libguestfs 992980 1058606 1086175
 
Reported: 2014-02-26 12:31 UTC by Michal Privoznik
Modified: 2018-08-30 12:18 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 981729
Environment:
Last Closed: 2014-02-26 16:22:21 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Michal Privoznik 2014-02-26 12:31:46 UTC
+++ This bug was initially created as a clone of Bug #981729 +++

--- Additional comment from Alex Jia on 2014-02-25 11:18:06 CET ---

Daniel, I can now successfully start 41 containers rather than 40. Is this an expected result?

# tail -3 /etc/libvirt/libvirtd.conf 
max_clients = 20
max_workers = 20
max_queued_clients = 20

# for i in {1..50}; do virt-sandbox-service create -C -u httpd.service -N dhcp myapache$i;done

# for i in {1..50}; do virsh -c lxc:/// start myapache$i & done

# virsh -c lxc:/// -q list |wc -l
41

# rpm -q libvirt-daemon libvirt-sandbox kernel
libvirt-daemon-1.1.1-23.el7.x86_64
libvirt-sandbox-0.5.0-9.el7.x86_64
kernel-3.10.0-86.el7.x86_64

Additional info:

error: Failed to start domain myapache36
error: internal error: Failed to allocate free veth pair after 10 attempts

error: Failed to start domain myapache29
error: internal error: Failed to allocate free veth pair after 10 attempts

NOTE: 10 attempts may be too few for some users, who may then want to change this value. I think it would be better to have a configuration option for it; otherwise, the 10-attempt limit should at least be documented in libvirtd.conf or the relevant guide.
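To make the failure mode concrete, below is a minimal standalone sketch in C of the kind of probe-then-create retry loop that can emit this error under concurrent starts, with the attempt limit passed in as a parameter rather than hard-coded to 10. The device namespace is simulated with a boolean array, and all names (probe_free_index, try_create_veth, allocate_veth_pair) are illustrative only, not libvirt source.

/*
 * Sketch: unsynchronized probe-then-create allocation with a configurable
 * retry limit.  The kernel's set of existing vethN devices is simulated
 * with a boolean array so the example is self-contained and runnable.
 */
#include <stdbool.h>
#include <stdio.h>

#define SIM_VETH_SLOTS 64

static bool veth_in_use[SIM_VETH_SLOTS];   /* stands in for existing vethN devices */

/* Pick the lowest index that looks free right now. */
static int probe_free_index(void)
{
    for (int i = 0; i < SIM_VETH_SLOTS; i++)
        if (!veth_in_use[i])
            return i;
    return -1;
}

/* "Create" the pair; fails if the name was taken in the meantime. */
static int try_create_veth(int idx)
{
    if (idx < 0 || veth_in_use[idx])
        return -1;
    veth_in_use[idx] = true;
    return 0;
}

/* Allocation with a configurable retry limit instead of a hard-coded 10. */
static int allocate_veth_pair(int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        int idx = probe_free_index();
        if (try_create_veth(idx) == 0)
            return idx;
        /* Another start grabbed the same index between probe and create. */
    }
    fprintf(stderr,
            "internal error: Failed to allocate free veth pair after %d attempts\n",
            max_attempts);
    return -1;
}

int main(void)
{
    veth_in_use[0] = veth_in_use[1] = true;   /* pretend veth0/veth1 already exist */
    printf("allocated index %d\n", allocate_veth_pair(10));
    return 0;
}

Because the probe and the create are not done under a common lock, two container starts can probe the same index and one of them loses an attempt; with enough parallel starts, all 10 attempts can be exhausted this way.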

--- Additional comment from Michal Privoznik on 2014-02-25 16:51:28 CET ---

(In reply to Alex Jia from comment #10)
> Daniel, I can now successfully start 41 containers rather than 40. Is this
> an expected result?
> 
> # tail -3 /etc/libvirt/libvirtd.conf 
> max_clients = 20
> max_workers = 20
> max_queued_clients = 20
> 
> # for i in {1..50}; do virt-sandbox-service create -C -u httpd.service -N
> dhcp myapache$i;done
> 
> # for i in {1..50}; do virsh -c lxc:/// start myapache$i & done
> 
> # virsh -c lxc:/// -q list |wc -l
> 41

Yes and no. The kernel does some caching on sockets and partially opens connections even when the server is not currently responsive, so you may end up with more than 40 guests running. Hence I think anything equal to or above 40 is okay.

> 
> # rpm -q libvirt-daemon libvirt-sandbox kernel
> libvirt-daemon-1.1.1-23.el7.x86_64
> libvirt-sandbox-0.5.0-9.el7.x86_64
> kernel-3.10.0-86.el7.x86_64
> 
> Additional info:
> 
> error: Failed to start domain myapache36
> error: internal error: Failed to allocate free veth pair after 10 attempts
> 
> error: Failed to start domain myapache29
> error: internal error: Failed to allocate free veth pair after 10 attempts
> 

This comes from a buggy internal implementation. Let me see if I can fix it.

--- Additional comment from Michal Privoznik on 2014-02-25 17:08:22 CET ---

Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2014-February/msg01548.html
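
As a rough illustration of the locking idea named in the bug summary (not the actual upstream patch; it uses a plain pthread mutex and the same simulated device table as the sketch above rather than libvirt's own helpers), serializing the probe-and-create step removes the race:

/*
 * Sketch: hold a single mutex across "find a free vethN index" and
 * "create the pair" so concurrent container starts cannot pick the
 * same index.  Illustrative names only, not virNetDevVethCreate itself.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define SIM_VETH_SLOTS 64

static bool veth_in_use[SIM_VETH_SLOTS];
static pthread_mutex_t veth_lock = PTHREAD_MUTEX_INITIALIZER;

/* Probe and create under one lock, so the index cannot be stolen in between. */
static int allocate_veth_pair_locked(void)
{
    int idx = -1;

    pthread_mutex_lock(&veth_lock);
    for (int i = 0; i < SIM_VETH_SLOTS; i++) {
        if (!veth_in_use[i]) {
            veth_in_use[i] = true;   /* the real "ip link add vethN ..." would go here */
            idx = i;
            break;
        }
    }
    pthread_mutex_unlock(&veth_lock);

    if (idx < 0)
        fprintf(stderr, "internal error: no free veth index available\n");
    return idx;
}

/* Worker mimicking one concurrent "virsh -c lxc:/// start" request. */
static void *start_container(void *arg)
{
    (void)arg;
    printf("got veth index %d\n", allocate_veth_pair_locked());
    return NULL;
}

int main(void)
{
    pthread_t workers[8];

    for (int i = 0; i < 8; i++)
        pthread_create(&workers[i], NULL, start_container, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

Compiled with -pthread, each of the eight simulated workers gets a distinct index, because no other thread can observe the namespace between the probe and the creation.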

--- Additional comment from Michal Privoznik on 2014-02-26 10:07:51 CET ---

Moving to POST:

http://post-office.corp.redhat.com/archives/rhvirt-patches/2014-February/msg00829.html

Comment 1 Michal Privoznik 2014-02-26 13:00:48 UTC
Hooray, there's no need to repost the patch linked in comment #1; hence moving directly to POST.

Comment 2 Jiri Denemark 2014-02-26 16:22:21 UTC

*** This bug has been marked as a duplicate of bug 1014604 ***

