Bug 806752 - virsh list --all has inactive guests number limit less than 1024
Summary: virsh list --all has inactive guests number limit less than 1024
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Osier Yang
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-03-26 07:18 UTC by weizhang
Modified: 2012-10-30 06:46 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-30 06:46:48 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)

Description weizhang 2012-03-26 07:18:05 UTC
Description of problem:
I have 1025 shut-off guests, and when I try to list them all, an error occurs:
# virsh list --all
error: Failed to list inactive domains
error: too many remote undefineds: 1025 > 1024
error: Reconnected to the hypervisor

When the guests are running, virsh list --all works fine:
# virsh list --all |wc -l
1028

Version-Release number of selected component (if applicable):
libvirt-0.9.10-6.el6.x86_64
qemu-kvm-0.12.1.2-2.262.el6.x86_64
kernel-2.6.32-250.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Define 1025 guests (see the sketch below the steps for one scripted way to do this)
2. run # virsh list --all
3.
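
For step 1, one way to mass-define shut-off guests is through the libvirt C API. The following is a minimal sketch and is not part of the original report: the connection URI, domain names, and bare-bones XML template are illustrative assumptions and may need adjusting for a real host. Build roughly with: gcc define-many.c -lvirt -o define-many

/* Sketch: define many shut-off test guests so that "virsh list --all"
 * has more than 1024 inactive domains to enumerate. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    for (int i = 0; i < 1025; i++) {
        char xml[512];
        /* Only the name changes between guests; memory is kept tiny. */
        snprintf(xml, sizeof(xml),
                 "<domain type='qemu'>"
                 "<name>test-guest-%04d</name>"
                 "<memory unit='KiB'>65536</memory>"
                 "<os><type arch='x86_64'>hvm</type></os>"
                 "</domain>", i);

        virDomainPtr dom = virDomainDefineXML(conn, xml);
        if (!dom) {
            fprintf(stderr, "defining guest %d failed\n", i);
            break;
        }
        virDomainFree(dom);  /* the definition persists; only the handle is released */
    }

    virConnectClose(conn);
    return 0;
}

Running virsh list --all afterwards on an affected libvirt build should reproduce the error shown above; the test guests can be removed again with virsh undefine.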
  
Actual results:
The following error is reported:
Failed to list inactive domains
error: too many remote undefineds: 1025 > 1024
error: Reconnected to the hypervisor

Expected results:
This limitation should not exist, since more than 1024 guests are already supported.

Additional info:

Comment 6 Dave Allan 2012-10-29 14:39:17 UTC
Osier, I believe this has been fixed; can you comment with the commit id and close CURRENTRELEASE?

Comment 7 Osier Yang 2012-10-30 06:46:48 UTC
(In reply to comment #6)
> Osier, I believe this has been fixed; can you comment with the commit id and
> close CURRENTRELEASE?

commit eb635de1fed3257c5c62b552d1ec981c9545c1d7
Author: Michal Privoznik <mprivozn>
Date:   Fri Apr 27 14:49:48 2012 +0200

    rpc: Size up RPC limits
    
    Since we are allocating RPC buffer dynamically, we can increase limits
    for max. size of RPC message and RPC string. This is needed to cover
    some corner cases where libvirt is run on such huge machines that their
    capabilities XML is 4 times bigger than our current limit. This leaves
    users with inability to even connect.

commit a2c304f6872f15c13c1cd642b74008009f7e115b
Author: Michal Privoznik <mprivozn>
Date:   Thu Apr 26 17:21:24 2012 +0200

    rpc: Switch to dynamically allocated message buffer
    
    Currently, we are allocating buffer for RPC messages statically.
    This is not such pain when RPC limits are small. However, if we want
    ever to increase those limits, we need to allocate buffer dynamically,
    based on RPC message len (= the first 4 bytes). Therefore we will
    decrease our mem usage in most cases and still be flexible enough in
    corner cases.
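
To make the second commit message concrete, here is a rough illustration of the general technique it describes, not libvirt's actual code: read the 4-byte length prefix first, then allocate the receive buffer to exactly that size instead of using a fixed static buffer. The constant and the helper name are invented for the example.

#include <arpa/inet.h>   /* ntohl() */
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define EXAMPLE_MSG_MAX (32 * 1024 * 1024)   /* hypothetical upper bound */

/* Read one length-prefixed message from fd into a freshly allocated buffer.
 * Returns the buffer (caller frees) and stores its size in *out_len, or
 * returns NULL on error.  A real implementation would loop on short reads. */
static char *read_message(int fd, uint32_t *out_len)
{
    uint32_t netlen;
    if (read(fd, &netlen, sizeof(netlen)) != (ssize_t)sizeof(netlen))
        return NULL;

    uint32_t len = ntohl(netlen);            /* length prefix is big-endian */
    if (len == 0 || len > EXAMPLE_MSG_MAX)   /* still sanity-check the size */
        return NULL;

    char *buf = malloc(len);                 /* sized to this message, not a fixed cap */
    if (!buf)
        return NULL;

    if (read(fd, buf, len) != (ssize_t)len) {
        free(buf);
        return NULL;
    }

    *out_len = len;
    return buf;
}

Per the commits cited above, sizing the buffer per message is what allows the RPC message and string limits to be raised without permanently reserving large static buffers, which in turn lifts the cap the reporter hit.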

