| Summary: | Failed to list active vols - too many remote undefineds | ||
|---|---|---|---|
| Product: | [Community] Virtualization Tools | Reporter: | Ivelin Slavov <ivelin.slavov> |
| Component: | libvirt | Assignee: | Libvirt Maintainers <libvirt-maint> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | unspecified | CC: | crobinso, eblake, xen-maint |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-03-23 20:50:34 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Repost publicly with private IP stripped.

Are you sure it's really 1024 vols? 1024 vols work fine in my testing.
# virsh pool-dumpxml test
<pool type='dir'>
<name>test</name>
<uuid>c8c06dd5-85c8-2eb5-3807-e8227b606134</uuid>
<capacity unit='bytes'>21137846272</capacity>
<allocation unit='bytes'>17001058304</allocation>
<available unit='bytes'>4136787968</available>
<source>
</source>
<target>
<path>/var/lib/libvirt/test</path>
<permissions>
<mode>0700</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>
# for i in {1..1024}; do > /var/lib/libvirt/test/$i.vol; done
# virsh pool-refresh test
# virsh vol-list test | wc -l
1027
# echo $?
0
Note that there are two header lines in the command output, and one blank line
at the end.
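As a follow-up sketch (assuming the same directory-backed "test" pool as above), pushing the pool past 1024 entries should reproduce the failure from the summary; on an affected build the final vol-list is expected to fail with the "too many remote undefineds" error rather than print the list:

# for i in {1025..1100}; do > /var/lib/libvirt/test/$i.vol; done
# virsh pool-refresh test
# virsh vol-list test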
Yes, 1024 is fine. Adding *more* than 1024 seems to be the problem. As I investigated, the problem comes from hardcoded limits defined in the remote_protocol.h header file:

39 #define REMOTE_DOMAIN_ID_LIST_MAX 16384
40 #define REMOTE_DOMAIN_NAME_LIST_MAX 1024
41 #define REMOTE_CPUMAP_MAX 256
42 #define REMOTE_VCPUINFO_MAX 2048
43 #define REMOTE_CPUMAPS_MAX 16384
44 #define REMOTE_MIGRATE_COOKIE_MAX 16384
45 #define REMOTE_NETWORK_NAME_LIST_MAX 256
46 #define REMOTE_INTERFACE_NAME_LIST_MAX 256
47 #define REMOTE_DEFINED_INTERFACE_NAME_LIST_MAX 256
48 #define REMOTE_STORAGE_POOL_NAME_LIST_MAX 256
49 #define REMOTE_STORAGE_VOL_NAME_LIST_MAX 1024
50 #define REMOTE_NODE_DEVICE_NAME_LIST_MAX 16384
51 #define REMOTE_NODE_DEVICE_CAPS_LIST_MAX 16384
52 #define REMOTE_NWFILTER_NAME_LIST_MAX 1024
53 #define REMOTE_DOMAIN_SCHEDULER_PARAMETERS_MAX 16
54 #define REMOTE_DOMAIN_BLKIO_PARAMETERS_MAX 16
55 #define REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX 16
56 #define REMOTE_DOMAIN_BLOCK_IO_TUNE_PARAMETERS_MAX 16
57 #define REMOTE_DOMAIN_NUMA_PARAMETERS_MAX 16
58 #define REMOTE_NODE_CPU_STATS_MAX 16
59 #define REMOTE_NODE_MEMORY_STATS_MAX 16
60 #define REMOTE_DOMAIN_BLOCK_STATS_PARAMETERS_MAX 16
61 #define REMOTE_NODE_MAX_CELLS 1024
62 #define REMOTE_AUTH_SASL_DATA_MAX 65536
63 #define REMOTE_AUTH_TYPE_LIST_MAX 20
64 #define REMOTE_DOMAIN_MEMORY_STATS_MAX 1024
65 #define REMOTE_DOMAIN_SNAPSHOT_LIST_NAMES_MAX 1024
66 #define REMOTE_DOMAIN_BLOCK_PEEK_BUFFER_MAX 65536
67 #define REMOTE_DOMAIN_MEMORY_PEEK_BUFFER_MAX 65536
68 #define REMOTE_SECURITY_MODEL_MAX VIR_SECURITY_MODEL_BUFLEN
69 #define REMOTE_SECURITY_LABEL_MAX VIR_SECURITY_LABEL_BUFLEN
70 #define REMOTE_SECURITY_DOI_MAX VIR_SECURITY_DOI_BUFLEN
71 #define REMOTE_SECRET_VALUE_MAX 65536
72 #define REMOTE_SECRET_UUID_LIST_MAX 16384
73 #define REMOTE_CPU_BASELINE_MAX 256
74 #define REMOTE_DOMAIN_SEND_KEY_MAX 16
75 #define REMOTE_DOMAIN_INTERFACE_PARAMETERS_MAX 16
76 #define REMOTE_DOMAIN_GET_CPU_STATS_NCPUS_MAX 128
77 #define REMOTE_DOMAIN_GET_CPU_STATS_MAX 2048
78 #define REMOTE_DOMAIN_DISK_ERRORS_MAX 256

Anyone have any thoughts about the motivation behind these limits?

(In reply to comment #3)
> Yes, 1024 is fine. Adding *more* than 1024 seems to be the problem. As I
> investigated, the problem comes from hardcoded limits defined in the
> remote_protocol.h header file.
>
> 39 #define REMOTE_DOMAIN_ID_LIST_MAX 16384
> 40 #define REMOTE_DOMAIN_NAME_LIST_MAX 1024

Yep, that would be the issue.

> Anyone have any thoughts about the motivation behind these limits?

Yes - RPC calls have to have a finite byte length, otherwise a client could DoS a server by making an outrageous request that would cause the server to buffer up a huge reply. We enforce finite-length RPCs by limiting the length of arrays, although it might be possible to argue that a particular limit is too small.

At some point this was bumped much higher:

const REMOTE_STORAGE_VOL_LIST_MAX = 16384;
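For reference, a quick way to check which limit a given build carries is to grep the protocol definition directly (a sketch assuming a checked-out libvirt source tree; remote_protocol.h is generated from the .x protocol description, and the exact paths and constant names vary between releases):

# grep -n 'REMOTE_STORAGE_VOL' src/remote/remote_protocol.x
# grep -rn 'REMOTE_STORAGE_VOL' src/remote/remote_protocol.h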
Description of problem:
When a pool contains more than 1024 volumes and vol-list is called on that pool, libvirt reports an error:

error: Failed to list active vols
error: too many remote undefineds: 1028 > 1024

Version-Release number of selected component (if applicable):
0.9.10
0.9.6

How reproducible:

Steps to Reproduce:
1. Create more than 1024 volumes in a pool.
2. Call 'list' on the pool (from virsh/python/whatever).

Actual results:
error: Failed to list active vols
error: too many remote undefineds: 1028 > 1024

Expected results:
A list of volumes

Additional info:
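A minimal reproduction sketch in virsh terms (the pool name "test", the 1M capacity, and the volume names are illustrative; any pool that can hold this many volumes should do):

# for i in $(seq 1 1100); do virsh vol-create-as test vol-$i.img 1M >/dev/null; done
# virsh vol-list test

On an affected release the second command should fail with the error shown under "Actual results"; on a release where the volume list limit was raised it should print the full list.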