Bug 740899 - [qemu-kvm][scalability] qemu could not open disk error at 256 devices
Summary: [qemu-kvm][scalability] qemu could not open disk error at 256 devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Eric Blake
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 611326 747120
 
Reported: 2011-09-23 17:26 UTC by Dave Allan
Modified: 2016-04-26 14:30 UTC
CC: 27 users

Fixed In Version: libvirt-0.9.4-15.el6
Doc Type: Bug Fix
Doc Text:
Clone Of: 739489
Environment:
Last Closed: 2011-12-06 11:34:23 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:1513 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2011-12-06 01:23:30 UTC

Comment 12 Eric Blake 2011-10-04 15:36:33 UTC
Looks like both F14 and RHEL 6 added support for files in /etc/sysctl.d/*, so I'm working on a patch to have libvirt install /etc/sysctl.d/libvirtd.sysctl as needed.
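A drop-in of that kind needs only a single line. A minimal sketch of what /etc/sysctl.d/libvirtd.sysctl might contain (the actual file contents are not shown in this bug; the value 1048576 is the one observed after the fix in comment 24):

```
# /etc/sysctl.d/libvirtd.sysctl (sketch)
# Raise the system-wide AIO context limit so that many aio=native
# disks can be opened across guests.
fs.aio-max-nr = 1048576
```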

Comment 13 Eric Blake 2011-10-04 17:40:47 UTC
Upstream patch awaiting review:
https://www.redhat.com/archives/libvir-list/2011-October/msg00075.html

Comment 22 weizhang 2011-10-09 08:53:39 UTC
Can reproduce on:
qemu-kvm-0.12.1.2-2.195.el6.x86_64
kernel-2.6.32-206.el6.x86_64
libvirt-0.9.4-12.el6.x86_64

Tested on machine intel-e7420-128-1,
changed /proc/sys/fs/aio-max-nr to 65535,

with 513 disks attached, each defined like:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg0/guest1-u'/>
      <target dev='vdu' bus='virtio'/>
    </disk>

When starting the 21st guest, it reports an error:

error: Failed to start domain guest21
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/22
qemu-kvm: -drive file=/dev/vg0/guest21-l,if=none,id=drive-virtio-disk11,format=raw,cache=none,aio=native: could not open disk image /dev/vg0/guest21-l: Invalid argument

On libvirt-0.9.4-16.el6.x86_64 it reports the same error; the qemu-kvm bug is still in NEW status, so adding a dependency on bug 739489.

Comment 23 Eric Blake 2011-10-10 14:20:22 UTC
Remember that the libvirt fix is to set a sysfs setting at boot.  The only way to verify the behavior of this fix is to install the desired libvirt (whether to prove the bug exists in older libvirt or to prove it is fixed with newer libvirt), then reboot the system, then 'cat /proc/sys/fs/aio-max-nr', then run the test with more than 256 disks.  The reboot between each change of libvirt is essential to prove that /etc/sysctl.d is being sourced correctly at bootup as part of the fix.
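The check after reboot can be sketched as a small shell helper (a sketch only: it assumes one AIO context per aio=native disk, which is a simplification, and uses the guest/disk counts from comment 24):

```shell
#!/bin/sh
# Hypothetical helper: estimate whether the current fs.aio-max-nr
# could cover the scalability test load.
guests=22           # guests started in comment 24
disks_per_guest=24  # aio=native disks per guest
needed=$((guests * disks_per_guest))   # 528 disks in total, well over 256
limit=$(cat /proc/sys/fs/aio-max-nr 2>/dev/null || echo 0)
echo "disks needing AIO contexts: $needed (current fs.aio-max-nr: $limit)"
```

Run this once after each reboot, before starting the guests, to confirm the limit that /etc/sysctl.d set at boot.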

Comment 24 weizhang 2011-10-11 03:22:22 UTC
(In reply to comment #23)
> Remember that the libvirt fix is to set a sysfs setting at boot.  The only way
> to verify the behavior of this fix is to install the desired libvirt (whether
> to prove the bug exists in older libvirt or to prove it is fixed with newer
> libvirt), then reboot the system, then 'cat /proc/sys/fs/aio-max-nr', then run
> the test with more than 256 disks.  The reboot between each change of libvirt
> is essential to prove that /etc/sysctl.d is being sourced correctly at bootup
> as part of the fix.

Thanks for Eric's reminder. I retested with the following steps on libvirt-0.9.4-16:

1. Install libvirt-0.9.4-12 and reboot the host
2. Check the aio limit:
#cat /proc/sys/fs/aio-max-nr 
65536
3. Start 22 guests with 24 disks on each guest and aio=native.
On starting the 22nd guest, it reports an error like:
error: Failed to start domain guest22
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/23
qemu-kvm: -drive file=/var/lib/libvirt/images/guest22-h.img,if=none,id=drive-virtio-disk7,format=raw,cache=none,aio=native: could not open disk image /var/lib/libvirt/images/guest22-h.img: Inappropriate ioctl for device
4. Update to libvirt-0.9.4-16 and reboot the host again
5. Check the aio limit again:
#cat /proc/sys/fs/aio-max-nr
1048576
6. Start 22 guests with 24 disks on each guest and aio=native.
All the guests started successfully.

So verification passes on:

qemu-kvm-0.12.1.2-2.195.el6.x86_64
kernel-2.6.32-206.el6.x86_64

Comment 25 errata-xmlrpc 2011-12-06 11:34:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html

