Bug 1389503 - director should increase libvirtd FD limits on ceph backed compute nodes
Summary: director should increase libvirtd FD limits on ceph backed compute nodes
Keywords:
Status: CLOSED DUPLICATE of bug 1372589
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-puppet-elements
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 11.0 (Ocata)
Assignee: Giulio Fidente
QA Contact: Yogev Rabl
Docs Contact: Derek
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-27 18:24 UTC by Tim Wilkinson
Modified: 2016-11-23 10:57 UTC
CC List: 20 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-23 10:57:09 UTC
Target Upstream Version:



Description Tim Wilkinson 2016-10-27 18:24:08 UTC
Description of problem:
----------------------
In our deployed OSP 10 environment, we had to increase the file descriptor limit for libvirtd in order to avoid problems with the compute nodes communicating with the Ceph monitors, which resulted in qemu-kvm process hangs.


[root@overcloud-novacompute-0 qemu]# ps -lef|grep libvirtd |grep -v grep
4 S root      104929       1  0  80   0 - 628325 poll_s 14:24 ?       00:00:55 /usr/sbin/libvirtd --listen

[root@overcloud-novacompute-0 qemu]# grep "open files" /proc/104929/limits 
Max open files            1024                 4096                 files     


This env has 21 computes backed by 1043 Ceph OSDs. Ceph tracker http://tracker.ceph.com/issues/17573 provides a detailed description of the KVM guest hang that we can reproduce, and Ceph development has confirmed that Ceph librbd opens a TCP socket to every OSD and keeps it open.

If left unchanged, RHOSP 10 will fail to scale on RHCS 2.0. 



Component Version-Release:
-------------------------
Red Hat Enterprise Linux Server release 7.3 Beta (Maipo)

kernel-3.10.0-510.el7.x86_64

ceph-*.x86_64                     1:10.2.2-41.el7cp @rhos-10.0-ceph-2.0-mon-signed
openstack-aodh-*.noarch            3.0.0-0.20160921151816.bb5103e.el7ost
openstack-ceilometer-*.noarch      1:7.0.0-0.20160928024313.67bbd3f.el7ost
openstack-cinder.noarch              1:9.0.0-0.20160928223334.ab95181.el7ost
openstack-dashboard.noarch 1:10.0.0-0.20161002185148.3252153.1.el7ost
openstack-glance.noarch 1:13.0.0-0.20160928121721.4404ae6.el7ost
openstack-gnocchi-*.noarch         3.0.1-0.20160923180636.c6b2c51.el7ost
openstack-heat-*.noarch            1:7.0.0-0.20160926200847.dd707bc.el7ost
openstack-ironic-*.noarch          1:6.2.1-0.20160930163405.3f54fec.el7ost
openstack-keystone.noarch 1:10.0.0-0.20160928144040.6520523.el7ost
openstack-manila.noarch              1:3.0.0-0.20160916162617.8f2fa31.el7ost
openstack-mistral-api.noarch         3.0.0-0.20160929083341.c0a4501.el7ost
openstack-neutron.noarch             1:9.0.0-0.20160929051647.71f2d2b.el7ost
openstack-nova-api.noarch 1:14.0.0-0.20160929203854.59653c6.el7ost
openstack-puppet-modules.noarch      1:9.0.0-0.20160915155755.8c758d6.el7ost
openstack-sahara.noarch              1:5.0.0-0.20160926213141.cbd51fa.el7ost
openstack-selinux.noarch             0.7.9-1.el7ost @rhos-10.0-puddle
openstack-swift-account.noarch       2.10.1-0.20160929005314.3349016.el7ost
openstack-swift-plugin-swift3.noarch 1.11.1-0.20160929001717.e7a2b88.el7ost
openstack-zaqar.noarch               1:3.0.0-0.20160921221617.3ef0881.el7ost
openvswitch.x86_64                   1:2.5.0-5.git20160628.el7fdb
puppet-ceph.noarch                   2.2.0-1.el7ost @rhos-10.0-puddle
puppet-openstack_extras.noarch       9.4.0-1.el7ost @rhos-10.0-puddle
puppet-openstacklib.noarch           9.4.0-0.20160929212001.0e58c86.el7ost
puppet-vswitch.noarch                5.4.0-1.el7ost @rhos-10.0-puddle
python-openstack-mistral.noarch      3.0.0-0.20160929083341.c0a4501.el7ost
python-openstackclient.noarch        3.2.0-0.20160914003636.8241f08.el7ost
python-openstacksdk.noarch           0.9.5-0.20160912180601.d7ee3ad.el7ost
python-openvswitch.noarch            1:2.5.0-5.git20160628.el7fdb
python-rados.x86_64                  1:10.2.2-41.el7cp @rhos-10.0-ceph-2.0-mon-signed
python-rbd.x86_64                    1:10.2.2-41.el7cp @rhos-10.0-ceph-2.0-mon-signed



How reproducible:
----------------
consistent



Steps to Reproduce:
------------------
1. Set the nofile limit low: ulimit -n 512
2. Start a sequential write (16G with a 4M transfer size) using librbd
3. Observe I/O start, then crawl to a stop/hang
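The exhaustion mechanism itself can be demonstrated without a Ceph cluster. The standalone Python sketch below (illustrative only; the function name is ours, not part of any reproducer) lowers the soft nofile limit the same way step 1 does, then opens plain TCP sockets until the kernel returns EMFILE, which is exactly the resource librbd runs out of when it keeps one socket per OSD:

```python
import errno
import resource
import socket

def sockets_until_emfile(limit=512):
    """Lower the soft RLIMIT_NOFILE to `limit`, then open TCP sockets
    until the kernel returns EMFILE ("Too many open files") -- the same
    exhaustion librbd hits once it holds one socket per OSD."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (limit, hard))
    socks = []
    try:
        while True:
            socks.append(socket.socket())  # one fd per socket
    except OSError as exc:
        assert exc.errno == errno.EMFILE
    finally:
        count = len(socks)
        for s in socks:
            s.close()
        # restore the original soft limit
        resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
    return count

if __name__ == "__main__":
    # a handful of fds (stdio, interpreter internals) are already open,
    # so the count comes in just under the 512 cap
    print(sockets_until_emfile(512))
```

On a default Linux setup the count lands a few descriptors below the cap, since stdio and interpreter internals already hold some.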



Actual results:
--------------
'dd' writes and fio tests to rbd volume hang.



Expected results:
----------------
All I/O to ceph storage completes without hang or error.



Additional info:
---------------
This issue was resolved by increasing the nofile limits for libvirtd as follows on every compute node and restarting all KVM guests:

mkdir /etc/systemd/system/libvirtd.service.d/
echo -e "[Service]\nLimitNOFILE=16384" > /etc/systemd/system/libvirtd.service.d/limits.conf
systemctl daemon-reload
systemctl stop libvirtd
systemctl start libvirtd

Comment 1 Ben England 2016-10-27 18:35:59 UTC
This impacts scalability of OSP 10 on Ceph storage, and limits use of the scale lab to test OpenStack with Ceph storage.  It does not happen on small configs.

Comment 3 Daniel Berrangé 2016-11-03 15:59:05 UTC
There are several things here.

QEMU should not hang when the number of files is too low. If ceph can't open a socket/file during qemu startup, QEMU should fail to start with an appropriate error message. If it happens during later runtime, then the guest OS should be paused. QEMU itself should never hang. Maybe the guest OS pausing was mistaken for a hang?

I don't think we should need to raise the limit of libvirtd here - if it is QEMU having the problem, then we should raise the QEMU limit in /etc/libvirt/qemu.conf via the 'max_files' parameter. 
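The qemu.conf route would look something like the fragment below. This is a sketch, not a tested recommendation; the value simply mirrors the LimitNOFILE=16384 used in the workaround from the initial report:

```
# /etc/libvirt/qemu.conf
# Per-QEMU-process open file limit. To avoid librbd fd exhaustion it
# needs headroom above the OSD count (one TCP socket per OSD touched),
# plus the guest's other descriptors.
max_files = 16384
```

Guests need to be restarted for the new limit to apply to their qemu processes.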

There's probably a reasonable argument to be made for libvirt to ship with a higher default limit. The Linux kernel default ulimits haven't changed in decades, and are very pessimistically low IMHO given current hardware scale.

Comment 4 Tim Wilkinson 2016-11-03 16:19:21 UTC
(In reply to Daniel Berrange from comment #3)
> QEMU itself should never hang. Maybe the guest OS
> pausing was mistaken for a hang?

Yes, that could have been the case. I should have been more specific. I observed all I/O to the ceph backed cinder volumes stop, even though many guests were still running FIO jobs.

Comment 5 Tim Wilkinson 2016-11-03 17:21:03 UTC
Correction: I/O stops on each of the guests affected by the problem.

Comment 6 Ben England 2016-11-03 18:31:16 UTC
> QEMU should not hang when the number of files is too low. If ceph can't open a
> socket/file during qemu startup, QEMU should fail to start with an appropriate
> error message. If it happens during later runtime, then the guest OS should be
> paused. QEMU itself should never hang. Maybe the guest OS pausing was mistaken
> for a hang?

Ceph librbd doesn't actually open the TCP socket to an OSD until it needs to talk to it. You can see the number of sockets growing as the application accesses more of the volume, because it is then hitting new OSDs (block devices on Ceph servers) that it didn't need before. To see this, try creating an RBD volume, pre-populating it with data, and then running:

  fio --ioengine=rbd --clientname=admin --pool=ben --rbdname=v3 --rw=randread --ramp=30 --size=16g --bs=4k --runtime=50 --rate_iops=100 --name=foo > /tmp/fio.log 2>&1 &

  

In this example, we'll see librbd gradually create a socket for every OSD containing data associated with the RBD volume.  For large enough RBD volumes, it could be *all* of them - since OSDs are chosen at random for replication, it's hard to predict when the threshold of 1024 file descriptors will be crossed.  For small Ceph clusters you might never cross this threshold.  But if you want OpenStack to be scalable with Ceph-backed storage, you don't want to hit this threshold.

(root@c07-h01-6048r) - (18:28) - (~)
-=>>while [ 1 ] ; do netstat -anp | grep fio | wc -l ; sleep 2 ; done
138
192
242
286
325
364
397
434
462
488
517
538
560
[1]+  Done                    fio --ioengine=rbd --clientname=admin --pool=ben --rbdname=v3 --rw=randread --size=16g --bs=4k --runtime=50 --rate_iops=20 --name=foo > /tmp/fio.log 2>&1
0
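The observation above, that a large enough RBD volume ends up touching nearly every OSD, can be checked with a back-of-the-envelope model. The Python sketch below is our illustration: it assumes uniform pseudo-random placement of 4 MB objects (the RBD default object size) and ignores replication and CRUSH weighting, so treat the result as an order-of-magnitude estimate only:

```python
def expected_osds_touched(num_osds, volume_bytes, object_bytes=4 * 2**20):
    """Expected number of distinct OSDs holding at least one of the
    volume's objects, assuming each object lands on a uniformly random
    OSD. Real CRUSH placement (replication, weights) differs, so this
    is only an order-of-magnitude estimate."""
    objects = volume_bytes // object_bytes
    # P(a given OSD receives none of the k objects) = (1 - 1/N)^k
    return num_osds * (1 - (1 - 1 / num_osds) ** objects)

if __name__ == "__main__":
    # the reported environment: 1043 OSDs, 16 GiB test volumes
    print(round(expected_osds_touched(1043, 16 * 2**30)))  # ~1022 of 1043
```

Under these assumptions even a single fully-accessed 16 GiB volume is expected to touch roughly 1022 of the 1043 OSDs, which, together with the process's other descriptors, pushes past the 1024 default soft fd limit.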

Comment 7 Ben England 2016-11-07 21:48:57 UTC
Any chance of doing this fix in OSP 10 to improve its scalability?

At the bottom of the initial post, we provided an example fix for the problem. Comment 6 shows why this fix is necessary for librbd.

On re-reading comment 3, I see that there is more than one way to adjust the file descriptor limit, as Daniel Berrange suggested, and I defer to the developers on which way is best. All that is needed is for OpenStack to automate deploying that file descriptor limit adjustment, so that all guests can create sufficient fds to access Ceph storage.

For example, one customer that we work with routinely has many OpenStack+Ceph clusters with 500 OSDs in them, and they want to go to 1000-OSD deployments but are afraid to because of problems with guest response times or hangs.  Hangs are exactly what I encountered when exceeding the FD limit, see:

http://tracker.ceph.com/issues/17573

Guests encounter these hangs too, once they have accessed enough of their Cinder volumes to hit the fd limit. But they may not hit the problem until long after the application has started running, and it may not occur consistently, which adds to the difficulty of diagnosis. That is why we reduced the ulimit in this report: to show how the problem can easily be reproduced. It is not necessary to reduce ulimit -n to encounter it, as long as the OSD count >= the file descriptor limit.

Ideally, Ceph should fix the hang in librbd resulting from insufficient fds, but it's *far* easier to prevent the problem than to diagnose and fix it. For example, you'd have to implement the fix documented above by hand on all the compute hosts, restart libvirtd on all of them, and then stop and start your guests, am I right? For some users that can be very disruptive.

With this fix and the other fix to kernel.pid_max in bz 1389502, which is being worked on for OSP 10 right now, Tim and I are running a wide variety of I/O tests with 500 guests (and soon 1000 guests) on this configuration, and we could not have done that without these two adjustments. Assuming these tests complete without finding more scaling limitations, we can say that OSP 10, with these two fixes, would support much greater scalability with Ceph than previous releases.

Comment 8 Giulio Fidente 2016-11-16 18:26:32 UTC
is this a duplicate of BZ 1372589?

Comment 9 Tim Wilkinson 2016-11-16 22:39:50 UTC
(In reply to Giulio Fidente from comment #8)
> is this a duplicate of BZ 1372589?

It sounds like it would accomplish the same overall goal of increasing the FD limits automatically without user intervention.

Comment 10 Giulio Fidente 2016-11-23 10:57:09 UTC
Thanks Tim, marking this as a duplicate since the other one is a bit older. Hopefully we can get it fixed quickly.

*** This bug has been marked as a duplicate of bug 1372589 ***

