Bug 1637984

Summary: ceph balancer hangs in larger cluster
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tim Wilkinson <twilkins>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA
QA Contact: Madhavi Kasturi <mkasturi>
Severity: high
Docs Contact:
Priority: high
Version: 3.1
CC: anharris, aschoen, bengland, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, hnallurv, johfulto, nthomas, pasik, sankarshan, sweil, tserlin, twilkins
Target Milestone: rc
Keywords: Reopened
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.rc1.el7cp Ubuntu: ceph-ansible_3.2.0~rc1-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-01-03 19:02:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Tim Wilkinson 2018-10-10 12:49:41 UTC
Description of problem:
----------------------
The ceph balancer commands are slow to respond, or do not respond at all, in our scale lab env (510 OSDs, 927 TB). In our small, simple (no OSP) cluster (20 OSDs, 2.7 TB), the balancer status and eval commands return immediately. In the larger env (Ceph 3.1 on OSP13), the status command has taken upwards of 30 seconds and the eval command on the smallest (empty) pool does not return at all.



Component Version-Release:
-------------------------
Red Hat Enterprise Linux Server release 7.5 (Maipo)

3.10.0-862.11.6.el7.x86_64

ceph-common.x86_64               2:12.2.4-42.el7       @rhos-13.0-signed
libcephfs2.x86_64                2:12.2.4-42.el7       @rhos-13.0-signed
openstack-cinder.noarch          1:12.0.3-2.el7ost     @rhos-13.0-signed
openstack-dashboard.noarch       1:13.0.1-2.el7ost     @rhos-13.0-signed
openstack-dashboard-theme.noarch 1:13.0.0-1.el7ost     @rhos-13.0-signed
openstack-ec2-api.noarch         6.0.1-0.20180329214259.219dbc7.el7ost
openstack-glance.noarch          1:16.0.1-3.el7ost     @rhos-13.0-signed
openstack-heat-agents.noarch     1.5.4-0.20180308153305.ecf43c7.el7ost
openstack-heat-api.noarch        1:10.0.1-2.el7ost     @rhos-13.0-signed
openstack-heat-api-cfn.noarch    1:10.0.1-2.el7ost     @rhos-13.0-signed
openstack-heat-common.noarch     1:10.0.1-2.el7ost     @rhos-13.0-signed
openstack-heat-engine.noarch     1:10.0.1-2.el7ost     @rhos-13.0-signed
openstack-ironic-api.noarch      1:10.1.3-5.el7ost     @rhos-13.0-signed
openstack-ironic-common.noarch   1:10.1.3-5.el7ost     @rhos-13.0-signed
openstack-keystone.noarch        1:13.0.1-1.el7ost     @rhos-13.0-signed
openstack-manila.noarch          1:6.0.1-2.el7ost      @rhos-13.0-signed
openstack-manila-share.noarch    1:6.0.1-2.el7ost      @rhos-13.0-signed
openstack-manila-ui.noarch       2.13.0-5.el7ost       @rhos-13.0-signed
openstack-mistral-api.noarch     6.0.3-1.el7ost        @rhos-13.0-signed
openstack-mistral-common.noarch  6.0.3-1.el7ost        @rhos-13.0-signed
openstack-mistral-engine.noarch  6.0.3-1.el7ost        @rhos-13.0-signed
openstack-neutron.noarch         1:12.0.3-2.el7ost     @rhos-13.0-signed
openstack-neutron-common.noarch  1:12.0.3-2.el7ost     @rhos-13.0-signed
openstack-neutron-ml2.noarch     1:12.0.3-2.el7ost     @rhos-13.0-signed
openstack-nova-api.noarch        1:17.0.5-3.d7864fbgit.el7ost
openstack-nova-common.noarch     1:17.0.5-3.d7864fbgit.el7ost
openstack-nova-scheduler.noarch  1:17.0.5-3.d7864fbgit.el7ost
openstack-panko-api.noarch       4.0.2-1.el7ost        @rhos-13.0-signed
openstack-panko-common.noarch    4.0.2-1.el7ost        @rhos-13.0-signed
openstack-sahara.noarch          1:8.0.1-2.el7ost      @rhos-13.0-signed
openstack-sahara-ui.noarch       8.0.1-1.el7ost        @rhos-13.0-signed
openstack-selinux.noarch         0.8.14-14.el7ost      @rhos-13.0-signed
openstack-swift-account.noarch   2.17.1-0.20180314165245.caeeb54.el7ost
openstack-swift-container.noarch 2.17.1-0.20180314165245.caeeb54.el7ost
puppet-ceph.noarch               2.5.0-1.el7ost        @rhos-13.0-signed



How reproducible:
----------------
consistent



Steps to Reproduce:
------------------
1. verify ceph-mgr is running ...
   systemctl status -l ceph-mgr@overcloud-controller-0

2. ceph mgr module enable balancer

3. time ceph balancer status

4. time ceph balancer eval backups
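
For convenience, the steps above as a single copy/paste-able sketch (the controller unit name and pool name are the ones used in this environment; adjust for other deployments):

   # Consolidated reproduction sketch; unit and pool names match this report's env.
   systemctl status -l ceph-mgr@overcloud-controller-0   # 1. verify ceph-mgr is running
   ceph mgr module enable balancer                        # 2. make sure the balancer module is enabled
   time ceph balancer status                              # 3. took ~29s in the large cluster
   time ceph balancer eval backups                        # 4. hung in the large cluster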



Actual results:
--------------
In the larger env, the balancer 'status' command responded in 0m29.255s, while the 'eval' of the backups pool hung.



Expected results:
----------------
The balancer status and eval commands return promptly with their normal output, as they do in the smaller cluster.



Additional info:
---------------
I disabled the balancer, restarted ceph-mgr, and re-enabled the balancer; after that the status command responded almost immediately. The eval command still does not return, whether or not I pass a pool name. An strace of the ceph-mgr PID shows it attempting connections to OSDs and getting ECONNREFUSED.
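
Something along these lines reproduces the strace observation (a sketch; the pgrep pattern is illustrative, and a containerized mgr would need to be traced from the host):

   MGR_PID=$(pgrep -f ceph-mgr | head -1)                         # pick the ceph-mgr process
   strace -f -e trace=network -p "$MGR_PID" 2>&1 | grep ECONNREFUSED   # watch for refused OSD connections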

Comment 3 Sage Weil 2018-10-10 13:33:26 UTC
Tim, can you set me up with access to the cluster so I can take a closer look?

Comment 5 Ben England 2018-10-11 18:32:03 UTC
Raising the priority because we now know that ceph-mgr can become effectively disabled by this problem.

ceph-mgr was bouncing continuously. This was hard to debug because docker logs would not show what was happening, but eventually I removed the 1 GB CGroup limit from the systemd unit file /etc/systemd/system/ceph-mgr@.service, ran systemctl daemon-reload, and restarted the service; it now seems to be running.

I suggest we remove this CGroup limit or make it much larger, because it can cause ceph-mgr to hit a memory allocation failure and crash, which impacts the entire Ceph cluster. For example, "ceph pg dump" no longer works.
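
In shell terms the workaround was roughly the following (a sketch, not the exact commands run; the sed assumes the limit appears as a --memory= option on the docker run line, per the ceph-ansible template shown in comment 8):

   sed -i '/--memory=/d' /etc/systemd/system/ceph-mgr@.service   # drop the container memory limit
   systemctl daemon-reload                                       # pick up the edited unit file
   systemctl restart ceph-mgr@overcloud-controller-0             # restart the mgr without the limit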

Comment 6 John Fulton 2018-10-15 15:30:16 UTC
What version of ceph-ansible were you using, since it generates the unit files?

Comment 7 Tim Wilkinson 2018-10-15 17:24:03 UTC
(In reply to John Fulton from comment #6)
> What version of ceph-ansible were you using, since it generates the unit files?

I'm afraid the scale lab allocation ended yesterday and was immediately wiped, so I cannot provide the exact version used. I saved off the undercloud ~stack home directory, but I can't find the version buried in any log. I can say that the overcloud was a director-deployed OSP13 with Ceph 3.0. I slipped the Ceph 3.1 (ceph tag: 3-13) container image in later to test newer code validating the OSD memory usage fix. The ceph-ansible version would have been whatever rhos-release gave me for '13' at the time.

Comment 8 Ben England 2018-10-15 17:34:47 UTC
A possibly related bz that could provide an alternative explanation for why the ceph-mgr balancer module failed is:  

https://bugzilla.redhat.com/show_bug.cgi?id=1593110#c9

We found out today that the ceph-mgr CGroup limit was set by ceph-ansible.

However, in the latest available RHCS 3.1:

http://download-node-02.eng.bos.redhat.com/rel-eng/RHCEPH-3.1-RHEL-7-20180927.1/compose/Tools/x86_64/os/Packages/ceph-ansible-3.1.5-1.el7cp.noarch.rpm

The default ceph mgr docker memory limit is 1 GB, as shown here:

root@perfshift02:~/bene/usr/share/ceph-ansible
# grep -ir docker_memory_limit .
...
./group_vars/mgrs.yml.sample:#ceph_mgr_docker_memory_limit: 1g
...
./roles/ceph-mgr/defaults/main.yml:ceph_mgr_docker_memory_limit: 1g
./roles/ceph-mgr/templates/ceph-mgr.service.j2:  --memory={{ ceph_mgr_docker_memory_limit }} \
...

This is unchanged in 

http://download.eng.bos.redhat.com/composes/auto/ceph-3.2-rhel-7/latest-RHCEPH-3-RHEL-7/compose/Tools/x86_64/os/Packages/ceph-ansible-3.2.0-0.1.beta3.el7cp.noarch.rpm
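
If we keep a limit but raise it, the override would normally go through the ceph-ansible variable rather than hand-editing the unit file, e.g. (a sketch; the path assumes a standard /usr/share/ceph-ansible install and 4g is only an example value, not a recommendation; a director-driven deployment would pass the override through its own templates instead):

   # raise the ceph-mgr container memory limit for the next ceph-ansible run (example value)
   echo 'ceph_mgr_docker_memory_limit: 4g' >> /usr/share/ceph-ansible/group_vars/mgrs.yml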

Comment 9 Ben England 2018-10-15 18:50:40 UTC
targeting RHCS 3.2

Comment 11 Sébastien Han 2018-11-07 16:18:56 UTC
Oops, wrong BZ, sorry.

Comment 21 errata-xmlrpc 2019-01-03 19:02:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020