Bug 1637984
| Summary: | ceph balancer hangs in larger cluster | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tim Wilkinson <twilkins> |
| Component: | Ceph-Ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED ERRATA | QA Contact: | Madhavi Kasturi <mkasturi> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.1 | CC: | anharris, aschoen, bengland, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, hnallurv, johfulto, nthomas, pasik, sankarshan, sweil, tserlin, twilkins |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | 3.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-ansible-3.2.0-0.1.rc1.el7cp Ubuntu: ceph-ansible_3.2.0~rc1-2redhat1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-01-03 19:02:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Tim Wilkinson, 2018-10-10 12:49:41 UTC
Tim, can you set me up with access to the cluster so I can take a closer look?

Raising the priority of this problem because we now know that ceph-mgr can become effectively disabled by it.

ceph-mgr was bouncing continuously. This was hard to watch because the docker logs did not show what was happening, but eventually I removed the 1 GB CGroup limit from the systemd unit file /etc/systemd/system/ceph-mgr@.service, ran systemctl daemon-reload, and restarted the service; it now seems to be running. I suggest we remove this CGroup limit or make it much larger, because it can cause ceph-mgr to hit a memory allocation failure and crash, which impacts the entire Ceph cluster. For example, "ceph pg dump" no longer works.

What version of ceph-ansible were you using, as it generates the unit files?

(In reply to John Fulton from comment #6)
> what version of ceph-ansible were you using as it generates the unit files?

I'm afraid the scale lab allocation ended yesterday and was immediately wiped, so I cannot provide the exact version used. I saved off the undercloud ~stack home directory but can't find the version buried in a log anywhere. I can say that the overcloud was a director-deployed OSP13 with Ceph 3.0. I slipped the Ceph 3.1 (ceph tag: 3-13) container image in later to test newer code validating the OSD memory usage fix. The ceph-ansible version would have been whatever rhos-release gave me for '13' at the time.

A possibly related BZ that could provide an alternative explanation for why the ceph-mgr balancer module failed: https://bugzilla.redhat.com/show_bug.cgi?id=1593110#c9

We found out today that the ceph-mgr CGroup limit was set by ceph-ansible. However, in the latest available RHCS 3.1 build, http://download-node-02.eng.bos.redhat.com/rel-eng/RHCEPH-3.1-RHEL-7-20180927.1/compose/Tools/x86_64/os/Packages/ceph-ansible-3.1.5-1.el7cp.noarch.rpm, the default ceph-mgr docker memory limit is 1 GB, as shown here:

```
root@perfshift02:~/bene/usr/share/ceph-ansible # grep -ir docker_memory_limit .
...
./group_vars/mgrs.yml.sample:#ceph_mgr_docker_memory_limit: 1g
...
./roles/ceph-mgr/defaults/main.yml:ceph_mgr_docker_memory_limit: 1g
./roles/ceph-mgr/templates/ceph-mgr.service.j2:        --memory={{ ceph_mgr_docker_memory_limit }} \
...
```

This is unchanged in http://download.eng.bos.redhat.com/composes/auto/ceph-3.2-rhel-7/latest-RHCEPH-3-RHEL-7/compose/Tools/x86_64/os/Packages/ceph-ansible-3.2.0-0.1.beta3.el7cp.noarch.rpm targeting RHCS 3.2: I confirmed the mon limit was raised to 3g and the osd limit to 5g as per https://bugzilla.redhat.com/show_bug.cgi?id=1591871, but the mgr limit is still 1g: https://github.com/ceph/ceph-ansible/blob/824ec6d256fc23794d69dd82f789fb05ef5c7bb6/roles/ceph-mgr/defaults/main.yml#L27

Oops, wrong BZ, sorry.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020
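
For reference, below is a minimal sketch of the manual workaround described in the comments: inspect the --memory flag that ceph-ansible rendered into the generated unit file, reload systemd after removing or raising it, and confirm the mgr recovers. The mgr instance name (the node's short hostname) and the docker/ceph status commands are assumptions about a typical containerized RHCS 3.x mgr node, not values taken from this report.

```
# Sketch of the manual workaround described above (assumptions noted in the lead-in).

# Show the memory limit that ceph-ansible rendered into the unit file.
grep -- '--memory' /etc/systemd/system/ceph-mgr@.service

# After removing (or raising) the --memory flag in that unit file:
systemctl daemon-reload
systemctl restart ceph-mgr@$(hostname -s)

# Confirm the mgr container stays up and is no longer pinned at its limit,
# and that mgr-backed commands such as "ceph pg dump" respond again.
docker stats --no-stream
ceph -s
ceph balancer status
```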
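To keep a larger limit across playbook re-runs rather than hand-editing the unit file, the ceph_mgr_docker_memory_limit variable shown in the grep output above can be overridden in group_vars. This is a sketch only: the 4g value, the group_vars/mgrs.yml path, and the site-docker.yml / "mgrs" names are illustrative assumptions about a stock containerized ceph-ansible deployment, not settings confirmed in this BZ.

```
# Sketch only: override the ceph-ansible default (1g) so the change survives
# redeploys. The 4g value is illustrative, not a tested recommendation.
cd /usr/share/ceph-ansible

# group_vars/mgrs.yml is typically copied from mgrs.yml.sample; append the
# override (or edit the file in place).
cat >> group_vars/mgrs.yml <<'EOF'
ceph_mgr_docker_memory_limit: 4g
EOF

# Re-apply the mgr role; assumes a containerized deployment driven by
# site-docker.yml and the standard "mgrs" inventory group.
ansible-playbook site-docker.yml --limit mgrs
```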