
Bug 1531607

Summary: Change default ceph_osd_docker_memory_limit from 1g to 3g
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: John Fulton <johfulto>
Component: Ceph-Ansible    Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 3.0    CC: adeza, aschoen, bengland, ceph-eng-bugs, gcharot, gfidente, gmeno, jefbrown, kdreyer, mburns, nthomas, rhel-osp-director-maint, sankarshan, shan
Target Milestone: rc    Keywords: TestOnly, Triaged
Target Release: 3.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.0.25-1.el7cp Ubuntu: ceph-ansible_3.0.25-2redhat1 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1553655 (view as bug list) Environment:
Last Closed: 2018-03-22 00:58:17 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1548353, 1553655    

Description John Fulton 2018-01-05 16:10:30 UTC
ceph-ansible's ceph_osd_docker_memory_limit is used when starting an OSD container as follows: 

`docker run ... --memory $ceph_osd_docker_memory_limit ... $osd_container_name`

As per https://bugzilla.redhat.com/show_bug.cgi?id=1527660, the field product manager is requesting that this have a new default in TripleO of 3G, not 1G.
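For reference, the variable can be overridden in a ceph-ansible group_vars file; the sketch below shows how a deployment might pin the new default (the file path and comment are illustrative, only the variable name comes from this bug):

```yaml
# group_vars/osds.yml (illustrative path)
# ceph-ansible passes this value straight through to `docker run --memory`
ceph_osd_docker_memory_limit: 3g
```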

Comment 3 Sébastien Han 2018-01-08 15:35:19 UTC
This can be done in ceph-ansible.

Comment 12 Ken Dreyer (Red Hat) 2018-03-16 22:17:51 UTC
The "containers: bump memory limit" commit has been in upstream's stable-3.0 branch since v3.0.25, so this is already GA to our customers in RHCEPH 3.

Comment 13 John Fulton 2018-03-22 00:22:41 UTC
Looks good to me. An OSP12 overcloud deploy using the fixed-in package produced the desired results. Thank you.

(undercloud) [stack@undercloud-0 virt]$ rpm -q ceph-ansible
ceph-ansible-3.0.25-1.el7cp.noarch
(undercloud) [stack@undercloud-0 virt]$ 

[root@ceph-0 ~]# grep memory /usr/share/ceph-osd-run.sh
  --memory=3g \
[root@ceph-0 ~]#

[root@ceph-0 ~]# ps axu | grep 75080
root       75080  0.0  0.2  80992  9448 ?        Sl   Mar21   0:00 /usr/bin/docker-current run --rm --net=host --privileged=true --pid=host --memory=3g --cpu-quota=100000 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph -e OSD_JOURNAL=/dev/disk/by-partuuid/b3c9808c-d55e-464b-9f49-b05169db70b4 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -e OSD_DEVICE=/dev/vdb -e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE --name=ceph-osd-ceph-0-vdb docker-registry.engineering.redhat.com/ceph/rhceph-2-rhel7:2.4-4
root       83115  0.0  0.0 112664   972 pts/0    S+   00:19   0:00 grep --color=auto 75080
[root@ceph-0 ~]#

Comment 14 John Fulton 2018-03-22 00:58:17 UTC
As per comment #12 the fix is already released, so I am marking this CLOSED ERRATA.