ceph-ansible's ceph_osd_docker_memory_limit is used when starting an OSD container as follows: `docker run ... --memory $ceph_osd_docker_memory_limit ... $osd_container_name`. As per https://bugzilla.redhat.com/show_bug.cgi?id=1527660, the field product manager is requesting that this have a new default in TripleO of 1G, not 3G.
This can be done in ceph-ansible.
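As a sketch of what that change looks like from the operator side (the group_vars path and the container name are illustrative, taken from this deployment's naming scheme, not a prescribed location):

```shell
# Override the default in ceph-ansible, e.g. in group_vars/osds.yml:
#   ceph_osd_docker_memory_limit: 1g
#
# After a redeploy, confirm the limit a running OSD container actually
# received (Docker reports it in bytes, so 1g appears as 1073741824):
docker inspect --format '{{ .HostConfig.Memory }}' ceph-osd-ceph-0-vdb
```

The same value can also be checked by grepping /usr/share/ceph-osd-run.sh for the --memory flag, as done in the verification below.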
The "containers: bump memory limit" commit has been in upstream's stable-3.0 branch since v3.0.25, so this is already GA to our customers in RHCEPH 3.
Looks good to me. An OSP12 overcloud deploy using the fixed-in package produced the desired results. Thank you.

(undercloud) [stack@undercloud-0 virt]$ rpm -q ceph-ansible
ceph-ansible-3.0.25-1.el7cp.noarch
(undercloud) [stack@undercloud-0 virt]$

[root@ceph-0 ~]# grep memory /usr/share/ceph-osd-run.sh
  --memory=3g \
[root@ceph-0 ~]#
[root@ceph-0 ~]# ps axu | grep 75080
root     75080  0.0  0.2  80992  9448 ?  Sl  Mar21  0:00 /usr/bin/docker-current run --rm --net=host --privileged=true --pid=host --memory=3g --cpu-quota=100000 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph -e OSD_JOURNAL=/dev/disk/by-partuuid/b3c9808c-d55e-464b-9f49-b05169db70b4 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -e OSD_DEVICE=/dev/vdb -e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE --name=ceph-osd-ceph-0-vdb docker-registry.engineering.redhat.com/ceph/rhceph-2-rhel7:2.4-4
root     83115  0.0  0.0 112664   972 pts/0  S+  00:19  0:00 grep --color=auto 75080
[root@ceph-0 ~]#
As per comment #12, the fix has already been released, so I am marking this bug CLOSED ERRATA.