Description of problem:

When we install the Ceph RGW via ceph-ansible, the Ansible playbook inserts a systemd entry for rgw.ceph-rgw-01.service on the rgw-02 and rgw-03 nodes.

[a_ansible@ceph-rgw-02 ~]$ sudo su -
[root@ceph-rgw-02 ~]# systemctl | grep rado
● ceph-radosgw.service          loaded failed failed  Ceph rados gateway
  ceph-radosgw.service          loaded active running Ceph rados gateway
  system-ceph\x2dradosgw.slice  loaded active active  system-ceph\x2dradosgw.slice
  ceph-radosgw.target           loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once

[a_ansible@admin-01 ansible]$ ssh ceph-rgw-03
[root@ceph-rgw-03 ~]# systemctl | grep rado
● ceph-radosgw.service          loaded failed failed  Ceph rados gateway
  ceph-radosgw.service          loaded active running Ceph rados gateway
  system-ceph\x2dradosgw.slice  loaded active active  system-ceph\x2dradosgw.slice
  ceph-radosgw.target           loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once
[root@ceph-rgw-03 ~]#

Possible solution: change the ceph-common handlers for RGWs as follows.

    # serial: 1 would be the proper solution here, but that can only be set on play level
    # upstream issue: https://github.com/ansible/ansible/issues/12170
      run_once: true
    - with_items: "{{ groups.get(rgw_group_name, []) }}"
    - delegate_to: "{{ item }}"
    + #with_items: "{{ groups.get(rgw_group_name, []) }}"
    + #delegate_to: "{{ item }}"
      when:
        - rgw_group_name in group_names
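For reference, a minimal sketch of what the reworked handler task could look like with the change above applied. The task name and restart command are illustrative assumptions, not the actual ceph-ansible source; only the retained run_once and the commented-out with_items/delegate_to lines reflect the proposed fix.

    # roles/ceph-common/handlers/main.yml (hypothetical excerpt)
    - name: restart ceph rgws                                # task name is an assumption
      command: /usr/bin/env bash /tmp/restart_rgw_daemon.sh  # illustrative restart command
      # serial: 1 would be the proper solution here, but that can only be set on play level
      # upstream issue: https://github.com/ansible/ansible/issues/12170
      run_once: true
      #with_items: "{{ groups.get(rgw_group_name, []) }}"    # removed: looping over all RGW hosts and
      #delegate_to: "{{ item }}"                              # delegating ran rgw-01's restart on every node
      when:
        - rgw_group_name in group_names

With the delegation loop removed, the handler no longer runs rgw-01's restart logic on the other gateway hosts, which is what appeared to create the stray unit entries.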
lgtm
Present in v3.1.0rc3.
Verified with ceph-ansible-3.1.0-0.1.rc9.el7cp.noarch and ansible-2.4.5.0-1.el7ae.noarch. Deployed a cluster with the following configuration.

Inventory hosts file:

[mons]
magna021

[osds]
magna028
magna031
magna030

[rgws]
magna031
magna033
magna029

[mgrs]
magna021

[ubuntu@magna033 ~]$ sudo systemctl list-units | grep rados
ceph-radosgw.service          loaded active running Ceph rados gateway
system-ceph\x2dradosgw.slice  loaded active active  system-ceph\x2dradosgw.slice
ceph-radosgw.target           loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once

[ubuntu@magna029 ~]$ sudo systemctl list-units | grep rados
ceph-radosgw.service          loaded active running Ceph rados gateway
system-ceph\x2dradosgw.slice  loaded active active  system-ceph\x2dradosgw.slice
ceph-radosgw.target           loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once

[ubuntu@magna031 ~]$ sudo systemctl list-units | grep rados
ceph-radosgw.service          loaded active running Ceph rados gateway
system-ceph\x2dradosgw.slice  loaded active active  system-ceph\x2dradosgw.slice
ceph-radosgw.target           loaded active active  ceph target allowing to start/stop all ceph-radosgw@.service instances at once

Only the systemd unit files for the respective Ceph Object Gateway hosts are created. Hence moving to verified state.
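As a convenience, the same check can be run against all gateway hosts at once from the admin node with an ad-hoc Ansible command using the inventory above; the inventory file name "hosts" is an assumption.

    # run from the admin node; "hosts" is the assumed inventory path, "rgws" is the group shown above
    ansible rgws -i hosts -b -m shell -a "systemctl list-units | grep rados"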
Doc text is already present.
(In reply to leseb from comment #13)
> Doc text is already present.

Yes, but it was written when the issue was a known issue, and it was not updated to reflect that the issue is resolved. I updated it to explain that. Let me know if it needs changes, or feel free to make them yourself.
lgtm, thanks
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2819