Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 1917144

Summary: Add 2 RGWs on the same node - Realms Configuration.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Avi Mor <avmor>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED DUPLICATE
QA Contact: Ameena Suhani S H <amsyedha>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.1
CC: aschoen, ceph-eng-bugs, gmeno, nthomas, ykaul
Target Milestone: ---
Target Release: 5.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-01-18 19:06:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Avi Mor 2021-01-17 14:49:04 UTC
Description of problem:

Hello,
With "radosgw_num_instances: 2" set, ceph-ansible does not increase the number of RGW instances per node in a multi-realm configuration.

How reproducible:
Always, in a two-realm configuration (the same setting works correctly with a single realm).


Steps to Reproduce:
1) Install a Ceph cluster with a two-realm configuration.
2) Set "radosgw_num_instances: 2" in the all.yml file.
3) Run the playbook; ceph-ansible does not increase the RGW instance count per node.
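For reference, step 2 amounts to a one-line change in group_vars (a sketch; the path assumes the standard ceph-ansible layout):

```yaml
# group_vars/all.yml (fragment) -- path assumes the standard
# ceph-ansible group_vars layout
radosgw_num_instances: 2
```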

Actual results:
ceph-ansible does not increase the number of RGW instances per node; the setting is ignored.

Expected results:
ceph-ansible should honor the radosgw_num_instances definition in the all.yml file and deploy two RGW instances per node.

Additional info:
The workaround is to define rgw_instances in the host_vars directory.
In host_vars, list the RGW instance configuration of both realms for each node.
The example below is from one of the RGW nodes in the cluster:

rgw_zonemaster: true
rgw_zonesecondary: false
rgw_zonegroupmaster: true
rgw_multisite_proto: http
rgw_instances:
  - instance_name: rgw0
    rgw_realm: realm1
    rgw_zonegroup: realm1-zonegroup
    rgw_zone: realm1-zone
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 80
    rgw_zone_user: sds.team
    rgw_zone_user_display_name: "SDS Team"
    system_access_key: nPZC66HvEi5zrujqns4kF
    system_secret_key: P3HRsE7RpLEYliBOnwNvCJaCZQI6oqKVkg7aZ1VH
  - instance_name: rgw1
    rgw_realm: realm1
    rgw_zonegroup: realm1-zonegroup
    rgw_zone: realm1-zone
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    rgw_zone_user: sds.team
    rgw_zone_user_display_name: "SDS Team"
    system_access_key: nPZC66HvEi5zrujqns4kF
    system_secret_key: P3HRsE7RpLEYliBOnwNvCJaCZQI6oqKVkg7aZ1VH
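As a quick sanity check on such a host_vars entry, every instance on a node needs a unique instance_name and frontend port. A small illustrative script (the data is copied from the example above; the check itself is my own sketch, not part of ceph-ansible):

```python
# Sanity-check rgw_instances for one node: instance names and
# frontend ports must be distinct, or the instances will collide.
# Data mirrors the host_vars example above.
rgw_instances = [
    {"instance_name": "rgw0", "radosgw_frontend_port": 80},
    {"instance_name": "rgw1", "radosgw_frontend_port": 8080},
]

names = [i["instance_name"] for i in rgw_instances]
ports = [i["radosgw_frontend_port"] for i in rgw_instances]
assert len(set(names)) == len(names), "duplicate instance_name"
assert len(set(ports)) == len(ports), "duplicate frontend port"
print("ok: %d distinct RGW instances" % len(rgw_instances))
```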

** Example inventory host file:
[rgws]
ceph-osd01 _radosgw_address=0.0.0.0
ceph-osd01 _radosgw_address=0.0.0.0:8080
ceph-osd02 _radosgw_address=0.0.0.0
ceph-osd02 _radosgw_address=0.0.0.0:8080
ceph-osd03 _radosgw_address=0.0.0.0
ceph-osd03 _radosgw_address=0.0.0.0:8080
ceph-osd04 _radosgw_address=0.0.0.0
ceph-osd04 _radosgw_address=0.0.0.0:8080

Comment 1 Guillaume Abrioux 2021-01-18 19:06:19 UTC

*** This bug has been marked as a duplicate of bug 1888630 ***