Bug 1926624 - [RFE] RGW deployment - cannot specify a binding ip on the storage network using the spec file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Vidushi Mishra
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks: 1820257 1839169
 
Reported: 2021-02-09 08:53 UTC by Francesco Pantano
Modified: 2021-08-30 08:28 UTC
CC List: 9 users

Fixed In Version: ceph-16.2.0-46.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:28:17 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 40048 0 None closed mgr/cephadm: allow RGWSpec networks list to select an IP to bind to 2021-04-12 10:40:15 UTC
Red Hat Issue Tracker RHCEPH-1051 0 None None None 2021-08-27 05:15:01 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:28:39 UTC

Description Francesco Pantano 2021-02-09 08:53:17 UTC
Description of problem:

RGW can be specified and deployed using a spec definition like:

service_type: rgw
service_id: default.default.default
service_name: rgw.default.default.default
placement:
  hosts:
  - <host1>
  - <host2>
  - <host3>
spec:
  rgw_frontend_port: 8080
  rgw_realm: default
  rgw_zone: default
  subcluster: default

where the port and a few other parameters can be specified.
However, when the spec section described above is processed, the RGW daemon is started bound to *:8080 (all interfaces), instead of using the cluster network or an IP address on a specific network chosen by the operator.
This is a gap compared to ceph-ansible, which provides this capability via [1].

[1] https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-config/templates/ceph.conf.j2
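
For illustration, a minimal sketch of the requested behavior using the top-level "networks" list that the linked pull request (ceph pull 40048) later added to cephadm; the subnet below is a placeholder for the operator's storage network:

service_type: rgw
service_id: default.default.default
placement:
  hosts:
  - <host1>
  - <host2>
  - <host3>
networks:
- 172.16.1.0/24    # placeholder subnet; cephadm picks each host's IP in this range for the RGW frontend to bind to
spec:
  rgw_frontend_port: 8080
  rgw_realm: default
  rgw_zone: default

Applied with "ceph orch apply -i <spec file>", each RGW daemon should then bind to its host's address on that subnet instead of listening on all interfaces.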
 


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Juan Miguel Olmo 2021-04-14 15:33:54 UTC
Hi Francesco, sorry. I probably marked the bug as an RFE and moved it to 5.1 while thinking I was working on another bug; honestly, I am not sure (things that happen when shepherding bugs :-)).

You are right: this is implemented and has been available in RHCS 5.0 for a couple of weeks now!

Thanks!!!

Comment 5 Daniel Pivonka 2021-05-05 16:05:29 UTC
A similar problem is reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1954019

You are using the apply command incorrectly. You can either run 'ceph orch apply rgw ...' to apply an RGW spec via CLI parameters, or 'ceph orch apply -i spec.yml' to apply a spec via a YAML file.

You cannot combine 'ceph orch apply rgw ...' with a spec file.


In your example above you are running "ceph orch apply rgw rgw.movies rgw1.yaml".

The correct way to apply your spec file is "ceph orch apply -i rgw1.yaml".



What is actually happening is that you are calling the 'apply rgw' CLI:

>>> orch apply rgw <svc_id> [<realm>] [<zone>] [<port:int>] [--ssl]    Update the number of RGW instances for the given zone
>>>  [<placement>] [--dry-run] [--format {plain|json|json-pretty|      
>>>  yaml}] [--unmanaged] [--no-overwrite] 


with <svc_id> set to "rgw.movies" and [<realm>] set to "rgw1.yaml".

You can see this confirmed in the output of the "ceph orch ls rgw -f yaml" command:

"service_id: rgw.movies"

"spec:
  rgw_realm: rgw1.yaml"

All other parameters of the spec then fell back to their defaults; for example, the placement was 'count: 2'.
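
To make the distinction concrete, a minimal sketch of the two invocations (file name taken from your report):

# What was run: the positional arguments are parsed as <svc_id> and [<realm>]
ceph orch apply rgw rgw.movies rgw1.yaml

# What was intended: pass the spec file to the orchestrator
ceph orch apply -i rgw1.yaml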






Additionally, the spec you are trying to use does not specify a network, so even if it is applied correctly it is not going to bind to a specific network.

add "networks: 10.0.208.0/22" to your spec file to use this feature

>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     0e6bdd12-adba-11eb-92f3-5254003ca783
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon: 3 daemons, quorum vm-00,vm-02,vm-01 (age 4m)
>>>     mgr: vm-00.gxjgfb(active, since 6m), standbys: vm-01.ptpibf
>>>     osd: 3 osds: 3 up (since 3m), 3 in (since 4m)
>>>  
>>>   data:
>>>     pools:   1 pools, 88 pgs
>>>     objects: 0 objects, 0 B
>>>     usage:   45 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     1.136% pgs not active
>>>              87 active+clean
>>>              1  peering
>>>  
>>>   progress:
>>>     Global Recovery Event (20s)
>>>       [===========================.] 
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# radosgw-admin realm create --rgw-realm=movies --default
>>> {
>>>     "id": "8365d038-e2e6-4f28-9853-a3f1a216ec4e",
>>>     "name": "movies",
>>>     "current_period": "2944cd0f-39d5-412f-b538-1c27fa8678ce",
>>>     "epoch": 1
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin zonegroup create --rgw-zonegroup=us --master --default 
>>> {
>>>     "id": "340ca1e6-51cd-4c1c-99d7-c3e6d88cf3e9",
>>>     "name": "us",
>>>     "api_name": "us",
>>>     "is_master": "true",
>>>     "endpoints": [],
>>>     "hostnames": [],
>>>     "hostnames_s3website": [],
>>>     "master_zone": "",
>>>     "zones": [],
>>>     "placement_targets": [],
>>>     "default_placement": "",
>>>     "realm_id": "8365d038-e2e6-4f28-9853-a3f1a216ec4e",
>>>     "sync_policy": {
>>>         "groups": []
>>>     }
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin zone create --rgw-zone=us-east --rgw-zonegroup=us --master  --default
>>> {
>>>     "id": "e3406172-8de2-4afc-b1cd-65d555eebe6f",
>>>     "name": "us-east",
>>>     "domain_root": "us-east.rgw.meta:root",
>>>     "control_pool": "us-east.rgw.control",
>>>     "gc_pool": "us-east.rgw.log:gc",
>>>     "lc_pool": "us-east.rgw.log:lc",
>>>     "log_pool": "us-east.rgw.log",
>>>     "intent_log_pool": "us-east.rgw.log:intent",
>>>     "usage_log_pool": "us-east.rgw.log:usage",
>>>     "roles_pool": "us-east.rgw.meta:roles",
>>>     "reshard_pool": "us-east.rgw.log:reshard",
>>>     "user_keys_pool": "us-east.rgw.meta:users.keys",
>>>     "user_email_pool": "us-east.rgw.meta:users.email",
>>>     "user_swift_pool": "us-east.rgw.meta:users.swift",
>>>     "user_uid_pool": "us-east.rgw.meta:users.uid",
>>>     "otp_pool": "us-east.rgw.otp",
>>>     "system_key": {
>>>         "access_key": "",
>>>         "secret_key": ""
>>>     },
>>>     "placement_pools": [
>>>         {
>>>             "key": "default-placement",
>>>             "val": {
>>>                 "index_pool": "us-east.rgw.buckets.index",
>>>                 "storage_classes": {
>>>                     "STANDARD": {
>>>                         "data_pool": "us-east.rgw.buckets.data"
>>>                     }
>>>                 },
>>>                 "data_extra_pool": "us-east.rgw.buckets.non-ec",
>>>                 "index_type": 0
>>>             }
>>>         }
>>>     ],
>>>     "realm_id": "8365d038-e2e6-4f28-9853-a3f1a216ec4e",
>>>     "notif_pool": "us-east.rgw.log:notif"
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin period update --rgw-realm=movies --commit
>>> {
>>>     "id": "e03432c5-603a-4ab1-a922-733f562557e5",
>>>     "epoch": 1,
>>>     "predecessor_uuid": "2944cd0f-39d5-412f-b538-1c27fa8678ce",
>>>     "sync_status": [],
>>>     "period_map": {
>>>         "id": "e03432c5-603a-4ab1-a922-733f562557e5",
>>>         "zonegroups": [
>>>             {
>>>                 "id": "340ca1e6-51cd-4c1c-99d7-c3e6d88cf3e9",
>>>                 "name": "us",
>>>                 "api_name": "us",
>>>                 "is_master": "true",
>>>                 "endpoints": [],
>>>                 "hostnames": [],
>>>                 "hostnames_s3website": [],
>>>                 "master_zone": "e3406172-8de2-4afc-b1cd-65d555eebe6f",
>>>                 "zones": [
>>>                     {
>>>                         "id": "e3406172-8de2-4afc-b1cd-65d555eebe6f",
>>>                         "name": "us-east",
>>>                         "endpoints": [],
>>>                         "log_meta": "false",
>>>                         "log_data": "false",
>>>                         "bucket_index_max_shards": 11,
>>>                         "read_only": "false",
>>>                         "tier_type": "",
>>>                         "sync_from_all": "true",
>>>                         "sync_from": [],
>>>                         "redirect_zone": ""
>>>                     }
>>>                 ],
>>>                 "placement_targets": [
>>>                     {
>>>                         "name": "default-placement",
>>>                         "tags": [],
>>>                         "storage_classes": [
>>>                             "STANDARD"
>>>                         ]
>>>                     }
>>>                 ],
>>>                 "default_placement": "default-placement",
>>>                 "realm_id": "8365d038-e2e6-4f28-9853-a3f1a216ec4e",
>>>                 "sync_policy": {
>>>                     "groups": []
>>>                 }
>>>             }
>>>         ],
>>>         "short_zone_ids": [
>>>             {
>>>                 "key": "e3406172-8de2-4afc-b1cd-65d555eebe6f",
>>>                 "val": 129399253
>>>             }
>>>         ]
>>>     },
>>>     "master_zonegroup": "340ca1e6-51cd-4c1c-99d7-c3e6d88cf3e9",
>>>     "master_zone": "e3406172-8de2-4afc-b1cd-65d555eebe6f",
>>>     "period_config": {
>>>         "bucket_quota": {
>>>             "enabled": false,
>>>             "check_on_raw": false,
>>>             "max_size": -1,
>>>             "max_size_kb": 0,
>>>             "max_objects": -1
>>>         },
>>>         "user_quota": {
>>>             "enabled": false,
>>>             "check_on_raw": false,
>>>             "max_size": -1,
>>>             "max_size_kb": 0,
>>>             "max_objects": -1
>>>         }
>>>     },
>>>     "realm_id": "8365d038-e2e6-4f28-9853-a3f1a216ec4e",
>>>     "realm_name": "movies",
>>>     "realm_epoch": 2
>>> }
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# cat spec.yml 
>>> service_type: rgw
>>> service_id: rgw.movies
>>> service_name: rgw.ms.movies
>>> placement:
>>>   hosts:
>>>   - vm-00
>>>   - vm-01
>>> spec:
>>>   rgw_frontend_port: 8080
>>>   rgw_realm: movies
>>>   rgw_zone: us-east
>>> networks: 192.169.142.0/24 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch apply -i spec.yml 
>>> Scheduled rgw.rgw.movies update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch  ls rgw -f yaml
>>> service_type: rgw
>>> service_id: rgw.movies
>>> service_name: rgw.rgw.movies
>>> placement:
>>>   hosts:
>>>   - vm-00
>>>   - vm-01
>>> networks:
>>> - 192.169.142.0/24
>>> spec:
>>>   rgw_frontend_port: 8080
>>>   rgw_realm: movies
>>>   rgw_zone: us-east
>>> status:
>>>   created: '2021-05-05T16:02:34.550329Z'
>>>   ports:
>>>   - 8080
>>>   running: 0
>>>   size: 2
>>> events:
>>> - 2021-05-05T16:02:32.414256Z service:rgw.rgw.movies [INFO] "service was created"
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# exit
>>> [root@vm-00 ~]#  
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# netstat -plnt | grep radosgw
>>> tcp        0      0 192.169.142.251:8080    0.0.0.0:*               LISTEN      32212/radosgw       
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# ssh vm-01
>>> [root@vm-01 ~]# 
>>> [root@vm-01 ~]# 
>>> [root@vm-01 ~]# netstat -plnt | grep radosgw
>>> tcp        0      0 192.169.142.48:8080     0.0.0.0:*               LISTEN      18248/radosgw       
>>> [root@vm-01 ~]#

Comment 9 errata-xmlrpc 2021-08-30 08:28:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

