Bug 1954019 - [Cephadm][RGW]: RGW listens on port 80 when deployed via a spec file with port set as 8080
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-27 12:59 UTC by Vidushi Mishra
Modified: 2021-04-28 16:15 UTC
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-28 16:15:22 UTC
Embargoed:



Comment 1 Daniel Pivonka 2021-04-28 16:15:22 UTC
You're incorrectly using the apply command. You can either use 'ceph orch apply rgw ...' to apply an RGW spec via CLI parameters, or 'ceph orch apply -i spec.yml' to apply a spec via a YAML file.

You can't combine 'ceph orch apply rgw ...' and a spec file.


In your example above you're doing this: "ceph orch apply rgw rgw.movies rgw1.yaml"

The correct way to apply your spec file would be "ceph orch apply -i rgw1.yaml"
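
For completeness, if you wanted to set the port purely from the CLI instead of a spec file, something along these lines should do it. This is a sketch based on the 'apply rgw' usage shown below (the positional arguments can also be passed by name, but the exact flags can vary between releases):

>>> # CLI-only sketch: service id, with port and placement passed by name
>>> ceph orch apply rgw rgw.movies --port=8080 --placement="vm-00"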



What's actually happening is that you're calling the 'apply rgw' CLI:

>>> orch apply rgw <svc_id> [<realm>] [<zone>] [<port:int>] [--ssl]    Update the number of RGW instances for the given zone
>>>  [<placement>] [--dry-run] [--format {plain|json|json-pretty|      
>>>  yaml}] [--unmanaged] [--no-overwrite] 


with <svc_id> as "rgw.movies" and [<realm>] as "rgw1.yaml", because the arguments are positional and the file name is simply consumed as the next argument.

You can see that confirmed in the output of the "ceph orch ls rgw -f yaml" command:

"service_id: rgw.movies"

"spec:
  rgw_realm: rgw1.yaml"

All other parameters of the spec were then defaults; for example, the placement was 'count:2'.
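
To get back to a clean state, you would remove the mis-created service and then reapply the spec properly. A sketch (double-check the exact service name with 'ceph orch ls' first; from your output it should be 'rgw.rgw.movies'):

>>> # remove the service created from the mis-parsed CLI call, then apply the yaml spec
>>> ceph orch rm rgw.rgw.movies
>>> ceph orch apply -i rgw1.yaml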



In my test everything works as it should when the spec is applied correctly:

>>> [ceph: root@vm-00 /]# cat spec.yml 
>>> service_type: rgw
>>> service_id: rgw.movies
>>> placement:
>>>   hosts:
>>>   - vm-00
>>> spec:
>>>   rgw_frontend_port: 8080
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch apply -i spec.yml 
>>> Scheduled rgw.movies update...
>>> [ceph: root@vm-00 /]#
>>> [ceph: root@vm-00 /]# ceph -s     
>>>   cluster:
>>>     id:     b9e25404-a82f-11eb-b196-52540098c1d4
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon: 3 daemons, quorum vm-00,vm-01,vm-02 (age 70m)
>>>     mgr: vm-00.ejcadn(active, since 73m), standbys: vm-01.eqdoop
>>>     osd: 3 osds: 3 up (since 69m), 3 in (since 70m)
>>>     rgw: 1 daemon active (1 hosts, 1 zones)
>>>  
>>>   data:
>>>     pools:   5 pools, 160 pgs
>>>     objects: 189 objects, 4.9 KiB
>>>     usage:   93 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     0.625% pgs not active
>>>              159 active+clean
>>>              1   clean+premerge+peered
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch  ls rgw -f yaml
>>> service_type: rgw
>>> service_id: movies
>>> service_name: rgw.movies
>>> placement:
>>>   hosts:
>>>   - vm-00
>>> spec:
>>>   rgw_frontend_port: 8080
>>> status:
>>>   created: '2021-04-28T15:51:39.853829Z'
>>>   ports:
>>>   - 8080
>>>   running: 0
>>>   size: 1
>>> events:
>>> - 2021-04-28T15:51:39.788046Z service:rgw.movies [INFO] "service was created"
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# exit
>>> [root@vm-00 ~]#
>>> [root@vm-00 ~]# netstat -plnt | grep radosgw
>>> tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      39214/radosgw       
>>> tcp6       0      0 :::8080                 :::*                    LISTEN      39214/radosgw       
>>> [root@vm-00 ~]# 
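
As an extra sanity check beyond netstat, the frontend also answers on that port; an anonymous request should return the usual ListAllMyBuckets XML. A sketch, using the host and port from the example above:

>>> # anonymous S3 request against the RGW frontend; expect an XML bucket-listing response
>>> curl http://vm-00:8080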



Closing as not a bug.

