Bug 1944864 - RGW service only accessible from localhost
Summary: RGW service only accessible from localhost
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-30 20:06 UTC by John Harrigan
Modified: 2023-09-15 01:04 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Feature: Reason: Result:
Clone Of:
Environment:
Last Closed: 2021-04-21 20:42:38 UTC
Embargoed:


Attachments
rgw specification file (269 bytes, text/plain), 2021-03-30 20:06 UTC, John Harrigan

Description John Harrigan 2021-03-30 20:06:04 UTC
Created attachment 1767788 [details]
rgw specification file

Description of problem:
After deploying RGWs with the attached specification file, the service is only accessible through localhost:8080.

Version-Release number of selected component (if applicable):
ceph version 16.1.0-486.el8cp

How reproducible:
always

Steps to Reproduce:
1. Perform a dry run using the attached 'rgw_spec.yml' specification file
# ceph orch apply -i rgw_spec.yml --dry-run
|SERVICE  |NAME      |ADD_TO                                                                                                           |
|rgw      |rgw.rgws  |pcloud12.perf.lab.eng.bos.redhat.com pcloud10.perf.lab.eng.bos.redhat.com pcloud08.perf.lab.eng.bos.redhat.com  |

2. Apply the RGW specification to deploy
# ceph orch apply -i rgw_spec.yml
Scheduled rgw.rgws update...

3. Check Ceph status
# ceph status
  cluster:
    id:     8e9d2cec-87eb-11eb-8d27-d4856479e90c
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum pcloud07.perf.lab.eng.bos.redhat.com (age 4d)
    mgr: pcloud07.perf.lab.eng.bos.redhat.com.gkuhoj(active, since 4d)
    osd: 15 osds: 15 up (since 2m), 15 in (since 2m)
    rgw: 3 daemons active (rgws.pcloud08.sdaafe, rgws.pcloud10.xqwqql, rgws.pcloud12.nqjqsw)

4. Check RGW service port
# ssh pcloud08 netstat -tulpn | grep 8080   ← also on pcloud10, pcloud12
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      640382/radosgw      
tcp6       0      0 :::8080                 :::*                    LISTEN      640382/radosgw

5. Check RGW first through localhost and then remotely
# ssh pcloud08 wget -q -O - http://localhost:8080   ← WORKS
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

# wget -q -O - http://172.10.16.8:8080      ← FAILS
<NOTHING Returned>

Actual results:
# wget -q -O - http://172.10.16.8:8080      ← FAILS
<NOTHING Returned>

Expected results:
# wget -q -O - http://172.10.16.8:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Additional info:
netstat shows the daemon listening on 0.0.0.0:8080 (all interfaces).
How do I access the RGW service remotely?
Is the specification file missing an element?
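
(Illustrative diagnostic commands that could narrow this down; these are generic suggestions, not output captured on the affected cluster:)

# ss -tulpn | grep radosgw           ← confirm the daemon really binds 0.0.0.0:8080 on each RGW host
# ip -4 addr show                    ← confirm 172.10.16.8 is actually configured on pcloud08
# curl -v http://172.10.16.8:8080/   ← -v shows whether the TCP connection opens at all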

Comment 1 RHEL Program Management 2021-03-30 20:06:09 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Juan Miguel Olmo 2021-04-07 16:26:02 UTC
@Rachana: 

You need to use something like:

service_type: rgw 
service_id: rgws 	
placement: 
  hosts: 
    - pcloud08.perf.lab.eng.bos.redhat.com
    - pcloud10.perf.lab.eng.bos.redhat.com
    - pcloud12.perf.lab.eng.bos.redhat.com 
rgw_zone: default 
rgw_frontend_port: 8080
unmanaged: false
networks:
  - 192.168.0.0/16

This was merged in Pacific a few days ago (https://github.com/ceph/ceph/pull/40048), and downstream probably last week.

I have seen that this is not reflected in the upstream documentation. Let's use this bug to add this parameter to the documentation.

Please provide feedback about the use of this new parameter. Thanks!
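
(A minimal sketch of applying and verifying such a spec, assuming it is saved as rgw_spec.yml:)

# ceph orch apply -i rgw_spec.yml
# ceph orch ps | grep rgw    ← with 'networks' set, the PORTS column should show an IP from
#                              that network (e.g. 192.168.x.x:8080) rather than *:8080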

Comment 4 Juan Miguel Olmo 2021-04-08 09:15:56 UTC
@John:

Is a firewall blocking connections?
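
(Typical commands to check, for reference; these are generic examples, not output from this cluster:)

# systemctl is-active firewalld
# iptables -L -n | head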

Comment 5 John Harrigan 2021-04-08 12:53:40 UTC
Neither firewalld nor iptables were running.

Comment 6 Daniel Pivonka 2021-04-14 21:13:15 UTC
>>> [dpivonka@localhost kcli_plans]$ kcli list vms
>>> +-------+--------+-----------------+------------------------------------+--------------+---------+
>>> |  Name | Status |       Ips       |               Source               |     Plan     | Profile |
>>> +-------+--------+-----------------+------------------------------------+--------------+---------+
>>> | vm-00 |   up   |  192.168.122.9  | rhel-8.3-update-2-x86_64-kvm.qcow2 | grave-django |  kvirt  |
>>> | vm-01 |   up   | 192.168.122.226 | rhel-8.3-update-2-x86_64-kvm.qcow2 | grave-django |  kvirt  |
>>> | vm-02 |   up   | 192.168.122.147 | rhel-8.3-update-2-x86_64-kvm.qcow2 | grave-django |  kvirt  |
>>> +-------+--------+-----------------+------------------------------------+--------------+---------+
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ ssh root.122.9
>>> Activate the web console with: systemctl enable --now cockpit.socket
>>> 
>>> This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
>>> To register this system, run: insights-client --register
>>> 
>>> Last login: Wed Apr 14 16:05:58 2021 from 192.168.122.1
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# cephadm bootstrap --registry-url registry.redhat.io --registry-username dpivonka --registry-password Xcirca6! --mon-ip 192.168.122.9 --initial-dashboard-password admin  --dashboard-password-noupdate
>>> Verifying podman|docker is present...
>>> Verifying lvm2 is present...
>>> Verifying time synchronization is in place...
>>> Unit chronyd.service is enabled and running
>>> Repeating the final host check...
>>> podman|docker (/usr/bin/podman) is present
>>> systemctl is present
>>> lvcreate is present
>>> Unit chronyd.service is enabled and running
>>> Host looks OK
>>> Cluster fsid: 4ffa98e8-9d5e-11eb-ad3c-52540002b83a
>>> Verifying IP 192.168.122.9 port 3300 ...
>>> Verifying IP 192.168.122.9 port 6789 ...
>>> Mon IP 192.168.122.9 is in CIDR network 192.168.122.0/24
>>> - internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
>>> Logging into custom registry.
>>> Pulling container image registry.redhat.io/rhceph-beta/rhceph-5-rhel8:latest...
>>> Ceph version: ceph version 16.2.0-4.el8cp (987b1d2838ad9c505a6f557f32ee75c1e3ed7028) pacific (stable)
>>> Extracting ceph user uid/gid from container image...
>>> Creating initial keys...
>>> Creating initial monmap...
>>> Creating mon...
>>> Waiting for mon to start...
>>> Waiting for mon...
>>> mon is available
>>> Assimilating anything we can from ceph.conf...
>>> Generating new minimal ceph.conf...
>>> Restarting the monitor...
>>> Setting mon public_network to 192.168.122.0/24
>>> Wrote config to /etc/ceph/ceph.conf
>>> Wrote keyring to /etc/ceph/ceph.client.admin.keyring
>>> Creating mgr...
>>> Verifying port 9283 ...
>>> Waiting for mgr to start...
>>> Waiting for mgr...
>>> mgr not available, waiting (1/15)...
>>> mgr not available, waiting (2/15)...
>>> mgr not available, waiting (3/15)...
>>> mgr is available
>>> Enabling cephadm module...
>>> Waiting for the mgr to restart...
>>> Waiting for mgr epoch 5...
>>> mgr epoch 5 is available
>>> Setting orchestrator backend to cephadm...
>>> Generating ssh key...
>>> Wrote public SSH key to /etc/ceph/ceph.pub
>>> Adding key to root@localhost authorized_keys...
>>> Adding host vm-00...
>>> Deploying mon service with default placement...
>>> Deploying mgr service with default placement...
>>> Deploying crash service with default placement...
>>> Enabling mgr prometheus module...
>>> Deploying prometheus service with default placement...
>>> Deploying grafana service with default placement...
>>> Deploying node-exporter service with default placement...
>>> Deploying alertmanager service with default placement...
>>> Enabling the dashboard module...
>>> Waiting for the mgr to restart...
>>> Waiting for mgr epoch 13...
>>> mgr epoch 13 is available
>>> Generating a dashboard self-signed certificate...
>>> Creating initial admin user...
>>> Fetching dashboard port number...
>>> Ceph Dashboard is now available at:
>>> 
>>> 	     URL: https://vm-00:8443/
>>> 	    User: admin
>>> 	Password: admin
>>> 
>>> You can access the Ceph CLI with:
>>> 
>>> 	sudo /usr/sbin/cephadm shell --fsid 4ffa98e8-9d5e-11eb-ad3c-52540002b83a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
>>> 
>>> Please consider enabling telemetry to help improve Ceph:
>>> 
>>> 	ceph telemetry on
>>> 
>>> For more information see:
>>> 
>>> 	https://docs.ceph.com/docs/pacific/mgr/telemetry/
>>> 
>>> Bootstrap complete.
>>> [root@vm-00 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@vm-01
>>> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
>>> The authenticity of host 'vm-01 (192.168.122.226)' can't be established.
>>> ECDSA key fingerprint is SHA256:fAx5JRuQGAU0BRHmhbfEagf06ZSwxIP0UDZP95Wdrwk.
>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
>>> 
>>> Number of key(s) added: 1
>>> 
>>> Now try logging into the machine, with:   "ssh 'root@vm-01'"
>>> and check to make sure that only the key(s) you wanted were added.
>>> 
>>> [root@vm-00 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@vm-02
>>> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
>>> The authenticity of host 'vm-02 (192.168.122.147)' can't be established.
>>> ECDSA key fingerprint is SHA256:nQXwoSVFzCFNgR+HlP29t0WugJcTUHL8aGUbcZ1BWJo.
>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
>>> 
>>> Number of key(s) added: 1
>>> 
>>> Now try logging into the machine, with:   "ssh 'root@vm-02'"
>>> and check to make sure that only the key(s) you wanted were added.
>>> 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# cephadm shell
>>> Inferring fsid 4ffa98e8-9d5e-11eb-ad3c-52540002b83a
>>> Inferring config /var/lib/ceph/4ffa98e8-9d5e-11eb-ad3c-52540002b83a/mon.vm-00/config
>>> Using recent ceph image registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:24c617082680ef85c43c6e2c4fe462c69805d2f38df83e51f968cec6b1c097a2
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch host add vm-01
>>> Added host 'vm-01'
>>> [ceph: root@vm-00 /]# ceph orch host add vm-02
>>> Added host 'vm-02'
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch apply osd --all-available-devices 
>>> Scheduled osd.all-available-devices update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     4ffa98e8-9d5e-11eb-ad3c-52540002b83a
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon: 3 daemons, quorum vm-00,vm-01,vm-02 (age 3m)
>>>     mgr: vm-00.dwvjqf(active, since 7m), standbys: vm-01.fktfbv
>>>     osd: 3 osds: 3 up (since 3m), 3 in (since 3m)
>>>  
>>>   data:
>>>     pools:   1 pools, 1 pgs
>>>     objects: 0 objects, 0 B
>>>     usage:   15 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     1 active+clean
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# cat spec.yml 
>>> service_type: rgw
>>> service_id: rgws
>>> placement:
>>>   hosts:
>>>     - vm-00
>>> rgw_frontend_port: 8081
>>> unmanaged: false
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch apply -i spec.yml 
>>> Scheduled rgw.rgws update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch ps
>>> NAME                   HOST   STATUS          REFRESHED  AGE   PORTS          VERSION         IMAGE ID      CONTAINER ID  
>>> alertmanager.vm-00     vm-00  running (33m)   112s ago   38m   *:9093 *:9094  0.20.0          a6d8bb89b5e4  60969afddaa1  
>>> crash.vm-00            vm-00  running (38m)   112s ago   38m   -              16.2.0-4.el8cp  7c956aac1349  e0daa903fd75  
>>> crash.vm-01            vm-01  running (36m)   49s ago    36m   -              16.2.0-4.el8cp  7c956aac1349  ef62f6031594  
>>> crash.vm-02            vm-02  running (34m)   49s ago    34m   -              16.2.0-4.el8cp  7c956aac1349  9c543aa01dfb  
>>> grafana.vm-00          vm-00  running (37m)   112s ago   37m   *:3000         6.7.4           11da1f9bfab5  31e771d47af6  
>>> mgr.vm-00.dwvjqf       vm-00  running (39m)   112s ago   39m   *:9283         16.2.0-4.el8cp  7c956aac1349  26ef93afc615  
>>> mgr.vm-01.fktfbv       vm-01  running (36m)   49s ago    36m   *:8443 *:9283  16.2.0-4.el8cp  7c956aac1349  4570463199f3  
>>> mon.vm-00              vm-00  running (39m)   112s ago   39m   -              16.2.0-4.el8cp  7c956aac1349  9fb148978130  
>>> mon.vm-01              vm-01  running (35m)   49s ago    35m   -              16.2.0-4.el8cp  7c956aac1349  852c143dde50  
>>> mon.vm-02              vm-02  running (34m)   49s ago    34m   -              16.2.0-4.el8cp  7c956aac1349  2e3b258fb55e  
>>> node-exporter.vm-00    vm-00  running (37m)   112s ago   37m   *:9100         0.18.1          8846086cd87b  57f6dd646ac8  
>>> node-exporter.vm-01    vm-01  running (35m)   49s ago    35m   *:9100         0.18.1          8846086cd87b  8a9764356dcd  
>>> node-exporter.vm-02    vm-02  running (34m)   49s ago    34m   *:9100         0.18.1          8846086cd87b  79d8d3dd0589  
>>> osd.0                  vm-00  running (35m)   112s ago   35m   -              16.2.0-4.el8cp  7c956aac1349  0b71544be2eb  
>>> osd.1                  vm-01  running (35m)   49s ago    35m   -              16.2.0-4.el8cp  7c956aac1349  ffab75672bb8  
>>> osd.2                  vm-02  running (34m)   49s ago    34m   -              16.2.0-4.el8cp  7c956aac1349  d02703f2b787  
>>> prometheus.vm-00       vm-00  running (33m)   112s ago   37m   *:9095         2.22.2          c1f3defdd8fd  2f6a477dabbb  
>>> rgw.rgws.vm-00.hsrfrq  vm-00  running (119s)  112s ago   119s  *:8081         16.2.0-4.el8cp  7c956aac1349  fbfa36c2824c  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# exit
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# wget -q -O - http://localhost:8081
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# wget -q -O - http://192.168.122.9:8081
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# ssh vm-02
>>> Activate the web console with: systemctl enable --now cockpit.socket
>>> 
>>> This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
>>> To register this system, run: insights-client --register
>>> 
>>> Last login: Wed Apr 14 16:09:28 2021 from 192.168.122.1
>>> [root@vm-02 ~]# 
>>> [root@vm-02 ~]# 
>>> [root@vm-02 ~]# 
>>> [root@vm-02 ~]# wget -q -O - http://192.168.122.9:8081
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[root@vm-02 ~]# 
>>> [root@vm-02 ~]# 
>>> [root@vm-02 ~]# logout
>>> Connection to vm-02 closed.
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# logout
>>> Connection to 192.168.122.9 closed.
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ wget -q -O - http://192.168.122.9:8081
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ ssh root.122.9
>>> Activate the web console with: systemctl enable --now cockpit.socket
>>> 
>>> This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
>>> To register this system, run: insights-client --register
>>> 
>>> Last login: Wed Apr 14 16:59:58 2021 from 192.168.122.1
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# cephadm shell
>>> Inferring fsid 4ffa98e8-9d5e-11eb-ad3c-52540002b83a
>>> Inferring config /var/lib/ceph/4ffa98e8-9d5e-11eb-ad3c-52540002b83a/mon.vm-00/config
>>> Using recent ceph image registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:24c617082680ef85c43c6e2c4fe462c69805d2f38df83e51f968cec6b1c097a2
>>> WARNING: The same type, major and minor should not be used for multiple devices.
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph --version
>>> ceph version 16.2.0-4.el8cp (987b1d2838ad9c505a6f557f32ee75c1e3ed7028) pacific (stable)
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph config get mon container_image
>>> registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:24c617082680ef85c43c6e2c4fe462c69805d2f38df83e51f968cec6b1c097a2
>>> [ceph: root@vm-00 /]# 



The RGW service is reachable via the IP of the machine the daemon is running on, from anywhere that host is reachable.



We need more details about the setup where this problem occurred. Are you trying to use multiple networks? It's not clear whether the IP of pcloud08.perf.lab.eng.bos.redhat.com is 172.10.16.8.



Additionally, you're using 'rgw_zone: default' in your spec. This should only be used in combination with 'rgw_realm:'.
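
(If a dedicated realm/zone is actually intended, both keys would be set together; the realm and zone names below are placeholders, not values from this cluster:)

service_type: rgw
service_id: rgws
placement:
  hosts:
    - pcloud08.perf.lab.eng.bos.redhat.com
rgw_realm: myrealm
rgw_zone: myzone
rgw_frontend_port: 8080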

Comment 7 Daniel Pivonka 2021-04-15 16:15:25 UTC
Additionally, here is how to use the 'networks' option to deploy the service on a specific network:

>>> [root@vm-00 ~]# ip a
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>        valid_lft forever preferred_lft forever
>>>     inet6 ::1/128 scope host 
>>>        valid_lft forever preferred_lft forever
>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>>>     link/ether 52:54:00:ec:59:47 brd ff:ff:ff:ff:ff:ff
>>>     inet 192.168.122.208/24 brd 192.168.122.255 scope global dynamic noprefixroute eth0
>>>        valid_lft 3530sec preferred_lft 3530sec
>>>     inet6 fe80::5054:ff:feec:5947/64 scope link 
>>>        valid_lft forever preferred_lft forever
>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>>>     link/ether 52:54:00:6d:a4:a3 brd ff:ff:ff:ff:ff:ff
>>>     inet 192.169.142.239/24 brd 192.169.142.255 scope global dynamic noprefixroute eth1
>>>        valid_lft 3530sec preferred_lft 3530sec
>>>     inet6 fe80::3d48:2d3b:46ac:5cd1/64 scope link noprefixroute 
>>>        valid_lft forever preferred_lft forever
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# ./cephadm shell
>>> Inferring fsid 34a8630e-9e03-11eb-8004-525400ec5947
>>> Inferring config /var/lib/ceph/34a8630e-9e03-11eb-8004-525400ec5947/mon.vm-00/config
>>> Using recent ceph image docker.io/ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph --version
>>> ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable)
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph -s 
>>>   cluster:
>>>     id:     34a8630e-9e03-11eb-8004-525400ec5947
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon: 3 daemons, quorum vm-00,vm-02,vm-01 (age 7m)
>>>     mgr: vm-00.gnkuyx(active, since 10m), standbys: vm-02.ikrzpu
>>>     osd: 3 osds: 3 up (since 7m), 3 in (since 7m)
>>>  
>>>   data:
>>>     pools:   1 pools, 1 pgs
>>>     objects: 0 objects, 0 B
>>>     usage:   15 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     1 active+clean
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# cat spec.yml 
>>> service_type: rgw 
>>> service_id: rgws 	
>>> placement: 
>>>   hosts: 
>>>     - vm-00
>>> rgw_frontend_port: 8080
>>> unmanaged: false
>>> networks:
>>>   - 192.169.142.0/24
>>> 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch apply -i spec.yml 
>>> Scheduled rgw.rgws update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch ps
>>> NAME                   HOST   STATUS         REFRESHED  AGE  PORTS                 VERSION  IMAGE ID      CONTAINER ID  
>>> alertmanager.vm-00     vm-00  running (8m)   12s ago    11m  *:9093 *:9094         0.20.0   0881eb8f169f  e40ee34a9c57  
>>> crash.vm-00            vm-00  running (11m)  12s ago    11m  -                     16.2.0   24ecd6d5f14c  327d4d2be921  
>>> crash.vm-01            vm-01  running (9m)   6m ago     9m   -                     16.2.0   24ecd6d5f14c  def40f9f9892  
>>> crash.vm-02            vm-02  running (9m)   5m ago     9m   -                     16.2.0   24ecd6d5f14c  ac13280df800  
>>> grafana.vm-00          vm-00  running (8m)   12s ago    10m  *:3000                6.7.4    80728b29ad3f  df9d10fff33a  
>>> mgr.vm-00.gnkuyx       vm-00  running (12m)  12s ago    12m  *:9283                16.2.0   24ecd6d5f14c  f5468d40dbb0  
>>> mgr.vm-02.ikrzpu       vm-02  running (9m)   5m ago     9m   *:8443 *:9283         16.2.0   24ecd6d5f14c  1d9525b7cef9  
>>> mon.vm-00              vm-00  running (12m)  12s ago    12m  -                     16.2.0   24ecd6d5f14c  63a2e71364fb  
>>> mon.vm-01              vm-01  running (9m)   6m ago     9m   -                     16.2.0   24ecd6d5f14c  fe04212105a4  
>>> mon.vm-02              vm-02  running (9m)   5m ago     9m   -                     16.2.0   24ecd6d5f14c  6f670ac3fa2b  
>>> node-exporter.vm-00    vm-00  running (10m)  12s ago    10m  *:9100                0.18.1   e5a616e4b9cf  1d016351c587  
>>> node-exporter.vm-01    vm-01  running (8m)   6m ago     8m   *:9100                0.18.1   e5a616e4b9cf  7636b7f36e5e  
>>> node-exporter.vm-02    vm-02  running (8m)   5m ago     8m   *:9100                0.18.1   e5a616e4b9cf  63910bf9e82f  
>>> osd.0                  vm-00  running (8m)   12s ago    8m   -                     16.2.0   24ecd6d5f14c  f6a2c5da6474  
>>> osd.1                  vm-02  running (8m)   5m ago     8m   -                     16.2.0   24ecd6d5f14c  f5d67a805c1a  
>>> osd.2                  vm-01  running (8m)   6m ago     8m   -                     16.2.0   24ecd6d5f14c  e68b84ae9a6a  
>>> prometheus.vm-00       vm-00  running (8m)   12s ago    10m  *:9095                2.18.1   de242295e225  c3d12d8fcfdc  
>>> rgw.rgws.vm-00.rtdtvj  vm-00  running (14s)  12s ago    14s  192.169.142.239:8080  16.2.0   24ecd6d5f14c  90805c69001e  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# exit
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# curl 192.169.142.239:8080
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# logout
>>> Connection to 192.168.122.208 closed.
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ curl 192.169.142.239:8080
>>> <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$ curl 192.168.122.208:8080
>>> curl: (7) Failed to connect to 192.168.122.208 port 8080: Connection refused
>>> [dpivonka@localhost kcli_plans]$ 
>>> [dpivonka@localhost kcli_plans]$

Comment 10 Daniel Pivonka 2021-04-21 20:42:38 UTC
Closing as this is not a bug. Opened a doc BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1952244

Comment 11 Red Hat Bugzilla 2023-09-15 01:04:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

