Bug 1981682 - [cephadm][rgw][ssl]: error 'failed initializing frontend' seen on configuring beast frontend with ssl
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Matt Benjamin (redhat)
QA Contact: Madhavi Kasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-13 06:59 UTC by Madhavi Kasturi
Modified: 2021-08-30 08:31 UTC
CC List: 16 users

Fixed In Version: ceph-16.2.0-114.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:31:32 UTC
Embargoed:

Links:
- Ceph Project Bug Tracker 52000 (last updated 2021-08-02 14:35:17 UTC)
- GitHub ceph/ceph pull 42372: doc/cephadm: Add RGW ssl, open (last updated 2021-07-16 10:55:12 UTC)
- GitHub ceph/ceph pull 42587 (last updated 2021-08-04 15:14:37 UTC)
- Red Hat Issue Tracker RHCEPH-1321 (last updated 2021-08-30 00:24:57 UTC)
- Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:31:43 UTC)

Internal Links: 1987010

Comment 13 Sebastian Wagner 2021-07-16 10:55:12 UTC
Upstream doc PR: https://github.com/ceph/ceph/pull/42372
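
For context, a minimal sketch of the kind of cephadm RGW-over-SSL spec that the doc PR describes. The service id "myrgw", the port, and the certificate body below are placeholders, not values taken from this cluster; only the host name magna009 is taken from the logs later in this bug.

---
# Sketch only: service id, port, and certificate contents are placeholders.
cat > rgw-ssl.yaml <<'EOF'
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - magna009
spec:
  ssl: true
  rgw_frontend_port: 8443
  rgw_frontend_ssl_certificate: |
    -----BEGIN PRIVATE KEY-----
    ...key material...
    -----END PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    ...certificate...
    -----END CERTIFICATE-----
EOF
ceph orch apply -i rgw-ssl.yaml
---

Note that the certificate field carries the key and certificate concatenated in one block, which is the part the beast frontend fails to initialize when it is malformed.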

Comment 16 Sebastian Wagner 2021-07-16 13:45:11 UTC
Could you please drop the service with `ceph orch rm`, wait for it to be removed, and then re-apply it?
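
As a sketch of that remove-and-reapply cycle (the service name rgw.myrgw and the spec file rgw-ssl.yaml are the placeholder names from the example above, not the actual names in this cluster):

---
ceph orch rm rgw.myrgw           # remove the service
ceph orch ls rgw                 # repeat until the service no longer appears
ceph orch apply -i rgw-ssl.yaml  # re-apply the spec
---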

Comment 18 Sebastian Wagner 2021-07-19 15:54:15 UTC
The daemon is printing:

---
...
Jul 19 15:35:24 magna009 conmon[1131092]: rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e361610) v8 -- 0x557b3e436400 con 0x557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: ) v8 ==== 199+0+110 (crc 0 0 0) 0x557b3dde3440 con 0x557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: 557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: 366957839,v1:10.8.128.4:6801/3366957839] -- osd_op(unknown.0.0:3782 6.6 6:7eaf5fb0:::.dir.adacbe1b-02b4-41b8-b11d-0d505b442ed4.534524.1.1611:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e361610) v8 -- 0x557b3e327800 con 0x557b3de7ac00
Jul 19 15:35:25 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Main process exited, code=exited, status=22/n/a
Jul 19 15:35:25 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Failed with result 'exit-code'.
Jul 19 15:35:35 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Service RestartSec=10s expired, scheduling restart.
Jul 19 15:35:35 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Scheduled restart job, restart counter is at 5.

---

I'd think there is an error with bucket_list or something, and I think the issue with the certificate is gone. Can you verify, Tejas?

Comment 31 Veera Raghava Reddy 2021-07-28 17:44:11 UTC
Created new Bug to track Upgrade scenario - https://bugzilla.redhat.com/show_bug.cgi?id=1987010

Comment 39 Sage Weil 2021-08-02 15:40:28 UTC
I think this would address the warning: https://github.com/ceph/ceph/pull/42587

Comment 40 Sebastian Wagner 2021-08-04 15:13:56 UTC
Moving this BZ to the RGW component. I think there is not much to be done here except backporting and cherry-picking this, right?

Comment 55 errata-xmlrpc 2021-08-30 08:31:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294
