Bug 1981682

Summary: [cephadm][rgw][ssl]: error 'failed initializing frontend' seen on configuring beast frontend with ssl
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Madhavi Kasturi <mkasturi>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Madhavi Kasturi <mkasturi>
Severity: high
Docs Contact:
Priority: unspecified
Version: 5.0
CC: aoconnor, asakthiv, cbodley, ceph-eng-bugs, gsitlani, jthottan, kbader, kdreyer, mbenjamin, mwatts, sewagner, sweil, tchandra, tserlin, vereddy, vimishra
Target Milestone: ---
Keywords: Automation, Regression
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.0-114.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:31:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 13 Sebastian Wagner 2021-07-16 10:55:12 UTC
upstream doc pr: https://github.com/ceph/ceph/pull/42372

Comment 16 Sebastian Wagner 2021-07-16 13:45:11 UTC
could you please drop the service with `ceph orch rm`, wait for it to be fully removed, and then re-apply it?
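
A minimal sketch of that flow, assuming the RGW service was deployed from a spec file (the service name, host, and file name below are illustrative, not taken from this cluster):

---
# remove the existing RGW service and wait until it is gone
ceph orch rm rgw.myrgw
ceph orch ls rgw              # repeat until the service no longer appears

# then re-apply it from the original spec file, e.g. rgw.yaml containing
# (standard RGW service spec fields; values are illustrative):
#
#   service_type: rgw
#   service_id: myrgw
#   placement:
#     hosts:
#       - magna009
#   spec:
#     rgw_frontend_port: 443
#     ssl: true
#     rgw_frontend_ssl_certificate: |
#       (concatenated certificate and key in PEM format)
#
ceph orch apply -i rgw.yaml
---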

Comment 18 Sebastian Wagner 2021-07-19 15:54:15 UTC
the daemon is printing:

---
...
Jul 19 15:35:24 magna009 conmon[1131092]: rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e361610) v8 -- 0x557b3e436400 con 0x557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: ) v8 ==== 199+0+110 (crc 0 0 0) 0x557b3dde3440 con 0x557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: 557b3de7a800
Jul 19 15:35:24 magna009 conmon[1131092]: 366957839,v1:10.8.128.4:6801/3366957839] -- osd_op(unknown.0.0:3782 6.6 6:7eaf5fb0:::.dir.adacbe1b-02b4-41b8-b11d-0d505b442ed4.534524.1.1611:head [call rgw.bucket_list in=47b] snapc 0=[] ondisk+read+known_if_redirected e361610) v8 -- 0x557b3e327800 con 0x557b3de7ac00
Jul 19 15:35:25 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Main process exited, code=exited, status=22/n/a
Jul 19 15:35:25 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Failed with result 'exit-code'.
Jul 19 15:35:35 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Service RestartSec=10s expired, scheduling restart.
Jul 19 15:35:35 magna009 systemd[1]: ceph-0abfa50c-99df-11eb-a239-002590fc2772.magna009.rexvtb.service: Scheduled restart job, restart counter is at 5.

---

I'd think there is an error with bucket_list or something, and the issue with the certificate seems to be gone. Can you verify, Tejas?
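
One way to check the daemon state and its recent log output, assuming the fsid and host from the journal above (the daemon name below is illustrative; take the real one from `ceph orch ps`):

---
# confirm whether the rgw daemon is running or stuck in a restart loop
ceph orch ps | grep rgw

# pull the recent journal entries for the failing daemon on magna009
cephadm logs --fsid 0abfa50c-99df-11eb-a239-002590fc2772 \
    --name rgw.myrgw.magna009.rexvtb -- -n 200
---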

Comment 31 Veera Raghava Reddy 2021-07-28 17:44:11 UTC
Created new Bug to track Upgrade scenario - https://bugzilla.redhat.com/show_bug.cgi?id=1987010

Comment 39 Sage Weil 2021-08-02 15:40:28 UTC
I think this would address the warning: https://github.com/ceph/ceph/pull/42587

Comment 40 Sebastian Wagner 2021-08-04 15:13:56 UTC
Moving this BZ to the RGW component. I think there is not much to be done here except backporting and cherry-picking this, right?

Comment 55 errata-xmlrpc 2021-08-30 08:31:32 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294