Bug 1961517 - rook-ceph-rbd-mirror pods show failed: AdminSocket::bind_and_listen: The UNIX domain socket path
Summary: rook-ceph-rbd-mirror pods show failed: AdminSocket::bind_and_listen: The UNIX...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: rook
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: OCS 4.8.0
Assignee: Sébastien Han
QA Contact: Aviad Polak
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-18 07:21 UTC by Pratik Surve
Modified: 2021-08-03 18:16 UTC
CC List: 6 users

Fixed In Version: 4.8.0-406.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-03 18:16:11 UTC
Embargoed:


Links
- Github openshift/rook pull 242 (open): Resync release-4.8 with Rook master (last updated 2021-05-24 17:04:59 UTC)
- Github rook/rook pull 7935 (open): ceph: rehydrate the bootstrap peer token secret on monitor changes (last updated 2021-05-18 17:42:51 UTC)
- Red Hat Product Errata RHBA-2021:3003 (last updated 2021-08-03 18:16:26 UTC)

Description Pratik Surve 2021-05-18 07:21:39 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

rook-ceph-rbd-mirror pods show debug 2021-05-18 07:14:48.756 7faba8d1c680 -1 asok(0x55d6a127e000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: The UNIX domain socket path /var/run/ceph/client.rbd-mirror-peer.16.4cc34565-eff6-46f8-9805-91761cf219c9-openshift-storage.94380315121088.asok is too long! The maximum length on this system is 107
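
For context, the failure comes from the Linux limit on UNIX domain socket paths (sockaddr_un.sun_path leaves 107 usable bytes, as the message says). A minimal Go sketch, illustrative only, with the path copied verbatim from the log above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Socket path copied from the log line above; it exceeds the
	// 107-byte sun_path limit the error message reports.
	path := "/var/run/ceph/client.rbd-mirror-peer.16.4cc34565-eff6-46f8-9805-91761cf219c9-openshift-storage.94380315121088.asok"
	fmt.Printf("path length: %d bytes (limit: 107)\n", len(path))

	// Binding a UNIX socket to a too-long path fails with
	// "invalid argument", the same class of failure rbd-mirror hits.
	if _, err := net.Listen("unix", path); err != nil {
		fmt.Fprintln(os.Stderr, "bind failed:", err)
	}
}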

Version of all relevant components (if applicable):

OCP version:- 4.8.0-0.nightly-2021-05-15-141455
OCS version:- ocs-operator.v4.8.0-394.ci


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy 2 OCP clusters
2. Connect them with Submariner
3. Deploy CephRBDMirror pods and check their logs


Actual results:

Pod logs: http://pastebin.test.redhat.com/964594

Expected results:


Additional info:

Comment 3 Sébastien Han 2021-05-18 12:44:22 UTC
The socket path is constructed as "$run_dir/$name.$pid.$cluster.$cctid.asok", so the $cluster part is definitely the problem here.
When creating the bootstrap peer, Rook forms the cluster name from the Ceph fsid plus the namespace and passes it via --site-name.

We need to decide which part to chop off... I feel like both are relevant, at least the cluster fsid.
Madhu, thoughts from the DR integration side?
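
A hypothetical sketch of one direction (not necessarily the fix that shipped): derive a fixed-length token from the fsid + namespace pair instead of embedding both verbatim in --site-name, so the $cluster component of the socket path stays bounded:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// shortSiteName is a hypothetical helper, not Rook's actual code: it
// hashes "<fsid>-<namespace>" down to 16 hex characters, which bounds
// the $cluster part of the asok path while remaining stable and, in
// practice, unique per peer.
func shortSiteName(fsid, namespace string) string {
	sum := sha256.Sum256([]byte(fsid + "-" + namespace))
	return hex.EncodeToString(sum[:])[:16]
}

func main() {
	// Values taken from the socket path in the original report.
	fmt.Println(shortSiteName(
		"4cc34565-eff6-46f8-9805-91761cf219c9",
		"openshift-storage",
	))
}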

Comment 5 Sébastien Han 2021-05-18 12:50:29 UTC
What do you mean by a standalone cluster?

Comment 14 errata-xmlrpc 2021-08-03 18:16:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.8.0 container images bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3003

