Bug 2115462 - extend error message when hitting execnet exception on closed I/O
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 7.0
Assignee: Adam King
QA Contact: Aditya Ramteke
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks: 2237662
 
Reported: 2022-08-04 17:35 UTC by Michaela Lang
Modified: 2024-08-29 05:37 UTC
7 users

Fixed In Version: ceph-18.2.0-1
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-13 15:19:16 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-5005 0 None None None 2022-08-04 17:41:29 UTC
Red Hat Product Errata RHBA-2023:7780 0 None None None 2023-12-13 15:19:20 UTC

Description Michaela Lang 2022-08-04 17:35:10 UTC
Description of problem:
The exception reporting is too generic.

Version-Release number of selected component (if applicable):
5.1


How reproducible:
Every time.


Steps to Reproduce:
1. Set up a Ceph cluster.
2. Prepare a host to be added to the cluster.
3. Do not configure "NOPASSWD" in the sudoers file for the cephorch (cephadm-configured) user:
3.1 echo "${CEPHUSER} ALL=(ALL) ALL" > /etc/sudoers.d/cephorch
4. cephadm shell ceph orch host add node1 127.0.0.1 _admin
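For contrast, a minimal sketch of the supported sudoers entry, which uses NOPASSWD so sudo never prompts on the remote host (the user name is illustrative, and a temporary file is used here instead of /etc/sudoers.d so the sketch is safe to run):

```shell
# Supported configuration: NOPASSWD prevents sudo from prompting,
# which is what keeps the execnet I/O channel open.
CEPHUSER=cephorch
SUDOERS_FILE=$(mktemp)
echo "${CEPHUSER} ALL=(ALL) NOPASSWD: ALL" > "${SUDOERS_FILE}"
cat "${SUDOERS_FILE}"
```

In a real deployment this line would go into /etc/sudoers.d/cephorch (and should be validated with visudo -c).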

Actual results:
Can't communicate with remote host `127.0.0.1`, possibly because python3 is not installed there: cannot send (already closed?)


Expected results:
Added host 'node1' with addr '127.0.0.1'


Additional info:
The root cause is that sudo prompting for a password closes the I/O channel, which makes execnet fail; cephadm/ssh.py catches this failure and reports only a generic error message. This makes debugging difficult, even though the setup procedure and documentation are correct and name NOPASSWD in sudoers as the only supported configuration.
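A minimal sketch of the kind of error-message extension requested: when the remote channel closes, append a hint about the sudoers/NOPASSWD configuration instead of emitting only the generic message. The names here (HostConnectionError, wrap_remote_error) are illustrative, not the actual cephadm/ssh.py API:

```python
class HostConnectionError(Exception):
    """Raised when cephadm cannot talk to a remote host (hypothetical name)."""


def wrap_remote_error(host: str, err: Exception) -> HostConnectionError:
    # Extend the generic message with a concrete hint: a closed execnet
    # channel is often caused by sudo prompting for a password, which the
    # documented NOPASSWD sudoers entry prevents.
    msg = (
        f"Can't communicate with remote host `{host}`, possibly because "
        f"python3 is not installed there, or the sudoers configuration "
        f"requires a password (NOPASSWD is the only supported setup): {err}"
    )
    return HostConnectionError(msg)
```

With this wrapper, the failure from the reproduction steps would mention the sudoers hint alongside the original execnet error text.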

I have also created an upstream pull request: https://github.com/ceph/ceph/pull/47464.

Comment 1 Adam King 2022-08-08 11:08:08 UTC
I've checked out the posted PR. It looks good aside from some issues with the commit message.

Comment 14 errata-xmlrpc 2023-12-13 15:19:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

