Bug 2216295 - [cee/sd][cephadm] got a message "host.containers.internal’s server IP address could not be found" while accessing Grafana graphs on the ceph-dashboard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z1
Assignee: Adam King
QA Contact: Vinayak Papnoi
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2221020
 
Reported: 2023-06-20 18:53 UTC by Milind
Modified: 2024-05-28 06:03 UTC
CC List: 6 users

Fixed In Version: ceph-17.2.6-84.el9cp
Doc Type: Bug Fix
Doc Text:
.Special lines are no longer included in the host's `/etc/hosts` file when mounting it into the container

Previously, affected podman versions added a special line to the `/etc/hosts` file inside the container, which interfered with host name resolution within the container and caused it to treat _host.containers.internal_ as the FQDN of the current host. With this fix, the host's `/etc/hosts` file is mounted into the container without the special lines, and users can rely on `/etc/hosts` for hostname resolution. Users no longer encounter errors about being unable to find the IP address for _host.containers.internal_ while accessing Grafana graphs in the Ceph dashboard. (A brief illustration of this behavior follows the header fields below.)
Clone Of:
Environment:
Last Closed: 2023-08-03 16:45:10 UTC
Embargoed:
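As a rough illustration of the behavior described in the Doc Text above, the sketch below contrasts podman's default /etc/hosts handling with a run that disables podman's generated host entries and bind-mounts the host's file instead. The --no-hosts and -v options are standard podman flags, but the image name is a placeholder and this is not the exact invocation cephadm uses.
------
# On affected podman versions, the default behavior injects a
# "host.containers.internal" entry into the container's /etc/hosts.
podman run --rm registry.example.com/some-image cat /etc/hosts

# Hedged sketch of the fixed behavior (not the exact cephadm command line):
# ask podman not to manage /etc/hosts and bind-mount the host's file read-only,
# so the container resolves host names the same way the host does.
podman run --rm --no-hosts -v /etc/hosts:/etc/hosts:ro registry.example.com/some-image cat /etc/hosts
------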


Attachments
error screenshot (attachment 1971757)


Links
Red Hat Issue Tracker RHCEPH-6897 (last updated 2023-06-20 18:56:50 UTC)
Red Hat Product Errata RHBA-2023:4473 (last updated 2023-08-03 16:46:02 UTC)

Description Milind 2023-06-20 18:53:44 UTC
Created attachment 1971757 [details]
error screenshot

Description of problem:
I have a fresh RHCS 6 cluster. When I try to access any of the Grafana dashboards via the Ceph Dashboard, I get a "host.containers.internal’s server IP address could not be found" error.

See attachment (Error screenshot)

Version-Release number of selected component (if applicable):
ceph version 17.2.5-75.el9cp (52c8ab07f1bc5423199eeb6ab5714bc30a930955) 

How reproducible:
Always

Steps to Reproduce:
1. Install a RHCS 6 cluster
2. Open the ceph dashboard
3. Try to open any graph provided by grafana

Actual results:
The iframe shows a "host.containers.internal not found" error.

Expected results:
The graphs should be displayed.

Additional info:

Upon further investigation, I see that each podman container has an entry in its /etc/hosts mapping the IP of the host the container is running on to the name host.containers.internal. The upstream tracker [1] mentions that this issue occurs only with podman version 4.1 and later.
------
[root@mgmt-0 ~]# podman exec -it ceph-38f5dc9a-0885-11ee-9a72-fa163efb4790-mon-mgmt-0-milvermarhcs5-lab-upshift-rdu2-redhat-com   cat /etc/hosts
127.0.0.1	localhost localhost.localdomain localhost4 localhost4.localdomain4
::1	localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.90.20	host.containers.internal   <<<<<<<
------
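For reference, a quick hedged check across the running containers (the grep on "ceph" in the container name is an assumption about cephadm's naming scheme) could look like this:
------
# List running containers and check each one's /etc/hosts for the injected
# entry. Filtering on "ceph" assumes cephadm's usual container names.
podman ps --format '{{.Names}}' | grep ceph | while read -r name; do
    echo "== ${name}"
    podman exec "${name}" grep host.containers.internal /etc/hosts || echo "   no host.containers.internal entry"
done
------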

Podman version:
[root@mgmt-0 38f5dc9a-0885-11ee-9a72-fa163efb4790]# podman version
Client:       Podman Engine
Version:      4.4.1
API Version:  4.4.1
Go Version:   go1.19.6
Built:        Wed Apr 26 12:50:28 2023
OS/Arch:      linux/amd64

I was able to resolve this by manually updating the API host settings:
------
ceph dashboard set-alertmanager-api-host http://...
ceph dashboard set-grafana-api-url https://...
ceph dashboard set-prometheus-api-host http://...
------
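For illustration, the full commands look something like the following; the host name is a placeholder and the ports are the usual cephadm defaults (Grafana 3000, Alertmanager 9093, Prometheus 9095), so substitute your deployment's actual endpoints:
------
# Placeholder host name and assumed default ports; point these at the real
# Grafana, Alertmanager, and Prometheus endpoints for your cluster.
ceph dashboard set-grafana-api-url https://mgmt-0.example.com:3000
ceph dashboard set-alertmanager-api-host http://mgmt-0.example.com:9093
ceph dashboard set-prometheus-api-host http://mgmt-0.example.com:9095
------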

[1] https://tracker.ceph.com/issues/57018

We need to know the root cause of this issue and whether the above workaround is legitimate.

Comment 20 errata-xmlrpc 2023-08-03 16:45:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473

