Bug 1440580 - "Untrusted host" error message displays when adding remote host on cockpit dashboard
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: cockpit
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: Martin Pitt
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks: ovirt-node-ng-43-el76-platform
 
Reported: 2017-04-10 02:55 UTC by Wei Wang
Modified: 2018-01-24 07:22 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-24 07:22:20 UTC
Target Upstream Version:


Attachments (Terms of Use)
picture (14.69 KB, image/png)
2017-04-10 02:55 UTC, Wei Wang
log files (459.58 KB, application/x-gzip)
2017-04-10 02:59 UTC, Wei Wang
picture1 (45.89 KB, image/png)
2018-01-23 07:10 UTC, Wei Wang
20180123log_files (1.85 MB, application/x-gzip)
2018-01-23 07:11 UTC, Wei Wang
host key dialog for adding host2 to host1 (46.28 KB, image/png)
2018-01-23 10:33 UTC, Martin Pitt

Description Wei Wang 2017-04-10 02:55:18 UTC
Created attachment 1270354 [details]
picture

Description of problem:
"Untrusted host" error message displays when adding remote host on cockpit dashboard

Version-Release number of selected component (if applicable):
host1:
update rhvh: rhev-hypervisor6-6.8-20160707.3.el6ev --> redhat-virtualization-host-3.6-20170404.0 --> redhat-virtualization-host-4.1-20170403.0
cockpit-ws-126-1.el7.x86_64
cockpit-ovirt-dashboard-0.10.7-0.0.16.el7ev.noarch
imgbased-0.9.20-0.1.el7ev.noarch

host2:
update rhvh: redhat-virtualization-host-3.6-20170404.0 --> redhat-virtualization-host-4.1-20170403.0
cockpit-ws-126-1.el7.x86_64
cockpit-ovirt-dashboard-0.10.7-0.0.16.el7ev.noarch
imgbased-0.9.20-0.1.el7ev.noarch


How reproducible:
100%


Steps to Reproduce:
1. Install and update host1 or host2 according to bug 1421098 comment #15
2. Log in to cockpit on host1 and host2 with the root account
3. Add host2 to host1 on host1's dashboard
   or
   Add host1 to host2 on host2's dashboard
4. Check the process after clicking the "Connect" button



Actual results:
An "Untrusted host" error message displays, and adding the remote host fails.

Expected results:
No error message displays, and the remote host is added successfully.


Additional info:

Comment 1 Wei Wang 2017-04-10 02:59:18 UTC
Created attachment 1270356 [details]
log files

Comment 3 Peter 2017-08-11 15:33:09 UTC
I am unable to reproduce this. Have you tried with the latest cockpit releases? Or can you give me access to an environment where you can reproduce it?

Comment 4 Martin Pitt 2018-01-22 14:26:02 UTC
The general dialog with the fingerprint just means that the user that you logged into Cockpit with never used ssh to connect to the system you added. In other words, this mirrors ssh's fingerprint confirmation question, which is quite deliberate.

I cannot reproduce the "untrusted host" error message in that dialog, though; I've tried with the current RHEL 7.4 image. Do you still get it there?

Downgrading severity, as this is just about an extra warning message which isn't actually wrong either.

Comment 5 Wei Wang 2018-01-23 07:08:57 UTC
Retested this bug.
A "Cockpit could not contact the given host." error message displays when adding host1 to host2 on host2's cockpit dashboard (see attachment "picture1").

Test Version:
host1:
update rhvh: rhev-hypervisor6-6.8-20160707.3.el6ev --> redhat-virtualization-host-3.6-20170404.0 --> redhat-virtualization-host-4.2-20180115.0
cockpit-ws-157-1.el7.x86_64
cockpit-dashboard-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-system-157-1.el7.noarch
cockpit-ovirt-dashboard-0.11.4-0.1.el7ev.noarch
vdsm-4.20.13-1.el7ev.x86_64
imgbased-1.0.6-0.1.el7ev.noarch

host2:
update rhvh: redhat-virtualization-host-3.6-0.20180103.0 --> redhat-virtualization-host-4.2-20180115.0
cockpit-ws-157-1.el7.x86_64
cockpit-dashboard-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-storaged-157-1.el7.noarch
cockpit-system-157-1.el7.noarch
cockpit-ovirt-dashboard-0.11.4-0.1.el7ev.noarch
vdsm-4.20.13-1.el7ev.x86_64
imgbased-1.0.6-0.1.el7ev.noarch


Test Steps:
1. Install and update host1 or host2 according to bug 1421098 comment #15
2. Log in to cockpit on host1 and host2 with the root account
3. Add host1 to host2 on host2's dashboard
4. Check the process


Actual results:
A "Cockpit could not contact the given host." error message displays, and adding remote host1 to host2's cockpit dashboard fails.

Expected results:
No error message displays, and the remote host is added successfully.

More Info:
Adding host2 to host1 on host1's cockpit dashboard is successful.

mpitt@
The reproduced environment will be held for one day. If you need the environment, I can send you access via email.

Comment 6 Wei Wang 2018-01-23 07:10:07 UTC
Created attachment 1384693 [details]
picture1

Comment 7 Wei Wang 2018-01-23 07:11:00 UTC
Created attachment 1384695 [details]
20180123log_files

Comment 8 Martin Pitt 2018-01-23 09:30:10 UTC
@weiwang: This is now a rather different error message, and not just cosmetic. It seems it doesn't allow you to confirm the fingerprint now, and just plain fails to connect?

Emailing me how to access these test systems would be appreciated, thanks!

Comment 10 Martin Pitt 2018-01-23 10:33:30 UTC
Created attachment 1384789 [details]
host key dialog for adding host2 to host1

I can confirm this on your test systems. When trying to add host1 to host2's dashboard, I get the above error, and journal shows:

Jan 23 05:23:34 bootp-73-131-222.rhts.eng.pek2.redhat.com cockpit-ssh[60735]: cockpit-ssh 10.73.131.63: -1 couldn't connect: Timeout connecting to 10.73.131.63 '10.73.131.63' '22'

The same happens with ssh:

[host2] # ssh -vv 10.73.131.63  # host 1 IP
[...]
debug1: Connecting to 10.73.131.63 [10.73.131.63] port 22.

and it hangs. Apparently there's some firewall blocking this? On the hosts themselves `firewall-cmd --state` says "not running" and `iptables -L` shows that it's (mostly) open, so it's something with the routing in between.

The other way around it works, from host1 I can ssh to host2. So I removed /etc/ssh/ssh_known_hosts on host1 (to make sure the fingerprint is unknown) and added host2 to host1's cockpit dashboard. Then I get the expected "unknown host key" dialog without an extra warning or error (see attached screenshot), can confirm it, and host2 appears on the dashboard as expected.

Thus so far things seem as expected?
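The journal message quoted above is enough to tell a blocked connection apart from a host-key problem. As a purely illustrative sketch (this helper is not part of Cockpit; the patterns are taken from the messages seen in this report):

```shell
#!/bin/bash
# Illustrative helper, not part of Cockpit: classify a cockpit-ssh journal
# line to distinguish a firewall/routing timeout from a host-key problem.
classify_ssh_error() {
    case "$1" in
        *"Timeout connecting"*)    echo "network: firewall or routing" ;;
        *"Host key verification"*) echo "host key mismatch" ;;
        *)                         echo "other" ;;
    esac
}

# The line quoted from host2's journal above:
line="cockpit-ssh 10.73.131.63: -1 couldn't connect: Timeout connecting to 10.73.131.63 '10.73.131.63' '22'"
classify_ssh_error "$line"   # prints: network: firewall or routing
```

A timeout like this one points at the network path (firewall or routing), which is why the next step was plain `ssh -vv` rather than touching any host keys.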

Comment 11 Wei Wang 2018-01-23 11:30:35 UTC
I found that /etc/ssh/ssh_known_hosts differs between the two hosts:
host1
[root@dell-per730-34 ssh]# cat ssh_known_hosts
10.73.131.65 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLDPMAakHUt8CtiiQJeVVcQlOIhqaNpEVEZ2xIjYMOE

host2
[root@bootp-73-131-222 ssh]# cat ssh_known_hosts
10.73.73.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFRdqFCFl8WXvyILXyzp4RNlVQDJwuuTH7QP0ABCWCFs

Should host2 have an entry for 10.73.131.63? Does that mean the host key for 10.73.131.63 was never added? Is that the reason?

Comment 12 Martin Pitt 2018-01-23 11:42:12 UTC
It's normal that ssh_known_hosts are different on different machines. For example, a host would rarely have its own key there, just the keys from remote machines that you add to Cockpit (or in ssh) after confirming their keys.

I don't know what 10.73.73.15 is, it's not host1 or host2, so it's unrelated and shouldn't matter here.

> should it have 10.73.131.63 on host2? 

*If* ssh from host2 to host1 would actually work, then ssh_known_hosts will get host1's key after confirming it in cockpit (or again, on the ssh command line). However, as the port is blocked by the firewall, it never gets that far.
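To check whether a host already has an entry in the file, a minimal sketch using the host1 entry quoted in comment 11 (the real tool for this is `ssh-keygen -F HOST -f FILE`; this simple check ignores comma-separated and hashed hostnames, which known_hosts entries may also use):

```shell
#!/bin/bash
# Minimal sketch: does FILE contain a plain (unhashed) entry for HOST?
# Real setups should prefer: ssh-keygen -F HOST -f FILE
has_host_key() {
    awk -v h="$2" '$1 == h { found = 1 } END { exit !found }' "$1"
}

f=$(mktemp)
# host1's ssh_known_hosts as quoted above
echo '10.73.131.65 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLDPMAakHUt8CtiiQJeVVcQlOIhqaNpEVEZ2xIjYMOE' > "$f"

has_host_key "$f" 10.73.131.65 && echo "10.73.131.65: known"
has_host_key "$f" 10.73.131.63 || echo "10.73.131.63: unknown, dialog will ask"
rm -f "$f"
```

This matches the behaviour described above: the key for a remote host only appears in the file after the connection succeeds and the fingerprint is confirmed.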

Comment 13 Wei Wang 2018-01-24 03:25:52 UTC
(In reply to Martin Pitt from comment #12)
> It's normal that ssh_known_hosts are different on different machines. For
> example, a host would rarely have its own key there, just the keys from
> remote machines that you add to Cockpit (or in ssh) after confirming their
> keys.
> 
> I don't know what 10.73.73.15 is, it's not host1 or host2, so it's unrelated
> and shouldn't matter here.
> 
> > should it have 10.73.131.63 on host2? 
> 
> *If* ssh from host2 to host1 would actually work, then ssh_known_hosts will
> get host1's key after confirming it in cockpit (or again, on the ssh command
> line). However, as the port is blocked by the firewall, it never gets that
> far.

Hi Pitt,
I understand now. You are completely right, the problem is the routing.
I have found the key point of the new issue. Host2 had an additional NIC (em2) come up automatically after rebooting, so the default route went through em2. I shut down em2 and changed the default route to go through rhevm. Then I tried to add the remote hosts on the cockpit UI again. Both of them can be added successfully.
Thank you very much for your detailed guidance!

Now each host can be added to the other's cockpit UI, and no "Untrusted host" error message displays, so the original problem is gone.
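The root cause (a second NIC grabbing the default route) shows up directly in `ip route show default`. A small illustrative parser, with the fix commands shown only as comments since they change live network state (`em2` and `rhevm` are the device names from this report; the gateway address is made up):

```shell
#!/bin/bash
# Illustrative: extract the outgoing device from an `ip route show default`
# line, to spot a stray NIC (like em2 here) carrying the default route.
default_dev() {
    awk '$1 == "default" { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}

# Sample line resembling host2's broken state (gateway is illustrative):
echo "default via 10.73.131.254 dev em2 proto dhcp" | default_dev   # prints: em2

# The actual fix, as described in this comment (do not run blindly):
#   nmcli device disconnect em2    # take the stray NIC down
#   ip route show default          # verify the route now goes via rhevm
```

If the printed device is not the interface that can reach the peer host, SSH (and therefore Cockpit's dashboard connection) will time out exactly as seen in comment 10.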

Comment 14 Martin Pitt 2018-01-24 07:22:20 UTC
Thanks for confirming!

