Bug 1440580
Summary: "Untrusted host" error message displays when adding remote host on cockpit dashboard

Product: Red Hat Enterprise Linux 7
Component: cockpit
Version: 7.3
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: unspecified
Reporter: Wei Wang <weiwang>
Assignee: Martin Pitt <mpitt>
QA Contact: qe-baseos-daemons
CC: bugs, cshao, huzhao, leiwang, mpitt, qiyuan, rbarry, weiwang, yaniwang, ycui, yisong
Target Milestone: pre-dev-freeze
Target Release: ---
Keywords: Extras
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2018-01-24 07:22:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Blocks: 1447254
Description
Wei Wang
2017-04-10 02:55:18 UTC
Created attachment 1270356 [details]
log files
I am unable to reproduce. Have you tried with the latest Cockpit releases? Or can you give me access to an environment where you are able to reproduce it? The generic dialog with the fingerprint just means that the user you logged into Cockpit with has never used ssh to connect to the system you added. In other words, this mirrors ssh's fingerprint confirmation question, which is quite deliberate. I cannot reproduce the "Untrusted host" error message in that dialog, though; I've tried with the current RHEL 7.4 image. Do you still get it there?

Downgrading severity, as this is just about an extra warning message, which isn't actually wrong either.

Retest of this bug: the error message "Cockpit could not contact the given host." displays when adding host1 to host2 on host2's Cockpit dashboard (see picture1).

Test Version:

host1 (updated RHV-H: rhev-hypervisor6-6.8-20160707.3.el6ev --> redhat-virtualization-host-3.6-20170404.0 --> redhat-virtualization-host-4.2-20180115.0):
cockpit-ws-157-1.el7.x86_64
cockpit-dashboard-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-system-157-1.el7.noarch
cockpit-ovirt-dashboard-0.11.4-0.1.el7ev.noarch
vdsm-4.20.13-1.el7ev.x86_64
imgbased-1.0.6-0.1.el7ev.noarch

host2 (updated RHV-H: redhat-virtualization-host-3.6-0.20180103.0 --> redhat-virtualization-host-4.2-20180115.0):
cockpit-ws-157-1.el7.x86_64
cockpit-dashboard-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-storaged-157-1.el7.noarch
cockpit-system-157-1.el7.noarch
cockpit-ovirt-dashboard-0.11.4-0.1.el7ev.noarch
vdsm-4.20.13-1.el7ev.x86_64
imgbased-1.0.6-0.1.el7ev.noarch

Test Steps:
1. Install and update host1 and host2 according to bug 1421098 comment #15
2. Log in to the Cockpit UI on host1 and host2 with the root account
3. Add host1 to host2 on host2's dashboard
4. Check the result

Actual results:
The error message "Cockpit could not contact the given host." displays; adding remote host1 to host2's Cockpit dashboard fails.
Expected results:
No error message displays, and the remote host is added successfully.

More Info:
Adding host2 to host1 on host1's Cockpit dashboard succeeds.

mpitt@ The reproduced environment will be held for one day. If you need the environment, I can send it to you via email.

Created attachment 1384693 [details]
picture1
Created attachment 1384695 [details]
20180123log_files
@weiwang: This is now a rather different error message, and not just cosmetic. It seems it doesn't allow you to confirm the fingerprint now, and just plainly fails to connect? Emailing me how to access these test systems would be appreciated, thanks!

Created attachment 1384789 [details]
host key dialog for adding host2 to host1
I can confirm this on your test systems. When trying to add host1 to host2's dashboard, I get the above error, and journal shows:
Jan 23 05:23:34 bootp-73-131-222.rhts.eng.pek2.redhat.com cockpit-ssh[60735]: cockpit-ssh 10.73.131.63: -1 couldn't connect: Timeout connecting to 10.73.131.63 '10.73.131.63' '22'
The same happens with ssh:
[host2] # ssh -vv 10.73.131.63 # host 1 IP
[...]
debug1: Connecting to 10.73.131.63 [10.73.131.63] port 22.
and it hangs. Apparently there's some firewall blocking this? On the hosts themselves `firewall-cmd --state` says "not running" and `iptables -L` shows that it's (mostly) open, so it must be something in the routing between them.
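A quick way to test raw TCP reachability of the ssh port, independent of ssh itself, is bash's built-in `/dev/tcp` pseudo-device. A minimal sketch (the 5-second timeout is an arbitrary choice, not anything Cockpit uses); a firewall or routing problem shows up as a timeout rather than an immediate "connection refused":

```shell
# Sketch: probe tcp/22 on a remote host without ssh, nc, or nmap.
check_ssh_port() {
    local host=$1
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/22" 2>/dev/null; then
        echo "$host: port 22 reachable"
    else
        echo "$host: port 22 NOT reachable (blocked, filtered, or down)"
    fi
}

# Example (host1's IP from this bug; the probe timed out there):
# check_ssh_port 10.73.131.63
```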
The other way around works: from host1 I can ssh to host2. So I removed /etc/ssh/ssh_known_hosts on host1 (to make sure the fingerprint is unknown) and added host2 to host1's Cockpit dashboard. I then get the expected "unknown host key" dialog without an extra warning or error (see the attached screenshot), can confirm it, and host2 appears on the dashboard as expected.
So far, things seem to behave as expected?
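The fingerprint shown in that host-key dialog is the same SHA256 fingerprint ssh derives from the remote host's public key. A minimal offline sketch of how it is computed, using a throwaway key rather than a real /etc/ssh/ssh_host_ed25519_key.pub:

```shell
# Sketch: compute the fingerprint a host-key dialog would display.
# The generated key is a throwaway stand-in for the remote host's key.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/demo_host_key"

# Print the SHA256 fingerprint of the public key,
# e.g. "256 SHA256:<base64> <comment> (ED25519)"
ssh-keygen -lf "$tmp/demo_host_key.pub"

rm -rf "$tmp"
```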
I find that /etc/ssh/ssh_known_hosts differs between the two hosts:

host1:
[root@dell-per730-34 ssh]# cat ssh_known_hosts
10.73.131.65 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLDPMAakHUt8CtiiQJeVVcQlOIhqaNpEVEZ2xIjYMOE

host2:
[root@bootp-73-131-222 ssh]# cat ssh_known_hosts
10.73.73.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFRdqFCFl8WXvyILXyzp4RNlVQDJwuuTH7QP0ABCWCFs

Should host2 have an entry for 10.73.131.63? Does this mean the host key for 10.73.131.63 was never added? Is that the reason?

It's normal for ssh_known_hosts to differ between machines. For example, a host would rarely have its own key there, just the keys of the remote machines that you add to Cockpit (or connect to with ssh) after confirming their keys.
I don't know what 10.73.73.15 is, it's not host1 or host2, so it's unrelated and shouldn't matter here.
> should it have 10.73.131.63 on host2?
*If* ssh from host2 to host1 actually worked, then ssh_known_hosts would get host1's key after you confirm it in Cockpit (or, again, on the ssh command line). However, as the port is blocked by the firewall, it never gets that far.
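How ssh (and hence Cockpit) decides whether a host's key is already trusted can be checked offline with `ssh-keygen -F`. A sketch using a throwaway key and a temporary known_hosts file (on the real hosts the file in question is /etc/ssh/ssh_known_hosts):

```shell
# Sketch: how a known_hosts entry is recorded and then queried.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/hostkey"

# Simulate what confirming the host-key dialog does: record host1's key
printf '10.73.131.63 %s\n' "$(cut -d' ' -f1-2 "$tmp/hostkey.pub")" \
    >> "$tmp/ssh_known_hosts"

# A recorded host is found (exit 0); an unknown one is not (exit 1)
ssh-keygen -F 10.73.131.63 -f "$tmp/ssh_known_hosts" >/dev/null \
    && echo "10.73.131.63: known"
ssh-keygen -F 10.73.131.99 -f "$tmp/ssh_known_hosts" >/dev/null \
    || echo "10.73.131.99: unknown"

rm -rf "$tmp"
```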
(In reply to Martin Pitt from comment #12)
> It's normal that ssh_known_hosts are different on different machines. [...]
> *If* ssh from host2 to host1 would actually work, then ssh_known_hosts will
> get host1's key after confirming it in cockpit (or again, on the ssh command
> line). However, as the port is blocked by the firewall, it never gets that
> far.

Hi Pitt, I understand now. You are completely right, the problem is the routing. I have found the key point of the new issue: host2 had an additional NIC (em2) come up automatically after rebooting, so the default route went through em2. I shut down em2 and changed the default route back to rhevm. Then I tried to add the remote hosts in the Cockpit UI again, and both of them could be added successfully. Thank you very much for your detailed guidance!

Now both hosts can be added to the other's Cockpit UI, and no "Untrusted host" error message displays, so the original problem is gone.

Thanks for confirming!
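The kind of misrouting found above can be diagnosed with iproute2 before touching any NICs. A sketch of the checks and of the fix described in this comment; the interface names em2 and rhevm come from this bug, while the gateway address 10.73.131.254 is made up purely for illustration:

```shell
# 1. Which route would traffic to host1 actually take?
ip route get 10.73.131.63 || echo "no route to 10.73.131.63"

# 2. Show the default route(s); here the problem was a default route via em2
ip route show default

# 3. The fix applied in this bug: take em2 down and route via rhevm instead.
#    Illustrative commands with a made-up gateway; do not run blindly.
# ip link set em2 down
# ip route replace default via 10.73.131.254 dev rhevm
```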