Bug 1569119

Summary: Cannot ssh into the launched instance, despite being able to reach port 22 and Ping
Product: Red Hat OpenStack
Reporter: karan singh <karan>
Component: openstack-nova
Assignee: OSP DFG:Compute <osp-dfg-compute>
Status: CLOSED NOTABUG
QA Contact: OSP DFG:Compute <osp-dfg-compute>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 12.0 (Pike)
CC: berrange, dasmith, eglynn, jhakimra, karan, kchamart, sbauza, sferdjao, sgordon, srevivo, vromanso
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-19 18:40:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Image-1 (flags: none)
You can compare the SSH keys on the instance with the public key (in the logs); both are the same, as they should be (flags: none)

Description karan singh 2018-04-18 15:56:05 UTC
Created attachment 1423651 [details]
Image-1

Description of problem:
After a fresh deployment of OSP-12 and RHCS, when an instance is created using either the CLI or the GUI, it is not possible to SSH into the instance, either with a keypair or with a password. Console login works fine (tested with Cirros).

Version-Release number of selected component (if applicable):
OSP-12

How reproducible:
Deploy OSP-12 with Ceph, create the standard networks, image, security group and flavour, then launch an instance using Cirros, Fedora or RHEL 7.5.

Steps to Reproduce:
1. Deploy OSP-12 with Ceph
2. Create a network, image (Cirros, Fedora, RHEL 7.5), flavour, floating IP and security group
3. Create a server
4. SSH to the instance using the floating IP, providing either the key or the password (in the case of Cirros); see the CLI sketch below
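
A minimal CLI sketch of steps 2-4, assuming the default security group is used (resource names such as private-net, ext-net, demo-key, test-vm and the image file are placeholders, not taken from this environment):

  $ openstack network create private-net
  $ openstack subnet create --network private-net --subnet-range 192.168.100.0/24 private-subnet
  $ openstack router create demo-router
  $ openstack router set demo-router --external-gateway ext-net
  $ openstack router add subnet demo-router private-subnet
  $ openstack image create --disk-format qcow2 --container-format bare --file cirros-0.4.0-x86_64-disk.img cirros
  $ openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
  $ openstack security group rule create --protocol icmp default
  $ openstack security group rule create --protocol tcp --dst-port 22 default
  $ openstack keypair create demo-key > demo-key.pem && chmod 600 demo-key.pem
  $ openstack server create --image cirros --flavor m1.tiny --network private-net --key-name demo-key test-vm
  $ openstack floating ip create ext-net
  $ openstack server add floating ip test-vm <FLOATING_IP>
  $ ssh -i demo-key.pem cirros@<FLOATING_IP>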

Actual results:

Unable to SSH into the instance.

Expected results:

Able to SSH into the instance, either via the SSH key or the password (in the case of Cirros).

Additional info:

As of now I have tried:
- Cirros, RHEL 7.5 and Fedora images
- A manually imported keypair
- A freshly created keypair (ssh-keygen)
- Using the Cirros default credentials (cirros/gocubsgo, version cirros-0.4.0) I could log in from the console and could see that the authorized_keys file is populated with the correct public key, but I still cannot SSH into the node
- The SSH service is running on the instance
- The instance is reachable on port 22 and responds to ping
- Entering the cirros user's default password (gocubsgo) at the SSH prompt does NOT allow login either
 
None of the above worked so far.
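
A sketch of how these checks can be reproduced from both sides (demo-key.pem and <FLOATING_IP> are placeholders):

  # Client side: verbose output shows whether the key is offered and why it is rejected
  $ ssh -vvv -i demo-key.pem cirros@<FLOATING_IP>
  # Confirm port 22 is actually reachable
  $ nc -vz <FLOATING_IP> 22
  # From the instance console: confirm the injected key and the sshd settings
  $ cat ~/.ssh/authorized_keys
  $ sudo grep -E 'PasswordAuthentication|PubkeyAuthentication' /etc/ssh/sshd_config   # on the RHEL/Fedora guests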
 
See the attached screenshots from the Horizon console:
Image 1: Cirros instance with the public key created using the “create keypair” method
Image 2: Cirros instance with the public key added using the “import keypair” method. This matches the value from the logs.
 
Detailed Logs https://pastebin.com/raw/nMFfwdiA

Comment 1 karan singh 2018-04-18 15:58:11 UTC
Created attachment 1423652 [details]
You can compare the SSH keys on the instance with the public key (in the logs); both are the same, as they should be


Comment 2 Artom Lifshitz 2018-04-19 18:34:10 UTC
Hello,

Thanks for the well written bug report!

SSH login is tested extensively in CI, so my honest opinion is that this is unlikely to be a bug.

We can see what's going on from the client's side; would it be possible to do the same from inside the VM (i.e. the sshd service you're attempting to connect to)? I realise this isn't easy to do with just console access, but maybe it would be possible to upload the VM's /var/log/syslog (or /var/log/messages, I can never remember which is which) somewhere? I'd also like to see the output of 'ifconfig', 'uname -a', and 'ls -l /home'. Essentially, I'm looking to make sure that the client is connecting to the correct VM, on the correct IP, with the correct username for that VM.
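
For reference, that information could be gathered from the instance console with something like the following (the log path varies by guest image):

  $ uname -a
  $ ifconfig -a
  $ ls -l /home
  $ sudo tail -n 200 /var/log/messages   # RHEL/Fedora; Debian-based guests use /var/log/syslog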

Cheers!

Comment 3 karan singh 2018-04-19 18:40:51 UTC
Just 10 minutes ago I managed to fix it. It was an issue in the external network configuration. Sorry to bug you with this BZ.

Happy to close this.
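
For anyone who hits the same symptom, the external-network side can be sanity-checked with commands along these lines (ext-net, ext-subnet and demo-router are placeholder names, not taken from this deployment):

  $ openstack network show ext-net -c "router:external" -c "provider:network_type"
  $ openstack subnet show ext-subnet -c gateway_ip -c allocation_pools
  $ openstack router show demo-router -c external_gateway_info
  $ openstack floating ip list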