Bug 1378850 - [RFE] Deployment: cannot install using a key from a vm [NEEDINFO]
Summary: [RFE] Deployment: cannot install using a key from a vm
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.7.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: GA
Target Release: cfme-future
Assignee: Loic Avenel
QA Contact: Einat Pacifici
URL:
Whiteboard: container
Depends On:
Blocks:
 
Reported: 2016-09-23 11:22 UTC by Dafna Ron
Modified: 2018-01-05 23:50 UTC (History)
10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-05 15:58:33 UTC
Category: ---
Cloudforms Team: Container Management
Target Upstream Version:
dajohnso: needinfo? (epacific)


Attachments (Terms of Use)
evm log (4.14 MB, application/x-gzip)
2016-09-23 11:22 UTC, Dafna Ron

Description Dafna Ron 2016-09-23 11:22:03 UTC
Created attachment 1204105 [details]
evm log

Description of problem:

When we select "Specify a list of machines to deploy on (No existing provider)" and are asked to enter a "Private SSH Key", we can only add a local key (i.e., browse the local machine and add a key from it).

Version-Release number of selected component (if applicable):

Using upstream master/origin hash b2414903aa11, with Alon's updates merged.

How reproducible:

100%

Steps to Reproduce:
1. Compute -> Containers -> Providers
2. Configuration -> Create Containers Provider
3. Select "Specify a list of machines to deploy on (No existing provider)"


Actual results:

1. In "SSH private key" we can only browse the local machine.
2. I tried adding the location of a file on a remote machine (which is what I would expect), but it fails.
3. There is no place to add a password (another sign this is intended only for the local machine).


Expected results:

In an organization there are many sysadmins, and I think we should allow adding a remote key rather than only a key from the local machine.

There is also the question of whether, if I add a personal user, it has to have access and sudo on all OpenShift servers.


Additional info:


[----] E, [2016-09-23T11:08:57.376739 #672:3fb70510d154] ERROR -- : Q-task_id([automation_task_2]) <AutomationEngine> <AEMethod check_ssh> [Could not parse PKey: no start line]
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/key_factory.rb:77:in `read'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/key_factory.rb:77:in `load_data_private_key'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/key_manager.rb:228:in `block in load_identities'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/key_manager.rb:217:in `map'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/key_manager.rb:217:in `load_identities'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/key_manager.rb:117:in `each_identity'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/methods/publickey.rb:19:in `authenticate'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/session.rb:79:in `block in authenticate'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/session.rb:66:in `each'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh/authentication/session.rb:66:in `authenticate'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/net-ssh-3.2.0/lib/net/ssh.rb:236:in `start'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/linux_admin-0.18.0/lib/linux_admin/ssh.rb:38:in `execute_commands'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/gems/2.3.0/gems/linux_admin-0.18.0/lib/linux_admin/ssh.rb:20:in `perform_commands'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/app/models/container_deployment/automate.rb:117:in `check_connection'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:280:in `public_send'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:280:in `block in object_send'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:299:in `ar_method'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:309:in `ar_method'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:278:in `object_send'
(druby://127.0.0.1:42479) /var/www/miq/vmdb/lib/miq_automation_engine/engine/miq_ae_service_model_base.rb:122:in `block (2 levels) in expose'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/2.3.0/drb/drb.rb:1624:in `perform_without_block'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/2.3.0/drb/drb.rb:1584:in `perform'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/2.3.0/drb/drb.rb:1657:in `block (2 levels) in main_loop'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/2.3.0/drb/drb.rb:1653:in `loop'
(druby://127.0.0.1:42479) /opt/rubies/ruby-2.3.1/lib/ruby/2.3.0/drb/drb.rb:1653:in `block in main_loop'
<code: $evm.root['container_deployment'].check_connection>:7:in `check_ssh'
<code: check_ssh>:13:in `<main>'
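
The "Could not parse PKey: no start line" error in the trace comes from OpenSSL's PEM parser (which net-ssh's key_factory delegates to): it expects the key *contents*, starting with a "-----BEGIN ...-----" header, not a file path. A minimal sketch illustrating the failure mode described above; `load_private_key` is a hypothetical helper for illustration, not actual CFME code:

```ruby
require "openssl"

# If the form field value is a filesystem path (or any non-PEM text)
# rather than the PEM-encoded key body, parsing fails because the text
# lacks the "-----BEGIN ...-----" header ("no start line").
def load_private_key(field_value)
  OpenSSL::PKey.read(field_value)
rescue OpenSSL::PKey::PKeyError => e
  "parse error: #{e.message}"
end

# Pasting a remote path, as attempted in this bug, triggers the error:
load_private_key("/root/.ssh/id_rsa")   # => "parse error: ..."

# Pasting the actual PEM body parses successfully:
key = OpenSSL::PKey::RSA.new(2048)
load_private_key(key.to_pem)            # => an OpenSSL::PKey::RSA instance
```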

Comment 1 Dave Johnson 2017-07-14 03:48:33 UTC
Please assess the importance of this issue and update the priority accordingly. It was missed somewhere in the bug triage process. Please refer to https://bugzilla.redhat.com/page.cgi?id=fields.html#priority for a reminder of each priority's definition.

If it's something like a tracker bug where it doesn't matter, please set it to Low/Low.

Comment 2 Barak 2017-12-31 12:20:01 UTC
This bug is about deploying an OpenShift cluster through CFME. That feature was shelved a long time ago.
Moving to PM for further handling (close, or prioritize for the future).

