Bug 1339068

Summary: [ceph-ansible] Ubuntu ISO installation failing with "gpg: no valid OpenPGP data found"
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Tejas <tchandra>
Component: ceph-ansible
Assignee: Alfredo Deza <adeza>
Status: CLOSED ERRATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, kdreyer, nthomas, racpatel, sankarshan, tchandra
Target Milestone: ---
Target Release: 2
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version: ceph-ansible-1.0.5-16.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 19:51:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Tejas 2016-05-24 05:49:23 UTC
Description of problem:
During an ISO installation on Ubuntu, I am seeing the error
"gpg: no valid OpenPGP data found"
in the task "install the rh ceph storage repository key".

Version-Release number of selected component (if applicable):
ceph-ansible: 1.0.5-15.el7scon.noarch


How reproducible:
Always

Steps to Reproduce:
1. Perform the necessary prerequisites on the nodes (firewall ports, etc.).
2. Configure the installer node and start the installation (a minimal invocation is sketched below).
3. The installation fails, even though the repo content is populated on the remote node.
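
A minimal sketch of step 2, run on the installer node. The playbook name is inferred from the retry file shown in the log below (/root/site.sample.retry), and the inventory file name "hosts" is an assumption:

# Playbook and inventory names are assumptions, not confirmed in this report.
cd /root
ansible-playbook -i hosts site.sample.yml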

Actual results:
The installation fails.

Expected results:
The installation is expected to complete successfully.

Additional info:

TASK: [ceph.ceph-common | install ceph-test] ********************************** 
skipping: [magna052]

TASK: [ceph.ceph-common | install rados gateway] ****************************** 
skipping: [magna052]

TASK: [ceph.ceph-common | install ceph mds] *********************************** 
skipping: [magna052]

TASK: [ceph.ceph-common | install the rh ceph storage repository key] ********* 
failed: [magna052] => {"cmd": "apt-key add -", "failed": true, "rc": 2}
stderr: gpg: no valid OpenPGP data found.

msg: gpg: no valid OpenPGP data found.

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/site.sample.retry

magna052                   : ok=21   changed=6    unreachable=0    failed=1  
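
The "no valid OpenPGP data found" message means apt-key was fed input that is not an OpenPGP key at all (typically an empty stream or a file that is not actually a key). A quick check on the remote node of whatever file the task pipes into apt-key, using an assumed key path purely for illustration:

# Hypothetical key location; substitute whatever the failing task actually reads.
KEY=/tmp/rh-storage-repo/RPM-GPG-KEY-redhat-release
file "$KEY"         # a valid armored key is reported as a PGP public key block
head -c 100 "$KEY"  # and begins with -----BEGIN PGP PUBLIC KEY BLOCK-----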



The remote node has the ISO content copied to /tmp/rh-storage-repo:
root@magna052:/tmp/rh-storage-repo# ll
total 60
drwxr-xr-x 6 root root  4096 May 24 05:25 ./
drwxrwxrwt 9 root root  4096 May 24 05:27 ../
dr-xr-xr-x 6 root root  4096 May 24 05:25 Agent/
-r--r--r-- 1 root root  8775 May 24 05:25 EULA
-r--r--r-- 1 root root 18092 May 24 05:25 GPL
dr-xr-xr-x 6 root root  4096 May 24 05:25 MON/
dr-xr-xr-x 6 root root  4096 May 24 05:25 OSD/
-r--r--r-- 1 root root   165 May 24 05:25 README
dr-xr-xr-x 6 root root  4096 May 24 05:25 Tools/
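
No key file is visible at the top level of the repository path above, so it may be worth searching the copied content for one; the name patterns below are only guesses:

# Look for anything resembling a signing key under the copied ISO content.
find /tmp/rh-storage-repo \( -iname '*gpg*' -o -iname '*key*' \) -ls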

The group_vars files are located at:
magna006://root/bz/gpg/

ceph_stable_rh_storage: true
#ceph_stable_rh_storage_cdn_install: true # assumes all the nodes can connect to cdn.redhat.com
ceph_stable_rh_storage_iso_install: true # usually used when nodes don't have access to cdn.redhat.com
ceph_stable_rh_storage_iso_path: /tmp/Ceph-2-Ubuntu-x86_64-20160519.t.0-dvd.iso
#ceph_stable_rh_storage_iso_path: /tmp/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso
#ceph_stable_rh_storage_mount_path: /tmp/rh-storage-mount
#ceph_stable_rh_storage_repository_path: /tmp/rh-storage-repo # where to copy iso's content

Comment 2 Alfredo Deza 2016-05-24 12:29:09 UTC
Upstream pull request opened: https://github.com/ceph/ceph-ansible/pull/809

Comment 6 Tejas 2016-06-14 10:25:32 UTC
Verified in build:
ceph-ansible-1.0.5-19.el7scon.noarch
ceph version 10.2.1-8redhat1xenial (0abd18aa5da0dde0f01404d8ac10876cb3691bb3)

Moving to verified state.

Comment 8 errata-xmlrpc 2016-08-23 19:51:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754