Bug 1338551 - [ceph-ansible]: Installation via ISO fails on RHEL cluster saying Destination Directory does not exist
Summary: [ceph-ansible]: Installation via ISO fails on RHEL cluster saying Destination...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Alfredo Deza
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-22 17:30 UTC by Rachana Patel
Modified: 2016-08-23 19:51 UTC (History)
10 users

Fixed In Version: ceph-ansible-1.0.5-24.el7cp
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:51:37 UTC
Embargoed:


Attachments
Play book log latest (8.56 KB, text/plain)
2016-07-15 07:16 UTC, Tejas


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1355762 0 unspecified CLOSED ceph_stable_rh_storage_iso_path should not be created as directory 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2016:1754 0 normal SHIPPED_LIVE New packages: Red Hat Storage Console 2.0 2017-04-18 19:09:06 UTC

Internal Links: 1355762

Description Rachana Patel 2016-05-22 17:30:05 UTC
Description of problem:
======================
Installation via ISO fails on a RHEL cluster saying the destination directory does not exist.


TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
failed: [magna057] => {"checksum": "bd27df2f1fee8d3e4285e40eac8b08861eeb2102", "failed": true}
msg: Destination directory /root/iso does not exist
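
For context, Ansible's copy module (which is what a task like this typically uses to push the ISO from the ansible server to the remote nodes) does not create a missing destination directory; if the directory part of 'dest' is absent on the remote host, the task fails with exactly this message. A minimal sketch of such a task, assuming the copy module; the play header and host group are illustrative, not the actual ceph-ansible code:

    # minimal sketch, not the actual ceph-ansible task
    - hosts: all
      remote_user: root
      tasks:
        - name: fetch the red hat storage iso from the ansible server
          # copy pushes the file to the remote node but does NOT create
          # /root/iso there; a missing directory produces
          # "Destination directory /root/iso does not exist"
          copy:
            src: /root/iso/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso
            dest: /root/iso/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso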



Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-1.0.5-15.el7scon.noarch
ceph - 10.2.1-6.el7cp.x86_64



How reproducible:
=================
always


Steps to Reproduce
==================
1. perform prerequisite on all nodes
2. modify inventory file as below
[mons]
magna067

[osds]
magna067
magna074
magna084

[mdss]
magna100

3. copy iso on installer node and run below command from installer node

4.[root@magna051 ceph-ansible]#  ansible-playbook site.yml -vv -i  /etc/ansible/hosts  --extra-vars '{"ceph_stable": true, "ceph_stable_rh_storage": true, "monitor_interface": "eno1", "ceph_stable_rh_storage_iso_install": true,"ceph_stable_rh_storage_iso_path": "/root/iso/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc"], "journal_size": 100, "public_network": "xxx", "cephx": true,



Actual results:
===============
Installation fails because the nodes in the cluster do not have the directory path given in 'ceph_stable_rh_storage_iso_path'.

TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
failed: [magna057] => {"checksum": "bd27df2f1fee8d3e4285e40eac8b08861eeb2102", "failed": true}
msg: Destination directory /root/iso does not exist


Expected results:
================
ceph-ansible should create the directory on the destination nodes,
or
the installation documentation should be updated with instructions to create the same directory path on all cluster nodes.
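
One way to implement the first option is a task that creates the directory on the destination nodes before the ISO is fetched. A minimal sketch, assuming Ansible's file module (an illustration only, not the fix that was eventually merged):

    # minimal sketch of pre-creating the destination directory on the remote nodes
    - name: ensure the iso destination directory exists
      file:
        path: /root/iso        # i.e. the directory part of ceph_stable_rh_storage_iso_path
        state: directory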


Additional info:

Comment 3 Alfredo Deza 2016-05-23 11:34:43 UTC
What are the contents of the group_vars/all file? The /root/iso path seems odd because it means that:

1) the ISO would need to be mounted where ansible is being run at /root/iso
2) the group_vars/all file would specify ceph_stable_rh_storage_mount_path=/root/iso
3) the remote user for all the destination servers will be 'root', which means it will have access to write to /root/iso

The default is to use /tmp/rh-storage-mount so something is not right.

It would be useful to always attach the group_vars file being used.

Comment 4 Tejas 2016-05-23 12:09:32 UTC
Hi Alfredo,

   I just tried the same with these settings:

ceph_stable_rh_storage: true
#ceph_stable_rh_storage_cdn_install: true # assumes all the nodes can connect to cdn.redhat.com
ceph_stable_rh_storage_iso_install: true # usually used when nodes don't have access to cdn.redhat.com
ceph_stable_rh_storage_iso_path: /tmp/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso
#ceph_stable_rh_storage_mount_path: /tmp/rh-storage-mount
#ceph_stable_rh_storage_repository_path: /tmp/rh-storage-repo # where to copy iso's content

and the ISO contents are copied to the remote node:

[root@magna080 ~]# ls -l /tmp/rh-storage-repo/
total 56
-r--r--r--. 1 root root  8774 May 23 12:03 EULA
-r--r--r--. 1 root root 18092 May 23 12:03 GPL
dr-xr-xr-x. 3 root root  4096 May 23 12:03 MON
dr-xr-xr-x. 3 root root  4096 May 23 12:03 OSD
-r--r--r--. 1 root root   165 May 23 12:03 README
-r--r--r--. 1 root root  3211 May 23 12:03 RPM-GPG-KEY-redhat-release
dr-xr-xr-x. 3 root root  4096 May 23 12:03 Tools
-r--r--r--. 1 root root  1976 May 23 12:03 TRANS.TBL

But it still fails with:
TASK: [ceph.ceph-common | install distro or red hat storage ceph mon] ********* 
failed: [magna080] => {"changed": false, "failed": true, "rc": 0, "results": []}
msg: No Package matching 'ceph-mon' found available, installed or updated

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/site.sample.retry

whereas ceph-mon is clearly available on magna080
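
A quick way to check what the node actually sees, assuming yum on the failing node (commands only, output omitted):

    # list whether ceph-mon is visible in any enabled repository
    yum list available ceph-mon
    # list the repositories yum has enabled (the copied ISO content only helps
    # if a corresponding .repo file points at it)
    yum repolist enabled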

Comment 5 Alfredo Deza 2016-05-23 12:39:32 UTC
@Tejas: This looks like a different issue; a "cdn" install excludes an "iso" install. It is either one or the other.

This also seems unrelated to the description for this BZ. Can you please open a separate ticket for it and include the full output of the ansible play?

Comment 6 Alfredo Deza 2016-05-23 12:41:09 UTC
Closing this as it looks like a misconfiguration.

@rachana, feel free to re-open if you can reproduce with the correct ISO mount paths and configuration as described in comment #3

Comment 7 Rachana Patel 2016-05-25 00:08:42 UTC
Reopening the bug; reasoning:

Doc: https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-red-hat-enterprise-linux/#configuring_ceph_global_settings


Following the steps mentioned in the above document, the installation fails each time the '<dir path>' given in "ceph_stable_rh_storage_iso_path": "<dir path>/Ceph-2.xxx.iso" does not exist on the cluster nodes.

It fails in both cases: a) parameters given on the command line, and b) values modified in the file.

So either this is a documentation bug and we need to change the steps in the document, or we need to change ceph-ansible to create <dir path> on the destination if it doesn't exist.

(running it as root, so there should not be any permission issues)
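
If the documentation route is chosen, the directory can be pre-created on all cluster nodes with a single ad-hoc command, for example (a sketch, assuming the same inventory used below):

    [root@magna051 ceph-ansible]# ansible all -i /etc/ansible/hosts -m file -a "path=/root/abc state=directory"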


Precondition:-
================
<dir path> does not exist on the cluster nodes. In the cases below, '/root/abc' does not exist.

[root@magna067 ubuntu]# ls -l /root/abc
ls: cannot access /root/abc: No such file or directory

Case a:-
========
Steps:-
0. <dir path> does not exist on the cluster nodes; here, '/root/abc' does not exist
1. do all pre-installation steps on all cluster nodes
2. do not modify/copy any of the configuration files (all, osds, etc.)
3. copy the ISO to /root/abc on the installer node

[root@magna051 ceph-ansible]# ls -l /root/abc
total 293024
-rw-r--r--. 1 root root 300056576 May 20 21:16 Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso

4. give all parameters on the command line (as mentioned in section 3.2.2 of the doc)

[root@magna051 ceph-ansible]#  ansible-playbook site.yml -vv -i  /etc/ansible/hosts  --extra-vars '{"ceph_stable": true, "ceph_stable_rh_storage": true, "monitor_interface": "eno1", "ceph_stable_rh_storage_iso_install": true,"ceph_stable_rh_storage_iso_path": "/root/abc/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc"], "journal_size": 100, "public_network": "xxxx/21", "cephx": true, "fetch_directory": "~/ubuntu-key"}'



TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
failed: [magna067] => {"checksum": "02c3adf7f8951f4c40d74d3cbe87f96ca8285911", "failed": true}
msg: Destination directory /root/abc does not exist



Case b:-
========
0. <dir path> does not exist on the cluster nodes; here, '/root/abc' does not exist
1. do all pre-installation steps on all cluster nodes
2. modify the variables as mentioned in section 3.2.2 of the doc
(please refer to the attached 'all' file)
3. copy the ISO to /root/abc on the installer node

[root@magna051 ceph-ansible]# ls -l /root/abc
total 293024
-rw-r--r--. 1 root root 300056576 May 20 21:16 Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso

and give that path for 
ceph_stable_rh_storage_iso_path: /root/abc/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso
4. run the command below

[root@magna051 ceph-ansible]#  ansible-playbook site.yml -vv -i  /etc/ansible/hosts  


TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
failed: [magna067] => {"checksum": "02c3adf7f8951f4c40d74d3cbe87f96ca8285911", "failed": true}
msg: Destination directory /root/abc does not exist



Now change the precondition:-
==========================
On the cluster nodes that path should exist,
i.e.
[root@magna074 ubuntu]# ls -l /root/abc
total 0

Case c:-
=======
0. make sure the cluster nodes have '/root/abc'
1. do all pre-installation steps on all cluster nodes
2. do not modify/copy any of the configuration files (all, osds, etc.)
3. copy the ISO to /root/abc on the installer node

[root@magna051 ceph-ansible]# ls -l /root/abc
total 293024
-rw-r--r--. 1 root root 300056576 May 20 21:16 Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso

4. give all parameters on the command line (as mentioned in section 3.2.2 of the doc)
[root@magna051 ceph-ansible]#  ansible-playbook site.yml -vv -i  /etc/ansible/hosts  --extra-vars '{"ceph_stable": true, "ceph_stable_rh_storage": true, "monitor_interface": "eno1", "ceph_stable_rh_storage_iso_install": true,"ceph_stable_rh_storage_iso_path": "/root/abc/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc"], "journal_size": 100, "public_network": "10.8.128.0/21", "cephx": true, "fetch_directory": "~/ubuntu-key"}'


TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
changed: [magna084] => {"changed": true, "checksum": "02c3adf7f8951f4c40d74d3cbe87f96ca8285911", "dest": "/root/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso", "gid": 0, "group": "root", "md5sum": "0ad19e43e8f38294cbb57b3804941d81", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:admin_home_t:s0", "size": 300056576, "src": "/root/.ansible/tmp/ansible-tmp-1464132987.86-222399651582157/source", "state": "file", "uid": 0}

TASK: [ceph.ceph-common | mount red hat storage iso file] ********************* 
<magna084> REMOTE_MODULE mount opts=ro,loop,noauto state=mounted fstype=iso9660 name=/tmp/rh-storage-mount src=/root/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso
changed: [magna084] => {"changed": true, "fstab": "/etc/fstab", "fstype": "iso9660", "name": "/tmp/rh-storage-mount", "opts": "ro,loop,noauto", "passno": 2, "src": "/root/Ceph-2-RHEL-7-20160520.t.0-x86_64-dvd.iso"}


===================================================================

In cases a and b the run fails at the task 'fetch the red hat storage iso from the ansible server' because it is trying to copy the ISO from the installer node to a destination path that mirrors the source path, and that destination path does not exist.

In case c the task does not show any failure because the destination path exists and the ISO is copied successfully.
=====================================================================

Comment 9 Rachana Patel 2016-05-25 00:10:18 UTC
Created attachment 1161241 [details]
all file

Comment 12 Alfredo Deza 2016-05-31 13:56:24 UTC
@rachana: what user are you configuring ansible to connect to the remote nodes as? (I wasn't able to find that value in the attachments.)

I believe this is failing because you are trying with the root user. Using the root user will not prevent failures, as you still need to connect to remote nodes where the remote user *may not* be root.

As I mentioned in comment #3, it is better to use a well-known path that remote users will have access to. If using /tmp/ works, I would insist this is a configuration error (and it should probably be noted in the docs).
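
For reference, the user ansible connects to the remote nodes as is typically set per host or group in the inventory (or in ansible.cfg); an illustrative inventory line, using the Ansible 1.x-era 'ansible_ssh_user' variable:

    [mons]
    magna067 ansible_ssh_user=root   # or a non-root deployment user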

Comment 13 Rachana Patel 2016-07-06 15:20:18 UTC
(In reply to Alfredo Deza from comment #12)
> @rachana: what user are you configuring ansible to connect to the remote
> nodes? (I wasn't able to find what that value was from the attachments).
> 


Able to reproduce with both: the root user and a newly created user for ceph deployment.

> I believe this is failing because you are trying with the root user. Using
> the root user will not prevent failures as you still need to connect to
> remote nodes that *may not* be root. 

The remote nodes also have the same user, and passwordless ssh has been set up.
(again, for both users: 'root' and the new user created just for ceph deployment)

> 
> As I mentioned in comment #3 it is better to use a well-known path that
> remote users will have access. If you are using /tmp/ and it works I would
> insist this is a configuration error (and should probably be noted in the
> docs).

- It works if I don't change the default location, but whenever I change the location (via a command-line argument or by modifying the value in the file) and the destination path does not exist, it fails.

Comment 14 Alfredo Deza 2016-07-06 17:21:17 UTC
Pull request opened: https://github.com/ceph/ceph-ansible/pull/871

Comment 21 Tejas 2016-07-15 07:16:52 UTC
Created attachment 1180054 [details]
Play book log latest

Comment 25 Alfredo Deza 2016-07-15 15:13:42 UTC
Tejas: your failure doesn't look like the same failure described for this ticket.

That is: 

    TASK: [ceph.ceph-common | fetch the red hat storage iso from the ansible server] *** 
    failed: [magna067] => {"checksum": "02c3adf7f8951f4c40d74d3cbe87f96ca8285911", "failed": true} 
    msg: Destination directory /root/abc does not exist

From the log output you've attached, it looks like the directory is created, and the ISO is able to get mounted.

Comment 26 Alfredo Deza 2016-07-15 15:34:37 UTC
After reviewing the logs, I can confirm this is a bug, but it is a different one from this ticket.

It is also clear that this bug has been resolved now (the iso is able to get mounted and repositories are copied).

Tejas: can you please close this bug and open a new one that explains that when using an ISO install the repo file is not being created? (that is the reason why the install fails)

Comment 29 errata-xmlrpc 2016-08-23 19:51:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

