Bug 1452316 - [ceph-ansible] [ceph-container] Installation fails in task 'prepare osd disk' with error in case of dm crypt+ collocated journal scenario
Summary: [ceph-ansible] [ceph-container] Installation fails in task 'prepare osd disk'...
Keywords:
Status: CLOSED DUPLICATE of bug 1391920
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Rachana Patel
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-18 16:53 UTC by Rachana Patel
Modified: 2017-05-23 18:07 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-23 18:07:02 UTC
Embargoed:



Description Rachana Patel 2017-05-18 16:53:07 UTC
Description of problem:
======================
If the admin chooses the dmcrypt + collocated journal scenario, installation fails for the OSDs in the 'prepare osd disk' task.

Version-Release number of selected component (if applicable):
================================================================
ceph-ansible-2.2.4-1.el7scon.noarch
ceph-2-rhel-7-docker-candidate-20170516172622

How reproducible:
=================
always


Steps to Reproduce:
=====================
1. Perform preflight ops on all nodes
2. Set the variables as mentioned in the additional info
3. Run the playbook for the containerized installation (see the example command below)
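
For reference, a minimal sketch of step 3 for a containerized ceph-ansible 2.x deployment, assuming the stock site-docker.yml.sample playbook and the default inventory path (adjust to your environment):

cp site-docker.yml.sample site-docker.yml
ansible-playbook site-docker.yml -i /etc/ansible/hosts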


Actual results:
=================

Installation fails for the OSDs




Additional info:
===============
osds.yml
---------

devices:
  - /dev/sdb
  - /dev/sdc
osd_containerized_deployment: true
ceph_osd_docker_prepare_env: -e CLUSTER={{ cluster }} -e OSD_JOURNAL_SIZE={{ journal_size }} -e OSD_FORCE_ZAP=1 -e OSD_DMCRYPT=1
ceph_osd_docker_devices: "{{ devices }}"
ceph_osd_docker_extra_env: -e CLUSTER={{ cluster }} -e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE -e OSD_JOURNAL_SIZE={{ journal_size }} -e OSD_DMCRYPT=1



all.yml
--------

fetch_directory: temp
cluster: temp
journal_size: 100 # OSD journal size in MB
public_network: XX
docker: true 
ceph_docker_image: "rhceph"
ceph_docker_image_tag: ceph-2-rhel-7-docker-candidate-20170516172622
ceph_docker_registry: XX
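
For context, the failing 'prepare osd disk' task wraps ceph_osd_docker_prepare_env in a docker run of the ceph container, once per device. The following is only an illustrative sketch of roughly what gets executed for /dev/sdb with the values above, not the exact command ceph-ansible generates (the real task adds further bind mounts, names and flags):

docker run --privileged=true --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE \
  -e CLUSTER=temp -e OSD_JOURNAL_SIZE=100 -e OSD_FORCE_ZAP=1 -e OSD_DMCRYPT=1 \
  XX/rhceph:ceph-2-rhel-7-docker-candidate-20170516172622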

Comment 4 Alfredo Deza 2017-05-18 18:06:56 UTC
This is the same issue as #1451168 and ceph-disk is failing to pass the ``--cluster=temp`` here as well. Can you try with the default cluster name? I am sure you will get past this one error.

Comment 5 Andrew Schoen 2017-05-18 18:07:38 UTC
Rachana,

I see 'Error initializing cluster client: Error('error calling conf_read_file: error code 22',)' in that output, which means it's a custom cluster name issue. If you select dmcrypt you cannot use a custom cluster name. I also don't see 'journal_collocation: true' in your group_vars, which might also cause issues.

Can you try again with ceph as the cluster name and journal_collocation: true set?

Thanks.
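
A sketch of the group_vars changes Comment 5 suggests, assuming the rest of the settings from the description stay unchanged:

all.yml
-------
cluster: ceph                # default cluster name; dmcrypt fails with a custom name (conf_read_file error code 22)

osds.yml
--------
journal_collocation: true    # declare the collocated-journal scenario explicitly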

Comment 6 Christina Meno 2017-05-22 18:50:43 UTC
Seb,

We discussed this morning that you'd track down the commits to ceph-disk that should fix this.
Please update when you find them.

thanks

