Bug 1391468 - [ceph-ansible] Encrypted OSD creation fails with dedicated journal devices
Summary: [ceph-ansible] Encrypted OSD creation fails with dedicated journal devices
Keywords:
Status: CLOSED DUPLICATE of bug 1366808
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Andrew Schoen
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-03 11:50 UTC by Vasishta
Modified: 2017-03-03 16:41 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-03 16:41:18 UTC
Embargoed:


Attachments
Recent log (288.73 KB, text/plain)
2016-11-10 07:23 UTC, Vasishta

Description Vasishta 2016-11-03 11:50:12 UTC
Description of problem:
Encrypted OSD creation fails when dedicated journal devices are chosen.

Version-Release number of selected component (if applicable):
ceph version 10.2.3-12.el7cp
ceph-ansible-1.0.5-39.el7scon.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install ceph-ansible
2. Change the following settings in the /usr/share/ceph-ansible/group_vars/osds file:

dmcrypt_dedicated_journal: true
devices:
  - /dev/sdb

raw_journal_devices:
  - /dev/sdd

3. Run playbook.
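For reference, the settings from step 2 can be sketched as a single group_vars/osds fragment. The variable names and device paths are the ones quoted above; the comments and surrounding structure are assumptions about how the file is typically laid out in ceph-ansible 1.0.5:

```yaml
# Sketch of /usr/share/ceph-ansible/group_vars/osds for this reproduction
# (comments are assumptions; values are from the report).

# Encrypt the OSD data partitions and place journals on separate raw devices.
dmcrypt_dedicated_journal: true

# Data devices that become encrypted OSDs.
devices:
  - /dev/sdb

# Raw devices used as dedicated journals, paired with the entries in `devices`.
raw_journal_devices:
  - /dev/sdd
```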


Actual results:


Expected results:


Additional info:
I have copied the ansible-playbook log and the group_vars folder to /home/ubuntu on magna111.ceph.redhat.com as 'ansible-log' and 'group_vars' respectively.

Comment 4 seb 2016-11-03 13:02:32 UTC
I'm looking into this; I'm logged in on the machines.

Comment 5 seb 2016-11-03 14:07:04 UTC
Found the error; the fix is already upstream. I need to cherry-pick it.

Comment 6 seb 2016-11-03 14:13:04 UTC
Andrew already fixed that here: https://github.com/ceph/ceph-ansible/pull/1060

Comment 7 seb 2016-11-03 14:34:51 UTC
Andrew is going to push that downstream today.

Comment 8 Andrew Schoen 2016-11-03 15:34:18 UTC
I've got the necessary patches for this pushed downstream.

Comment 11 Tejas 2016-11-04 07:25:46 UTC
Ken,

 Could you please generate a downstream compose with this version of ceph-ansible:
ceph-ansible-1.0.5-40.el7scon

It's not in the latest rhscon composes.

Thanks,
Tejas

Comment 14 Vasishta 2016-11-10 07:23:32 UTC
Created attachment 1219213 [details]
Recent log

Comment 15 Vasishta 2016-11-10 07:26:03 UTC
Hi,

It's still failing.

PLAY RECAP ******************************************************************** 
magna003                   : ok=89   changed=0    unreachable=0    failed=0   
magna013                   : ok=89   changed=0    unreachable=0    failed=0   
magna023                   : ok=89   changed=0    unreachable=0    failed=0   
magna056                   : ok=202  changed=7    unreachable=0    failed=0   
magna092                   : ok=202  changed=7    unreachable=0    failed=0   
magna112                   : ok=202  changed=7    unreachable=0    failed=0   

[ubuntu@magna003 ceph-ansible]$ sudo ceph -s
    cluster 3cdf8f83-bcb8-4051-9942-1ba7265ab0cf
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            no osds
     monmap e1: 3 mons at {magna003=10.8.128.3:6789/0,magna013=10.8.128.13:6789/0,magna023=10.8.128.23:6789/0}
            election epoch 6, quorum 0,1,2 magna003,magna013,magna023
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Moving back to ASSIGNED state.

Regards,
Vasishta

Comment 17 Ken Dreyer (Red Hat) 2017-03-03 16:41:18 UTC
We test dmcrypt with dedicated journals every day upstream now, and it works in the latest RPM (v2.1.9). I'm closing this as a dup of the general "dmcrypt support" bug, 1366808.

*** This bug has been marked as a duplicate of bug 1366808 ***

