Bug 1400967

Summary: [ceph-ansible] ceph-ansible failing to check raw journal devices
Product: [Red Hat Storage] Red Hat Storage Console Reporter: Vimal Kumar <vikumar>
Component: ceph-ansible Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: medium    
Version: 2 CC: adeza, aschoen, ceph-eng-bugs, edonnell, gmeno, hnallurv, kdreyer, nthomas, sankarshan, seb, skinjo, tchandra, vumrao
Target Milestone: ---   
Target Release: 2   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-ansible-2.1.1-1.el7scon Doc Type: Bug Fix
Doc Text:
Previously, installation using the ceph-ansible utility failed on the "fix partitions gpt header or labels of the journal devices" task in the ceph-osd role because of an empty variable. The underlying source code has been modified, and the installation no longer fails in this case.
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-03-14 15:51:25 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1405630    
Bug Blocks:    

Description Vimal Kumar 2016-12-02 13:04:08 UTC
a) Description of problem:

An installation using ceph-ansible failed on the "fix partitions gpt header or labels of the journal devices" task in the ceph-osd role.

The error looks something like "failed to evaluate raw_multi_journal and item.0.rc != 0 conditional".

A few variable definitions from group_vars/osds look like the following:

~~~
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdh
  - /dev/sdi
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdn
  - /dev/sdo
  - /dev/sdp
  - /dev/sdq
  - /dev/sdr

raw_multi_journal: true
raw_journal_devices:
  - /dev/sda
  - /dev/sda
  - /dev/sda
  - /dev/sda
  - /dev/sda
  - /dev/sdg
  - /dev/sdg
  - /dev/sdg
  - /dev/sdg
  - /dev/sdg
  - /dev/sdm
  - /dev/sdm
  - /dev/sdm
  - /dev/sdm
  - /dev/sdm
~~~
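
For context, the ceph-osd role pairs the two lists above positionally, so each entry in devices maps to the journal device at the same index. A minimal sketch (hypothetical task, not part of the role) illustrating that pairing:

~~~
# Illustration only: with_together zips the two lists, so devices[0]
# (/dev/sdb) is paired with raw_journal_devices[0] (/dev/sda), and so on.
- debug:
    msg: "OSD device {{ item.0 }} journals to {{ item.1 }}"
  with_together:
    - "{{ devices }}"
    - "{{ raw_journal_devices }}"
~~~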

To fix this, the customer applied the following patch to get the playbook to run properly.

~~~
# git diff HEAD^1
diff --git a/roles/ceph-osd/tasks/check_devices.yml b/roles/ceph-osd/tasks/check_devices.yml
index c916ff4..14a9898 100644
--- a/roles/ceph-osd/tasks/check_devices.yml
+++ b/roles/ceph-osd/tasks/check_devices.yml
@@ -108,7 +108,7 @@
   shell: "sgdisk --zap-all --clear --mbrtogpt -g -- {{ item.1 }} || sgdisk --zap-all --clear --mbrtogpt -g -- {{ item.1 }}"
   with_together:
     - journal_partition_status.results
-    - raw_journal_devices
+    - "{{ raw_journal_devices|default([])|unique }}"
   changed_when: false
   when:
     raw_multi_journal and
~~~
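
For reference, a minimal standalone playbook (illustrative only, not part of ceph-ansible) showing what the added filters do: default([]) substitutes an empty list when raw_journal_devices is undefined, so the loop simply runs zero times instead of failing to evaluate, and unique collapses the repeated journal disks so sgdisk is not invoked several times against the same device:

~~~
- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ raw_journal_devices | default([]) | unique }}"
      # raw_journal_devices undefined            -> []
      # with the list from group_vars/osds above -> ['/dev/sda', '/dev/sdg', '/dev/sdm']
~~~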

b) Version-Release number of selected component (if applicable):

RHCS 2.0

c) How reproducible:

Always

d) Additional info:

This has been fixed by https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-osd/defaults/main.yml#L129.
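
Presumably the linked defaults line just declares an empty default for the variable so it is never undefined, something like the sketch below (assumption based on the link above; the exact upstream content is not quoted here):

~~~
raw_journal_devices: []
~~~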

Comment 3 seb 2016-12-02 17:00:18 UTC
If the target release is 3, then you will get the fix during the next sync between upstream and downstream.

Comment 6 Alfredo Deza 2017-01-12 13:01:15 UTC
*** Bug 1405370 has been marked as a duplicate of this bug. ***

Comment 9 Tejas 2017-02-06 07:46:07 UTC
This bug has been fixed as part of ceph-ansible version:
ceph-ansible-2.1.6-1.el7scon.noarch

When this ships, customers can use this build.
Moving to Verified.

Comment 11 errata-xmlrpc 2017-03-14 15:51:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:0515