Bug 1488462 - [ceph-container] : osd disk preparation failing - saying Waiting for /dev/sd*3 to show up
Summary: [ceph-container] : osd disk preparation failing - saying Waiting for /dev/sd*3 to show up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Container
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.0
Assignee: Sébastien Han
QA Contact: Vasishta
URL:
Whiteboard:
Depends On: 1493920
Blocks:
 
Reported: 2017-09-05 12:37 UTC by Vasishta
Modified: 2017-12-05 23:18 UTC
CC List: 8 users

Fixed In Version: rhceph:ceph-3.0-rhel-7-docker-candidate-42468-20170919164130
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-05 23:18:20 UTC
Embargoed:


Attachments
File contains contents of inventory file, all.yml and ansible-playbook log (787.71 KB, text/plain)
2017-09-05 12:37 UTC, Vasishta
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3388 0 normal SHIPPED_LIVE new container image: rhceph-3-rhel7 2017-12-06 02:43:29 UTC

Description Vasishta 2017-09-05 12:37:08 UTC
Created attachment 1322193 [details]
File contains contents of inventory file, all.yml and ansible-playbook log

Description of problem:
The prepare ceph osd disk task failed while running ansible-playbook, saying it was waiting for /dev/sd*3 to show up (devices: /dev/sdb, /dev/sdc, /dev/sdd). ceph-ansible was configured to set up dmcrypt OSDs with collocated journals. OSDs were expected to be collocated with either mon & mgr, rgw, or mds daemons.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.0-0.1.rc4.el7cp.noarch
ceph-3.0-rhel-7-docker-candidate-71465-20170804220045

Steps to Reproduce:
1. Configure ceph-ansible to set up a containerized ceph cluster with OSDs using dmcrypt & collocated journals
2. Run ansible-playbook site-docker.yml (see the invocation sketch below)
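For reference, the playbook run in step 2 looks roughly like the following; the checkout path and inventory file name are assumptions, not taken from this report:

$ cd /usr/share/ceph-ansible                   # assumed ceph-ansible checkout location
$ ansible-playbook -i hosts site-docker.yml    # 'hosts' is a placeholder for the inventory file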

Actual results (log snippet; a larger log snippet has been added as an attachment):
"Warning: The kernel is still using the old partition table.", "The new table will be used at the next reboot.", "The operation has completed successfully.", "Unmounting LOCKBOX directory", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up", "Waiting for /dev/sdd3 to show up"]}

Expected results:
ceph-ansible should successfully prepare the OSDs

Additional info:
I've copied the group_vars/osds.yml contents here. Please let me know if I have missed anything.
$ cat group_vars/osds.yml | egrep -v ^# | grep -v ^$
---
dummy:
copy_admin_key: true
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
ceph_osd_docker_prepare_env: -e CLUSTER={{ cluster }} -e OSD_JOURNAL_SIZE={{ journal_size }} -e OSD_FORCE_ZAP=1 -e OSD_DMCRYPT=1 -e OSD_FILESTORE=1
ceph_osd_docker_extra_env: -e CLUSTER={{ cluster }} -e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE -e OSD_JOURNAL_SIZE={{ journal_size }} -e OSD_DMCRYPT=1 -e OSD_FILESTORE=1
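
For reference, a quick way to check whether the prepare step actually created the expected partitions and dm-crypt mappings on a device; a sketch only, where /dev/sdb is one of the devices listed above:

$ lsblk -o NAME,TYPE,SIZE,PARTLABEL /dev/sdb   # data/journal/lockbox partitions should be listed
$ ls /dev/mapper/                              # dm-crypt mappings appear here once the OSDs are activated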

Comment 2 Harish NV Rao 2017-09-15 12:00:15 UTC
@Loic, could you please let us know when we can get the fix for this?

This BZ is blocking our encrypted osd scenarios for containers.

Comment 3 seb 2017-09-18 15:13:08 UTC
We have a newer container image, please test with: ceph-3.0-rhel-7-docker-candidate-32702-20170915121952
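
For reference, a containerized run can be pointed at that candidate image by overriding the standard ceph-ansible image variables; a sketch only, where the image/tag split and the inventory file name are assumptions:

$ ansible-playbook -i hosts site-docker.yml \
    -e ceph_docker_image=rhceph \
    -e ceph_docker_image_tag=ceph-3.0-rhel-7-docker-candidate-32702-20170915121952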

Comment 9 errata-xmlrpc 2017-12-05 23:18:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3388

