Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1701097

Summary: [CEE/SD] 'osd_disk_activate.sh' is unable to determine osd_id when dmcrypt is enabled
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ashish Singh <assingh>
Component: Container
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED DUPLICATE
QA Contact: Vasishta <vashastr>
Severity: high
Priority: high
Version: 3.2
CC: ceph-eng-bugs, gabrioux
Target Milestone: rc
Target Release: 3.*
Hardware: x86_64
OS: Linux
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2019-04-18 13:31:08 UTC

Description Ashish Singh 2019-04-18 05:05:17 UTC
* Description of problem:

When 'dmcrypt' is set to 'true' together with the 'filestore' option while deploying Ceph with RHOSP 13, the ceph-osd containers keep restarting and log the following error:
===================
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: exec: PID 761046: spawning /usr/bin/ceph-osd --cluster ceph -f -i  --setuser ceph --setgroup disk
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: exec: Waiting 761046 to quit
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: usage: ceph-osd -i <ID> [flags]
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --osd-data PATH data directory
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --osd-journal PATH
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: journal file or block device
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --mkfs            create a [new] data directory
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --mkkey           generate a new secret key. This is normally used in combination with --mkfs
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --convert-filestore
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: run any pending upgrade operations
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --flush-journal   flush all data out of journal
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --mkjournal       initialize a new journal
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --check-wants-journal
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: check whether a journal is desired
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --check-allows-journal
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: check whether a journal is allowed
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --check-needs-journal
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: check whether a journal is required
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --debug_osd <N>   set debug level (e.g. 10)
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --get-device-fsid PATH
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: get OSD fsid for the given block device
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --conf/-c FILE    read configuration from the given configuration file
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --id/-i ID        set ID portion of my name
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --name/-n TYPE.ID set name
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --cluster NAME    set cluster name (default: ceph)
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --setuser USER    set uid to user or uid (and gid to user's gid)
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --setgroup GROUP  set gid to group or gid
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --version         show version and quit
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: -d                run in foreground, log to stderr
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: -f                run in foreground, log to usual location
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: --debug_ms N      set message debug level (e.g. 1)
Apr 17 12:34:57 overcloud-cephstorage-0 ceph-osd-run.sh[755003]: 2019-04-17 12:34:57.571174 7f70fb826d80 -1 must specify '-i #' where # is the osd number
===================

'osd_disk_activate.sh' fails to determine the osd_id when dmcrypt is enabled, so ceph-osd is spawned with an empty value after '-i' (visible in the first log line above), exits with its usage message, and the ceph-osd containers never start.
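For context, the activation script's job at this point is essentially to recover the OSD id before exec'ing ceph-osd. A minimal sketch of that lookup for a filestore OSD is below; this is illustrative only, not the actual osd_disk_activate.sh code, and the `get_osd_id` function name and `OSD_PATH` argument are assumptions. It relies on the 'whoami' file that ceph-disk writes into the OSD data directory once the (possibly dmcrypt-mapped) data partition has been mounted:

```shell
#!/bin/sh
# Illustrative sketch, NOT the actual osd_disk_activate.sh logic.
# Reads the OSD id from the 'whoami' file in a mounted filestore
# data directory, e.g. /var/lib/ceph/osd/ceph-0/whoami.
get_osd_id() {
    osd_path="$1"
    # Fail if the data directory was never mounted (or mapped, for dmcrypt).
    [ -f "${osd_path}/whoami" ] || return 1
    osd_id="$(cat "${osd_path}/whoami")"
    # An empty id would reproduce the failure in the log above:
    # ceph-osd gets '-i ' with no number and prints its usage.
    [ -n "${osd_id}" ] || return 1
    echo "${osd_id}"
}
```

With dmcrypt enabled, the data partition must first be opened through its dm-crypt mapping before this directory exists, which is the stage where the script reportedly fails to resolve the id.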
 

* Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.2


* How reproducible:
Always


* Steps to Reproduce:
1. Deploy Ceph using Director with 'dmcrypt: true' set in storage_environment.yaml


* Actual results:
ceph-osd containers keep restarting.


* Expected results:
ceph-osd containers should be up and running.


* Additional info:
NA

Comment 1 Dimitri Savineau 2019-04-18 13:31:08 UTC

*** This bug has been marked as a duplicate of bug 1695852 ***