Bug 1695852 - Unable to determine osd id when dmcrypt is enabled
Summary: Unable to determine osd id when dmcrypt is enabled
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Container
Version: 3.2
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 3.2
Assignee: Dimitri Savineau
QA Contact: Vasishta
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Duplicates: 1701097
Depends On:
Blocks: 1629656
 
Reported: 2019-04-03 19:35 UTC by Dimitri Savineau
Modified: 2019-04-30 17:30 UTC
CC List: 7 users

Fixed In Version: ceph-3.2-rhel-7-containers-candidate-46897-20190403200403
Doc Type: Bug Fix
Doc Text:
.Deploying encrypted OSDs in containers by using `ceph-disk` works as expected
When attempting to deploy a containerized OSD by using `ceph-disk` with `dmcrypt`, the container process failed to start because the OSD ID could not be found in the mounts table. With this update, the OSD ID is correctly determined, and the container process no longer fails.
Clone Of:
Environment:
Last Closed: 2019-04-30 17:05:30 UTC
Embargoed:




Links
Github ceph ceph-container issue 1343 (closed): Unable to determine osd id when dmcrypt is enabled (last updated 2020-06-06 14:53:06 UTC)
Github ceph ceph-container pull 1344 (closed): With dmcrypt expect /dev/mapper/device to be used (last updated 2020-06-06 14:53:06 UTC)
Red Hat Product Errata RHBA-2019:0912 (last updated 2019-04-30 17:05:31 UTC)

Description Dimitri Savineau 2019-04-03 19:35:48 UTC
Description of problem:
When trying to deploy an OSD with ceph-disk and dmcrypt (either bluestore or filestore), the container process fails to start because the OSD ID cannot be found via the mounts table.
The OSD ID is determined by looking up the dmcrypt partition in /proc/mounts, but because of the readlink -f command we search for the resolved /dev/dm-X device instead of the /dev/mapper/{UUID} path that is actually recorded there.
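A minimal sketch of the broken lookup and of the fix (illustrative only; the real logic lives in the ceph-container entrypoint scripts, and the UUID, paths, and variable names here are assumptions):
-------
#!/bin/bash
# Illustrative sketch, not the actual ceph-container code.
# With dmcrypt, ceph-disk mounts the decrypted partition under its
# /dev/mapper/<uuid> name, and that exact string is what /proc/mounts records.
DATA_PART="/dev/mapper/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # hypothetical uuid

# Broken lookup: readlink -f resolves the mapper symlink to /dev/dm-X,
# which never appears in /proc/mounts, so the grep matches nothing and
# OSD_ID stays empty (hence "ceph-osd ... -i " with no id in the log below).
RESOLVED_PART=$(readlink -f "$DATA_PART")        # e.g. /dev/dm-3
OSD_ID=$(grep "^${RESOLVED_PART} " /proc/mounts | awk '{print $2}' | sed 's|.*ceph-||')

# Fixed lookup (the approach of the linked pull request): when dmcrypt is
# enabled, match the /dev/mapper/<uuid> path directly.
OSD_ID=$(grep "^${DATA_PART} " /proc/mounts | awk '{print $2}' | sed 's|.*ceph-||')
echo "osd id: ${OSD_ID}"
-------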

Version-Release number of selected component (if applicable):
RHCS 3.2 container image rhceph-3-rhel7:3-23
ceph version 12.2.8-89.el7cp (2f66ab2fa63b2879913db6d6cf314572a83fd1f0) luminous (stable)

How reproducible:
100%

Steps to Reproduce:
1. Deploy an OSD with dmcrypt enabled (-e OSD_DMCRYPT=1) and ceph-disk (-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE)

ceph-ansible config used:
-------
osd_scenario: collocated
dmcrypt: true
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
-------
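Step 1 above corresponds roughly to the following manual container invocation (a sketch only, not taken from this report; the bind mounts, the OSD_DEVICE value, and the image tag are assumptions based on the version noted above):
-------
docker run -d --privileged=true --pid=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE \
  -e OSD_DMCRYPT=1 \
  -e OSD_DEVICE=/dev/sdb \
  rhceph-3-rhel7:3-23
-------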

Actual results:
The container fails to start because the OSD ID variable passed to the ceph-osd process is empty (note the missing value after -i):

: exec: PID 29737: spawning /usr/bin/ceph-osd --cluster ceph -f -i  --setuser ceph --setgroup disk
: exec: Waiting 29737 to quit
(...)
: teardown: managing teardown after SIGCHLD
: teardown: Waiting PID 29737 to terminate
: teardown: Process 29737 is terminated
: teardown: Bye Bye, container will die with return code -1
: teardown: if you don't want me to die and have access to a shell to debug this situation, next time run me with '-e DEBUG=stayalive'

Expected results:
The container should start, with the OSD ID correctly passed to ceph-osd:

: exec: PID 31469: spawning /usr/bin/ceph-osd --cluster ceph -f -i 0 --setuser ceph --setgroup disk
: exec: Waiting 31469 to quit
: starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
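To confirm that the data partition is mounted under its /dev/mapper name (the form the fixed lookup expects), the mounts table can be checked on the OSD host; a sketch, with a hypothetical UUID and mount point:
-------
grep '/var/lib/ceph/osd/' /proc/mounts
# expected form (values are illustrative):
# /dev/mapper/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /var/lib/ceph/osd/ceph-0 xfs rw,noatime 0 0
-------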

Comment 8 Dimitri Savineau 2019-04-18 13:31:08 UTC
*** Bug 1701097 has been marked as a duplicate of this bug. ***

Comment 12 errata-xmlrpc 2019-04-30 17:05:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0912

