Bug 1806033

Summary: OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Dimitri Savineau <dsavinea>
Component: Container
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Karen Norteman <knortema>
Priority: medium
Version: 4.0
CC: bniver, ceph-eng-bugs, ceph-qe-bugs, gabrioux, hyelloji, knortema, tserlin
Target Milestone: rc
Flags: hyelloji: needinfo-
Target Release: 4.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhceph:ceph-4.1-rhel-8-containers-candidate-93074-20200316153729
Doc Type: Bug Fix
Doc Text:
Cause: When using OSD encryption, the lockbox partitions were mounted in each OSD container in order to scan the legacy (ceph-disk based) OSDs.
Consequence: The lockbox partition for a specific OSD remained mounted in the other OSD containers, which could lead to failures when performing operations on that partition (resource busy).
Fix: The lockbox partitions are unmounted after the legacy OSD scan.
Result: Operations on lockbox partitions no longer fail.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-06-03 16:22:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1818917    
Bug Blocks:    

Description Dimitri Savineau 2020-02-21 20:48:30 UTC
Description of problem:

OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt.

Version-Release number of selected component (if applicable):
RHCS 4.0 rhceph-4-rhel8:4-15


How reproducible:
100%


Steps to Reproduce:
1. Deploy dmcrypt ceph-disk OSDs with RHCS 3
2. Upgrade to RHCS 4

Actual results:

All lockbox partitions on the host are mounted inside each OSD container.


Expected results:

No lockbox partitions mounted.
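
A quick way to observe this is to list the mounts from inside one of the OSD containers, for example (a sketch; the container name ceph-osd-0 is only an example, and docker may be podman depending on the host):

$ sudo docker exec ceph-osd-0 mount | grep osd-lockbox
# With the bug present, this lists a /var/lib/ceph/osd-lockbox/<UUID> entry for every encrypted OSD on the host.
# After the fix, the command returns no output.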

Comment 8 Vasishta 2020-04-20 15:25:23 UTC
Hi Dimitri,

I upgraded cluster from registry.access.redhat.com/rhceph/rhceph-3-rhel7 to ceph-4.1-rhel-8-containers-candidate-37018-20200413024316.

I see lockboxes still mounted; can you please help me check what is missing?

[ubuntu@magna033 ~]$ sudo docker exec ceph-osd-0 mount |grep mapper
>> /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b on /var/lib/ceph/osd/ceph-0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

>> magna033 dedicated_devices="['/dev/sdd','/dev/sdd']" devices="['/dev/sdb','/dev/sdc']" osd_scenario="non-collocated" osd_objectstore="filestore" dmcrypt="true"

Comment 9 Dimitri Savineau 2020-04-20 16:00:31 UTC
Hi Vasishta,

This is not the lockbox mount point but the filestore data mount point.

The lockbox mount point is /var/lib/ceph/osd-lockbox/${UUID}, where ${UUID} is the UUID of the OSD data partition (always partition number 1).

The lockbox partition is always partition number 5 (or 3 if upgrading from Jewel).

You can verify that /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b is actually either /dev/sdb1 or /dev/sdc1, i.e. the data partition for OSD 0.

But you shouldn't have any /dev/sdb5 or /dev/sdc5 mounted.
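
For reference, something along these lines can be used to double-check (a sketch; the device names /dev/sdb and /dev/sdc and the container name ceph-osd-0 follow the reproducer above and may differ on your setup):

# On the host: the mapper name should correspond to one of the data partition UUIDs reported here.
$ sudo blkid /dev/sdb1 /dev/sdc1
# Inside the OSD container: no lockbox partition (partition 5, or 3 when coming from Jewel) should be mounted.
$ sudo docker exec ceph-osd-0 mount | grep -E 'sd[bc]5|osd-lockbox'
# No output is expected once the fix is applied.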

Comment 10 Vasishta 2020-04-20 16:12:17 UTC
Thanks a lot for the detailed explanation, 
Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 12 errata-xmlrpc 2020-06-03 16:22:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2385