Bug 1806033 - OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt
Summary: OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Container
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 4.1
Assignee: Dimitri Savineau
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On: 1818917
Blocks:
Reported: 2020-02-21 20:48 UTC by Dimitri Savineau
Modified: 2020-06-03 16:22 UTC (History)
7 users

Fixed In Version: rhceph:ceph-4.1-rhel-8-containers-candidate-93074-20200316153729
Doc Type: Bug Fix
Doc Text:
Cause: When using OSD encryption, the lockbox partitions were mounted in each OSD container in order to scan the legacy (ceph-disk based) OSDs.
Consequence: The lockbox partition for a given OSD remained mounted in the other OSD containers, which could lead to failures (resource busy) when performing operations on that partition.
Fix: The lockbox partitions are unmounted after the legacy OSD scan.
Result: Operations on lockbox partitions no longer fail. (An illustrative sketch of the unmount step is given after the Links section below.)
Clone Of:
Environment:
Last Closed: 2020-06-03 16:22:16 UTC
Embargoed:
hyelloji: needinfo-


Links
Github ceph/ceph-container pull 1592 (closed): osd_volume_activate: umount lockbox after scanning (last updated 2021-01-12 18:22:05 UTC)
Github ceph/ceph-container pull 1597 (closed): osd_volume_activate: umount lockbox after scanning (bp #1592) (last updated 2021-01-12 18:22:05 UTC)
Red Hat Product Errata RHBA-2020:2385 (last updated 2020-06-03 16:22:32 UTC)
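
Based on the Doc Text above and the title of the linked ceph-container pull requests ("osd_volume_activate: umount lockbox after scanning"), the change amounts to roughly the following; this is only an illustrative sketch, not the actual ceph-container code:

# After scanning the legacy ceph-disk OSDs, unmount any lockbox partitions that
# were mounted for the scan, so operations on them from other OSD containers no
# longer fail with "resource busy".
for lockbox in /var/lib/ceph/osd-lockbox/*; do
    if mountpoint -q "$lockbox"; then
        umount "$lockbox"
    fi
done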

Description Dimitri Savineau 2020-02-21 20:48:30 UTC
Description of problem:

OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt

Version-Release number of selected component (if applicable):
RHCS 4.0 rhceph-4-rhel8:4-15


How reproducible:
100%


Steps to Reproduce:
1. Deploy dmcrypt ceph-disk OSDs with RHCS 3
2. Upgrade to RHCS 4

Actual results:

All lockbox partitions are mounted inside each OSD container


Expected results:

No lockbox partitions mounted.
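
For reference, a quick way to check for the problem (the container name below is the one used later in this report and is only an example): any output from the command means lockbox partitions are still mounted inside that OSD container.

# ceph-disk mounts the lockbox under /var/lib/ceph/osd-lockbox/<uuid>
sudo docker exec ceph-osd-0 mount | grep osd-lockbox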

Comment 8 Vasishta 2020-04-20 15:25:23 UTC
Hi Dimitri,

I upgraded the cluster from registry.access.redhat.com/rhceph/rhceph-3-rhel7 to ceph-4.1-rhel-8-containers-candidate-37018-20200413024316.

I see lockboxes still mounted; can you please help me check what is missing?

[ubuntu@magna033 ~]$ sudo docker exec ceph-osd-0 mount |grep mapper
>> /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b on /var/lib/ceph/osd/ceph-0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

>> magna033 dedicated_devices="['/dev/sdd','/dev/sdd']" devices="['/dev/sdb','/dev/sdc']" osd_scenario="non-collocated" osd_objectstore="filestore" dmcrypt="true"

Comment 9 Dimitri Savineau 2020-04-20 16:00:31 UTC
Hi Vasishta,

This is not the lockbox mount point but the filestore data mount point.

The lockbox mount point is /var/lib/ceph/osd-lockbox/${UUID}, where ${UUID} is the UUID of the OSD data partition (always partition number 1).

The lockbox partition is always partition number 5 (or 3 if upgrading from Jewel).

You can verify that /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b is actually either /dev/sdb1 or /dev/sdc1, i.e. the data partition for OSD 0.

But you shouldn't have any /dev/sdb5 or /dev/sdc5 mounted.
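
A hedged verification sketch based on the explanation above (the mapper name and device names are taken from comment 8; adjust them to your layout):

# Assuming the mapper name is the GPT partition UUID of the data partition, this
# should resolve to partition 1 of one of the OSD devices (/dev/sdb1 or /dev/sdc1).
ls -l /dev/disk/by-partuuid/099e7b5f-ba33-4569-8c7c-87760ee8e61b

# After the fix, neither a lockbox mount point nor a lockbox partition
# (partition 5, e.g. /dev/sdb5 or /dev/sdc5) should show up as mounted.
sudo docker exec ceph-osd-0 mount | grep -E 'osd-lockbox|sd[bc]5'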

Comment 10 Vasishta 2020-04-20 16:12:17 UTC
Thanks a lot for the detailed explanation, 
Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 12 errata-xmlrpc 2020-06-03 16:22:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2385

