Description of problem:

OSD lockbox partitions aren't unmounted for legacy ceph-disk OSDs using dmcrypt.

Version-Release number of selected component (if applicable):
RHCS 4.0
rhceph-4-rhel8:4-15

How reproducible:
100%

Steps to Reproduce:
1. Deploy dmcrypt ceph-disk OSDs with RHCS 3
2. Upgrade to RHCS 4

Actual results:
All lockbox partitions are mounted inside each OSD container.

Expected results:
No lockbox partitions mounted.
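A quick way to confirm the expected behavior after the upgrade is to look for lockbox mount points inside each OSD container. A minimal sketch, assuming docker and container names of the form ceph-osd-<id> (adjust for podman on RHEL 8 hosts):

# List any lockbox mounts inside the OSD 0 container; after the fix this should print nothing.
$ sudo docker exec ceph-osd-0 mount | grep osd-lockbox

# The OSD data mount itself should still be present:
$ sudo docker exec ceph-osd-0 mount | grep /var/lib/ceph/osd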
Hi Dimitri,

I upgraded the cluster from registry.access.redhat.com/rhceph/rhceph-3-rhel7 to ceph-4.1-rhel-8-containers-candidate-37018-20200413024316. I see lockboxes still mounted, can you please help me check what is missing?

[ubuntu@magna033 ~]$ sudo docker exec ceph-osd-0 mount | grep mapper
/dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b on /var/lib/ceph/osd/ceph-0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Host line from the inventory:
magna033 dedicated_devices="['/dev/sdd','/dev/sdd']" devices="['/dev/sdb','/dev/sdc']" osd_scenario="non-collocated" osd_objectstore="filestore" dmcrypt="true"
Hi Vasishta,

This is not the lockbox mount point but the filestore data mount point. The lockbox mount point is /var/lib/ceph/osd-lockbox/${UUID}, where ${UUID} is the UUID of the OSD data partition (always partition number 1). The lockbox partition is always partition number 5 (or 3 if upgrading from Jewel).

You can verify that /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b is actually either /dev/sdb1 or /dev/sdc1, i.e. the data partition for OSD 0. But you shouldn't have any /dev/sdb5 or /dev/sdc5 mounted.
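For reference, one way to check this on the OSD host (a sketch; the mapper name is taken from the output above, and the device names assume the sdb/sdc layout from this inventory):

# Show which physical device backs the dm-crypt mapping;
# the "device:" field should point at /dev/sdb1 or /dev/sdc1 (the data partition).
$ sudo cryptsetup status /dev/mapper/099e7b5f-ba33-4569-8c7c-87760ee8e61b

# Confirm that no lockbox partition (sdb5/sdc5) is mounted anywhere on the host:
$ mount | grep -E 'sd[bc]5|osd-lockbox'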
Thanks a lot for the detailed explanation. Moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2385