+++ This bug was initially created as a clone of Bug #2261977 +++

Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Brett Hull on 2024-01-30 17:36:53 UTC ---

Description of problem:

HotSwap drives are not allowed as storage under v6.1, whereas they were under 5.3. The customer requests a backport or a supportability statement on the workarounds.

Version-Release number of selected component (if applicable):
ceph version 17.2.6-170.el9cp <- 6.1.z3 Async - 6.1.3 Async

How reproducible:
100%

Steps to Reproduce:
1. Have HotSwap-capable storage hardware on v5.3
2. Upgrade to v6.1
3. Devices are no longer available for use.

Actual results:

A filter was introduced in ceph-volume here:
https://github.com/ceph/ceph/commit/5705e10e809cdc9f70018263c54a63ac4a02809c
It discards _any_ "removable media" device from the ceph-volume inventory, which filters out our SATA drives. Further commits down the line appear to fix this oversight for SATA drives, e.g.:
https://github.com/ceph/ceph/commit/bd5e1a83495e31e457827f564c56fba23f4da8c9

Expected results:

ceph orch apply osd --all-available-devices --dry-run log output in ceph v5.3:

2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "available": true,
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "device_id": "ST6000NM023A-2R7_*****",
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "lsm_data": {},
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "lvs": [],
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "path": "/dev/sdm",
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "rejected_reasons": [],
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "sys_api": {
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "human_readable_size": "5.46 TB",
...
2024-01-12 11:42:32,297 7fd8a6956b80 DEBUG /usr/bin/podman: "model": "ST6000NM023A-2R7",
2024-01-12 11:42:32,298 7fd8a6956b80 DEBUG /usr/bin/podman: "path": "/dev/sdm",
2024-01-12 11:42:32,298 7fd8a6956b80 DEBUG /usr/bin/podman: "removable": "1",

Additional info:

Associated tickets from the git commit message:
Fixes: https://tracker.ceph.com/issues/57907
Fixes: https://tracker.ceph.com/issues/58190
Fixes: https://tracker.ceph.com/issues/58306
Fixes: https://tracker.ceph.com/issues/58591

Using ceph containers from the tagged 19.0 release, we can see the drives are back in the inventory:

[root@ceph04 cephadm]# ./cephadm.py ceph-volume inventory
Inferring fsid 818a2462-bf5e-11ee-ad73-0c42a1f3a450
Using ceph image with id '7bfdd8569f57' and tag 'main' created on 2024-01-29 23:45:44 +0000 UTC
quay.ceph.io/ceph-ci/ceph@sha256:6759c14bcf369c4f29c06363bd87b48a2d4d95e656e05ae984d9ea8afd7760f9

Device Path    Size      Device nodes  rotates  available  Model name
/dev/sdb       5.46 TB   sdb           True     True       ST6000NM023A-2R7
/dev/sdd       5.46 TB   sdd           True     True       ST6000NM023A-2R7
/dev/sde       5.46 TB   sde           True     True       ST6000NM023A-2R7
/dev/sdf       5.46 TB   sdf           True     True       ST6000NM023A-2R7
/dev/sdg       5.46 TB   sdg           True     True       ST6000NM023A-2R7
/dev/sdi       5.46 TB   sdi           True     True       ST6000NM023A-2R7
/dev/sdj       5.46 TB   sdj           True     True       ST6000NM023A-2R7
/dev/sdl       5.46 TB   sdl           True     True       ST6000NM023A-2R7
/dev/sdm       5.46 TB   sdm           True     True       ST6000NM023A-2R7
/dev/sdc       5.46 TB   sdc           True     False      ST6000NM023A-2R7
/dev/sdh       5.46 TB   sdh           True     False      ST6000NM023A-2R7
/dev/sdk       5.46 TB   sdk           True     False      ST6000NM023A-2R7

The customer cannot change the BIOS settings to make the drives non-HotSwap, and cannot simply modify the kernel view by changing /sys/block/sd*/removable <- read-only, shows "1", which means removable.

[bhull@bhull block]$ grep removable *sd*
udevadm_info_-a_.dev.sda: ATTR{removable}=="0"
udevadm_info_-a_.dev.sdb: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdc: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdd: ATTR{removable}=="1"
udevadm_info_-a_.dev.sde: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdf: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdg: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdh: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdi: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdj: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdk: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdl: ATTR{removable}=="1"
udevadm_info_-a_.dev.sdm: ATTR{removable}=="1"

They have tried the following workaround and were successful. We also went through the ceph issues related to ours and found two ways to bypass this issue, from this ceph ticket: https://tracker.ceph.com/issues/38833

Updated by Alfredo Deza almost 5 years ago
It is not possible to disable that internal check from ceph-volume, however, the workaround in your case would be to create the LV on that removable device and just create the OSD there vs. trying to use the disk directly.

We managed to enroll disks into a ceph 17 cluster this way with this command:
ceph orch daemon add osd --method lvm ceph-cluster-node04:/dev/vg-test/lv-test

During our initial testing we also were able to bypass the ceph-volume filter using:
ceph orch daemon add osd --method raw ceph-cluster-node04:/dev/sdg

In both cases, the OSD got created and the disk storage space became available to the cluster, e.g. ceph df reports the OSD correctly. ceph-volume inventory doesn't see the enrolled disk in its inventory.
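Pulling the two bypass paths above into one place. This is a sketch only; the hostname ceph-cluster-node04, the vg-test/lv-test names, and /dev/sdg are the illustrative values taken from the commands quoted in this comment, not a validated procedure.

# Path 1: LVM method. Wrap the removable disk in a VG/LV on the OSD host first,
# then add the OSD through the orchestrator (VG/LV names are illustrative).
pvcreate /dev/sdg
vgcreate vg-test /dev/sdg
lvcreate -n lv-test -l 100%FREE vg-test
ceph orch daemon add osd --method lvm ceph-cluster-node04:/dev/vg-test/lv-test

# Path 2: raw method. Hand the whole disk to ceph-volume's raw mode.
ceph orch daemon add osd --method raw ceph-cluster-node04:/dev/sdg

Per the customer's testing above, either path brings the OSD up and ceph df reports it, while ceph-volume inventory keeps ignoring the underlying disk.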
They would prefer it if Engineering sanctioned the use of:
ceph orch daemon add osd --method raw ceph-cluster-node04:/dev/sdg

Could you please state what the consequences of these two workarounds are? If any of these methods is unsupported, please let us know, as that would be an instant no-go.

--- Additional comment from Brett Hull on 2024-01-30 18:29:14 UTC ---

Hello,

I suspect this customer is very impatient. Please find attached an lsblk output from the device:

lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda             8:0    0 446.6G  0 disk
├─sda1          8:1    0   600M  0 part /boot/efi
├─sda2          8:2    0     1G  0 part /boot
└─sda3          8:3    0   445G  0 part
  ├─rhel-root 253:0    0    70G  0 lvm  /
  ├─rhel-swap 253:1    0     4G  0 lvm  [SWAP]
  └─rhel-home 253:2    0   371G  0 lvm  /home
sdb             8:16   1   5.5T  0 disk
sdc             8:32   1   5.5T  0 disk
sdd             8:48   1   5.5T  0 disk
sde             8:64   1   5.5T  0 disk
sdf             8:80   1   5.5T  0 disk
sdg             8:96   1   5.5T  0 disk
sdh             8:112  1   5.5T  0 disk
sdi             8:128  1   5.5T  0 disk
sdj             8:144  1   5.5T  0 disk
sdk             8:160  1   5.5T  0 disk
sdl             8:176  1   5.5T  0 disk
sdm             8:192  1   5.5T  0 disk

^ '1' in the RM column indicates removable. Our data disks are sd{b..m}.

looking at device '/devices/pci0000:40/0000:40:08.2/0000:43:00.0/ata2/host2/target2:0:0/2:0:0:0/block/sdc':
  KERNEL=="sdc"
  ATTR{queue/io_timeout}=="30000"
  ATTRS{wwid}=="naa.5000c500d53ee836"
  ATTR{removable}=="1"

lrwxrwxrwx. 1 bhull bhull 77 Jan 22 10:08 host2 -> ../../devices/pci0000:40/0000:40:08.2/0000:43:00.0/ata2/host2/scsi_host/host2

43:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51) (prog-if 01 [AHCI 1.0])
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:1000]
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

[bhull@bhull sosreport-ceph01-2024-01-30-wzrfgdt]$ grep "Product Name" dmidecode
        Product Name: R282-Z90-00 <- Rack Server - AMD EPYC™ 7002 - 2U DP 12+2-Bay SATA/SAS
        Product Name: MZ92-FS0-00

12 x 3.5"/2.5" SATA/SAS hot-swappable bays
2 x 2.5" SATA/SAS hot-swappable bays on rear side

Best regards,
Brett
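For reference, the flag ceph-volume keys off can be read directly from sysfs on the OSD host. A quick check sketch, using the device names from the lsblk output above:

# "1" in the RM column / sysfs attribute means the kernel flags the disk as removable
lsblk -d -o NAME,RM,SIZE,MODEL /dev/sd{b..m}
for dev in /dev/sd{b..m}; do
    printf '%s removable=%s\n' "$dev" "$(cat /sys/block/${dev##*/}/removable)"
done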
--- Additional comment from Brett Hull on 2024-01-30 19:39:46 UTC ---

Hello Guillaume,

This is the upstream fix; it looks pretty straightforward. Please add this to the v6 z-stream.

This filters out our SATA drives. Further commits down the line appear to fix this oversight for SATA drives, e.g.:
https://github.com/ceph/ceph/commit/bd5e1a83495e31e457827f564c56fba23f4da8c9

Do you feel this can be done?

Next question: is the workaround sustainable/acceptable to Engineering? From this ceph ticket: https://tracker.ceph.com/issues/38833

Best regards,
Brett

--

Regression: a filter was introduced in ceph-volume here:
https://github.com/ceph/ceph/commit/5705e10e809cdc9f70018263c54a63ac4a02809c
It discards _any_ "removable media" device from the ceph-volume inventory.

We also went through the ceph issues related to ours and found two ways to bypass this issue, from this ceph ticket: https://tracker.ceph.com/issues/38833

Updated by Alfredo Deza almost 5 years ago
It is not possible to disable that internal check from ceph-volume, however, the workaround in your case would be to create the LV on that removable device and just create the OSD there vs. trying to use the disk directly.

We managed to enroll disks into a ceph 17 cluster this way with this command:
ceph orch daemon add osd --method lvm ceph-cluster-node04:/dev/vg-test/lv-test

During our initial testing we also were able to bypass the ceph-volume filter using:
ceph orch daemon add osd --method raw ceph-cluster-node04:/dev/sdg

In both cases, the OSD got created and the disk storage space became available to the cluster, e.g. ceph df reports the OSD correctly. ceph-volume inventory doesn't see the enrolled disk in its inventory.

They would prefer it if Engineering sanctioned the use of:
ceph orch daemon add osd --method raw ceph-cluster-node04:/dev/sdg

Could you please state what the consequences of these two workarounds are? If any of these methods is unsupported, please let us know, as that would be an instant no-go.

--- Additional comment from Brett Hull on 2024-01-30 21:49:40 UTC ---

Hello Guillaume,

Would it be possible to get this fix into v6.1z4? I know the build is set for 01-Feb-2024, then testing, and the fix looks pretty benign.

Best regards,
Brett

--- Additional comment from Manny on 2024-01-31 15:07:07 UTC ---

The customer requested a HF and I have done the same via the Ceph PRIO List:

~~~
From: Manuel Caldeira <mcaldeir>
Date: Wed, Jan 31, 2024 at 10:03 AM
Subject: [6.1z3async] [Hotfix Request] [Account #6993897 - LuxProvide S.A.] [Case #03724324] [Bug #2261977] [Hot Swappable disks are excluded for use as an OSD.]
To: ceph-prio-list <ceph-prio-list>

Hello All,

We have a HF request for LuxProvide S.A. #6993897.

Issue description: Hot Swappable disks are excluded for use as an OSD. The customer has tried (unsuccessfully) to change HW settings so the disks are not seen as Hot Swappable.

Business Justification: For reasons unknown, the customer is re-installing this cluster at the latest code and this unforeseen issue is blocking them from getting this system back into production.

Case: 03724324
Bug: 2261977
Product Version: RHCS 6.1z3async
Is there a root cause for the issue from RHCS Engineering documented in the BZ? Yes
Does a workaround exist: No
Does the fix exist in higher versions of Ceph? No, only upstream
Patch link: https://github.com/ceph/ceph/pull/54706
~~~

BR
Manny

--- Additional comment from on 2024-01-31 20:22:45 UTC ---

Requesting qa_ack+ so we can attach this BZ to the errata advisory.

QE, the code is in the latest builds, but I don't know to what extent (if any) QE needs to verify? (In the past, we'd enter the CodeChange keyword.)

Thanks,
Thomas

--- Additional comment from errata-xmlrpc on 2024-02-01 05:02:47 UTC ---

Bug report changed to ON_QA status by Errata System.

A QE request has been submitted for advisory RHBA-2024:125725-01
https://errata.engineering.redhat.com/advisory/125725

--- Additional comment from errata-xmlrpc on 2024-02-01 05:02:55 UTC ---

This bug has been added to advisory RHBA-2024:125725 by Thomas Serlin (tserlin)

--- Additional comment from Guillaume Abrioux on 2024-02-01 12:17:10 UTC ---

(In reply to Brett Hull from comment #4)
> Hello Guillaume,
>
> Would it be possible to get this fix into v6.1z4? I know the build is set
> for 01-Feb-2024, then testing, and the fix looks pretty benign.
>
> Best regards,
> Brett

Yes, I cherry-picked all the fixes into the 6.1z4 downstream branch.

(In reply to Brett Hull from comment #3)
> We managed to enroll disks into a ceph 17 cluster this way with this
> command: ceph orch daemon add osd --method lvm
> ceph-cluster-node04:/dev/vg-test/lv-test
>
> During our initial testing we also were able to bypass the ceph-volume
> filter using: ceph orch daemon add osd --method raw
> ceph-cluster-node04:/dev/sdg.
>
> They would prefer it if Engineering sanctioned the use of: ceph orch daemon
> add osd --method raw ceph-cluster-node04:/dev/sdg

As far as I know, both scenarios are supported.
Maybe the only thing with the raw based OSDs is that it is less flexible than the 'lvm batch' deployment, which offers more possibilities (the raw method has a strict 1:1 relation between block devices and db devices).

I tested the pre-created VGs/LVs approach with a 6.1z3 cluster where devices are seen as removable, which worked:

# ceph-volume inventory /dev/vdc --format json | jq
{
    "path": "/dev/vdc",
    "sys_api": {
        "removable": 1,
        "ro": "0",
        "vendor": "0x1af4",
        "model": "",
        "rev": "",
        "sas_address": "",
        "sas_device_handle": "",
        "support_discard": "512",
        "rotational": "1",
        "nr_requests": "256",
        "device_nodes": "vdc",
        "scheduler_mode": "none",
        "partitions": {},
        "sectors": 0,
        "sectorsize": "512",
        "size": 214748364800.0,
        "human_readable_size": "200.00 GB",
        "path": "/dev/vdc",
        "locked": 0,
        "type": "disk"
    },
    "ceph_device": false,
    "lsm_data": {},
    "available": false,
    "rejected_reasons": [
        "removable"     <--- we can see it is rejected here because ceph-volume sees it as removable
    ],
    "device_id": "",
    "lvs": []
}

lvm batch won't create OSDs:

# ceph-volume lvm batch --prepare --no-systemd --bluestore /dev/vdc --report
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 1.0
--> All data devices are unavailable

Total OSDs: 0

  Type            Path                      LV Size         % of device
#

Creating VGs/LVs in advance:

# pvcreate /dev/vdc
  Physical volume "/dev/vdc" successfully created.
# vgcreate vg_test1 /dev/vdc
  Volume group "vg_test1" successfully created
# lvcreate -n lv1 -l 100%FREE vg_test1
  Logical volume "lv1" created.
# ceph-volume lvm batch --prepare --no-systemd --bluestore vg_test1/lv1 --report
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 0 physical, 1 LVM
--> relative data size: 1.0

Total OSDs: 1

  Type            Path                      LV Size         % of device
----------------------------------------------------------------------------------------------------
  data            vg_test1/lv1              200.00 GB       10000.00%
#
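To complete the workaround from here, the pre-created LV would then be handed to the orchestrator the same way the customer already did. A hedged sketch; <osd-host> is a placeholder for the node owning /dev/vdc, and vg_test1/lv1 is the LV created above:

# same command form the customer used, pointed at the LV from the demo above
ceph orch daemon add osd --method lvm <osd-host>:/dev/vg_test1/lv1

As in the customer's test, the resulting OSD should show up in ceph df even though ceph-volume inventory keeps rejecting the underlying disk as removable.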
--- Additional comment from Shravan Kumar Tiwari on 2024-02-01 13:21:45 UTC ---

(In reply to Guillaume Abrioux from comment #9)
> As far as I know, both scenarios are supported.
> Maybe the only thing with the raw based OSDs is that it is less flexible
> than the 'lvm batch' deployment, which offers more possibilities (the raw
> method has a strict 1:1 relation between block devices and db devices).
>
> I tested the pre-created VGs/LVs approach with a 6.1z3 cluster where devices
> are seen as removable, which worked.

Thanks a lot Guillaume for working on the workaround. I will check it with our consultant involved in this engagement and see if we can try it immediately in the customer environment. Thanks also for clarifying that it is a supported workaround for the customer.

Just one thing: if the customer goes ahead with this workaround to unblock and later updates to 6.1z4 (where the fix will be GA), they can upgrade to 6.1z4 and it should be fine. Nothing extra is needed from their side on the existing cluster or when adding new servers or disks, right?

--- Additional comment from Manny on 2024-02-01 17:05:45 UTC ---

Hello,

Was this fix included in RHCS 6.1z4? We have a conference call with the CU tomorrow and I want to make sure I have this one aspect correct. I may also ask the build guys.

Also, thanks for the workaround. We may not use it here, but I will add it to the KCS I have written. Thanks again.

Best regards,
Manny Caldeira
Software Maintenance Engineer
Red Hat Ceph Storage (RHCS)

--- Additional comment from Brett Hull on 2024-02-01 17:19:41 UTC ---

(In reply to Manny from comment #11)
> Was this fix included in RHCS 6.1z4? We have a conference call with the CU
> tomorrow and I want to make sure I have this one aspect correct. I may also
> ask the build guys.

Hello Manny,

It has been added. Did not want to waste Guillaume's time with this.
The following bugs have been added:

bug 2261977 - [6.1z3] regression from 5.3z5 on the handling of HotSwap SATA drives.

I have updated the case with the answers to the customer.

Guillaume!!!! Thank you so much for this excellent work!!!!!

Best regards,
Brett

--- Additional comment from on 2024-02-01 23:14:35 UTC ---

The testfix container tarball can be downloaded from here:

http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/testfixes/bz2261977-0-el9/rhceph-6-267.0.TEST.bz2261977.tar.gz
http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/testfixes/bz2261977-0-el9/rhceph-6-267.0.TEST.bz2261977.tar.gz.SHA256SUM
http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/testfixes/bz2261977-0-el9/rhceph-6-267.0.TEST.bz2261977.tar.SHA256SUM

Info about the testfix (which is based on the RHCS 6.1z3 async release):

* rhceph-container-6-267.0.TEST.bz2261977
* Pull from: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:6-267.0.TEST.bz2261977
* Brew link for testfix container: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=2884660
* Ceph build inside container: ceph-17.2.6-170.0.TEST.bz2261977.el9cp

Based on the following -patches branch (c6a40c6b42048e558594e97254521f7e875bea17):
https://gitlab.cee.redhat.com/ceph/ceph/-/commits/private-tserlin-ceph-6.1-rhel-9-test-hotfix-bz2261977-patches

Thomas

--- Additional comment from Sayalee on 2024-02-02 04:37:28 UTC ---

Hello Guillaume,

Are the below steps to verify the fix on RHCS 6.1z4 appropriate?

1) Deploy a RHCS 6.1z4 cluster where nodes have hot-swappable SATA drives.
2) Deploy OSDs on the hot-swappable SATA drives.

If there is anything I am missing or any modifications are required, please let me know.

Thanks,
Sayalee

--- Additional comment from Guillaume Abrioux on 2024-02-02 10:25:42 UTC ---

(In reply to Shravan Kumar Tiwari from comment #10)
> Just one thing: if the customer goes ahead with this workaround to unblock
> and later updates to 6.1z4 (where the fix will be GA), they can upgrade to
> 6.1z4 and it should be fine. Nothing extra is needed from their side on the
> existing cluster or when adding new servers or disks, right?

That's correct.

--- Additional comment from Guillaume Abrioux on 2024-02-02 10:26:59 UTC ---

(In reply to Sayalee from comment #14)
> Are the below steps to verify the fix on RHCS 6.1z4 appropriate?
>
> 1) Deploy a RHCS 6.1z4 cluster where nodes have hot-swappable SATA drives.
> 2) Deploy OSDs on the hot-swappable SATA drives.

That's correct.
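A sketch of what that verification could look like on a node carrying the hot-swappable SATA drives. This assumes a fixed build is installed; /dev/sdb and <osd-host> are illustrative placeholders, and the dry-run command is the one from the original report:

# the drives should now appear as available instead of being rejected as "removable"
cephadm ceph-volume inventory
ceph orch device ls --refresh

# preview, then deploy OSDs on all available devices
ceph orch apply osd --all-available-devices --dry-run
ceph orch apply osd --all-available-devices

# or target a single hot-swappable drive directly
ceph orch daemon add osd <osd-host>:/dev/sdb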
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925