Description of problem:
[RFE] Sharded BlueStore
https://tracker.ceph.com/issues/41690

Version-Release number of selected component (if applicable):
RHCS 3.3
Did not find any issues in the regression runs: https://trello.com/c/onPhiMMg/136-teuthology-suites-for-50 Hence closing the bug.
Verified the feature with the below procedure, as root:

1. yum-config-manager --add-repo=http://download.eng.bos.redhat.com/rhel-8/composes/auto/ceph-5.0-rhel-8/latest-RHCEPH-5-RHEL-8/compose/OSD/x86_64/os/
2. sudo dnf install --nogpgcheck -y ceph-osd
   podman ps
3. podman stop <container-id>
4. systemctl stop ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service
5. cd /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.<OSD.No>/
6. vi unit.run
7. Modified the unit.run file from:

/bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342-osd.0 -d --log-driver journald --conmon-pidfile /run/ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service-pid --cidfile /run/ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service-cid -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5def3ad899adfa082811411812633323847f3299c20dfb260b64d2caa391df44 -e NODE_NAME=depressa008 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342:/var/run/ceph:z -v /var/log/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342:/var/log/ceph:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/selinux:/sys/fs/selinux:ro -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5def3ad899adfa082811411812633323847f3299c20dfb260b64d2caa391df44 -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug '

to:

/bin/podman run -it --entrypoint /bin/bash --privileged --group-add=disk --init --name ceph-0249fae2-910d-11eb-8630-002590fbc342-osd.0 -d --log-driver journald --conmon-pidfile /run/ceph-0249fae2-910d-11eb-8630-002590fbc342.service-pid --cidfile /run/ceph-0249fae2-910d-11eb-8630-002590fbc342.service-cid -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:e83a69844f3359fa74e8eabd2a4bfa171f55605cf6ea154b0aab504d0296ca23 -e NODE_NAME=depressa008 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/0249fae2-910d-11eb-8630-002590fbc342:/var/run/ceph:z -v /var/log/ceph/0249fae2-910d-11eb-8630-002590fbc342:/var/log/ceph:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/selinux:/sys/fs/selinux:ro -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:e83a69844f3359fa74e8eabd2a4bfa171f55605cf6ea154b0aab504d0296ca23

8. systemctl start ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service
9. systemctl status ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service
10. "podman ps -a" should show the container running.
11. "ps -aef | grep /usr/bin/ceph-osd" should not contain an osd.0 entry.
12. podman exec -it <container-id> /bin/bash
13. ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph-0/ --sharding="m(3) p(3,0-12) O(3,0-13) L P" reshard
14. ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ show-sharding

Final output:

[root@depressa008 /]# ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph-0/ --sharding="m(3) p(3,0-12) O(3,0-13) L P" reshard
reshard success
[root@depressa008 /]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ show-sharding
m(3) p(3,0-12) O(3,0-13) L P
Checked with the below ceph version:

[ceph: root@magna045 ceph]# ceph -v
ceph version 16.1.0-1323.el8cp (46ac37397f0332c20aceceb8022a1ac1ddf8fa73) pacific (rc)
[ceph: root@magna045 ceph]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294
The needinfo request[s] on this closed bug have been removed, as they had been unresolved for 500 days.