Bug 1749919 - [RFE] Sharded BlueStore
Summary: [RFE] Sharded BlueStore
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 5.0
Assignee: Adam Kupczyk
QA Contact: skanta
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 1750994 1929682 1959686
 
Reported: 2019-09-06 19:04 UTC by Vikhyat Umrao
Modified: 2023-09-15 00:18 UTC
CC List: 11 users

Fixed In Version: ceph-16.0.0-8633.el8cp
Doc Type: Enhancement
Doc Text:
.Sharding of the RocksDB database using column families is supported
With the BlueStore admin tool, the goal is to achieve less read and write amplification, decrease database (DB) expansion during compaction, and improve IOPS performance. With this release, you can reshard the database with the BlueStore admin tool. The data in the RocksDB database is split into multiple column families (CFs). Each CF has its own options, and the split is performed according to the type of data, such as omap, object data, delayed cached writes, and PGlog. For more information on resharding, see the link:{admin-guide}#resharding-the-rocksdb-database-using-the-bluestore-admin-tool_admin[_Resharding the RocksDB database using the BlueStore admin tool_] section in the _{storage-product} Administration Guide_. (An illustrative command sketch follows the field list below.)
Clone Of:
Environment:
Last Closed: 2021-08-30 08:22:57 UTC
Embargoed:
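
As a rough sketch of the resharding workflow described in the Doc Text above (the command forms are the ones used in the verification in comment 9; the OSD path and the sharding specification are examples from that run and will vary per deployment, and the OSD daemon must be stopped before resharding):

  # Reshard the offline OSD's RocksDB into the requested column families.
  ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph-0/ --sharding="m(3) p(3,0-12) O(3,0-13) L P" reshard

  # Display the sharding definition now recorded for this OSD.
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ show-sharding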




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 41690 0 None None None 2019-09-06 19:04:27 UTC
Github ceph ceph pull 34006 0 None closed Sharding of rocksdb database using column families 2021-03-15 20:48:20 UTC
Red Hat Issue Tracker RHCEPH-810 0 None None None 2021-08-19 16:43:26 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:23:31 UTC

Description Vikhyat Umrao 2019-09-06 19:04:27 UTC
Description of problem:
[RFE] Sharded BlueStore
https://tracker.ceph.com/issues/41690

Version-Release number of selected component (if applicable):
RHCS 3.3

Comment 5 skanta 2021-03-15 16:41:37 UTC
Did not find any issues in the regression runs.
https://trello.com/c/onPhiMMg/136-teuthology-suites-for-50

Hence closing the bug.

Comment 9 skanta 2021-03-31 10:51:05 UTC
Verified the feature with the below procedure:

As the root user:

1. yum-config-manager --add-repo=http://download.eng.bos.redhat.com/rhel-8/composes/auto/ceph-5.0-rhel-8/latest-RHCEPH-5-RHEL-8/compose/OSD/x86_64/os/
2. sudo dnf install --nogpgcheck -y ceph-osd
   podman ps
    
3. podman stop <container-id>
4. systemctl stop ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service 

5. cd /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.<OSD.No>/
6. vi unit.run 

7. Modified the unit.run file so that the container starts a shell (/bin/bash) instead of the ceph-osd daemon, changing

/bin/podman run --rm --ipc=host --authfile=/etc/ceph/podman-auth.json --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342-osd.0 -d --log-driver journald --conmon-pidfile /run/ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service-pid --cidfile /run/ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service-cid -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5def3ad899adfa082811411812633323847f3299c20dfb260b64d2caa391df44 -e NODE_NAME=depressa008 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342:/var/run/ceph:z -v /var/log/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342:/var/log/ceph:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /var/lib/ceph/ea3b0a34-8d21-11eb-aa2f-002590fbc342/selinux:/sys/fs/selinux:ro -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:5def3ad899adfa082811411812633323847f3299c20dfb260b64d2caa391df44 -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true '--default-log-stderr-prefix=debug '

TO

/bin/podman run -it --entrypoint /bin/bash --privileged --group-add=disk --init --name ceph-0249fae2-910d-11eb-8630-002590fbc342-osd.0 -d --log-driver journald --conmon-pidfile /run/ceph-0249fae2-910d-11eb-8630-002590fbc342.service-pid --cidfile /run/ceph-0249fae2-910d-11eb-8630-002590fbc342.service-cid -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:e83a69844f3359fa74e8eabd2a4bfa171f55605cf6ea154b0aab504d0296ca23 -e NODE_NAME=depressa008 -e CEPH_USE_RANDOM_NONCE=1 -v /var/run/ceph/0249fae2-910d-11eb-8630-002590fbc342:/var/run/ceph:z -v /var/log/ceph/0249fae2-910d-11eb-8630-002590fbc342:/var/log/ceph:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/osd.0:/var/lib/ceph/osd/ceph-0:z -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/osd.0/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /var/lib/ceph/0249fae2-910d-11eb-8630-002590fbc342/selinux:/sys/fs/selinux:ro -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:e83a69844f3359fa74e8eabd2a4bfa171f55605cf6ea154b0aab504d0296ca23

8. systemctl start ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service

9. systemctl status ceph-ea3b0a34-8d21-11eb-aa2f-002590fbc342.service

10. "podman ps -a" - the status should show that the container is running

11. "ps -aef | grep /usr/bin/ceph-osd" should not contain an osd.0 entry

12. podman exec -it <container-id> /bin/bash

13. ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph-0/ --sharding="m(3) p(3,0-12) O(3,0-13) L P" reshard

14. ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0/ show-sharding

Final output:

  [root@depressa008 /]# ceph-bluestore-tool --log-level 10 -l log.txt --path /var/lib/ceph/osd/ceph-0/ --sharding="m(3) p(3,0-12) O(3,0-13) L P" reshard
                reshard success
  [root@depressa008 /]# ceph-bluestore-tool  --path /var/lib/ceph/osd/ceph-0/  show-sharding
                m(3) p(3,0-12) O(3,0-13) L P
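
For automation of the same check, a minimal sketch of a wrapper script run inside the OSD container (hypothetical; it assumes the ceph-osd daemon is stopped and reuses the path and sharding specification from the steps above):

  #!/bin/bash
  set -e
  OSD_PATH=/var/lib/ceph/osd/ceph-0/
  SPEC='m(3) p(3,0-12) O(3,0-13) L P'

  # Reshard the offline OSD into the requested column families.
  ceph-bluestore-tool --log-level 10 -l log.txt --path "$OSD_PATH" --sharding="$SPEC" reshard

  # show-sharding should report exactly the specification that was applied.
  APPLIED=$(ceph-bluestore-tool --path "$OSD_PATH" show-sharding)
  if [ "$APPLIED" = "$SPEC" ]; then
      echo "reshard verified: $APPLIED"
  else
      echo "sharding mismatch: got '$APPLIED', expected '$SPEC'" >&2
      exit 1
  fi

In the verified run above, show-sharding returned the same string that was passed to reshard.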

Comment 10 skanta 2021-04-01 04:30:05 UTC
Checked at below ceph version-
[ceph: root@magna045 ceph]# ceph -v
ceph version 16.1.0-1323.el8cp (46ac37397f0332c20aceceb8022a1ac1ddf8fa73) pacific (rc)
[ceph: root@magna045 ceph]#

Comment 21 errata-xmlrpc 2021-08-30 08:22:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

Comment 22 Red Hat Bugzilla 2023-09-15 00:18:37 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

