Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 1652475

Summary: ceph-volume batch - scaleup fails in bluestore dedicated scenario
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Volume
Version: 3.2
Target Release: 3.2
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Reporter: Vasishta <vashastr>
Assignee: Alfredo Deza <adeza>
QA Contact: Tiffany Nguyen <tunguyen>
CC: agunn, anharris, ceph-eng-bugs, ceph-qe-bugs, gmeno, hnallurv, kdreyer, vashastr
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2018-12-04 16:03:55 UTC
Bug Blocks: 1637523

Description Vasishta 2018-11-22 08:19:31 UTC
Description of problem:
When trying to add a new device to configure OSDs on a node where OSDs are already running on a mixture of HDDs and NVMe devices configured with ceph-volume lvm batch, the playbook fails in the task "ceph-osd : use ceph-volume lvm batch to create bluestore osds", reporting that the size of an LV cannot be 0.

This happens only in the bluestore scenario, because ceph-volume reserves all available space for the block.db LVs during the initial run. In filestore, the journal_size value determines the size of each OSD journal part, so adding OSDs remains possible.

Version-Release number of selected component (if applicable):
3.2.0~rc3-2redhat1

How reproducible:
Always

Steps to Reproduce:
1. Bring up a cluster with bluestore OSDs using the lvm batch scenario (devices + osd_scenario: lvm from ceph-ansible)
2. Try to add a new OSD by adding an HDD to the list of devices.
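The two runs above correspond to a ceph-ansible group_vars fragment along these lines (the device paths are illustrative placeholders, not taken from the report):

```yaml
# group_vars/osds.yml -- initial run
osd_scenario: lvm
osd_objectstore: bluestore
osds_per_device: 4
devices:
  - /dev/sda        # HDD (OSD data)
  - /dev/nvme0n1    # NVMe (block.db)

# scale-up run: append the new HDD to the same list and re-run the playbook
#  - /dev/sdb
```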

Actual results:
Running command: lvcreate --yes -L 0G -n osd-block-db-581246f6-b136-457b-9ba6-2a52b9c23d90 ceph-block-dbs-ca8a6463-ffd2-4988-a605-c6033e958ec8
 stderr: --size may not be zero.
 Run `lvcreate --help' for more information.

Expected results:
OSD addition must be successful

Additional info:
block.db size should be restricted, as journal size is in filestore, leaving space provisioned for future OSDs instead of using all space during the initial run.

We had 1 HDD and 1 NVMe at the beginning; with osds_per_device set to 4, four LVs of 232G each were created for block.db.
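A back-of-the-envelope sketch of why the scale-up run asks lvcreate for 0G. It assumes an NVMe device with roughly 931 GiB usable (consistent with the four 232G LVs reported above); the function name and even-split model are ours, a simplification of the actual batch sizing logic:

```python
def db_lv_size_gb(free_gb, new_db_lvs):
    """Illustrative only: split the *remaining* free space on the
    fast device evenly among the block.db LVs about to be created."""
    return free_gb // new_db_lvs

# Initial run: ~931 GiB free, osds_per_device=4 -> four block.db LVs
initial = db_lv_size_gb(931, 4)        # 232 GiB each, matching the report

# Scale-up run: the first run consumed 4 * 232 = 928 GiB of the NVMe,
# so only ~3 GiB remain for the new OSD's block.db
leftover = 931 - 4 * initial           # ~3 GiB
scaleup = db_lv_size_gb(leftover, 4)   # 0 -> "lvcreate -L 0G" fails
```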

Comment 3 Alfredo Deza 2018-11-22 15:07:52 UTC
> Additional info:
> block-db size must be restricted as in journal size in filestore and keep 
> provision instead of using all space during initial run.

This is not the intended behavior (on purpose). There is no "best size" to define what that restricted size should be, so the default is to use as much as possible.

As the description notes, this is configurable: if an admin wants to avoid using "as much as possible" of the backing device, they should define this size explicitly.

There is no fix for this ticket, feel free to close as NOTABUG.

Comment 4 Vasishta 2018-11-22 16:57:51 UTC
Hi Alfredo,

(In reply to Alfredo Deza from comment #3)
> The description is correct as in this is configurable, so if an admin wants
> to avoid using "as much as possible" from the backing device, it should
> define this.

I observed that block-db was not configurable; only the filestore journal was created with a restricted size, using the default journal size value.
 
I couldn't figure out how an admin can restrict the block-db size.

Comment 5 Alfredo Deza 2018-11-26 12:30:20 UTC
In the `ceph-volume lvm batch --help` menu you can get the following:

>  --block-db-size BLOCK_DB_SIZE
>                        Set (or override) the "bluestore_block_db_size" value,
>                        in bytes
>  --journal-size JOURNAL_SIZE
>                        Override the "osd_journal_size" value, in megabytes


The ceph.conf options for bluestore and filestore aren't well documented in Ceph itself, but you can set them there as well. The CLI flags override any values present in ceph.conf.
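Concretely, the two routes sketch out as below. The 32 GiB figure (34359738368 bytes) and the device paths are arbitrary examples, not recommendations:

```shell
# Route 1: ceph.conf (value in bytes):
#   [osd]
#   bluestore_block_db_size = 34359738368

# Route 2: CLI flag, overriding any ceph.conf value (also in bytes):
ceph-volume lvm batch --bluestore --block-db-size 34359738368 /dev/sda /dev/nvme0n1
```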

Comment 8 Christina Meno 2018-11-27 16:20:29 UTC
not a blocker -- needs a doc BZ cut