Bug 1685253

Summary: ceph-ansible non-collocated OSD scenario should not create block.wal by default
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Mike Hackett <mhackett>
Component: Ceph-Ansible
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED ERRATA
QA Contact: Yogesh Mane <ymane>
Severity: high
Docs Contact: Erin Donnelly <edonnell>
Priority: unspecified
Version: 3.2
CC: agunn, aschoen, ceph-eng-bugs, dsavinea, edonnell, gmeno, jbrier, nthomas, sankarshan, tchandra, tserlin, vumrao, ymane
Target Milestone: rc
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.16-1.el7cp; Ubuntu: ceph-ansible_3.2.16-2redhat1
Doc Type: Bug Fix
Doc Text:
.The BlueStore WAL and DB partitions are now only created when dedicated devices are specified for them
Previously, in containerized deployments using the `non-collocated` scenario, the BlueStore WAL partition was created by default on the same device as the BlueStore DB partition when it was not required. With this update, the `bluestore_wal_devices` variable is no longer set to `dedicated_devices` by default, and the BlueStore WAL partition is no longer created on the BlueStore DB device.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-08-21 15:10:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1726135

Description Mike Hackett 2019-03-04 18:58:57 UTC
Description of problem:
Currently, when deploying BlueStore OSDs using the non-collocated OSD scenario (ceph-disk), we create a ceph block.wal partition by default. We should not create block.wal unless three tiers of storage are in use, for example HDD, SSD, and NVDIMM.

The playbook does have the 'bluestore_wal_devices' option. Could we create block.wal only when this parameter is populated with a device, and change the default to create only ceph block.db and ceph data?
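For illustration, a minimal sketch of what the requested default behavior would look like in group_vars/osds.yml. The device paths here are placeholders, not taken from this report:

```yaml
# Hypothetical non-collocated configuration (device paths are placeholders).
osd_scenario: non-collocated
osd_objectstore: bluestore

# Slow data devices: hold ceph data and ceph block.
devices:
  - /dev/sdc
  - /dev/sdd

# Faster devices: hold ceph block.db only.
dedicated_devices:
  - /dev/sdb
  - /dev/sdb

# bluestore_wal_devices deliberately left unset: with the requested default,
# no block.wal partition would be carved out of the block.db device.
```

With the fix (ceph-ansible 3.2.16), leaving bluestore_wal_devices unset no longer defaults it to dedicated_devices.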

Version-Release number of selected component (if applicable):
RHCS 3.2

How reproducible:
Constant

Steps to Reproduce:
1. Deploy a BlueStore OSD using Ceph-Ansible and the non-collocated scenario.

Actual results:
In the non-collocated scenario we create ceph data, ceph block.db, and ceph block.wal partitions.

[root@ceph6 ~]# lsblk
sdb                   8:16   0 111.8G  0 disk 
├─sdb5                8:21   0     1G  0 part 
└─sdb6                8:22   0   576M  0 part 
sdc                   8:32   0 465.8G  0 disk 
├─sdc1                8:33   0   100M  0 part /var/lib/ceph/osd/ceph-2
└─sdc2                8:34   0 465.7G  0 part 

[root@ceph6 ~]# blkid /dev/sdc*
/dev/sdc: PTTYPE="gpt" 
/dev/sdc1: UUID="d1d98c71-61ae-42f0-98e0-b6fb64930044" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="0b88f43d-f7d3-4326-ba96-b1d8c2a41684" 
/dev/sdc2: PARTLABEL="ceph block" PARTUUID="bb1d67ca-5fc1-438f-936a-e32c54bfac36" 

[root@ceph6 ~]# blkid /dev/sdb*
/dev/sdb5: PARTLABEL="ceph block.db" PARTUUID="e2376d5c-f42b-4e59-aa79-b789bdc43cef" 
/dev/sdb6: PARTLABEL="ceph block.wal" PARTUUID="b60c1235-5c8e-4e1e-8704-7b4e49061197"


Expected results:
We should only be deploying ceph data, ceph block, and ceph block.db; block.wal should not be created unless explicitly requested.
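Conversely, a hedged sketch of the three-tier case where a block.wal partition is actually wanted, with each tier on progressively faster media (device paths are placeholders):

```yaml
# Hypothetical three-tier layout: only here should block.wal be created.
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdc          # HDD: ceph data + ceph block
dedicated_devices:
  - /dev/sdb          # SSD: ceph block.db
bluestore_wal_devices:
  - /dev/nvme0n1      # NVMe: ceph block.wal, created only because this is set
```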

Additional info:

Comment 11 errata-xmlrpc 2019-08-21 15:10:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538