Bug 1598374

Summary: Installation of OSD scenario "bluestore_wal_devices" for bluestore is failing in Ubuntu and RHEL
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Ansible
Version: 3.1
Target Milestone: rc
Target Release: 3.2
Hardware: Unspecified
OS: Other
Status: CLOSED WONTFIX
Severity: high
Priority: high
Reporter: Ramakrishnan Periyasamy <rperiyas>
Assignee: Sébastien Han <shan>
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
CC: anharris, aschoen, ceph-eng-bugs, flucifre, gmeno, hnallurv, kdreyer, mmurthy, nthomas, sankarshan, seb, shan
Type: Bug
Last Closed: 2018-11-05 17:05:58 UTC
Bug Blocks: 1644347
Attachments: ceph-ansible log file; ansible hosts file

Description Ramakrishnan Periyasamy 2018-07-05 09:01:40 UTC
Created attachment 1456716 [details]
ceph-ansible log file.

Description of problem:
ceph-ansible failed to create OSDs for the OSD scenario "bluestore_wal_devices" on Ubuntu. On RHEL it worked without any issues.


Added the OSD scenario to the /ansible/hosts file as shown below:
------------------------------------------------------
[osds]
node057 dedicated_devices="['/dev/sdc']" devices="['/dev/sdb']" bluestore_wal_devices="['/dev/sdd']" osd_scenario="non-collocated" osd_objectstore="bluestore"
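
As a quick sanity check before running the playbook, the devices named in the inventory can be inspected directly on the OSD node. A minimal shell sketch, using the device names from the host line above (ceph-ansible itself issues an equivalent parted query, as the log below shows):
------------------------------------------------------
# Data, block.db and block.wal devices from the inventory line.
lsblk /dev/sdb /dev/sdc /dev/sdd
# Roughly the same partition-table query ceph-ansible runs via the parted module.
parted --script /dev/sdb unit MiB print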

The playbook failed with the error below.
--------------------------------------
2018-07-05 07:13:51,418 p=28963 u=ubuntu |  changed: [magna057] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u"unit 'MiB' print", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/sdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/sdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/sdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'ATA Hitachi HUA72201', u'unit': u'mib', u'size': 953870.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/sdc', u'/dev/sdd', u'/dev/sdb']) => {
    "changed": true,
    "cmd": [
        "ceph-disk",
        "prepare", 
        "--cluster",
        "ceph", 
        "--bluestore",
        "--block.db",
        "/dev/sdc", 
        "--block.wal",
        "/dev/sdd",
        "/dev/sdb"
    ],
    "delta": "0:00:17.802939",
    "end": "2018-07-05 07:13:51.400425",
    "failed": false,
    "invocation": {
        "module_args": {
            "_raw_params": "ceph-disk prepare --cluster ceph --bluestore --block.db /dev/sdc --block.wal /dev/sdd /dev/sdb",
            "_uses_shell": false,
            "chdir": null, 
            "creates": null, 
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": [
        {
            "_ansible_ignore_errors": null,
            "_ansible_item_result": true,
            "_ansible_no_log": false,
            "_ansible_parsed": true,
            "changed": false,
            "disk": {
                "dev": "/dev/sdb",
                "logical_block": 512,
                "model": "ATA Hitachi HUA72201",
                "physical_block": 512,
                "size": 953870.0,
                "table": "unknown",
                "unit": "mib"
            },
            "failed": false,
            "invocation": {
                "module_args": {
                    "align": "optimal",
                    "device": "/dev/sdb",
                    "flags": null,
                    "label": "msdos",
                    "name": null,
                    "number": null,
                    "part_end": "100%",
                    "part_start": "0%",
                    "part_type": "primary",
                    "state": "info",
                    "unit": "MiB"
                }
            },
            "item": "/dev/sdb",
            "partitions": [],
            "script": "unit 'MiB' print"
        },
        "/dev/sdc",
        "/dev/sdd",
        "/dev/sdb"
    ],
    "rc": 0,
    "start": "2018-07-05 07:13:33.597486",
    "stderr": "prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data\nprepare_device: OSD will not be hot-swappable if block.wal is not the same device as the osd data",
    "stderr_lines": [
        "prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data",
        "prepare_device: OSD will not be hot-swappable if block.wal is not the same device as the osd data"


Version-Release number of selected component (if applicable):
ceph version: 12.2.5-13redhat1xenial
ceph-ansible version: 3.1.0~rc9-2redhat1

How reproducible:
1/1

Steps to Reproduce:
1. Prepare a node for cluster installation on Ubuntu
2. Update all.yml
3. Start the installation (a sketch of this step follows the list)
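
A minimal shell sketch of step 3, assuming the standard ceph-ansible layout on the admin node (the /ansible/hosts inventory path is the one from this report; the exact playbook invocation is not given in the report):
--------------------------------------
cd /usr/share/ceph-ansible
# The shipped sample playbook is copied before use.
cp site.yml.sample site.yml
ansible-playbook -i /ansible/hosts site.yml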

Actual results:
The OSD scenario "bluestore_wal_devices" failed on Ubuntu.

Expected results:
OSDs should be created.

Additional info:
NA

Comment 3 Ramakrishnan Periyasamy 2018-07-05 09:02:48 UTC
Created attachment 1456719 [details]
ansible hosts file

Comment 7 seb 2018-07-25 13:44:48 UTC
If this moves to 3.2, then we might as well close it, since ceph-disk-based deployments won't be allowed in RHCS 3.2.
Let's keep it open for tracking when this is completed in ceph-ansible upstream.

Comment 8 Ken Dreyer (Red Hat) 2018-10-02 17:12:19 UTC
Sebastien, what upstream change disallows ceph-disk deployments in 3.2?

Comment 9 Sébastien Han 2018-10-02 21:20:23 UTC
Ken, there is no such thing in 3.2 yet, but this will likely come after this PR: https://github.com/ceph/ceph-ansible/pull/2866.

Comment 11 Sébastien Han 2018-11-05 16:11:23 UTC
Did you get the info you wanted? Thanks

Comment 12 Harish NV Rao 2018-11-05 16:51:14 UTC
(In reply to leseb from comment #11)
> Did you get the info you wanted? Thanks

This is from Federico in one of the emails "From a QE perspective, your direct tests should go to Ceph-volume, and so should doc work. Ceph-disk is still there and used in places, that gets tested by higher layer validation (specifically of Ceph-Ansible)."

Comment 13 Sébastien Han 2018-11-05 17:05:58 UTC
This means only ceph-volume is supported, as we no longer encourage people to use ceph-disk. So I'm closing this. Feel free to re-open if you have more concerns.