Bug 1598374 - Installation of OSD scenario "bluestore_wal_devices" for bluestore is failing in Ubuntu and RHEL
Summary: Installation of OSD scenario "bluestore_wal_devices" for bluestore is failing...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.1
Hardware: Unspecified
OS: Other
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.2
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1644347
 
Reported: 2018-07-05 09:01 UTC by Ramakrishnan Periyasamy
Modified: 2020-01-23 04:01 UTC (History)
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-05 17:05:58 UTC
Embargoed:


Attachments
ceph-ansible log file. (3.56 MB, text/plain)
2018-07-05 09:01 UTC, Ramakrishnan Periyasamy
ansible hosts file (898 bytes, text/plain)
2018-07-05 09:02 UTC, Ramakrishnan Periyasamy


Links
Github ceph/ceph-ansible pull 3187 (closed): [skip ci] Remove ceph-disk support, last updated 2020-02-11 21:41:12 UTC

Description Ramakrishnan Periyasamy 2018-07-05 09:01:40 UTC
Created attachment 1456716 [details]
ceph-ansible log file.

Description of problem:
ceph-ansible failed to create OSDs for the OSD scenario "bluestore_wal_devices" in Ubuntu. In RHEL it worked without any issues.


Added the OSD scenario in the /ansible/hosts file as below:
------------------------------------------------------
[osds]
node057 dedicated_devices="['/dev/sdc']" devices="['/dev/sdb']" bluestore_wal_devices="['/dev/sdd']" osd_scenario="non-collocated" osd_objectstore="bluestore"
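For reference, the same host variables can also be set in group_vars/osds.yml instead of inline in the inventory. This is a minimal sketch using the values from the line above; the file layout is illustrative and not taken from the attached hosts file:
------------------------------------------------------
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb            # OSD data device
dedicated_devices:
  - /dev/sdc            # used for block.db
bluestore_wal_devices:
  - /dev/sdd            # used for block.wal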

The playbook failed with the below error:
--------------------------------------
2018-07-05 07:13:51,418 p=28963 u=ubuntu |  changed: [magna057] => (item=[{'_ansible_parsed': True, u'changed': False, '_ansible_no_log': False, u'script': u"unit 'MiB' print", '_ansible_item_result': True, 'failed': False, 'item': u'/dev/sdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/sdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/sdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'ATA Hitachi HUA72201', u'unit': u'mib', u'size': 953870.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/sdc', u'/dev/sdd', u'/dev/sdb']) => {
    "changed": true,
    "cmd": [
        "ceph-disk",
        "prepare", 
        "--cluster",
        "ceph", 
        "--bluestore",
        "--block.db",
        "/dev/sdc", 
        "--block.wal",
        "/dev/sdd",
        "/dev/sdb"
    ],
    "delta": "0:00:17.802939",
    "end": "2018-07-05 07:13:51.400425",
    "failed": false,
    "invocation": {
        "module_args": {
            "_raw_params": "ceph-disk prepare --cluster ceph --bluestore --block.db /dev/sdc --block.wal /dev/sdd /dev/sdb",
            "_uses_shell": false,
            "chdir": null, 
            "creates": null, 
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": [
        {
            "_ansible_ignore_errors": null,
            "_ansible_item_result": true,
            "_ansible_no_log": false,
            "_ansible_parsed": true,
            "changed": false,
            "disk": {
                "dev": "/dev/sdb",
                "logical_block": 512,
                "model": "ATA Hitachi HUA72201",
                "physical_block": 512,
                "size": 953870.0,
                "table": "unknown",
                "unit": "mib"
            },
            "failed": false,
            "invocation": {
                "module_args": {
                    "align": "optimal",
                    "device": "/dev/sdb",
                    "flags": null,
                    "label": "msdos",
                    "name": null,
                    "number": null,
                    "part_end": "100%",
                    "part_start": "0%",
                    "part_type": "primary",
                    "state": "info",
                    "unit": "MiB"
                }
            },
            "item": "/dev/sdb",
            "partitions": [],
            "script": "unit 'MiB' print"
        },
        "/dev/sdc",
        "/dev/sdd",
        "/dev/sdb"
    ],
    "rc": 0,
    "start": "2018-07-05 07:13:33.597486",
    "stderr": "prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data\nprepare_device: OSD will not be hot-swappable if block.wal is not the same device as the osd data",
    "stderr_lines": [
        "prepare_device: OSD will not be hot-swappable if block.db is not the same device as the osd data",
        "prepare_device: OSD will not be hot-swappable if block.wal is not the same device as the osd data"


Version-Release number of selected component (if applicable):
ceph version: 12.2.5-13redhat1xenial
ceph-ansible version: 3.1.0~rc9-2redhat1

How reproducible:
1/1

Steps to Reproduce:
1. Prepare the node for cluster installation on Ubuntu
2. Update all.yml (see the sketch below)
3. Start the installation
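Step 2 refers to group_vars/all.yml. A minimal sketch of the kind of settings involved, assuming a Red Hat repository based install; the interface and network values below are placeholders, not taken from the reporter's environment:
------------------------------------------------------
ceph_origin: repository
ceph_repository: rhcs
ceph_rhcs_version: 3
monitor_interface: eth0           # placeholder interface name
public_network: 192.168.0.0/24    # placeholder network
osd_objectstore: bluestore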

Actual results:
The OSD scenario "bluestore_wal_devices" failed on Ubuntu.

Expected results:
OSDs should be created.

Additional info:
NA

Comment 3 Ramakrishnan Periyasamy 2018-07-05 09:02:48 UTC
Created attachment 1456719 [details]
ansible hosts file

Comment 7 seb 2018-07-25 13:44:48 UTC
If this moves to 3.2, then we might as well close this, since ceph-disk-based deployments won't be allowed in RHCS 3.2.
Let's keep it open for tracking when this is completed in ceph-ansible upstream.

Comment 8 Ken Dreyer (Red Hat) 2018-10-02 17:12:19 UTC
Sebastien, what upstream change disallows ceph-disk deployments in 3.2?

Comment 9 Sébastien Han 2018-10-02 21:20:23 UTC
Ken, there is no such thing in 3.2 yet, but this will likely come after this PR: https://github.com/ceph/ceph-ansible/pull/2866.

Comment 11 Sébastien Han 2018-11-05 16:11:23 UTC
Did you get the info you wanted? Thanks

Comment 12 Harish NV Rao 2018-11-05 16:51:14 UTC
(In reply to leseb from comment #11)
> Did you get the info you wanted? Thanks

This is from Federico in one of the emails "From a QE perspective, your direct tests should go to Ceph-volume, and so should doc work. Ceph-disk is still there and used in places, that gets tested by higher layer validation (specifically of Ceph-Ansible)."

Comment 13 Sébastien Han 2018-11-05 17:05:58 UTC
This means only ceph-volume is supported, as we no longer encourage people to use ceph-disk. So I'm closing this. Feel free to re-open if you have more concerns.
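For reference, with the ceph-disk based scenarios going away, the equivalent setup in ceph-ansible is the lvm OSD scenario driven by ceph-volume. A minimal sketch assuming the same /dev/sdb, /dev/sdc and /dev/sdd layout as in this report; the variable names follow the upstream lvm scenario documentation and are not verified against the exact ceph-ansible build used here:
------------------------------------------------------
osd_scenario: lvm
osd_objectstore: bluestore
lvm_volumes:
  - data: /dev/sdb      # OSD data device
    db: /dev/sdc        # block.db; may need to be a partition or LV on older ceph-volume
    wal: /dev/sdd       # block.wal; same caveat as db
With this scenario, ceph-ansible drives ceph-volume lvm instead of ceph-disk prepare.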

