Bug 1581571

Summary: Skip GPT header creation for lvm osd scenario
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sébastien Han <shan>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA
QA Contact: Parikshith <pbyregow>
Severity: medium
Priority: medium
Version: 3.0
CC: adeza, aschoen, ceph-eng-bugs, ceph-qe-bugs, gmeno, hnallurv, kdreyer, nthomas, sankarshan, vashastr
Target Milestone: rc
Target Release: 3.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.rc4.el7cp; Ubuntu: ceph-ansible_3.1.0~rc4-2redhat1
Last Closed: 2018-09-26 18:21:13 UTC
Type: Bug

Description Sébastien Han 2018-05-23 06:24:56 UTC
Using the lvm osd scenario for bluestore, with raw disks for block and partitions for wal & db, I get the following error:
```
failed: [10.34.34.101] (item={u'data': u'/dev/vdd', u'wal': u'/dev/vde6', u'db': u'/dev/vde5'})
cmd: ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdd --block.db /dev/vde5 --block.wal /dev/vde6
start: 2018-05-16 14:30:23.524749, end: 2018-05-16 14:30:24.151505, delta: 0:00:00.626756
msg: non-zero return code, rc: 1

stdout:
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9d57d55b-fc66-4903-b7b5-9ce684ca011b
Running command: vgcreate --force --yes ceph-8e3629b5-2023-43d1-85ea-124336f4d2a7 /dev/vdd
 stderr: Device /dev/vdd not found (or ignored by filtering).
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be fully purged from the cluster, because the ID was generated
Running command: ceph osd purge osd.10 --yes-i-really-mean-it
 stderr: 2018-05-16 14:30:24.128313 7f9c905ce700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
 stderr: 2018-05-16 14:30:24.130700 7f9c905ce700 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
2018-05-16 14:30:24.130711 7f9c905ce700  0 librados: client.admin authentication error (95) Operation not supported
 stderr: [errno 95] error connecting to the cluster

stderr:
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 6, in <module>
    main.Volume()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 37, in __init__
    self.main(self.argv)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line 38, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 182, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/create.py", line 74, in main
    self.create(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
    prepare_step.safe_prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 220, in safe_prepare
    rollback_osd(args, self.osd_id)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/common.py", line 31, in rollback_osd
    '--yes-i-really-mean-it'])
  File "/usr/lib/python2.7/dist-packages/ceph_volume/process.py", line 149, in run
    raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 1
```

The osd scenario is as follows:

```
lvm_volumes:
  - data: /dev/vdb
    db: /dev/vde1
    wal: /dev/vde2
  - data: /dev/vdc
    db: /dev/vde3
    wal: /dev/vde4
  - data: /dev/vdd
    db: /dev/vde5
    wal: /dev/vde6
```
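
The key line in the failure is `stderr: Device /dev/vdd not found (or ignored by filtering)` from `vgcreate`: LVM filters the device out because it already carries a partition table. A quick way to confirm the stray GPT header on the data device (a diagnostic suggestion, not part of the playbook):

```
# Probe the device directly; PTTYPE="gpt" means a GPT header is present
blkid -p /dev/vdd
# Alternatively, print the partition table with parted
parted /dev/vdd print
```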

Solution
========
The problem is in the task file `roles/ceph-osd/tasks/check_gpt.yml`, which creates a GPT header regardless of the OSD scenario. With a GPT header in place, LVM filters the device out, so `vgcreate` reports it as not found and the LV cannot be created. We should add a condition so that no GPT header is created when `osd_scenario == lvm`.
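
One way to express that guard is on the include of the task file, e.g. in `roles/ceph-osd/tasks/main.yml` (a minimal sketch of the intended condition; the exact include name and surrounding conditions in ceph-ansible may differ):

```
# Sketch: only lay down GPT headers for the non-lvm scenarios;
# ceph-volume lvm needs the raw, unpartitioned device for vgcreate.
- name: include check_gpt.yml
  include_tasks: check_gpt.yml
  when: osd_scenario != 'lvm'
```

As the transcript below shows, once the existing GPT header is wiped with `sgdisk -Z`, `ceph-volume lvm prepare` succeeds on the same device: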

```
root@upgrade-test-osd-f-4-1064324:/home/vishal.kanaujia# parted /dev/vdc
GNU Parted 3.2
Using /dev/vdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
root@d42-upgrade-test-osd-f-4-1064324:/home/vishal.kanaujia# sgdisk -Z !$
sgdisk -Z /dev/vdc

root@upgrade-test-osd-f-4-1064324:/home/vishal.kanaujia# ceph-volume lvm prepare --bluestore --data /dev/vdc --block.db /dev/vde1 --block.wal /dev/vde2
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new adc6e047-00ba-48da-b787-babd09c23f4b
Running command: vgcreate --force --yes ceph-a311c894-d040-4de9-a880-ad813e0f1e38 /dev/vdc
 stdout: Physical volume "/dev/vdc" successfully created.
 stdout: Volume group "ceph-a311c894-d040-4de9-a880-ad813e0f1e38" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-adc6e047-00ba-48da-b787-babd09c23f4b ceph-a311c894-d040-4de9-a880-ad813e0f1e38
 stdout: Wiping xfs signature on /dev/ceph-a311c894-d040-4de9-a880-ad813e0f1e38/osd-block-adc6e047-00ba-48da-b787-babd09c23f4b.
 stdout: Logical volume "osd-block-adc6e047-00ba-48da-b787-babd09c23f4b" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-14
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s /dev/ceph-a311c894-d040-4de9-a880-ad813e0f1e38/osd-block-adc6e047-00ba-48da-b787-babd09c23f4b /var/lib/ceph/osd/ceph-14/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-14/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-14/keyring --create-keyring --name osd.14 --add-key AQBT9PtaBLnlBhAAAEoS87wPTYs6ZAevF54S/w==
 stdout: creating /var/lib/ceph/osd/ceph-14/keyring
added entity osd.14 auth auth(auid = 18446744073709551615 key=AQBT9PtaBLnlBhAAAEoS87wPTYs6ZAevF54S/w== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-14/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-14/
Running command: chown -R ceph:ceph /dev/vde2
Running command: chown -R ceph:ceph /dev/vde1
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 14 --monmap /var/lib/ceph/osd/ceph-14/activate.monmap --keyfile - --bluestore-block-wal-path /dev/vde2 --bluestore-block-db-path /dev/vde1 --osd-data /var/lib/ceph/osd/ceph-14/ --osd-uuid adc6e047-00ba-48da-b787-babd09c23f4b --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/vdc

```
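
Until a fixed ceph-ansible build is available, the same workaround can be applied to every affected data device before re-running the playbook. Note that `sgdisk -Z` (zap) destroys the GPT and MBR data structures, so run it only on disks that ceph-volume is meant to consume whole (device names below are the ones from the `lvm_volumes` example above):

```
# Wipe the stray GPT headers from the raw data disks, then re-run the playbook
for dev in /dev/vdb /dev/vdc /dev/vdd; do
  sgdisk -Z "$dev"
done
```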

Comment 8 errata-xmlrpc 2018-09-26 18:21:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819