Bug 1478598 - 'create' needs to allow filtering by uuid
Status: POST
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Volume
Hardware: Unspecified
OS: Unspecified
Version: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 3.1
Assigned To: Alfredo Deza
Depends On:
Reported: 2017-08-04 18:58 EDT by Alfredo Deza
Modified: 2017-09-12 08:02 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker ID Priority Status Summary Last Updated
Github ceph/ceph/pull/17606 None None None 2017-09-08 14:09 EDT
Github ceph/ceph/pull/17653 None None None 2017-09-11 16:14 EDT

Description Alfredo Deza 2017-08-04 18:58:15 EDT
Description of problem: `lvm create` passes args to `activate` that expect an OSD ID, but the args object does not include one.

Actual results:
task path: /Users/andrewschoen/dev/ceph/src/ceph-volume/ceph_volume/tests/functional/.tox/create/tmp/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm.yml:2
failed: [osd0] (item={'key': u'test_volume', 'value': u'/dev/sdb'}) => {
    "changed": true,
    "cmd": [
    "delta": "0:00:16.407122",
    "end": "2017-08-04 21:05:59.109897",
    "failed": true,
    "item": {
        "key": "test_volume",
        "value": "/dev/sdb"
    },
    "rc": 1,
    "start": "2017-08-04 21:05:42.702775",
    "warnings": []
}


Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 70457fe1-8561-426f-8fc4-c2227fa1480d
Running command: sudo vgs --reportformat=json
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: sudo vgs --reportformat=json
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: sudo lvchange --addtag ceph.osd_id=0 /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.osd_fsid=70457fe1-8561-426f-8fc4-c2227fa1480d /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.type=data /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.cluster_fsid=efd63a88-1a43-415d-b678-97d0a0d31465 /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.journal_device=/dev/sdb /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.data_device=/dev/test_group/test_volume /dev/test_group/test_volume
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: ceph-authtool --gen-print-key
Running command: sudo mkfs -t xfs -f -i size=2048 /dev/test_group/test_volume
 stdout: meta-data=/dev/test_group/test_volume isize=2048   agcount=4, agsize=703744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2814976, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Running command: sudo mount -t xfs -o noatime,inode64 /dev/test_group/test_volume /var/lib/ceph/osd/ceph-0
Running command: sudo ln -s /dev/sdb /var/lib/ceph/osd/ceph-0/journal
Running command: sudo ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: chown -R ceph:ceph /dev/sdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: sudo ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --osd-data /var/lib/ceph/osd/ceph-0/ --osd-journal /var/lib/ceph/osd/ceph-0/journal --osd-uuid 70457fe1-8561-426f-8fc4-c2227fa1480d --setuser ceph --setgroup ceph
 stderr: 2017-08-04 21:05:53.945251 7fc29fd87d00 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 70457fe1-8561-426f-8fc4-c2227fa1480d, invalid (someone else's?) journal
 stderr: 2017-08-04 21:05:53.956185 7fc29fd87d00 -1 read_settings error reading settings: (2) No such file or directory
Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCm4YRZLNa+MBAAVIa+afiFox39x8lq3dVajA==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCm4YRZLNa+MBAAVIa+afiFox39x8lq3dVajA== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json


-->  RuntimeError: could not find osd.None with fsid 70457fe1-8561-426f-8fc4-c2227fa1480d

Additional info: The OSD ID needs to be optional in this case: use it in `activate` if provided, otherwise ignore it and filter only by the `uuid`.
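A minimal sketch of what that optional filtering could look like. The `ceph.osd_id` / `ceph.osd_fsid` tag names mirror the `lvchange --addtag` calls in the log above, but the function name and the shape of the `lvs` data here are hypothetical, not ceph-volume's actual API:

```python
def find_osd_lv(lvs, osd_fsid, osd_id=None):
    """Return the first LV whose tags match the given fsid.

    The osd_id is only applied as an extra filter when provided;
    otherwise matching falls back to the fsid alone.
    """
    for lv in lvs:
        tags = lv.get('lv_tags', {})
        if tags.get('ceph.osd_fsid') != osd_fsid:
            continue
        if osd_id is not None and tags.get('ceph.osd_id') != str(osd_id):
            continue
        return lv
    raise RuntimeError(
        'could not find osd.%s with fsid %s' % (osd_id, osd_fsid)
    )
```

With this shape, `activate` would no longer fail with `could not find osd.None` when only the fsid is known, since `osd_id=None` simply skips the ID check.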
Comment 2 Alfredo Deza 2017-09-11 14:11:56 EDT
While trying to address this issue we found a problem with activating (or rather, re-activating) OSDs after a reboot: if the osd ID (say, 1) also appeared as a character inside the UUID, there was a chance the UUID would be parsed incorrectly.

The bug was in the `parse_osd_uuid` function:

>>> lvm.trigger.parse_osd_uuid("1-abc959fd-1ec9-4864-b141-3154f9b9f8ed")

The correct result should've been: "abc959fd-1ec9-4864-b141-3154f9b9f8ed"
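The fix amounts to splitting the `<osd id>-<uuid>` string only at the first dash, so that dashes (and digits matching the osd ID) inside the UUID itself are left alone. A simplified sketch of the corrected behavior, not the actual ceph-volume implementation:

```python
def parse_osd_uuid(string):
    # systemd trigger strings look like "<osd id>-<uuid>";
    # partition at the first dash only, so the osd ID's digits
    # occurring again inside the UUID cannot corrupt the result
    osd_id, _, osd_uuid = string.partition('-')
    return osd_uuid
```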
