Bug 1478598 - 'create' needs to allow filtering by uuid
Summary: 'create' needs to allow filtering by uuid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 3.1
Assignee: Alfredo Deza
QA Contact: Parikshith
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-08-04 22:58 UTC by Alfredo Deza
Modified: 2018-09-26 18:17 UTC
CC: 9 users

Fixed In Version: RHEL: ceph-12.2.4-6.el7cp Ubuntu: ceph_12.2.4-7redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-26 18:16:41 UTC
Embargoed:


Links
Github ceph/ceph pull 17606 (closed): ceph-volume allow filtering by `uuid`, do not require osd id (last updated 2020-04-14 15:59:48 UTC)
Github ceph/ceph pull 17653 (closed): luminous: ceph-volume allow filtering by `uuid`, do not require osd id (last updated 2020-04-14 15:59:49 UTC)
Red Hat Product Errata RHBA-2018:2819 (last updated 2018-09-26 18:17:47 UTC)

Description Alfredo Deza 2017-08-04 22:58:15 UTC
Description of problem: `lvm create` passes its parsed args through to `activate`, but `activate` expects an OSD ID and the args object built by `create` does not include one.
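
For illustration only, a minimal sketch of the failure path (the names and call shape here are assumptions, not the actual ceph-volume code): `create` hands `activate` an args namespace that never had an OSD ID parsed into it, so the later lookup effectively runs against osd "None".

from argparse import Namespace

def activate(args):
    # activate assumes an OSD ID is present; when called from create, the
    # attribute is missing and the lookup key becomes "osd.None"
    osd_id = getattr(args, 'osd_id', None)
    print('activating osd.%s with fsid %s' % (osd_id, args.osd_fsid))

def create(args):
    # create knows the fsid it just generated, but has no osd id to pass
    activate(Namespace(osd_fsid=args.osd_fsid))

create(Namespace(osd_fsid='70457fe1-8561-426f-8fc4-c2227fa1480d'))
# prints: activating osd.None with fsid 70457fe1-8561-426f-8fc4-c2227fa1480d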



Actual results:
task path: /Users/andrewschoen/dev/ceph/src/ceph-volume/ceph_volume/tests/functional/.tox/create/tmp/ceph-ansible/roles/ceph-osd/tasks/scenarios/lvm.yml:2
failed: [osd0] (item={'key': u'test_volume', 'value': u'/dev/sdb'}) => {
    "changed": true,
    "cmd": [
        "ceph-volume",
        "lvm",
        "create",
        "--filestore",
        "--data",
        "test_volume",
        "--journal",
        "/dev/sdb"
    ],
    "delta": "0:00:16.407122",
    "end": "2017-08-04 21:05:59.109897",
    "failed": true,
    "item": {
        "key": "test_volume",
        "value": "/dev/sdb"
    },
    "rc": 1,
    "start": "2017-08-04 21:05:42.702775",
    "warnings": []
}

STDOUT:

Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 70457fe1-8561-426f-8fc4-c2227fa1480d
Running command: sudo vgs --reportformat=json
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: sudo vgs --reportformat=json
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: sudo lvchange --addtag ceph.osd_id=0 /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.osd_fsid=70457fe1-8561-426f-8fc4-c2227fa1480d /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.type=data /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.cluster_fsid=efd63a88-1a43-415d-b678-97d0a0d31465 /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.journal_device=/dev/sdb /dev/test_group/test_volume
Running command: sudo lvchange --addtag ceph.data_device=/dev/test_group/test_volume /dev/test_group/test_volume
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json
Running command: ceph-authtool --gen-print-key
Running command: sudo mkfs -t xfs -f -i size=2048 /dev/test_group/test_volume
 stdout: meta-data=/dev/test_group/test_volume isize=2048   agcount=4, agsize=703744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2814976, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Running command: sudo mount -t xfs -o noatime,inode64 /dev/test_group/test_volume /var/lib/ceph/osd/ceph-0
Running command: sudo ln -s /dev/sdb /var/lib/ceph/osd/ceph-0/journal
Running command: sudo ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: chown -R ceph:ceph /dev/sdb
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: sudo ceph-osd --cluster ceph --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --osd-data /var/lib/ceph/osd/ceph-0/ --osd-journal /var/lib/ceph/osd/ceph-0/journal --osd-uuid 70457fe1-8561-426f-8fc4-c2227fa1480d --setuser ceph --setgroup ceph
 stderr: 2017-08-04 21:05:53.945251 7fc29fd87d00 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 70457fe1-8561-426f-8fc4-c2227fa1480d, invalid (someone else's?) journal
 stderr: 2017-08-04 21:05:53.956185 7fc29fd87d00 -1 read_settings error reading settings: (2) No such file or directory
Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCm4YRZLNa+MBAAVIa+afiFox39x8lq3dVajA==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCm4YRZLNa+MBAAVIa+afiFox39x8lq3dVajA== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: sudo lvs -o lv_tags,lv_path,lv_name,vg_name --reportformat=json


STDERR:

-->  RuntimeError: could not find osd.None with fsid 70457fe1-8561-426f-8fc4-c2227fa1480d


Additional info: The OSD ID needs to be optional in this case: `activate` should use it when provided, and otherwise ignore it and filter only by the `uuid`.
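
A minimal sketch of that filtering logic, assuming the LV tags have already been queried (the helper name and data shape are illustrative, not the actual ceph-volume API):

def find_osd_lv(lvs, osd_fsid, osd_id=None):
    # lvs: list of dicts with an 'lv_tags' mapping, e.g. built from `lvs -o lv_tags`
    for lv in lvs:
        tags = lv.get('lv_tags', {})
        if tags.get('ceph.osd_fsid') != osd_fsid:
            continue
        # only require the OSD ID to match when one was actually given
        if osd_id is not None and tags.get('ceph.osd_id') != str(osd_id):
            continue
        return lv
    raise RuntimeError('could not find osd.%s with fsid %s' % (osd_id, osd_fsid))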

Comment 2 Alfredo Deza 2017-09-11 18:11:56 UTC
While trying to address this issue we found a problem with activating (or rather re-activating) OSDs after a reboot: the trigger string has the form `<osd id>-<uuid>`, and if the OSD ID (say 1) followed by a dash also occurred inside the UUID, the UUID would be parsed incorrectly.

The bug was in the `parse_osd_uuid` function:

>>> lvm.trigger.parse_osd_uuid("1-abc959fd-1ec9-4864-b141-3154f9b9f8ed")
'3154f9b9f8ed'

The correct result should've been: "abc959fd-1ec9-4864-b141-3154f9b9f8ed"
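
A small reproduction of the parsing bug and one possible fix (illustrative code, not the exact ceph-volume implementation). Splitting on the "<id>-" prefix breaks whenever that substring also appears inside the UUID, as "1-" does in "b141-" above; splitting only on the first dash keeps the UUID intact.

def parse_osd_uuid_buggy(string):
    # reproduces the reported behavior: "1-" also matches inside "...b141-..."
    osd_id = string.split('-')[0]
    return string.split('%s-' % osd_id)[-1]

def parse_osd_uuid_fixed(string):
    # split once, on the first dash only
    osd_id, osd_uuid = string.split('-', 1)
    return osd_uuid

s = '1-abc959fd-1ec9-4864-b141-3154f9b9f8ed'
parse_osd_uuid_buggy(s)   # '3154f9b9f8ed' (wrong)
parse_osd_uuid_fixed(s)   # 'abc959fd-1ec9-4864-b141-3154f9b9f8ed'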

Comment 8 errata-xmlrpc 2018-09-26 18:16:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

