Bug 1316736 - "check if the device is a partition" ceph-osd task fails
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-installer
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Alfredo Deza
QA Contact: Rachana Patel
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2016-03-10 23:30 UTC by Ken Dreyer (Red Hat)
Modified: 2016-08-23 19:48 UTC
CC List: 11 users

Fixed In Version: ceph-installer-1.0.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:48:21 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2016:1754 (SHIPPED_LIVE) - New packages: Red Hat Storage Console 2.0 - Last updated 2017-04-18 19:09:06 UTC

Description Ken Dreyer (Red Hat) 2016-03-10 23:30:37 UTC
After posting a payload to the /api/osd/configure endpoint, the corresponding ansible run fails. 

Here is the python dict I posted (after converting to JSON with the requests library):

  {'host': 'kdreyer-clot-3', 'redhat_storage': False, 'fsid': 'deedcb4c-a67a-4997-93a6-92149ad2622a', 'public_network': '172.16.0.0/12', 'cluster_network': '172.16.0.0/12', 'monitors': [{'interface': 'eth0', 'host': 'kdreyer-clot-2'}], 'devices': {'/dev/vdb': '/dev/vdc'}, 'journal_size': 5120}

And the playbook failed with the following:

  TASK: [ceph-osd | check if the device is a partition] ******************* 
  fatal: [kdreyer-clot-3] => with_items expects a list or a set
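The failure is consistent with `devices` reaching the playbook as a dict rather than a list: in the payload above, `devices` is `{'/dev/vdb': '/dev/vdc'}`, and the `with_items` loop in the ceph-osd task can only iterate a list (or set). The extra-vars later in this report show the same dict being passed straight through to ansible-playbook. A minimal sketch of the kind of conversion the installer needs, splitting the posted mapping into the flat `devices` and `raw_journal_devices` lists (the `split_devices` helper name is hypothetical, not the actual ceph-installer code):

```python
def split_devices(devices):
    """Split a {data_device: journal_device} mapping, as posted to
    /api/osd/configure, into two flat lists suitable for with_items.

    Hypothetical helper for illustration; sorting keeps the journal
    list aligned with its data device in a deterministic order.
    """
    data_devices = sorted(devices)
    journal_devices = [devices[d] for d in data_devices]
    return data_devices, journal_devices

data, journals = split_devices({'/dev/vdb': '/dev/vdc'})
# data == ['/dev/vdb'], journals == ['/dev/vdc']
```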

The versions I was using:

  ceph-ansible-1.0.1-1.20160309git1a62a81.el7.noarch
  ceph-installer-0.2.5-1.20160310gitcd9e90d.el7.noarch

Inventory looks fine:

  $ sudo cat /tmp/f28657ab-dc9c-4b92-8d04-078b8b3e6eb4_h7ZkQp
  [osds]
  kdreyer-clot-3
  [mons]
  kdreyer-clot-2 monitor_interface=eth0

Full task details:

$ ceph-installer task f28657ab-dc9c-4b92-8d04-078b8b3e6eb4
--> endpoint: /api/osd/configure/
--> succeeded: False
--> stdout: 
PLAY [mons] ******************************************************************* 

GATHERING FACTS *************************************************************** 
ok: [kdreyer-clot-2]

TASK: [ceph-fetch-keys | find ceph keys] ************************************** 
ok: [kdreyer-clot-2]

TASK: [ceph-fetch-keys | set keys permissions] ******************************** 
ok: [kdreyer-clot-2] => (item=/etc/ceph/ceph.client.admin.keyring)

TASK: [ceph-fetch-keys | copy keys to the ansible server] ********************* 
ok: [kdreyer-clot-2] => (item=/etc/ceph/ceph.client.admin.keyring)
ok: [kdreyer-clot-2] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
ok: [kdreyer-clot-2] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
ok: [kdreyer-clot-2] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)

PLAY [osds] ******************************************************************* 

GATHERING FACTS *************************************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | fail on unsupported system] ************************* 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | fail on unsupported architecture] ******************* 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | fail on unsupported distribution] ******************* 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | fail on unsupported distribution for red hat storage] *** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | fail on unsupported ansible version] **************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | make sure journal_size configured] ****************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | make sure monitor_interface configured] ************* 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | make sure cluster_network configured] *************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | make sure public_network configured] **************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | make sure an osd scenario was chosen] *************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | verify only one osd scenario was chosen] ************ 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | verify devices have been provided] ****************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | verify journal devices have been provided] ********** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | verify directories have been provided] ************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | check if nmap is installed] ************************* 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | inform that nmap is not present] ******************** 
ok: [kdreyer-clot-3] => {
    "msg": "nmap is not installed, can not test if ceph ports are allowed :("
}

TASK: [ceph.ceph-common | check if monitor port is not filtered] ************** 
skipping: [kdreyer-clot-3] => (item=kdreyer-clot-2)

TASK: [ceph.ceph-common | fail if monitor port is filtered] ******************* 
skipping: [kdreyer-clot-3] => (item={u'skipped': True, u'changed': False})

TASK: [ceph.ceph-common | check if osd and mds range is not filtered] ********* 
skipping: [kdreyer-clot-3] => (item=kdreyer-clot-3)

TASK: [ceph.ceph-common | fail if osd and mds range is filtered (osd hosts)] *** 
skipping: [kdreyer-clot-3] => (item={u'skipped': True, u'changed': False})

TASK: [ceph.ceph-common | check if osd and mds range is not filtered] ********* 
skipping: [kdreyer-clot-3] => (item=groups.mdss)

TASK: [ceph.ceph-common | fail if osd and mds range is filtered (mds hosts)] *** 
skipping: [kdreyer-clot-3] => (item={u'skipped': True, u'changed': False})

TASK: [ceph.ceph-common | check if rados gateway port is not filtered] ******** 
skipping: [kdreyer-clot-3] => (item=groups.rgws)

TASK: [ceph.ceph-common | fail if rados gateway port is filtered] ************* 
skipping: [kdreyer-clot-3] => (item={u'skipped': True, u'changed': False})

TASK: [ceph.ceph-common | disable osd directory parsing by updatedb] ********** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | disable transparent hugepage] *********************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | disable swap] *************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | get default vm.min_free_kbytes] ********************* 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | define vm.min_free_kbytes] ************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | apply operating system tuning] ********************** 
changed: [kdreyer-clot-3] => (item={'name': 'kernel.pid_max', 'value': 4194303})
changed: [kdreyer-clot-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [kdreyer-clot-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [kdreyer-clot-3] => (item={'name': 'vm.vfs_cache_pressure', 'value': 50})
changed: [kdreyer-clot-3] => (item={'name': 'vm.min_free_kbytes', 'value': u'67584'})

TASK: [ceph.ceph-common | get ceph rhcs version] ****************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact is_ceph_infernalis={{ (ceph_stable and ceph_stable_release not in ceph_stable_releases) or (ceph_dev) or (ceph_stable_rh_storage and (rh_storage_version.stdout | version_compare('0.94', '>'))) }}] *** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact is_ceph_infernalis=True] ******************* 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph.ceph-common | set_fact ] ****************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | check for a ceph socket] **************************** 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | check for a rados gateway socket] ******************* 
ok: [kdreyer-clot-3]

TASK: [ceph.ceph-common | create a local fetch directory if it does not exist] *** 
ok: [kdreyer-clot-3 -> 127.0.0.1]

TASK: [ceph.ceph-common | generate cluster uuid] ****************************** 
ok: [kdreyer-clot-3 -> 127.0.0.1]

TASK: [ceph.ceph-common | read cluster uuid if it already exists] ************* 
ok: [kdreyer-clot-3 -> 127.0.0.1]

TASK: [ceph.ceph-common | create ceph conf directory] ************************* 
changed: [kdreyer-clot-3]

TASK: [ceph.ceph-common | generate ceph configuration file] ******************* 
changed: [kdreyer-clot-3]

TASK: [ceph.ceph-common | create rbd client directory] ************************ 
changed: [kdreyer-clot-3]

TASK: [ceph-osd | install dependencies] *************************************** 
skipping: [kdreyer-clot-3]

TASK: [ceph-osd | install dependencies] *************************************** 
ok: [kdreyer-clot-3]

TASK: [ceph-osd | create bootstrap-osd directory] ***************************** 
changed: [kdreyer-clot-3]

TASK: [ceph-osd | copy osd bootstrap key] ************************************* 
changed: [kdreyer-clot-3] => (item={'name': '/var/lib/ceph/bootstrap-osd/ceph.keyring', 'copy_key': True})
skipping: [kdreyer-clot-3] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': u'False'})

TASK: [ceph-osd | check if the device is a partition] ************************* 
fatal: [kdreyer-clot-3] => with_items expects a list or a set

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/var/lib/ceph-installer/osd-configure.retry

kdreyer-clot-2             : ok=4    changed=0    unreachable=0    failed=0   
kdreyer-clot-3             : ok=33   changed=6    unreachable=1    failed=0   


--> started: 2016-03-10 18:13:49.594307
--> exit_code: 3
--> ended: 2016-03-10 18:13:54.934672
--> command: /bin/ansible-playbook -u ceph-installer /usr/share/ceph-ansible/osd-configure.yml -i /tmp/f28657ab-dc9c-4b92-8d04-078b8b3e6eb4_h7ZkQp --extra-vars {"raw_journal_devices": ["/dev/vdc"], "ceph_stable": true, "devices": {"/dev/vdb": "/dev/vdc"}, "public_network": "172.16.0.0/12", "fetch_directory": "/var/lib/ceph-installer/fetch", "cluster_network": "172.16.0.0/12", "raw_multi_journal": true, "fsid": "deedcb4c-a67a-4997-93a6-92149ad2622a", "journal_size": 5120} --skip-tags package-install
--> stderr: 
--> identifier: f28657ab-dc9c-4b92-8d04-078b8b3e6eb4

Comment 1 Alfredo Deza 2016-03-11 12:56:04 UTC
Pull request opened https://github.com/ceph/ceph-installer/pull/117

Comment 3 Mike McCune 2016-03-28 22:42:18 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please see mmccune with any questions.

Comment 12 Rachana Patel 2016-07-28 23:00:13 UTC
Verified with:

ceph-ansible-1.0.5-23.el7scon.noarch
ceph-installer-1.0.12-3.el7scon.noarch

Working as expected, hence moving to VERIFIED.

Comment 14 errata-xmlrpc 2016-08-23 19:48:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

