Deployment fails with: overcloud.AllNodesDeploySteps.WorkflowTasks_Step2_Execution: Logs in /var/log/mistral/ceph-install-workflow.log show the task "prepare ceph osd disk" failed like the following: 2017-09-28 15:30:55,885 p=2514 u=mistral | failed: [192.168.1.28] (item=[u'/dev/sde', {'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-28 19:29:12.103851', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"lsblk -o PARTLABEL /dev/sde | grep -sq 'ceph'", u'rc': 1, 'item': u'/dev/sde', u'delta': u'0:00:00.004349', u'stderr': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"lsblk -o PARTLABEL /dev/sde | grep -sq 'ceph'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-28 19:29:12.099502', 'failed': False}]) => {"changed": true, "cmd": "docker run --net=host --pid=host --privileged=true --name=\"ceph-osd-prepare-overcloud-cephstorage-0-devdevsde\" -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e \"OSD_DEVICE=/dev/sde\" -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e CLUSTER=ceph -e OSD_JOURNAL_SIZE=5120 -e OSD_FORCE_ZAP=1 \"docker-registry.engineering.redhat.com/ceph/rhceph-2-rhel7:candidate-28733-20170928022719\"", "delta": "0:00:05.867419", "end": "2017-09-28 19:30:55.863875", "failed": true, "item": ["/dev/sde", {"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": true, "cmd": "lsblk -o PARTLABEL /dev/sde | grep -sq 'ceph'", "delta": "0:00:00.004349", "end": "2017-09-28 19:29:12.103851", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "lsblk -o PARTLABEL /dev/sde | grep -sq 'ceph'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sde", "rc": 1, "start": "2017-09-28 19:29:12.099502", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}], "rc": 1, "start": "2017-09-28 19:30:49.996456", "stderr": "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nzap: Zapping partition table on /dev/sde\ncommand_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sde\ncommand_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sde\nupdate_partition: Calling partprobe on zapped device /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\ncommand: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nset_type: Will colocate journal with data on /dev/sde\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\nget_dm_uuid: get_dm_uuid 
/dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nptype_tobe_for_name: name = journal\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\ncreate_partition: Creating journal partition num 2 size 5120 on /dev/sde\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:83a140a2-4d6b-4c3d-b1c8-5016eb95d395 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sde\nupdate_partition: Calling partprobe on created device /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\ncommand: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde2 uuid path is /sys/dev/block/8:66/dm/uuid\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/83a140a2-4d6b-4c3d-b1c8-5016eb95d395\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/83a140a2-4d6b-4c3d-b1c8-5016eb95d395\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nset_data_partition: Creating osd partition on /dev/sde\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nptype_tobe_for_name: name = data\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\ncreate_partition: Creating data partition num 1 size 0 on /dev/sde\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4eb0ec85-d13a-4500-8957-8a46378d6d2b --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sde\nupdate_partition: Calling partprobe on created device /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\ncommand: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid\nget_dm_uuid: get_dm_uuid /dev/sde1 uuid path is /sys/dev/block/8:65/dm/uuid\npopulate_data_path_device: Creating xfs fs on /dev/sde1\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/sde1\n/dev/sde1: No such file or directory\nUsage: mkfs.xfs\n/* blocksize */\t\t[-b log=n|size=num]\n/* metadata */\t\t[-m crc=0|1,finobt=0|1,uuid=xxx]\n/* data subvol */\t[-d agcount=n,agsize=n,file,name=xxx,size=num,\n\t\t\t (sunit=value,swidth=value|su=num,sw=num|noalign),\n\t\t\t sectlog=n|sectsize=num\n/* force overwrite */\t[-f]\n/* inode size */\t[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,\n\t\t\t projid32bit=0|1]\n/* no discard */\t[-K]\n/* log 
subvol */\t[-l agnum=n,internal,size=num,logdev=xxx,version=n\n\t\t\t sunit=value|su=num,sectlog=n|sectsize=num,\n\t\t\t lazy-count=0|1]\n/* label */\t\t[-L label (maximum 12 characters)]\n/* naming */\t\t[-n log=n|size=num,version=2|ci,ftype=0|1]\n/* no-op info only */\t[-N]\n/* prototype file */\t[-p fname]\n/* quiet */\t\t[-q]\n/* realtime subvol */\t[-r extsize=num,size=num,rtdev=xxx]\n/* sectorsize */\t[-s log=n|size=num]\n/* version */\t\t[-V]\n\t\t\tdevicename\n<devicename> is required unless -d name=xxx is given.\n<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),\n xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).\n<value> is xxx (512 byte blocks).\nTraceback (most recent call last):\n File \"/usr/sbin/ceph-disk\", line 9, in <module>\n load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 5343, in run\n main(sys.argv[1:])\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 5294, in main\n args.func(args)\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1896, in main\n Prepare.factory(args).prepare()\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1885, in prepare\n self.prepare_locked()\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1916, in prepare_locked\n self.data.prepare(self.journal)\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2583, in prepare\n self.prepare_device(*to_prepare_list)\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2747, in prepare_device\n self.populate_data_path_device(*to_prepare_list)\n File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2702, in populate_data_path_device\n raise Error(e)\nceph_disk.main.Error: Error: Command '['/usr/sbin/mkfs', '-t', u'xfs', u'-f', u'-i', u'size=2048', '-f', '--', '/dev/sde1']' returned non-zero exit status 1", "stderr_lines": ["get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "zap: Zapping partition table on /dev/sde", "command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sde", "command_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sde", "update_partition: Calling partprobe on zapped device /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "command: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid", "command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph", "command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph", "command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "set_type: Will colocate journal with data on /dev/sde", "command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", 
"get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type", "command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs", "command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "ptype_tobe_for_name: name = journal", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "create_partition: Creating journal partition num 2 size 5120 on /dev/sde", "command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:83a140a2-4d6b-4c3d-b1c8-5016eb95d395 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sde", "update_partition: Calling partprobe on created device /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "command: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde2 uuid path is /sys/dev/block/8:66/dm/uuid", "prepare_device: Journal is GPT partition /dev/disk/by-partuuid/83a140a2-4d6b-4c3d-b1c8-5016eb95d395", "prepare_device: Journal is GPT partition /dev/disk/by-partuuid/83a140a2-4d6b-4c3d-b1c8-5016eb95d395", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "set_data_partition: Creating osd partition on /dev/sde", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "ptype_tobe_for_name: name = data", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "create_partition: Creating data partition num 1 size 0 on /dev/sde", "command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4eb0ec85-d13a-4500-8957-8a46378d6d2b --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sde", "update_partition: Calling partprobe on created device /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "command: Running command: /usr/bin/flock -s /dev/sde /usr/sbin/partprobe /dev/sde", "command_check_call: Running command: /usr/bin/udevadm settle --timeout=600", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde uuid path is /sys/dev/block/8:64/dm/uuid", "get_dm_uuid: get_dm_uuid /dev/sde1 uuid path is /sys/dev/block/8:65/dm/uuid", "populate_data_path_device: Creating xfs fs on /dev/sde1", "command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/sde1", "/dev/sde1: No such file or directory", "Usage: mkfs.xfs", "/* blocksize */\t\t[-b log=n|size=num]", "/* metadata */\t\t[-m crc=0|1,finobt=0|1,uuid=xxx]", "/* data subvol */\t[-d agcount=n,agsize=n,file,name=xxx,size=num,", "\t\t\t (sunit=value,swidth=value|su=num,sw=num|noalign),", "\t\t\t sectlog=n|sectsize=num", "/* force overwrite */\t[-f]", "/* inode size */\t[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,", "\t\t\t projid32bit=0|1]", "/* no discard */\t[-K]", "/* log subvol */\t[-l 
agnum=n,internal,size=num,logdev=xxx,version=n", "\t\t\t sunit=value|su=num,sectlog=n|sectsize=num,", "\t\t\t lazy-count=0|1]", "/* label */\t\t[-L label (maximum 12 characters)]", "/* naming */\t\t[-n log=n|size=num,version=2|ci,ftype=0|1]", "/* no-op info only */\t[-N]", "/* prototype file */\t[-p fname]", "/* quiet */\t\t[-q]", "/* realtime subvol */\t[-r extsize=num,size=num,rtdev=xxx]", "/* sectorsize */\t[-s log=n|size=num]", "/* version */\t\t[-V]", "\t\t\tdevicename", "<devicename> is required unless -d name=xxx is given.", "<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),", " xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).", "<value> is xxx (512 byte blocks).", "Traceback (most recent call last):", " File \"/usr/sbin/ceph-disk\", line 9, in <module>", " load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 5343, in run", " main(sys.argv[1:])", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 5294, in main", " args.func(args)", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1896, in main", " Prepare.factory(args).prepare()", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1885, in prepare", " self.prepare_locked()", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 1916, in prepare_locked", " self.data.prepare(self.journal)", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2583, in prepare", " self.prepare_device(*to_prepare_list)", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2747, in prepare_device", " self.populate_data_path_device(*to_prepare_list)", " File \"/usr/lib/python2.7/site-packages/ceph_disk/main.py\", line 2702, in populate_data_path_device", " raise Error(e)", "ceph_disk.main.Error: Error: Command '['/usr/sbin/mkfs', '-t', u'xfs', u'-f', u'-i', u'size=2048', '-f', '--', '/dev/sde1']' returned non-zero exit status 1"], "stdout": "ownership of '/var/run/ceph/' retained as ceph:ceph\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\nownership of '/var/lib/ceph/mds/ceph-mds-overcloud-cephstorage-0' retained as ceph:ceph\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\nownership of '/var/lib/ceph/mon/ceph-overcloud-cephstorage-0' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-mds/ceph.keyring' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-rgw/ceph.keyring' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\nownership of '/var/lib/ceph/radosgw/overcloud-cephstorage-0' retained as ceph:ceph\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\n2017-09-28 19:30:50 /entrypoint.sh: static: does not generate config\nHEALTH_ERR no osds\n2017-09-28 19:30:50 /entrypoint.sh: It looks like /dev/sde isn't consistent, however OSD_FORCE_ZAP is enabled so we are zapping the device anyway\nCreating new GPT entries.\nGPT data structures destroyed! 
You may now partition the disk using fdisk or\nother utilities.\nCreating new GPT entries.\nThe operation has completed successfully.\nThe operation has completed successfully.\nThe operation has completed successfully.", "stdout_lines": ["ownership of '/var/run/ceph/' retained as ceph:ceph", "ownership of '/var/lib/ceph/mds' retained as ceph:ceph", "ownership of '/var/lib/ceph/mds/ceph-mds-overcloud-cephstorage-0' retained as ceph:ceph", "ownership of '/var/lib/ceph/tmp' retained as ceph:ceph", "ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph", "ownership of '/var/lib/ceph/mon' retained as ceph:ceph", "ownership of '/var/lib/ceph/mon/ceph-overcloud-cephstorage-0' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-mds/ceph.keyring' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-rgw/ceph.keyring' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph", "ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph", "ownership of '/var/lib/ceph/radosgw/overcloud-cephstorage-0' retained as ceph:ceph", "ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph", "ownership of '/var/lib/ceph/osd' retained as ceph:ceph", "2017-09-28 19:30:50 /entrypoint.sh: static: does not generate config", "HEALTH_ERR no osds", "2017-09-28 19:30:50 /entrypoint.sh: It looks like /dev/sde isn't consistent, however OSD_FORCE_ZAP is enabled so we are zapping the device anyway", "Creating new GPT entries.", "GPT data structures destroyed! You may now partition the disk using fdisk or", "other utilities.", "Creating new GPT entries.", "The operation has completed successfully.", "The operation has completed successfully.", "The operation has completed successfully."]}
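The key failure in the log is that ceph-disk's mkfs step runs against /dev/sde1 before udev has created the device node for the data partition that sgdisk just added ("/dev/sde1: No such file or directory"). The sequence can be replayed by hand on the affected node to confirm whether the node is simply lagging behind partprobe. Below is a minimal, non-destructive triage sketch; the device name /dev/sde is taken from the log above and should be adjusted to whichever disk failed in your run:

#!/bin/bash
# Triage sketch -- check whether the data partition created by ceph-disk
# has a device node yet (assumes DEV is the disk from the failed task).
DEV=/dev/sde

# Show what the kernel currently knows about the disk's partitions:
lsblk -o NAME,PARTLABEL,SIZE "$DEV"

# Repeat the settle/partprobe dance that ceph-disk performs:
udevadm settle --timeout=600
flock -s "$DEV" partprobe "$DEV"
udevadm settle --timeout=600

# If the race is in play, the data partition can still be missing its node:
if [ -b "${DEV}1" ]; then
    echo "${DEV}1 exists now -- mkfs would likely succeed on a retry"
else
    echo "${DEV}1 is still missing -- udev has not created the node yet"
fi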
This is caused by a race condition in ceph-disk, tracked by the following bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1491780
https://bugzilla.redhat.com/show_bug.cgi?id=1494543

The fixed RPM providing ceph-disk will ship as part of a docker container image, so new docker containers from ceph will be necessary to address this issue.
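For reference, the general shape of the mitigation is to give udev a bounded window to create the partition node before handing it to mkfs. The snippet below only illustrates that wait-and-retry approach; it is not the actual upstream patch, and the device name, timeout, and retry count are arbitrary assumptions:

# Illustration only -- not the upstream ceph-disk fix.
wait_for_partition() {
    # Poll for the block device node, nudging udev between attempts.
    local part=$1 tries=${2:-10}
    for ((i = 0; i < tries; i++)); do
        [ -b "$part" ] && return 0
        udevadm settle --timeout=10
        sleep 1
    done
    return 1
}

# Only run mkfs (the command that failed in the log) once the node exists:
if wait_for_partition /dev/sde1; then
    mkfs -t xfs -f -i size=2048 -- /dev/sde1
else
    echo "timed out waiting for /dev/sde1" >&2
    exit 1
fi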
We are already tracking the inclusion of BZ 1496509 into OSP via BZ 1484447.

*** This bug has been marked as a duplicate of bug 1484447 ***
This 'fix' (https://github.com/ceph/ceph/pull/14329/files) is not included in the package provided for Red Hat Storage 3 (ceph-base-12.2.1-40.el7cp.x86_64), which makes it impossible to deploy our 18-node cluster.

# On the "ansible director"
$ rpm -q ceph-ansible
ceph-ansible-3.0.14-1.el7cp.noarch

# In one of the successfully deployed containers
$ rpm -qf $(which ceph-disk)
ceph-base-12.2.1-40.el7cp.x86_64

When we use ceph-ansible to deploy in our environment, it always fails on 2-3 nodes. I'm not sure I understand from the above comments: is this still a problem in Red Hat Storage 3? From what I can see, the patch mentioned here is not applied in '/usr/lib/python2.7/site-packages/ceph_disk/main.py' from 'ceph-base-12.2.1-40.el7cp.x86_64'. The error message I get is exactly the same as the one reported, so I'm pretty sure this is the bug we are hitting. Any input on this would be good.

Best regards,
Patrik Martinsson
Sweden
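One rough way to check whether a given build carries the fix (a sketch, assuming you can exec into one of the deployed ceph containers as in the rpm -qf query above; the bug numbers are the ones referenced earlier in this report):

# Does the package changelog mention the bugs tracking this race?
rpm -q --changelog ceph-base | grep -E '1491780|1494543' || echo "not mentioned in changelog"

# Compare the shipped ceph-disk source against the upstream PR by eye:
less /usr/lib/python2.7/site-packages/ceph_disk/main.py
# (check the hunks touched by https://github.com/ceph/ceph/pull/14329/files)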