Created attachment 1358683 [details]
the error itself

Description of problem:
I performed 3 deployments and hit the problem twice: mkfs is called before /dev/sdl1 is available.

create_partition: Creating data partition num 1 size 0 on /dev/sdl
command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:04a127b4-7242-4219-8838-9827970a299f --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdl
update_partition: Calling partprobe on created device /dev/sdl
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
command: Running command: /usr/bin/flock -s /dev/sdl /usr/sbin/partprobe /dev/sdl
command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
get_dm_uuid: get_dm_uuid /dev/sdl1 uuid path is /sys/dev/block/8:177/dm/uuid
populate_data_path_device: Creating xfs fs on /dev/sdl1
command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/sdl1
Created attachment 1358684 [details]
ceph-install-workflow.log
ceph-ansible-3.0.14-1.el7cp.noarch
OSP puddle 2017-11-21.1
The first partition gets created by this command:

"Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:04a127b4-7242-4219-8838-9827970a299f --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdl"

If the partition is not available after that, this means you're hitting "the" known race condition. Looks like a dup to me.
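To illustrate the race: udevadm settle can return before the kernel has exposed the new partition node, so mkfs may run against a path that does not yet exist. A minimal sketch of a polling workaround is shown below; wait_for_device is a hypothetical helper written for this comment, not part of ceph-disk, and the path test uses -e so it can be exercised on any file:

```shell
#!/bin/sh
# Hypothetical helper: poll until a device node appears, or give up
# after a number of one-second tries (default 10).
wait_for_device() {
    dev="$1"
    tries="${2:-10}"
    i=0
    while [ ! -e "$dev" ] && [ "$i" -lt "$tries" ]; do
        sleep 1
        i=$((i + 1))
    done
    # Succeed only if the node actually showed up.
    [ -e "$dev" ]
}

# Sketch of use after partition creation (commands taken from the log above):
# /usr/bin/udevadm settle --timeout=600
# wait_for_device /dev/sdl1 && /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdl1
```

This only narrows the window rather than closing it; the proper fix belongs in ceph-disk itself, which is what the duplicate bug tracks.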
Hi Sebastien,

Could you give a reference for this known race condition?
*** This bug has been marked as a duplicate of bug 1480658 ***