When deploying bluestore using ceph-ansible-3.1.0-0.1.rc9.el7cp.noarch with the ceph container d45f6eba4202 [1], ceph-disk hits the following race condition:

command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdb1
/dev/vdb1: No such file or directory
Usage: mkfs.xfs
/* blocksize */         [-b log=n|size=num]
/* metadata */          [-m crc=0|1,finobt=0|1,uuid=xxx]
/* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
                            (sunit=value,swidth=value|su=num,sw=num|noalign),
                            sectlog=n|sectsize=num
/* force overwrite */   [-f]
/* inode size */        [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
                            projid32bit=0|1]
/* no discard */        [-K]
/* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
                            sunit=value|su=num,sectlog=n|sectsize=num,
                            lazy-count=0|1]
/* label */             [-L label (maximum 12 characters)]
/* naming */            [-n log=n|size=num,version=2|ci,ftype=0|1]
/* no-op info only */   [-N]
/* prototype file */    [-p fname]
/* quiet */             [-q]
/* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */        [-s log=n|size=num]
/* version */           [-V]
                        devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
'/usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdb1' failed with status code 1

[1] https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/rhceph-3-rhel7/images/3-9
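For illustration only, here is a minimal Python sketch of the suspected race and one way to avoid it: mkfs is launched on a freshly created partition before udev has finished creating the /dev/vdb1 node, so the fix is to wait for the node to appear first. The wait_for_dev helper and the device path are assumptions made for this example; this is not ceph-disk's actual code.

#!/usr/bin/env python3
# Sketch of guarding mkfs against the udev race (illustrative, not ceph-disk code).
import os
import subprocess
import time

DEVICE = "/dev/vdb1"  # partition just created by the partitioning step

def wait_for_dev(path, timeout=10.0, interval=0.2):
    # Poll until the device node exists, up to `timeout` seconds.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Ask udev to finish processing pending events, then double-check the node.
# Without this, mkfs can race udev and fail with
# "/dev/vdb1: No such file or directory" as in the log above.
subprocess.run(["udevadm", "settle", "--timeout=10"], check=False)
if not wait_for_dev(DEVICE):
    raise RuntimeError("%s never appeared" % DEVICE)

subprocess.check_call(
    ["/usr/sbin/mkfs", "-t", "xfs", "-f", "-i", "size=2048", "--", DEVICE]
)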
Created attachment 1470862: sosreport from 1 of the ceph storage nodes
Created attachment 1470863: sosreport from 2 of the ceph storage nodes
*** Bug 1590526 has been marked as a duplicate of this bug. ***