Description of problem:
The canary-openshift-ocp-installer-e2e-azure-4.2 run failed; the following output occurred during the Ceph setup in the test: https://storage.googleapis.com/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-azure-4.2/133/build-log.txt

Relevant excerpt from the build log:

add item id 1 name 'osd.1' weight 1 at location {host=cephbox,root=default} to crush map
starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
starting mds.cephfs at :/0
Importing image: 3% complete...
...
Importing image: 100% complete...done.
Error EINVAL: crushtool check failed with -22: crushtool: timed out (5 sec)
pool 'cephfs_metadata' created
Error ENOENT: pool 'cephfs_data' does not exist
ceph-fuse[547]: starting ceph client
2019-08-29 15:36:50.922987 7f4501be2f80 -1 init, newargv = 0x5577cf545020 newargc=11

From the excerpt, the 'cephfs_data' pool was apparently never created: the monitor's crushtool validation of the CRUSH map timed out after 5 seconds (EINVAL/-22), so the later CephFS setup step failed with ENOENT because the data pool does not exist.
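For reference, the CephFS bootstrap performed by the test's Ceph container amounts to something like the standard pool/filesystem creation sequence below. This is a minimal sketch, not the exact script from the image: the PG count (8) and the mount point are assumptions, and only the pool and filesystem names are taken from the log above.

# Create the data and metadata pools. In this failure the data pool
# create is the step that returned EINVAL, because the monitor's
# crushtool validation of the updated CRUSH map timed out (5 sec).
ceph osd pool create cephfs_data 8
ceph osd pool create cephfs_metadata 8

# Combine the two pools into a filesystem. With cephfs_data missing,
# this step fails with ENOENT: "pool 'cephfs_data' does not exist".
ceph fs new cephfs cephfs_metadata cephfs_data

# Mount via the FUSE client, matching the "ceph-fuse[547]: starting
# ceph client" line in the log. /mnt/cephfs is an assumed mount point.
ceph-fuse /mnt/cephfs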
Filed https://github.com/openshift/origin/pull/23708
I checked the 4.2.0-0.nightly test results; this test case has been removed since Sep 7.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922