Bug 1747343 - [ci][Azure] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support existing single file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jan Safranek
QA Contact: Wei Duan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-30 07:22 UTC by Qin Ping
Modified: 2019-10-16 06:39 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:39:13 UTC
Target Upstream Version:




Links
Github openshift/origin pull 23708 (last updated 2019-09-02 14:09:26 UTC)
Red Hat Product Errata RHBA-2019:2922 (last updated 2019-10-16 06:39:23 UTC)

Description Qin Ping 2019-08-30 07:22:25 UTC
Description of problem:
https://storage.googleapis.com/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-azure-4.2/133/build-log.txt

add item id 1 name 'osd.1' weight 1 at location {host=cephbox,root=default} to crush map
    starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
    starting mds.cephfs at :/0
    
Importing image: 3% complete...
...
Importing image: 100% complete...done.
    Error EINVAL: crushtool check failed with -22: crushtool: timed out (5 sec)
    pool 'cephfs_metadata' created
    Error ENOENT: pool 'cephfs_data' does not exist
    ceph-fuse[547]: starting ceph client
    2019-08-29 15:36:50.922987 7f4501be2f80 -1 init, newargv = 0x5577cf545020 newargc=11
    "
occurred
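
The two Ceph errors in the quoted log are likely related: the crushtool smoke test timing out ("crushtool: timed out (5 sec)") makes pool creation fail with EINVAL, so only 'cephfs_metadata' gets created and the later lookup of 'cephfs_data' returns ENOENT. As a hypothetical workaround sketch for the Ceph server image used by this e2e test (not the fix in the linked PR, which removed the case from CI), the monitor's crushtool smoke test could be disabled; `mon osd crush smoke test` is a standard Ceph monitor option:

```ini
# ceph.conf fragment (illustrative workaround, not the actual fix applied)
[mon]
# Skip the crushtool validation of CRUSH map changes, which is what
# timed out after 5 seconds in the CI log and aborted pool creation.
mon osd crush smoke test = false
```

With the smoke test disabled, creating both pools with `ceph osd pool create` should succeed, after which `ceph fs new cephfs cephfs_metadata cephfs_data` can bind them into the filesystem the test mounts via ceph-fuse.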

Comment 3 Jan Safranek 2019-09-02 14:10:35 UTC
Filed https://github.com/openshift/origin/pull/23708

Comment 5 Wei Duan 2019-09-09 05:34:06 UTC
I checked the 4.2.0-0.nightly test results; this test case has been removed since Sep 7.

Comment 6 errata-xmlrpc 2019-10-16 06:39:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

