test: [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents

is failing frequently in CI, see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=%5C%5Bsig-storage%5C%5D+In-tree+Volumes+%5C%5BDriver%3A+cinder%5C%5D+%5C%5BTestpattern%3A+Dynamic+PV+%5C%28default+fs%5C%29%5C%5D+fsgroupchangepolicy+%5C%28Always%5C%29%5C%5BLinuxOnly%5C%5D%2C+pod+created+with+an+initial+fsgroup%2C+new+pod+fsgroup+applied+to+volume+contents

FIXME: Replace this paragraph with a particular job URI from the search results to ground discussion. A given test may fail for several reasons, and this bug should be scoped to one of those reasons. Ideally you'd pick a job showing the most common reason, but since that's hard to determine, you may also choose to pick a job at random. Release-gating jobs (release-openshift-...) should be preferred over presubmits (pull-ci-...) because they are closer to the released product and less likely to have in-flight code changes that complicate analysis.

FIXME: Provide a snippet of the test failure or error from the job log.
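For context on what the test pattern in the title exercises: it mounts a dynamically provisioned Cinder volume in a pod with an fsGroup set, then checks that a second pod with a new fsGroup has that group applied to the volume contents. Below is a minimal sketch of the securityContext fields involved, built with the upstream k8s.io/api Go types; the fsGroup value 1000 is an illustrative assumption, not taken from the test or the job logs.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	fsGroup := int64(1000) // illustrative value; the e2e test picks its own GIDs

	// "(Always)" in the test name refers to this policy: the kubelet
	// recursively re-owns the volume to fsGroup on every mount. The sibling
	// "(OnRootMismatch)" tests use corev1.FSGroupChangeOnRootMismatch instead.
	policy := corev1.FSGroupChangeAlways

	sc := corev1.PodSecurityContext{
		FSGroup:             &fsGroup,
		FSGroupChangePolicy: &policy,
	}
	fmt.Printf("fsGroup=%d fsGroupChangePolicy=%s\n", *sc.FSGroup, *sc.FSGroupChangePolicy)
}
```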
There are a large number of [Driver: cinder] tests with the same pass/fail rate as the test in the title, so I assume they share the same cause and no new bugs should be filed for them. They are:

[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should store data
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
All the flakes are caused by connections to the API server being rejected or timing out; we track that in bug 1890131. I did not notice a single flake of this test caused by the storage / fsgroup implementation in the past 7 days.

*** This bug has been marked as a duplicate of bug 1890131 ***