Bug 1878756 - [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Summary: [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup fo...
Keywords:
Status: CLOSED DUPLICATE of bug 1877355
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: aos-storage-staff@redhat.com
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-14 13:15 UTC by Lalatendu Mohanty
Modified: 2020-09-15 12:00 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
[sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Last Closed: 2020-09-15 12:00:32 UTC
Target Upstream Version:
Embargoed:



Description Lalatendu Mohanty 2020-09-14 13:15:49 UTC
test:
[sig-storage] PersistentVolumes-local  [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted 

is failing frequently in CI, see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=%5C%5Bsig-storage%5C%5D+PersistentVolumes-local++%5C%5BVolume+type%3A+dir-link%5C%5D+Set+fsGroup+for+local+volume+should+set+different+fsGroup+for+second+pod+if+first+pod+is+deleted
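
For context, this test exercises the pod-level fsGroup setting on a local (dir-link) PersistentVolume: the kubelet is expected to re-chown the volume when a second pod with a different fsGroup replaces the first. The sketch below only illustrates what such a pod spec looks like in Go; the group ID, claim name, and image are hypothetical and are not the values the e2e test uses.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Hypothetical group ID; the e2e test picks its own value and then
    	// repeats the check with a different GID for the second pod.
    	fsGroup := int64(1234)

    	pod := corev1.Pod{
    		Spec: corev1.PodSpec{
    			SecurityContext: &corev1.PodSecurityContext{
    				// fsGroup asks the kubelet to make the volume group-owned
    				// by this GID so the container can write to it.
    				FSGroup: &fsGroup,
    			},
    			Containers: []corev1.Container{{
    				Name:  "writer",
    				Image: "busybox",
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "local-vol", MountPath: "/mnt/volume1"},
    				},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "local-vol",
    				VolumeSource: corev1.VolumeSource{
    					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
    						ClaimName: "local-pvc", // hypothetical PVC bound to the local PV
    					},
    				},
    			}},
    		},
    	}

    	fmt.Printf("requested fsGroup: %d\n", *pod.Spec.SecurityContext.FSGroup)
    }

Judging from the error snippets below, though, the failures appear to come from the API server being unreachable during the run rather than from the fsGroup handling itself.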

Error snippets:

E oauth-apiserver OAuth API is not responding to GET requests
E kube-apiserver Kube API started failing: Get "https://api.ci-op-sw57lbn7-08daf.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 13 00:42:53.628 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-156.us-west-2.compute.internal node/ip-10-0-146-156.us-west-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): Bookmarks=true&resourceVersion=20176&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp [::1]:6443: connect: connection refused
E0913 00:42:51.949237       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/networks?allowWatchBookmarks=true&resourceVersion=21331&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp [::1]:6443: connect: connection refused
E0913 00:42:51.956782       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=23488&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp [::1]:6443: connect: connection refused
E0913 00:42:52.416715       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused
I0913 00:42:52.560945       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
I0913 00:42:52.561009       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-146-156_9e94ab44-58f4-4685-b620-20be1b71b0e5 stopped leading
F0913 00:42:52.561056       1 controllermanager.go:291] leaderelection lost
I0913 00:42:52.572804       1 node_lifecycle_controller.go:601] Shutting down node controller
I0913 00:42:52.583425       1 garbagecollector.go:147] Shutting down garbage collector controller
I0913 00:42:52.583432       1 pv_protection_controller.go:93] Shutting down PV protection controller
E0913 00:42:52.607937       1 event.go:272] Unable to write event: 'Post https://localhost:6443/api/v1/namespaces/default/events: dial tcp [::1]:6443: connect: connection refused' (may retry after sleeping)


Over the last 7 days this test has had a 0% success rate.

Comment 1 Jan Safranek 2020-09-15 12:00:32 UTC

*** This bug has been marked as a duplicate of bug 1877355 ***

