Bug 1313391
Summary: | Node of a pod using an NFS PVC successfully mounts it, but then immediately unmounts it. | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Christophe Augello <caugello>
Component: | Storage | Assignee: | Bradley Childs <bchilds>
Status: | CLOSED ERRATA | QA Contact: | Jianwei Hou <jhou>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 3.1.0 | CC: | aos-bugs, bchilds, bleanhar, caugello, cryan, erjones, fsollami, hchen, iievstig, jokerman, pep, tdawson
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-05-12 16:30:56 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1267746 | |
Description
Christophe Augello
2016-03-01 14:20:05 UTC
Hi all,

Please prioritize this bugzilla. It is quite urgent for the customer. The temporary workaround does not seem like an option for them.

Thanks,
Julia
Team Lead GSS EMEA

Can you provide more details about the user and SCC settings being used here?

@Bradley defaults are used.

It looks like this issue: https://github.com/kubernetes/kubernetes/issues/20734

Which is fixed upstream and will be in 3.2 but is not in 3.1.

k8s 19600 is ported to openshift origin 1.1.3: https://github.com/openshift/origin/commit/3aa75a49ff71a38dcb128d5165d417afc4758568

It is also in OSE v3.1.1.901: https://github.com/openshift/ose/commit/3aa75a49ff71a38dcb128d5165d417afc4758568

Should be in OSE v3.1.1.911, which was pushed to QE today.

(In reply to Bradley Childs from comment #6)
> It looks like this issue:
> https://github.com/kubernetes/kubernetes/issues/20734

For reference: also tracked in Origin as bug 1298284.

*** Bug 1314924 has been marked as a duplicate of this bug. ***

Verified on:
openshift v3.1.1.911
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

According to bug 1298284, the verification steps are:
1. Create a PV and a claim (I use Cinder volumes, but I saw it on AWS and GCE too)
2. Create a pod that uses the claim
3. In a loop:
   3.1 Create the pod
   3.2 Wait until it is running
   3.3 Run 'kubectl describe pods'
   3.4 Delete it
   3.5 Wait until the volume is unmounted and detached from the node (this is important!)

At step 3.3, I have not seen the pod restarted. I've written a script to repeat the test 20 times and cannot reproduce the problem. This issue is fixed.

(In reply to Bradley Childs from comment #6)
> It looks like this issue:
> https://github.com/kubernetes/kubernetes/issues/20734
>
> Which is fixed upstream and will be in 3.2 but is not in 3.1.

Apparently this was backported to 3.1 via bug 1318472.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1064
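For reference, the PV/claim/pod setup used in the verification steps can be sketched as minimal Kubernetes manifests. This is a hedged example only: the bug concerns an NFS PVC, so an NFS-backed PV is shown, but the server address, export path, image, and all object names below are placeholders and do not come from the report.

```yaml
# Sketch of the objects from the verification steps:
# a PV, a claim bound to it, and a pod that uses the claim.
# All names, the NFS server, export path, and image are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/share      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: test
      image: registry.access.redhat.com/rhel7   # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/nfs                   # volume should stay mounted here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-claim
```

With manifests like these, the loop in step 3 amounts to repeatedly creating the pod, waiting for it to run, checking `kubectl describe pods` for restarts, deleting it, and waiting for the volume to detach before the next iteration.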