Bug 1313560 - EBS persistent volume claims fail on re-use
Summary: EBS persistent volume claims fail on re-use
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: urgent
Target Milestone: ---
Target Release: 3.1.1
Assignee: Sami Wagiaalla
QA Contact: Jianwei Hou
URL:
Whiteboard:
Duplicates: 1317741
Depends On: 1305417 1318518
Blocks: OSOPS_V3 1267746 1317577
 
Reported: 2016-03-01 22:21 UTC by Sten Turpin
Modified: 2019-10-10 11:24 UTC
CC List: 15 users

Fixed In Version: atomic-openshift-3.1.1.6-4.git.26.9549be3.el7aos
Doc Type: Bug Fix
Doc Text:
Cause: EBS volume attachment data cannot be cached reliably. Additionally, after a claim is released, the volume retains a reference to the claim to which it was bound. Consequence: Volumes that had been detached were not made available as they should have been. Also, if a claim is deleted, the volume it was bound to is released, and another claim is created with the same name, the new claim will try to bind to the old PV. Fix: We no longer rely on cached volume attachment data and instead check on each request to mount or unmount an EBS volume. We also now check the claim's UID when trying to bind a PV based on the claim, to ensure that the correct claim matches the correct PV. Result: EBS volume attachment tasks are much more reliable.
Clone Of:
Clones: 1317577 1318161
Environment:
Last Closed: 2016-03-24 15:53:40 UTC
Target Upstream Version:
Embargoed:


Attachments
volumes stuck in attaching state (322.93 KB, text/plain)
2016-03-04 18:23 UTC, Stefanie Forrester


Links
Red Hat Product Errata RHBA-2016:0510 (SHIPPED_LIVE, priority normal): Red Hat OpenShift Enterprise bug fix update, last updated 2016-03-24 19:53:32 UTC

Description Sten Turpin 2016-03-01 22:21:13 UTC
Description of problem: When the postgresql-persistent template is deployed, it creates a pv claim which goes into failed state. The pod then either fails or never leaves ContainerCreating.


Version-Release number of selected component (if applicable):


How reproducible: consistently, in one namespace on one cluster


Steps to Reproduce:
1. Deploy postgresql-persistent from the web ui

Actual results:

[root@tsi-master-06764 ~]# oc get pv | grep Failed
pv-2-tsi-master-d4536-vol-379a3b9e    type=ebs   2Gi        RWO           Failed      test-staging/postgresql             11d
[root@tsi-master-06764 ~]# oc get pods -n test-staging | grep Creating
postgresql-1-mzvda       0/1       ContainerCreating   0          1h

Expected results:
template should deploy

Additional info:

Comment 1 Stefanie Forrester 2016-03-01 22:44:43 UTC
This issue happens specifically when using EBS PVs as storage. Out of 4 failures we've seen today, we've had 2 instances of EBS volumes getting stuck in "attaching" state (seen in the AWS web console). One of those has been stuck attaching for nearly 4 hours.

Deploying apps using the API worked most times, but one of them still got stuck "attaching". Deploying apps using the openshift web console seems to reliably result in the PVs entering Failed state.

Here are some logs from one app that was deployed using the web console:

[root@tsi-master-06764 ~]# oc describe pods postgresql-1-mzvda -n cocoon-staging
...
Successfully assigned postgresql-1-mzvda to ip-172-31-54-4.ec2.internal
Unable to mount volumes for pod "postgresql-1-mzvda_cocoon-staging": unsupported volume type
Error syncing pod, skipping: unsupported volume type

[root@tsi-master-06764 ~]# oc describe pvc -n cocoon-staging
Name:           postgresql
Namespace:      cocoon-staging
Status:         Bound
Volume:         pv-2-tsi-master-d4536-vol-379a3b9e
Labels:         template=postgresql-persistent-template
Capacity:       2Gi
Access Modes:   RWO


[root@tsi-master-06764 ~]# oc describe pv pv-2-tsi-master-d4536-vol-379a3b9e
Name:           pv-2-tsi-master-d4536-vol-379a3b9e
Labels:         type=ebs
Status:         Failed
Claim:          cocoon-staging/postgresql
Reclaim Policy: Recycle
Access Modes:   RWO
Capacity:       2Gi
Message:        no recyclable volume plugin matched
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://us-east-1c/vol-379a3b9e
    FSType:     ext4
    Partition:  0
    ReadOnly:   false

Viewing EBS volume vol-379a3b9e in the AWS web console, it shows as Available.

Comment 2 Sami Wagiaalla 2016-03-02 16:39:18 UTC
I'll take a look.

It would help me get to this faster if you could give me more concise steps to reproduce this issue:
- Which version of Origin/openshift is this using ?
- How have you configured it/set it up ?
- Where can I get the postgresql-persistent template ?

Comment 3 Stefanie Forrester 2016-03-02 18:18:42 UTC
We're using version 3.1.1, installed using the BYO installer here:

https://github.com/openshift/openshift-ansible/blob/master/playbooks/byo/config.yml

I'll attach our BYO inventory to this bug to show the settings used. Once the installer finishes, the postgresql-persistent template will be present in the system, since it's part of the openshift-examples templates.

https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_examples/files/examples/v1.1/db-templates/postgresql-persistent-template.json

Here are the package versions. All packages on the system were up-to-date as of 1-2 weeks ago.

[root@tsi-master-06764 ~]# rpm -qa atomic-openshift*
atomic-openshift-3.1.1.6-3.git.18.5aabe62.el7aos.x86_64
atomic-openshift-node-3.1.1.6-3.git.18.5aabe62.el7aos.x86_64
atomic-openshift-master-3.1.1.6-3.git.18.5aabe62.el7aos.x86_64
atomic-openshift-clients-3.1.1.6-3.git.18.5aabe62.el7aos.x86_64
atomic-openshift-sdn-ovs-3.1.1.6-3.git.18.5aabe62.el7aos.x86_64

Here is an overview of our infrastructure, in case that helps:

We have 3 masters using native HA, 2 infra nodes (which just run the router/registry), and 4 compute nodes.

The 3 masters are added to an ELB for public access (DNS CNAME api.tsi.openshift.com). The instances are also added to a second internal-only ELB for internal communication between masters/nodes (internal.api.tsi.openshift.com).

The 2 infra nodes are added to an ELB to give us HA routing (CNAME *.e19a.tsi.openshiftapps.com).

After the installer completes, we do some post-install configuration, including setting up the cluster for the AWS cloud-provider, which will allow us to use the EBS Persistent Volumes.

https://docs.openshift.com/enterprise/3.1/install_config/configuring_aws.html

(Note that if you have more than one master, the credentials will have to be added to /etc/sysconfig/atomic-openshift-master-api and /etc/sysconfig/atomic-openshift-master-controllers )
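
For illustration, a minimal sketch of that multi-master note, assuming the environment-variable approach from the linked AWS configuration doc (placeholder credentials; check the doc for the exact format):

  # Hypothetical placeholder credentials; in a native HA setup both master
  # services need them, and a restart picks them up.
  for f in /etc/sysconfig/atomic-openshift-master-api \
           /etc/sysconfig/atomic-openshift-master-controllers; do
    echo 'AWS_ACCESS_KEY_ID=<access key>'     >> "$f"
    echo 'AWS_SECRET_ACCESS_KEY=<secret key>' >> "$f"
  done
  systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers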

Then we create some EBS volumes in AWS and configure them as persistent volumes using this playbook.

https://github.com/openshift/openshift-ansible/tree/master/playbooks/adhoc/create_pv
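
For reference, a PV definition of the sort that playbook produces looks roughly like the sketch below. It is reconstructed from the 'oc describe pv' output in comment 1 rather than taken from the playbook, and it uses Retain because, as comment 23 later points out, Recycle is not supported for EBS:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-2-tsi-master-d4536-vol-379a3b9e
    labels:
      type: ebs
  spec:
    capacity:
      storage: 2Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain   # not Recycle; see comment 23
    awsElasticBlockStore:
      volumeID: aws://us-east-1c/vol-379a3b9e
      fsType: ext4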

Comment 5 Stefanie Forrester 2016-03-03 18:51:42 UTC
I created a test cluster and tried out the postgresql-persistent template using the web interface. The first deploy in each project worked fine. But if I deleted the app and re-created it in the same project, subsequent attempts to start the postgres pod would fail with "unsupported volume type", which kept it in ContainerCreating forever.

I think what's happening is the pvc keeps binding to the same old persistent volume. Even after I delete the pvc and create a new postgresql-persistent app in the same namespace, it tries to use the old PV, which is now in Failed state. (So that's a brand new pvc, binding to the old Failed PV).

The PV enters Failed state after being used once. If we were using NFS instead of EBS, the volume would be 'Recycled' after having been used, so that other apps can use it. In this case, I think all we're seeing is a lack of Recycle functionality for EBS PVs. And maybe some poor handling of Failed PVs. 

Ideally, a Failed PV should never be chosen by a new PVC, especially when there are plenty of PVs in the Available state.

Comment 6 Stefanie Forrester 2016-03-03 20:54:22 UTC
I tested it again using Retain instead of Recycle in the PV definitions. There wasn't any improvement, but I did notice a new issue that will make EBS PVs very difficult for us to use.

The EBS volumes are getting stuck in "attaching" state. And I think it's because kubernetes/openshift is trying to use the same mountpoint for multiple volumes. I'm not sure how to verify, but this is what I'm seeing:

When I create a couple of persistent storage apps, one after another, I see their event logs showing similar errors.

9m          9m        1         postgresql-1-u8lsa   Pod                 FailedMount   {kubelet ip-172-31-3-236.us-west-1.compute.internal}   Unable to mount volumes for pod "postgresql-1-u8lsa_dakinitest6": Error attaching EBS volume: InvalidParameterValue: Invalid value '/dev/xvdf' for unixDevice. Attachment point /dev/xvdf is already in use


36s         26s        2         postgresql-1-fftpi    Pod                                                         FailedSync          {kubelet ip-172-31-3-236.us-west-1.compute.internal}   Error syncing pod, skipping: Error attaching EBS volume: InvalidParameterValue: Invalid value '/dev/xvdg' for unixDevice. Attachment point /dev/xvdg is already in use
            status code: 400, request id: 
36s         26s       2         postgresql-1-fftpi   Pod                 FailedMount   {kubelet ip-172-31-3-236.us-west-1.compute.internal}   Unable to mount volumes for pod "postgresql-1-fftpi_dakinitest8": Error attaching EBS volume: InvalidParameterValue: Invalid value '/dev/xvdg' for unixDevice. Attachment point /dev/xvdg is already in use
            status code: 400, request id: 
44s         6s        3         postgresql-1-fftpi   Pod                 FailedSync   {kubelet ip-172-31-3-236.us-west-1.compute.internal}   Error syncing pod, skipping: Error attaching EBS volume: InvalidParameterValue: Invalid value '/dev/xvdf' for unixDevice. Attachment point /dev/xvdf is already in use
            status code: 400, request id: 
44s         6s        3         postgresql-1-fftpi   Pod                 FailedMount   {kubelet ip-172-31-3-236.us-west-1.compute.internal}   Unable to mount volumes for pod "postgresql-1-fftpi_dakinitest8": Error attaching EBS volume: InvalidParameterValue: Invalid value '/dev/xvdf' for unixDevice. Attachment point /dev/xvdf is already in use

Comment 7 Stefanie Forrester 2016-03-03 21:46:15 UTC
Possibly related: https://github.com/kubernetes/kubernetes/issues/18106

Comment 8 hchen 2016-03-03 21:58:13 UTC
another possible one
https://github.com/kubernetes/kubernetes/pull/19600

Comment 9 Sami Wagiaalla 2016-03-04 15:08:34 UTC
Stefanie,

Can you reproduce the issue using just the CLI ? Or is it still only through the web UI ?

Also, if a pod is recreated using the same claim name as before, it should attach to the same PV so it can continue from the state it was in before (database data, for example). If the pod is recreated with a new PVC name, it should attach to a new volume.

> The PV enters Failed state after being used once.

Hmm.. that does not seem right. That is probably where the issue is.

Comment 10 hchen 2016-03-04 15:37:51 UTC
@Sami, PR 19600 is about a PV cleanup leak. It might be the case that once the PV is used, it fails to be cleaned up and becomes unusable.

Comment 11 Stefanie Forrester 2016-03-04 18:12:20 UTC
(In reply to Sami Wagiaalla from comment #9)
> Stefanie,
> 
> Can you reproduce the issue using just the CLI ? Or is it still only through
> the web UI ?
> 

Yes, I can reproduce the issue using the CLI too. I don't think it's related to the web interface anymore. It's more likely to be related to whichever node the containers land on, since this morning I found an issue with one specific node on a new test cluster. After restarting atomic-openshift-node, 3/4 of the pods found their volumes, which were actually attached to the instance the whole time.

> Also, if a pod is recreated using the same claim name as before it should
> attach to the same PV that is so it can continue from the state it was it
> before (database data for example) if the pod is recreated with a new PVC
> name it should attach to a new volume.
> 
> > The PV enters Failed state after being used once.
> 
> Hmm.. that does not seem right. That is probably where the issue is.

I think the PV entering Failed state makes sense, because we had the PVs set to Recycle instead of Retain. It tries to Recycle the PV, and then errors because it "can't find a recycle plugin" for this type of storage. Since I've switched to Retain, I can delete and re-create pods successfully. They no longer enter Failed state. So that was a configuration issue on my part.

The main thing keeping me from re-creating the pods previously was that the volumes were not being attached properly: some were stuck in 'attaching', others were already attached to the correct node, but the node didn't seem to understand that.

This morning I had 4 pods that were stuck in ContainerCreating for the past 21 hours, waiting for the volumes to attach. I finally restarted the atomic-openshift-node service on that node, and surprisingly 3 of the pods found their volumes. Two volumes that I checked had actually been attached to the correct node all along, but the node didn't know that. I'll attach logs of that incident.

One of the 4 stuck pods is still stuck in ContainerCreating, even after restarting atomic-openshift node, it's repeatedly trying to attach the volume. But the EBS volume is stuck in 'attaching' state, attaching to this node instance in AWS.

Mar 04 13:05:36 ip-172-31-3-236.us-west-1.compute.internal atomic-openshift-node[96455]: I0304 13:05:36.173374   96455 aws.go:885] Assigned mount device f -> volume vol-cbdb9764
Mar 04 13:05:36 ip-172-31-3-236.us-west-1.compute.internal atomic-openshift-node[96455]: I0304 13:05:36.599796   96455 aws.go:903] Releasing mount device mapping: f -> volume vol-cbdb9764
Mar 04 13:05:36 ip-172-31-3-236.us-west-1.compute.internal atomic-openshift-node[96455]: E0304 13:05:36.599850   96455 kubelet.go:1521] Unable to mount volumes for pod "postgresql-1-1aj0p_dakinitest5": Error attaching EBS volume: VolumeInUse: vol-cbdb9764 is already attached to an instance

Comment 13 Stefanie Forrester 2016-03-04 18:23:07 UTC
Created attachment 1133231 [details]
volumes stuck in attaching state

4 pods are stuck in ContainerCreating. The node thinks their volumes still need to be attached, but 3 of them are already attached to the correct node.

The 4th one is somewhat attached to the node, but its status in AWS is "attaching" to that node. It's been stuck in "attaching" for 21+ hours, possibly due to the way the node is trying to attach it? Or maybe due to choosing an attachment point that is unavailable?

These logs show the state of one stuck pod, and the restarting of atomic-openshift-node that gets 3/4 pods unstuck.

Comment 14 Sami Wagiaalla 2016-03-04 18:31:59 UTC
 
> The 4th one is somewhat attached to the node, but its status in AWS is
> "attaching" to that node. It's been stuck in "attaching" for 21+ hours,
> possibly due to the way the node is trying to attach it? Or maybe due to
> choosing an attachment point that is unavailable?
> 

Can you try detaching the volume through the AWS UI. Does the node reattach it and start the pod ?

Comment 15 Stefanie Forrester 2016-03-04 19:12:00 UTC
> Can you try detaching the volume through the AWS UI. Does the node reattach
> it and start the pod ?

I've seen that work a few times on another cluster, but this last volume here isn't getting unstuck. I force-detached it, and it became available, and then went back into 'attaching'. It's been in attaching for 12 minutes now.

The node is still saying "Error attaching EBS volume: VolumeInUse: vol-cbdb9764 is already attached to an instance", and the pod is still in ContainerCreating.
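
For reference, a CLI equivalent of the force-detach, sketched with the volume ID from the logs above:

  aws ec2 detach-volume --volume-id vol-cbdb9764 --force
  aws ec2 describe-volumes --volume-ids vol-cbdb9764 \
      --query 'Volumes[0].{State:State,Attachments:Attachments}'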

Comment 16 Bradley Childs 2016-03-04 20:03:01 UTC
Can you verify the exact 3.1 build? The fix we suspect resolves the issue is in the latest 3.1.x release (3.1.3):

https://github.com/openshift/origin/commit/3aa75a49ff71a38dcb128d5165d417afc4758568

Comment 17 Stefanie Forrester 2016-03-04 20:24:36 UTC
[root@tsi-master-d4536 ~]# oc version
oc v3.1.1.6-19-gbd1cff9
kubernetes v1.1.0-origin-1107-g4c8e6f4

[root@dakinitest-master-32b32 ~]# oc version
oc v3.1.1.6-21-gcd70c35
kubernetes v1.1.0-origin-1107-g4c8e6f4

Comment 18 Sami Wagiaalla 2016-03-04 20:36:19 UTC
Okay, I think I have reproduced these issues here. There are several issues in this report:
- The recycle policy does not work (that one actually makes sense)
- Creating, deleting, and re-creating the pod does not work (the claim is not getting bound again)

The remaining issue is multiple PVs trying to use the same mount point. I could not reproduce that one; still working on it.

Comment 19 Stefanie Forrester 2016-03-04 20:56:09 UTC
I actually just hit another problem with EBS PVs... It may need to be separated out into a different bug, but I thought I'd post it here since I'm not sure.

On another cluster, I just hit an issue where a pod was unable to start for days because it was waiting on the EBS volume state, which looks a lot like this issue: https://github.com/kubernetes/kubernetes/issues/15073

The volume was in 'available' state in AWS, but the node kept giving timeout errors when trying to get the state.

Unable to mount volumes for pod "hawkular-cassandra-1-p9o6l_openshift-infra": Timeout waiting for volume state
Error syncing pod, skipping: Timeout waiting for volume state

It was in that condition for 3 days according to 'oc events'. I fixed it by restarting the atomic-openshift-node service on the node hosting that container.

Comment 20 Sami Wagiaalla 2016-03-04 22:15:37 UTC
> [root@dakinitest-master-32b32 ~]# oc version
> oc v3.1.1.6-21-gcd70c35
> kubernetes v1.1.0-origin-1107-g4c8e6f4

Are you able to try with a newer version ? (3.1.3)
There are two issues at play here which contribute to the general instability; neither has a hot fix, but if you try with the latest version, perhaps we can focus on the most immediate issue and come up with a short-term workaround to increase stability.

Comment 21 Stefanie Forrester 2016-03-04 22:17:07 UTC
I have to switch away from testing to work on another cluster build, but my next build (next week) will be version 3.2. I'll do some testing on that when I get it online.

Comment 22 Andy Grimm 2016-03-08 19:15:14 UTC
Changing the Product for this to OSE to make it clear that OSE is what we are deploying here.

Comment 23 Jan Safranek 2016-03-09 10:20:05 UTC
Jumping late into this discussion, I have a few observations:

1) Don't use the Recycler with EBS. It won't work, and all PVs with the Recycle policy will enter the Failed state with a "no recyclable volume plugin matched" message. This is expected; we support recycling only for NFS volumes.

2) The current Kubernetes EBS code expects that the /dev/xvd[f-p] devices are for Kubernetes only. Bad things will happen if some other device (root disk, ...) gets a device name in this range, probably accompanied by the log message "Attachment point /dev/xvdf is already in use" (see the sketch at the end of this comment for one way to list which device names are in use). See https://github.com/kubernetes/kubernetes/issues/18106. This needs to be fixed.

3) In addition, Kubernetes EBS code can use *only* /dev/xvd[f-p] devices for PVs. Bad things will happen when more than 11 PVs need to be attached to a single node. See https://bugzilla.redhat.com/show_bug.cgi?id=1315995. This needs to be fixed.


4) If you manually restart atomic-openshift-node, it loses track of what is attached there and you must manually detach all PVs in the AWS console afterwards! This can log either "Attachment point /dev/xvdf is already in use" or "vol-cbdb9764 is already attached to an instance". This should be fixed, but it's hard. See https://github.com/kubernetes/kubernetes/issues/20262.


I'll try to find out if there are any other issues with EBS.
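
As mentioned in observation 2, one way to list which device names are currently claimed on a node is the AWS CLI (a sketch; substitute the node's instance ID):

  aws ec2 describe-volumes \
      --filters Name=attachment.instance-id,Values=<instance-id> \
      --query 'Volumes[].Attachments[].[Device,State,VolumeId]' \
      --output table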

Comment 24 Jan Safranek 2016-03-09 11:22:28 UTC
5) "EBS volume is stuck in 'attaching' state, attaching to this node instance in AWS." - that's odd. If the AWS console itself shows the volume as "attaching" without succeeding/failing in reasonable time (few minutes), there must be a bug in AWS.

Comment 25 Jan Safranek 2016-03-09 14:08:18 UTC
I was able to reproduce a pod stuck in ContainerCreating forever with "Error syncing pod, skipping: unsupported volume type".

I just created 12 PVs and 12 claims and waited for all claims to be bound. Then I created 12 pods that use these claims. 5 pods started, the rest got these errors:

    Scheduled    Successfully assigned pod-6 to ip-172-18-0-64.ec2.internal
    FailedMount  Unable to mount volumes for pod "pod-6_default": unsupported
    FailedSync   Error syncing pod, skipping: unsupported volume type

Running Kubernetes from openshift/kubernetes, commit 4c8e6f4 (= tag 1.1.0-origin).

Comment 26 Jan Safranek 2016-03-09 14:14:47 UTC
.. and the culprit is the persistent volume claim binder. I have 12 PVs and 12 claims; they should be bound one-to-one, but I get 8 claims bound to a single PV instead:

$ cluster/kubectl.sh get pvc
NAME       LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
claim-1    <none>    Bound     vol-1     1Gi        RWO           3m
claim-10   <none>    Bound     vol-5     1Gi        RWO           3m
claim-11   <none>    Bound     vol-5     1Gi        RWO           3m
claim-12   <none>    Bound     vol-5     1Gi        RWO           3m
claim-2    <none>    Bound     vol-2     1Gi        RWO           3m
claim-3    <none>    Bound     vol-3     1Gi        RWO           3m
claim-4    <none>    Bound     vol-4     1Gi        RWO           3m
claim-5    <none>    Bound     vol-5     1Gi        RWO           3m
claim-6    <none>    Bound     vol-5     1Gi        RWO           3m
claim-7    <none>    Bound     vol-5     1Gi        RWO           3m
claim-8    <none>    Bound     vol-5     1Gi        RWO           3m
claim-9    <none>    Bound     vol-5     1Gi        RWO           3m

'vol-5' is bound too many times.

/me starts backporting binder patches

Comment 27 Jan Safranek 2016-03-09 14:18:27 UTC
(In reply to Jan Safranek from comment #23)
> 4) If you manually restart atomic-openshift node, it loses track of what is
> attached there and you must manually detach all PVs in AWS console
> afterwards! This can log either "Attachment point /dev/xvdf is already in
> use" or "vol-cbdb9764 is already attached to an instance". This should be
> fixed, but it's hard.  See
> https://github.com/kubernetes/kubernetes/issues/20262.  


I was wrong; a restarted atomic-openshift-node *does* reload the list of attached volumes and should continue working as if it had not been restarted. No manual intervention is necessary. Unless the code is buggy, of course.

Comment 28 Jan Safranek 2016-03-09 15:50:27 UTC
Fix for comment #26: https://github.com/kubernetes/kubernetes/pull/16432

Comment 29 Jan Safranek 2016-03-09 16:33:10 UTC
(In reply to Jan Safranek from comment #28)
> Fix for comment #26: https://github.com/kubernetes/kubernetes/pull/16432

Umm, it's already fixed in v3.1.1.6:

commit d0f9d1a5558d7bb4fa4260b9792225b42d48fe50
Author: markturansky <mturansk>
Date:   Wed Nov 4 13:40:31 2015 -0500

    UPSTREAM: 16432: fixed pv binder race condition


Scratch comments #25-26 then, I can't reproduce anything. Back to the beginning...

Comment 30 Jan Safranek 2016-03-10 13:30:08 UTC
I can finally reproduce and fix a pod stuck in ContainerCreating forever with "Error syncing pod, skipping: unsupported volume type" in real OpenShift Enterprise.

Sami's reproducer:
  ./oc new-app postgresql-persistent -l name=postgresql
  ./oc delete pods,dc,svc,pvc -l name=postgresql
  (the PV goes to the "Released" phase)
  ./oc new-app postgresql-persistent -l name=postgresql
  The claim tries to bind the old "Released" volume because it has the same name as before.
  The claim says Bound but the PV is still "Released".
  The pod fails with the "unsupported volume type" error.

Claim binder should check not only claim name and namespace but also UID, just  in case someone re-creates a claim with the same name as before.

It's already fixed upstream:
https://github.com/kubernetes/kubernetes/pull/20197

I created a patch and pushed it to my OSE fork:
https://github.com/jsafrane/ose/tree/enterprise-3.1
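
To see the stale binding this UID check guards against, the UID recorded in the PV's claimRef can be compared with the UID of the re-created claim (a diagnostic sketch using the object names from comment 1; after a delete and re-create the two UIDs no longer match):

  oc get pv pv-2-tsi-master-d4536-vol-379a3b9e -o yaml | grep -A 6 claimRef
  oc get pvc postgresql -n test-staging -o yaml | grep uid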



Now I'll focus on error "VolumeInUse: vol-cbdb9764 is already attached to an instance".

Comment 31 Jianwei Hou 2016-03-11 02:59:59 UTC
There are some known bugs related to this one:

Any PV (NFS and hostPath excluded) with the Recycle policy will become 'FAILED' once the claim is deleted: https://bugzilla.redhat.com/show_bug.cgi?id=1298813

Inconsistent PV and PVC status (PV released, but PVC bound):
https://bugzilla.redhat.com/show_bug.cgi?id=1298813

Comment 32 Jan Safranek 2016-03-11 13:45:59 UTC
Now I am reliably able to reproduce "Error attaching EBS volume: VolumeInUse: vol-cbdb9764 is already attached to an instance":

1. Prepare a Kubernetes cluster with several AWS EBS PVs and bound claims. Using the postgresql-persistent template is fine. Do not start any pods yet!

2. Prepare ~10-20 dummy AWS volumes. Size does not matter.

3. In a busy loop, attach and detach (using the "aws ec2 attach-volume" command) your dummy volumes as /dev/xvdaa - /dev/xvdaz; see the sketch after these steps. This will load AWS a bit.

4. Start 1-10 pods that use the AWS PVs prepared in step 1. AWS is busy now, so you may see "Timeout waiting for volume state" and "Error attaching EBS volume: VolumeInUse: vol-cbdb9764 is already attached to an instance". If not, load AWS harder by attaching/detaching additional dummy volumes.

5. Stop the busy loop from step 3 and detach all your dummy volumes.
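
A rough sketch of the busy loop from step 3, with hypothetical volume and instance IDs; occasional AWS errors from the loop are harmless, the point is only to keep the EC2 API busy:

  # hypothetical dummy volumes and node instance ID; Ctrl-C to stop
  vols=(vol-aaaaaaaa vol-bbbbbbbb vol-cccccccc)
  devs=(aa ab ac)
  while true; do
    for i in "${!vols[@]}"; do
      aws ec2 attach-volume --volume-id "${vols[$i]}" \
          --instance-id i-xxxxxxxx --device "/dev/xvd${devs[$i]}"
      aws ec2 detach-volume --volume-id "${vols[$i]}"
    done
    sleep 1
  done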


Actual result:
Pods end up in the "ContainerCreating" state with "Error attaching EBS volume: VolumeInUse: vol-cbdb9764 is already attached to an instance". They never recover (a restart of openshift-node should help).

Expected result:
After some time (several minutes!) all containers should be Running. Only 11 AWS EBS volumes can be attached to the node, so some pods may be stuck in ContainerCreating when they want to attach more. That's expected (for now), see bug #1315995.

Comment 33 Jeremy Eder 2016-03-11 15:08:54 UTC
> 4) If you manually restart atomic-openshift node, it loses track of what is
> attached there and you must manually detach all PVs in AWS console
> afterwards! This can log either "Attachment point /dev/xvdf is already in
> use" or "vol-cbdb9764 is already attached to an instance". This should be
> fixed, but it's hard.  See
> https://github.com/kubernetes/kubernetes/issues/20262.  

Wait, what? Is this true that restarting the node service will wedge all EBS PVs?

Comment 34 Sami Wagiaalla 2016-03-11 15:20:37 UTC
(In reply to Jeremy Eder from comment #33)
> > 4) If you manually restart atomic-openshift node, it loses track of what is
> > attached there and you must manually detach all PVs in AWS console
> > afterwards! This can log either "Attachment point /dev/xvdf is already in
> > use" or "vol-cbdb9764 is already attached to an instance". This should be
> > fixed, but it's hard.  See
> > https://github.com/kubernetes/kubernetes/issues/20262.  
> 
> Wait, what? Is this true that restarting the node service will wedge all EBS
> PVs?

Negative. Please see comment #27

Comment 36 Stefanie Forrester 2016-03-11 22:45:46 UTC
So far the backported fixes look good in testing. I'm not getting any failures today, aside from a few of these:

Error syncing pod, skipping: Error attaching EBS volume: InvalidParameterValue: Value (/dev/xvd) for parameter device is invalid. /dev/xvd is not a valid EBS device name.

Most of them cleared up after I did another restart of atomic-openshift-node. The last one wouldn't clear up until I scaled it down and back up again. But I think things are looking a lot better now.

Comment 37 Jan Safranek 2016-03-14 08:54:43 UTC
(In reply to Stefanie Forrester from comment #36)

> Error syncing pod, skipping: Error attaching EBS volume:
> InvalidParameterValue: Value (/dev/xvd) for parameter device is invalid.
> /dev/xvd is not a valid EBS device name.

That's most likely caused by AWS allowing only 11 EBS volumes attached to a node. This is already reported as #1315995.

Comment 41 Wang Haoran 2016-03-15 05:26:14 UTC
*** Bug 1317741 has been marked as a duplicate of this bug. ***

Comment 42 Wenjing Zheng 2016-03-15 05:47:05 UTC
QE found similar bug during dedicated testing: https://bugzilla.redhat.com/show_bug.cgi?id=1304255.

Comment 47 Jan Safranek 2016-03-15 09:33:32 UTC
From pv.yaml:

    "status": {
        "phase": "Failed",
        "message": "no recyclable volume plugin matched"
    }

This is expected on AWS.

Comment 48 Wang Haoran 2016-03-15 09:57:45 UTC
Matt Woodson, could you please update the PV using the correct template? Recycle is not supported on AWS, as Jan said.

Comment 52 Jan Safranek 2016-03-16 08:15:42 UTC
Pretty please, open new bugs for new issues. We can't track all storage problems in one bug! This bug should track fixes for "VolumeInUse: vol-cbdb9764 is already attached to an instance" and "Error syncing pod, skipping: unsupported volume type".

I marked this as ON_QA and I cloned comment #51 as new bug #1318161; let's continue with the missing EBS volumes there.

Btw, sorry for the spam; cloning generates a lot of noise and I see many people on cc: here...

Comment 53 Wenjing Zheng 2016-03-16 09:52:42 UTC
@jsafrane, yes, the error originally reported no longer occurs, but our testing on ded-int-aws is still blocked by pods stuck in ContainerCreating because no EBS volumes are available, so we need ops to help with this.

@mwoodson, could you please help check on this and make EBS volumes available for PV usage in the ded-int-aws environment?

Comment 58 Sami Wagiaalla 2016-03-17 18:02:57 UTC
So what is left here that is not addressed ?

Comment 61 Jianwei Hou 2016-03-18 10:07:03 UTC
On ded-int-aws, the PV configurations are valid. Pods using persistent EBS storage are successfully created. I've also verified that the postgresql and mongodb persistent templates work. Deleting and recreating also works.

The remaining issues will be tracked separately as:
bug 1318975: AWS volumes remains in "in-use" status after deleting OSE pods which used them
bug 1318974: Creating pods on OSE v3.1.1.911 with awsElasticBlockStore only assigns devices /dev/xvdb - /dev/xvdp to openshift node

I'll mark this bug as verified now.

Comment 63 errata-xmlrpc 2016-03-24 15:53:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:0510

