Bug 1415624 - Persistent Volumes on NFS aren't all getting recycled
Summary: Persistent Volumes on NFS aren't all getting recycled
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.3.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.4.z
Assignee: Jan Safranek
QA Contact: Liang Xia
Depends On: 1392338 1395271 1395276
TreeView+ depends on / blocked
Reported: 2017-01-23 09:41 UTC by Jan Safranek
Modified: 2020-04-15 15:08 UTC (History)
10 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1392338
Last Closed: 2017-02-22 18:11:17 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Github openshift ose pull 565 0 None None None 2019-11-26 16:40:22 UTC
Red Hat Product Errata RHBA-2017:0289 0 normal SHIPPED_LIVE OpenShift Container Platform bug fix update 2017-02-22 23:10:04 UTC

Description Jan Safranek 2017-01-23 09:41:19 UTC
Clone for 3.4.x backport

+++ This bug was initially created as a clone of Bug #1392338 +++

Description of problem:
Version-Release number of selected component (if applicable):
openshift v3.3.1.3
kubernetes v1.3.0+52492b4
etcd 2.3.0+git

How reproducible:
1. Create a PV in the OpenShift namespace.
2. Create a new project.
3. Create new MySQL persistent instances.
4. Delete the project.

Steps to Reproduce:
1. Create some NFS persistent volumes, both on the NFS server and in OpenShift, using the Recycle reclaim policy (at least 8).
2. Create a new project and add new MySQL persistent instances (8 in my case).
3. The pods with MySQL are deployed and get assigned to persistent volumes via PVCs.
4. Once all pods are up and ready, delete the project, either via the web interface or oc delete.
5. Watch the PV status: oc get pv -w
6. Some PVs get status "Available" before the recycler completes the data scrubbing; sometimes the recycler pod does not start at all and the volume is flagged as Available while data is still present on the NFS volume.
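A PV of the kind described in step 1 might look roughly like the sketch below. The server address and export path are placeholders; the name, capacity, and access modes are taken from the watch output later in this report, and persistentVolumeReclaimPolicy: Recycle is what triggers the recycler pod whose behavior this bug describes.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  # Recycle: on release, a recycler pod scrubs the data and
  # the PV is supposed to become Available again afterwards.
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs.example.com   # hypothetical NFS server
    path: /exports/volume1    # hypothetical export path
```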

Actual results:
Some volumes get correctly scrubbed.
Some volumes are not scrubbed at all, and their status changes from Bound to Released to Available.

oc get po -w in the openshift-infra namespace lists a strange pod sequence when recycling fails:
recycler-for-volume1 0/1 Completed
recycler-for-volume1 0/1 Terminating
recycler-for-volume1 0/1 Terminating

in oc get pv -w:
volume1 10Gi RWO,RWX Released projecta/dbmysql 
volume1 10Gi RWO,RWX Released 
volume1 10Gi RWO,RWX Available

=> NFS share still has folder and data.

Expected results:
A volume must not reach status Available before the recycler has successfully deleted all data on the NFS volume.

Additional info:
There is no permission issue involved here, since for some volumes the data scrubbing completes correctly. Based on my observation, the issue is related to the recycler pod creation and the PV status update. The recycler pod does not always seem to go through the full lifecycle: "Pending" -> "ContainerCreating" -> "Running" -> "Completed" -> "Terminating".

For some reason the creation is sometimes not done, and we see only Completed + Terminating in the logs, or simply no recycler pod creation at all.

--- Additional comment from Jan Safranek on 2016-11-28 07:37:39 EST ---

Origin pull request: https://github.com/openshift/origin/pull/12041

Comment 1 Jan Safranek 2017-01-23 11:14:54 UTC
OSE pull request: https://github.com/openshift/ose/pull/565

Comment 2 Jan Safranek 2017-02-06 09:53:45 UTC

Comment 3 Troy Dawson 2017-02-06 19:27:34 UTC
This has been merged into ocp and is in OCP v3.5.0.17 or newer.

Comment 8 Troy Dawson 2017-02-07 23:03:18 UTC
Sorry about that.  The pull request has already merged and this needs to go onto the next 3.4 release.  Fixing things up.

Comment 10 Troy Dawson 2017-02-10 22:57:05 UTC
This has been merged into ocp and is in OCP v3.4.1.7 or newer.

Comment 12 Liang Xia 2017-02-13 05:10:05 UTC
No files were left behind after the PVs were recycled back to Available.

# openshift version
openshift v3.4.1.7
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Created an NFS server exporting 100 folders.
Created 100 PVs with the Recycle policy.
In a loop: create a project, create a PVC, create a pod, create a file in the mounted path within the pod, delete the project.

Files were created and removed during the testing.

After several hours of running, no files were left in the exported NFS server paths.
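The verification loop above can be sketched roughly as follows. This assumes a live cluster with oc logged in; the project prefix, manifest filenames, pod name, and mount path are all hypothetical placeholders, not taken from the actual test.

```shell
#!/bin/bash
# Rough sketch of the recycle stress test (hypothetical names throughout).
for i in $(seq 1 100); do
  oc new-project "recycle-test-$i"
  # PVC matching one of the 100 Recycle PVs, and a pod that mounts it
  oc create -f pvc.yaml -n "recycle-test-$i"
  oc create -f pod.yaml -n "recycle-test-$i"
  # (wait for the pod to be Running, then leave a file on the volume)
  oc exec test-pod -n "recycle-test-$i" -- touch /mnt/data/testfile
  # Deleting the project releases the PV and should trigger the recycler
  oc delete project "recycle-test-$i"
done
```

After the loop, the export directories on the NFS server can be inspected directly; with the fix in place, they should all be empty once every PV has returned to Available.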

Comment 13 Jan Safranek 2017-02-13 10:00:10 UTC
> Please fill out and follow https://mojo.redhat.com/docs/DOC-1072778 if you need
> to request a back port for a customer.

That's GSS's job, @vwalek

Comment 22 errata-xmlrpc 2017-02-22 18:11:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

