Bug 1503015 - Creating volumesnapshot does not generate volumesnapshotdata.
Summary: Creating volumesnapshot does not generate volumesnapshotdata.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.7.0
Assignee: Tomas Smetana
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-17 08:05 UTC by Liang Xia
Modified: 2017-11-28 22:17 UTC (History)
3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-28 22:17:38 UTC
Target Upstream Version:
Embargoed:


Attachments
controller and provisioner log (12.72 KB, application/x-gzip)
2017-10-17 08:11 UTC, Liang Xia


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:3188 0 normal SHIPPED_LIVE Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update 2017-11-29 02:34:54 UTC

Description Liang Xia 2017-10-17 08:05:18 UTC
Description of problem:
Creating volumesnapshot does not generate volumesnapshotdata.

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.153.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

How reproducible:
Always

Steps to Reproduce:
1. Set up OCP cluster.
2. Start snapshot controller and provisioner.
# snapshot-controller -cloudprovider=gce -kubeconfig=$HOME/.kube/config &> controller.log &
# snapshot-pv-provisioner -cloudprovider=gce -kubeconfig=$HOME/.kube/config &> provisioner.log &
3. Log in as an end user, and create a project, PVC, and pod.
$ cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 3Gi
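
The pod manifest is not included in the report; a minimal sketch of a pod that mounts the gce-pvc claim might look like the following. The pod name, image, and mount path are assumptions for illustration only.

```yaml
# Illustrative pod mounting the gce-pvc claim (names and image are assumed,
# not taken from the original report).
apiVersion: v1
kind: Pod
metadata:
  name: gce-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: gce-volume
      mountPath: /mnt/gce
  volumes:
  - name: gce-volume
    persistentVolumeClaim:
      claimName: gce-pvc
```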
4. Check that the PVC is bound to a dynamically provisioned PV/volume, and that the pod is running.
5. Write some data into the mounted volume path.
6. Create a snapshot as cluster-admin (due to bug 1502945, an end user cannot be used here).
$ cat snapshot.yaml 
apiVersion: volume-snapshot-data.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-1
  namespace: myns
spec:
  persistentVolumeClaimName: gce-pvc
7. Check volumesnapshot and volumesnapshotdata.
# oc get volumesnapshot,volumesnapshotdata --all-namespaces

Actual results:
The volumesnapshot exists, but NO volumesnapshotdata is generated.

Expected results:
A volumesnapshotdata object is generated and co-exists with the volumesnapshot.
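
For reference, a sketch of the kind of VolumeSnapshotData object the snapshot controller is expected to create and bind to the snapshot. The field names approximate the kubernetes-incubator/external-storage snapshot API; the object name, PV name, and placeholder values are hypothetical, not taken from this report.

```yaml
# Illustrative only: an approximation of a generated VolumeSnapshotData
# object. Field names and values are assumptions based on the
# external-storage snapshot API, not output from this cluster.
apiVersion: volume-snapshot-data.external-storage.k8s.io/v1
kind: VolumeSnapshotData
metadata:
  name: k8s-volume-snapshot-<uid>
spec:
  volumeSnapshotRef:
    name: myns/snapshot-1
  persistentVolumeRef:
    name: <dynamically-provisioned-pv>
```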

Additional info:
See the attached snapshot.log.gzip, which contains the controller and provisioner logs.

Comment 1 Liang Xia 2017-10-17 08:11:42 UTC
Created attachment 1339616 [details]
controller and provisioner log

Comment 8 Liang Xia 2017-10-26 10:21:43 UTC
Tested on the version below:
# openshift version
openshift v3.7.0-0.178.0
kubernetes v1.7.6+a08f5eeb62
etcd 3.2.8

openshift-external-storage-snapshot-provisioner-0.0.1-3.git78d6339.el7.x86_64
openshift-external-storage-snapshot-controller-0.0.1-3.git78d6339.el7.x86_64


Creating a volumesnapshot now generates a volumesnapshotdata object. The bug is fixed.

Comment 10 Liang Xia 2017-10-31 03:28:28 UTC
Based on comment 8, moving the bug to VERIFIED.

Comment 13 errata-xmlrpc 2017-11-28 22:17:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188

