Bug 1740201

Summary: Migration from gluster to ceph results in empty volume
Product: OpenShift Container Platform
Component: Migration Tooling
Version: 4.2.0
Target Release: 4.2.0
Target Milestone: ---
Reporter: Sergio <sregidor>
Assignee: Scott Seago <sseago>
QA Contact: Sergio <sregidor>
CC: chezhang, dwhatley, dymurray, jmatthew, jortel, rpattath
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-10-16 06:35:39 UTC
Type: Bug

Description Sergio 2019-08-12 13:13:33 UTC
Description of problem:

Migrating a volume from gluster to ceph does not copy the volume's contents: after the migration the target volume is empty.


Version-Release number of selected component (if applicable):

OCP4
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+838b4fa", GitCommit:"838b4fa", GitTreeState:"clean", BuildDate:"2019-05-19T23:51:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}


OCP3
oc v3.11.126
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://XX.XXXX-XXXXXX.X.XXXXXXX.XXXXXXXX.XX:XX
openshift v3.11.104
kubernetes v1.11.0+d4cacc0


velero
    image: quay.io/ocpmigrate/velero:fusor-dev
    imageID: quay.io/ocpmigrate/velero@sha256:b707ae4f22ba1828ca6f9992b190134eaef145364cb57146d84616ccefdafbb7

    image: quay.io/ocpmigrate/migration-plugin:latest
    imageID: quay.io/ocpmigrate/migration-plugin@sha256:d34af290b3c6d808ad360a1f2d41d91e06bff5aa912f9a5a78fed3ea2f0f8f71

    deployment.kubernetes.io/revision: "1"



operator
    image: quay.io/ocpmigrate/mig-operator:latest
    imageID: quay.io/ocpmigrate/mig-operator@sha256:0da94766a038b835c47a97baea740bca2583a188b9f30ad2c76768562a53d386



How reproducible:


Steps to Reproduce:
1. Create an application that writes to a persistent volume, for instance this one:

https://gist.githubusercontent.com/jwmatthews/b0432300864c5bf71736c8647fec72bb/raw/7c5723bcb06c8a11591b2ccb313322c8f3261623/nginx_with_pv.ym

and modify it to use the glusterfs-storage storage class if necessary (a minimal sketch follows the pvc listing below).

$ oc get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
nginx-logs   Bound     pvc-a27f3547-bcf7-11e9-806b-0ad64299291c   1Gi        RWO            glusterfs-storage   28m
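
For reference, only the claim's storageClassName needs to point at gluster. A minimal sketch of such a PVC, assuming the gist's manifest structure and the glusterfs-storage class shown in the listing above:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nginx-logs
      namespace: nginx-example
    spec:
      storageClassName: glusterfs-storage   # gluster-backed class from the listing above
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi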


2. Curl the exposed nginx route

$ curl $(oc get route my-nginx -n nginx-example -o jsonpath='{.spec.host}')


3. Verify that nginx is writing log entries to the volume

$ oc -n nginx-example rsh $(oc get pods -n nginx-example -o jsonpath='{.items[0].metadata.name}') cat /var/log/nginx/access.log
X.XXX.XX.X - - [12/Aug/2019:11:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "X.XXX.XX.X"
X.XXX.XX.X - - [12/Aug/2019:11:53:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "X.XXX.XX.X"


4. Migrate the volume to ceph, with this information in the migration plan (a sketch of the surrounding plan follows the snippet)
 
    persistentVolumes:
    - capacity: 1Gi
      name: pvc-a27f3547-bcf7-11e9-806b-0ad64299291c
      pvc:
        accessModes:
        - ReadWriteOnce
        name: nginx-logs
        namespace: nginx-example
      selection:
        action: copy
        storageClass: csi-rbd
      storageClass: glusterfs-storage
      supported:
        actions:
        - copy
        - move
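
For context, this stanza lives under spec.persistentVolumes of the MigPlan custom resource. A rough sketch of the surrounding plan, where the plan name, namespace, and cluster/storage references are assumptions for this environment (the persistentVolumes entry is the one shown above):

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    metadata:
      name: nginx-migplan              # hypothetical plan name
      namespace: mig                   # assumed controller namespace
    spec:
      srcMigClusterRef:
        name: ocp3-source              # hypothetical source cluster reference
        namespace: mig
      destMigClusterRef:
        name: ocp4-destination         # hypothetical destination cluster reference
        namespace: mig
      migStorageRef:
        name: migstorage               # hypothetical replication repository
        namespace: mig
      namespaces:
      - nginx-example
      persistentVolumes:
      - capacity: 1Gi
        name: pvc-a27f3547-bcf7-11e9-806b-0ad64299291c
        # ...rest of the stanza shown above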
 
Actual results:

After the migration, the volume is empty.

Expected results:

The volume should contain the log entries written before the migration.
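
For example, repeating the check from step 3 against the migrated application on the target cluster should still show the entries written before the migration (same command as above, assuming a single nginx pod in the namespace):

$ oc -n nginx-example rsh $(oc get pods -n nginx-example -o jsonpath='{.items[0].metadata.name}') cat /var/log/nginx/access.log
X.XXX.XX.X - - [12/Aug/2019:11:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "X.XXX.XX.X"
X.XXX.XX.X - - [12/Aug/2019:11:53:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "X.XXX.XX.X"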

Additional info:

Comment 1 John Matthews 2019-08-27 18:42:41 UTC
I suspect this issue was resolved by the fixes Scott/Dylan worked on upstream for the Velero 1.1 beta and the related CSI issues.

Will defer to Scott to share more info.

Comment 2 Sergio 2019-09-09 10:14:33 UTC
I verified that the bug is fixed using 

Controller:
    image: quay.io/ocpmigrate/mig-controller:stable
    imageID: quay.io/ocpmigrate/mig-controller@sha256:7ec48a557240f1d2fa6ee6cd62234b0e75f178eca2a0cc5b95124e01bcd2c114
Velero:
    image: quay.io/ocpmigrate/velero:stable
    imageID: quay.io/ocpmigrate/velero@sha256:957725dec5f0fb6a46dee78bd49de9ec4ab66903eabb4561b62ad8f4ad9e6f05
    image: quay.io/ocpmigrate/migration-plugin:stable
    imageID: quay.io/ocpmigrate/migration-plugin@sha256:b4493d826260eb1e3e02ba935aaedfd5310fefefb461ca7dcd9a5d55d4aa8f35
OCP4
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-09-08-232045   True        False         123m    Cluster version is 4.2.0-0.nightly-2019-09-08-232045
OCP3
oc v3.11.144
kubernetes v1.11.0+d4cacc0

The volumes now have the right content.

Comment 3 errata-xmlrpc 2019-10-16 06:35:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922