Bug 1260388 - secretRef does not overwrite secretFile for ceph volume

Status: CLOSED WONTFIX
Product: OpenShift Origin
Classification: Red Hat
Component: Storage
Version: 3.x
Severity: medium
Assigned To: hchen
QA Contact: Liang Xia
Reported: 2015-09-06 06:58 EDT by Jianwei Hou
Modified: 2015-09-24 09:03 EDT
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-09-24 09:03:18 EDT
Description Jianwei Hou 2015-09-06 06:58:04 EDT
Description of problem:
With a secret created containing the ceph admin keyring, when a pod with secretRef is created, the system still looks for the secretFile /etc/ceph/admin.secret.

Version-Release number of selected component (if applicable):
openshift v3.0.1.0-1-5-ge51f583-dirty
kubernetes v1.0.0

How reproducible:
Always

Steps to Reproduce:
1. Create a secret that contains ceph keyring
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/cephfs/secret.yaml
2. Create a pod with a ceph volume; in pod.yaml, specify the secretRef:
https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/cephfs/pod.yaml
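The linked secret.yaml is not reproduced here. Note that in a v1 Secret, values under `data` must be base64 encoded; a minimal sketch of creating such a secret from the shell follows. The secret name matches the one referenced by the pod manifests in this report, the keyring value is the one quoted later in the comments, and the data key name ("key") follows the Kubernetes cephfs example, so treat those as assumptions rather than the exact contents of the linked file.

```shell
# Sketch only: a v1 Secret's "data" values must be base64 encoded.
# The keyring value below is the one quoted elsewhere in this report.
key_b64=$(echo -n 'AQBT/+tVmLVpNBAASNoemkLGMsIwx6moYpeGzQ==' | base64)
cat > ceph-secret.json <<EOF
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": { "name": "ceph-secret" },
  "data": { "key": "$key_b64" }
}
EOF
# oc create -f ceph-secret.json
```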


Actual results:
After step 2:
The pod was not created successfully; logs on the node showed:
```
Sep  6 18:19:32 openshift-v3 kernel: Key type ceph registered
Sep  6 18:19:32 openshift-v3 kernel: libceph: loaded (mon/osd proto 15/24)
Sep  6 18:19:32 openshift-v3 kernel: ceph: loaded (mds proto 32)
Sep  6 18:19:32 openshift-v3 openshift-node: E0906 18:19:32.719233    4000 mount_linux.go:103] Mount failed: exit status 1
Sep  6 18:19:32 openshift-v3 openshift-node: Mounting arguments: 192.168.0.130:6789,192.168.0.131:6789,192.168.0.132:6789,192.168.0.147:6789:/ /var/lib/origin/openshift.local.volumes/pods/c425702e-5480-11e5-abaf-fa163e24fffc/volumes/kubernetes.io~cephfs/cephfs ceph [ro name=admin,secretfile=/etc/ceph/admin.secret]
Sep  6 18:19:32 openshift-v3 openshift-node: Output: unable to read secretfile: No such file or directory
Sep  6 18:19:32 openshift-v3 openshift-node: error reading secret file
Sep  6 18:19:32 openshift-v3 openshift-node: failed to parse ceph_options
```

The mount was still looking for /etc/ceph/admin.secret even though secretRef was specified in the pod, which should override the secretFile option.

Expected results:
Pod should be created successfully

Additional info:
After /etc/ceph/admin.secret is created with the correct key, the pod is created successfully.
For the secret and pod details, please see the examples in the description.
Comment 1 Mark Turansky 2015-09-14 10:43:22 EDT
Reassigning to Huamin, resident Ceph expert and author of the plugin.
Comment 2 hchen 2015-09-14 11:02:14 EDT
Can you add keyring: '' to your yaml like the following?

{
    "apiVersion": "v1",
    "id": "cephfs",
    "kind": "Pod",
    "metadata": {
        "name": "cephfs"
    },
    "spec": {
        "containers": [
            {
                "name": "cephfs-rw",
                "image": "jhou/hello-openshift",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/cephfs",
                        "name": "cephfs"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "cephfs",
                "cephfs": {
                    "monitors": [
                                "192.168.0.130:6789",
                                "192.168.0.131:6789",
                                "192.168.0.132:6789",
                                "192.168.0.147:6789"
                     ],
                    "user": "admin",
                    "secretRef": {
                          "name": "ceph-secret"
                     },
                     "keyring": "",
                    "readOnly": true
                }
            }
        ]
    }
}
Comment 3 chaoyang 2015-09-16 05:06:57 EDT
Hi,
I think the kubernetes API does not support 'keyring' for ceph volumes:

error validating "/root/pod1.json": error validating data: [found invalid field id for v1.Pod, found invalid field keyring for v1.CephFSVolumeSource]; if you choose to ignore these errors, turn validation off with --validate=false

https://github.com/kubernetes/kubernetes/blob/9ed2d842bc3c87db0799a40226320550f2759e24/pkg/api/types.go

If I use "secretFile": "/etc/ceph/admin.secret", the pod can be created successfully.
Comment 4 hchen 2015-09-16 08:57:29 EDT
I see, I was thinking of rbd. Let me look at cephfs. Thanks.
Comment 5 hchen 2015-09-16 14:53:24 EDT
secret overrides secretFile at this line https://github.com/kubernetes/kubernetes/blob/9ed2d842bc3c87db0799a40226320550f2759e24/pkg/volume/cephfs/cephfs.go#L237.
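Paraphrased as a shell sketch (not the actual Go source; the variable names are illustrative), the selection at that line amounts to: when a secret value was resolved from secretRef, pass secret= to the mount; otherwise fall back to secretfile=.

```shell
# Illustrative paraphrase only, not the kubernetes source.
user=admin
secret='AQAMgXhVwBCeDhAA9nlPaFyfUSatGD4drFWDvQ=='   # resolved from secretRef, if present
secret_file=/etc/ceph/admin.secret

if [ -n "$secret" ]; then
  opts="ro name=${user},secret=${secret}"           # secretRef wins
else
  opts="ro name=${user},secretfile=${secret_file}"  # fallback when no secret was resolved
fi
echo "$opts"
```

This matches the `secret=` form visible in the mount arguments below, as opposed to the `secretfile=` form in the original report.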

I tested your pod and got this from the kubelet log, so the secret was used in the mount:


E0916 14:50:54.062950   20356 mount_linux.go:103] Mount failed: exit status 5
Mounting arguments: 192.168.0.130:6789,192.168.0.131:6789,192.168.0.132:6789,192.168.0.147:6789:/ /var/lib/kubelet/pods/b7f272fb-5ca3-11e5-be49-d4bed9b38fad/volumes/kubernetes.io~cephfs/cephfs ceph [ro name=admin,secret=AQAMgXhVwBCeDhAA9nlPaFyfUSatGD4drFWDvQ==]
Output: mount error 5 = Input/output error


Can I log in to your kube host?
Comment 6 chaoyang 2015-09-17 04:00:38 EDT
The kube environment is on the Beijing OpenStack; I don't know whether you can access it.

We also have a Trello card for this bug:
https://trello.com/c/A2Ba5OyY/161-secretref-does-not-overwrite-secretfile-for-ceph-volume-bugzilla
Comment 7 hchen 2015-09-22 15:33:11 EDT
I still cannot reproduce this problem on our OSE setup.

[root@host02-rack08 hchen]# oc version
oc v3.0.1.0-528-g8c2fe51
kubernetes v1.0.0


[root@host02-rack08 hchen]# cat cephfs.yaml 
{
    "apiVersion": "v1",
    "id": "cephfs",
    "kind": "Pod",
    "metadata": {
        "name": "cephfs"
    },
    "spec": {
        "containers": [
            {
                "name": "cephfs-rw",
                "image": "tutum/mysql",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/cephfs",
                        "name": "cephfs"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "cephfs",
                "cephfs": {
                    "monitors": [
                                "192.168.0.130:6789",
                                "192.168.0.131:6789",
                                "192.168.0.132:6789",
                                "192.168.0.147:6789"
                     ],
                    "user": "admin",
                    "secretRef": {
                          "name": "ceph-secret"
                     },
                    "readOnly": true
                }
            }
        ]
    }
}

[root@host02-rack08 hchen]# oc get pod
NAME                                    READY     STATUS                                        RESTARTS   AGE
cephfs                                  0/1       Image: tutum/mysql is not ready on the node   0          3m
[root@host02-rack08 hchen]# oc describe pod cephfs
Name:				cephfs
Namespace:			default
Image(s):			tutum/mysql
Node:				host14-rack08.scale.openstack.engineering.redhat.com/10.1.4.118
Labels:				<none>
Status:				Pending
Reason:				
Message:			
IP:				
Replication Controllers:	<none>
Containers:
  cephfs-rw:
    Image:		tutum/mysql
    State:		Waiting
      Reason:		Image: tutum/mysql is not ready on the node
    Ready:		False
    Restart Count:	0
Conditions:
  Type		Status
  Ready 	False 
Events:
  FirstSeen				LastSeen			Count	From								SubobjectPath	Reason		Message
  Tue, 22 Sep 2015 19:27:13 +0000	Tue, 22 Sep 2015 19:27:13 +0000	1	{scheduler }									scheduled	Successfully assigned cephfs to host14-rack08.scale.openstack.engineering.redhat.com
  Tue, 22 Sep 2015 19:28:13 +0000	Tue, 22 Sep 2015 19:30:13 +0000	3	{kubelet host14-rack08.scale.openstack.engineering.redhat.com}			failedMount	Unable to mount volumes for pod "cephfs_default": CephFS: mount failed: exit status 5
  Tue, 22 Sep 2015 19:28:13 +0000	Tue, 22 Sep 2015 19:30:13 +0000	3	{kubelet host14-rack08.scale.openstack.engineering.redhat.com}			failedSync	Error syncing pod, skipping: CephFS: mount failed: exit status 5


On Kubelet node, secret is there:
Sep 22 19:28:13 host14-rack08 openshift-node: Mounting arguments: 192.168.0.130:6789,192.168.0.131:6789,192.168.0.132:6789,192.168.0.147:6789:/ /var/lib/openshift/openshift.local.volumes/pods/ed6bff23-615f-11e5-b8c5-b8ca3a627d6c/volumes/kubernetes.io~cephfs/cephfs ceph [ro name=admin,secret=AQAMgXhVwBCeDhAA9nlPaFyfUSatGD4drFWDvQ==
Comment 8 hchen 2015-09-22 16:12:44 EDT
Jianwei, your secret doesn't look right to me. Is the secret in your yaml base64 encoded? If not, get the base64-encoded secret using this command:

echo AQBT/+tVmLVpNBAASNoemkLGMsIwx6moYpeGzQ== |base64
Comment 9 hchen 2015-09-22 16:13:49 EDT
correction:
echo -n AQBT/+tVmLVpNBAASNoemkLGMsIwx6moYpeGzQ== |base64
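The -n matters: without it, echo appends a trailing newline, and that extra byte is encoded into the base64 output, producing a different value. A quick illustration:

```shell
key='AQBT/+tVmLVpNBAASNoemkLGMsIwx6moYpeGzQ=='
with_newline=$(echo "$key" | base64)        # encodes the key plus a trailing '\n'
without_newline=$(echo -n "$key" | base64)  # encodes the key bytes only
# The two encodings differ; only the -n form encodes exactly the key bytes.
printf '%s\n%s\n' "$with_newline" "$without_newline"
```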
Comment 10 Jianwei Hou 2015-09-24 07:19:53 EDT
@hchen Thank you very much for pointing it out! You are right, the secretRef has to reference a secret whose key is base64 encoded. The problem is reproducible when the key is not encoded: 'AQBT/+tVmLVpNBAASNoemkLGMsIwx6moYpeGzQ=='.

I have updated the secret with the base64 encoded secret, now the problem is gone!
Comment 11 hchen 2015-09-24 09:03:18 EDT
This problem appears to come from a non-base64-encoded secret.
