Bug 1413687 - Error creating PersistentVolumeClaim. Invalid cinder endpoint detected.
Summary: Error creating PersistentVolumeClaim. Invalid cinder endpoint detected.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OKD
Classification: Red Hat
Component: Storage
Version: 3.x
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: hchen
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-16 16:39 UTC by Tahir Raza
Modified: 2017-01-17 13:58 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-17 13:58:49 UTC
Target Upstream Version:
Embargoed:



Description Tahir Raza 2017-01-16 16:39:43 UTC
Description of problem:
Creating a PersistentVolumeClaim in OpenShift Origin 3.1 fails with a 400 error; the Cinder endpoint OpenShift somehow resolves is not correct.

I am running OpenShift Origin 3.1 inside OpenStack Kilo, with 1 master, 1 node, and 1 infra node.

Version-Release number of selected component (if applicable):

OpenShift Origin 3.1

Steps to Reproduce:

The following config is on all masters; the nodes have corresponding configs.
$ cat /etc/cloud.conf
[Global]
auth-url = https://region1.oscloud.razaglobal.com:5000/v2.0
username = openshift
password = <REDACTED>
tenant-id = <REDACTED>
region = region1
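
For reference, the endpoints Keystone advertises can be listed with the OpenStack CLI (illustrative, assuming python-openstackclient is available and credentials are sourced); the publicURL shown for the volume service is what the cloud provider ends up using:

$ openstack catalog list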


$ vi claim2.yaml

kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
  name: "cassandra-pvc-001"
  annotations:
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "3Gi"

$ oc create -f claim2.yaml

$ oc get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
cassandra-pvc-001   Pending                                      7m
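
The provisioning failure also shows up in the claim's events (illustrative; output will vary):

$ oc describe pvc cassandra-pvc-001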


$ journalctl -xe

  Jan 16 10:13:27 master1-openshift-dev origin-master[728]: I0116 10:13:27.151543     728 controller.go:1194] failed to provision volume for claim "default/cassandra-pvc-001": Expected HTTP response code [200 201] when accessing [POST https://region1.oscloud.razaglobal.com:8776/v1/b16dc5fe58f245b483fd196afaba9a9a/volumes], but got 400 instead
Jan 16 10:13:27 master1-openshift-dev origin-master[728]: {"badRequest": {"message": "The server could not comply with the request since it is either malformed or otherwise incorrect.", "code": 400}}
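
To isolate whether the 400 comes from Cinder itself rather than from OpenShift, the same v1 volume-create call can be tried directly. Illustrative only; the request body follows the Cinder v1 API and assumes OpenStack credentials are sourced:

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -k -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"volume": {"size": 3, "display_name": "test-vol"}}' \
    https://region1.oscloud.razaglobal.com:8776/v1/b16dc5fe58f245b483fd196afaba9a9a/volumes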


Actual results:

Provisioning fails with a 400 error and the PVC stays Pending.

Expected results:

The PVC should be created, a Cinder volume should be provisioned in OpenStack, and the claim should be bound to it.

Additional info:

Here are my findings.
Our endpoints are:

Cinder: http://region1.oscloud.razaglobal.com:8776
OpenStack (Keystone): https://region1.oscloud.razaglobal.com:5000/v2.0

Question 1: How does OpenShift determine the Cinder endpoint? It is clearly not taken from /etc/cloud.conf; I have tried with and without it, and the result is the same.
What's important is that OpenShift assumes Cinder is at https://region1.oscloud.razaglobal.com:8776, whereas in reality our Cinder endpoint is http://region1.oscloud.razaglobal.com:8776. It's 'http', not 'https'.

Question 2: How can I modify/override the Cinder endpoint that OpenShift uses? Is there an Ansible attribute I can configure? I installed OpenShift Origin with the Ansible playbooks (https://github.com/openshift/openshift-ansible).
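I assume there is something like the following in the inventory for the cloud provider, but I am not sure the variable names are right for the 3.1 playbooks, so treat them as illustrative:

openshift_cloudprovider_kind=openstack
openshift_cloudprovider_openstack_auth_url=https://region1.oscloud.razaglobal.com:5000/v2.0
openshift_cloudprovider_openstack_username=openshift
openshift_cloudprovider_openstack_tenant_id=<REDACTED>
openshift_cloudprovider_openstack_region=region1

As far as I can tell, these only populate the cloud provider config (the equivalent of /etc/cloud.conf above); they do not set the Cinder endpoint itself.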

Please help or comment. I have been stuck on this for over a week and, despite a lot of research, can't make progress.

Comment 1 Tahir Raza 2017-01-16 16:41:25 UTC
I see @Seth Jennings has commented on similar issues, e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1400717. @Seth Jennings, if you are available, I suspect this could be quick to resolve.

Comment 2 hchen 2017-01-16 18:24:47 UTC
Which OpenStack release is the environment running?
Is this issue relevant? https://bugzilla.redhat.com/show_bug.cgi?id=1237207

The Cinder endpoint comes from the OpenStack configuration; it is not computed by Kubernetes/OpenShift.

Comment 3 Tahir Raza 2017-01-16 20:29:23 UTC
We are on Liberty; we upgraded to it a few months back.

When I look at our endpoints in the OpenStack tenant, I see we have both the v1 and v2 volume endpoints. On further research, it appears that v1 is deprecated and no longer serviced, so the issue is probably in our OpenStack configuration.
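The two catalog entries can be inspected like this (illustrative commands, assuming admin credentials are sourced):

$ openstack catalog show volume      # v1, deprecated
$ openstack catalog show volumev2    # v2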

You can close the bug, I guess.
Thanks

