Bug 1480247 - storageclass definition for gluster-block doesn't work if resturl is updated with a routable address instead of ip
Status: ASSIGNED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Jose A. Rivera
QA Contact: Anoop
Reported: 2017-08-10 09:30 EDT by krishnaram Karthick
Modified: 2017-08-10 09:55 EDT (History)
CC: 11 users

Type: Bug

Attachments: None
Description krishnaram Karthick 2017-08-10 09:30:54 EDT
Description of problem:

In the storage class definition, when resturl is set to 'http://heketi-storage-project.cloudapps.mystorage.com', PVC creation fails.

The same address works when used directly with heketi-cli:

# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
# heketi-cli volume list
Id:8f5857d7d5f9d554a461df381b8ef059    Cluster:ec2643d674d0c508a7bfaf8f8def2cf1    Name:heketidbstorage
Id:e35289f5c42657c03386ecaa569b8659    Cluster:ec2643d674d0c508a7bfaf8f8def2cf1    Name:vol_e35289f5c42657c03386ecaa569b8659 [block]
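The hostname resolves on the host where heketi-cli runs; the provisioner, however, resolves names from inside its pod. A quick way to confirm the difference (the pod name below is a placeholder) would be something like:

```shell
# Hypothetical check: compare resolution on the host vs. inside the
# provisioner pod. The pod name is a placeholder for this environment.
nslookup heketi-storage-project.cloudapps.mystorage.com       # resolves on the host
oc exec <glusterblock-provisioner-pod> -- \
  nslookup heketi-storage-project.cloudapps.mystorage.com     # times out in the pod
```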


storageclass definition with the routable hostname as resturl:
==============================================================
[root@dhcp47-10 ~]# cat class.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterblock
provisioner: gluster.org/glusterblock
parameters:
    resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
    restuser: "admin"
    restauthenabled: "false"
    #restsecretnamespace: "default"
    #restsecretname: "heketi-secret2"
    opmode: "heketi"
    hacount: "3"
    chapauthenabled: "true"
    #clusterids: "454811fcedbec6316bc10e591a57b472"


storageclass definition with the resturl set to an IP address:
==============================================================
[root@dhcp47-10 ~]# cat class-ip.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterblockip
provisioner: gluster.org/glusterblock
parameters:
    resturl: "http://172.30.66.152:8080"
    restuser: "admin"
    restauthenabled: "false"
    #restsecretnamespace: "default"
    #restsecretname: "heketi-secret2"
    opmode: "heketi"
    hacount: "3"
    chapauthenabled: "true"
    #clusterids: "454811fcedbec6316bc10e591a57b472"



oc describe of failed PVC:
=============================
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                                                            SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                                                            -------------   --------        ------                  -------
  8m            8m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:53178->10.70.46.248:53: i/o timeout
  7m            7m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:54036->10.70.46.248:53: i/o timeout
  7m            7m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:55918->10.70.46.248:53: i/o timeout
  7m            7m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:48133->10.70.46.248:53: i/o timeout
  7m            7m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:50565->10.70.46.248:53: i/o timeout
  6m            6m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:45024->10.70.46.248:53: i/o timeout
  6m            6m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:50959->10.70.46.248:53: i/o timeout
  5m            5m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:33961->10.70.46.248:53: i/o timeout
  4m            4m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:44657->10.70.46.248:53: i/o timeout
  8m            3m              99      persistentvolume-controller                                                     Normal          ExternalProvisioning    cannot find provisioner "gluster.org/glusterblock", expecting that a volume for the claim is provisioned either manually or via external software
  8m            2m              10      gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Normal          Provisioning            External provisioner is provisioning volume for claim "storage-project/claimheketi"
  1m            1m              1       gluster.org/glusterblock 8913b94c-7db3-11e7-b4ff-0a580a820164                   Warning         ProvisioningFailed      (combined from similar events): Failed to provision volume with StorageClass "glusterblock": glusterblock: failed to create volume: [heketi] error creating volume Post http://heketi-storage-project.cloudapps.mystorage.com/blockvolumes: dial tcp: lookup heketi-storage-project.cloudapps.mystorage.com on 10.70.46.248:53: read udp 10.130.1.100:42874->10.70.46.248:53: i/o timeout

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-14.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a storage class whose resturl is set to the routable heketi route hostname.
2. Create a PVC using that storage class; provisioning fails.
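A minimal claim to trigger the failure might look like the following sketch. The claim name matches the one in the events above; the size, access mode, and the beta storage-class annotation (standard in OCP 3.x-era Kubernetes) are assumptions:

```shell
# Hypothetical PVC against the "glusterblock" storage class; written to
# /tmp here so the manifest can be inspected before creating it.
cat > /tmp/claimheketi.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claimheketi
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterblock
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# oc create -f /tmp/claimheketi.yaml   # provisioning then fails with the DNS timeout shown above
```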
Comment 2 Humble Chirammal 2017-08-10 09:36:19 EDT
As mentioned in one of our scrums, this needs a proper DNS setup. Jose, can you please help, or document how this can be achieved, so that CNS QE can use the same?
Comment 3 Jose A. Rivera 2017-08-10 09:55:34 EDT
Sure. I can only verify that this works in a Vagrant environment on my local laptop, and there's no guarantee that this will work for anyone else. :)

All my VMs are running CentOS using NetworkManager.

I first need a working DNS server. libvirt starts a dnsmasq server for my laptop's virtual network address (e.g. 192.168.121.1) that just forwards everything to the name servers in my /etc/resolv.conf.

Next, I set up one of the nodes (or any machine on the virtual network, really) with its own dnsmasq instance with the following configuration:

address=/cloudapps.example.com/<ANY NODE IP IN THE CLUSTER>
server=/cloudapps.example.com/127.0.0.1
server=192.168.121.1
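Dropping that configuration into place could look like the sketch below. The node IP, domain, and file path are illustrative (on a real node the file would live under /etc/dnsmasq.d/):

```shell
# Illustrative only: write the dnsmasq config shown above. /tmp is used
# here so the sketch is safe to run; NODE_IP is a placeholder.
NODE_IP=192.168.121.10        # any node IP in the cluster (example)
CONF=/tmp/cloudapps.conf      # real node: /etc/dnsmasq.d/cloudapps.conf
cat > "$CONF" <<EOF
address=/cloudapps.example.com/${NODE_IP}
server=/cloudapps.example.com/127.0.0.1
server=192.168.121.1
EOF
# systemctl restart dnsmasq
```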

On that node I also add the following two lines to /etc/sysconfig/network-scripts/ifcfg-eth0 to tell NetworkManager to use localhost as the node's name server:

PEERDNS=no
DNS1=127.0.0.1

And finally, open port 53 (DNS) on the same node:

iptables -A INPUT -m state --state NEW -p udp --dport 53 -j ACCEPT

Restart both dnsmasq and NetworkManager.
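The per-node changes above can be sketched as one script. Paths are redirected to /tmp here so it is safe to rehearse without root; on a real node the file is /etc/sysconfig/network-scripts/ifcfg-eth0 and the privileged commands are uncommented:

```shell
# Sketch of the DNS-node changes described above; /tmp stands in for the
# real network-scripts path so this can run unprivileged.
IFCFG=/tmp/ifcfg-eth0      # real node: /etc/sysconfig/network-scripts/ifcfg-eth0
touch "$IFCFG"
echo 'PEERDNS=no' >> "$IFCFG"       # stop NetworkManager overwriting resolv.conf
echo 'DNS1=127.0.0.1' >> "$IFCFG"   # use the local dnsmasq as the name server
# iptables -A INPUT -m state --state NEW -p udp --dport 53 -j ACCEPT
# systemctl restart dnsmasq NetworkManager
```

Note that the iptables rule above only opens UDP; if large responses force clients to retry over TCP, opening TCP port 53 as well may be needed.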

On all other nodes in the OCP cluster, I add the following line to /etc/sysconfig/network:

IP4_NAMESERVERS=<DNS NODE IP>

On CentOS (and I imagine all Red Hat-based OSes) this is ultimately the value used to configure DNS when you use openshift-ansible to install OCP. Some combination of NetworkManager and the OCP node service will keep the primary OCP DNS server set to this value.
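That one-line change per node can be sketched as follows (the DNS node IP is a placeholder, and /tmp stands in for /etc/sysconfig/network):

```shell
# Illustrative: point each remaining node at the DNS node set up earlier.
DNS_NODE_IP=192.168.121.10    # placeholder for the dnsmasq node's IP
NETFILE=/tmp/network          # real node: /etc/sysconfig/network
echo "IP4_NAMESERVERS=${DNS_NODE_IP}" >> "$NETFILE"
```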

From there, I run openshift-ansible and things just work. Not sure if it does any other black magic behind the scenes, though. :)

HTH!
