Bug 1735729 - [IPI] [OSP] Cloud provider is not working well
Summary: [IPI] [OSP] Cloud provider is not working well
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2.0
Assignee: Mike Fedosin
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-01 10:44 UTC by weiwei jiang
Modified: 2019-10-16 06:34 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:34:29 UTC
Target Upstream Version:
Embargoed:




Links
System                    ID               Status   Summary                                                   Last Updated
Github openshift/origin   pull 23578       closed   Bug 1735729: Read availability zone name from metadata   2021-02-04 02:54:01 UTC
Red Hat Product Errata    RHBA-2019:2922   ---      ---                                                       2019-10-16 06:34:39 UTC

Description weiwei jiang 2019-08-01 10:44:38 UTC
Description of problem:
When trying to use the default Cinder StorageClass, PVC provisioning fails with the following error:
# oc describe pvc
Name:          pvc-kknvt
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: wjiangosp0801d-9pkp4-worker-w2x8k
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                From                         Message
  ----       ------                ----               ----                         -------
  Normal     WaitForFirstConsumer  14s (x4 over 36s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Warning    ProvisioningFailed    14s                persistentvolume-controller  Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
Mounted By:  h-2-j5jwq
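The error indicates that the persistentvolume-controller was started with the OpenStack cloud provider enabled, but the file passed via --cloud-config does not exist. A rough way to inspect this on a live cluster (the ConfigMap and pod names below are the usual 4.x ones and are only illustrative):

$ oc -n openshift-config get cm cloud-provider-config -o yaml
$ oc -n openshift-kube-controller-manager exec <kube-controller-manager-pod> \
    -c kube-controller-manager -- ls -l /etc/kubernetes/cloud-config

For reference, the in-tree OpenStack provider expects an INI-style config along these lines (all values are placeholders):

[Global]
auth-url    = https://keystone.example.com:13000/v3
username    = openshift
password    = <redacted>
tenant-name = openshift
domain-name = Default
region      = regionOne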

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-31-162901

How reproducible:
Always

Steps to Reproduce:
1. oc run h --image=openshift/hello-openshift
2. oc set volume dc/h --add --name=v1 -t pvc --claim-size=1G --overwrite
3. check the PVC status
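For an equivalent reproduction without a DeploymentConfig (a minimal sketch; names are arbitrary, and a consuming pod is needed because the "standard" StorageClass waits for the first consumer before provisioning):

$ oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
---
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: openshift/hello-openshift
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: cinder-claim
EOF
$ oc describe pvc cinder-claim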

Actual results:
 oc describe pvc
Name:          pvc-kknvt
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: wjiangosp0801d-9pkp4-worker-w2x8k
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                From                         Message
  ----       ------                ----               ----                         -------
  Normal     WaitForFirstConsumer  14s (x4 over 36s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Warning    ProvisioningFailed    14s                persistentvolume-controller  Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
Mounted By:  h-2-j5jwq


Expected results:
The PVC should be provisioned and bound successfully.

Additional info:

Comment 3 Mike Fedosin 2019-08-07 20:31:03 UTC
Hello! The bug should have been fixed by https://github.com/openshift/cluster-kube-apiserver-operator/pull/544
In https://github.com/openshift/library-go/pull/500 we added the code that generates the '--cloud-config=...' option, but because that change had not yet landed in the kube-apiserver operator (KAO), volumes could not be created.

With the latest KAO image, which includes https://github.com/openshift/cluster-kube-apiserver-operator/pull/544, I could not reproduce the bug.
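A quick way to verify on a live cluster that the observed configuration now carries the cloud provider settings is to grep the rendered config of both static-pod operators (namespace and ConfigMap names are the usual ones, shown only as a sketch; the expectation is to see cloud-provider and cloud-config entries in the arguments):

$ oc -n openshift-kube-apiserver get cm config -o yaml | grep -i cloud-
$ oc -n openshift-kube-controller-manager get cm config -o yaml | grep -i cloud-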

Comment 4 weiwei jiang 2019-08-08 09:53:18 UTC
Checked with 4.2.0-0.nightly-2019-08-08-032431: if I try to consume Cinder storage,
the kube-apiserver and kube-controller-manager stop serving.

If I delete the PVC, the kube-apiserver and kube-controller-manager come back up.

➜  ~ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-08-08-032431   True        True          False      46m
cloud-credential                           4.2.0-0.nightly-2019-08-08-032431   True        False         False      90m
cluster-autoscaler                         4.2.0-0.nightly-2019-08-08-032431   True        False         False      68m
console                                    4.2.0-0.nightly-2019-08-08-032431   True        False         False      65m
dns                                        4.2.0-0.nightly-2019-08-08-032431   True        False         False      88m
image-registry                                                                 False       True          False      68m
ingress                                    4.2.0-0.nightly-2019-08-08-032431   True        False         False      68m
insights                                   4.2.0-0.nightly-2019-08-08-032431   True        False         True       90m
kube-apiserver                             4.2.0-0.nightly-2019-08-08-032431   True        False         True       86m
kube-controller-manager                    4.2.0-0.nightly-2019-08-08-032431   True        False         True       86m
kube-scheduler                             4.2.0-0.nightly-2019-08-08-032431   True        False         False      83m
machine-api                                4.2.0-0.nightly-2019-08-08-032431   True        False         False      90m
machine-config                             4.2.0-0.nightly-2019-08-08-032431   True        False         False      90m
marketplace                                4.2.0-0.nightly-2019-08-08-032431   True        False         False      68m
monitoring                                 4.2.0-0.nightly-2019-08-08-032431   False       True          True       16m
network                                    4.2.0-0.nightly-2019-08-08-032431   True        False         False      86m
node-tuning                                4.2.0-0.nightly-2019-08-08-032431   False       False         False      3m17s
openshift-apiserver                        4.2.0-0.nightly-2019-08-08-032431   True        False         False      69m
openshift-controller-manager               4.2.0-0.nightly-2019-08-08-032431   True        False         False      87m
openshift-samples                          4.2.0-0.nightly-2019-08-08-032431   True        False         False      67m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-08-08-032431   True        False         False      86m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-08-08-032431   True        False         False      85m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-08-08-032431   False       True          False      14m
service-ca                                 4.2.0-0.nightly-2019-08-08-032431   True        False         False      90m
service-catalog-apiserver                  4.2.0-0.nightly-2019-08-08-032431   True        False         False      85m
service-catalog-controller-manager         4.2.0-0.nightly-2019-08-08-032431   True        False         False      85m
storage                                    4.2.0-0.nightly-2019-08-08-032431   True        False         False      68m

➜  ~ while :; do oc get pvc ; sleep 1; done
Unable to connect to the server: EOF
Unable to connect to the server: EOF
Unable to connect to the server: EOF
Unable to connect to the server: EOF
Unable to connect to the server: EOF
Unable to connect to the server: EOF
Unable to connect to the server: EOF
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-h88l9   Pending                                      standard       30m
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-h88l9   Pending                                      standard       30m
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-h88l9   Pending                                      standard       30m
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-h88l9   Pending                                      standard       30m
The connection to the server api.wjosp0808a.qe.rhcloud.com:6443 was refused - did you specify the right host or port?
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-h88l9   Pending                                      standard       30m

Comment 5 weiwei jiang 2019-08-08 10:12:56 UTC
Following is the kube-apiserver logs:

E0808 10:06:53.539321       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)                                                                                           
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76                                                                                                                                  
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65                                                                                                                                  
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51                                                                                                                                  
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:522                                                                                                                                                                                            
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:82                                                                                                                                                                                             
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/signal_unix.go:390                                                                                                                                                                                      
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack/client.go:300                                                                                                                          
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack/client.go:344                                                                                                                          
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_client.go:72                                                                                                      
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack.go:919                                                                                                            
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_volumes.go:450                                                                                                    
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_volumes.go:729                                                                                                    
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label/admission.go:391                                                                                              
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label/admission.go:136                                                                                              
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go:85                                                                                                                                
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/openshift/library-go/pkg/apiserver/admission/admissiontimeout/timeoutadmission.go:36                                                                                     
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/asm_amd64.s:1337                                                                                                                                                                                        
panic: runtime error: invalid memory address or nil pointer dereference [recovered]                                                                                                                                                                                             
        panic: runtime error: invalid memory address or nil pointer dereference                                                                                                                                                                                                 
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x2a29986]                                                                                                                                                                                                        
                                                                                                                                                                                                                                                                                
goroutine 75998 [running]:                                                                                                                                                                                                                                                      
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)                                                                                                                                                                              
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105                                                                                                                   
panic(0x5484fe0, 0xd8321a0)                                                                                                                                                                                                                                                     
        /opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:522 +0x1b5                                                                                                                                                                             
github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack.initClientOpts(0x0, 0x5f27ff8, 0x8, 0x0, 0x0, 0x0, 0x0, 0x5f2245d, 0x6, 0x5f27ff8, ...)                                                                                                         
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack/client.go:300 +0x96                                                                                                            
github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack.NewBlockStorageV3(...)                                                                                                                                                                          
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/gophercloud/gophercloud/openstack/client.go:344                                                                                                                  
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).NewBlockStorageV3(0xc025a50140, 0xc0278f1da0, 0x1b, 0x0)                                                                                                                
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_client.go:72 +0xc8                                                                                        
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).volumeService(0xc025a50140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)                                                                                                               
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack.go:919 +0x4e9                                                                                             
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).getVolume(0xc025a50140, 0xc011772480, 0x24, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)                                                                                     
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_volumes.go:450 +0x76                                                                                      
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack.(*OpenStack).GetLabelsForVolume(0xc025a50140, 0x949a9c0, 0xc000074050, 0xc025291b80, 0x0, 0x0, 0xc01536c8e0)                                                                         
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/openstack/openstack_volumes.go:729 +0x84                                                                                      
github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label.(*persistentVolumeLabel).findCinderDiskLabels(0xc0011e7f00, 0xc025291400, 0xc025291400, 0x5f1cc01, 0x2)                                                                
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label/admission.go:391 +0x1df                                                                               
github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label.(*persistentVolumeLabel).Admit(0xc0011e7f00, 0x9525bc0, 0xc010af6800, 0x949bb80, 0xc012838900, 0x1, 0xc0011b93f0)                                                      
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storage/persistentvolume/label/admission.go:136 +0xef6                                                                               
github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/admission/metrics.pluginHandlerWithMetrics.Admit(0x93b4320, 0xc0011e7f00, 0xc0011b9410, 0xc0011b9420, 0x1, 0x1, 0x9525bc0, 0xc010af6800, 0x949bb80, 0xc012838900, ...)                                                  
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go:85 +0xe2                                                                                                                  
github.com/openshift/origin/vendor/github.com/openshift/library-go/pkg/apiserver/admission/admissiontimeout.pluginHandlerWithTimeout.Admit.func1(0xc028ce6a20, 0x7fd350070490, 0xc0011f7170, 0x9525bc0, 0xc010af6800, 0x949bb80, 0xc012838900, 0xc0023fd320)                    
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/openshift/library-go/pkg/apiserver/admission/admissiontimeout/timeoutadmission.go:36 +0xb4                                                                       
created by github.com/openshift/origin/vendor/github.com/openshift/library-go/pkg/apiserver/admission/admissiontimeout.pluginHandlerWithTimeout.Admit                                                                                                                           
        /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/openshift/library-go/pkg/apiserver/admission/admissiontimeout/timeoutadmission.go:33 +0x1cc

Comment 6 Mike Fedosin 2019-08-08 20:07:45 UTC
Hi! I proposed a fix for that: https://github.com/openshift/origin/pull/23578

I also built a release image that includes the modified hyperkube: export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/fedosin/origin-release:hk

Could you please test it to make sure that everything works fine now?
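For reference, testing such a candidate payload from scratch only needs the override exported before running the installer (the install directory is arbitrary):

$ export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/fedosin/origin-release:hk
$ openshift-install create cluster --dir ./test-cluster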

$ oc describe pvc
Name:          cinder-claim
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-84a732b4-ba17-11e9-8373-fa163e30e225
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: mfedosin-2946q-worker-hgbqm
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age                From                         Message
  ----       ------                 ----               ----                         -------
  Normal     WaitForFirstConsumer   75s (x3 over 95s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal     ProvisioningSucceeded  68s                persistentvolume-controller  Successfully provisioned volume pvc-84a732b4-ba17-11e9-8373-fa163e30e225 using kubernetes.io/cinder
Mounted By:  example

Comment 7 Mike Fedosin 2019-08-08 22:49:51 UTC
This is also required https://github.com/openshift/installer/pull/2189

Comment 8 Mike Fedosin 2019-08-08 22:54:51 UTC
After https://github.com/openshift/installer/pull/2189 and https://github.com/openshift/origin/pull/23578

$ oc run h --image=openshift/hello-openshift
kubectl run --generator=deploymentconfig/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deploymentconfig.apps.openshift.io/h created
$ oc set volume dc/h --add --name=v1 -t pvc --claim-size=1G --overwrite
warning: volume "v1" did not previously exist and was not overwritten. A new volume with this name has been created instead.
deploymentconfig.apps.openshift.io/h volume updated
$ oc get pods
NAME         READY   STATUS      RESTARTS   AGE
h-2-deploy   0/1     Completed   0          26s
h-2-z2p68    1/1     Running     0          20s
$ oc describe pvc
Name:          pvc-q9hlv
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-f9b22bd1-ba2e-11e9-b3a7-fa163ee1f11d
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
               volume.kubernetes.io/selected-node: mfedosin-gjjvf-worker-5tlsl
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                         Message
  ----       ------                 ----  ----                         -------
  Normal     WaitForFirstConsumer   67s   persistentvolume-controller  waiting for first consumer to be created before binding
  Normal     ProvisioningSucceeded  58s   persistentvolume-controller  Successfully provisioned volume pvc-f9b22bd1-ba2e-11e9-b3a7-fa163ee1f11d using kubernetes.io/cinder
Mounted By:  h-2-z2p68
$ oc describe pv
Name:              pvc-f9b22bd1-ba2e-11e9-b3a7-fa163ee1f11d
Labels:            failure-domain.beta.kubernetes.io/region=moc-kzn
                   failure-domain.beta.kubernetes.io/zone=nova
Annotations:       kubernetes.io/createdby: cinder-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      standard
Status:            Bound
Claim:             default/pvc-q9hlv
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          1Gi
Node Affinity:     
  Required Terms:  
    Term 0:        failure-domain.beta.kubernetes.io/zone in [nova]
                   failure-domain.beta.kubernetes.io/region in [moc-kzn]
Message:           
Source:
    Type:      Cinder (a Persistent Disk resource in OpenStack)
    VolumeID:      SecretRef:  %v
50e79e69-345b-46c8-97e3-ac27f23b564a
    FSType:                                           nil
    ReadOnly:                                         
%!(EXTRA bool=false, *v1.SecretReference=nil)Events:  <none>

Comment 9 weiwei jiang 2019-08-09 07:05:46 UTC
Checked with quay.io/fedosin/origin-release:hk.
Cinder storage can be dynamically provisioned now,
but pods stay Pending because none of the nodes have the failure-domain.beta.kubernetes.io/region label,
so I think the cloud provider still has an issue here.
Please have a look, thanks.



➜  ~ oc describe pods h-2-kxlpm
Name:               h-2-kxlpm
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             deployment=h-2
                    deploymentconfig=h
                    run=h
Annotations:        openshift.io/deployment-config.latest-version: 2
                    openshift.io/deployment-config.name: h
                    openshift.io/deployment.name: h-2
Status:             Pending
IP:                 
Controlled By:      ReplicationController/h-2
Containers:
  h:
    Image:        openshift/hello-openshift
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wh759 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  v1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-5nhsm
    ReadOnly:   false
  default-token-wh759:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wh759
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  9m36s                  default-scheduler  Operation cannot be fulfilled on persistentvolumeclaims "pvc-5nhsm": the object has been modified; please apply your changes to the latest version and try again
  Warning  FailedScheduling  9m34s (x2 over 9m36s)  default-scheduler  AssumePod failed: pod ff50a465-ba71-11e9-ba29-fa163e216b6a is in the cache, so can't be assumed
  Warning  FailedScheduling  9m34s (x3 over 9m35s)  default-scheduler  AssumePod failed: pod ff50a465-ba71-11e9-ba29-fa163e216b6a is in the cache, so can't be assumed
  Warning  FailedScheduling  9m34s                  default-scheduler  pv "pvc-fd284cd5-ba71-11e9-84fd-fa163e2a9713" node affinity doesn't match node "wjosp0809a-2ngvz-worker-lgjff": No matching NodeSelectorTerms
  Warning  FailedScheduling  9m33s                  default-scheduler  pv "pvc-fd284cd5-ba71-11e9-84fd-fa163e2a9713" node affinity doesn't match node "wjosp0809a-2ngvz-worker-lgjff": No matching NodeSelectorTerms
  Warning  FailedScheduling  89s (x9 over 9m32s)    default-scheduler  0/6 nodes are available: 3 node(s) had taints that the pod didn't tolerate, 3 node(s) had volume node affinity conflict.
  Warning  FailedScheduling  86s (x8 over 9m32s)    default-scheduler  0/6 nodes are available: 3 node(s) had taints that the pod didn't tolerate, 3 node(s) had volume node affinity conflict.
➜  ~ oc get pv -o yaml         
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubernetes.io/createdby: cinder-dynamic-provisioner
      pv.kubernetes.io/bound-by-controller: "yes"
      pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
    creationTimestamp: "2019-08-09T06:50:40Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      failure-domain.beta.kubernetes.io/region: ""
      failure-domain.beta.kubernetes.io/zone: nova
    name: pvc-fd284cd5-ba71-11e9-84fd-fa163e2a9713
    resourceVersion: "23810"
    selfLink: /api/v1/persistentvolumes/pvc-fd284cd5-ba71-11e9-84fd-fa163e2a9713
    uid: fff807a0-ba71-11e9-ba29-fa163e216b6a
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    cinder:
      volumeID: 8ce150a4-8add-46fe-a101-49567e1e17c7
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: pvc-5nhsm
      namespace: default
      resourceVersion: "23796"
      uid: fd284cd5-ba71-11e9-84fd-fa163e2a9713
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - nova
          - key: failure-domain.beta.kubernetes.io/region
            operator: In
            values:
            - regionOne
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
    volumeMode: Filesystem
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
➜  ~ oc get nodes --show-labels
NAME                            STATUS   ROLES    AGE   VERSION             LABELS
wjosp0809a-2ngvz-master-0       Ready    master   55m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=wjosp0809a-2ngvz-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
wjosp0809a-2ngvz-master-1       Ready    master   54m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=wjosp0809a-2ngvz-master-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
wjosp0809a-2ngvz-master-2       Ready    master   54m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=wjosp0809a-2ngvz-master-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
wjosp0809a-2ngvz-worker-gcdgz   Ready    worker   45m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=host-192-168-0-31,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
wjosp0809a-2ngvz-worker-lgjff   Ready    worker   45m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=host-192-168-0-7,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
wjosp0809a-2ngvz-worker-qnc28   Ready    worker   45m   v1.14.0+36031110c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m1.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/zone=nova,kubernetes.io/arch=amd64,kubernetes.io/hostname=host-192-168-0-17,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
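The mismatch can also be seen directly by comparing the PV's node affinity with the node labels (PV name taken from the output above; -L only adds label columns):

$ oc get pv pvc-fd284cd5-ba71-11e9-84fd-fa163e2a9713 \
    -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms}'
$ oc get nodes -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone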

Comment 10 weiwei jiang 2019-08-09 07:07:28 UTC
After I added the region label to a specific worker, the pods started running.
➜  ~ oc label nodes/wjosp0809a-2ngvz-worker-lgjff failure-domain.beta.kubernetes.io/region=regionOne
node/wjosp0809a-2ngvz-worker-lgjff labeled   
➜  ~ oc rollout latest dc/h
deploymentconfig.apps.openshift.io/h rolled out
➜  ~ oc get pods -o wide 
NAME         READY   STATUS      RESTARTS   AGE   IP            NODE                            NOMINATED NODE   READINESS GATES
h-2-deploy   0/1     Error       0          13m   10.129.2.12   wjosp0809a-2ngvz-worker-lgjff   <none>           <none>
h-3-deploy   0/1     Completed   0          28s   10.129.2.14   wjosp0809a-2ngvz-worker-lgjff   <none>           <none>
h-3-lsvcd    1/1     Running     0          27s   10.129.2.15   wjosp0809a-2ngvz-worker-lgjff   <none>           <none>
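If every worker sits in the same region, the same workaround can be applied to the whole role at once instead of a single node (workaround only; the proper fix is the installer patch mentioned in the next comment, and the region value here comes from the PV node affinity shown in comment 9):

$ oc label nodes -l node-role.kubernetes.io/worker failure-domain.beta.kubernetes.io/region=regionOne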

Comment 11 Mike Fedosin 2019-08-09 07:09:36 UTC
You are right. The region is automatically set by this patch https://github.com/openshift/installer/pull/2189

Comment 13 weiwei jiang 2019-08-12 08:43:38 UTC
Waiting for an accepted nightly build that contains the patch in order to verify this issue.
The latest accepted nightly is currently 4.2.0-0.nightly-2019-08-08-103722, which does not include the patch.
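One way to check whether a given nightly already carries the fixes is to list the commits in the release payload (the pullspec below is only an example of where nightlies were published at the time):

$ oc adm release info registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-08-08-103722 \
    --commits | grep -E 'hyperkube|installer'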

Comment 14 weiwei jiang 2019-08-14 08:39:25 UTC
Checked with 4.2.0-0.nightly-2019-08-13-183722, it's fixed now.

➜  ~ oc get pvc 
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-vx8s8   Bound    pvc-bc0bc4d4-be6d-11e9-ac4f-fa163e85645c   1Gi        RWO            standard       8m27s
➜  ~ oc describe pods h-2-zsjqn
Name:               h-2-zsjqn
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               wjosp0814a-sw7kh-worker-tfh94/192.168.0.25
Start Time:         Wed, 14 Aug 2019 16:34:08 +0800
Labels:             deployment=h-2
                    deploymentconfig=h
                    run=h
Annotations:        openshift.io/deployment-config.latest-version: 2
                    openshift.io/deployment-config.name: h
                    openshift.io/deployment.name: h-2
Status:             Running
IP:                 10.128.2.16
Controlled By:      ReplicationController/h-2
Containers:
  h:
    Container ID:   cri-o://b8617c22da91236e03158b785314670b41a1632425804696879ea43f16cf38bf
    Image:          bmeng/hello-openshift
    Image ID:       docker.io/bmeng/hello-openshift@sha256:06e2138a83893cad0b3484fa7840b820ee6c203813230d554c5010d494784b7c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 14 Aug 2019 16:34:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c5jjr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  test:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-vx8s8
    ReadOnly:   false
  default-token-c5jjr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-c5jjr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age    From                                    Message
  ----    ------                  ----   ----                                    -------
  Normal  Scheduled               4m41s  default-scheduler                       Successfully assigned default/h-2-zsjqn to wjosp0814a-sw7kh-worker-tfh94
  Normal  SuccessfulAttachVolume  4m36s  attachdetach-controller                 AttachVolume.Attach succeeded for volume "pvc-bc0bc4d4-be6d-11e9-ac4f-fa163e85645c"
  Normal  Pulling                 4m28s  kubelet, wjosp0814a-sw7kh-worker-tfh94  Pulling image "bmeng/hello-openshift"
  Normal  Pulled                  4m28s  kubelet, wjosp0814a-sw7kh-worker-tfh94  Successfully pulled image "bmeng/hello-openshift"
  Normal  Created                 4m28s  kubelet, wjosp0814a-sw7kh-worker-tfh94  Created container h
  Normal  Started                 4m28s  kubelet, wjosp0814a-sw7kh-worker-tfh94  Started container h
➜  ~ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-bc0bc4d4-be6d-11e9-ac4f-fa163e85645c   1Gi        RWO            Delete           Bound    default/pvc-vx8s8   standard                8m45s
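To double-check that the region labelling issue from comment 9 is also gone, the PV and node labels can be listed directly:

$ oc get pv --show-labels
$ oc get nodes -L failure-domain.beta.kubernetes.io/region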

Comment 15 errata-xmlrpc 2019-10-16 06:34:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

