Bug 1667606
| Summary: | Flexvolume is broken on Openshift-4.0 | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Hemant Kumar <hekumar> |
| Component: | Storage | Assignee: | Hemant Kumar <hekumar> |
| Status: | CLOSED ERRATA | QA Contact: | Wenqi He <wehe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.1.0 | CC: | aos-bugs, aos-storage-staff, bchilds, hongkliu, karan, pasik, wehe |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:42:06 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Hemant Kumar 2019-01-18 23:15:38 UTC

This has been fixed in both controller-manager and kubelet. The new location for flexvolume plugins is `/etc/kubernetes/kubelet-plugins/volume/exec`. It is a writable location on RHCOS machines and the place where flexvolume plugins should be installed. controller-manager, kubelet, and apiserver have already been updated to use this location.

---

@Hemant, could you please point out which build contains the fix? Thanks. Hongkai

---

The 0.12 installer should already include the fixed components.

---

Hey Hemant, I am using v0.12 and hitting this BZ:

```
$ ./openshift-install-darwin-amd64 version
./openshift-install-darwin-amd64 v0.12.0
```

Not sure if I am missing something.

---

Have you tried installing the flexvolume plugin in the location I pointed out?

---

Sorry, I don't know how to do that. Can you send me some instructions that I can try, and I will share feedback with you? Thanks.

---

Thanks @Hongkai for the feedback. I got the same results as him.

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-20-194410   True        False         28h     Cluster version is 4.0.0-0.nightly-2019-02-20-194410

$ oc get pods -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5zlvz                 1/1     Running   0          9m12s
rook-ceph-agent-642qp                 1/1     Running   0          9m12s
rook-ceph-agent-d9mvk                 1/1     Running   0          9m12s
rook-ceph-agent-g6vf7                 1/1     Running   0          9m12s
rook-ceph-agent-sdjh2                 1/1     Running   0          9m12s
rook-ceph-operator-5dd9cd8dc9-9ghms   1/1     Running   0          10m
rook-discover-56nrl                   1/1     Running   0          9m12s
rook-discover-bbbdk                   1/1     Running   0          9m12s
rook-discover-dhbtj                   1/1     Running   0          9m12s
rook-discover-tj9jd                   1/1     Running   0          9m12s
rook-discover-wkr5v                   1/1     Running   0          9m12s
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
This has been fixed in both controller-manager and kubelet. New location for flexvolume plugin is - /etc/kubernetes/kubelet-plugins/volume/exec . It is a writable location on RHCOS machines and place where flexvolume plugins should be installer. controller-manager, kubelet and apiserver already has been updated to use this location. @Hemant, Could you please point out which build contains the fix? Thanks. Hongkai 0.12 installer should already include fixed components. Hey Hemant I am using v0.12 and hitting this BZ $ ./openshift-install-darwin-amd64 version ./openshift-install-darwin-amd64 v0.12.0 Not sure if i am missing something. Have you tried installing flexvolume plugin in location I pointed out? Sorry, don't know how to do that. Can you send me some instructions that i can try and share feedback with you ? Thanks Thanks @Hongkai for the feedback. I got the same results with him. $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.0.0-0.nightly-2019-02-20-194410 True False 28h Cluster version is 4.0.0-0.nightly-2019-02-20-194410 $ oc get pods -n rook-ceph-system NAME READY STATUS RESTARTS AGE rook-ceph-agent-5zlvz 1/1 Running 0 9m12s rook-ceph-agent-642qp 1/1 Running 0 9m12s rook-ceph-agent-d9mvk 1/1 Running 0 9m12s rook-ceph-agent-g6vf7 1/1 Running 0 9m12s rook-ceph-agent-sdjh2 1/1 Running 0 9m12s rook-ceph-operator-5dd9cd8dc9-9ghms 1/1 Running 0 10m rook-discover-56nrl 1/1 Running 0 9m12s rook-discover-bbbdk 1/1 Running 0 9m12s rook-discover-dhbtj 1/1 Running 0 9m12s rook-discover-tj9jd 1/1 Running 0 9m12s rook-discover-wkr5v 1/1 Running 0 9m12s Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758 |