Bug 1324371
Summary: | needn't install atomic-openshift packages on nfs server | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Anping Li <anli> |
Component: | Installer | Assignee: | Andrew Butcher <abutcher> |
Status: | CLOSED ERRATA | QA Contact: | Ma xiaoqiang <xiama> |
Severity: | high | Docs Contact: | Johnny Liu <jialiu> |
Priority: | high | ||
Version: | 3.2.0 | CC: | aos-bugs, bleanhar, gpei, jialiu, jokerman, mmccomas, xtian |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2016-05-12 16:40:07 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1323057 |
Description
Anping Li
2016-04-06 08:04:38 UTC
Verified this bug with openshift-ansible-3.0.75-1.git.0.83b3b91.el7.noarch.rpm. After installation, logging in to the NFS server host shows that no OpenShift-related packages are installed.

Found that the PR is breaking other things. Here are the NFS setting options in my inventory:

```
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash,sync,no_wdelay)'
openshift_hosted_registry_storage_nfs_directory=/var/lib/exports
openshift_hosted_registry_storage_volume_name=regpv
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_volume_size=17G
```

After installation is done, checking on the NFS server:

```
# cat /etc/exports
/exports/registry *(rw,root_squash)
```

The NFS server is exporting the wrong directory, not the one specified in the PV definition, which makes the docker-registry deployment fail:

```
$ oc get po
NAME                       READY     STATUS              RESTARTS   AGE
docker-registry-1-deploy   0/1       DeadlineExceeded    0          7h
docker-registry-2-79t61    0/1       ContainerCreating   0          7h
docker-registry-2-deploy   1/1       Running             0          7h

$ oc describe po docker-registry-2-79t61
<--snip-->
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type     Reason       Message
  ---------  --------  -----  ----                     -------------  ----     ------       -------
  7h         7h        1      {default-scheduler }                    Normal   Scheduled    Successfully assigned docker-registry-2-79t61 to openshift-xxx
  7h         7h        20     {kubelet openshift-xxx}                 Warning  FailedMount  Unable to mount volumes for pod "docker-registry-2-79t61_default(0b15a785-fd51-11e5-97d3-fa163ee13a09)": Mount failed: exit status 32
    Mounting arguments: openshift-xxx:/var/lib/exports/regpv /var/lib/origin/openshift.local.volumes/pods/0b15a785-fd51-11e5-97d3-fa163ee13a09/volumes/kubernetes.io~nfs/regpv-volume nfs []
    Output: Job for rpc-statd.service failed because the control process exited with error code. See "systemctl status rpc-statd.service" and "journalctl -xe" for details.
    mount.nfs: rpc.statd is not running but is required for remote locking.
    mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
    mount.nfs: an incorrect mount option was specified
  7h         7h        20     {kubelet openshift-xxx}                 Warning  FailedSync   Error syncing pod, skipping: Mount failed: exit status 32
    (same mounting arguments and output as the FailedMount event above)
```

This is breaking installation, so raising its severity.

Proposed fix: https://github.com/openshift/openshift-ansible/pull/1733

Verified this bug with the latest openshift-ansible master branch, and it passes: no OpenShift RPM packages are installed on the NFS server, and the NFS PV storage created on this NFS server is working well.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1065
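As a quick way to spot the mismatch described in this report, a sketch like the following compares the directory actually exported in `/etc/exports` against the path the PV is expected to use. The paths are taken from the report itself (`openshift_hosted_registry_storage_nfs_directory` plus `openshift_hosted_registry_storage_volume_name`); the variable names are only illustrative, not part of the installer.

```shell
# Illustrative check, not part of openshift-ansible.
# Expected export path = nfs_directory + "/" + volume_name from the inventory.
expected="/var/lib/exports/regpv"

# First field of the exports entry observed on the NFS server in this report.
exported=$(echo "/exports/registry *(rw,root_squash)" | awk '{print $1}')

if [ "$exported" != "$expected" ]; then
  echo "export mismatch: NFS server exports $exported, but the PV expects $expected"
fi
```

A mismatch here is exactly the failure mode reported: the kubelet tries to mount `/var/lib/exports/regpv` while the server only exports `/exports/registry`.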
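The second failure in the events above is independent of the export path: `mount.nfs` refuses the mount because `rpc.statd` is not running on the node. The error text below is copied verbatim from this report, and the hint echoed is simply the remediation that `mount.nfs` itself suggests (start statd, or mount with `-o nolock`); this is a diagnostic sketch, not installer code.

```shell
# Recognize the rpc.statd failure from mount output (error text from this report).
mount_output="mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd."

if echo "$mount_output" | grep -q "rpc.statd is not running"; then
  # The usual fixes, as suggested by mount.nfs itself:
  echo "hint: start rpc-statd on the node, or add nolock to the NFS mount options"
fi
```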