Bug 1554206 - Failed to deploy CNS3.9 on OCP3.7
Summary: Failed to deploy CNS3.9 on OCP3.7
Keywords:
Status: CLOSED DUPLICATE of bug 1553653
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-deploy-tool
Version: cns-3.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Michael Adam
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-12 04:48 UTC by Apeksha
Modified: 2018-03-14 02:34 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-13 11:47:50 UTC
Embargoed:



Description Apeksha 2018-03-12 04:48:20 UTC
Description of problem:
Failed to deploy CNS3.9 on OCP3.7

Version-Release number of selected component (if applicable):
cns-deploy-6.0.0-5.el7rhgs.x86_64

How reproducible: Once


Steps to Reproduce:
1. Install OCP3.7
2. Run cns-deploy against the cluster (see the sketch below). The deploy fails while bringing up the gluster pods, which go into CrashLoopBackOff.
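
A minimal sketch of step 2, for reference only. The namespace is taken from the router pod name in the listing below; the topology.json file name and the exact flags are the usual cns-deploy invocation but are assumptions here, not the exact command that was run:

    # assumed invocation: deploy GlusterFS pods into the storage-project namespace
    cns-deploy -n storage-project -g topology.json
    # check pod status after the deploy attempt
    oc get pods -n storage-project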

[root@dhcp46-157 ~]# oc get pods
NAME                             READY     STATUS             RESTARTS   AGE
glusterfs-2lbrv                  0/1       CrashLoopBackOff   697        2d
glusterfs-62gnt                  0/1       CrashLoopBackOff   698        2d
glusterfs-84mj6                  0/1       CrashLoopBackOff   698        2d
storage-project-router-1-r45qq   1/1       Running            0          2d


Additional info:

[root@dhcp46-157 ~]# oc describe pod glusterfs-2lbrv 
Name:		glusterfs-2lbrv
Namespace:	storage-project
Node:		dhcp47-13.lab.eng.blr.redhat.com/10.70.47.13
Start Time:	Fri, 09 Mar 2018 22:29:42 +0530
Labels:		controller-revision-hash=1493370872
		glusterfs=pod
		glusterfs-node=pod
		pod-template-generation=1
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"storage-project","name":"glusterfs","uid":"42db3b70-23bb-11e8-9580-005056a5c5c6","...
		openshift.io/scc=privileged
Status:		Running
IP:		10.70.47.13
Created By:	DaemonSet/glusterfs
Controlled By:	DaemonSet/glusterfs
Containers:
  glusterfs:
    Container ID:	docker://f13f9c34a25dd93f55812175a2502845860c8cc73606ac71e541127ad1acf30a
    Image:		rhgs3/rhgs-server-rhel7:3.3.1-7
    Image ID:		docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7@sha256:dd27c75e50c9f5b1578991b69bc706a21f0949ba83adae0a66803120059c4646
    Port:		<none>
    State:		Terminated
      Reason:		Error
      Exit Code:	1
      Started:		Mon, 12 Mar 2018 10:02:12 +0530
      Finished:		Mon, 12 Mar 2018 10:02:13 +0530
    Last State:		Terminated
      Reason:		Error
      Exit Code:	1
      Started:		Mon, 12 Mar 2018 09:57:00 +0530
      Finished:		Mon, 12 Mar 2018 09:57:00 +0530
    Ready:		False
    Restart Count:	698
    Requests:
      cpu:	100m
      memory:	100Mi
    Liveness:	exec [/bin/bash -c systemctl status glusterd.service] delay=40s timeout=3s period=25s #success=1 #failure=50
    Readiness:	exec [/bin/bash -c systemctl status glusterd.service] delay=40s timeout=3s period=25s #success=1 #failure=50
    Environment:
      GB_GLFS_LRU_COUNT:	15
      TCMU_LOGDIR:		/var/log/glusterfs/gluster-block
      GB_LOGDIR:		/var/log/glusterfs/gluster-block
    Mounts:
      /dev from glusterfs-dev (rw)
      /etc/glusterfs from glusterfs-etc (rw)
      /etc/ssl from glusterfs-ssl (ro)
      /etc/target from glusterfs-block (rw)
      /run from glusterfs-run (rw)
      /run/lvm from glusterfs-lvm (rw)
      /sys/fs/cgroup from glusterfs-cgroup (ro)
      /var/lib/glusterd from glusterfs-config (rw)
      /var/lib/heketi from glusterfs-heketi (rw)
      /var/lib/misc/glusterfsd from glusterfs-misc (rw)
      /var/log/glusterfs from glusterfs-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ghtwh (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  glusterfs-block:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/target
  glusterfs-heketi:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/heketi
  glusterfs-run:
    Type:	EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:	
  glusterfs-lvm:
    Type:	HostPath (bare host directory volume)
    Path:	/run/lvm
  glusterfs-etc:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/glusterfs
  glusterfs-logs:
    Type:	HostPath (bare host directory volume)
    Path:	/var/log/glusterfs
  glusterfs-config:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/glusterd
  glusterfs-dev:
    Type:	HostPath (bare host directory volume)
    Path:	/dev
  glusterfs-misc:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/misc/glusterfsd
  glusterfs-cgroup:
    Type:	HostPath (bare host directory volume)
    Path:	/sys/fs/cgroup
  glusterfs-ssl:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/ssl
  default-token-ghtwh:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-ghtwh
    Optional:	false
QoS Class:	Burstable
Node-Selectors:	storagenode=glusterfs
Tolerations:	node.alpha.kubernetes.io/notReady:NoExecute
		node.alpha.kubernetes.io/unreachable:NoExecute
Events:
  FirstSeen	LastSeen	Count	From						SubObjectPath			Type		Reason	Message
  ---------	--------	-----	----						-------------			--------	------	-------
  2d		57m		688	kubelet, dhcp47-13.lab.eng.blr.redhat.com	spec.containers{glusterfs}	Normal		Pulled	Container image "rhgs3/rhgs-server-rhel7:3.3.1-7" already present on machine
  2d		2m		16527	kubelet, dhcp47-13.lab.eng.blr.redhat.com	spec.containers{glusterfs}	Warning		BackOff	Back-off restarting failed container
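
Both probes above exec "systemctl status glusterd.service", so a container whose systemd never comes up keeps failing its probes and restarting. A sketch of how the probe definition can be pulled straight from the DaemonSet for comparison (assumes the glusterfs container is the first entry in the pod spec):

    # print the liveness probe of the first container in the glusterfs DaemonSet
    oc get ds glusterfs -n storage-project \
      -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'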

Comment 2 Apeksha 2018-03-12 04:54:11 UTC
oc logs of gluster-pod:

env variable is set. Update in gluster-blockd.service
Couldn't find an alternative telinit implementation to spawn.
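
The message suggests systemd inside the container exits almost immediately. A sketch of how the full logs could be gathered (the container ID prefix is taken from the describe output above):

    # current and previous instances of the crashing container
    oc logs glusterfs-2lbrv -n storage-project
    oc logs glusterfs-2lbrv -n storage-project --previous
    # raw docker logs on the node, using the container ID from 'oc describe'
    docker logs f13f9c34a25d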

Comment 6 Humble Chirammal 2018-03-13 11:47:50 UTC

*** This bug has been marked as a duplicate of bug 1553653 ***

