Bug 1672772 - Fluentd unable to deploy because of SCC not created by OLM
Summary: Fluentd unable to deploy because of SCC not created by OLM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.1.0
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Duplicates: 1674934
Depends On:
Blocks:
 
Reported: 2019-02-05 21:06 UTC by Jeff Cantrill
Modified: 2019-06-04 10:43 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:31 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:43:52 UTC

Description Jeff Cantrill 2019-02-05 21:06:16 UTC
Deployed cluster-logging-operator with the latest image and the following ClusterLogging resource:

apiVersion: "logging.openshift.io/v1alpha1"
kind: "ClusterLogging"
metadata:
  name: "example"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      storage: {}
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

which resulted in:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2019-02-05T20:46:30Z
  generation: 1
  labels:
    component: fluentd
    logging-infra: fluentd
    provider: openshift
  name: fluentd
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: logging.openshift.io/v1alpha1
    controller: true
    kind: ClusterLogging
    name: example
    uid: 7e43a8fb-2984-11e9-aa30-028343db471a
  resourceVersion: "40832"
  selfLink: /apis/extensions/v1beta1/namespaces/openshift-logging/daemonsets/fluentd
  uid: 1d4c4e03-2987-11e9-aa30-028343db471a
spec:
  minReadySeconds: 600
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: fluentd
      logging-infra: fluentd
      provider: openshift
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        component: fluentd
        logging-infra: fluentd
        provider: openshift
      name: fluentd
    spec:
      containers:
      - env:
        - name: MERGE_JSON_LOG
          value: "false"
        - name: K8S_HOST_URL
          value: https://kubernetes.default.svc
        - name: ES_HOST
          value: elasticsearch
        - name: ES_PORT
          value: "9200"
        - name: ES_CLIENT_CERT
          value: /etc/fluent/keys/app-cert
        - name: ES_CLIENT_KEY
          value: /etc/fluent/keys/app-key
        - name: ES_CA
          value: /etc/fluent/keys/app-ca
        - name: OPS_HOST
          value: elasticsearch
        - name: OPS_PORT
          value: "9200"
        - name: OPS_CLIENT_CERT
          value: /etc/fluent/keys/infra-cert
        - name: OPS_CLIENT_KEY
          value: /etc/fluent/keys/infra-key
        - name: OPS_CA
          value: /etc/fluent/keys/infra-ca
        - name: JOURNAL_SOURCE
        - name: JOURNAL_READ_FROM_HEAD
        - name: BUFFER_QUEUE_LIMIT
          value: "32"
        - name: BUFFER_SIZE_LIMIT
          value: 8m
        - name: FILE_BUFFER_LIMIT
          value: 256Mi
        - name: FLUENTD_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: fluentd
              divisor: "0"
              resource: limits.cpu
        - name: FLUENTD_MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: fluentd
              divisor: "0"
              resource: limits.memory
        - name: NODE_IPV4
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: quay.io/openshift/origin-logging-fluentd:latest
        imagePullPolicy: IfNotPresent
        name: fluentd
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /run/log/journal
          name: runlogjournal
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /etc/fluent/configs.d/user
          name: config
          readOnly: true
        - mountPath: /etc/fluent/keys
          name: certs
          readOnly: true
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/sysconfig/docker
          name: dockercfg
          readOnly: true
        - mountPath: /etc/docker
          name: dockerdaemoncfg
          readOnly: true
        - mountPath: /var/lib/fluentd
          name: filebufferstorage
      dnsPolicy: ClusterFirst
      priorityClassName: cluster-logging
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fluentd
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - hostPath:
          path: /run/log/journal
          type: ""
        name: runlogjournal
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 420
          name: fluentd
        name: config
      - name: certs
        secret:
          defaultMode: 420
          secretName: fluentd
      - hostPath:
          path: /etc/hostname
          type: ""
        name: dockerhostname
      - hostPath:
          path: /etc/localtime
          type: ""
        name: localtime
      - hostPath:
          path: /etc/sysconfig/docker
          type: ""
        name: dockercfg
      - hostPath:
          path: /etc/docker
          type: ""
        name: dockerdaemoncfg
      - hostPath:
          path: /var/lib/fluentd
          type: ""
        name: filebufferstorage
  templateGeneration: 1
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0


with error:

  Warning  FailedCreate  3m (x19 over 14m)  daemonset-controller  Error creating: pods "fluentd-" is forbidden: unable to validate against any security context constraint: [
    spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[6]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[7]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[8]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.volumes[9]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
    spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

Comment 1 Jeff Cantrill 2019-02-05 21:26:35 UTC
Per my conversation with Eric, we are missing something to create [1], and it probably should not be the cluster-logging-operator that creates it:

[1] https://github.com/openshift/openshift-ansible/blob/release-3.11/roles/openshift_logging_fluentd/tasks/main.yaml#L73-L79
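For reference, the openshift-ansible task in [1] grants the fluentd service account access to the privileged SCC. Until the operator (or OLM) does the equivalent, a manual workaround along these lines should unblock the DaemonSet. This is a sketch, not the eventual fix; the namespace and service account names are taken from the pod spec above:

```shell
# Grant the fluentd service account the privileged SCC, which permits
# hostPath volumes and the privileged container the DaemonSet requires.
oc adm policy add-scc-to-user privileged \
  system:serviceaccount:openshift-logging:fluentd

# The daemonset-controller retries pod creation on its own, so the
# fluentd pods should start shortly after the grant takes effect.
```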

Comment 2 Paul Weil 2019-02-06 12:52:43 UTC
Yes, this error comes from the SCC provider.go and indicates that the pod spec did not pass SCC review against the allowable permissions. Can the operator check this prior to launching, or is it safe to launch and allow the deployment to fail? There is a CLI command, oc adm policy scc-subject-review, that checks whether a user can create a given pod; its backing code may be useful here.
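The review command Paul mentions can be run against the generated pod spec before deploying. A sketch, assuming the DaemonSet above is saved as fluentd-ds.yaml (the filename is illustrative):

```shell
# Ask the SCC admission layer whether the fluentd service account
# would be allowed to create the pods in this DaemonSet spec.
oc adm policy scc-subject-review \
  -z fluentd \
  -f fluentd-ds.yaml \
  -n openshift-logging
```

If no SCC admits the spec, the output shows an empty ALLOWED BY column, which is exactly the condition the operator could detect before creating the DaemonSet.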

Comment 3 Jeff Cantrill 2019-02-06 18:58:25 UTC
See [1] for an example of the change required in the CSV:

[1] https://github.com/robszumski/helm-operators/blob/master/cockroachdb/cockroachdb.v2.0.9-2.clusterserviceversion.yaml#L72-L82
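The linked CSV illustrates the pattern: the ClusterServiceVersion declares a clusterPermissions rule so OLM grants the operator's service account "use" of the needed SCC at install time. A sketch of what that fragment might look like for this case (the service account and SCC names are assumptions based on the pod spec above, not the merged change):

```yaml
# Fragment of a ClusterServiceVersion (spec.install.spec.clusterPermissions):
# lets the fluentd service account use the privileged SCC.
clusterPermissions:
- serviceAccountName: fluentd
  rules:
  - apiGroups:
    - security.openshift.io
    resources:
    - securitycontextconstraints
    resourceNames:
    - privileged
    verbs:
    - use
```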

Comment 5 Jeff Cantrill 2019-02-11 19:10:43 UTC
*** Bug 1674934 has been marked as a duplicate of this bug. ***

Comment 6 ewolinet 2019-02-13 17:30:59 UTC
The two PRs from #c4 have merged.

Comment 12 errata-xmlrpc 2019-06-04 10:42:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

