Bug 1465378 - template file for rhgs-s3-server
Summary: template file for rhgs-s3-server
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-deploy-tool
Version: rhgs-3.2
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.6
Assignee: Saravanakumar
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks: 1445448
 
Reported: 2017-06-27 10:36 UTC by Saravanakumar
Modified: 2017-10-11 07:12 UTC
CC List: 9 users

Fixed In Version: cns-deploy-5.0.0-7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-11 07:12:11 UTC
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:2881 0 normal SHIPPED_LIVE cns-deploy-tool bug fix and enhancement update 2017-10-11 11:11:43 UTC

Description Saravanakumar 2017-06-27 10:36:28 UTC
Description of problem:

Provide an OpenShift template file comprising the pod, service, and related objects for rhgs-s3-server.
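
For reference, a template of this kind is normally registered in a project and then processed with parameters; a rough sketch (the file path, template name, and parameter defaults are taken from the verified template shown in comment 6 below):

# Register the template object in the current project
oc create -f /usr/share/heketi/templates/gluster-s3/gluster-s3-template.yaml

# Render it with parameters and create the pod, service, route and claims it defines
oc process glusters3template \
    -p S3_ACCOUNT=test -p S3_USER=admin -p S3_PASSWORD=testing \
    -p STORAGE_CLASS=s3storageclass -p VOLUME_CAPACITY=2Gi \
  | oc create -f -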

Comment 5 Saravanakumar 2017-07-06 06:27:27 UTC
Corresponding upstream pull request - WIP :

https://github.com/gluster/gluster-kubernetes/pull/277

Comment 6 Prasanth 2017-08-30 06:33:46 UTC
Verified.

#############
[root@dhcp46-249 gluster-s3]# pwd
/usr/share/heketi/templates/gluster-s3
[root@dhcp46-249 gluster-s3]# ls -al
total 16
drwxr-xr-x. 2 root root   75 Aug 29 17:05 .
drwxr-xr-x. 4 root root  226 Aug 29 17:05 ..
-rw-r--r--. 1 root root 4428 Aug 29 16:16 gluster-s3-template.yaml

# cat gluster-s3-template.yaml
---
kind: Template
apiVersion: v1
metadata:
  name: glusters3template
  annotations:
    description: Gluster s3 service template
    tags: glusterfs,heketi,gluster-s3
objects:
- kind: Service
  apiVersion: v1
  metadata:
    name: glusters3service
    labels:
      name: glusters3
  spec:
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    selector:
      name: glusters3
    type: ClusterIP
    sessionAffinity: None
  status:
    loadBalancer: {}
- kind: Route
  apiVersion: v1
  metadata:
    name: glusters3object
    labels:
      glusterfs: glusters3object-route
  spec:
    to:
      kind: Service
      name: glusters3service
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: glusterfs-s3-claim
    annotations:
      volume.beta.kubernetes.io/storage-class: "${STORAGE_CLASS}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: glusterfs-s3-claim-meta
    annotations:
      volume.beta.kubernetes.io/storage-class: "${STORAGE_CLASS}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: glusters3
    labels:
      name: glusters3
    annotations:
      openshift.io/scc: privileged
      description: Defines how to deploy gluster s3 object storage
  spec:
    replicas: 1
    selector:
      name: glusters3
    template:
      metadata:
        name: glusters3
        labels:
          name: glusters3
      spec:
        volumes:
        - name: glusterfs-cgroup
          hostPath:
            path: "/sys/fs/cgroup"
        - name: gluster-vol1
          persistentVolumeClaim:
            claimName: glusterfs-s3-claim
        - name: gluster-vol2
          persistentVolumeClaim:
            claimName: glusterfs-s3-claim-meta
        containers:
        - name: glusters3
          image: rhgs3/rhgs-s3-server-rhel7:3.3.0-6
          imagePullPolicy: IfNotPresent
          ports:
          - name: gluster
            containerPort: 8080
            protocol: TCP
          env:
          - name: S3_ACCOUNT
            value: "${S3_ACCOUNT}"
          - name: S3_USER
            value: "${S3_USER}"
          - name: S3_PASSWORD
            value: "${S3_PASSWORD}"
          resources: {}
          volumeMounts:
          - name: gluster-vol1
            mountPath: "/mnt/gluster-object/${S3_ACCOUNT}"
          - name: gluster-vol2
            mountPath: "/mnt/gluster-object/gsmetadata"
          - name: glusterfs-cgroup
            readOnly: true
            mountPath: "/sys/fs/cgroup"
          terminationMessagePath: "/dev/termination-log"
          securityContext:
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
              - "/bin/bash"
              - "-c"
              - systemctl status swift-object.service
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 15
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
              - "/bin/bash"
              - "-c"
              - systemctl status swift-object.service
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 15
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirst
        serviceAccountName: default
        serviceAccount: default
        securityContext: {}
  status: {}
parameters:
- name: S3_ACCOUNT
  displayName: S3 account
  description: S3 storage account that will be backed by a GlusterFS volume
  value: test
  required: true
- name: S3_USER
  displayName: S3 user
  description: S3 user who can access the s3 storage service
  value: admin
  required: true
- name: S3_PASSWORD
  displayName: S3 user authentication
  description: S3 user password
  value: testing
  required: true
- name: STORAGE_CLASS
  displayName: Storage class with GlusterFS provisioner
  description: Storage class with GlusterFS provisioner for creating gluster volumes
  value: s3storageclass
  required: true
- name: VOLUME_CAPACITY
  displayName: Volume capacity
  description: Volume capacity available for s3 object store, e.g. 1Gi, 2Gi.
  value: 2Gi
  required: true
#############
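
As a quick follow-on check once the template has been processed in a project, the objects it creates can be listed by the names and label defined above; a sketch using standard oc commands (the S3 endpoint itself is exposed through the route on service port 8080):

# Pod, service, route and claims created from gluster-s3-template.yaml
oc get pods -l name=glusters3
oc get svc glusters3service
oc get route glusters3object
oc get pvc glusterfs-s3-claim glusterfs-s3-claim-meta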

Comment 8 errata-xmlrpc 2017-10-11 07:12:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2881

