Bug 1465378 - template file for rhgs-s3-server
Status: VERIFIED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: cns-deploy-tool
Version: 3.2
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.6
Assigned To: Saravanakumar
QA Contact: Prasanth
Depends On:
Blocks: 1445448
Reported: 2017-06-27 06:36 EDT by Saravanakumar
Modified: 2017-08-30 02:33 EDT
CC List: 8 users

See Also:
Fixed In Version: cns-deploy-5.0.0-7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saravanakumar 2017-06-27 06:36:28 EDT
Description of problem:

Provide an OpenShift template file that comprises the pod, service, and related objects for rhgs-s3-server.
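
As a sketch of the intended workflow (the parameter values below are illustrative assumptions, not part of this request): the template is registered once in a project and then processed with per-deployment parameters.

# oc create -f gluster-s3-template.yaml
# oc process glusters3template -p S3_ACCOUNT=testaccount -p S3_USER=testuser \
    -p S3_PASSWORD=testpass | oc create -f -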
Comment 5 Saravanakumar 2017-07-06 02:27:27 EDT
Corresponding upstream pull request (WIP):

https://github.com/gluster/gluster-kubernetes/pull/277
Comment 6 Prasanth 2017-08-30 02:33:46 EDT
Verified.

#############
[root@dhcp46-249 gluster-s3]# pwd
/usr/share/heketi/templates/gluster-s3
[root@dhcp46-249 gluster-s3]# ls -al
total 16
drwxr-xr-x. 2 root root   75 Aug 29 17:05 .
drwxr-xr-x. 4 root root  226 Aug 29 17:05 ..
-rw-r--r--. 1 root root 4428 Aug 29 16:16 gluster-s3-template.yaml

# cat gluster-s3-template.yaml
---
kind: Template
apiVersion: v1
metadata:
  name: glusters3template
  annotations:
    description: Gluster s3 service template
    tags: glusterfs,heketi,gluster-s3
objects:
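# ClusterIP Service exposing the gluster-s3 pod on port 8080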
- kind: Service
  apiVersion: v1
  metadata:
    name: glusters3service
    labels:
      name: glusters3
  spec:
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
    selector:
      name: glusters3
    type: ClusterIP
    sessionAffinity: None
  status:
    loadBalancer: {}
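# Route exposing the S3 Service outside the cluster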
- kind: Route
  apiVersion: v1
  metadata:
    name: glusters3object
    labels:
      glusterfs: glusters3object-route
  spec:
    to:
      kind: Service
      name: glusters3service
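# PVC backing the S3 account's object data, sized by the VOLUME_CAPACITY parameter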
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: glusterfs-s3-claim
    annotations:
      volume.beta.kubernetes.io/storage-class: "${STORAGE_CLASS}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: glusterfs-s3-claim-meta
    annotations:
      volume.beta.kubernetes.io/storage-class: "${STORAGE_CLASS}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
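# Privileged DeploymentConfig running a single rhgs-s3-server replica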
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: glusters3
    labels:
      name: glusters3
    annotations:
      openshift.io/scc: privileged
      description: Defines how to deploy gluster s3 object storage
  spec:
    replicas: 1
    selector:
      name: glusters3
    template:
      metadata:
        name: glusters3
        labels:
          name: glusters3
      spec:
        volumes:
        - name: glusterfs-cgroup
          hostPath:
            path: "/sys/fs/cgroup"
        - name: gluster-vol1
          persistentVolumeClaim:
            claimName: glusterfs-s3-claim
        - name: gluster-vol2
          persistentVolumeClaim:
            claimName: glusterfs-s3-claim-meta
        containers:
        - name: glusters3
          image: rhgs3/rhgs-s3-server-rhel7:3.3.0-6
          imagePullPolicy: IfNotPresent
          ports:
          - name: gluster
            containerPort: 8080
            protocol: TCP
          env:
          - name: S3_ACCOUNT
            value: "${S3_ACCOUNT}"
          - name: S3_USER
            value: "${S3_USER}"
          - name: S3_PASSWORD
            value: "${S3_PASSWORD}"
          resources: {}
          volumeMounts:
          - name: gluster-vol1
            mountPath: "/mnt/gluster-object/${S3_ACCOUNT}"
          - name: gluster-vol2
            mountPath: "/mnt/gluster-object/gsmetadata"
          - name: glusterfs-cgroup
            readOnly: true
            mountPath: "/sys/fs/cgroup"
          terminationMessagePath: "/dev/termination-log"
          securityContext:
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
              - "/bin/bash"
              - "-c"
              - systemctl status swift-object.service
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 15
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
              - "/bin/bash"
              - "-c"
              - systemctl status swift-object.service
            periodSeconds: 25
            successThreshold: 1
            failureThreshold: 15
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirst
        serviceAccountName: default
        serviceAccount: default
        securityContext: {}
  status: {}
parameters:
- name: S3_ACCOUNT
  displayName: S3 account
  description: S3 storage account that will be backed by a GlusterFS volume
  value: test
  required: true
- name: S3_USER
  displayName: S3 user
  description: S3 user who can access the s3 storage service
  value: admin
  required: true
- name: S3_PASSWORD
  displayName: S3 user authentication
  description: S3 user password
  value: testing
  required: true
- name: STORAGE_CLASS
  displayName: Storage class with GlusterFS provisioner
  description: Storage class with GlusterFS provisioner for creating gluster volumes
  value: s3storageclass
  required: true
- name: VOLUME_CAPACITY
  displayName: Volume capacity
  description: Volume capacity available for s3 object store, e.g. 1Gi, 2Gi.
  value: 2Gi
  required: true
#############
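
For completeness, a minimal sketch of inspecting what the template creates once processed; the commands and the endpoint check below are illustrative, not output captured during this verification:

# oc get dc,svc,route,pvc    # expect the glusters3 DC/Service, the glusters3object Route, and the two claims
# s3host=$(oc get route glusters3object -o jsonpath='{.spec.host}')
# curl -sv http://${s3host}/ -o /dev/null    # the S3 endpoint should answer via the route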
