Bug 1487514

Summary: fstab entry is missing in heketi.json file
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: krishnaram Karthick <kramdoss>
Component: CNS-deployment    Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA QA Contact: krishnaram Karthick <kramdoss>
Severity: high Docs Contact:
Priority: unspecified    
Version: cns-3.6    CC: akhakhar, annair, asriram, asrivast, hchiramm, jarrpa, kramdoss, madam, mliyazud, mzywusko, pprakash, rhs-bugs, rreddy, rtalur, srmukher
Target Milestone: ---    Keywords: Regression
Target Release: CNS 3.6   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: cns-deploy-5.0.0-35 Doc Type: Bug Fix
Doc Text:
Prior to this update, the fstab entry was missing from the heketi.json file and, as a result, mountpoints did not persist across node reboots. With this fix, the cns-deploy build contains the fstab entry with the mount paths updated.
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-10-11 07:09:46 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1445448    

Description krishnaram Karthick 2017-09-01 07:08:02 UTC
Description of problem:

The fstab entry is missing from the heketi.json file; as a result, brick mountpoints are not persisted across node reboots. This appears to be a regression introduced in the latest build.

heketi.json file from the heketi pod
======================================
sh-4.2# cat /etc/heketi/heketi.json 
{
        "_port_comment": "Heketi Server Port Number",
        "port" : "8080",

        "_use_auth": "Enable JWT authorization. Please enable for deployment",
        "use_auth" : false,

        "_jwt" : "Private keys for access",
        "jwt" : {
                "_admin" : "Admin has access to all APIs",
                "admin" : {
                        "key" : "My Secret"
                },
                "_user" : "User only has access to /volumes endpoint",
                "user" : {
                        "key" : "My Secret"
                }
        },

        "_glusterfs_comment": "GlusterFS Configuration",
        "glusterfs" : {

                "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
                "executor" : "ssh",

                "_db_comment": "Database file name",
                "db" : "/var/lib/heketi/heketi.db",

                "kubeexec" : {
                        "rebalance_on_expansion": true
                },

                "sshexec" : {
                        "rebalance_on_expansion": true,
                        "keyfile" : "/etc/heketi/private_key",
                        "port" : "22",
                        "user" : "root",
                        "sudo" : false
                },

                "_auto_create_block_hosting_volume": "Creates Block Hosting volumes automatically if not found or exsisting volume exhausted",
                "auto_create_block_hosting_volume": true,

                "_block_hosting_volume_size": "New block hosting volume will be created in size mentioned, This is considered only if auto-create is enabled.",
                "block_hosting_volume_size": 500
        },

        "backup_db_to_kube_secret": false
}
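
For reference, heketi's executor configuration accepts an "fstab" option that names the file where brick mount entries are persisted; its absence from the sshexec and kubeexec sections above is the bug. A sketch of the expected sshexec stanza, assuming the /var/lib/heketi/fstab path used elsewhere in this report (the exact path is deployment-specific):

        "sshexec" : {
                "rebalance_on_expansion": true,
                "keyfile" : "/etc/heketi/private_key",
                "port" : "22",
                "user" : "root",
                "sudo" : false,
                "fstab" : "/var/lib/heketi/fstab"
        },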

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-25.el7rhgs.x86_64

How reproducible:
Always

Comment 9 Humble Chirammal 2017-09-11 07:57:44 UTC
Karthick, do "touch "/var/lib/heketi/fstab" file before you experiment.

Comment 10 krishnaram Karthick 2017-09-11 14:12:04 UTC
The path seems to be "/var/lib/heketi/fstab".

This is from one of the gluster nodes in a CRS configuration.

[root@dhcp46-1 ~]# ls /var/lib/heketi/fstab
/var/lib/heketi/fstab
[root@dhcp46-1 ~]# cat /var/lib/heketi/fstab
/dev/mapper/vg_897874df938416aaf48ee7fac42b60a0-brick_449cea83f0e7264d4df1f39290ad9ba4 /var/lib/heketi/mounts/vg_897874df938416aaf48ee7fac42b60a0/brick_449cea83f0e7264d4df1f39290ad9ba4 xfs rw,inode64,noatime,nouuid 1 2
/dev/mapper/vg_897874df938416aaf48ee7fac42b60a0-brick_24b369e35699682c036e656c8f1b5364 /var/lib/heketi/mounts/vg_897874df938416aaf48ee7fac42b60a0/brick_24b369e35699682c036e656c8f1b5364 xfs rw,inode64,noatime,nouuid 1 2
/dev/mapper/vg_ee31f88eac1727cd2424eaaab05715ee-brick_a821148f97d1294a6a70da95740a0f8e /var/lib/heketi/mounts/vg_ee31f88eac1727cd2424eaaab05715ee/brick_a821148f97d1294a6a70da95740a0f8e xfs rw,inode64,noatime,nouuid 1 2
/dev/mapper/vg_14b96508efc651ef89339b63e3a72030-brick_3637f357378b84563b3c8ab3b1e6c3bd /var/lib/heketi/mounts/vg_14b96508efc651ef89339b63e3a72030/brick_3637f357378b84563b3c8ab3b1e6c3bd xfs rw,inode64,noatime,nouuid 1 2
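
For context, entries in /var/lib/heketi/fstab are not mounted by the OS at boot on their own; in containerized deployments the gluster container's startup script is expected to replay them from this alternate fstab. A sketch of that replay step, assuming util-linux mount's --fstab option:

# replay heketi-managed brick mounts from the alternate fstab
mount -a --fstab /var/lib/heketi/fstab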

Comment 12 Humble Chirammal 2017-09-11 15:16:01 UTC
https://github.com/gluster/gluster-kubernetes/pull/351. Thanks Rtalur++

Comment 13 krishnaram Karthick 2017-09-14 13:19:16 UTC
Verified in build cns-deploy-5.0.0-38.

The fstab entry now contains the updated mount paths:

[root@dhcp46-1 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Wed Dec 21 16:29:01 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel_dhcp47--183-root /                       xfs     defaults        0 0
UUID=73247581-2655-4553-8992-efd1630d8141 /boot                   xfs     defaults        0 0
/dev/mapper/rhel_dhcp47--183-home /home                   xfs     defaults        0 0
UUID=7286b595-05da-4449-81a0-d957becb0a1d /var                    xfs     defaults        0 0
UUID=fd4f0650-a0ad-4139-8a32-e9ef5037c34a swap                    swap    defaults        0 0
/dev/mapper/vg_61a08769bf24a9a791e45e31bf5687a0-brick_8d88ef746a1869d4cb668ad2aca75335 /var/lib/heketi/mounts/vg_61a08769bf24a9a791e45e31bf5687a0/brick_8d88ef746a1869d4cb668ad2aca75335 xfs rw,inode64,noatime,nouuid 1 2
/dev/mapper/vg_61a08769bf24a9a791e45e31bf5687a0-brick_3d992dc51aa5260cc8c95224a0bd4be8 /var/lib/heketi/mounts/vg_61a08769bf24a9a791e45e31bf5687a0/brick_3d992dc51aa5260cc8c95224a0bd4be8 xfs rw,inode64,noatime,nouuid 1 2
[root@dhcp46-1 ~]# df -h
Filesystem                                                                              Size  Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp47--183-root                                                        50G  1.9G   49G   4% /
devtmpfs                                                                                 24G     0   24G   0% /dev
tmpfs                                                                                    24G     0   24G   0% /dev/shm
tmpfs                                                                                    24G  8.7M   24G   1% /run
tmpfs                                                                                    24G     0   24G   0% /sys/fs/cgroup
/dev/sda1                                                                              1014M  231M  784M  23% /boot
/dev/sdb1                                                                                40G  826M   40G   3% /var
/dev/mapper/rhel_dhcp47--183-home                                                        50G   33M   50G   1% /home
/dev/mapper/vg_61a08769bf24a9a791e45e31bf5687a0-brick_8d88ef746a1869d4cb668ad2aca75335  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_61a08769bf24a9a791e45e31bf5687a0/brick_8d88ef746a1869d4cb668ad2aca75335
/dev/mapper/vg_61a08769bf24a9a791e45e31bf5687a0-brick_3d992dc51aa5260cc8c95224a0bd4be8  500G  476G   25G  96% /var/lib/heketi/mounts/vg_61a08769bf24a9a791e45e31bf5687a0/brick_3d992dc51aa5260cc8c95224a0bd4be8
tmpfs                                                                                   4.7G     0  4.7G   0% /run/user/0
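
A minimal reboot-persistence check, restating what the output above demonstrates (a sketch, not the exact QA procedure; the mount paths are specific to this deployment):

# reboot the node, then confirm the brick mounts came back
reboot
# after the node is back up:
grep brick_ /etc/fstab
df -h | grep /var/lib/heketi/mounts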

Moving the bug to verified.

Comment 15 Raghavendra Talur 2017-10-04 09:26:59 UTC
Did a minor change; the rest of the doc text looks good to me.

Comment 16 errata-xmlrpc 2017-10-11 07:09:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2879